Quickstart

Test Cloudburst Autoscaler with a Kind cluster. You need Docker, Kind, kubectl, Tailscale, CloudBroker, and credentials for at least one provider (e.g. GCP, Hetzner, or Scaleway).

1. Prerequisites

  • CloudBroker — running at http://localhost:8000. See CloudBroker quickstart.
  • Tailscale — install Tailscale and create an auth key.
  • Provider credentials — e.g. a GCP service account, HETZNER_API_TOKEN, or Scaleway keys.

2. Clone and configure

git clone https://github.com/braghettos/cloudburst-autoscaler
cd cloudburst-autoscaler
cp .env.example .env

Edit .env: set TAILSCALE_AUTHKEY, CLOUDBROKER_BASEURL=http://localhost:8000, and provider credentials (e.g. GCP_PROJECT, HETZNER_API_TOKEN).
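A minimal .env might look like the following. The exact variable names beyond those listed above are assumptions; .env.example is authoritative.

```
# Tailscale auth key that burst nodes use to join the tailnet
TAILSCALE_AUTHKEY=tskey-auth-xxxxxxxxxxxx

# CloudBroker endpoint (reached via host.docker.internal from inside Kind)
CLOUDBROKER_BASEURL=http://localhost:8000

# Provider credentials -- set only the providers you use
GCP_PROJECT=my-gcp-project
HETZNER_API_TOKEN=xxxxxxxx
```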

3. Create Kind cluster and deploy

make kind-setup-and-deploy IMG=cloudburst-autoscaler:latest

This target creates a Kind cluster, builds the controller image, installs the CRDs, and deploys the controller. It uses host.docker.internal so workloads inside the cluster can reach CloudBroker and Tailscale running on the host.
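Under the hood, the target roughly corresponds to the following steps. This is a sketch of the assumed Makefile flow (cluster name and target names are assumptions based on common kubebuilder conventions); the Makefile itself is authoritative.

```
# Create the local cluster
kind create cluster --name cloudburst

# Build the controller image and load it into Kind
docker build -t cloudburst-autoscaler:latest .
kind load docker-image cloudburst-autoscaler:latest --name cloudburst

# Install CRDs and deploy the controller
make install
make deploy IMG=cloudburst-autoscaler:latest
```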

4. Apply NodeClass and NodePool

Use the samples for your provider. Example: Scaleway.

kubectl apply -f config/samples/secret-scaleway.yaml
kubectl apply -f config/samples/nodeclass-scaleway.yaml
kubectl apply -f config/samples/nodepool-scaleway.yaml

For GCP: config/samples/nodeclass-gcp.yaml, config/samples/nodepool-gcp.yaml. See config/samples/ for other providers.
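The sample manifests follow this general shape. All field names and the API group/version below are illustrative assumptions; the files in config/samples/ are authoritative.

```yaml
apiVersion: cloudburst.io/v1alpha1   # assumed group/version
kind: NodeClass
metadata:
  name: scaleway-default
spec:
  provider: scaleway
  instanceType: DEV1-M               # example instance type
  credentialsSecretRef:
    name: scaleway-credentials       # created by secret-scaleway.yaml
---
apiVersion: cloudburst.io/v1alpha1
kind: NodePool
metadata:
  name: scaleway-pool
spec:
  nodeClassRef:
    name: scaleway-default
  limits:
    maxNodes: 3                      # cap burst capacity
  ttlSecondsAfterEmpty: 60           # scale-down delay used in step 7
```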

5. Trigger burst

Create a pod that cannot be scheduled on the existing Kind node (its resource requests exceed what any current node can offer):

kubectl apply -f config/samples/workload-scaleway.yaml

(Use workload-gcp.yaml for GCP, etc.)
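An unschedulable workload can be as simple as a Deployment whose resource requests exceed the Kind node's capacity. This sketch is illustrative; the shipped workload-*.yaml samples are authoritative.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: burst-test
spec:
  replicas: 1
  selector:
    matchLabels: {app: burst-test}
  template:
    metadata:
      labels: {app: burst-test}
    spec:
      containers:
        - name: pause
          image: registry.k8s.io/pause:3.9
          resources:
            requests:
              cpu: "8"       # more CPU than the Kind node offers,
              memory: 16Gi   # so the pod stays Pending until a burst node joins
```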

6. Verify

kubectl get nodeclaims -A
kubectl get nodes -l cloudburst.io/provider
kubectl get pods -A | grep -E 'Pending|Running'

You should see a NodeClaim move through Pending → Provisioning → Joining → Ready, then a new node, then the pod scheduled.

7. Scale down

kubectl delete -f config/samples/workload-scaleway.yaml

After ttlSecondsAfterEmpty (e.g. 60s in the sample), the controller cordons, drains, and deletes the burst node and VM.
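The teardown the controller performs is roughly equivalent to the following kubectl sequence (illustrative only; the controller acts through the Kubernetes API, and <burst-node> is a placeholder for the provisioned node's name):

```
kubectl cordon <burst-node>
kubectl drain <burst-node> --ignore-daemonsets --delete-emptydir-data
kubectl delete node <burst-node>
# plus a CloudBroker call to delete the backing VM
```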