Quickstart (~10 minutes)

Test Cloudburst Autoscaler with a Kind cluster. You need Docker, Kind, kubectl, Tailscale, CloudBroker, and at least one provider (e.g. GCP, Hetzner, Scaleway).
1. Prerequisites

External dependency: CloudBroker must be running at http://localhost:8000 before Cloudburst can make recommendations. Follow the CloudBroker quickstart first if you haven't already. Tailscale is required for the VM bootstrap network path: the burst VM joins your tailnet with an auth key, and the control plane is reached via its Tailscale IP. Without Tailscale, kubeadm join cannot succeed.

- CloudBroker — running at http://localhost:8000. See the CloudBroker quickstart.
- Tailscale — install it and create an auth key.
- Provider credentials — e.g. a GCP service account, HETZNER_API_TOKEN, or Scaleway keys.
2. Clone and configure

```shell
git clone https://github.com/braghettos/cloudburst-autoscaler
cd cloudburst-autoscaler
cp .env.example .env
```

Edit .env: set TAILSCALE_AUTHKEY, CLOUDBROKER_BASEURL=http://localhost:8000, and provider credentials (e.g. GCP_PROJECT, HETZNER_API_TOKEN).
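For orientation, a filled-in .env might look like the sketch below. All values are placeholders, and only the variables named above are shown; your .env.example may list more.

```shell
# .env — sketch with placeholder values; copy the full list from .env.example
TAILSCALE_AUTHKEY=tskey-auth-xxxxxxxxxxxx      # Tailscale auth key for burst VMs
CLOUDBROKER_BASEURL=http://localhost:8000      # must match where CloudBroker runs

# Provider credentials: set whichever provider(s) you use
GCP_PROJECT=my-gcp-project
HETZNER_API_TOKEN=xxxxxxxxxxxxxxxx
```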
3. Create Kind cluster and deploy

```shell
make kind-setup-and-deploy IMG=cloudburst-autoscaler:latest
```

This creates a Kind cluster, builds the controller image, installs CRDs, and deploys the controller. It uses host.docker.internal to reach CloudBroker and Tailscale from inside the cluster.
4. Apply NodeClass and NodePool

Use the samples for your provider. Example: Scaleway.

```shell
kubectl apply -f config/samples/secret-scaleway.yaml
kubectl apply -f config/samples/nodeclass-scaleway.yaml
kubectl apply -f config/samples/nodepool-scaleway.yaml
```

For GCP, use config/samples/nodeclass-gcp.yaml and config/samples/nodepool-gcp.yaml. See config/samples/ for other providers.
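To make the moving parts concrete, the general shape of the two objects is sketched below. This is a hypothetical outline only: the apiVersion and every spec field name here are assumptions, so take the real schema from config/samples/, not from this sketch.

```yaml
# Hypothetical sketch — field names are illustrative; the real CRD schema
# is in config/samples/.
apiVersion: cloudburst.io/v1alpha1   # assumed group, echoing the cloudburst.io node label
kind: NodeClass
metadata:
  name: scaleway-nodeclass
spec:
  provider: scaleway                 # which cloud the burst VMs come from
  credentialsSecretRef:              # a NodeClass references the provider secret
    name: scaleway-credentials       # must match the secret applied above
---
apiVersion: cloudburst.io/v1alpha1
kind: NodePool
metadata:
  name: scaleway-nodepool
spec:
  nodeClassRef:
    name: scaleway-nodeclass         # a pool points at a NodeClass
  ttlSecondsAfterEmpty: 60           # scale-down TTL used in step 7
```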
5. Trigger burst

Create a pod that cannot be scheduled (it requests more resources than any existing node has):

```shell
kubectl apply -f config/samples/workload-scaleway.yaml
```

(Use workload-gcp.yaml for GCP, and so on.)
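The workload sample boils down to a pod whose resource requests exceed any existing node's capacity. A minimal equivalent looks like this (name, image, and sizes are illustrative):

```yaml
# Illustrative unschedulable pod: the requests below are larger than a default
# Kind node can satisfy, so the pod stays Pending and triggers a burst.
apiVersion: v1
kind: Pod
metadata:
  name: burst-test
spec:
  containers:
    - name: app
      image: nginx:alpine
      resources:
        requests:
          cpu: "8"        # more CPU than the Kind node offers
          memory: 16Gi
```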
6. Verify

```shell
kubectl get nodeclaims -A
kubectl get nodes -l cloudburst.io/provider
kubectl get pods -A | grep -E 'Pending|Running'
```

You should see a NodeClaim move through Pending → Provisioning → Joining → Ready, then a new node appear, then the pod get scheduled.

Expected output (example):

```shell
# kubectl get nodeclaims -A
NAME                      NODEPOOL            PROVIDER   PHASE   AGE
scaleway-nodepool-xxxxx   scaleway-nodepool   scaleway   Ready   2m15s

# kubectl get nodes -l cloudburst.io/provider
NAME                            STATUS   ROLES    AGE   VERSION
scaleway-nodepool-xxxxx-xxxxx   Ready    <none>   2m    v1.34.3

# kubectl get pods -A | grep -E 'Pending|Running'
default   scaleway-workload-xxx   1/1   Running   2m
```
7. Scale down

```shell
kubectl delete -f config/samples/workload-scaleway.yaml
```

After ttlSecondsAfterEmpty (e.g. 60s in the sample), the controller cordons, drains, and deletes the burst node and VM.
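The scale-down decision above can be sketched in a few lines. This is a simplified model, not the controller's actual code; the function and parameter names are illustrative.

```python
import time


def should_remove(node_empty_since, ttl_seconds_after_empty, now=None):
    """Return True once a burst node has been empty for the full TTL.

    `node_empty_since` is the timestamp (seconds) when the last pod left the
    node. The controller only cordons, drains, and deletes after the node has
    stayed empty for `ttl_seconds_after_empty` seconds.
    """
    if node_empty_since is None:  # node still has pods scheduled
        return False
    now = time.time() if now is None else now
    return now - node_empty_since >= ttl_seconds_after_empty


# With the sample's 60s TTL: a node empty for 30s is kept,
# one empty for 61s is removed.
print(should_remove(node_empty_since=1000, ttl_seconds_after_empty=60, now=1030))  # False
print(should_remove(node_empty_since=1000, ttl_seconds_after_empty=60, now=1061))  # True
```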
What's next?
Common issues
- Kind setup fails: check Docker is running (docker ps), Kind is installed (kind version), and the controller image builds (docker build -t cloudburst-autoscaler:latest .).
- NodeClaim never becomes Ready: check the controller logs: kubectl logs -n cloudburst-system deploy/cloudburst-autoscaler-controller-manager. Common causes: provider credentials missing or wrong in the NodeClass secret; CloudBroker not reachable; an expired Tailscale auth key.
- Pod stays Pending after the node joins: check whether the pod has a nodeAffinity that doesn't match the provisioned node's labels. Run kubectl describe pod <name> and inspect the Events section; verify that kubectl get nodes --show-labels shows the expected labels.
- CloudBroker has no recommendations: no data has been ingested yet. From the CloudBroker repo, run make ingest-hetzner (or another provider's target). Check CloudBroker is running (curl http://localhost:8000/health) and that CLOUDBROKER_BASEURL in .env is correct.
- Tailscale auth key expired: create a new auth key in the Tailscale admin console, then update the tailscale-auth secret:

  kubectl create secret generic tailscale-auth --from-literal=authkey="tskey-auth-xxx" -n default --dry-run=client -o yaml | kubectl apply -f -

  Ensure the NodeClass references the correct secret name and key.
Explore further