Quickstart

Test Cloudburst Autoscaler with a Kind cluster. You need Docker, Kind, kubectl, Tailscale, CloudBroker, and credentials for at least one provider (e.g. GCP, Hetzner, Scaleway).

⏱ ~10 minutes 📦 Docker · Kind · kubectl · Tailscale · CloudBroker ☁️ At least one provider credential required
  1. Prerequisites

    External dependency: CloudBroker must be running at http://localhost:8000 before Cloudburst can make recommendations. Follow the CloudBroker Quickstart first if you haven't already.
    Tailscale is required for the VM bootstrap network path: the burst VM joins your tailnet with an auth key, and the control plane is reached over its Tailscale IP. No Tailscale, no kubeadm join. A quick preflight check is sketched after this list.
    • CloudBroker — running at http://localhost:8000. See the CloudBroker quickstart.
    • Tailscale — install it and create an auth key.
    • Provider credentials — e.g. a GCP service account, HETZNER_API_TOKEN, or Scaleway keys.
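
    A minimal preflight sketch (the CloudBroker root URL is assumed to answer plain HTTP here; adjust if your deployment exposes a dedicated health endpoint):

    # CloudBroker should answer on the base URL the controller will use
    curl -sf http://localhost:8000 >/dev/null && echo "CloudBroker reachable" || echo "CloudBroker NOT reachable"

    # Tailscale must be installed and logged in on this machine
    tailscale status

    # Provider credentials example (Hetzner); swap in your provider's variables
    test -n "$HETZNER_API_TOKEN" && echo "HETZNER_API_TOKEN set" || echo "HETZNER_API_TOKEN missing"
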
  2. Clone and configure

    git clone https://github.com/braghettos/cloudburst-autoscaler
    cd cloudburst-autoscaler
    cp .env.example .env
    

    Edit .env: set TAILSCALE_AUTHKEY, CLOUDBROKER_BASEURL=http://localhost:8000, and provider credentials (e.g. GCP_PROJECT, HETZNER_API_TOKEN).
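
    A sketch of the relevant lines (values are placeholders; treat .env.example as the source of truth for variable names):

    # .env (illustrative values)
    TAILSCALE_AUTHKEY=tskey-auth-xxxxxxxxxxxx
    CLOUDBROKER_BASEURL=http://localhost:8000
    # Set only the credentials for the provider(s) you use
    HETZNER_API_TOKEN=xxxxxxxxxxxxxxxx
    GCP_PROJECT=my-gcp-project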

  3. Create Kind cluster and deploy

    make kind-setup-and-deploy IMG=cloudburst-autoscaler:latest
    

    This target creates a Kind cluster, builds the controller image, installs the CRDs, and deploys the controller. It uses host.docker.internal so the controller can reach CloudBroker and Tailscale on the host from inside the cluster.
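
    To confirm the deployment, something like the following works (the controller namespace here is an assumption; check the Makefile or the deploy output for the actual one):

    # CRDs should be installed
    kubectl get crds | grep -i cloudburst

    # Controller pod should be Running (namespace is an assumption)
    kubectl get pods -n cloudburst-autoscaler-system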

  4. Apply NodeClass and NodePool

    Use the samples for your provider. Example: Scaleway.

    kubectl apply -f config/samples/secret-scaleway.yaml
    kubectl apply -f config/samples/nodeclass-scaleway.yaml
    kubectl apply -f config/samples/nodepool-scaleway.yaml
    

    For GCP: config/samples/nodeclass-gcp.yaml, config/samples/nodepool-gcp.yaml. See config/samples/ for other providers.
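
    A quick check that the resources exist (the lowercase resource names are assumptions; fall back to the full names reported by kubectl get crds if these don't resolve):

    kubectl get nodeclasses
    kubectl get nodepools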

  5. Trigger burst

    Create a pod that cannot be scheduled (requests more than any existing node has):

    kubectl apply -f config/samples/workload-scaleway.yaml
    

    (Use workload-gcp.yaml for GCP, etc.)
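
    An illustrative equivalent of such a workload is below (not the shipped sample, which may also pin the pod to burst nodes via labels; size the requests above your Kind node's allocatable capacity but within what your NodePool can provision):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: burst-test
    spec:
      containers:
      - name: pause
        image: registry.k8s.io/pause:3.9
        resources:
          requests:
            cpu: "8"
            memory: 16Gi
    EOF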

  6. Verify

    kubectl get nodeclaims -A
    kubectl get nodes -l cloudburst.io/provider
    kubectl get pods -A | grep -E 'Pending|Running'
    

    You should see the NodeClaim move through Pending → Provisioning → Joining → Ready, then a new node register, and finally the pending pod scheduled onto it.

    Expected output (example):

    # kubectl get nodeclaims -A
    NAME                      NODEPOOL            PROVIDER   PHASE   AGE
    scaleway-nodepool-xxxxx   scaleway-nodepool   scaleway   Ready   2m15s

    # kubectl get nodes -l cloudburst.io/provider
    NAME                            STATUS   ROLES    AGE   VERSION
    scaleway-nodepool-xxxxx-xxxxx   Ready    <none>   2m    v1.34.3

    # kubectl get pods -A | grep -E 'Pending|Running'
    default   scaleway-workload-xxx   1/1   Running   2m
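
    To follow the transition live, watch the NodeClaim and inspect its events if it seems stuck:

    # Watch phase transitions as they happen
    kubectl get nodeclaims -A -w

    # Inspect events if a claim stalls in Provisioning or Joining
    kubectl describe nodeclaims <nodeclaim-name>
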
  7. Scale down

    kubectl delete -f config/samples/workload-scaleway.yaml
    

    After ttlSecondsAfterEmpty elapses (60s in the sample), the controller cordons and drains the burst node, then deletes both the node and the backing VM.
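
    To confirm the scale-down, the burst node and its NodeClaim should disappear once the TTL plus drain time has passed:

    # The burst node is cordoned, drained, and removed
    kubectl get nodes -l cloudburst.io/provider -w

    # The NodeClaim is deleted as well
    kubectl get nodeclaims -A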

What's next?

Common issues