Manifests & kubectl
Examples
With CloudBroker and Tailscale running, apply the CRDs and then the manifests below. A pod whose resource requests exceed local capacity triggers provisioning, and kubectl get shows the result. Replace the placeholders with your own values. The GCP and Scaleway sections show complete manifests; the other provider sections show only the fields that differ.
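To follow a burst end to end while applying any of the examples, watching NodeClaims and Nodes side by side is enough (plain kubectl, shown here for convenience):

kubectl get nodeclaims,nodes -w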
GCP #
NodePool + NodeClass + Workload (nodeAffinity: gcp)
# NodePool
apiVersion: cloudburst.io/v1alpha1
kind: NodePool
metadata:
  name: gcp-nodepool
  namespace: default
spec:
  requirements:
    regionConstraint: "ANY"
    arch: ["x86_64"]
    maxPriceEurPerHour: 0.15
    allowedProviders: ["gcp"]
  limits:
    maxNodes: 3
    minNodes: 0
  template:
    labels:
      cloudburst.io/nodepool: "gcp-nodepool"
      cloudburst.io/provider: "gcp"
  disruption:
    ttlSecondsAfterEmpty: 60
  weight: 1
---
# NodeClass
apiVersion: cloudburst.io/v1alpha1
kind: NodeClass
metadata:
  name: gcp-nodeclass
  namespace: default
spec:
  gcp:
    project: "your-gcp-project-id"
    zone: "us-central1-a"
  join:
    hostApiServer: "https://<HOST_TAILSCALE_IP>:6443"
    kindClusterName: "cloudburst"
    tokenTtlMinutes: 60
  tailscale:
    authKeySecretRef:
      name: tailscale-auth
      key: authkey
  bootstrap:
    kubernetesVersion: "1.34.3"
---
# Workload (triggers burst; targets GCP nodes)
apiVersion: v1
kind: Pod
metadata:
  name: gcp-workload
  namespace: default
spec:
  containers:
    - name: workload
      image: busybox:1.36
      command: ["sleep", "infinity"]
      resources:
        requests:
          cpu: "1500m"
          memory: "2Gi"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: cloudburst.io/provider
                operator: In
                values: ["gcp"]
AWS #
NodePool + NodeClass + Workload (nodeAffinity: aws)
# NodePool
spec:
  requirements:
    allowedProviders: ["aws"]
  template:
    labels:
      cloudburst.io/nodepool: "aws-nodepool"
      cloudburst.io/provider: "aws"
---
# NodeClass (provider section)
spec:
  aws:
    region: "eu-west-1"
    ami: "ami-xxxxxxxx"
    subnetID: "subnet-xxxxxxxx"
  join: { hostApiServer: "...", kindClusterName: "cloudburst", tokenTtlMinutes: 60 }
  tailscale: { authKeySecretRef: { name: tailscale-auth, key: authkey } }
  bootstrap: { kubernetesVersion: "1.34.3" }
---
# Workload
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: cloudburst.io/provider
              operator: In
              values: ["aws"]
Azure #
NodePool + NodeClass + Workload (nodeAffinity: azure)
# NodePool
spec:
  requirements:
    allowedProviders: ["azure"]
  template:
    labels:
      cloudburst.io/nodepool: "azure-nodepool"
      cloudburst.io/provider: "azure"
---
# NodeClass (provider section)
spec:
  azure:
    subscriptionID: "your-subscription-id"
    resourceGroup: "your-resource-group"
    location: "westeurope"
    subnetID: "/subscriptions/.../subnets/your-subnet"
    clientIDSecretRef: { name: azure-credentials, key: AZURE_CLIENT_ID }
    clientSecretSecretRef: { name: azure-credentials, key: AZURE_CLIENT_SECRET }
  join: { hostApiServer: "...", kindClusterName: "cloudburst", tokenTtlMinutes: 60 }
  tailscale: { authKeySecretRef: { name: tailscale-auth, key: authkey } }
  bootstrap: { kubernetesVersion: "1.34.3" }
---
# Workload
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: cloudburst.io/provider
              operator: In
              values: ["azure"]
Hetzner #
NodePool + NodeClass + Workload (nodeAffinity: hetzner)
# NodePool
spec:
  requirements:
    allowedProviders: ["hetzner"]
  template:
    labels:
      cloudburst.io/nodepool: "hetzner-nodepool"
      cloudburst.io/provider: "hetzner"
---
# NodeClass (provider section)
spec:
  hetzner:
    location: "fsn1"
    apiTokenSecretRef: { name: hetzner-credentials, key: HETZNER_API_TOKEN }
    image: "ubuntu-22.04"
  join: { hostApiServer: "...", kindClusterName: "cloudburst", tokenTtlMinutes: 60 }
  tailscale: { authKeySecretRef: { name: tailscale-auth, key: authkey } }
  bootstrap: { kubernetesVersion: "1.34.3" }
---
# Workload
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: cloudburst.io/provider
              operator: In
              values: ["hetzner"]
Scaleway #
NodePool (Scaling policy: EU, x86_64, max €0.15/h, Scaleway)
apiVersion: cloudburst.io/v1alpha1
kind: NodePool
metadata:
  name: scaleway-nodepool
  namespace: default
spec:
  requirements:
    regionConstraint: "EU"
    arch: ["x86_64"]
    maxPriceEurPerHour: 0.15
    allowedProviders: ["scaleway"]
  limits:
    maxNodes: 3
    minNodes: 0
  template:
    labels:
      cloudburst.io/nodepool: "scaleway-nodepool"
      cloudburst.io/provider: "scaleway"
  disruption:
    ttlSecondsAfterEmpty: 60
  weight: 1
NodeClass (Scaleway zone, join config, Tailscale auth ref)
apiVersion: cloudburst.io/v1alpha1
kind: NodeClass
metadata:
  name: scaleway-nodeclass
  namespace: default
spec:
  scaleway:
    zone: "fr-par-1"
    projectID: "your-scaleway-project-id"
    image: "ubuntu_jammy"
    apiKeySecretRef:
      name: scaleway-credentials
      key: SCW_SECRET_KEY
  join:
    hostApiServer: "https://<HOST_TAILSCALE_IP>:6443"
    kindClusterName: "cloudburst"
    tokenTtlMinutes: 60
  tailscale:
    authKeySecretRef:
      name: tailscale-auth
      key: authkey
  bootstrap:
    kubernetesVersion: "1.34.3"
Workload (Pod that can't be scheduled — triggers burst)
apiVersion: v1
kind: Pod
metadata:
  name: scaleway-workload
  namespace: default
spec:
  containers:
    - name: workload
      image: busybox:1.36
      command: ["sleep", "infinity"]
      resources:
        requests:
          cpu: "1500m"
          memory: "2Gi"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: cloudburst.io/provider
                operator: In
                values: ["scaleway"]
DigitalOcean #
NodePool + NodeClass + Workload (nodeAffinity: digitalocean)
# NodePool
spec:
  requirements:
    allowedProviders: ["digitalocean"]
  template:
    labels:
      cloudburst.io/nodepool: "digitalocean-nodepool"
      cloudburst.io/provider: "digitalocean"
---
# NodeClass (provider section)
spec:
  digitalocean:
    region: "ams3"
    apiTokenSecretRef: { name: digitalocean-credentials, key: DIGITALOCEAN_API_TOKEN }
    image: "ubuntu-22-04-x64"
  join: { hostApiServer: "...", kindClusterName: "cloudburst", tokenTtlMinutes: 60 }
  tailscale: { authKeySecretRef: { name: tailscale-auth, key: authkey } }
  bootstrap: { kubernetesVersion: "1.34.3" }
---
# Workload
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: cloudburst.io/provider
              operator: In
              values: ["digitalocean"]
OVH #
NodePool + NodeClass + Workload (nodeAffinity: ovh)
# NodePool
spec:
  requirements:
    allowedProviders: ["ovh"]
  template:
    labels:
      cloudburst.io/nodepool: "ovh-nodepool"
      cloudburst.io/provider: "ovh"
---
# NodeClass (provider section)
spec:
  ovh:
    projectID: "your-ovh-project-id"
    region: "GRA"
    applicationKeySecretRef: { name: ovh-credentials, key: OVH_APPLICATION_KEY }
    applicationSecretSecretRef: { name: ovh-credentials, key: OVH_APPLICATION_SECRET }
    consumerKeySecretRef: { name: ovh-credentials, key: OVH_CONSUMER_KEY }
  join: { hostApiServer: "...", kindClusterName: "cloudburst", tokenTtlMinutes: 60 }
  tailscale: { authKeySecretRef: { name: tailscale-auth, key: authkey } }
  bootstrap: { kubernetesVersion: "1.34.3" }
---
# Workload
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: cloudburst.io/provider
              operator: In
              values: ["ovh"]
Workload without nodeAffinity #
Pods without nodeSelector or nodeAffinity can land on any burst node with capacity. When multiple NodePools exist, the scheduler may place the pod on any provider. Use this when you don't care which cloud hosts the workload.
apiVersion: v1
kind: Pod
metadata:
  name: any-provider-workload
  namespace: default
spec:
  containers:
    - name: workload
      image: busybox:1.36
      command: ["sleep", "infinity"]
      resources:
        requests:
          cpu: "1500m"
          memory: "2Gi"
Apply & results #
Apply manifests
kubectl apply -f config/crd/bases/
kubectl apply -f config/samples/nodeclass-scaleway.yaml
kubectl apply -f config/samples/nodepool-scaleway.yaml
kubectl apply -f config/samples/workload-scaleway.yaml
Result of kubectl apply
customresourcedefinition.apiextensions.k8s.io/nodeclaims.cloudburst.io created
customresourcedefinition.apiextensions.k8s.io/nodeclasses.cloudburst.io created
customresourcedefinition.apiextensions.k8s.io/nodepools.cloudburst.io created
nodeclass.cloudburst.io/scaleway-nodeclass created
nodepool.cloudburst.io/scaleway-nodepool created
pod/scaleway-workload created
Result of kubectl get nodeclaims
NAME                     NODEPOOL            PROVIDER   REGION     INSTANCE TYPE   PHASE     AGE
scaleway-nodepool-xxxx   scaleway-nodepool   scaleway   fr-par-1   DEV1-M          Running   2m15s
Result of kubectl get nodes -l cloudburst.io/provider=scaleway
NAME                           STATUS   ROLES    AGE   VERSION
scaleway-nodepool-xxxx-xxxxx   Ready    <none>   2m    v1.34.3
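Once the node is Ready, the workload should move from Pending to Running; placement can be confirmed with standard kubectl:

kubectl get pod scaleway-workload -o wide

The NODE column should show the burst node from the previous output.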
Pod labels and NodePool matching #
Pods do not need to specify cloudburst.io/nodepool or cloudburst.io/provider. These labels are applied to nodes from the NodePool template, not required on pods.
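To see which nodes carry those template labels, the -L flag prints them as extra columns (standard kubectl):

kubectl get nodes -L cloudburst.io/nodepool -L cloudburst.io/provider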
Demand detection
Cloudburst treats any Pending pod with reason Unschedulable as demand (except kube-system and pods annotated cloudburst.io/ignore=true). Each NodePool sees all unschedulable pods and may create capacity. There is no per-pod filtering by NodePool or provider labels.
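For example, a pod that must never trigger a burst can opt out with the annotation (the pod name here is illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: local-only-workload        # illustrative name
  annotations:
    cloudburst.io/ignore: "true"   # excluded from demand detection
spec:
  containers:
    - name: workload
      image: busybox:1.36
      command: ["sleep", "infinity"]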
Scheduling behavior
When a NodePool provisions a node, that node gets the NodePool's template labels (e.g. cloudburst.io/nodepool, cloudburst.io/provider). For the scheduler to place a pod on it:
- No nodeSelector/nodeAffinity: the pod can land on any node, including burst nodes.
- Matching nodeSelector/nodeAffinity: the pod only lands on nodes that satisfy it (e.g. cloudburst.io/provider: gcp).
- Non-matching affinity: if the pod requires cloudburst.io/provider: gcp but the node is from a Scaleway NodePool, the pod stays unschedulable until a matching node exists.
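For single-label matches like these, the shorter nodeSelector form is equivalent to the nodeAffinity blocks used in the provider examples (standard Kubernetes scheduling, nothing Cloudburst-specific):

spec:
  nodeSelector:
    cloudburst.io/provider: "gcp"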
Multiple NodePools
With several NodePools (e.g. GCP, Scaleway, Hetzner), each sees the same global unschedulable demand and may create nodes independently. Which node the pod lands on depends on which nodes match the pod's affinity (if any) and scheduler decisions. Use nodeSelector or nodeAffinity on pods when you want to target specific providers or NodePools.
| Pod selector | Behavior |
|---|---|
| No nodeSelector/nodeAffinity | Pod can be scheduled on any burst node with capacity. |
| cloudburst.io/provider: gcp | Pod will only land on nodes from a NodePool whose template includes that label (typically GCP). |
| cloudburst.io/nodepool: my-pool | Pod will only land on nodes created by that NodePool. |
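Beyond these hard requirements, standard Kubernetes soft affinity also works: a pod can prefer one provider yet still fall back to any burst node. A sketch using plain preferredDuringSchedulingIgnoredDuringExecution (not Cloudburst-specific):

affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        preference:
          matchExpressions:
            - key: cloudburst.io/provider
              operator: In
              values: ["gcp"]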
Secrets (the Tailscale auth key and provider API keys) go in Kubernetes Secrets, referenced from the NodeClass. Full manifests are in the repo under config/samples/.
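A minimal way to create those secrets, using the names and keys referenced in the examples above (values are placeholders):

kubectl create secret generic tailscale-auth \
  --from-literal=authkey=tskey-auth-xxxxx
kubectl create secret generic scaleway-credentials \
  --from-literal=SCW_SECRET_KEY=your-secret-key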
Use cases
When Cloudburst fits best; see the homepage for the full use cases.