This article explains how to use the cluster autoscaler with Cluster API Provider OpenStack.
Cluster Autoscaler on Cluster API is developed as one of the cloud providers in the Kubernetes Autoscaler repository. We update the sample manifest shipped there and use it.
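The sample manifest used in the rest of this article lives in the cloudprovider/clusterapi directory of that repository; a minimal sketch of fetching it (assuming git and the upstream kubernetes/autoscaler repository):
# Clone the autoscaler repository and move to the Cluster API provider directory
git clone https://github.com/kubernetes/autoscaler.git
cd autoscaler/cluster-autoscaler/cloudprovider/clusterapi
# examples/deployment.yaml is the sample manifest edited below
ls examples/deployment.yaml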
Target clusters
The management cluster is created by kind. Its Kubernetes version is v1.19.1:
ubuntu@capi:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kind-control-plane Ready master 37h v1.19.1
The workload cluster is created by cluster-api-provider-openstack v0.3.3. Its Kubernetes version is also v1.19.1:
export CLUSTER_NAME=external
ubuntu@capi:~$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig get nodes
NAME STATUS ROLES AGE VERSION
external-control-plane-g26fc Ready master 21h v1.19.1
external-control-plane-q57tc Ready master 21h v1.19.1
external-control-plane-sxfr6 Ready master 21h v1.19.1
external-md-0-fnwgx Ready <none> 20h v1.19.1
Please note that both Kubernetes versions are v1.19.1, because cluster-autoscaler is not tagged for every Kubernetes version. I am not sure whether version equality is mandatory.
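A quick sketch for comparing the server versions of the two clusters (kubectl version --short is still available in v1.19):
# Management cluster version
kubectl version --short
# Workload cluster version
kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig version --short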
Edit manifest
Configuring node group auto discovery
A cluster named external is deployed in my environment, so --node-group-auto-discovery=clusterapi:clusterName=external is added as an argument to the /cluster-autoscaler command. The following is the relevant part of the result:
spec:
  containers:
  - image: ${AUTOSCALER_IMAGE}
    name: cluster-autoscaler
    command:
    - /cluster-autoscaler
    args:
    - --cloud-provider=clusterapi
    - --node-group-auto-discovery=clusterapi:clusterName=external
  serviceAccountName: cluster-autoscaler
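The clusterName value has to match the name of the Cluster API Cluster object in the management cluster; it can be confirmed with, for example:
kubectl get clusters.cluster.x-k8s.io -A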
Connecting cluster-autoscaler to Cluster API management and workload Clusters
Create workload cluster kubeconfig secret
kubectl create secret generic kubeconfig --from-file=kubeconfig=external.kubeconfig -n kube-system
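If external.kubeconfig is not already on disk, it can be retrieved from the management cluster; a sketch, assuming a clusterctl version that supports get kubeconfig:
# Fetch the workload cluster kubeconfig from the management cluster
clusterctl get kubeconfig external > external.kubeconfig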
Edit the sample manifest to use the secret: --kubeconfig points at the workload cluster, while --clusterapi-cloud-config-authoritative makes the autoscaler use its in-cluster configuration for the management cluster.
spec:
  containers:
  - image: ${AUTOSCALER_IMAGE}
    name: cluster-autoscaler
    command:
    - /cluster-autoscaler
    args:
    - --cloud-provider=clusterapi
    - --kubeconfig=/mnt/workload.kubeconfig
    - --clusterapi-cloud-config-authoritative
    - --node-group-auto-discovery=clusterapi:clusterName=external
    volumeMounts:
    - name: kubeconfig
      mountPath: /mnt
      readOnly: true
  volumes:
  - name: kubeconfig
    secret:
      secretName: kubeconfig
      items:
      - key: kubeconfig
        path: workload.kubeconfig
Apply the manifest
ubuntu@capi:~/autoscaler/cluster-autoscaler/cloudprovider/clusterapi$ export AUTOSCALER_NS=kube-system
ubuntu@capi:~/autoscaler/cluster-autoscaler/cloudprovider/clusterapi$ export AUTOSCALER_IMAGE=us.gcr.io/k8s-artifacts-prod/autoscaling/cluster-autoscaler:v1.19.1
ubuntu@capi:~/autoscaler/cluster-autoscaler/cloudprovider/clusterapi$ envsubst < examples/deployment.yaml | kubectl apply -f -
deployment.apps/cluster-autoscaler created
clusterrolebinding.rbac.authorization.k8s.io/cluster-autoscaler-workload created
clusterrolebinding.rbac.authorization.k8s.io/cluster-autoscaler-management created
serviceaccount/cluster-autoscaler created
clusterrole.rbac.authorization.k8s.io/cluster-autoscaler-workload created
clusterrole.rbac.authorization.k8s.io/cluster-autoscaler-management created
ubuntu@capi:~/autoscaler/cluster-autoscaler/cloudprovider/clusterapi$
ubuntu@capi:~$ kubectl get deploy,pod -n kube-system -l app=cluster-autoscaler
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/cluster-autoscaler 1/1 1 1 88s
NAME READY STATUS RESTARTS AGE
pod/cluster-autoscaler-7fb94d8ccb-bf6tz 1/1 Running 0 87s
ubuntu@capi:~$
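If the pod does not come up, or scaling does not happen later on, the cluster-autoscaler log is the first place to look; for example:
kubectl -n kube-system logs deploy/cluster-autoscaler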
Enable MachineDeployment to autoscale
The following annotations have to be applied to the MachineDeployment.
metadata:
  annotations:
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "1"
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "2"
....
ubuntu@capi:~/autoscaler/cluster-autoscaler/cloudprovider/clusterapi$ kubectl edit machinedeployment external-md-0
machinedeployment.cluster.x-k8s.io/external-md-0 edited
ubuntu@capi:~/autoscaler/cluster-autoscaler/cloudprovider/clusterapi$
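As an alternative to editing interactively, the annotations can also be applied non-interactively; a sketch assuming the MachineDeployment is in the current namespace context:
kubectl annotate machinedeployment external-md-0 \
  cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size="1" \
  cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size="2"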
Test
Target workload cluster
ubuntu@capi:~$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig get nodes
NAME STATUS ROLES AGE VERSION
external-control-plane-g26fc Ready master 26h v1.19.1
external-control-plane-q57tc Ready master 26h v1.19.1
external-control-plane-sxfr6 Ready master 26h v1.19.1
external-md-0-vwrht Ready <none> 54m v1.19.1
Scale up
Worker node external-md-0-vwrht has 2 allocatable CPUs in my environment. Let's create 3 pods, each requesting 500m CPU, by applying the following manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 500m
ubuntu@capi:~$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-7bdc947757-bg749 1/1 Running 0 12s
nginx-deployment-7bdc947757-kptx4 1/1 Running 0 12s
nginx-deployment-7bdc947757-sl5kk 1/1 Running 0 12s
kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig describe node external-md-0-vwrht
...
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1750m (87%) 3 (150%)
memory 0 (0%) 0 (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
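The 1750m of CPU requests presumably breaks down as 3 × 500m = 1500m for the nginx pods plus roughly 250m requested by system pods already running on the node, so a fourth pod requesting 500m no longer fits on the 2-CPU worker.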
Add one more pod requesting 500m CPU by applying the following manifest. This triggers a cluster scale-up.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-1
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-1
        image: nginx
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 500m
ubuntu@capi:~$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig apply -f nginx-1.yaml
deployment.apps/nginx-deployment-1 created
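Until the new node joins, the extra pod would typically stay Pending with a FailedScheduling event; it can be watched with, for example:
kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig get pod -w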
Worker nodes are actually scaled up:
ubuntu@capi:~$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig get nodes
NAME STATUS ROLES AGE VERSION
external-control-plane-g26fc Ready master 26h v1.19.1
external-control-plane-q57tc Ready master 26h v1.19.1
external-control-plane-sxfr6 Ready master 26h v1.19.1
external-md-0-v4778 Ready <none> 34s v1.19.1
external-md-0-vwrht Ready <none> 61m v1.19.1
ubuntu@capi:~$
ubuntu@capi:~$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-1-67f86978ff-p2ptt 1/1 Running 0 113s
nginx-deployment-7bdc947757-bg749 1/1 Running 0 2m36s
nginx-deployment-7bdc947757-kptx4 1/1 Running 0 2m36s
nginx-deployment-7bdc947757-sl5kk 1/1 Running 0 2m36s
ubuntu@capi:~$
A scale-up message is logged in the cluster-autoscaler log:
I0113 07:28:25.494543 1 scale_up.go:663] Scale-up: setting group MachineDeployment/kube-system/external-md-0 size to 2
Scale down
Delete the deployment added last:
ubuntu@capi:~$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig delete -f nginx-1.yaml
deployment.apps "nginx-deployment-1" deleted
ubuntu@capi:~$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-7bdc947757-4xfzk 1/1 Running 0 9m46s
nginx-deployment-7bdc947757-59jk9 1/1 Running 0 13m
nginx-deployment-7bdc947757-t5nms 1/1 Running 0 9m46s
ubuntu@capi:~$
Worker nodes are actually scaled down:
ubuntu@capi:~$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig get nodes
NAME STATUS ROLES AGE VERSION
external-control-plane-g26fc Ready master 27h v1.19.1
external-control-plane-q57tc Ready master 27h v1.19.1
external-control-plane-sxfr6 Ready master 27h v1.19.1
external-md-0-sqg8n Ready <none> 13m v1.19.1
ubuntu@capi:~$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-7bdc947757-bxcm9 1/1 Running 0 2m33s
nginx-deployment-7bdc947757-t5nms 1/1 Running 0 14m
nginx-deployment-7bdc947757-v8sh4 1/1 Running 0 2m33s
ubuntu@capi:~$
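Note that scale-down does not happen immediately after the deployment is deleted: by default, cluster-autoscaler waits until a node has been unneeded for a while (controlled by the --scale-down-unneeded-time flag, 10 minutes by default) before removing it.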