Posts by author "hidekazuna"

Configure pinniped supervisor with Keycloak

This article explains how to configure pinniped supervisor with Keycloak on Tanzu Kubernetes Grid.

Prerequisites

Software versions

  • vSphere 7.0.3
  • Tanzu Kubernetes Grid 1.6.1
  • Avi 21.1.4 2p3
  • Keycloak: 21.0.0

Environment

  • bootstrap machine hostname: tkg161 (the bootstrap machine is used to create the management cluster)
  • domain name: hidekazun.jp

Run Keycloak docker image on bootstrap machine

Keycloak must be running with TLS, which TKG requires.
Start Docker if it is not already running, then create a working directory and move into it.

mkdir keycloak
cd keycloak

Create a self-signed certificate.

echo "subjectAltName = DNS:tkg161.hidekazun.jp" > san.txt
# create key
openssl genrsa 2048 > server.key
 
# create csr
openssl req -new -key server.key > server.csr
 
# self sign
cat server.csr | openssl x509 -req -signkey server.key -extfile san.txt > server.crt
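
To double-check that the SAN was actually embedded, the certificate can be inspected (a quick sanity check, not required for the remaining steps):

# show the Subject Alternative Name section of the generated certificate
openssl x509 -in server.crt -noout -text | grep -A 1 "Subject Alternative Name"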

Create a Dockerfile.

FROM quay.io/keycloak/keycloak:21.0 as builder
 
# Disable health and metrics support
ENV KC_HEALTH_ENABLED=false
ENV KC_METRICS_ENABLED=false
 
# Configure a database vendor
# ENV KC_DB=postgres
 
WORKDIR /opt/keycloak
RUN /opt/keycloak/bin/kc.sh build
 
FROM quay.io/keycloak/keycloak:21.0
COPY --from=builder /opt/keycloak/ /opt/keycloak/
 
ENV KC_HOSTNAME=tkg161.hidekazun.jp
 
COPY server.key /opt/keycloak/server.key
COPY server.crt /opt/keycloak/server.crt
 
ENV KC_HTTPS_CERTIFICATE_FILE=/opt/keycloak/server.crt
ENV KC_HTTPS_CERTIFICATE_KEY_FILE=/opt/keycloak/server.key
 
ENTRYPOINT ["/opt/keycloak/bin/kc.sh"]

Let’s build.

docker build . -t mykeycloak:21.0

Run docker image

docker run --name mykeycloak -p 8443:8443 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin mykeycloak:21.0 start --optimized

Open https://tkg161.hidekazun.jp:8443/, click Administration Console, and log in with admin/admin.

Configure Keycloak

Create a realm named myrealm, a client named myclient, and a user named hidekazun for authentication. At this point, leave the Root URL, Home URL, Valid redirect URIs, and Web origins of the myclient client blank.
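
The same objects can also be created from the command line with the Keycloak Admin CLI instead of the web console. The following is only a rough sketch: it assumes kcadm.sh can reach the server, that the self-signed certificate has been registered with kcadm.sh config truststore (or certificate checks are otherwise handled), and <password> is a placeholder for a password of your choice.

# log in to the admin REST API (the self-signed certificate must be trusted first)
docker exec -it mykeycloak /opt/keycloak/bin/kcadm.sh config credentials \
  --server https://tkg161.hidekazun.jp:8443 --realm master --user admin --password admin

# create the realm, the client, and the user
docker exec -it mykeycloak /opt/keycloak/bin/kcadm.sh create realms -s realm=myrealm -s enabled=true
docker exec -it mykeycloak /opt/keycloak/bin/kcadm.sh create clients -r myrealm \
  -s clientId=myclient -s enabled=true -s publicClient=false
docker exec -it mykeycloak /opt/keycloak/bin/kcadm.sh create users -r myrealm \
  -s username=hidekazun -s enabled=true
docker exec -it mykeycloak /opt/keycloak/bin/kcadm.sh set-password -r myrealm \
  --username hidekazun --new-password <password>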

Create management cluster

Update the cluster configuration file as follows and create the management cluster as usual with tanzu mc create. preferred_username is the claim in the ID token that corresponds to the username.

IDENTITY_MANAGEMENT_TYPE: oidc
...
OIDC_IDENTITY_PROVIDER_CLIENT_ID: "myclient"
OIDC_IDENTITY_PROVIDER_CLIENT_SECRET: "SIQxCyNlFlk02Smgkmit7ZugC6Lqxm92"
OIDC_IDENTITY_PROVIDER_ISSUER_URL: "https://tkg161.hidekazun.jp:8443/realms/myrealm"
OIDC_IDENTITY_PROVIDER_NAME: "keycloak"
OIDC_IDENTITY_PROVIDER_SCOPES: "openid"
OIDC_IDENTITY_PROVIDER_USERNAME_CLAIM: "preferred_username"

After the management cluster is created, the pinniped-post-deploy-job completes.

hidekazun@tkg161:~/keycloak$ k get job pinniped-post-deploy-job -n pinniped-supervisor
NAME                       COMPLETIONS   DURATION   AGE
pinniped-post-deploy-job   1/1           7s         2d7h
hidekazun@tkg161:~/keycloak$

Update Keycloak setting

Confirm the EXTERNAL-IP of the pinniped-supervisor service.

hidekazun@tkg161:~/keycloak$ k get svc -n pinniped-supervisor
NAME                  TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)         AGE
pinniped-supervisor   LoadBalancer   100.65.87.238   192.168.103.100   443:31639/TCP   3d2h
hidekazun@tkg161:~/keycloak$

Update the Root URL, Home URL, Valid redirect URIs, and Web origins of the myclient client to https://192.168.103.100/callback. This IP address varies by environment.

Update pinniped package configuration

Right now Keycloak is running with a self-signed certificate, so the OIDCIdentityProvider resource ends up in an error state. The Pinniped package configuration must be updated to add the self-signed certificate.

Get base64 encoded certificate.

hidekazun@tkg161:~/keycloak$ cat server.crt | base64 -w 0

Get values.yaml of the pinniped-addon. mgmt1 is the management cluster's name.

kubectl get secret mgmt1-pinniped-addon -n tkg-system -o jsonpath="{.data.values\.yaml}" | base64 -d > values.yaml

Update values.yaml to add the upstream_oidc_tls_ca_data value.

pinniped:
  cert_duration: 2160h
  cert_renew_before: 360h
  supervisor_svc_endpoint: https://0.0.0.0:31234
  supervisor_ca_bundle_data: ca_bundle_data_of_supervisor_svc
  supervisor_svc_external_ip: 0.0.0.0
  supervisor_svc_external_dns: null
  upstream_oidc_client_id: myclient
  upstream_oidc_client_secret: SIQxCyNlFlk02Smgkmit7ZugC6Lqxm92
  upstream_oidc_issuer_url: https://tkg161.hidekazun.jp:8443/realms/myrealm
  upstream_oidc_tls_ca_data: LS0tLS......

Let’s apply.

kubectl patch secret/mgmt1-pinniped-addon -n tkg-system -p "{\"data\":{\"values.yaml\":\"$(base64 -w 0 < values.yaml)\"}}" --type=merge

After a while the Pinniped package is reconciled and the OIDCIdentityProvider reaches Ready status.

hidekazun@tkg161:~/keycloak$ kubectl get oidcidentityprovider -n pinniped-supervisor
NAME                              ISSUER                                            STATUS   AGE
upstream-oidc-identity-provider   https://tkg161.hidekazun.jp:8443/realms/myrealm   Ready    70s
hidekazun@tkg161:~/keycloak$

Confirm that authentication works

Because the command is run on the bootstrap machine, which cannot open a browser automatically, set the following:

export TANZU_CLI_PINNIPED_AUTH_LOGIN_SKIP_BROWSER=true

Get the kubeconfig file. The point is that the --admin option is not added.

tanzu mc kubeconfig get --export-file /tmp/mgmt1-kubeconfig

Let’s execute kubectl.

kubectl get pods -A --kubeconfig /tmp/mgmt1-kubeconfig

A URL is shown like the following.

hidekazun@tkg161:~/keycloak$ kubectl get pods -A --kubeconfig /tmp/mgmt1-kubeconfig
Log in by visiting this link:
 
    https://192.168.103.100/oauth2/authorize?access_type=offline&client_id=pinniped-cli&code_challenge=LB5HqUDWH4Kuv58F3FLDcep8h827PVFJbjSmlOmrp6M&code_challenge_method=S256&nonce=385e448cc0071a83475620c4d0d20efa&redirect_uri=http%3A%2F%2F127.0.0.1%3A36991%2Fcallback&response_mode=form_post&response_type=code&scope=offline_access+openid+pinniped%3Arequest-audience&state=3b8edd9d472ef250da6a674e2d3a6a0b
 
    Optionally, paste your authorization code:

Open the URL. Additional browser steps may be needed because of the self-signed certificate.

Log in as the hidekazun user created in Keycloak.

An authorization code is shown.

Copy the code and paste it after "Optionally, paste your authorization code:".

hidekazun@tkg161:~/keycloak$ kubectl get pods -A --kubeconfig /tmp/mgmt1-kubeconfig
Log in by visiting this link:
 
    https://192.168.103.100/oauth2/authorize?access_type=offline&client_id=pinniped-cli&code_challenge=LB5HqUDWH4Kuv58F3FLDcep8h827PVFJbjSmlOmrp6M&code_challenge_method=S256&nonce=385e448cc0071a83475620c4d0d20efa&redirect_uri=http%3A%2F%2F127.0.0.1%3A36991%2Fcallback&response_mode=form_post&response_type=code&scope=offline_access+openid+pinniped%3Arequest-audience&state=3b8edd9d472ef250da6a674e2d3a6a0b
 
    Optionally, paste your authorization code: WegmTqpnyjleGPzW8xDFVkhTlJJHOdVEo2Yny4xDb00.iU5GK34xlVAGwmHSMEz6OHCINqfhP6Rv36zJWFg4ltM
 
Error from server (Forbidden): pods is forbidden: User "hidekazun" cannot list resource "pods" in API group "" at the cluster scope
hidekazun@tkg161:~/keycloak$

The hidekazun user does not have any privileges, so this error is shown. If appropriate privileges are granted, the pods will be listed.
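
For example, read-only access could be granted with a ClusterRoleBinding created against the management cluster using an admin kubeconfig (a minimal sketch; the binding name is arbitrary):

# grant the built-in "view" ClusterRole to the hidekazun user (run with an admin kubeconfig)
kubectl create clusterrolebinding hidekazun-view --clusterrole=view --user=hidekazun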

Renewed the Certified Kubernetes Administrator (CKA)

My Certified Kubernetes Administrator (CKA) certification, obtained on January 6, 2019, was due to expire after three years, so I passed the exam again on December 23, 2021 to renew it. The new expiration date is December 23, 2024. Three years ago the Kubernetes version was v1.15; this time it was v1.22. Partly because of the version difference, and partly because the exam itself was changed on September 1, 2020, my impression is that although it is still performance-based, the content is different enough to feel like a separate exam.

Exam overview

The details are in Important Instructions: CKA and CKAD and Frequently Asked Questions: CKA and CKAD & CKS. Compared with three years ago, the exam time has been reduced from 3 hours to 2 hours. The UI has also been changed slightly and is easier to use.

  • Delivery: Online
  • Duration: 2 hours
  • Format: Performance-based (solve tasks by running Linux commands)
  • Number of questions: 15-20

Exam domains

Unlike three years ago, when the exam seemed to be all about Deployments, the domains and their weights have become more practical. The Certified Kubernetes Administrator (CKA) page lists them as follows. Since Ingress and Network Policy appeared in the CKS, I was half in doubt whether they would really appear in the CKA as well, but they really do.

  • Storage: 10%
  • Troubleshooting: 30%
  • Workloads & Scheduling: 15%
  • Cluster Architecture, Installation & Configuration: 25%
  • Services & Networking: 20%

How I studied

I went through Certified Kubernetes Administrator (CKA) with Practice Tests on Udemy.
An Exam Simulator has been included with the exam since June 2, 2021, but I only figured out how to access it from My Portal three hours before the exam, so I barely used it.

Impressions

Unlike three years ago, I passed on the first attempt with 89 points, so I was happy to feel that I had improved.

Obtained the Certified Kubernetes Security Specialist (CKS)

I obtained the Certified Kubernetes Security Specialist (CKS) certification, which demonstrates the knowledge of securing Kubernetes platforms during build, deployment, and runtime and the ability to perform the tasks this requires.

I took the exam for the first time on February 12, 2021 and scored 41%, then took it again on March 5 and passed with 80% (the passing score is 67% or higher).

Exam overview

The details are in the Candidate Handbook, Important Instructions: CKS, and Frequently Asked Questions: CKA and CKAD & CKS. Refer to those links for the latest information; at the time I took the exam it was as follows.

  • Prerequisite: CKA certification
  • Delivery: Online
  • Duration: 2 hours
  • Format: Performance-based (solve tasks by running commands)
  • Number of questions: 15-20
  • Language: English
  • Retakes: One free retake within 12 months of purchase

What stands out is the number of allowed references. For the CKA I believe only the docs and blog on kubernetes.io were allowed, but for the CKS, documentation for tools other than Kubernetes may also be consulted. In other words, they will show up on the exam.

  • Trivy documentation https://github.com/aquasecurity/trivy
  • Sysdig documentation https://docs.sysdig.com/
  • Falco documentation https://falco.org/docs/
  • App Armor https://gitlab.com/apparmor/apparmor/-/wikis/Documentation

How I studied

Schedule

  • 1/9 – 2/11: Took Kubernetes Security Essentials (LFS260)
  • 2/12: First exam attempt
  • 2/13 – 3/1: Took Kubernetes CKS 2021 Complete Course + Simulator
  • 3/2: Killer Shell Simulator
  • 3/3 – 3/4: Reviewed Kubernetes CKS 2021 Complete Course + Simulator
  • 3/5: Second exam attempt

Taking Kubernetes Security Essentials (LFS260)

I read through Kubernetes Security Essentials (LFS260), which I had bought as a bundle with the CKS exam, installed a Kubernetes cluster on KVM on my NUC, worked through the exercises, and then reviewed. Searching around, everyone seemed to be taking Kubernetes CKS 2021 Complete Course + Simulator (Killer Shell below), which made me uneasy, but I first wanted to see how far LFS260 alone would get me.

First attempt

Each task comes with a procedure to carry out, and all you have to do is execute it methodically, but I kept catching myself working out my own way to do the task, time slipped away, and I only got through two thirds of the questions. From partway through, it was all I could do to at least look at every question in preparation for a second attempt.
The result was 41%. I decided my study method was clearly at fault and bought Kubernetes CKS 2021 Complete Course + Simulator the same day.

Taking Kubernetes CKS 2021 Complete Course + Simulator

Using the installation script that comes with Killer Shell, I installed Kubernetes on the NUC and took a snapshot; whenever the environment got into a confusing state during the exercises, I rolled back to the snapshot and kept practicing. After going through everything once, I redid each exercise after watching only the beginning of the practice video, then checked the answers by playing the rest at fast-forward.

Doing the Killer Shell Simulator

Killer Shell includes a Simulator, a two-hour mock exam just like the real thing. I tried it and scored a mere 46%, only 5% higher than my first attempt. Even though it is said to be harder than the real exam, I was shocked. By then I had already scheduled my second attempt. The Simulator environment is kept for 36 hours, so I went through all of its content while looking at the answers.

Reviewing Kubernetes CKS 2021 Complete Course + Simulator

I am simply not good at working quickly and methodically, so instead of only redoing the Killer Shell exercises after watching their openings, I also replayed the exercise and answer sections at 1.75x speed, pausing only when unavoidable. I believe this trained me to work quickly and methodically without overthinking.

Second attempt

I skimmed each task, worked through the procedure methodically, and made a point of moving on whenever I got stuck, and this time I was able to attempt every task. The one painful part was all the copying and pasting with Ctrl+Insert and Shift+Insert. On a Mac, ⌘+C and ⌘+V would do, but on Windows it is Ctrl+Insert and Shift+Insert.

Impressions

If I had not passed, I was about to buy an M1 Mac just for the easier copy and paste, so I am glad I passed.

Cluster Autoscaler on Cluster API Provider OpenStack

This article explains how to use cluster autoscaler on Cluster API Provider OpenStack.

Cluster Autoscaler for Cluster API is developed as one of the cloud providers in the Kubernetes Autoscaler repository. We update its sample manifest and use it.
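
The sample manifest lives in the Autoscaler repository, so cloning it first is the easiest way to edit it (a sketch; the paths match the prompts shown later in this article):

git clone https://github.com/kubernetes/autoscaler.git
cd autoscaler/cluster-autoscaler/cloudprovider/clusterapi
# the sample manifest edited below is examples/deployment.yaml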

Target clusters

The management cluster is created with kind. The Kubernetes version is v1.19.1:

ubuntu@capi:~$ kubectl get nodes
NAME                 STATUS   ROLES    AGE   VERSION
kind-control-plane   Ready    master   37h   v1.19.1

The workload cluster is created with cluster-api-provider-openstack v0.3.3. The Kubernetes version is v1.19.1:

export CLUSTER_NAME=external
ubuntu@capi:~$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig get nodes
NAME                           STATUS   ROLES    AGE   VERSION
external-control-plane-g26fc   Ready    master   21h   v1.19.1
external-control-plane-q57tc   Ready    master   21h   v1.19.1
external-control-plane-sxfr6   Ready    master   21h   v1.19.1
external-md-0-fnwgx            Ready    <none>   20h   v1.19.1

Please note that both Kubernetes versions are v1.19.1, because cluster-autoscaler is not tagged for every Kubernetes version. I am not sure whether version equality is mandatory.

Edit manifest

Configuring node group auto discovery

A cluster named external is deployed in my environment, so --node-group-auto-discovery=clusterapi:clusterName=external is added to the /cluster-autoscaler command. The following is the relevant piece of the result.

    spec:
      containers:
      - image: ${AUTOSCALER_IMAGE}
        name: cluster-autoscaler
        command:
        - /cluster-autoscaler
        args:
        - --cloud-provider=clusterapi
        - --node-group-auto-discovery=clusterapi:clusterName=external
      serviceAccountName: cluster-autoscaler

Connecting cluster-autoscaler to Cluster API management and workload Clusters

Create workload cluster kubeconfig secret

kubectl create secret generic kubeconfig --from-file=kubeconfig=external.kubeconfig -n kube-system

Edit the sample manifest to use the secret. The --clusterapi-cloud-config-authoritative flag makes the autoscaler use its in-cluster configuration for the management cluster, while --kubeconfig points at the workload cluster.

    spec:
      containers:
      - image: ${AUTOSCALER_IMAGE}
        name: cluster-autoscaler
        command:
        - /cluster-autoscaler
        args:
        - --cloud-provider=clusterapi
        - --kubeconfig=/mnt/workload.kubeconfig
        - --clusterapi-cloud-config-authoritative
        - --node-group-auto-discovery=clusterapi:clusterName=external
        volumeMounts:
        - name: kubeconfig
          mountPath: /mnt
          readOnly: true
      volumes:
      - name: kubeconfig
        secret:
          secretName: kubeconfig

Apply the manifest

ubuntu@capi:~/autoscaler/cluster-autoscaler/cloudprovider/clusterapi$ export AUTOSCALER_NS=kube-system
ubuntu@capi:~/autoscaler/cluster-autoscaler/cloudprovider/clusterapi$ export AUTOSCALER_IMAGE=us.gcr.io/k8s-artifacts-prod/autoscaling/cluster-autoscaler:v1.19.1
ubuntu@capi:~/autoscaler/cluster-autoscaler/cloudprovider/clusterapi$ envsubst < examples/deployment.yaml | kubectl apply -f -
deployment.apps/cluster-autoscaler created
clusterrolebinding.rbac.authorization.k8s.io/cluster-autoscaler-workload created
clusterrolebinding.rbac.authorization.k8s.io/cluster-autoscaler-management created
serviceaccount/cluster-autoscaler created
clusterrole.rbac.authorization.k8s.io/cluster-autoscaler-workload created
clusterrole.rbac.authorization.k8s.io/cluster-autoscaler-management created
ubuntu@capi:~/autoscaler/cluster-autoscaler/cloudprovider/clusterapi$
ubuntu@capi:~$ kubectl get deploy,pod -n kube-system -l app=cluster-autoscaler
NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/cluster-autoscaler   1/1     1            1           88s

NAME                                      READY   STATUS    RESTARTS   AGE
pod/cluster-autoscaler-7fb94d8ccb-bf6tz   1/1     Running   0          87s
ubuntu@capi:~$

Enable MachineDeployment to autoscale

The following annotations have to be applied to the MachineDeployment.

metadata:
  annotations:
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "1"
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "2"
    ....
ubuntu@capi:~/autoscaler/cluster-autoscaler/cloudprovider/clusterapi$ kubectl edit machinedeployment external-md-0
machinedeployment.cluster.x-k8s.io/external-md-0 edited
ubuntu@capi:~/autoscaler/cluster-autoscaler/cloudprovider/clusterapi$
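
If you prefer not to use kubectl edit, the same annotations can be applied non-interactively, for example (a sketch for this environment's MachineDeployment name):

kubectl annotate machinedeployment external-md-0 \
  cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size="1"
kubectl annotate machinedeployment external-md-0 \
  cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size="2"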

Test

Target workload cluster

ubuntu@capi:~$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig get nodes
NAME                           STATUS   ROLES    AGE   VERSION
external-control-plane-g26fc   Ready    master   26h   v1.19.1
external-control-plane-q57tc   Ready    master   26h   v1.19.1
external-control-plane-sxfr6   Ready    master   26h   v1.19.1
external-md-0-vwrht            Ready    <none>   54m   v1.19.1

Scale up

The worker node external-md-0-vwrht has 2 allocatable CPUs in my environment. Let's create 3 pods, each requesting 500m CPU, by applying the following manifest.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 500m
ubuntu@capi:~$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig get pod
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-7bdc947757-bg749   1/1     Running   0          12s
nginx-deployment-7bdc947757-kptx4   1/1     Running   0          12s
nginx-deployment-7bdc947757-sl5kk   1/1     Running   0          12s
kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig describe node external-md-0-vwrht

...
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                1750m (87%)  3 (150%)
  memory             0 (0%)       0 (0%)
  ephemeral-storage  0 (0%)       0 (0%)
  hugepages-1Gi      0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)

Add one more pod requesting 500m CPU by applying the following manifest. This triggers a cluster scale-up.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-1
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-1
        image: nginx
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 500m
ubuntu@capi:~$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig apply -f nginx-1.yaml
deployment.apps/nginx-deployment-1 created

Worker nodes are actually scaled up:

ubuntu@capi:~$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig get nodes
NAME                           STATUS   ROLES    AGE   VERSION
external-control-plane-g26fc   Ready    master   26h   v1.19.1
external-control-plane-q57tc   Ready    master   26h   v1.19.1
external-control-plane-sxfr6   Ready    master   26h   v1.19.1
external-md-0-v4778            Ready    <none>   34s   v1.19.1
external-md-0-vwrht            Ready    <none>   61m   v1.19.1
ubuntu@capi:~$

ubuntu@capi:~$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig get pod
NAME                                  READY   STATUS    RESTARTS   AGE
nginx-deployment-1-67f86978ff-p2ptt   1/1     Running   0          113s
nginx-deployment-7bdc947757-bg749     1/1     Running   0          2m36s
nginx-deployment-7bdc947757-kptx4     1/1     Running   0          2m36s
nginx-deployment-7bdc947757-sl5kk     1/1     Running   0          2m36s
ubuntu@capi:~$

A scale-up message is logged in the cluster-autoscaler log.

I0113 07:28:25.494543       1 scale_up.go:663] Scale-up: setting group MachineDeployment/kube-system/external-md-0 size to 2
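
The log line above can be found with something like the following (the Deployment name comes from the sample manifest applied earlier):

kubectl -n kube-system logs deployment/cluster-autoscaler | grep -i scale-up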

Scale down

Delete the deployment added last:

ubuntu@capi:~$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig delete -f nginx-1.yaml
deployment.apps "nginx-deployment-1" deleted
ubuntu@capi:~$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig get pod
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-7bdc947757-4xfzk   1/1     Running   0          9m46s
nginx-deployment-7bdc947757-59jk9   1/1     Running   0          13m
nginx-deployment-7bdc947757-t5nms   1/1     Running   0          9m46s
ubuntu@capi:~$

Worker nodes are actually scaled down:

ubuntu@capi:~$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig get nodes
NAME                           STATUS   ROLES    AGE   VERSION
external-control-plane-g26fc   Ready    master   27h   v1.19.1
external-control-plane-q57tc   Ready    master   27h   v1.19.1
external-control-plane-sxfr6   Ready    master   27h   v1.19.1
external-md-0-sqg8n            Ready    <none>   13m   v1.19.1
ubuntu@capi:~$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig get pod
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-7bdc947757-bxcm9   1/1     Running   0          2m33s
nginx-deployment-7bdc947757-t5nms   1/1     Running   0          14m
nginx-deployment-7bdc947757-v8sh4   1/1     Running   0          2m33s
ubuntu@capi:~$

Upgrade workload cluster on Cluster API Provider OpenStack

This article explains how to upgrade a workload cluster deployed by cluster-api-provider-openstack.

Prerequisites

Workload cluster

The workload cluster was deployed by cluster-api-provider-openstack v0.3.3 as follows.

ubuntu@capi:~$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig get nodes
NAME                           STATUS   ROLES    AGE   VERSION
external-control-plane-8l9pd   Ready    master   9d    v1.17.11
external-control-plane-8qcfq   Ready    master   9d    v1.17.11
external-control-plane-ddzxd   Ready    master   9d    v1.17.11
external-md-0-th94m            Ready    <none>   9d    v1.17.11

This cluster will be upgraded to v1.18.14.

Image

A new image is built using the Kubernetes Image Builder for OpenStack, specifying v1.18.14. In this article, the new image is named ubuntu-1804-kube-v1.18.14.
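
If the image is not registered in Glance yet, it can be uploaded with the OpenStack CLI, roughly as follows (a sketch; the qcow2 file name is whatever image-builder produced in your environment):

openstack image create ubuntu-1804-kube-v1.18.14 \
  --disk-format qcow2 --container-format bare \
  --file ubuntu-1804-kube-v1.18.14.qcow2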

Upgrade control plane machines

Create new OpenStackMachineTemplate

ubuntu@capi:~$ kubectl get openstackmachinetemplates
NAME                     AGE
external-control-plane   9d
external-md-0            9d
ubuntu@capi:~$ kubectl get openstackmachinetemplates external-control-plane -o yaml > file.yaml
ubuntu@capi:~$ cp file.yaml external-control-plane-v11814.yaml

Edit metadata.name and spec.template.spec.image and remove unnecessary annotations.

The resulting external-control-plane-v11814.yaml is the following.

apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: OpenStackMachineTemplate
metadata:
  annotations:
  name: external-control-plane-v11814
  namespace: default
spec:
  template:
    spec:
      cloudName: openstack
      cloudsSecret:
        name: external-cloud-config
        namespace: default
      flavor: small
      image: ubuntu-1804-kube-v1.18.14
      sshKeyName: mykey

Apply new OpenStackMachineTemplate

ubuntu@capi:~$ kubectl apply -f external-control-plane-v11814.yaml
openstackmachinetemplate.infrastructure.cluster.x-k8s.io/external-control-plane-v11814 created
ubuntu@capi:~$ kubectl get openstackmachinetemplates
NAME                            AGE
external-control-plane          9d
external-control-plane-v11814   25s
external-md-0                   9d
ubuntu@capi:~$

Edit KubeadmControlPlane

ubuntu@capi:~$ kubectl edit kubeadmcontrolplane
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/external-control-plane edited

Edit spec.infrastructureTemplate.name and spec.version as follows.

spec:
  infrastructureTemplate:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: OpenStackMachineTemplate
    name: external-control-plane-v11814
    namespace: default
  kubeadmConfigSpec:
    clusterConfiguration:
      apiServer:
        extraArgs:
          cloud-provider: external
      controllerManager:
        extraArgs:
          cloud-provider: external
      dns: {}
      etcd: {}
      imageRepository: k8s.gcr.io
      networking: {}
      scheduler: {}
    initConfiguration:
      localAPIEndpoint:
        advertiseAddress: ""
        bindPort: 0
      nodeRegistration:
        kubeletExtraArgs:
          cloud-provider: external
        name: '{{ local_hostname }}'
    joinConfiguration:
      discovery: {}
      nodeRegistration:
        kubeletExtraArgs:
          cloud-provider: external
        name: '{{ local_hostname }}'
  replicas: 3
  version: v1.18.14

Rolling upgrade should start.

ubuntu@capi:~$ kubectl get kubeadmcontrolplane
NAME                     INITIALIZED   API SERVER AVAILABLE   VERSION    REPLICAS   READY   UPDATED   UNAVAILABLE
external-control-plane   true          true                   v1.18.14   4          3       1         1
ubuntu@capi:~$

ubuntu@capi:~$ kubectl get openstackmachines
NAME                                  CLUSTER    INSTANCESTATE   READY   PROVIDERID                                         MACHINE
external-control-plane-8l9pd          external   ACTIVE          true    openstack://e6b2445b-17aa-419f-9cdf-f2e7f517b5b7   external-control-plane-cwlfw
external-control-plane-8qcfq          external   ACTIVE          true    openstack://211d1a59-97a2-4f0c-8dbb-a44a42106746   external-control-plane-9fm4f
external-control-plane-ddzxd          external   ACTIVE          true    openstack://ab4664e3-7e32-481c-8810-414cc5632df2   external-control-plane-v9dld
external-control-plane-v11814-d2qtq   external                                                                              external-control-plane-2ksn7
external-md-0-th94m                   external   ACTIVE          true    openstack://7579259a-8da2-4538-82f6-f9a174cc0707   external-md-0-84b9fff89c-r5kdz
ubuntu@capi:~$
ubuntu@capi:~$ kubectl get kubeadmcontrolplane
NAME                     INITIALIZED   API SERVER AVAILABLE   VERSION    REPLICAS   READY   UPDATED   UNAVAILABLE
external-control-plane   true          true                   v1.18.14   3          3       3         0
ubuntu@capi:~$ kubectl get openstackmachines -w
NAME                                  CLUSTER    INSTANCESTATE   READY   PROVIDERID                                         MACHINE
external-control-plane-v11814-ctbkw   external   ACTIVE          true    openstack://94a1cbb1-2726-4ac3-be6f-20dc7b2dc9a4   external-control-plane-9pb66
external-control-plane-v11814-d2qtq   external   ACTIVE          true    openstack://24f82d1d-567a-4968-94ae-db1bd2d782c3   external-control-plane-2ksn7
external-control-plane-v11814-qn9xb   external   ACTIVE          true    openstack://1061e602-e38f-4e57-a1ed-9639f6d37052   external-control-plane-b5m8s
external-md-0-th94m                   external   ACTIVE          true    openstack://7579259a-8da2-4538-82f6-f9a174cc0707   external-md-0-84b9fff89c-r5kdz
ubuntu@capi:~$

Control plane machines are upgraded successfully.

ubuntu@capi:~$ kubectl get machines
NAME                             PROVIDERID                                         PHASE          VERSION
external-control-plane-2ksn7     openstack://24f82d1d-567a-4968-94ae-db1bd2d782c3   Running        v1.18.14
external-control-plane-9pb66     openstack://94a1cbb1-2726-4ac3-be6f-20dc7b2dc9a4   Running        v1.18.14
external-control-plane-b5m8s     openstack://1061e602-e38f-4e57-a1ed-9639f6d37052   Running        v1.18.14
external-md-0-84b9fff89c-r5kdz   openstack://7579259a-8da2-4538-82f6-f9a174cc0707   Running        v1.17.11
ubuntu@capi:~$

Delete old OpenStackMachineTemplate

ubuntu@capi:~$ kubectl delete openstackmachinetemplates external-control-plane
openstackmachinetemplate.infrastructure.cluster.x-k8s.io "external-control-plane" deleted

Upgrade machines managed by a MachineDeployment

Create new OpenStackMachineTemplate

ubuntu@capi:~$ kubectl get openstackmachinetemplates external-md-0 -o yaml > external-md-0.yaml
ubuntu@capi:~$ cp external-md-0.yaml external-md-0-v11814.yaml
ubuntu@capi:~$ vi external-md-0-v11814.yaml

Edit metadata.name and spec.template.spec.image and remove unnecessary annotations. The result is the following.

apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: OpenStackMachineTemplate
metadata:
  annotations:
  name: external-md-0-v11814
  namespace: default
spec:
  template:
    spec:
      cloudName: openstack
      cloudsSecret:
        name: external-cloud-config
        namespace: default
      flavor: small
      image: ubuntu-1804-kube-v1.18.14
      sshKeyName: mykey

Apply new OpenStackMachineTemplate

ubuntu@capi:~$ kubectl apply -f external-md-0-v11814.yaml
openstackmachinetemplate.infrastructure.cluster.x-k8s.io/external-md-0-v11814 created
ubuntu@capi:~$
ubuntu@capi:~$ kubectl get openstackmachinetemplates
NAME                            AGE
external-control-plane-v11814   34m
external-md-0                   9d
external-md-0-v11814            33s
ubuntu@capi:~$

Edit MachineDeployment

ubuntu@capi:~$ kubectl edit machinedeployment external-md-0
machinedeployment.cluster.x-k8s.io/external-md-0 edited
ubuntu@capi:~$

Edit spec.template.spec.infrastructureTemplate.name and spec.template.spec.version as follows.

  template:
    metadata:
      labels:
        cluster.x-k8s.io/cluster-name: external
        cluster.x-k8s.io/deployment-name: external-md-0
    spec:
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
          kind: KubeadmConfigTemplate
          name: external-md-0
      clusterName: external
      failureDomain: nova
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
        kind: OpenStackMachineTemplate
        name: external-md-0-v11814
      version: v1.18.14

Upgrade should start.

ubuntu@capi:~$ kubectl get machinedeployment external-md-0
NAME            PHASE     REPLICAS   READY   UPDATED   UNAVAILABLE
external-md-0   Running   2          1       1         1
ubuntu@capi:~$

ubuntu@capi:~$ kubectl get openstackmachines
NAME                                  CLUSTER    INSTANCESTATE   READY   PROVIDERID                                         MACHINE
external-control-plane-v11814-ctbkw   external   ACTIVE          true    openstack://94a1cbb1-2726-4ac3-be6f-20dc7b2dc9a4   external-control-plane-9pb66
external-control-plane-v11814-d2qtq   external   ACTIVE          true    openstack://24f82d1d-567a-4968-94ae-db1bd2d782c3   external-control-plane-2ksn7
external-control-plane-v11814-qn9xb   external   ACTIVE          true    openstack://1061e602-e38f-4e57-a1ed-9639f6d37052   external-control-plane-b5m8s
external-md-0-th94m                   external   ACTIVE          true    openstack://7579259a-8da2-4538-82f6-f9a174cc0707   external-md-0-84b9fff89c-r5kdz
external-md-0-v11814-fjlmp            external   ACTIVE          true    openstack://10522a90-de2a-473c-88df-213f4942df99   external-md-0-6d87f687f5-pmmnz
ubuntu@capi:~$
ubuntu@capi:~$ kubectl get machinedeployment
NAME            PHASE     REPLICAS   READY   UPDATED   UNAVAILABLE
external-md-0   Running   1          1       1
ubuntu@capi:~$
ubuntu@capi:~$ kubectl get openstackmachines
NAME                                  CLUSTER    INSTANCESTATE   READY   PROVIDERID                                         MACHINE
external-control-plane-v11814-ctbkw   external   ACTIVE          true    openstack://94a1cbb1-2726-4ac3-be6f-20dc7b2dc9a4   external-control-plane-9pb66
external-control-plane-v11814-d2qtq   external   ACTIVE          true    openstack://24f82d1d-567a-4968-94ae-db1bd2d782c3   external-control-plane-2ksn7
external-control-plane-v11814-qn9xb   external   ACTIVE          true    openstack://1061e602-e38f-4e57-a1ed-9639f6d37052   external-control-plane-b5m8s
external-md-0-v11814-fjlmp            external   ACTIVE          true    openstack://10522a90-de2a-473c-88df-213f4942df99   external-md-0-6d87f687f5-pmmnz
ubuntu@capi:~$

Machines managed by MachineDeployment are upgraded successfully.

ubuntu@capi:~$ kubectl get machines
NAME                             PROVIDERID                                         PHASE          VERSION
external-control-plane-2ksn7     openstack://24f82d1d-567a-4968-94ae-db1bd2d782c3   Running        v1.18.14
external-control-plane-9pb66     openstack://94a1cbb1-2726-4ac3-be6f-20dc7b2dc9a4   Running        v1.18.14
external-control-plane-b5m8s     openstack://1061e602-e38f-4e57-a1ed-9639f6d37052   Running        v1.18.14
external-md-0-6d87f687f5-pmmnz   openstack://10522a90-de2a-473c-88df-213f4942df99   Running        v1.18.14
ubuntu@capi:~$

Delete old OpenStackMachineTemplate

ubuntu@capi:~$ kubectl delete openstackmachinetemplates external-md-0
openstackmachinetemplate.infrastructure.cluster.x-k8s.io "external-md-0" deleted
ubuntu@capi:~$

Configure MachineHealthCheck on Cluster API Provider OpenStack

This article explains how to configure MachineHealthCheck for a Kubernetes cluster deployed by cluster-api-provider-openstack.

Target environment

ubuntu@capi:~$ clusterctl upgrade plan
Checking cert-manager version...
Cert-Manager is already up to date

Checking new release availability...

Management group: capi-system/cluster-api, latest release available for the v1alpha3 API Version of Cluster API (contract):

NAME                       NAMESPACE                           TYPE                     CURRENT VERSION   NEXT VERSION
bootstrap-kubeadm          capi-kubeadm-bootstrap-system       BootstrapProvider        v0.3.12           Already up to date
control-plane-kubeadm      capi-kubeadm-control-plane-system   ControlPlaneProvider     v0.3.12           Already up to date
cluster-api                capi-system                         CoreProvider             v0.3.12           Already up to date
infrastructure-openstack   capo-system                         InfrastructureProvider   v0.3.3            Already up to date

You are already up to date!


New clusterctl version available: v0.3.10 -> v0.3.12
https://github.com/kubernetes-sigs/cluster-api/releases/tag/v0.3.12
ubuntu@capi:~$ kubectl get machines
NAME                             PROVIDERID                                         PHASE      VERSION
external-control-plane-fffhl     openstack://33170cf7-8112-4163-91ab-c9fb4b4f1f81   Running    v1.17.11
external-control-plane-q9fms     openstack://6a5bd72f-7926-413e-92a6-d4d6b8ab5c7d   Running    v1.17.11
external-control-plane-zd7hd     openstack://8eca63d1-fac4-4979-ad76-447bc803d98e   Running    v1.17.11
external-md-0-84b9fff89c-snh5m   openstack://af280c1f-fe8b-46a6-993b-6cea5e015505   Running    v1.17.11

Creating a MachineHealthCheck

Create a manifest file and apply it.

apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineHealthCheck
metadata:
  name: external-node-unhealthy-5m
spec:
  clusterName: external
  maxUnhealthy: 100%
  selector:
    matchLabels:
      cluster.x-k8s.io/deployment-name: external-md-0
  unhealthyConditions:
  - type: Ready
    status: Unknown
    timeout: 300s
  - type: Ready
    status: "False"
    timeout: 300s
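
Applying it looks like the following (the file name is just an example):

kubectl apply -f external-node-unhealthy-5m.yaml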

See MachineHealthCheck resource

ubuntu@capi:~$ kubectl get mhc
NAME                         MAXUNHEALTHY   EXPECTEDMACHINES   CURRENTHEALTHY
external-node-unhealthy-5m   100%           1                  1

Test

Log in to the worker node and stop kubelet to mark the worker node unhealthy. You may need to add a bastion and log in to the worker node via the bastion.
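
Stopping kubelet on the worker node is enough to make its Node conditions go Unknown, for example (a sketch; how you reach the node depends on your bastion setup):

# on the worker node
sudo systemctl stop kubelet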

Let's observe the events and machines. Cluster API tries to delete the worker node and create a new one. Since kubelet was stopped, the cluster cannot drain the Machine's node.

ubuntu@capi:~$ kubectl get machines
NAME                             PROVIDERID                                         PHASE      VERSION
external-control-plane-fffhl     openstack://33170cf7-8112-4163-91ab-c9fb4b4f1f81   Running    v1.17.11
external-control-plane-q9fms     openstack://6a5bd72f-7926-413e-92a6-d4d6b8ab5c7d   Running    v1.17.11
external-control-plane-zd7hd     openstack://8eca63d1-fac4-4979-ad76-447bc803d98e   Running    v1.17.11
external-md-0-84b9fff89c-97sz2   openstack://956ebe53-a3d0-4e02-8e65-4561fbfc6f9a   Running    v1.17.11
external-md-0-84b9fff89c-snh5m   openstack://af280c1f-fe8b-46a6-993b-6cea5e015505   Deleting   v1.17.11
kubectl get events --sort-by=.metadata.creationTimestamp

4m48s       Normal    MachineMarkedUnhealthy          machine/external-md-0-84b9fff89c-snh5m          Machine default/external-node-unhealthy-5m/external-md-0-84b9fff89c-snh5m/external-md-0-rcct8 has been marked as unhealthy
4m48s       Normal    SuccessfulCreate                machineset/external-md-0-84b9fff89c             Created machine "external-md-0-84b9fff89c-97sz2"
111s        Normal    DetectedUnhealthy               machine/external-md-0-84b9fff89c-97sz2          Machine default/external-node-unhealthy-5m/external-md-0-84b9fff89c-97sz2/ has unhealthy node
4m35s       Normal    SuccessfulCreateServer          openstackmachine/external-md-0-nfwds            Created server external-md-0-nfwds with id 956ebe53-a3d0-4e02-8e65-4561fbfc6f9a
6s          Warning   FailedDrainNode                 machine/external-md-0-84b9fff89c-snh5m          error draining Machine's node "external-md-0-rcct8": requeue in 20s
111s        Normal    SuccessfulSetNodeRef            machine/external-md-0-84b9fff89c-97sz2          external-md-0-nfwds

After a few minutes, the worker node is replaced.

ubuntu@capi:~$ kubectl get machines
NAME                             PROVIDERID                                         PHASE     VERSION
external-control-plane-fffhl     openstack://33170cf7-8112-4163-91ab-c9fb4b4f1f81   Running   v1.17.11
external-control-plane-q9fms     openstack://6a5bd72f-7926-413e-92a6-d4d6b8ab5c7d   Running   v1.17.11
external-control-plane-zd7hd     openstack://8eca63d1-fac4-4979-ad76-447bc803d98e   Running   v1.17.11
external-md-0-84b9fff89c-97sz2   openstack://956ebe53-a3d0-4e02-8e65-4561fbfc6f9a   Running   v1.17.11
kubectl get events --sort-by=.metadata.creationTimestamp

10m         Normal    DetectedUnhealthy               machine/external-md-0-84b9fff89c-snh5m          Machine default/external-node-unhealthy-5m/external-md-0-84b9fff89c-snh5m/external-md-0-rcct8 has unhealthy node external-md-0-rcct8
75s         Normal    MachineMarkedUnhealthy          machine/external-md-0-84b9fff89c-snh5m          Machine default/external-node-unhealthy-5m/external-md-0-84b9fff89c-snh5m/external-md-0-rcct8 has been marked as unhealthy
7m40s       Normal    SuccessfulCreate                machineset/external-md-0-84b9fff89c             Created machine "external-md-0-84b9fff89c-97sz2"
4m43s       Normal    DetectedUnhealthy               machine/external-md-0-84b9fff89c-97sz2          Machine default/external-node-unhealthy-5m/external-md-0-84b9fff89c-97sz2/ has unhealthy node
7m27s       Normal    SuccessfulCreateServer          openstackmachine/external-md-0-nfwds            Created server external-md-0-nfwds with id 956ebe53-a3d0-4e02-8e65-4561fbfc6f9a
2m38s       Warning   FailedDrainNode                 machine/external-md-0-84b9fff89c-snh5m          error draining Machine's node "external-md-0-rcct8": requeue in 20s
4m43s       Normal    SuccessfulSetNodeRef            machine/external-md-0-84b9fff89c-97sz2          external-md-0-nfwds
73s         Normal    SuccessfulDrainNode             machine/external-md-0-84b9fff89c-snh5m          success draining Machine's node "external-md-0-rcct8"
73s         Normal    SuccessfulDeleteServer          openstackmachine/external-md-0-rcct8            Deleted server external-md-0-rcct8 with id af280c1f-fe8b-46a6-993b-6cea5e015505
ubuntu@capi:~$

MachineHealthCheck worked successfully.

Deploy kubernetes cluster using cluster-api-provider-openstack v0.3.3

cluster-api-provider-openstack v0.3.3 was released on November 25th. This article explains how to deploy a Kubernetes cluster using cluster-api-provider-openstack with:

  • External Cloud Provider
  • Authenticate with application credential
  • Service of LoadBalancer type
  • Cinder CSI

Prerequisites

Installation

Install kubectl, Docker, and Kind
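
On an Ubuntu bootstrap machine, the installation can look roughly like this (a sketch; the versions and download URLs are examples, adjust them to your environment):

# kubectl
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.19.1/bin/linux/amd64/kubectl
chmod +x kubectl && sudo mv kubectl /usr/local/bin/
# Docker
sudo apt-get update && sudo apt-get install -y docker.io
# kind
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.9.0/kind-linux-amd64
chmod +x ./kind && sudo mv ./kind /usr/local/bin/kind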

Deploy kubernetes cluster

ubuntu@capi:~$ kind create cluster

Install clusterctl

ubuntu@capi:~$ curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.3.11/clusterctl-linux-amd64 -o clusterctl
ubuntu@capi:~$ chmod +x ./clusterctl
ubuntu@capi:~$ sudo mv ./clusterctl /usr/local/bin/clusterctl

Initialize the management cluster

ubuntu@capi:~$ clusterctl init --infrastructure openstack

Create workload cluster

Configure environment variables

My clouds.yaml is the following.

ubuntu@capi:~$ cat clouds.yaml
clouds:
  openstack:
    auth:
      auth_url: http://controller.hidekazuna.test:5000
      auth_type: "v3applicationcredential"
      application_credential_id: "49a25feadeb24bb1b8490ff1813e8265"
      application_credential_secret: "cluster-api-provider-openstack"
    region_name: RegionOne
ubuntu@capi:~$

cluster-api-provider-openstack provides a useful script, env.rc, to set environment variables from clouds.yaml.

ubuntu@capi:~$ wget https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-openstack/master/templates/env.rc -O /tmp/env.rc
ubuntu@capi:~$ source /tmp/env.rc clouds.yaml openstack

Other environment variables are needed.

ubuntu@capi:~$ export OPENSTACK_DNS_NAMESERVERS=10.0.0.11
ubuntu@capi:~$ export OPENSTACK_FAILURE_DOMAIN=nova
ubuntu@capi:~$ export OPENSTACK_CONTROL_PLANE_MACHINE_FLAVOR=small
ubuntu@capi:~$ export OPENSTACK_NODE_MACHINE_FLAVOR=small
ubuntu@capi:~$ export OPENSTACK_IMAGE_NAME=u1804-kube-v1.17.11
ubuntu@capi:~$ export OPENSTACK_SSH_KEY_NAME=mykey

The environment variables are self-explanatory. One thing to note is that the image needs to be built using the Kubernetes Image Builder for OpenStack.

Create manifest

ubuntu@capi:~$ clusterctl config cluster external --flavor external-cloud-provider --kubernetes-version v1.17.11 --control-plane-machine-count=3 --worker-machine-count=1 > external.yaml

Delete disableServerTags from the manifest

Unfortunately, the template for the external cloud provider has a bug. We need to delete disableServerTags from the manifest manually.
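
One quick way to do it (a sketch; back up the file first if you prefer yq or manual editing):

# drop every line that mentions disableServerTags from the generated manifest
sed -i '/disableServerTags/d' external.yaml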

Apply the manifest

ubuntu@capi:~$ kubectl apply -f external.yaml
cluster.cluster.x-k8s.io/external unchanged
openstackcluster.infrastructure.cluster.x-k8s.io/external created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/external-control-plane created
openstackmachinetemplate.infrastructure.cluster.x-k8s.io/external-control-plane created
machinedeployment.cluster.x-k8s.io/external-md-0 created
openstackmachinetemplate.infrastructure.cluster.x-k8s.io/external-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/external-md-0 created
secret/external-cloud-config created

Check if we can continue

Check if OpenStackCluster READY is true.

ubuntu@capi:~$ kubectl get openstackcluster
NAME       CLUSTER    READY   NETWORK                                SUBNET                                 BASTION
external   external   true    7f7cc336-4778-4732-93eb-a8ecd18b8017   f35c75ab-8761-49fd-82d2-faa47138fe42

It is OK if the machine PHASE is still Provisioning, as follows.

ubuntu@capi:~$ kubectl get machine
NAME                             PROVIDERID                                         PHASE          VERSION
external-control-plane-c4dnt     openstack://a112dd08-f2ff-4f71-a639-68cf6504d36a   Provisioning   v1.17.11
external-md-0-84b9fff89c-w8nwc   openstack://437dfe44-2aa4-4337-b740-0a729709ea61   Provisioning   v1.17.11

Get workload cluster kubeconfig

ubuntu@capi:~$ export CLUSTER_NAME=external
ubuntu@capi:~$ clusterctl get kubeconfig ${CLUSTER_NAME} --namespace default > ./${CLUSTER_NAME}.kubeconfig

Deploy CNI

ubuntu@capi:~$ curl https://docs.projectcalico.org/v3.16/manifests/calico.yaml | sed "s/veth_mtu:.*/veth_mtu: \"1430\"/g" | kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig apply -f -
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  183k  100  183k    0     0   150k      0  0:00:01  0:00:01 --:--:--  150k
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
ubuntu@capi:~$

Deploy External OpenStack Cloud Provider

Create secret

ubuntu@capi:~$ wget https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-openstack/v0.3.3/templates/create_cloud_conf.sh -O /tmp/create_cloud_conf.sh
ubuntu@capi:~$ bash /tmp/create_cloud_conf.sh clouds.yaml openstack > /tmp/cloud.conf
ubuntu@capi:~$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig create secret -n kube-system generic cloud-config --from-file=/tmp/cloud.conf
secret/cloud-config created
ubuntu@capi:~$ rm /tmp/cloud.conf

Create RBAC resources and the openstack-cloud-controller-manager daemonset

ubuntu@capi:~$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/cluster/addons/rbac/cloud-controller-manager-roles.yaml
clusterrole.rbac.authorization.k8s.io/system:cloud-controller-manager created
clusterrole.rbac.authorization.k8s.io/system:cloud-node-controller created
clusterrole.rbac.authorization.k8s.io/system:pvl-controller created
ubuntu@capi:~$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/cluster/addons/rbac/cloud-controller-manager-role-bindings.yaml
clusterrolebinding.rbac.authorization.k8s.io/system:cloud-node-controller created
clusterrolebinding.rbac.authorization.k8s.io/system:pvl-controller created
clusterrolebinding.rbac.authorization.k8s.io/system:cloud-controller-manager created
ubuntu@capi:~$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/controller-manager/openstack-cloud-controller-manager-ds.yaml
serviceaccount/cloud-controller-manager created
daemonset.apps/openstack-cloud-controller-manager created
ubuntu@capi:~$

Wait for all the pods in the kube-system namespace to be up and running

ubuntu@capi:~$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig get pod -n kube-system
NAME                                                   READY   STATUS    RESTARTS   AGE
calico-kube-controllers-544658cf79-fkv2d               1/1     Running   1          6m19s
calico-node-5r55n                                      1/1     Running   1          6m19s
calico-node-fld8j                                      1/1     Running   1          6m19s
coredns-6955765f44-4zr75                               1/1     Running   1          16m
coredns-6955765f44-nqth4                               1/1     Running   1          16m
etcd-external-control-plane-2cdjf                      1/1     Running   1          17m
kube-apiserver-external-control-plane-2cdjf            1/1     Running   1          17m
kube-controller-manager-external-control-plane-2cdjf   1/1     Running   1          17m
kube-proxy-jn79q                                       1/1     Running   1          13m
kube-proxy-xxxbw                                       1/1     Running   1          16m
kube-scheduler-external-control-plane-2cdjf            1/1     Running   1          17m
openstack-cloud-controller-manager-q58px               1/1     Running   1          18s
$

Wait for all machines up and running

After some time has passed, all the machines should be running.

ubuntu@capi:~$ kubectl get machines
NAME                             PROVIDERID                                         PHASE     VERSION
external-control-plane-5f6tg     openstack://75935c02-9d51-4745-921e-9db0fbc868c1   Running   v1.17.11
external-control-plane-j6s2v     openstack://8e36167d-b922-418d-9c1b-de907c9a0fc2   Running   v1.17.11
external-control-plane-p4kq7     openstack://7fcb9e4b-e107-4d5c-a559-8e55e1018c2c   Running   v1.17.11
external-md-0-84b9fff89c-ghm5t   openstack://55255fae-af71-40ba-bcdc-1a0393b2aaf0   Running   v1.17.11

Congratulations! The Kubernetes cluster has been deployed successfully.
Let's go on to using a Service of LoadBalancer type.

Using service of LoadBalancer type

Create deployment and service

ubuntu@capi:~$ wget https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/examples/loadbalancers/external-http-nginx.yaml

external-http-nginx.yaml is as follows. Update the loadbalancer.openstack.org/floating-network-id value to your network ID.
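
The floating network ID can be looked up with the OpenStack CLI, for example:

openstack network list --external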

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-http-nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: external-http-nginx-service
  annotations:
    service.beta.kubernetes.io/openstack-internal-load-balancer: "false"
    loadbalancer.openstack.org/floating-network-id: "9be23551-38e2-4d27-b5ea-ea2ea1321bd6"
spec:
  selector:
    app: nginx
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 80

Apply the manifest.

ubuntu@capi:~$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig apply -f external-http-nginx.yaml
deployment.apps/external-http-nginx-deployment created
service/external-http-nginx-service created

Wait for external IP

ubuntu@capi:~$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig get service
NAME                          TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes                    ClusterIP      10.96.0.1        <none>        443/TCP        23h
external-http-nginx-service   LoadBalancer   10.103.116.174   10.0.0.235    80:32044/TCP   31s

Check if external IP really works

ubuntu@capi:~$ curl http://10.0.0.235
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
$

Delete deployment and service

ubuntu@capi:~$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig delete -f external-http-nginx.yaml
deployment.apps "external-http-nginx-deployment" deleted
service "external-http-nginx-service" deleted

Cinder CSI

Clone cloud-provider-openstack repository and checkout release-1.17 branch

ubuntu@capi:~$ git clone https://github.com/kubernetes/cloud-provider-openstack.git
Cloning into 'cloud-provider-openstack'...
remote: Enumerating objects: 50, done.
remote: Counting objects: 100% (50/50), done.
remote: Compressing objects: 100% (46/46), done.
remote: Total 13260 (delta 20), reused 11 (delta 2), pack-reused 13210
Receiving objects: 100% (13260/13260), 3.57 MiB | 473.00 KiB/s, done.
Resolving deltas: 100% (6921/6921), done.
ubuntu@capi:~$
ubuntu@capi:~$ cd cloud-provider-openstack
ubuntu@capi:~/cloud-provider-openstack$ git checkout -b release-1.17 origin/release-1.17
Branch 'release-1.17' set up to track remote branch 'release-1.17' from 'origin'.
Switched to a new branch 'release-1.17'

Remove manifests/cinder-csi-plugin/csi-secret-cinderplugin.yaml because the cloud-config secret was already created.

ubuntu@capi:~/cloud-provider-openstack$ rm manifests/cinder-csi-plugin/csi-secret-cinderplugin.yaml

Apply the manifests

ubuntu@capi:~/cloud-provider-openstack$ kubectl --kubeconfig=/home/ubuntu/${CLUSTER_NAME}.kubeconfig apply -f manifests/cinder-csi-plugin/
serviceaccount/csi-cinder-controller-sa created
clusterrole.rbac.authorization.k8s.io/csi-attacher-role created
clusterrolebinding.rbac.authorization.k8s.io/csi-attacher-binding created
clusterrole.rbac.authorization.k8s.io/csi-provisioner-role created
clusterrolebinding.rbac.authorization.k8s.io/csi-provisioner-binding created
clusterrole.rbac.authorization.k8s.io/csi-snapshotter-role created
clusterrolebinding.rbac.authorization.k8s.io/csi-snapshotter-binding created
clusterrole.rbac.authorization.k8s.io/csi-resizer-role created
clusterrolebinding.rbac.authorization.k8s.io/csi-resizer-binding created
role.rbac.authorization.k8s.io/external-resizer-cfg created
rolebinding.rbac.authorization.k8s.io/csi-resizer-role-cfg created
service/csi-cinder-controller-service created
statefulset.apps/csi-cinder-controllerplugin created
serviceaccount/csi-cinder-node-sa created
clusterrole.rbac.authorization.k8s.io/csi-nodeplugin-role created
clusterrolebinding.rbac.authorization.k8s.io/csi-nodeplugin-binding created
daemonset.apps/csi-cinder-nodeplugin created
csidriver.storage.k8s.io/cinder.csi.openstack.org created
ubuntu@capi:~/cloud-provider-openstack$

Check if csi-cinder-controllerplugin and csi-cinder-nodeplugin are running.

ubuntu@capi:~/cloud-provider-openstack$ kubectl --kubeconfig=/home/ubuntu/${CLUSTER_NAME}.kubeconfig get pod -n kube-system -l 'app in (csi-cinder-controllerplugin,csi-cinder-nodeplugin)'
NAME                            READY   STATUS    RESTARTS   AGE
csi-cinder-controllerplugin-0   5/5     Running   0          88s
csi-cinder-nodeplugin-gvdd5     2/2     Running   0          88s
ubuntu@capi:~/cloud-provider-openstack$
ubuntu@capi:~/cloud-provider-openstack$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig get csidrivers.storage.k8s.io
NAME                       CREATED AT
cinder.csi.openstack.org   2020-12-23T07:10:26Z
ubuntu@capi:~/cloud-provider-openstack$

Using Cinder CSI

ubuntu@capi:~/cloud-provider-openstack$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig -f examples/cinder-csi-plugin/nginx.yaml create
storageclass.storage.k8s.io/csi-sc-cinderplugin created
persistentvolumeclaim/csi-pvc-cinderplugin created
pod/nginx created
ubuntu@capi:~/cloud-provider-openstack$
ubuntu@capi:~/cloud-provider-openstack$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig get pvc
NAME                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
csi-pvc-cinderplugin   Bound    pvc-5b359b6a-ed6b-4ccb-9a3f-9de6925e1713   1Gi        RWO            csi-sc-cinderplugin   4m4s
ubuntu@capi:~/cloud-provider-openstack$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig exec -it nginx -- bash
root@nginx:/#
root@nginx:/#
root@nginx:/#
root@nginx:/# ls /var/lib/www/html
lost+found
root@nginx:/# touch /var/lib/www/html/index.html
root@nginx:/# exit
exit
ubuntu@capi:~/cloud-provider-openstack$

Delete created resources

ubuntu@capi:~/cloud-provider-openstack$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig delete -f examples/cinder-csi-plugin/nginx.yaml
storageclass.storage.k8s.io "csi-sc-cinderplugin" deleted
persistentvolumeclaim "csi-pvc-cinderplugin" deleted
pod "nginx" deleted
ubuntu@capi:~/cloud-provider-openstack$

ubuntu@capi:~/cloud-provider-openstack$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig get pvc
No resources found in default namespace.
ubuntu@capi:~/cloud-provider-openstack$

Installing Istio and its add-ons on Kubernetes on OpenStack – Part 2

This article explains how to install Istio and its add-ons on Cloud Provider OpenStack with the LoadBalancer service (Octavia).
This is the second part, installing Istio.

There are two ways to install Istio. In this article, we follow the Quick Start Evaluation Install without using Helm, since we are just evaluating Istio.

Download the release

$ curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.3.3 sh -

The following is shown, ending with the command prompt.

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100  3013  100  3013    0     0   2769      0  0:00:01  0:00:01 --:--:--  173k
Downloading istio-1.3.3 from https://github.com/istio/istio/releases/download/1.3.3/istio-1.3.3-linux.tar.gz ...  % Total    % Received
 % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   614    0   614    0     0   1900      0 --:--:-- --:--:-- --:--:--  1900
100 36.3M  100 36.3M    0     0  4315k      0  0:00:08  0:00:08 --:--:-- 5414k
Istio 1.3.3 Download Complete!

Istio has been successfully downloaded into the istio-1.3.3 folder on your system.

Next Steps:
See https://istio.io/docs/setup/kubernetes/install/ to add Istio to your Kubernetes cluster.

To configure the istioctl client tool for your workstation,
add the /home/ubuntu/istio-1.3.3/bin directory to your environment path variable with:
         export PATH="$PATH:/home/ubuntu/istio-1.3.3/bin"

Begin the Istio pre-installation verification check by running:
         istioctl verify-install

Need more information? Visit https://istio.io/docs/setup/kubernetes/install/
$

Let’s follow the instructions shown above.

$ export PATH="$PATH:/home/ubuntu/istio-1.3.3/bin"
$ istioctl verify-install

A success message like the following should be shown.

Checking the cluster to make sure it is ready for Istio installation...

#1. Kubernetes-api
-----------------------
Can initialize the Kubernetes client.
Can query the Kubernetes API Server.

#2. Kubernetes-version
-----------------------
Istio is compatible with Kubernetes: v1.15.3.

#3. Istio-existence
-----------------------
Istio will be installed in the istio-system namespace.

#4. Kubernetes-setup
-----------------------
Can create necessary Kubernetes configurations: Namespace,ClusterRole,ClusterRoleBinding,CustomResourceDefinition,Role,ServiceAccount,Service,Deployments,ConfigMap.

#5. Sidecar-Injector
-----------------------
This Kubernetes cluster supports automatic sidecar injection. To enable automatic sidecar injection see https://istio.io/docs/setup/kubernetes/additional-setup/sidecar-injection/#deploying-an-app

-----------------------
Install Pre-Check passed! The cluster is ready for Istio installation.

Installing Istio CRDs

:~$ cd istio-1.3.3
:~/istio-1.3.3$ export PATH=$PWD/bin:$PATH
:~/istio-1.3.3$ for i in install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl apply -f $i; done
customresourcedefinition.apiextensions.k8s.io/virtualservices.networking.istio.io created
customresourcedefinition.apiextensions.k8s.io/destinationrules.networking.istio.io created
customresourcedefinition.apiextensions.k8s.io/serviceentries.networking.istio.io created
customresourcedefinition.apiextensions.k8s.io/gateways.networking.istio.io created
customresourcedefinition.apiextensions.k8s.io/envoyfilters.networking.istio.io created
customresourcedefinition.apiextensions.k8s.io/clusterrbacconfigs.rbac.istio.io created
customresourcedefinition.apiextensions.k8s.io/policies.authentication.istio.io created
customresourcedefinition.apiextensions.k8s.io/meshpolicies.authentication.istio.io created
customresourcedefinition.apiextensions.k8s.io/httpapispecbindings.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/httpapispecs.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/quotaspecbindings.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/quotaspecs.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/rules.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/attributemanifests.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/rbacconfigs.rbac.istio.io created
customresourcedefinition.apiextensions.k8s.io/serviceroles.rbac.istio.io created
customresourcedefinition.apiextensions.k8s.io/servicerolebindings.rbac.istio.io created
customresourcedefinition.apiextensions.k8s.io/adapters.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/instances.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/templates.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/handlers.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/sidecars.networking.istio.io created
customresourcedefinition.apiextensions.k8s.io/authorizationpolicies.rbac.istio.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.certmanager.k8s.io created
customresourcedefinition.apiextensions.k8s.io/issuers.certmanager.k8s.io created
customresourcedefinition.apiextensions.k8s.io/certificates.certmanager.k8s.io created
customresourcedefinition.apiextensions.k8s.io/orders.certmanager.k8s.io created
customresourcedefinition.apiextensions.k8s.io/challenges.certmanager.k8s.io created

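Optionally, confirm that the CRDs were registered. The apply output above created 23 istio.io CRDs plus 5 certmanager.k8s.io CRDs, so a count along these lines should come to 28:

# counts both istio.io and certmanager.k8s.io CRDs created by istio-init
kubectl get crds | grep 'istio.io\|certmanager.k8s.io' | wc -l
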
Installing the demo profile

$ kubectl apply -f install/kubernetes/istio-demo.yaml
namespace/istio-system created
customresourcedefinition.apiextensions.k8s.io/virtualservices.networking.istio.io unchanged
customresourcedefinition.apiextensions.k8s.io/destinationrules.networking.istio.io unchanged
customresourcedefinition.apiextensions.k8s.io/serviceentries.networking.istio.io unchanged
customresourcedefinition.apiextensions.k8s.io/gateways.networking.istio.io unchanged
customresourcedefinition.apiextensions.k8s.io/envoyfilters.networking.istio.io unchanged
customresourcedefinition.apiextensions.k8s.io/clusterrbacconfigs.rbac.istio.io unchanged
customresourcedefinition.apiextensions.k8s.io/policies.authentication.istio.io unchanged
.....
instance.config.istio.io/attributes created
destinationrule.networking.istio.io/istio-policy created
destinationrule.networking.istio.io/istio-telemetry created

Verify installation

:~/istio-1.3.3$ kubectl get svc -n istio-system
NAME                     TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                                                                                      AGE
grafana                  ClusterIP      10.233.22.253   <none>        3000/TCP                                                                                                                                     2m49s
istio-citadel            ClusterIP      10.233.12.250   <none>        8060/TCP,15014/TCP                                                                                                                           2m49s
istio-egressgateway      ClusterIP      10.233.26.207   <none>        80/TCP,443/TCP,15443/TCP                                                                                                                     2m49s
istio-galley             ClusterIP      10.233.36.243   <none>        443/TCP,15014/TCP,9901/TCP                                                                                                                   2m49s
istio-ingressgateway     LoadBalancer   10.233.50.231   10.0.0.213    15020:30474/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:31241/TCP,15030:30585/TCP,15031:31475/TCP,15032:32366/TCP,15443:30889/TCP   2m49s
istio-pilot              ClusterIP      10.233.23.43    <none>        15010/TCP,15011/TCP,8080/TCP,15014/TCP                                                                                                       2m49s
istio-policy             ClusterIP      10.233.44.135   <none>        9091/TCP,15004/TCP,15014/TCP                                                                                                                 2m49s
istio-sidecar-injector   ClusterIP      10.233.10.98    <none>        443/TCP,15014/TCP                                                                                                                            2m49s
istio-telemetry          ClusterIP      10.233.12.249   <none>        9091/TCP,15004/TCP,15014/TCP,42422/TCP                                                                                                       2m49s
jaeger-agent             ClusterIP      None            <none>        5775/UDP,6831/UDP,6832/UDP                                                                                                                   2m49s
jaeger-collector         ClusterIP      10.233.23.10    <none>        14267/TCP,14268/TCP                                                                                                                          2m49s
jaeger-query             ClusterIP      10.233.44.204   <none>        16686/TCP                                                                                                                                    2m49s
kiali                    ClusterIP      10.233.38.170   <none>        20001/TCP                                                                                                                                    2m49s
prometheus               ClusterIP      10.233.63.59    <none>        9090/TCP                                                                                                                                     2m49s
tracing                  ClusterIP      10.233.16.87    <none>        80/TCP                                                                                                                                       2m49s
zipkin                   ClusterIP      10.233.31.122   <none>        9411/TCP                                                                                                                                     2m49s
:~/istio-1.3.3$
:~/istio-1.3.3$ kubectl get pods -n istio-system
NAME                                      READY   STATUS      RESTARTS   AGE
grafana-59d57c5c56-m762w                  1/1     Running     0          3m24s
istio-citadel-66f699cf68-24h6c            1/1     Running     0          3m23s
istio-egressgateway-7fbcf68b68-x99v5      1/1     Running     0          3m24s
istio-galley-fd94bc888-mhpp7              1/1     Running     0          3m24s
istio-grafana-post-install-1.3.3-l8524    0/1     Completed   0          3m25s
istio-ingressgateway-587c9fbc85-khd8z     1/1     Running     0          3m24s
istio-pilot-74cb5d88bc-44lqh              2/2     Running     0          3m24s
istio-policy-5865b8c696-tp4b6             2/2     Running     3          3m24s
istio-security-post-install-1.3.3-qmcf4   0/1     Completed   0          3m25s
istio-sidecar-injector-d8856c48f-nxqwb    1/1     Running     0          3m23s
istio-telemetry-95689668-p5ww4            2/2     Running     2          3m24s
istio-tracing-6bbdc67d6c-m9fdc            1/1     Running     0          3m23s
kiali-8c9d6fbf6-5ks85                     1/1     Running     0          3m24s
prometheus-7d7b9f7844-s8wt6               1/1     Running     0          3m23s
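
As a possible next step (not covered in this part), the pre-installation check above noted that the cluster supports automatic sidecar injection. Enabling it for a namespace is a single label, for example:

# inject sidecars automatically into pods created in the default namespace
kubectl label namespace default istio-injection=enabled
# confirm the label
kubectl get namespace -L istio-injection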

Installing Istio and its add-ons on Kubernetes on OpenStack – Part 1

This article explains how to install Istio and its add-ons on Cloud Provider OpenStack with the LoadBalancer service (Octavia).
This is the first part, Installing Kubernetes and metrics-server.

Installing Kubernetes via Kubespray

Prerequisites

Kubespray does not manage networks, VMs, or security groups. Before running the playbook, you MUST create the virtual networks, VMs, and security groups manually. VMs:

+--------------------------------------+---------+--------+--------------------------------------+--------+--------+
| ID                                   | Name    | Status | Networks                             | Image  | Flavor |
+--------------------------------------+---------+--------+--------------------------------------+--------+--------+
| a4bf51f1-d780-47f5-9343-b05cb7a8ce05 | node4   | ACTIVE | selfservice=172.16.1.212             | bionic | large  |
| ec07d07d-61b3-4fe8-a2cc-697ef903fb9c | node3   | ACTIVE | selfservice=172.16.1.252             | bionic | small  |
| 2afd275d-d90a-4467-bf94-6140e3141cdc | node2   | ACTIVE | selfservice=172.16.1.227             | bionic | small  |
| fe5d746a-ab9c-4726-98a2-4d27810fb129 | node1   | ACTIVE | selfservice=172.16.1.166, 10.0.0.249 | bionic | small  |
| af1c3286-d5f1-4cdc-9057-25a02a498931 | bastion | ACTIVE | selfservice=172.16.1.173, 10.0.0.216 | bionic |        |
+--------------------------------------+---------+--------+--------------------------------------+--------+--------+

node1 is the master, node2 is etcd, and node3 and node4 are workers. bastion is the bastion node from which the Ansible playbooks are run. Flavors:

+----+---------+------+------+-----------+-------+-----------+
| ID | Name    |  RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+---------+------+------+-----------+-------+-----------+
| 1  | small   | 2048 |   40 |         0 |     2 | True      |
| 2  | large   | 6144 |   40 |         0 |     4 | True      |
+----+---------+------+------+-----------+-------+-----------+

Security groups:
Create security groups for each role according to check-required-ports. In addition to those ports, the VMs must accept ssh and ping, which Kubespray requires. I created three security groups and added ping and ssh rules to the default group (a sketch of those rules follows the table below).

+--------------------------------------+---------+------------------------+----------------------------------+------+
| ID                                   | Name    | Description            | Project                          | Tags |
+--------------------------------------+---------+------------------------+----------------------------------+------+
| 12dfb5a7-094f-456b-81f1-db413a7fe1d8 | node    |                        | 87a002c8d3e14363be864888f853fe33 | []   |
| 42f0de07-0b29-45fe-aac2-bc79fdcdc2e9 | etcd    |                        | 87a002c8d3e14363be864888f853fe33 | []   |
| c9476a09-36c4-4d27-8838-e0ebcb52b912 | default | Default security group | 87a002c8d3e14363be864888f853fe33 | []   |
| f09c7876-41b3-42be-b12a-5a0b1aff6699 | master  |                        | 87a002c8d3e14363be864888f853fe33 | []   |
+--------------------------------------+---------+------------------------+----------------------------------+------+
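
For reference, the ping and ssh rules I added to the default group can be created with the OpenStack CLI roughly as follows (a sketch; group names and any source CIDR restrictions depend on your environment):

# allow ICMP (ping) and SSH into instances using the default security group
openstack security group rule create --protocol icmp default
openstack security group rule create --protocol tcp --dst-port 22 default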

You need to perform additional steps to use Calico on OpenStack. See: https://kubespray.io/#/docs/openstack

Finally, you must be able to log in to the VMs without a password.
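
A minimal sketch of distributing the bastion's SSH key, assuming the ubuntu user and the addresses Ansible will connect to in my environment; adjust to yours:

# copy the bastion's public key to every node (node1 via its floating IP)
for host in 10.0.0.249 172.16.1.227 172.16.1.252 172.16.1.212; do
  ssh-copy-id ubuntu@$host
done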

Creating your own inventory

This generally follows the Quick Start. After cloning the Kubespray repository with git, do the following.

Install dependencies

sudo pip install -r requirements.txt

Copy "inventory/sample" as "inventory/mycluster"

cp -rfp inventory/sample inventory/mycluster

Update Ansible inventory file with inventory builder

declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
CONFIG_FILE=inventory/mycluster/hosts.yml python3 contrib/inventory_builder/inventory.py ${IPS[@]}

Update inventory/mycluster/hosts.yml. My file is as follows.

all:
  hosts:
    node1:
      ansible_host: 10.0.0.249
      ip: 172.16.1.166
      access_ip: 10.0.0.249
    node2:
      ansible_host: 172.16.1.227
      ip: 172.16.1.227
    node3:
      ansible_host: 172.16.1.252
      ip: 172.16.1.252
    node4:
      ansible_host: 172.16.1.212
      ip: 172.16.1.212
  children:
    kube-master:
      hosts:
        node1:
    kube-node:
      hosts:
        node3:
        node4:
    etcd:
      hosts:
        node2:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}

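Before running the full playbook, a quick connectivity check with this inventory can save time (my own habit, not a Kubespray requirement):

# every host should answer "pong"
ansible -i inventory/mycluster/hosts.yml all -m ping
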
Update openstack.yml

Update inventory/mycluster/group_vars/all/openstack.yml to use Octavia through the LoadBalancer service type. My file is as follows.

openstack_lbaas_enabled: True
openstack_lbaas_subnet_id: "b39e2994-bcfb-41ff-b300-dcf36ce98ce6"
## To enable automatic floating ip provisioning, specify a subnet.
openstack_lbaas_floating_network_id: "5f11f552-7254-47ac-bda3-e8c03b1443cd"
## Override default LBaaS behavior
openstack_lbaas_use_octavia: True
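
The subnet and floating-network IDs above are specific to my environment. Assuming the OpenStack CLI is configured, you can look up yours with:

# ID of the subnet the nodes are attached to
openstack subnet list
# ID of the external network used for floating IPs
openstack network list --external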

Update k8s-net-calico.yml

You may need to update group_vars/k8s-cluster/k8s-net-calico.yml to configure the MTU value if your OpenStack environment uses VXLAN.
(If you followed the OpenStack installation guide, you need to update it; see https://docs.projectcalico.org/v3.8/networking/mtu#mtu-configuration)
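
A rough sketch of the change, assuming an instance MTU of about 1450 on the VXLAN tenant network and Calico's default IP-in-IP encapsulation (20-byte overhead); the exact variable name and default in your copy of k8s-net-calico.yml may differ:

# locate the MTU setting in the mycluster inventory copy
grep -n mtu inventory/mycluster/group_vars/k8s-cluster/k8s-net-calico.yml
# then set, for example:
#   calico_mtu: 1430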

Run the Ansible playbook

ansible-playbook -i inventory/mycluster/hosts.yml --become --become-user=root cluster.yml

Access your environment

Kubeadm creates a kubeconfig file named admin.conf under /etc/kubernetes on the master node. Copy the file to your bastion node and save it as $HOME/.kube/config. You also need to install kubectl on the bastion node.
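
A minimal sketch of that copy, assuming password-less sudo for the ubuntu user on node1 (10.0.0.249 in my environment):

# fetch admin.conf from the master and store it as the default kubeconfig
mkdir -p $HOME/.kube
ssh ubuntu@10.0.0.249 'sudo cat /etc/kubernetes/admin.conf' > $HOME/.kube/config
chmod 600 $HOME/.kube/config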

(python) ubuntu@bastion:~$ kubectl get nodes
NAME    STATUS   ROLES    AGE    VERSION
node1   Ready    master   4d6h   v1.15.3
node3   Ready    <none>   4d6h   v1.15.3
node4   Ready    <none>   3d     v1.15.3

Installing metrics-server

Metrics Server implements the Metrics API, which lets you get the amount of resources currently used by a given node or pod. To install it, clone the repository from GitHub and check out the latest tag, since no release branch is created.

git clone https://github.com/kubernetes-incubator/metrics-server.git
cd metrics-server
git checkout -b v0.3.6 refs/tags/v0.3.6

Edit deploy/1.8+/metrics-server-deployment.yaml for testing purposes.

       - name: metrics-server
+        args:
+        - --kubelet-insecure-tls
+        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
         image: k8s.gcr.io/metrics-server-amd64:v0.3.6

Let’s deploy metrics-server.

kubectl create -f deploy/1.8+/

After a short period of time, you can see the metrics of the nodes.

(python) ubuntu@bastion:~/metrics-server$ kubectl top nodes
NAME    CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
node1   203m         11%    1158Mi          82%       
node3   108m         2%     1313Mi          23%       
node4   128m         3%     1293Mi          23%
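
Pod-level metrics are served through the same API; output will vary by environment:

kubectl top pods --all-namespaces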

Using a Service of LoadBalancer type on Cloud Provider OpenStack

This explains how to configure cluster-api-provider-openstack to use a Service of LoadBalancer type on Cloud Provider OpenStack.
OpenStack Octavia installation is based on Installing OpenStack Octavia Stein release on Ubuntu 18.04 manually.

Configure

Cloud Provider OpenStack configuration is defined in the generate-yaml.sh file. You must add a LoadBalancer section to the OPENSTACK_CLOUD_PROVIDER_CONF_PLAIN environment variable value like the following. By adding this, the /etc/kubernetes/cloud.conf file on the master node will be updated.

# Basic cloud.conf, no LB configuration as that data is not known yet.
OPENSTACK_CLOUD_PROVIDER_CONF_PLAIN="[Global]
auth-url=$AUTH_URL
username=\"$USERNAME\"
password=\"$PASSWORD\"
region=\"$REGION\"
tenant-id=\"$PROJECT_ID\"
domain-name=\"$DOMAIN_NAME\"
[LoadBalancer]
lb-version=v2
use-octavia=true
floating-network-id=5f11f552-7254-47ac-bda3-e8c03b1443cd
create-monitor=true
monitor-delay=60s
monitor-timeout=30s
monitor-max-retries=5
"

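The floating-network-id above is from my environment. Assuming the OpenStack CLI is configured, the external (floating) network can be listed with:

# show only the ID and name of external networks
openstack network list --external -c ID -c Name
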
After creating a Kubernetes cluster with cluster-api-provider-openstack, log in to the master node and check the /etc/kubernetes/cloud.conf file to confirm that the LoadBalancer section was added.

ubuntu@openstack-master-xmd6r:~$ sudo cat /etc/kubernetes/cloud.conf
[Global]
auth-url=http://<controller>:5000/
username="demo"
password="<password>"
region="RegionOne"
tenant-id="<tenant-id>"
domain-name="Default"
[LoadBalancer]
lb-version=v2
use-octavia=true
floating-network-id=<floating-network-id>
create-monitor=true
monitor-delay=60s
monitor-timeout=30s
monitor-max-retries=5

Testing deployment

This is based on Exposing applications using services of LoadBalancer type.

ubuntu@k8s:~/go/src/sigs.k8s.io/cluster-api-provider-openstack/cmd/clusterctl$ kubectl --kubeconfig=kubeconfig run echoserver --image=gcr.io/google-containers/echoserver:1.10 --port=8080
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/echoserver created
ubuntu@k8s:~/go/src/sigs.k8s.io/cluster-api-provider-openstack/cmd/clusterctl$ kubectl --kubeconfig=kubeconfig get deploy
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
echoserver   1/1     1            1           29s

ubuntu@k8s:~/go/src/sigs.k8s.io/cluster-api-provider-openstack/cmd/clusterctl$ cat <<EOF > loadbalancer.yaml
> ---
> kind: Service
> apiVersion: v1
> metadata:
>   name: loadbalanced-service
> spec:
>   selector:
>     run: echoserver
>   type: LoadBalancer
>   ports:
>   - port: 80
>     targetPort: 8080
>     protocol: TCP
> EOF
ubuntu@k8s:~/go/src/sigs.k8s.io/cluster-api-provider-openstack/cmd/clusterctl$ cat loadbalancer.yaml
---
kind: Service
apiVersion: v1
metadata:
  name: loadbalanced-service
spec:
  selector:
    run: echoserver
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
ubuntu@k8s:~/go/src/sigs.k8s.io/cluster-api-provider-openstack/cmd/clusterctl$ kubectl --kubeconfig=kubeconfig apply -f loadbalancer.yaml
service/loadbalanced-service created
ubuntu@k8s:~/go/src/sigs.k8s.io/cluster-api-provider-openstack/cmd/clusterctl$ kubectl --kubeconfig=kubeconfig get service loadbalanced-service
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
loadbalanced-service   LoadBalancer   10.100.237.171   <pending>     80:31500/TCP   22s
ubuntu@k8s:~/go/src/sigs.k8s.io/cluster-api-provider-openstack/cmd/clusterctl$ kubectl --kubeconfig=kubeconfig get service loadbalanced-service -w
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
loadbalanced-service   LoadBalancer   10.100.237.171   <pending>     80:31500/TCP   29s
loadbalanced-service   LoadBalancer   10.100.237.171   10.0.0.228    80:31500/TCP   47s

ubuntu@k8s:~/go/src/sigs.k8s.io/cluster-api-provider-openstack/cmd/clusterctl$ curl 10.0.0.228


Hostname: echoserver-785b4d845-dgr9p

Pod Information:
        -no pod information available-

Server values:
        server_version=nginx: 1.13.3 - lua: 10008

Request Information:
        client_address=172.16.1.126
        method=GET
        real path=/
        query=
        request_version=1.1
        request_scheme=http
        request_uri=http://10.0.0.228:8080/

Request Headers:
        accept=*/*
        host=10.0.0.228
        user-agent=curl/7.58.0

Request Body:
        -no body in request-

ubuntu@k8s:~/go/src/sigs.k8s.io/cluster-api-provider-openstack/cmd/clusterctl$
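
As an optional cross-check (not part of the original walkthrough), the Service above should be backed by an Octavia load balancer, which you can see from the OpenStack side if python-octaviaclient is installed:

# the load balancer and its listener created for loadbalanced-service
openstack loadbalancer list
openstack loadbalancer listener list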