Installing Istio and its add-ons on Kubernetes on OpenStack – Part 1

This article explains how to install Istio and its add-ons on Cloud Provider OpenStack with the LoadBalancer service (Octavia).
This first part covers installing Kubernetes and metrics-server.

Installing Kubernetes via Kubespray

Prerequisites

Kubespray does not manage networks, VMs, or security groups. Before running the playbook, you MUST create the virtual networks, VMs, and security groups manually. VMs:

+--------------------------------------+---------+--------+--------------------------------------+--------+--------+
| ID                                   | Name    | Status | Networks                             | Image  | Flavor |
+--------------------------------------+---------+--------+--------------------------------------+--------+--------+
| a4bf51f1-d780-47f5-9343-b05cb7a8ce05 | node4   | ACTIVE | selfservice=172.16.1.212             | bionic | large  |
| ec07d07d-61b3-4fe8-a2cc-697ef903fb9c | node3   | ACTIVE | selfservice=172.16.1.252             | bionic | small  |
| 2afd275d-d90a-4467-bf94-6140e3141cdc | node2   | ACTIVE | selfservice=172.16.1.227             | bionic | small  |
| fe5d746a-ab9c-4726-98a2-4d27810fb129 | node1   | ACTIVE | selfservice=172.16.1.166, 10.0.0.249 | bionic | small  |
| af1c3286-d5f1-4cdc-9057-25a02a498931 | bastion | ACTIVE | selfservice=172.16.1.173, 10.0.0.216 | bionic |        |
+--------------------------------------+---------+--------+--------------------------------------+--------+--------+

node1 is the master, node2 is etcd, and node3 and node4 are workers. bastion is the bastion node from which the Ansible playbooks are run. Flavors:

+----+---------+------+------+-----------+-------+-----------+
| ID | Name    |  RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+---------+------+------+-----------+-------+-----------+
| 1  | small   | 2048 |   40 |         0 |     2 | True      |
| 2  | large   | 6144 |   40 |         0 |     4 | True      |
+----+---------+------+------+-----------+-------+-----------+
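For reference, flavors like these can be created with the OpenStack CLI (RAM is in MB, disk in GB); a sketch using the sizes from the table:

openstack flavor create --ram 2048 --disk 40 --vcpus 2 small
openstack flavor create --ram 6144 --disk 40 --vcpus 4 large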

Security groups:
Create security groups for each role according to check-required-ports. In addition to those ports, the VMs must accept SSH and ICMP (ping), which Kubespray requires. I created three security groups and added ping and SSH rules to the default group, as shown in the sketch after the table below.

+--------------------------------------+---------+------------------------+----------------------------------+------+
| ID                                   | Name    | Description            | Project                          | Tags |
+--------------------------------------+---------+------------------------+----------------------------------+------+
| 12dfb5a7-094f-456b-81f1-db413a7fe1d8 | node    |                        | 87a002c8d3e14363be864888f853fe33 | []   |
| 42f0de07-0b29-45fe-aac2-bc79fdcdc2e9 | etcd    |                        | 87a002c8d3e14363be864888f853fe33 | []   |
| c9476a09-36c4-4d27-8838-e0ebcb52b912 | default | Default security group | 87a002c8d3e14363be864888f853fe33 | []   |
| f09c7876-41b3-42be-b12a-5a0b1aff6699 | master  |                        | 87a002c8d3e14363be864888f853fe33 | []   |
+--------------------------------------+---------+------------------------+----------------------------------+------+
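The extra rules for the default group can be added with the standard OpenStack CLI (a sketch; adjust the group name if yours differs):

openstack security group rule create --protocol icmp default
openstack security group rule create --protocol tcp --dst-port 22 default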

You need to take additional steps to use Calico on OpenStack; see https://kubespray.io/#/docs/openstack.
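The gist of those steps is to let Neutron's port security pass Calico traffic by whitelisting the pod CIDR on every node port. A hedged sketch, assuming Kubespray's default pod subnet 10.233.64.0/18 and a hypothetical <node-port-id>:

openstack port set <node-port-id> --allowed-address ip-address=10.233.64.0/18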

Finally, you must be able to log in to the VMs without a password (i.e. via SSH keys).
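One way to set this up from the bastion (a sketch, assuming the ubuntu user and node1's address from the table above):

ssh-keygen -t rsa                 # if the bastion has no key pair yet
ssh-copy-id ubuntu@172.16.1.166   # repeat for each node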

Creating your own inventory

This generally follows the Kubespray Quick Start. After cloning the kubespray repository:
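git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray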

Install dependencies

sudo pip install -r requirements.txt

Copy “inventory/sample” as “inventory/mycluster”

cp -rfp inventory/sample inventory/mycluster

Update the Ansible inventory file with the inventory builder (the IPs below are the Quick Start placeholders; the generated hosts.yml is edited to match my nodes in the next step):

declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
CONFIG_FILE=inventory/mycluster/hosts.yml python3 contrib/inventory_builder/inventory.py ${IPS[@]}

Update inventory/mycluster/hosts.yml. My file is as follows.

all:
  hosts:
    node1:
      ansible_host: 10.0.0.249
      ip: 172.16.1.166
      access_ip: 10.0.0.249
    node2:
      ansible_host: 172.16.1.227
      ip: 172.16.1.227
    node3:
      ansible_host: 172.16.1.252
      ip: 172.16.1.252
    node4:
      ansible_host: 172.16.1.212
      ip: 172.16.1.212
  children:
    kube-master:
      hosts:
        node1:
    kube-node:
      hosts:
        node3:
        node4:
    etcd:
      hosts:
        node2:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}

Update openstack.yml

Update inventory/mycluster/group_vars/all/openstack.yml to use Octavia through the LoadBalancer service type. My file is as follows (the IDs can be looked up as shown after the file).

openstack_lbaas_enabled: True
openstack_lbaas_subnet_id: "b39e2994-bcfb-41ff-b300-dcf36ce98ce6"
## To enable automatic floating ip provisioning, specify a subnet.
openstack_lbaas_floating_network_id: "5f11f552-7254-47ac-bda3-e8c03b1443cd"
## Override default LBaaS behavior
openstack_lbaas_use_octavia: True
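The two IDs are the OpenStack subnet on which the load balancer VIPs are created and the external network used for floating IPs; they can be looked up with the OpenStack CLI:

openstack subnet list
openstack network list --external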

Update k8s-net-calico.yml

You may need to update inventory/mycluster/group_vars/k8s-cluster/k8s-net-calico.yml to configure the MTU value if your OpenStack environment uses VXLAN.
(If you followed the OpenStack installation guide, you need to update it; see https://docs.projectcalico.org/v3.8/networking/mtu#mtu-configuration.)
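A sketch of the setting, assuming the instances see a VXLAN-reduced MTU of 1450 and Calico's default IP-in-IP encapsulation (20 bytes of overhead):

calico_mtu: 1430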

Run ansible playbook

ansible-playbook -i inventory/mycluster/hosts.yml --become --become-user=root cluster.yml

Access your environment

Kubeadm creates a kubeconfig file named admin.conf under /etc/kubernetes on the master node. Copy the file to your bastion node as $HOME/.kube/config. You also need to install kubectl on the bastion node.
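A sketch of the copy, assuming the ubuntu user, passwordless sudo, and node1's address from the inventory above:

ssh ubuntu@172.16.1.166 sudo cat /etc/kubernetes/admin.conf > admin.conf
mkdir -p $HOME/.kube && mv admin.conf $HOME/.kube/config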

(python) ubuntu@bastion:~$ kubectl get nodes
NAME    STATUS   ROLES    AGE    VERSION
node1   Ready    master   4d6h   v1.15.3
node3   Ready    <none>   4d6h   v1.15.3
node4   Ready    <none>   3d     v1.15.3

Installing metrics-server

Metrics Server implements the Metrics API, through which you can get the amount of resources currently used by a given node or pod. To install it, clone the repository from GitHub and check out the latest tag, since no release branch is created.

git clone https://github.com/kubernetes-incubator/metrics-server.git
cd metrics-server
git checkout -b v0.3.6 refs/tags/v0.3.6

Edit deploy/1.8+/metrics-server-deployment.yaml for testing purposes. (--kubelet-insecure-tls skips verification of the kubelets' self-signed serving certificates; don't use it in production.)

       - name: metrics-server
+        args:
+        - --kubelet-insecure-tls
+        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
         image: k8s.gcr.io/metrics-server-amd64:v0.3.6

Let’s deploy metrics-server.

kubectl create -f deploy/1.8+/

After a short period of time, you can see the node metrics.

(python) ubuntu@bastion:~/metrics-server$ kubectl top nodes
NAME    CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
node1   203m         11%    1158Mi          82%       
node3   108m         2%     1313Mi          23%       
node4   128m         3%     1293Mi          23%
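Pod metrics are served the same way:

kubectl top pods --all-namespaces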
