Installing Kubernetes locally under LXD

Posted on Sun 12 August 2018 in tutorials, lxd, lxc

Kubernetes is a platform that has been attracting more and more interest lately.

Kubernetes normally runs across multiple machines/VMs, which can require resources that are simply not available for learning purposes. From its documentation we find out that we can create a cluster on LXD, for a much more efficient use of the resources we do have.

For the test I used an m4.2xlarge instance, available on AWS as a spot instance (at a much lower usage cost), with the following configuration: 8 cores, 32GB RAM, 100GB SSD and Ubuntu 16.04 LTS.
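If you want to reproduce the setup, such a spot instance can be requested straight from the AWS CLI; a minimal sketch, where the AMI ID and key name are placeholders you must replace with your own:

aws ec2 request-spot-instances \
    --instance-count 1 \
    --type "one-time" \
    --launch-specification '{
        "ImageId": "ami-xxxxxxxx",
        "InstanceType": "m4.2xlarge",
        "KeyName": "my-key"
      }'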

As the LXD version we will use 3.x; before installing it, we remove the 2.x version that ships by default with the operating system:

$ sudo apt purge lxd lxd-client lxcfs lxc-common liblxc1
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be REMOVED:
  liblxc1* lxc-common* lxcfs* lxd* lxd-client*
0 upgraded, 0 newly installed, 5 to remove and 0 not upgraded.
After this operation, 27.0 MB disk space will be freed.
Do you want to continue? [Y/n] y

For Ubuntu 16.04, the new LXD versions are available as snap packages:

sudo snap install lxd --channel=3.0/stable
sudo usermod -a -G lxd $(whoami)
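Note that the new group membership only takes effect in a new login session; to pick it up immediately you can start a subshell with the lxd group active:

newgrp lxd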

We will configure LXD to use the dir storage backend instead of ZFS, because of some issues in the kubernetes package:

$ sudo lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]: kube
Name of the storage backend to use (btrfs, ceph, dir, lvm, zfs) [default=zfs]: dir
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]: none
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
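The same answers can also be replayed non-interactively. Below is a minimal preseed sketch matching the dialogue above (the profile section assumes the default profile layout that LXD generates):

cat <<EOF | sudo lxd init --preseed
storage_pools:
- name: kube
  driver: dir
networks:
- name: lxdbr0
  config:
    ipv4.address: auto
    ipv6.address: none
profiles:
- name: default
  devices:
    root:
      path: /
      pool: kube
      type: disk
    eth0:
      name: eth0
      nictype: bridged
      parent: lxdbr0
      type: nic
EOF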

We install the kubernetes cluster through conjure-up:

sudo snap install conjure-up --classic
conjure-up kubernetes
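conjure-up will ask about everything below interactively; if you already know the answers, the spell and the target cloud can also be passed directly on the command line (canonical-kubernetes is the full Canonical Distribution of Kubernetes spell used here):

conjure-up canonical-kubernetes localhost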

We select the optional monitoring applications (I did not select any):

We select localhost as the deploy target:

We select the network plugin that kubernetes will use:

We enter the user's password:

The applications required for kubernetes, each of which can be configured individually:

I requested 5 pods, each with 2 cores, 2GB of RAM and 10GB of disk:

The process begins:

During the installation the system is under heavy load:

The installation stages can be followed for each node/service:

At the end, the cluster configuration is copied locally so that the cluster can be administered from the host machine with kubectl.
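A quick way to confirm that kubectl picked up the copied configuration (it reads ~/.kube/config by default):

kubectl config current-context
kubectl get nodes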

Once everything is done, the containers started for the kubernetes cluster are:

$ lxc list
+----------------+---------+-----------------------+------+------------+-----------+
|      NAME      |  STATE  |         IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
+----------------+---------+-----------------------+------+------------+-----------+
| juju-433f68-0  | RUNNING | 10.170.64.81 (eth0)   |      | PERSISTENT | 0         |
+----------------+---------+-----------------------+------+------------+-----------+
| juju-433f68-1  | RUNNING | 10.170.64.227 (eth0)  |      | PERSISTENT | 0         |
+----------------+---------+-----------------------+------+------------+-----------+
| juju-433f68-10 | RUNNING | 172.17.0.1 (docker0)  |      | PERSISTENT | 0         |
|                |         | 10.170.64.215 (eth0)  |      |            |           |
|                |         | 10.1.84.1 (cni0)      |      |            |           |
|                |         | 10.1.84.0 (flannel.1) |      |            |           |
+----------------+---------+-----------------------+------+------------+-----------+
| juju-433f68-11 | RUNNING | 172.17.0.1 (docker0)  |      | PERSISTENT | 0         |
|                |         | 10.170.64.139 (eth0)  |      |            |           |
|                |         | 10.1.28.0 (flannel.1) |      |            |           |
+----------------+---------+-----------------------+------+------------+-----------+
| juju-433f68-2  | RUNNING | 10.170.64.208 (eth0)  |      | PERSISTENT | 0         |
+----------------+---------+-----------------------+------+------------+-----------+
| juju-433f68-3  | RUNNING | 10.170.64.196 (eth0)  |      | PERSISTENT | 0         |
+----------------+---------+-----------------------+------+------------+-----------+
| juju-433f68-4  | RUNNING | 10.170.64.167 (eth0)  |      | PERSISTENT | 0         |
+----------------+---------+-----------------------+------+------------+-----------+
| juju-433f68-5  | RUNNING | 10.170.64.105 (eth0)  |      | PERSISTENT | 0         |
|                |         | 10.1.27.0 (flannel.1) |      |            |           |
+----------------+---------+-----------------------+------+------------+-----------+
| juju-433f68-6  | RUNNING | 10.170.64.123 (eth0)  |      | PERSISTENT | 0         |
|                |         | 10.1.66.0 (flannel.1) |      |            |           |
+----------------+---------+-----------------------+------+------------+-----------+
| juju-433f68-7  | RUNNING | 172.17.0.1 (docker0)  |      | PERSISTENT | 0         |
|                |         | 10.170.64.182 (eth0)  |      |            |           |
|                |         | 10.1.31.1 (cni0)      |      |            |           |
|                |         | 10.1.31.0 (flannel.1) |      |            |           |
+----------------+---------+-----------------------+------+------------+-----------+
| juju-433f68-8  | RUNNING | 172.17.0.1 (docker0)  |      | PERSISTENT | 0         |
|                |         | 10.170.64.50 (eth0)   |      |            |           |
|                |         | 10.1.64.1 (cni0)      |      |            |           |
|                |         | 10.1.64.0 (flannel.1) |      |            |           |
+----------------+---------+-----------------------+------+------------+-----------+
| juju-433f68-9  | RUNNING | 172.17.0.1 (docker0)  |      | PERSISTENT | 0         |
|                |         | 10.170.64.148 (eth0)  |      |            |           |
|                |         | 10.1.39.1 (cni0)      |      |            |           |
|                |         | 10.1.39.0 (flannel.1) |      |            |           |
+----------------+---------+-----------------------+------+------------+-----------+
| juju-fd86fa-0  | RUNNING | 10.170.64.157 (eth0)  |      | PERSISTENT | 0         |
+----------------+---------+-----------------------+------+------------+-----------+
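conjure-up drives Juju under the hood, so the same containers also show up as Juju machines and can be inspected with the juju client:

juju status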

We can find out more about the kubernetes cluster with the command:

$ kubectl cluster-info
Kubernetes master is running at https://10.170.64.167:443
Heapster is running at https://10.170.64.167:443/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://10.170.64.167:443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubernetes-dashboard is running at https://10.170.64.167:443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
Metrics-server is running at https://10.170.64.167:443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
Grafana is running at https://10.170.64.167:443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
InfluxDB is running at https://10.170.64.167:443/api/v1/namespaces/kube-system/services/monitoring-influxdb:http/proxy

To connect to the dashboard interface we need the username and password, which can be found in the ~/.kube/config file:

$ cat ~/.kube/config
apiVersion: v1
.....
users:
  - name: conjure-canonical-kubern-39c
    user:
      password: zrq856BAHcjgCwEV99l7ML9LxiH8vZhZ
      username: admin
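Since the cluster addresses live on the LXD bridge, a convenient way to reach the dashboard (for example over an SSH tunnel to the AWS machine) is kubectl proxy, which tunnels the API server to localhost; the path below is the dashboard URL from the cluster-info output, served through the proxy:

kubectl proxy
# then browse to:
# http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/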

The status of the services can be checked with the kubectl utility:

$ kubectl get no,po,services,rc
 NAME                  STATUS    ROLES     AGE       VERSION
 node/juju-433f68-10   Ready     <none>    31m       v1.11.1
 node/juju-433f68-11   Ready     <none>    31m       v1.11.1
 node/juju-433f68-7    Ready     <none>    31m       v1.11.1
 node/juju-433f68-8    Ready     <none>    31m       v1.11.1
 node/juju-433f68-9    Ready     <none>    31m       v1.11.1

 NAME                                                   READY     STATUS    RESTARTS   AGE
 pod/default-http-backend-7f9xb                         1/1       Running   0          31m
 pod/nginx-ingress-kubernetes-worker-controller-2nzvf   1/1       Running   0          31m
 pod/nginx-ingress-kubernetes-worker-controller-b4l22   1/1       Running   0          31m
 pod/nginx-ingress-kubernetes-worker-controller-khlqg   1/1       Running   0          31m
 pod/nginx-ingress-kubernetes-worker-controller-r8s2b   1/1       Running   0          31m
 pod/nginx-ingress-kubernetes-worker-controller-xv277   1/1       Running   0          31m

 NAME                           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
 service/default-http-backend   ClusterIP   10.152.183.6   <none>        80/TCP    31m
 service/kubernetes             ClusterIP   10.152.183.1   <none>        443/TCP   33m

 NAME                                         DESIRED   CURRENT   READY     AGE
 replicationcontroller/default-http-backend   1         1         1         31m

With the cluster up and running, you can deploy and test all the services kubernetes has to offer. Good luck!
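As a first smoke test, a throwaway nginx deployment works fine; a minimal sketch (the name hello-nginx is arbitrary, and with the v1.11 client kubectl run still creates a Deployment):

kubectl run hello-nginx --image=nginx --port=80
kubectl get pods
kubectl delete deployment hello-nginx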

An interesting presentation on this subject can be watched on YouTube.

You can find more about LXD in the article series Virtualizare cu LXD.