Kubernetes - cluster installation
Posted on Tue 12 May 2020 in tutoriale
Installing a Kubernetes cluster is straightforward with kubeadm.
The cluster consists of 3 instances - Ubuntu 18.04 LTS - prepared with Ansible, as follows:
- k8s-master - control-plane (administration) node
- k8s-slave1 - worker node
- k8s-slave2 - worker node
Each node has an internal interface (10.209.214.0/16) and an external one (192.168.25.0/16).
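The Ansible preparation itself is not covered here; roughly, it handles the usual prerequisites on every node (disabling swap, installing a container runtime and the Kubernetes packages). A rough manual equivalent, assuming the official apt repository - package choices are illustrative, not the exact playbook used:
# swapoff -a && sed -i '/ swap / s/^/#/' /etc/fstab    # kubelet requires swap to be off
# apt-get update && apt-get install -y docker.io apt-transport-https curl
# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
# apt-get update && apt-get install -y kubelet kubeadm kubectl
# apt-mark hold kubelet kubeadm kubectl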
The cluster is initialized on k8s-master:
# kubeadm init --pod-network-cidr=10.180.0.0/16 --apiserver-cert-extra-sans 192.168.25.180
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.209.214.31 192.168.25.180]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [10.209.214.31 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [10.209.214.31 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0512 14:55:02.267212 19375 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0512 14:55:02.268448 19375 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 38.031506 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 5ucugc.6tj2hgqgi85lgf94
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.209.214.31:6443 --token 5ucugc.6tj2hgqgi85lgf94 \
--discovery-token-ca-cert-hash sha256:23d7fbbd736c8a06e8c66c8d132b1a6e131666571874602822f8e6f4412d1987
where:
- pod-network-cidr - the network range that will be used by the pods (10.180.0.0/16)
- apiserver-cert-extra-sans - additional names/addresses to be included in the generated TLS certificate (192.168.25.180).
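For reference, the same parameters can also be supplied through a kubeadm configuration file instead of command-line flags. A minimal sketch, assuming the v1beta2 kubeadm API used by Kubernetes 1.18 (the file name is arbitrary):
# cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: 10.180.0.0/16
apiServer:
  certSANs:
    - 192.168.25.180
EOF
# kubeadm init --config kubeadm-config.yaml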
We copy the admin.conf config file into the user's home directory, from where it will be used by kubectl:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Note: if the file is transferred to a local workstation, adjust the following line, if needed, to use the external IP of the master node:
server: https://192.168.25.180:6443
but not before making sure that port 6443 is reachable and properly secured.
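One possible way to do that transfer and adjustment from the workstation, assuming SSH access as root to the master (host and paths are only illustrative):
scp root@192.168.25.180:/etc/kubernetes/admin.conf ~/.kube/config
sed -i 's|https://10.209.214.31:6443|https://192.168.25.180:6443|' ~/.kube/config
kubectl get nodes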
We install the add-on for the internal (pod) network. We chose flannel.
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
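A note on the pod network range: the flannel manifest ships with a default Network of 10.244.0.0/16 in its net-conf.json, while the cluster was initialized with --pod-network-cidr=10.180.0.0/16. If cross-node pod traffic misbehaves, one option is to align the manifest with the cluster CIDR before applying it; a sketch, assuming curl and sed are available:
# curl -sLO https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# sed -i 's|10.244.0.0/16|10.180.0.0/16|' kube-flannel.yml
# kubectl apply -f kube-flannel.yml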
We check that the services are running correctly:
# kubectl get no,po -A -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
node/k8s-master Ready master 39m v1.18.2 10.209.214.31 <none> Ubuntu 18.04.4 LTS 5.3.0-51-generic docker://19.3.8
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system pod/coredns-66bff467f8-4tz99 1/1 Running 0 38m 10.180.0.2 k8s-master <none> <none>
kube-system pod/coredns-66bff467f8-txqrb 1/1 Running 0 38m 10.180.0.3 k8s-master <none> <none>
kube-system pod/etcd-k8s-master 1/1 Running 0 39m 10.209.214.31 k8s-master <none> <none>
kube-system pod/kube-apiserver-k8s-master 1/1 Running 0 39m 10.209.214.31 k8s-master <none> <none>
kube-system pod/kube-controller-manager-k8s-master 1/1 Running 0 39m 10.209.214.31 k8s-master <none> <none>
kube-system pod/kube-flannel-ds-amd64-8chrg 1/1 Running 0 12m 10.209.214.31 k8s-master <none> <none>
kube-system pod/kube-proxy-t2kvh 1/1 Running 0 38m 10.209.214.31 k8s-master <none> <none>
kube-system pod/kube-scheduler-k8s-master 1/1 Running 0 39m 10.209.214.31 k8s-master <none> <none>
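A note before joining the workers: the bootstrap token printed by kubeadm init is valid for 24 hours by default; if it has expired, a fresh join command can be generated on the master with:
# kubeadm token create --print-join-command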
Now we can move on to configuring the worker nodes. Run on each node:
# kubeadm join 10.209.214.31:6443 --token 5ucugc.6tj2hgqgi85lgf94 \
--discovery-token-ca-cert-hash sha256:23d7fbbd736c8a06e8c66c8d132b1a6e131666571874602822f8e6f4412d1987
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
We check that the nodes have joined the cluster:
# kubectl get no,po -A -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
node/k8s-master Ready master 46m v1.18.2 10.209.214.31 <none> Ubuntu 18.04.4 LTS 5.3.0-51-generic docker://19.3.8
node/k8s-slave1 Ready <none> 7m53s v1.18.2 10.209.214.201 <none> Ubuntu 18.04.4 LTS 5.3.0-51-generic docker://19.3.8
node/k8s-slave2 Ready <none> 2m13s v1.18.2 10.209.214.112 <none> Ubuntu 18.04.4 LTS 5.3.0-51-generic docker://19.3.8
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system pod/coredns-66bff467f8-4tz99 1/1 Running 0 46m 10.180.0.2 k8s-master <none> <none>
kube-system pod/coredns-66bff467f8-txqrb 1/1 Running 0 46m 10.180.0.3 k8s-master <none> <none>
kube-system pod/etcd-k8s-master 1/1 Running 0 46m 10.209.214.31 k8s-master <none> <none>
kube-system pod/kube-apiserver-k8s-master 1/1 Running 0 46m 10.209.214.31 k8s-master <none> <none>
kube-system pod/kube-controller-manager-k8s-master 1/1 Running 0 46m 10.209.214.31 k8s-master <none> <none>
kube-system pod/kube-flannel-ds-amd64-8chrg 1/1 Running 0 19m 10.209.214.31 k8s-master <none> <none>
kube-system pod/kube-flannel-ds-amd64-9njz7 1/1 Running 0 92s 10.209.214.112 k8s-slave2 <none> <none>
kube-system pod/kube-flannel-ds-amd64-fbfxf 1/1 Running 0 7m53s 10.209.214.201 k8s-slave1 <none> <none>
kube-system pod/kube-proxy-j4m2l 1/1 Running 0 7m53s 10.209.214.201 k8s-slave1 <none> <none>
kube-system pod/kube-proxy-t2kvh 1/1 Running 0 46m 10.209.214.31 k8s-master <none> <none>
kube-system pod/kube-proxy-xg6hn 1/1 Running 0 92s 10.209.214.112 k8s-slave2 <none> <none>
kube-system pod/kube-scheduler-k8s-master 1/1 Running 0 46m 10.209.214.31 k8s-master <none> <none>
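The ROLES column shows <none> for the worker nodes. If we want a friendlier listing, the nodes can optionally be labeled; the role name below is just a convention, not something the cluster requires:
# kubectl label node k8s-slave1 node-role.kubernetes.io/worker=worker
# kubectl label node k8s-slave2 node-role.kubernetes.io/worker=worker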
Now that the cluster is installed, we can run the containers we want. Good luck!
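As a quick smoke test - the nginx image here is only an example - we can deploy something, expose it and see where it lands:
# kubectl create deployment nginx --image=nginx
# kubectl expose deployment nginx --port=80 --type=NodePort
# kubectl get pods -o wide
# kubectl get svc nginx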