SSH into the master node with ssh cluster1-master1.
Check how the master components kubelet, kube-apiserver,
kube-scheduler, kube-controller-manager and etcd are started/installed
on the master node. Also find out the name of the DNS application and
how it's started/installed on the master node.
Write your findings into file /opt/course/8/master-components.txt. The file should be structured like:
# /opt/course/8/master-components.txt
kubelet: [TYPE]
kube-apiserver: [TYPE]
kube-scheduler: [TYPE]
kube-controller-manager: [TYPE]
etcd: [TYPE]
dns: [TYPE] [NAME]
Choices of [TYPE] are: not-installed, process, static-pod, pod
Answer:
We could start by looking for processes of the requested components, beginning with the kubelet:
➜ ssh cluster1-master1
root@cluster1-master1:~# ps aux | grep kubelet # shows kubelet process
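If the kubelet shows up as a plain process like this, it's usually run as a systemd service. A quick way to confirm (output abbreviated; the exact unit description and file paths can differ per distribution):

➜ root@cluster1-master1:~# systemctl status kubelet | head -3
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; ...)
   Active: active (running) ...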
We can see which components are controlled via systemd by looking at the /etc/systemd/system directory:
➜ root@cluster1-master1:~# find /etc/systemd/system/ | grep kube
/etc/systemd/system/kubelet.service.d
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf
/etc/systemd/system/multi-user.target.wants/kubelet.service
➜ root@cluster1-master1:~# find /etc/systemd/system/ | grep etcd
This shows that the kubelet is controlled via systemd, but there is no other service for the kube components or etcd. It seems this cluster has been set up using kubeadm, so we check the default manifests directory:
➜ root@cluster1-master1:~# find /etc/kubernetes/manifests/
/etc/kubernetes/manifests/
/etc/kubernetes/manifests/kube-controller-manager.yaml
/etc/kubernetes/manifests/etcd.yaml
/etc/kubernetes/manifests/kube-scheduler-special.yaml
/etc/kubernetes/manifests/kube-apiserver.yaml
/etc/kubernetes/manifests/kube-scheduler.yaml
(The kubelet could also have a different manifests directory specified via the parameter --pod-manifest-path in its systemd startup config)
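With kubeadm, the manifest directory is usually set via staticPodPath in the kubelet's config file instead. Assuming the kubeadm default config location, we can verify it like this:

➜ root@cluster1-master1:~# grep staticPodPath /var/lib/kubelet/config.yaml
staticPodPath: /etc/kubernetes/manifests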
This means the four main master services are set up as static Pods. There also seems to be a second scheduler, kube-scheduler-special.
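As a quick sanity check (file contents vary per cluster, so this is just a sketch), each of these manifests is a plain Pod definition that the kubelet reads directly from disk:

➜ root@cluster1-master1:~# grep kind: /etc/kubernetes/manifests/kube-apiserver.yaml
kind: Pod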
Next, let's check all Pods running in the kube-system Namespace on the master node:
➜ root@cluster1-master1:~# kubectl -n kube-system get pod -o wide | grep master1
coredns-5644d7b6d9-c4f68 1/1 Running ... cluster1-master1
coredns-5644d7b6d9-t84sc 1/1 Running ... cluster1-master1
etcd-cluster1-master1 1/1 Running ... cluster1-master1
kube-apiserver-cluster1-master1 1/1 Running ... cluster1-master1
kube-controller-manager-cluster1-master1 1/1 Running ... cluster1-master1
kube-proxy-q955p 1/1 Running ... cluster1-master1
kube-scheduler-cluster1-master1 1/1 Running ... cluster1-master1
kube-scheduler-special-cluster1-master1 0/1 CrashLoopBackOff ... cluster1-master1
weave-net-mwj47 2/2 Running ... cluster1-master1
There we see the five static Pods, each with the -cluster1-master1 suffix.
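We can also confirm these are static Pods: what the API shows are mirror Pods, which are owned by the Node object itself rather than by a controller (a quick check; a regular Pod's ownerReferences would point to a ReplicaSet, DaemonSet or similar):

➜ root@cluster1-master1:~# kubectl -n kube-system get pod kube-apiserver-cluster1-master1 \
  -o jsonpath='{.metadata.ownerReferences[0].kind}'
Node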
We also see that the DNS application seems to be coredns, but how is it controlled?
➜ root@cluster1-master1$ kubectl -n kube-system get ds
NAME DESIRED CURRENT ... NODE SELECTOR AGE
kube-proxy 3 3 ... kubernetes.io/os=linux 155m
weave-net 3 3 ... <none> 155m
➜ root@cluster1-master1$ kubectl -n kube-system get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
coredns 2/2 2 2 155m
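To double-check, we can follow a coredns Pod's ownerReferences, which should lead to a ReplicaSet managed by the Deployment (assuming the usual k8s-app=kube-dns label that kubeadm applies to coredns Pods):

➜ root@cluster1-master1$ kubectl -n kube-system get pod -l k8s-app=kube-dns \
  -o jsonpath='{.items[0].metadata.ownerReferences[0].kind}'
ReplicaSet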
It seems coredns is controlled via a Deployment. We combine our findings in the requested file:
# /opt/course/8/master-components.txt
kubelet: process
kube-apiserver: static-pod
kube-scheduler: static-pod
kube-scheduler-special: static-pod (status CrashLoopBackOff)
kube-controller-manager: static-pod
etcd: static-pod
dns: pod coredns
You should be comfortable investigating a running cluster, know the different ways a cluster and its services can be set up, and be able to troubleshoot and find error sources.