Your coworker said node cluster3-worker2 is running an older Kubernetes version and is not even part of the cluster. Update Kubernetes on that node to the exact version that's running on cluster3-master1. Then add this node to the cluster. Use kubeadm for this.
Answer:
Upgrade Kubernetes to the cluster3-master1 version
Search in the docs for kubeadm upgrade: https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade
➜ k get node
NAME               STATUS     ROLES                  AGE    VERSION
cluster3-master1   Ready      control-plane,master   116m   v1.22.1
cluster3-worker1   NotReady   <none>                 112m   v1.22.1
The master node seems to be running Kubernetes v1.22.1, and cluster3-worker2 is not yet part of the cluster.
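To confirm the exact server version straight from the API server rather than reading it off the node list, there is an optional quick check from the main terminal (the --short flag still works on this kubectl release):

k version --short    # the Server Version line is what the worker needs to match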
➜ ssh cluster3-worker2
➜ root@cluster3-worker2:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.1", GitCommit:"632ed300f2c34f6d6d15ca4cef3d3c7073412212", GitTreeState:"clean", BuildDate:"2021-08-19T15:44:22Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"}
➜ root@cluster3-worker2:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.4", GitCommit:"3cce4a82b44f032d0cd1a1790e6d2f5a55d20aae", GitTreeState:"clean", BuildDate:"2021-08-11T18:16:05Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
➜ root@cluster3-worker2:~# kubelet --version
Kubernetes v1.21.4
Here kubeadm is already installed in the desired version, so we can run:
➜ root@cluster3-worker2:~# kubeadm upgrade node
couldn't create a Kubernetes client from file "/etc/kubernetes/kubelet.conf": failed to load admin kubeconfig: open /etc/kubernetes/kubelet.conf: no such file or directory
To see the stack trace of this error execute with --v=5 or higher
This is usually the proper command to upgrade a node. But this error means that this node was never even initialised, so there is nothing to upgrade here. That will be done later using kubeadm join. For now we can continue with kubelet and kubectl:
➜ root@cluster3-worker2:~# apt-get update
...
➜ root@cluster3-worker2:~# apt-cache show kubectl | grep 1.22
Version: 1.22.1-00
Filename: pool/kubectl_1.22.1-00_amd64_2a00cd912bfa610fe4932bc0a557b2dd7b95b2c8bff9d001dc6b3d34323edf7d.deb
Version: 1.22.0-00
Filename: pool/kubectl_1.22.0-00_amd64_052395d9ddf0364665cf7533aa66f96b310ec8a2b796d21c42f386684ad1fc56.deb
Filename: pool/kubectl_1.17.1-00_amd64_0dc19318c9114db2931552bb8bf650a14227a9603cb73fe0917ac7868ec7fcf0.deb
SHA256: 0dc19318c9114db2931552bb8bf650a14227a9603cb73fe0917ac7868ec7fcf0
...
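Note: on some setups these packages are pinned via apt-mark hold; if apt-get refuses the version change in the next step, releasing the hold first should help (this may not be necessary in your environment):

apt-mark showhold                  # list packages currently held back
apt-mark unhold kubelet kubectl    # allow installing a different version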
➜ root@cluster3-worker2:~# apt-get install kubectl=1.22.1-00 kubelet=1.22.1-00
Reading package lists... Done
Building dependency tree
Reading state information... Done
...
Preparing to unpack .../kubectl_1.22.1-00_amd64.deb ...
Unpacking kubectl (1.22.1-00) over (1.21.4-00) ...
Preparing to unpack .../kubelet_1.22.1-00_amd64.deb ...
Unpacking kubelet (1.22.1-00) over (1.21.4-00) ...
Setting up kubectl (1.22.1-00) ...
Setting up kubelet (1.22.1-00) ...
➜ root@cluster3-worker2:~# kubelet --version
Kubernetes v1.22.1
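Optionally, pin the packages again so a routine apt-get upgrade won't move them past the cluster version, as the official install docs suggest:

apt-mark hold kubelet kubectl    # keep both packages at 1.22.1-00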
Now we're up to date with kubeadm, kubectl and kubelet. Restart the kubelet:
➜ root@cluster3-worker2:~# systemctl restart kubelet
➜ root@cluster3-worker2:~# service kubelet status
...$KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
Main PID: 21457 (code=exited, status=255)
...
Apr 30 22:15:08 cluster3-worker2 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Apr 30 22:15:08 cluster3-worker2 systemd[1]: kubelet.service: Failed with result 'exit-code'.
We can ignore the errors here: the kubelet exits because the node's config files don't exist yet; kubeadm join will create them. Move on to the next step and generate the join command.
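If you want to verify that the failure really is just the missing configuration, the kubelet journal can be inspected (optional):

journalctl -u kubelet -n 20 --no-pager    # show the last 20 kubelet log lines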
Add cluster3-worker2 to the cluster
First we log into master1 and generate a new TLS bootstrap token, also printing out the join command:
➜ ssh cluster3-master1
➜ root@cluster3-master1:~# kubeadm token create --print-join-command
kubeadm join 192.168.100.31:6443 --token leqq1l.1hlg4rw8mu7brv73 --discovery-token-ca-cert-hash sha256:2e2c3407a256fc768f0d8e70974a8e24d7b9976149a79bd08858c4d7aa2ff79a
➜ root@cluster3-master1:~# kubeadm token list
TOKEN                     TTL         EXPIRES                ...
mnkpfu.d2lpu8zypbyumr3i   23h         2020-05-01T22:43:45Z   ...
poa13f.hnrs6i6ifetwii75   <forever>   <never>                ...
We see our token expires in 23h; we could adjust this by passing the --ttl argument.
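For example, to issue a token that expires after two hours instead of the default 24h (the duration here is just illustrative):

kubeadm token create --ttl 2h --print-join-command    # token valid for 2h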
Next we connect to worker2 again and simply execute the join command:
➜ ssh cluster3-worker2
➜ root@cluster3-worker2:~# kubeadm join 192.168.100.31:6443 --token leqq1l.1hlg4rw8mu7brv73 --discovery-token-ca-cert-hash sha256:2e2c3407a256fc768f0d8e70974a8e24d7b9976149a79bd08858c4d7aa2ff79a
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
➜ root@cluster3-worker2:~# service kubelet status
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Wed 2021-09-15 17:12:32 UTC; 42s ago
Docs: https://kubernetes.io/docs/home/
Main PID: 24771 (kubelet)
Tasks: 13 (limit: 467)
Memory: 68.0M
CGroup: /system.slice/kubelet.service
└─24771 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kuber>
If you have trouble with kubeadm join, you might need to run kubeadm reset.
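A reset wipes the kubeadm-created state on the node so the join can be retried cleanly; it is destructive on a node that already belongs to a cluster, so use it with care:

kubeadm reset -f    # -f skips the confirmation prompt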
For us, though, everything looks good. Finally we head back to the main terminal and check the node status:
➜ k get node
NAME               STATUS     ROLES                  AGE   VERSION
cluster3-master1   Ready      control-plane,master   24h   v1.22.1
cluster3-worker1   Ready      <none>                 24h   v1.22.1
cluster3-worker2   NotReady   <none>                 32s   v1.22.1
Give it a bit of time till the node is ready.
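Instead of re-running the command, the node can also be watched until it flips to Ready:

k get node cluster3-worker2 -w    # -w streams updates as the status changes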
➜ k get node
NAME               STATUS   ROLES                  AGE    VERSION
cluster3-master1   Ready    control-plane,master   24h    v1.22.1
cluster3-worker1   Ready    <none>                 24h    v1.22.1
cluster3-worker2   Ready    <none>                 107s   v1.22.1
We see cluster3-worker2 is now available and up to date.