The cluster admin asked you to find out the following information about etcd running on cluster2-master1:
- Server private key location
- Server certificate expiration date
- Is client certificate authentication enabled
Write this information into /opt/course/p1/etcd-info.txt
Finally you're asked to save an etcd snapshot at /etc/etcd-snapshot.db
on cluster2-master1 and display its status.
Answer:
Find out etcd information
Let's check the nodes:
➜ k get node
NAME STATUS ROLES AGE VERSION
cluster2-master1 Ready master 89m v1.22.1
cluster2-worker1 Ready <none> 87m v1.22.1
➜ ssh cluster2-master1
First we check how etcd is set up in this cluster:
➜ root@cluster2-master1:~# kubectl -n kube-system get pod
NAME READY STATUS RESTARTS AGE
coredns-66bff467f8-k8f48 1/1 Running 0 26h
coredns-66bff467f8-rn8tr 1/1 Running 0 26h
etcd-cluster2-master1 1/1 Running 0 26h
kube-apiserver-cluster2-master1 1/1 Running 0 26h
kube-controller-manager-cluster2-master1 1/1 Running 0 26h
kube-proxy-qthfg 1/1 Running 0 25h
kube-proxy-z55lp 1/1 Running 0 26h
kube-scheduler-cluster2-master1 1/1 Running 1 26h
weave-net-cqdvt 2/2 Running 0 26h
weave-net-dxzgh 2/2 Running 1 25h
We see it's running as a Pod, more specifically a static Pod. So we check the default kubelet directory for static Pod manifests:
➜ root@cluster2-master1:~# find /etc/kubernetes/manifests/
/etc/kubernetes/manifests/
/etc/kubernetes/manifests/kube-controller-manager.yaml
/etc/kubernetes/manifests/kube-apiserver.yaml
/etc/kubernetes/manifests/etcd.yaml
/etc/kubernetes/manifests/kube-scheduler.yaml
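If in doubt whether this is really the directory the kubelet watches for static Pods, the kubelet config can confirm it (assuming the default kubeadm location of that file):

grep staticPodPath /var/lib/kubelet/config.yaml

This should point to /etc/kubernetes/manifests.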
➜ root@cluster2-master1:~# vim /etc/kubernetes/manifests/etcd.yaml
So we look at the yaml and the parameters with which etcd is started:
# /etc/kubernetes/manifests/etcd.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://192.168.102.11:2379
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt        # server certificate
    - --client-cert-auth=true                                # enabled
    - --data-dir=/var/lib/etcd
    - --initial-advertise-peer-urls=https://192.168.102.11:2380
    - --initial-cluster=cluster2-master1=https://192.168.102.11:2380
    - --key-file=/etc/kubernetes/pki/etcd/server.key         # server private key
    - --listen-client-urls=https://127.0.0.1:2379,https://192.168.102.11:2379
    - --listen-metrics-urls=http://127.0.0.1:2381
    - --listen-peer-urls=https://192.168.102.11:2380
    - --name=cluster2-master1
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --snapshot-count=10000
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
...
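Instead of reading the whole manifest in vim, a quick grep for the flags we care about would do as well (the flag names are taken from the manifest above):

grep -E "cert-file|client-cert-auth|key-file" /etc/kubernetes/manifests/etcd.yaml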
We see that client certificate authentication is enabled, and we also find the requested path to the server private key. Now let's find out the expiration date of the server certificate:
➜ root@cluster2-master1:~# openssl x509 -noout -text -in /etc/kubernetes/pki/etcd/server.crt | grep Validity -A2
Validity
Not Before: Sep 13 13:01:31 2021 GMT
Not After : Sep 13 13:01:31 2022 GMT
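As an alternative to openssl, kubeadm can also list certificate expiration dates, including the etcd server certificate (assuming the cluster was created with kubeadm, which the manifest layout suggests):

kubeadm certs check-expiration | grep etcd-server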
There we have it. Let's write the information into the requested file:
# /opt/course/p1/etcd-info.txt
Server private key location: /etc/kubernetes/pki/etcd/server.key
Server certificate expiration date: Sep 13 13:01:31 2022 GMT
Is client certificate authentication enabled: yes
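Instead of using an editor, the file could also be created with a quick heredoc, using the values found above:

cat <<EOF > /opt/course/p1/etcd-info.txt
Server private key location: /etc/kubernetes/pki/etcd/server.key
Server certificate expiration date: Sep 13 13:01:31 2022 GMT
Is client certificate authentication enabled: yes
EOF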
Create etcd snapshot
First we try:
ETCDCTL_API=3 etcdctl snapshot save /etc/etcd-snapshot.db
This won't work on its own because etcd is configured with client certificate authentication. The endpoint can also be taken from the yaml, but we need to specify additional TLS parameters, all of which we can find in the yaml declaration above:
ETCDCTL_API=3 etcdctl snapshot save /etc/etcd-snapshot.db \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--cert /etc/kubernetes/pki/etcd/server.crt \
--key /etc/kubernetes/pki/etcd/server.key
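Should etcdctl not reach etcd on the default 127.0.0.1:2379, the endpoint from the listen-client-urls in the manifest can be passed explicitly, for example:

ETCDCTL_API=3 etcdctl snapshot save /etc/etcd-snapshot.db \
--endpoints https://127.0.0.1:2379 \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--cert /etc/kubernetes/pki/etcd/server.crt \
--key /etc/kubernetes/pki/etcd/server.key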
This worked. Now we can output the status of the backup file:
➜ root@cluster2-master1:~# ETCDCTL_API=3 etcdctl snapshot status /etc/etcd-snapshot.db
4d4e953, 7213, 1291, 2.7 MB
The status shows:
- Hash: 4d4e953
- Revision: 7213
- Total Keys: 1291
- Total Size: 2.7 MB
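For a more readable output, etcdctl can also print the status as a table:

ETCDCTL_API=3 etcdctl snapshot status /etc/etcd-snapshot.db --write-out=table

This prints the same HASH, REVISION, TOTAL KEYS and TOTAL SIZE values in table form.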