You're asked to confirm that kube-proxy is running correctly on all nodes. For this, perform the following in Namespace project-hamster:

Create a new Pod named p2-pod with two containers, one of image nginx:1.21.3-alpine and one of image busybox:1.31. Make sure the busybox container keeps running for some time.

Create a new Service named p2-service which exposes that Pod internally in the cluster on port 3000->80.

Find the kube-proxy container on all nodes cluster1-master1, cluster1-worker1 and cluster1-worker2 and make sure that it's using iptables. Use command crictl for this.

Write the iptables rules of all nodes belonging to the created Service p2-service into file /opt/course/p2/iptables.txt.

Finally delete the Service and confirm that the iptables rules are gone from all nodes.
Answer:
Create the Pod
First we create the Pod:
# check out export statement on top which allows us to use $do
k run p2-pod --image=nginx:1.21.3-alpine $do > p2.yaml
vim p2.yaml
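The $do variable referenced in the comment is presumably exported at the top of the session as a shorthand for the dry-run flags, something like:

export do="--dry-run=client -o yaml"   # lets us write $do instead of the full flags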
Next we add the requested second container:
# p2.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: p2-pod
  name: p2-pod
  namespace: project-hamster          # add
spec:
  containers:
  - image: nginx:1.21.3-alpine
    name: p2-pod
    resources: {}
  - image: busybox:1.31               # add
    name: c2                          # add
    command: ["sh", "-c", "sleep 1d"] # add
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
And we create the Pod:
k -f p2.yaml create
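To verify that both containers come up we can check the Pod status, where READY should show 2/2:

k -n project-hamster get pod p2-pod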
Create the Service
Next we create the Service:
k -n project-hamster expose pod p2-pod --name p2-service --port 3000 --target-port 80
This will create a yaml like:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2020-04-30T20:58:14Z"
  labels:
    run: p2-pod
  managedFields:
  - ...
    operation: Update
    time: "2020-04-30T20:58:14Z"
  name: p2-service
  namespace: project-hamster
  resourceVersion: "11071"
  selfLink: /api/v1/namespaces/project-hamster/services/p2-service
  uid: 2a1c0842-7fb6-4e94-8cdb-1602a3b1e7d2
spec:
  clusterIP: 10.97.45.18
  ports:
  - port: 3000
    protocol: TCP
    targetPort: 80
  selector:
    run: p2-pod
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
We should confirm that Pod and Service are connected, i.e. the Service should have Endpoints:
k -n project-hamster get pod,svc,ep
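As an optional sanity check we can also call the Service from a temporary Pod; the Pod name tmp here is arbitrary, and this assumes cluster DNS is working:

k -n project-hamster run tmp --restart=Never --rm -i --image=busybox:1.31 -- wget -qO- p2-service:3000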
Confirm kube-proxy is running and is using iptables
First we get nodes in the cluster:
➜ k get node
NAME STATUS ROLES AGE VERSION
cluster1-master1 Ready master 98m v1.22.1
cluster1-worker1 Ready <none> 96m v1.22.1
cluster1-worker2 Ready <none> 95m v1.22.1
The idea here is to log into every node, find the kube-proxy container and check its logs:
➜ ssh cluster1-master1
➜ root@cluster1-master1$ crictl ps | grep kube-proxy
27b6a18c0f89c 36c4ebbc9d979 3 hours ago Running kube-proxy
➜ root@cluster1-master1~# crictl logs 27b6a18c0f89c
...
I0913 12:53:03.096620 1 server_others.go:212] Using iptables Proxier.
...
This should be repeated on every node and should result in the same output: Using iptables Proxier.
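Instead of logging into each node interactively, a small loop over ssh can run the same check; this is just a convenience sketch using the crictl flags shown above:

for node in cluster1-master1 cluster1-worker1 cluster1-worker2; do
  echo "=== $node ==="
  ssh $node 'crictl logs "$(crictl ps --name kube-proxy -q)" 2>&1 | grep -i proxier'
done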
Check kube-proxy is creating iptables rules
Now we check the iptables rules on every node, first manually:
➜ ssh cluster1-master1 iptables-save | grep p2-service
-A KUBE-SEP-6U447UXLLQIKP7BB -s 10.44.0.20/32 -m comment --comment "project-hamster/p2-service:" -j KUBE-MARK-MASQ
-A KUBE-SEP-6U447UXLLQIKP7BB -p tcp -m comment --comment "project-hamster/p2-service:" -m tcp -j DNAT --to-destination 10.44.0.20:80
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.97.45.18/32 -p tcp -m comment --comment "project-hamster/p2-service: cluster IP" -m tcp --dport 3000 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.97.45.18/32 -p tcp -m comment --comment "project-hamster/p2-service: cluster IP" -m tcp --dport 3000 -j KUBE-SVC-2A6FNMCK6FDH7PJH
-A KUBE-SVC-2A6FNMCK6FDH7PJH -m comment --comment "project-hamster/p2-service:" -j KUBE-SEP-6U447UXLLQIKP7BB
➜ ssh cluster1-worker1 iptables-save | grep p2-service
-A KUBE-SEP-6U447UXLLQIKP7BB -s 10.44.0.20/32 -m comment --comment "project-hamster/p2-service:" -j KUBE-MARK-MASQ
-A KUBE-SEP-6U447UXLLQIKP7BB -p tcp -m comment --comment "project-hamster/p2-service:" -m tcp -j DNAT --to-destination 10.44.0.20:80
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.97.45.18/32 -p tcp -m comment --comment "project-hamster/p2-service: cluster IP" -m tcp --dport 3000 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.97.45.18/32 -p tcp -m comment --comment "project-hamster/p2-service: cluster IP" -m tcp --dport 3000 -j KUBE-SVC-2A6FNMCK6FDH7PJH
-A KUBE-SVC-2A6FNMCK6FDH7PJH -m comment --comment "project-hamster/p2-service:" -j KUBE-SEP-6U447UXLLQIKP7BB
➜ ssh cluster1-worker2 iptables-save | grep p2-service
-A KUBE-SEP-6U447UXLLQIKP7BB -s 10.44.0.20/32 -m comment --comment "project-hamster/p2-service:" -j KUBE-MARK-MASQ
-A KUBE-SEP-6U447UXLLQIKP7BB -p tcp -m comment --comment "project-hamster/p2-service:" -m tcp -j DNAT --to-destination 10.44.0.20:80
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.97.45.18/32 -p tcp -m comment --comment "project-hamster/p2-service: cluster IP" -m tcp --dport 3000 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.97.45.18/32 -p tcp -m comment --comment "project-hamster/p2-service: cluster IP" -m tcp --dport 3000 -j KUBE-SVC-2A6FNMCK6FDH7PJH
-A KUBE-SVC-2A6FNMCK6FDH7PJH -m comment --comment "project-hamster/p2-service:" -j KUBE-SEP-6U447UXLLQIKP7BB
Great. Now let's write these rules into the requested file:
➜ ssh cluster1-master1 iptables-save | grep p2-service >> /opt/course/p2/iptables.txt
➜ ssh cluster1-worker1 iptables-save | grep p2-service >> /opt/course/p2/iptables.txt
➜ ssh cluster1-worker2 iptables-save | grep p2-service >> /opt/course/p2/iptables.txt
Delete the Service and confirm iptables rules are gone
Delete the Service:
k -n project-hamster delete svc p2-service
And confirm the iptables rules are gone:
➜ ssh cluster1-master1 iptables-save | grep p2-service
➜ ssh cluster1-worker1 iptables-save | grep p2-service
➜ ssh cluster1-worker2 iptables-save | grep p2-service
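The empty output confirms the rules were removed. The same kind of loop as before works here too; grep simply exits non-zero when nothing matches:

for node in cluster1-master1 cluster1-worker1 cluster1-worker2; do
  ssh $node iptables-save | grep p2-service || echo "$node: no p2-service rules"
done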
Done.
Kubernetes Services are implemented using iptables rules (with the default kube-proxy mode) on all nodes. Every time a Service is created, altered or deleted, or the Endpoints of a Service change, the kube-proxy instances on all nodes (which watch the kube-apiserver for such changes) update their iptables rules to reflect the current state.
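For our Service the captured rules form a small chain; roughly, a packet sent to the ClusterIP traverses the following (chain names taken from the iptables-save output above):

# PREROUTING/OUTPUT -> KUBE-SERVICES             entry point for Service traffic
#   -d 10.97.45.18/32 -p tcp --dport 3000        matches our ClusterIP:port
#     -> KUBE-SVC-2A6FNMCK6FDH7PJH               per-Service chain
#       -> KUBE-SEP-6U447UXLLQIKP7BB             per-Endpoint chain (our single Pod)
#         -> DNAT --to-destination 10.44.0.20:80 rewrite to PodIP:port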