
CKA Simulator Kubernetes 1.22

 

https://killer.sh

Pre Setup

Once you've gained access to your terminal it might be wise to spend ~1 minute to set up your environment. You could set these:

Vim

To make vim use 2 spaces for a tab, edit ~/.vimrc to contain:
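A minimal example (the same settings are explained in the tips section):

set tabstop=2
set expandtab
set shiftwidth=2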

More setup suggestions are in the tips section.

 

 

Question 1 | Contexts

Task weight: 1%

 

You have access to multiple clusters from your main terminal through kubectl contexts. Write all those context names into /opt/course/1/contexts.

Next write a command to display the current context into /opt/course/1/context_default_kubectl.sh, the command should use kubectl.

Finally write a second command doing the same thing into /opt/course/1/context_default_no_kubectl.sh, but without the use of kubectl.

 

Answer:

Maybe the fastest way is just to run:
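A possible one-liner, assuming the k alias from the tips setup:

k config get-contexts -o name > /opt/course/1/contexts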

Or using jsonpath:
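A jsonpath variant could look like this, where tr turns the space-separated list into one name per line:

k config view -o jsonpath="{.contexts[*].name}" | tr " " "\n" > /opt/course/1/contexts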

The content should then look like:
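Based on the contexts used throughout this course it should contain something like:

k8s-c1-H
k8s-c2-AC
k8s-c3-CCC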

Next create the first command:
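A sketch of the first script:

# /opt/course/1/context_default_kubectl.sh
kubectl config current-context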

And the second one:
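And a sketch of the second, kubectl-free script:

# /opt/course/1/context_default_no_kubectl.sh
cat ~/.kube/config | grep current-context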

In the real exam you might need to filter and find information from bigger lists of resources, hence knowing a little jsonpath and simple bash filtering will be helpful.

The second command could also be improved to:

 

 

Question 2 | Schedule Pod on Master Node

Task weight: 3%

 

Use context: kubectl config use-context k8s-c1-H

 

Create a single Pod of image httpd:2.4.41-alpine in Namespace default. The Pod should be named pod1 and the container should be named pod1-container. This Pod should only be scheduled on a master node, do not add new labels to any nodes.

Briefly write the reason why Pods are by default not scheduled on master nodes into /opt/course/2/master_schedule_reason.

 

Answer:

First we find the master node(s) and their taints:
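For example (assuming the usual kubeadm master taint and labels):

k get node
k describe node cluster1-master1 | grep Taint        # e.g. node-role.kubernetes.io/master:NoSchedule
k get node cluster1-master1 --show-labels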

Next we create the Pod template:

Perform the necessary changes manually. Use the Kubernetes docs and search for example for tolerations and nodeSelector to find examples:
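The result could look like this sketch, assuming the default kubeadm master taint and node label node-role.kubernetes.io/master:

apiVersion: v1
kind: Pod
metadata:
  name: pod1
  namespace: default
spec:
  containers:
  - name: pod1-container
    image: httpd:2.4.41-alpine
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
  nodeSelector:
    node-role.kubernetes.io/master: ""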

Important here to add the toleration for running on master nodes, but also the nodeSelector to make sure it only runs on master nodes. If we only specify a toleration the Pod can be scheduled on master or worker nodes.

Now we create it:

Let's check if the pod is scheduled:

Finally the short reason why Pods are not scheduled on master nodes by default:

 

 

Question 3 | Scale down StatefulSet

Task weight: 1%

 

Use context: kubectl config use-context k8s-c1-H

 

There are two Pods named o3db-* in Namespace project-c13. C13 management asked you to scale the Pods down to one replica to save resources. Record the action.

 

Answer:

If we check the Pods we see two replicas:

From their name it looks like these are managed by a StatefulSet. But if we're not sure we could also check for the most common resources which manage Pods:

Confirmed, we have to work with a StatefulSet. To find this out we could also look at the Pod labels:

To fulfil the task we simply run:
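For example:

k -n project-c13 scale sts o3db --replicas 1 --record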

The --record created an annotation:

C13 Management is happy again.

 

 

Question 4 | Pod Ready if Service is reachable

Task weight: 4%

 

Use context: kubectl config use-context k8s-c1-H

 

Do the following in Namespace default. Create a single Pod named ready-if-service-ready of image nginx:1.16.1-alpine. Configure a LivenessProbe which simply runs true. Also configure a ReadinessProbe which does check if the url http://service-am-i-ready:80 is reachable, you can use wget -T2 -O- http://service-am-i-ready:80 for this. Start the Pod and confirm it isn't ready because of the ReadinessProbe.

Create a second Pod named am-i-ready of image nginx:1.16.1-alpine with label id: cross-server-ready. The already existing Service service-am-i-ready should now have that second Pod as endpoint.

Now the first Pod should be in ready state, confirm that.

 

Answer:

It's a bit of an anti-pattern for one Pod to check another Pod for being ready using probes, which is why the normally available readinessProbe.httpGet doesn't work for absolute remote URLs. Still the workaround requested in this task should show how probes and Pod<->Service communication works.

First we create the first Pod:

Next perform the necessary additions manually:
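A sketch of the resulting Pod yaml:

apiVersion: v1
kind: Pod
metadata:
  name: ready-if-service-ready
spec:
  containers:
  - name: ready-if-service-ready
    image: nginx:1.16.1-alpine
    livenessProbe:
      exec:
        command:
        - 'true'
    readinessProbe:
      exec:
        command:
        - sh
        - -c
        - 'wget -T2 -O- http://service-am-i-ready:80'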

Then create the Pod:

And confirm it's in a non-ready state:

We can also check the reason for this using describe:

Now we create the second Pod:

The already existing Service service-am-i-ready should now have an Endpoint:

Which will result in our first Pod being ready, just give it a minute for the Readiness probe to check again:

Look at these Pods coworking together!

 

 

Question 5 | Kubectl sorting

Task weight: 1%

 

Use context: kubectl config use-context k8s-c1-H

 

There are various Pods in all namespaces. Write a command into /opt/course/5/find_pods.sh which lists all Pods sorted by their AGE (metadata.creationTimestamp).

Write a second command into /opt/course/5/find_pods_uid.sh which lists all Pods sorted by field metadata.uid. Use kubectl sorting for both commands.

 

Answer:

A good resource here (and for many other things) is the kubectl cheat sheet. You can reach it fast when searching for "cheat sheet" in the Kubernetes docs.
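The two scripts could look like this (using the file paths from the task):

# /opt/course/5/find_pods.sh
kubectl get pod -A --sort-by=.metadata.creationTimestamp

# /opt/course/5/find_pods_uid.sh
kubectl get pod -A --sort-by=.metadata.uid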

And to execute:

For the second command:

And to execute:

 

 

Question 6 | Storage, PV, PVC, Pod volume

Task weight: 8%

 

Use context: kubectl config use-context k8s-c1-H

 

Create a new PersistentVolume named safari-pv. It should have a capacity of 2Gi, accessMode ReadWriteOnce, hostPath /Volumes/Data and no storageClassName defined.

Next create a new PersistentVolumeClaim in Namespace project-tiger named safari-pvc. It should request 2Gi storage, accessMode ReadWriteOnce and should not define a storageClassName. The PVC should be bound to the PV correctly.

Finally create a new Deployment safari in Namespace project-tiger which mounts that volume at /tmp/safari-data. The Pods of that Deployment should be of image httpd:2.4.41-alpine.

 

Answer

Find an example from https://kubernetes.io/docs and alter it:
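A sketch of the PersistentVolume:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: safari-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/Volumes/Data"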

Then create it:

Next the PersistentVolumeClaim:

Find an example from https://kubernetes.io/docs and alter it:
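A sketch of the PersistentVolumeClaim:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: safari-pvc
  namespace: project-tiger
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi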

Then create:

And check that both have the status Bound:

Next we create a Deployment and mount that volume:

Alter the yaml to mount the volume:
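A sketch; the app label and container name are just example values:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: safari
  namespace: project-tiger
  labels:
    app: safari
spec:
  replicas: 1
  selector:
    matchLabels:
      app: safari
  template:
    metadata:
      labels:
        app: safari
    spec:
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: safari-pvc
      containers:
      - name: httpd
        image: httpd:2.4.41-alpine
        volumeMounts:
        - name: data
          mountPath: /tmp/safari-data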

We can confirm it's mounting correctly:

 

 

Question 7 | Node and Pod Resource Usage

Task weight: 1%

 

Use context: kubectl config use-context k8s-c1-H

 

The metrics-server hasn't been installed yet in the cluster, but it's something that should be done soon. Your colleague would already like to know the kubectl commands to:

  1. show node resource usage
  2. show Pods and their containers' resource usage

Please write the commands into /opt/course/7/node.sh and /opt/course/7/pod.sh.

 

Answer:

The command we need to use here is top:

We see that the metrics server is not configured yet:

But we trust the kubectl documentation and create the first file:
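A sketch:

# /opt/course/7/node.sh
kubectl top node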

For the second file we might need to check the docs again:
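The docs show a --containers flag for kubectl top pod, so the second script could be:

# /opt/course/7/pod.sh
kubectl top pod --containers=true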

With this we can finish this task:

 

 

Question 8 | Get Master Information

Task weight: 2%

 

Use context: kubectl config use-context k8s-c1-H

 

Ssh into the master node with ssh cluster1-master1. Check how the master components kubelet, kube-apiserver, kube-scheduler, kube-controller-manager and etcd are started/installed on the master node. Also find out the name of the DNS application and how it's started/installed on the master node.

Write your findings into file /opt/course/8/master-components.txt. The file should be structured like:

Choices of [TYPE] are: not-installed, process, static-pod, pod

 

Answer:

We could start by finding processes of the requested components, especially the kubelet at first:

We can see which components are controlled via systemd looking at /etc/systemd/system directory:

This shows kubelet is controlled via systemd, but there is no other service named kube nor etcd. It seems that this cluster has been set up using kubeadm, so we check in the default manifests directory:

(The kubelet could also have a different manifests directory specified via parameter --pod-manifest-path in its systemd startup config)

This means the main 4 master services are setup as static Pods. There also seems to be a second scheduler kube-scheduler-special existing.

Actually, let's check all Pods running in the kube-system Namespace on the master node:

There we see the 5 static pods, with -cluster1-master1 as suffix.

We also see that the dns application seems to be coredns, but how is it controlled?

Seems like coredns is controlled via a Deployment. We combine our findings in the requested file:
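Based on the findings above the file could look like:

# /opt/course/8/master-components.txt
kubelet: process
kube-apiserver: static-pod
kube-scheduler: static-pod
kube-controller-manager: static-pod
etcd: static-pod
dns: pod coredns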

You should be comfortable investigating a running cluster, know different methods on how a cluster and its services can be setup and be able to troubleshoot and find error sources.

 

 

Question 9 | Kill Scheduler, Manual Scheduling

Task weight: 5%

 

Use context: kubectl config use-context k8s-c2-AC

 

Ssh into the master node with ssh cluster2-master1. Temporarily stop the kube-scheduler, this means in a way that you can start it again afterwards.

Create a single Pod named manual-schedule of image httpd:2.4-alpine, confirm it's created but not scheduled on any node.

Now you're the scheduler and have all its power, manually schedule that Pod on node cluster2-master1. Make sure it's running.

Start the kube-scheduler again and confirm it's running correctly by creating a second Pod named manual-schedule2 of image httpd:2.4-alpine and check if it's running on cluster2-worker1.

 

Answer:
Stop the Scheduler

First we find the master node:

Then we connect and check if the scheduler is running:

Kill the Scheduler (temporarily):
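Since it runs as a static Pod, one way (on cluster2-master1) is to move its manifest out of the manifests directory; the kubelet will then stop it:

cd /etc/kubernetes/manifests/
mv kube-scheduler.yaml ..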

And it should be stopped:

 

Create a Pod

Now we create the Pod:

And confirm it has no node assigned:

 

Manually schedule the Pod

Let's play the scheduler now:

The only thing a scheduler does is set the nodeName for a Pod declaration. How it finds the correct node to schedule on is a much more complicated matter that takes many variables into account.

As we cannot kubectl apply or kubectl edit in this case, we need to delete and re-create, or replace:
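A sketch, using 9.yaml as a scratch file name:

k get pod manual-schedule -o yaml > 9.yaml

# edit 9.yaml and add the node name under spec:
#   spec:
#     nodeName: cluster2-master1

k replace -f 9.yaml --force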

How does it look?

It looks like our Pod is running on the master now as requested, although no tolerations were specified. Only the scheduler takes taints/tolerations/affinity into account when finding the correct node name. That's why it's still possible to assign Pods manually directly to a master node and skip the scheduler.

 

Start the scheduler again

Check it's running:

Schedule a second test Pod:

Back to normal.

 

 

Question 10 | RBAC ServiceAccount Role RoleBinding

Task weight: 6%

 

Use context: kubectl config use-context k8s-c1-H

 

Create a new ServiceAccount processor in Namespace project-hamster. Create a Role and RoleBinding, both named processor as well. These should allow the new SA to only create Secrets and ConfigMaps in that Namespace.

 

Answer:
Let's talk a little about RBAC resources

A ClusterRole|Role defines a set of permissions and where it is available, in the whole cluster or just a single Namespace.

A ClusterRoleBinding|RoleBinding connects a set of permissions with an account and defines where it is applied, in the whole cluster or just a single Namespace.

Because of this there are 4 different RBAC combinations and 3 valid ones:

  1. Role + RoleBinding (available in single Namespace, applied in single Namespace)
  2. ClusterRole + ClusterRoleBinding (available cluster-wide, applied cluster-wide)
  3. ClusterRole + RoleBinding (available cluster-wide, applied in single Namespace)
  4. Role + ClusterRoleBinding (NOT POSSIBLE: available in single Namespace, applied cluster-wide)

To the solution

We first create the ServiceAccount:

Then for the Role:

So we execute:
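For example:

k -n project-hamster create role processor --verb=create --resource=secret --resource=configmap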

Which will create a Role like:

Now we bind the Role to the ServiceAccount:

So we create it:
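For example:

k -n project-hamster create rolebinding processor --role processor --serviceaccount project-hamster:processor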

This will create a RoleBinding like:

To test our RBAC setup we can use kubectl auth can-i:

Like this:
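A sketch:

k -n project-hamster auth can-i create secret --as system:serviceaccount:project-hamster:processor    # yes
k -n project-hamster auth can-i delete secret --as system:serviceaccount:project-hamster:processor    # no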

Done.

 

 

Question 11 | DaemonSet on all Nodes

Task weight: 4%

 

Use context: kubectl config use-context k8s-c1-H

 

Use Namespace project-tiger for the following. Create a DaemonSet named ds-important with image httpd:2.4-alpine and labels id=ds-important and uuid=18426a0b-5f59-4e10-923f-c0e078e82462. The Pods it creates should request 10 millicore cpu and 10 mebibyte memory. The Pods of that DaemonSet should run on all nodes, master and worker.

 

Answer:

As of now we aren't able to create a DaemonSet directly using kubectl, so we create a Deployment and just change it up:

(Sure you could also search for a DaemonSet example yaml in the Kubernetes docs and alter it.)

Then we adjust the yaml to:
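A sketch of the resulting DaemonSet, assuming the default kubeadm master taint node-role.kubernetes.io/master:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-important
  namespace: project-tiger
  labels:
    id: ds-important
    uuid: 18426a0b-5f59-4e10-923f-c0e078e82462
spec:
  selector:
    matchLabels:
      id: ds-important
      uuid: 18426a0b-5f59-4e10-923f-c0e078e82462
  template:
    metadata:
      labels:
        id: ds-important
        uuid: 18426a0b-5f59-4e10-923f-c0e078e82462
    spec:
      containers:
      - name: ds-important
        image: httpd:2.4-alpine
        resources:
          requests:
            cpu: 10m
            memory: 10Mi
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master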

It was requested that the DaemonSet runs on all nodes, so we need to specify the toleration for this.

Let's confirm:

 

 

Question 12 | Deployment on all Nodes

Task weight: 6%

 

Use context: kubectl config use-context k8s-c1-H

 

Use Namespace project-tiger for the following. Create a Deployment named deploy-important with label id=very-important (the Pods should also have this label) and 3 replicas. It should contain two containers, the first named container1 with image nginx:1.17.6-alpine and the second one named container2 with image kubernetes/pause.

There should be only ever one Pod of that Deployment running on one worker node. We have two worker nodes: cluster1-worker1 and cluster1-worker2. Because the Deployment has three replicas the result should be that on both nodes one Pod is running. The third Pod won't be scheduled, unless a new worker node is added.

In a way we kind of simulate the behaviour of a DaemonSet here, but using a Deployment and a fixed number of replicas.

 

Answer:

The idea here is that we create an "Inter-pod anti-affinity" which allows us to say a Pod should only be scheduled on a node where another Pod of a specific label (here the same label) is not already running.

Let's begin by creating the Deployment template:

Then change the yaml to:
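A sketch of the resulting Deployment with the anti-affinity added:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-important
  namespace: project-tiger
  labels:
    id: very-important
spec:
  replicas: 3
  selector:
    matchLabels:
      id: very-important
  template:
    metadata:
      labels:
        id: very-important
    spec:
      containers:
      - name: container1
        image: nginx:1.17.6-alpine
      - name: container2
        image: kubernetes/pause
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: id
                operator: In
                values:
                - very-important
            topologyKey: kubernetes.io/hostname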

Specify a topologyKey, which is a pre-populated Kubernetes node label; you can find these by describing a node.

Let's run it:

Then we check the Deployment status where it shows 2/3 ready count:

And running the following we see one Pod on each worker node and one not scheduled.

If we kubectl describe the Pod deploy-important-58db9db6fc-lnxdb it will show us that the reason for not scheduling is our implemented pod anti-affinity rule:

 

 

Question 13 | Multi Containers and Pod shared Volume

Task weight: 4%

 

Use context: kubectl config use-context k8s-c1-H

 

Create a Pod named multi-container-playground in Namespace default with three containers, named c1, c2 and c3. There should be a volume attached to that Pod and mounted into every container, but the volume shouldn't be persisted or shared with other Pods.

Container c1 should be of image nginx:1.17.6-alpine and have the name of the node its Pod is running on available as environment variable MY_NODE_NAME.

Container c2 should be of image busybox:1.31.1 and write the output of the date command every second in the shared volume into file date.log. You can use while true; do date >> /your/vol/path/date.log; sleep 1; done for this.

Container c3 should be of image busybox:1.31.1 and constantly send the content of file date.log from the shared volume to stdout. You can use tail -f /your/vol/path/date.log for this.

Check the logs of container c3 to confirm correct setup.

 

Answer:

First we create the Pod template:

And add the other containers and the commands they should execute:
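A sketch, using /vol as an arbitrary mount path and an emptyDir volume:

apiVersion: v1
kind: Pod
metadata:
  name: multi-container-playground
spec:
  volumes:
  - name: vol
    emptyDir: {}
  containers:
  - name: c1
    image: nginx:1.17.6-alpine
    env:
    - name: MY_NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    volumeMounts:
    - name: vol
      mountPath: /vol
  - name: c2
    image: busybox:1.31.1
    command:
    - sh
    - -c
    - while true; do date >> /vol/date.log; sleep 1; done
    volumeMounts:
    - name: vol
      mountPath: /vol
  - name: c3
    image: busybox:1.31.1
    command:
    - sh
    - -c
    - tail -f /vol/date.log
    volumeMounts:
    - name: vol
      mountPath: /vol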

Oh boy, lots of requested things. We check if everything is good with the Pod:

Good, then we check if container c1 has the requested node name as env variable:

And finally we check the logging:

 

 

Question 14 | Find out Cluster Information

Task weight: 2%

 

Use context: kubectl config use-context k8s-c1-H

 

You're asked to find out the following information about the cluster k8s-c1-H:

  1. How many master nodes are available?
  2. How many worker nodes are available?
  3. What is the Service CIDR?
  4. Which Networking (or CNI Plugin) is configured and where is its config file?
  5. Which suffix will static pods have that run on cluster1-worker1?

Write your answers into file /opt/course/14/cluster-info, structured like this:

 

Answer:
How many master and worker nodes are available?

We see one master and two workers.

 

What is the Service CIDR?

 

Which Networking (or CNI Plugin) is configured and where is its config file?

By default the kubelet looks into /etc/cni/net.d to discover the CNI plugins. This will be the same on every master and worker node.

 

Which suffix will static pods have that run on cluster1-worker1?

The suffix is the node hostname with a leading hyphen. It used to be -static in earlier Kubernetes versions.

 

Result

The resulting /opt/course/14/cluster-info could look like:

 

 

Question 15 | Cluster Event Logging

Task weight: 3%

 

Use context: kubectl config use-context k8s-c2-AC

 

Write a command into /opt/course/15/cluster_events.sh which shows the latest events in the whole cluster, ordered by time. Use kubectl for it.

Now kill the kube-proxy Pod running on node cluster2-worker1 and write the events this caused into /opt/course/15/pod_kill.log.

Finally kill the containerd container of the kube-proxy Pod on node cluster2-worker1 and write the events into /opt/course/15/container_kill.log.

Do you notice differences in the events both actions caused?

 

Answer:
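The events script could look like this, sorting by creation timestamp:

# /opt/course/15/cluster_events.sh
kubectl get events -A --sort-by=.metadata.creationTimestamp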

Now we kill the kube-proxy Pod:

Now check the events:

Write the events the killing caused into /opt/course/15/pod_kill.log:

Finally we will try to provoke events by killing the container belonging to the kube-proxy Pod:

We killed the main container (1e020b43c4423), but also noticed that a new container (0ae4245707910) was directly created. Thanks Kubernetes!

Now we see if this caused events again and we write those into the second file:

Comparing the events we see that when we deleted the whole Pod there were more things to be done, hence more events. For example, the DaemonSet was involved to re-create the missing Pod. Whereas when we manually killed the main container of the Pod, the Pod still existed and only its container needed to be re-created, hence fewer events.

 

 

Question 16 | Namespaces and Api Resources

Task weight: 2%

 

Use context: kubectl config use-context k8s-c1-H

 

Create a new Namespace called cka-master.

Write the names of all namespaced Kubernetes resources (like Pod, Secret, ConfigMap...) into /opt/course/16/resources.txt.

Find the project-* Namespace with the highest number of Roles defined in it and write its name and amount of Roles into /opt/course/16/crowded-namespace.txt.

 

Answer:
Namespace and Namespaces Resources

We create a new Namespace:

Now we can get a list of all resources like:
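For example:

k api-resources --namespaced -o name > /opt/course/16/resources.txt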

Which results in the file:

 

Namespace with most Roles

Finally we write the name and amount into the file:

 

 

Question 17 | Find Container of Pod and check info

Task weight: 3%

 

Use context: kubectl config use-context k8s-c1-H

 

In Namespace project-tiger create a Pod named tigers-reunite of image httpd:2.4.41-alpine with labels pod=container and container=pod. Find out on which node the Pod is scheduled. Ssh into that node and find the containerd container belonging to that Pod.

Using command crictl:

  1. Write the ID of the container and the info.runtimeType into /opt/course/17/pod-container.txt
  2. Write the logs of the container into /opt/course/17/pod-container.log

 

Answer:

First we create the Pod:

Next we find out the node it's scheduled on:

Then we ssh into that node and check the container info:
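A sketch; the container ID is a placeholder taken from the crictl ps output:

# on the node the Pod got scheduled on:
crictl ps | grep tigers-reunite
crictl inspect <container-id> | grep runtimeType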

Then we fill the requested file (on the main terminal):

Finally we write the container logs in the second file:

The &> in the above command redirects both standard output and standard error.

You could also simply run crictl logs on the node and copy the content manually, if it's not a lot. The file should look like:

 

 

Question 18 | Fix Kubelet

Task weight: 8%

 

Use context: kubectl config use-context k8s-c3-CCC

 

There seems to be an issue with the kubelet not running on cluster3-worker1. Fix it and confirm that cluster has node cluster3-worker1 available in Ready state afterwards. You should be able to schedule a Pod on cluster3-worker1 afterwards.

Write the reason of the issue into /opt/course/18/reason.txt.

 

Answer:

The procedure on tasks like these should be to check if the kubelet is running, if not start it, then check its logs and correct errors if there are some.

It's always helpful to check if other clusters already have some of the components defined and running, so you can copy and use existing config files. Though in this case it might not be necessary.

Check node status:

First we check if the kubelet is running:

Nope, so we check if it's configured as a systemd service:

Yes, it's configured as a service with config at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, but we see it's inactive. Let's try to start it:

We see it's trying to execute /usr/local/bin/kubelet with some parameters defined in its service config file. A good way to find errors and get more logs is to run the command manually (usually also with its parameters).

Another way would be to see the extended logging of a service like using journalctl -u kubelet.

Well, there we have it, wrong path specified. Correct the path in file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and run:
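For example:

systemctl daemon-reload
systemctl restart kubelet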

Also the node should be available for the api server, give it a bit of time though:

Finally we write the reason into the file:

 

 

Question 19 | Create Secret and mount into Pod

Task weight: 3%

 

Use context: kubectl config use-context k8s-c3-CCC

 

Do the following in a new Namespace secret. Create a Pod named secret-pod of image busybox:1.31.1 which should keep running for some time. It should be able to run on master nodes as well, create the proper toleration.

There is an existing Secret located at /opt/course/19/secret1.yaml, create it in the secret Namespace and mount it readonly into the Pod at /tmp/secret1.

Create a new Secret in Namespace secret called secret2 which should contain user=user1 and pass=1234. These entries should be available inside the Pod's container as environment variables APP_USER and APP_PASS.

Confirm everything is working.

 

Answer

First we create the Namespace and the requested Secrets in it:

We need to adjust the Namespace for that Secret:

Next we create the second Secret:
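For example:

k -n secret create secret generic secret2 --from-literal=user=user1 --from-literal=pass=1234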

Now we create the Pod template:

Then make the necessary changes:
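A sketch, assuming the default kubeadm master taint and that the Secret from /opt/course/19/secret1.yaml is named secret1:

apiVersion: v1
kind: Pod
metadata:
  name: secret-pod
  namespace: secret
spec:
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
  volumes:
  - name: secret1
    secret:
      secretName: secret1
  containers:
  - name: secret-pod
    image: busybox:1.31.1
    command:
    - sh
    - -c
    - sleep 1d
    env:
    - name: APP_USER
      valueFrom:
        secretKeyRef:
          name: secret2
          key: user
    - name: APP_PASS
      valueFrom:
        secretKeyRef:
          name: secret2
          key: pass
    volumeMounts:
    - name: secret1
      mountPath: /tmp/secret1
      readOnly: true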

It might not be necessary in current K8s versions to specify the readOnly: true because it's the default setting anyways.

And execute:

Finally we check if all is correct:

All is good.

 

 

Question 20 | Update Kubernetes Version and join cluster

Task weight: 10%

 

Use context: kubectl config use-context k8s-c3-CCC

 

Your coworker said node cluster3-worker2 is running an older Kubernetes version and is not even part of the cluster. Update Kubernetes on that node to the exact version that's running on cluster3-master1. Then add this node to the cluster. Use kubeadm for this.

 

Answer:
Upgrade Kubernetes to cluster3-master1 version

Search in the docs for kubeadm upgrade: https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade

Master node seems to be running Kubernetes 1.22.1 and cluster3-worker2 is not yet part of the cluster.

Here kubeadm is already installed in the wanted version, so we can run:

This is usually the proper command to upgrade a node. But this error means that this node was never even initialised, so nothing to update here. This will be done later using kubeadm join. For now we can continue with kubelet and kubectl:

Now we're up to date with kubeadm, kubectl and kubelet. Restart the kubelet:

We can ignore the errors and move on to the next step to generate the join command.

 

Add cluster3-worker2 to the cluster

First we log into the master1 and generate a new TLS bootstrap token, also printing out the join command:
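For example:

ssh cluster3-master1
kubeadm token create --print-join-command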

We see the expiration of 23h for our token; we could adjust this by passing the --ttl argument.

Next we connect again to worker2 and simply execute the join command:

If you have trouble with kubeadm join you might need to run kubeadm reset.

This looks great though for us. Finally we head back to the main terminal and check the node status:

Give it a bit of time till the node is ready.

We see cluster3-worker2 is now available and up to date.

 

 

Question 21 | Create a Static Pod and Service

Task weight: 2%

 

Use context: kubectl config use-context k8s-c3-CCC

 

Create a Static Pod named my-static-pod in Namespace default on cluster3-master1. It should be of image nginx:1.16-alpine and have resource requests for 10m CPU and 20Mi memory.

Then create a NodePort Service named static-pod-service which exposes that static Pod on port 80 and check if it has Endpoints and if it's reachable through the cluster3-master1 internal IP address. You can connect to the internal node IPs from your main terminal.

 

Answer:
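Static Pod manifests live in the kubelet manifests directory of that node, so a sketch would be:

ssh cluster3-master1
cd /etc/kubernetes/manifests/
kubectl run my-static-pod --image=nginx:1.16-alpine -o yaml --dry-run=client > my-static-pod.yaml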

Then edit the my-static-pod.yaml to add the requested resource requests:
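For example, inside the container spec:

    resources:
      requests:
        cpu: 10m
        memory: 20Mi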

 

And make sure it's running:

Now we expose that static Pod:
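For example (static Pods get the node name as suffix):

k expose pod my-static-pod-cluster3-master1 --name static-pod-service --type=NodePort --port 80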

This would generate a Service like:

Then run and test:

Looking good.

 

 

Question 22 | Check how long certificates are valid

Task weight: 2%

 

Use context: kubectl config use-context k8s-c2-AC

 

Check how long the kube-apiserver server certificate is valid on cluster2-master1. Do this with openssl or cfssl. Write the expiration date into /opt/course/22/expiration.

Also run the correct kubeadm command to list the expiration dates and confirm both methods show the same date.

Write the correct kubeadm command that would renew the apiserver server certificate into /opt/course/22/kubeadm-renew-certs.sh.

 

Answer:

First let's find that certificate:

Next we use openssl to find out the expiration date:
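A sketch, assuming the default kubeadm certificate path:

ssh cluster2-master1
openssl x509 -noout -text -in /etc/kubernetes/pki/apiserver.crt | grep Validity -A2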

There we have it, so we write it in the required location on our main terminal:

And we use the feature from kubeadm to get the expiration too:

Looking good. And finally we write the command that would renew the apiserver certificate into the requested location:
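A sketch of the script:

# /opt/course/22/kubeadm-renew-certs.sh
kubeadm certs renew apiserver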

 

 

Question 23 | Kubelet client/server cert info

Task weight: 2%

 

Use context: kubectl config use-context k8s-c2-AC

 

Node cluster2-worker1 has been added to the cluster using kubeadm and TLS bootstrapping.

Find the "Issuer" and "Extended Key Usage" values of the cluster2-worker1:

  1. kubelet client certificate, the one used for outgoing connections to the kube-apiserver.
  2. kubelet server certificate, the one used for incoming connections from the kube-apiserver.

Write the information into file /opt/course/23/certificate-info.txt.

Compare the "Issuer" and "Extended Key Usage" fields of both certificates and make sense of these.

 

Answer:

To find the correct kubelet certificate directory, we can look for the default value of the --cert-dir parameter for the kubelet. For this search for "kubelet" in the Kubernetes docs which will lead to: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet. We can check if another certificate directory has been configured using ps aux or in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.

First we check the kubelet client certificate:
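A sketch, assuming the default kubelet pki path:

ssh cluster2-worker1
openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet-client-current.pem | grep Issuer
openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet-client-current.pem | grep "Extended Key Usage" -A1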

Next we check the kubelet server certificate:
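Same approach for the server certificate:

openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet.crt | grep Issuer
openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet.crt | grep "Extended Key Usage" -A1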

We see that the server certificate was generated on the worker node itself and the client certificate was issued by the Kubernetes api. The "Extended Key Usage" also shows if it's for client or server authentication.

More about this: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping

 

 

Question 24 | NetworkPolicy

Task weight: 9%

 

Use context: kubectl config use-context k8s-c1-H

 

There was a security incident where an intruder was able to access the whole cluster from a single hacked backend Pod.

To prevent this create a NetworkPolicy called np-backend in Namespace project-snake. It should allow the backend-* Pods only to:

  • connect to db1-* Pods on port 1111
  • connect to db2-* Pods on port 2222

Use the app label of Pods in your policy.

After implementation, connections from backend-* Pods to vault-* Pods on port 3333 should for example no longer work.

 

Answer:

First we look at the existing Pods and their labels:

We test the current connection situation and see nothing is restricted:

Now we create the NP by copying and changing an example from the k8s docs:
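A sketch; the exact app label values (backend, db1, db2) are assumptions, check them with k -n project-snake get pod --show-labels:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np-backend
  namespace: project-snake
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: db1
    ports:
    - protocol: TCP
      port: 1111
  - to:
    - podSelector:
        matchLabels:
          app: db2
    ports:
    - protocol: TCP
      port: 2222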

The NP above has two rules with two conditions each, it can be read as:

 

Wrong example

Now let's shortly look at a wrong example:
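A sketch of such a wrong policy (same label assumptions as above):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np-backend
  namespace: project-snake
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: db1
    - podSelector:
        matchLabels:
          app: db2
    ports:
    - protocol: TCP
      port: 1111
    - protocol: TCP
      port: 2222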

The NP above has one rule with two conditions and two condition-entries each, it can be read as:

Using this NP it would still be possible for backend-* Pods to connect to db2-* Pods on port 1111, for example, which should be forbidden.

 

Create NetworkPolicy

We create the correct NP:

And test again:

Also helpful to use kubectl describe on the NP to see how k8s has interpreted the policy.

Great, looking more secure. Task done.

 

 

Question 25 | Etcd Snapshot Save and Restore

Task weight: 8%

 

Use context: kubectl config use-context k8s-c3-CCC

 

Make a backup of etcd running on cluster3-master1 and save it on the master node at /tmp/etcd-backup.db.

Then create a Pod of your kind in the cluster.

Finally restore the backup, confirm the cluster is still working and that the created Pod is no longer with us.

 

Answer:
Etcd Backup

First we log into the master and try to create a snapshot of etcd:

But it fails because we need to authenticate ourselves. For the necessary information we can check the etcd manifest:

We only check the etcd.yaml for the necessary information, we don't change it.

But we also know that the api-server is connecting to etcd, so we can check how its manifest is configured:

We use the authentication information and pass it to etcdctl:
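A sketch, assuming the default kubeadm etcd certificate paths seen in the manifest:

ETCDCTL_API=3 etcdctl snapshot save /tmp/etcd-backup.db \
  --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --cert /etc/kubernetes/pki/etcd/server.crt \
  --key /etc/kubernetes/pki/etcd/server.key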

 

NOTE: Don't use snapshot status because it can alter the snapshot file and render it invalid

 

Etcd restore

Now create a Pod in the cluster and wait for it to be running:

 

NOTE: If you didn't solve questions 18 or 20 and cluster3 doesn't have a ready worker node then the created pod might stay in a Pending state. This is still ok for this task.

 

Next we stop all controlplane components:
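One way is to move the static Pod manifests out of the manifests directory and wait until the containers are gone:

cd /etc/kubernetes/manifests/
mv * ..
watch crictl ps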

Now we restore the snapshot into a specific directory:
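For example:

ETCDCTL_API=3 etcdctl snapshot restore /tmp/etcd-backup.db --data-dir /var/lib/etcd-backup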

We could specify another host to make the backup from by using etcdctl --endpoints http://IP, but here we just use the default value which is: http://127.0.0.1:2379,http://127.0.0.1:4001.

The restored files are located at the new folder /var/lib/etcd-backup, now we have to tell etcd to use that directory:
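A sketch of the change in the (moved) etcd manifest:

vim /etc/kubernetes/etcd.yaml

# change the etcd-data hostPath volume to the restore directory:
#   volumes:
#   - hostPath:
#       path: /var/lib/etcd-backup
#       type: DirectoryOrCreate
#     name: etcd-data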

Now we move all controlplane yaml again into the manifest directory. Give it some time (up to several minutes) for etcd to restart and for the api-server to be reachable again:

Then we check again for the Pod:

Awesome, backup and restore worked as our pod is gone.

 

 

Extra Question 1 | Find Pods first to be terminated

Use context: kubectl config use-context k8s-c1-H

 

Check all available Pods in the Namespace project-c13 and find the names of those that would probably be terminated first if the nodes run out of resources (cpu or memory) to schedule all Pods. Write the Pod names into /opt/course/e1/pods-not-stable.txt.

 

Answer:

When available cpu or memory resources on the nodes reach their limit, Kubernetes will look for Pods that are using more resources than they requested. These will be the first candidates for termination. If some Pods' containers have no resource requests/limits set, then by default those are considered to use more than requested.

Kubernetes assigns Quality of Service classes to Pods based on the defined resources and limits, read more here: https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod

Hence we should look for Pods without resource requests defined. We can do this with a manual approach:

Or we do:

We see that the Pods of Deployment c13-3cc-runner-heavy don't have any resource requests specified. Hence our answer would be:

To automate this process you could use jsonpath like this:
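A sketch using jsonpath:

k -n project-c13 get pod -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.spec.containers[*].resources}{"\n"}{end}'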

This lists all Pod names and their requests/limits, hence we see the three Pods without those defined.

Or we look for the Quality of Service classes:

Here we see three with BestEffort, which Pods get that don't have any memory or cpu limits or requests defined.

A good practice is to always set resource requests and limits. If you don't know the values your containers should have you can find this out using metric tools like Prometheus. You can also use kubectl top pod or even kubectl exec into the container and use top and similar tools.

 

 

Extra Question 2 | Curl Manually Contact API

Use context: kubectl config use-context k8s-c1-H

 

There is an existing ServiceAccount secret-reader in Namespace project-hamster. Create a Pod of image curlimages/curl:7.65.3 named tmp-api-contact which uses this ServiceAccount. Make sure the container keeps running.

Exec into the Pod and use curl to access the Kubernetes Api of that cluster manually, listing all available secrets. You can ignore insecure https connection. Write the command(s) for this into file /opt/course/e4/list-secrets.sh.

 

Answer:

https://kubernetes.io/docs/tasks/run-application/access-api-from-pod

It's important to understand how the Kubernetes API works. For this it helps to connect to the api manually, for example using curl. You can find information fast by searching the Kubernetes docs for "curl api", for example.

First we create our Pod:

Add the service account name and Namespace:

Then run and exec into:

Once inside the container we can try to connect to the api using curl. The api is usually available via the Service named kubernetes in Namespace default (you should know how DNS resolution works across Namespaces). Otherwise we can find the endpoint IP via environment variables by running env.

So now we can do:

The last command shows 403 forbidden, this is because we are not passing any authorisation information. The Kubernetes Api Server thinks we are connecting as system:anonymous. We want to change this and connect using the Pod's ServiceAccount named secret-reader.

We find the token in the mounted folder at /var/run/secrets/kubernetes.io/serviceaccount, so we do:
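A sketch, run inside the container:

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -k https://kubernetes.default/api/v1/secrets -H "Authorization: Bearer ${TOKEN}"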

Now we're able to list all Secrets, authenticating as the ServiceAccount secret-reader under which our Pod is running.

To use encrypted https connection we can run:

For troubleshooting we could also check if the ServiceAccount is actually able to list Secrets using:

Finally write the commands into the requested location:

 

 

CKA Simulator Preview Kubernetes 1.22

https://killer.sh

This is a preview of the full CKA Simulator course content.

The full course contains 25 scenarios from all the CKA areas. The course also provides a browser terminal which is a very close replica of the original one. This is great for getting used to it and comfortable before the real exam. After the test session (120 minutes), or if you stop it early, you'll get access to all questions and their detailed solutions. You'll have 36 hours cluster access in total which means even after the session, once you have the solutions, you can still play around.

The following preview will give you an idea of what the full course will provide. These preview questions are in addition to the 25 of the full course. But the preview questions are part of the same CKA simulation environment which we set up for you, so with access to the full course you can solve these too.

The answers provided here assume that you did run the initial terminal setup suggestions as provided in the tips section, but especially:

 

These questions can be solved in the test environment provided through the CKA Simulator

 

Preview Question 1

Use context: kubectl config use-context k8s-c2-AC

The cluster admin asked you to find out the following information about etcd running on cluster2-master1:

  • Server private key location
  • Server certificate expiration date
  • Is client certificate authentication enabled

Write this information into /opt/course/p1/etcd-info.txt

Finally you're asked to save an etcd snapshot at /etc/etcd-snapshot.db on cluster2-master1 and display its status.

 

Answer:
Find out etcd information

Let's check the nodes:

First we check how etcd is set up in this cluster:

We see it's running as a Pod, more specifically a static Pod. So we check the default kubelet directory for static manifests:

So we look at the yaml and the parameters with which etcd is started:

We see that client authentication is enabled and we also have the requested path to the server private key. Now let's find out the expiration of the server certificate:

There we have it. Let's write the information into the requested file:

 

Create etcd snapshot

First we try:

We get the endpoint also from the yaml. But we need to specify more parameters, all of which we can find in the yaml declaration above:

This worked. Now we can output the status of the backup file:

The status shows:

  • Hash: 4d4e953
  • Revision: 7213
  • Total Keys: 1291
  • Total Size: 2.7 MB

 

 

Preview Question 2

Use context: kubectl config use-context k8s-c1-H

 

You're asked to confirm that kube-proxy is running correctly on all nodes. For this perform the following in Namespace project-hamster:

Create a new Pod named p2-pod with two containers, one of image nginx:1.21.3-alpine and one of image busybox:1.31. Make sure the busybox container keeps running for some time.

Create a new Service named p2-service which exposes that Pod internally in the cluster on port 3000->80.

Find the kube-proxy container on all nodes cluster1-master1, cluster1-worker1 and cluster1-worker2 and make sure that it's using iptables. Use command crictl for this.

Write the iptables rules of all nodes belonging to the created Service p2-service into file /opt/course/p2/iptables.txt.

Finally delete the Service and confirm that the iptables rules are gone from all nodes.

 

Answer:
Create the Pod

First we create the Pod:

Next we add the requested second container:

And we create the Pod:

 

Create the Service

Next we create the Service:

This will create a yaml like:

We should confirm Pods and Services are connected, hence the Service should have Endpoints.

 

Confirm kube-proxy is running and is using iptables

First we get nodes in the cluster:

The idea here is to log into every node, find the kube-proxy container and check its logs:

This should be repeated on every node and result in the same output Using iptables Proxier.

 

Check kube-proxy is creating iptables rules

Now we check the iptables rules on every node first manually:
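For example, on each node:

iptables-save | grep p2-service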

Great. Now let's write these rules into the requested file:

 

Delete the Service and confirm iptables rules are gone

Delete the Service:

And confirm the iptables rules are gone:

Done.

Kubernetes Services are implemented using iptables rules (with default config) on all nodes. Every time a Service has been altered, created, deleted or Endpoints of a Service have changed, the kube-apiserver contacts every node's kube-proxy to update the iptables rules according to the current state.

 

 

Preview Question 3

Use context: kubectl config use-context k8s-c2-AC

 

Create a Pod named check-ip in Namespace default using image httpd:2.4.41-alpine. Expose it on port 80 as a ClusterIP Service named check-ip-service. Remember/output the IP of that Service.

Change the Service CIDR to 11.96.0.0/12 for the cluster.

Then create a second Service named check-ip-service2 pointing to the same Pod to check if your settings did take effect. Finally check if the IP of the first Service has changed.

 

Answer:

Let's create the Pod and expose it:

And check the Pod and Service ips:

Now we change the Service CIDR on the kube-apiserver:
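A sketch of the change in the kube-apiserver manifest:

ssh cluster2-master1
vim /etc/kubernetes/manifests/kube-apiserver.yaml

# change the flag to the new range:
#   - --service-cluster-ip-range=11.96.0.0/12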

Give it a bit for the kube-apiserver and controller-manager to restart

Wait for the api to be up again:

 

 

Now we do the same for the controller manager:

Give it a bit for the controller-manager to restart.

We can check if it was restarted using crictl:

 

 

Checking our existing Pod and Service again:

Nothing changed so far. Now we create another Service like before:

And check again:

There we go, the new Service got an IP from the newly specified range assigned. We also see that both Services have our Pod as endpoint.

 

CKA Tips Kubernetes 1.22

In this section we'll provide some tips on how to handle the CKA exam and browser terminal.

 

Knowledge

Study all topics as proposed in the curriculum till you feel comfortable with all.

Resources

The majority of tasks in the CKA will also be around creating Kubernetes resources, like it's tested in the CKAD. So we suggest doing:

Components

  • The other part is understanding Kubernetes components and being able to fix and investigate clusters. Understand this: https://kubernetes.io/docs/tasks/debug-application-cluster/debug-cluster
  • When you have to fix a component (like kubelet) in one cluster, just check how it's set up on another node in the same or even another cluster. You can copy config files over, etc.
  • If you like you can look at Kubernetes The Hard Way once. But it's NOT necessary to do, the CKA is not that complex. But KTHW helps understanding the concepts
  • You should install your own cluster using kubeadm (one master, one worker) in a VM or using a cloud provider and investigate the components
  • Know how to use kubeadm to for example add nodes to a cluster
  • Know how to create an Ingress resource
  • Know how to snapshot/restore ETCD from another machine

General

Do 1 or 2 test sessions with this CKA Simulator. Understand the solutions and maybe try out other ways to achieve the same thing.

Set up your aliases, be fast and breathe kubectl

 

CKA Preparation

Read the Curriculum

https://github.com/cncf/curriculum

Read the Handbook

https://docs.linuxfoundation.org/tc-docs/certification/lf-candidate-handbook

Read the important tips

https://docs.linuxfoundation.org/tc-docs/certification/tips-cka-and-ckad

Read the FAQ

https://docs.linuxfoundation.org/tc-docs/certification/faq-cka-ckad

 

Kubernetes documentation

Get familiar with the Kubernetes documentation and be able to use the search. You can have one browser tab open with one of the allowed links: https://kubernetes.io/docs https://github.com/kubernetes https://kubernetes.io/blog

NOTE: You can have the other tab open as a separate window, this is why a big screen is handy

 

Deprecated commands

Make sure to not depend on deprecated commands as they might stop working at any time. When you execute a deprecated kubectl command a message will be shown, so you know which ones to avoid.

With kubectl version 1.18+ things have changed. It's no longer possible to use kubectl run to create Jobs, CronJobs or Deployments, only Pods still work. This makes things a bit more verbose when you for example need to create a Deployment with resource limits or multiple replicas.

What if we need to create a Deployment which has, for example, a resources section? We could use both kubectl run and kubectl create, then do some vim magic. Read more here.

 

The Test Environment / Browser Terminal

You'll be provided with a browser terminal which uses Ubuntu 20. The standard shells included with a minimal install of Ubuntu 20 will be available, including bash.

Lagging

There could be some lagging, definitely make sure you are using a good internet connection because your webcam and screen are uploading all the time.

Kubectl autocompletion and commands

Autocompletion is configured by default, as well as the k alias and others:

  • kubectl with k alias and Bash autocompletion
  • yq and jq for YAML/JSON processing
  • tmux for terminal multiplexing
  • curl and wget for testing web services
  • man and man pages for further documentation

Copy & Paste

There could be issues copying text (like pod names) from the left task information into the terminal. Some suggested to "hard" hit or long hold Cmd/Ctrl+C a few times to take action. Apart from that copy and paste should just work like in normal terminals.

Percentages and Score

There are 15-20 questions in the exam and 100% of total percentage to reach. Each question shows the % it gives if you solve it. Your results will be automatically checked according to the handbook. If you don't agree with the results you can request a review by contacting the Linux Foundation support.

Notepad & Skipping Questions

You have access to a simple notepad in the browser which can be used for storing any kind of plain text. It makes sense to use this for saving skipped question numbers and their percentages. This way it's possible to move some questions to the end. It might make sense to skip 2% or 3% questions and go directly to higher ones.

Contexts

You'll receive access to various different clusters and resources in each. They provide you the exact command you need to run to connect to another cluster/context. But you should be comfortable working in different namespaces with kubectl.

 

Your Desktop

You are allowed to have multiple monitors connected and have to share every monitor with the proctor. Having one large screen definitely helps as you’re only allowed one application open (Chrome Browser) with two tabs, one terminal and one k8s docs.

NOTE: You can have the other tab open as a separate window, this is why a big screen is handy

The questions will be on the left (default maybe ~30% space), the terminal on the right. You can adjust the size of the split though to your needs in the real exam.

If you use a laptop you could work with lid closed, external mouse+keyboard+monitor attached. Make sure you also have a webcam+microphone working.

You could also have both monitors, laptop screen and external, active. You might be asked that your webcam points straight into your face. So using an external screen and your laptop webcam could not be accepted. Just keep that in mind.

You have to be able to move your webcam around in the beginning to show your whole room and desktop. Have a clean desk with only the necessary on it. You can have a glass/cup with water without anything printed on.

In the end you should feel very comfortable with your setup.

 

Browser Terminal Setup

It should be considered to spend ~1 minute in the beginning to set up your terminal. In the real exam the vast majority of questions will be done from the main terminal. For a few you might need to ssh into another machine. Just be aware that configurations to your shell will not be transferred in this case.

Minimal Setup

Alias

The alias k for kubectl will be configured together with autocompletion. In case it's not, you can configure it yourself (the kubectl cheat sheet in the docs shows how).

Vim

Create the file ~/.vimrc with the following content:
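For example:

set tabstop=2
set expandtab
set shiftwidth=2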

The expandtab makes sure to use spaces for tabs. Memorize these settings and just type them in. You can't have any written notes with commands on your desktop etc.

Optional Setup

Fast dry-run output
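For example:

export do="--dry-run=client -o yaml"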

This way you can just run k run pod1 --image=nginx $do. Short for "dry output", but use whatever name you like.

Fast pod delete
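For example:

export now="--force --grace-period 0"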

This way you can run k delete pod1 $now and don't have to wait for ~30 seconds termination time.

Persist bash settings

You can store aliases and other setup in ~/.bashrc if you're planning on using different shells or tmux.

 

Be fast

Use the history command to reuse already entered commands or use even faster history search through Ctrl+r.

If a command takes some time to execute, like sometimes kubectl delete pod x, you can put the task in the background using Ctrl+z and pull it back into the foreground by running fg.

You can delete pods fast with:
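For example (the same flags as the $now alias above):

k delete pod x --grace-period 0 --force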

 

Vim

Be great with vim.

Toggle vim line numbers

When in vim you can press Esc and type :set number or :set nonumber followed by Enter to toggle line numbers. This can be useful when finding syntax errors based on line - but can be bad when wanting to mark&copy by mouse. You can also just jump to a line number with Esc :22 + Enter.

Copy&paste

Get used to copy/paste/cut with vim:

Indent multiple lines

In case not defined in .vimrc, to indent multiple lines press Esc and type :set shiftwidth=2.

First mark multiple lines using Shift v and the up/down keys. Then to indent the marked lines press > or <. You can then press . to repeat the action.

 

Split terminal screen

By default tmux is installed and can be used to split your one terminal into multiple. But just do this if you know your shit, because scrolling is different and copy&pasting might be weird.

https://www.hamvocke.com/blog/a-quick-and-easy-guide-to-tmux
