There was a security incident where an intruder was able to access the whole cluster from a single hacked backend Pod.

To prevent this, create a NetworkPolicy called np-backend in Namespace project-snake. It should allow the backend-* Pods only to:

- connect to db1-* Pods on port 1111
- connect to db2-* Pods on port 2222

Use the app label of Pods in your policy.

After implementation, connections from backend-* Pods to vault-* Pods on port 3333 should for example no longer work.
Answer:
First we look at the existing Pods and their labels:
➜ k -n project-snake get pod
NAME        READY   STATUS    RESTARTS   AGE
backend-0   1/1     Running   0          8s
db1-0       1/1     Running   0          8s
db2-0       1/1     Running   0          10s
vault-0     1/1     Running   0          10s

➜ k -n project-snake get pod -L app
NAME        READY   STATUS    RESTARTS   AGE     APP
backend-0   1/1     Running   0          3m15s   backend
db1-0       1/1     Running   0          3m15s   db1
db2-0       1/1     Running   0          3m17s   db2
vault-0     1/1     Running   0          3m17s   vault
We test the current connection situation and see nothing is restricted:
➜ k -n project-snake get pod -o wide
NAME        READY   STATUS    RESTARTS   AGE     IP           ...
backend-0   1/1     Running   0          4m14s   10.44.0.24   ...
db1-0       1/1     Running   0          4m14s   10.44.0.25   ...
db2-0       1/1     Running   0          4m16s   10.44.0.23   ...
vault-0     1/1     Running   0          4m16s   10.44.0.22   ...
➜ k -n project-snake exec backend-0 -- curl -s 10.44.0.25:1111
database one
➜ k -n project-snake exec backend-0 -- curl -s 10.44.0.23:2222
database two
➜ k -n project-snake exec backend-0 -- curl -s 10.44.0.22:3333
vault secret storage
Now we create the NP by copying and changing an example from the k8s docs:
vim 24_np.yaml
# 24_np.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np-backend
  namespace: project-snake
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Egress                      # policy is only about Egress
  egress:
    -                             # first rule
      to:                         # first condition "to"
        - podSelector:
            matchLabels:
              app: db1
      ports:                      # second condition "port"
        - protocol: TCP
          port: 1111
    -                             # second rule
      to:                         # first condition "to"
        - podSelector:
            matchLabels:
              app: db2
      ports:                      # second condition "port"
        - protocol: TCP
          port: 2222
The NP above has two rules with two conditions each; it can be read as:
allow outgoing traffic if:
(destination pod has label app=db1 AND port is 1111)
OR
(destination pod has label app=db2 AND port is 2222)
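Because each rule combines its "to" and "ports" conditions with AND, cross-combinations are blocked as well. Once the policy is created (further down), a check like the following, using db1-0's IP from the listing above with db2's port, should hang just like the vault test below:

➜ k -n project-snake exec backend-0 -- curl -s 10.44.0.25:2222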
Wrong example
Now let's briefly look at a wrong example:
# WRONG
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np-backend
  namespace: project-snake
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Egress
  egress:
    -                             # first rule
      to:                         # first condition "to"
        - podSelector:            # first "to" possibility
            matchLabels:
              app: db1
        - podSelector:            # second "to" possibility
            matchLabels:
              app: db2
      ports:                      # second condition "ports"
        - protocol: TCP           # first "ports" possibility
          port: 1111
        - protocol: TCP           # second "ports" possibility
          port: 2222
The NP above has one rule with two conditions, each carrying two entries; it can be read as:
allow outgoing traffic if:
(destination pod has label app=db1 OR destination pod has label app=db2)
AND
(destination port is 1111 OR destination port is 2222)
Using this NP it would still be possible for backend-* Pods to connect to db2-* Pods on port 1111, for example, which should be forbidden.
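To make that concrete: if the wrong policy were applied, the policy layer would not block an attempt like the following (db2-0's IP with db1's port), even though the task forbids it. Whether anything actually answers on that port is a separate question:

➜ k -n project-snake exec backend-0 -- curl -s 10.44.0.23:1111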
Create NetworkPolicy
We create the correct NP:
k -f 24_np.yaml create
And test again:
➜ k -n project-snake exec backend-0 -- curl -s 10.44.0.25:1111
database one
➜ k -n project-snake exec backend-0 -- curl -s 10.44.0.23:2222
database two
➜ k -n project-snake exec backend-0 -- curl -s 10.44.0.22:3333
^C
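The last call hangs because the policy silently drops the traffic, hence the ^C. As an optional convenience, curl's -m (--max-time) flag makes such checks return on their own after the given number of seconds:

➜ k -n project-snake exec backend-0 -- curl -s -m 2 10.44.0.22:3333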
It's also helpful to use kubectl describe on the NP to see how k8s has interpreted the policy.
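For example (the output shape varies by cluster, so only the command is shown):

➜ k -n project-snake describe netpol np-backend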
Great, looking more secure. Task done.