Closed tdigangi closed 2 years ago
Mm, no. I think either the CNI driver is buggy or something is wrong with your testing. I recommend reading the API docs or watching my talk.
The cluster used for testing was recently created with no other workloads running in it; however, it is possible that the CNI driver is buggy or something else is going on. My testing is pretty vanilla, but please see my tests below and point out anything that looks wrong.

Establish namespaces tenant-a and tenant-b, apply the NetworkPolicy to each namespace, then curl pods across namespaces.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-deployment-a
  namespace: tenant-a
spec:
  selector:
    matchLabels:
      greeting: hello
      version: one
  replicas: 3
  template:
    metadata:
      labels:
        greeting: hello
        version: one
    spec:
      containers:
      - name: hello-app
        image: "us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0"
        env:
        - name: "PORT"
          value: "50000"
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-a
  namespace: tenant-a
spec:
  type: NodePort
  selector:
    greeting: hello
    version: one
  ports:
  - protocol: TCP
    port: 60000
    targetPort: 50000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-deployment-b
  namespace: tenant-b
spec:
  selector:
    matchLabels:
      greeting: hello
      version: one
  replicas: 3
  template:
    metadata:
      labels:
        greeting: hello
        version: one
    spec:
      containers:
      - name: hello-app
        image: "us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0"
        env:
        - name: "PORT"
          value: "50000"
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-b
  namespace: tenant-b
spec:
  type: NodePort
  selector:
    greeting: hello
    version: one
  ports:
  - protocol: TCP
    port: 60000
    targetPort: 50000
```
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: tenant-a
  name: deny-from-other-namespaces
spec:
  podSelector:
    matchLabels:
  ingress:
  - from:
    - podSelector: {}
---
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-b
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces
  namespace: tenant-b
spec:
  podSelector:
    matchLabels:
  ingress:
  - from:
    - podSelector: {}
```
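For context on what test 1 is expected to show: a `from` entry containing only a `podSelector` is evaluated against pods in the policy's own namespace, so `podSelector: {}` should admit all same-namespace traffic and nothing cross-namespace. A rough sketch of that evaluation (illustrative Python, not the actual CNI or controller logic):

```python
# Illustrative sketch (not real controller code) of how a bare
# `podSelector: {}` peer in an ingress rule is evaluated.

def selector_matches(match_labels, labels):
    # An empty selector ({}) places no constraints, so it matches all pods.
    return all(labels.get(k) == v for k, v in (match_labels or {}).items())

def ingress_allowed(policy_ns, from_peers, src_ns, src_labels):
    for peer in from_peers:
        if "podSelector" in peer and "namespaceSelector" not in peer:
            # A podSelector without a namespaceSelector is evaluated
            # against pods in the policy's own namespace only.
            if src_ns == policy_ns and selector_matches(
                    peer["podSelector"], src_labels):
                return True
    return False

peers = [{"podSelector": {}}]  # the deny-from-other-namespaces rule

# Same-namespace traffic is admitted; cross-namespace traffic is not.
print(ingress_allowed("tenant-a", peers, "tenant-a", {"greeting": "hello"}))  # True
print(ingress_allowed("tenant-a", peers, "tenant-b", {"greeting": "hello"}))  # False
```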
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: client-job-a
  namespace: tenant-a
spec:
  template:
    spec:
      containers:
      - name: client
        image: byrnedo/alpine-curl
        command: ["/bin/sh", "-c", "curl -s http://${HELLO_WORLD_A_SERVICE_HOST}:${HELLO_WORLD_A_SERVICE_PORT}"]
      restartPolicy: Never
  backoffLimit: 4
---
apiVersion: batch/v1
kind: Job
metadata:
  name: client-job-b
  namespace: tenant-b
spec:
  template:
    spec:
      containers:
      - name: client
        image: byrnedo/alpine-curl
        # Explicitly hard code a hello-world-a pod IP or hello-world-a service IP
        command: ["/bin/sh", "-c", "curl -s http://[POD-IP or SERVICE-IP]:[PORT]"]
      restartPolicy: Never
  backoffLimit: 4
```
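As an aside, the `HELLO_WORLD_A_SERVICE_HOST` and `HELLO_WORLD_A_SERVICE_PORT` variables used by `client-job-a` are the Docker-links-style environment variables Kubernetes injects for each Service that exists when the pod starts. The naming rule is roughly (a sketch, not the kubelet's code):

```python
# Sketch of how Kubernetes derives service environment variable names:
# the service name is uppercased and dashes become underscores.
def service_env_vars(service_name):
    base = service_name.upper().replace("-", "_")
    return [f"{base}_SERVICE_HOST", f"{base}_SERVICE_PORT"]

print(service_env_vars("hello-world-a"))
# ['HELLO_WORLD_A_SERVICE_HOST', 'HELLO_WORLD_A_SERVICE_PORT']
```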
```
$ k get pods -n tenant-b
NAME                                        READY   STATUS    RESTARTS   AGE
client-job-b-fvmcb                          0/1     Pending   0          75s
hello-world-deployment-b-6854d5b96f-6xgfj   1/1     Running   0          2d20h
hello-world-deployment-b-6854d5b96f-9tkw6   1/1     Running   0          106m
hello-world-deployment-b-6854d5b96f-cfq8s   1/1     Running   0          106m
$ k logs client-job-b-fvmcb -n tenant-b
Hello, world!
Version: 1.0.0
Hostname: hello-world-deployment-a-6854d5b96f-4bsnm
$ k get pods -n tenant-a
NAME                                        READY   STATUS      RESTARTS   AGE
client-job-a-vzq7m                          0/1     Completed   0          2m23s
hello-world-deployment-a-6854d5b96f-4bsnm   1/1     Running     0          6m55s
hello-world-deployment-a-6854d5b96f-547q7   1/1     Running     0          6m55s
hello-world-deployment-a-6854d5b96f-sjksq   1/1     Running     0          6m55s
$ k logs client-job-a-vzq7m -n tenant-a
Hello, world!
Version: 1.0.0
Hostname: hello-world-deployment-a-6854d5b96f-547q7
```
```
$ kubectl delete networkpolicy deny-from-other-namespaces -n tenant-a
$ kubectl delete networkpolicy deny-from-other-namespaces -n tenant-b
$ k delete -f test-connectivity.yaml
job.batch "client-job-a" deleted
job.batch "client-job-b" deleted
$ k delete -f hello-world-tenant-a.yaml -f hello-world-tenant-b.yaml
deployment.apps "hello-world-deployment-a" deleted
service "hello-world-a" deleted
deployment.apps "hello-world-deployment-b" deleted
service "hello-world-b" deleted
```
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  namespace: tenant-a
  name: deny-from-other-namespaces
spec:
  podSelector:
    matchLabels:
  ingress:
  - from:
    - podSelector: {}
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: tenant-a
---
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-b
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  namespace: tenant-b
  name: deny-from-other-namespaces
spec:
  podSelector:
    matchLabels:
  ingress:
  - from:
    - podSelector: {}
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: tenant-b
```
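One subtlety in this variant: `- podSelector: {}` and `- namespaceSelector: ...` are two separate entries in the `from` list, so a source pod is admitted if it matches either one (OR); writing both keys inside a single list entry would instead require it to match both (AND). A rough sketch of the difference (illustrative Python with a hypothetical `role: client` label, not real controller code):

```python
# Illustrative sketch: separate items in a `from` list are ORed, while
# podSelector and namespaceSelector inside a single item are ANDed.

def matches(selector, labels):
    # An empty or omitted selector matches everything.
    return all(labels.get(k) == v for k, v in (selector or {}).items())

NS_LABELS = {"tenant-a": {"kubernetes.io/metadata.name": "tenant-a"},
             "tenant-b": {"kubernetes.io/metadata.name": "tenant-b"}}

def peer_admits(peer, policy_ns, src_ns, src_labels):
    if "namespaceSelector" in peer:
        ns_ok = matches(peer["namespaceSelector"], NS_LABELS[src_ns])
    else:
        # Without a namespaceSelector, the peer is scoped to the
        # policy's own namespace.
        ns_ok = src_ns == policy_ns
    pod_ok = matches(peer.get("podSelector"), src_labels)
    return ns_ok and pod_ok

def allowed(peers, src_ns, src_labels):
    return any(peer_admits(p, "tenant-a", src_ns, src_labels) for p in peers)

tenant_a_ns = {"kubernetes.io/metadata.name": "tenant-a"}

# Two list items (as in the policy above): OR semantics.
separate = [{"podSelector": {"role": "client"}},     # role: client is hypothetical
            {"namespaceSelector": tenant_a_ns}]

# One combined item: AND semantics.
combined = [{"podSelector": {"role": "client"},
             "namespaceSelector": tenant_a_ns}]

# An unlabeled pod in tenant-a:
print(allowed(separate, "tenant-a", {}))  # True: the namespaceSelector item admits it
print(allowed(combined, "tenant-a", {}))  # False: it must also match role=client
```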
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: client-job-a
  namespace: tenant-a
spec:
  template:
    spec:
      containers:
      - name: client
        image: byrnedo/alpine-curl
        command: ["/bin/sh", "-c", "curl -s http://${HELLO_WORLD_A_SERVICE_HOST}:${HELLO_WORLD_A_SERVICE_PORT}"]
      restartPolicy: Never
  backoffLimit: 4
---
apiVersion: batch/v1
kind: Job
metadata:
  name: client-job-b
  namespace: tenant-b
spec:
  template:
    spec:
      containers:
      - name: client
        image: byrnedo/alpine-curl
        # Explicitly hard code a hello-world-a pod IP or hello-world-a service IP for TEST 2
        command: ["/bin/sh", "-c", "curl -s http://[POD-IP or SERVICE-IP]:[PORT]"]
      restartPolicy: Never
  backoffLimit: 4
```
```
$ kubectl get pods -n tenant-a
NAME                                        READY   STATUS      RESTARTS   AGE
client-job-a-5l7tl                          0/1     Completed   0          2m9s
hello-world-deployment-a-6854d5b96f-6g888   1/1     Running     0          5m37s
hello-world-deployment-a-6854d5b96f-djcqr   1/1     Running     0          5m37s
hello-world-deployment-a-6854d5b96f-lkxh9   1/1     Running     0          5m37s
$ k logs client-job-a-5l7tl -n tenant-a
Hello, world!
Version: 1.0.0
Hostname: hello-world-deployment-a-6854d5b96f-lkxh9
$ kubectl get pods -n tenant-b
NAME                                        READY   STATUS      RESTARTS   AGE
client-job-b-9sz2b                          0/1     Completed   0          2m30s
hello-world-deployment-b-6854d5b96f-7qsnv   1/1     Running     0          5m58s
hello-world-deployment-b-6854d5b96f-bd9n9   1/1     Running     0          5m58s
hello-world-deployment-b-6854d5b96f-m8g4q   1/1     Running     0          5m58s
$ k logs client-job-b-9sz2b -n tenant-b
Error from server: Get "https://10.10.XX.XX:10250/containerLogs/tenant-b/client-job-b-9sz2b/client": dial tcp 10.10.XX.XXX:10250: connect: connection refused
```
Provisioned a new GKE cluster (v1.21.6-gke.1503) and the existing example appears to be functioning properly; I'm uncertain what the original cluster's issue was. The empty podSelector selects all pods in the namespace.
Issue

04-deny-traffic-from-other-namespaces.md does not isolate namespace traffic as described. It is missing a namespaceSelector to isolate pods to their respective namespaces.

GKE Versions

1.21.6-gke.1500 and 1.20.12-gke.1500

Corrected Network Policy

(See the policy with the added namespaceSelector above.)