okd-project / okd

The self-managing, auto-upgrading, Kubernetes distribution for everyone
https://okd.io
Apache License 2.0

NetworkPolicy - deny-all policy does not correctly restrict traffic to pod when using nodeports #1175

Closed: thurcombe closed this issue 1 month ago

thurcombe commented 2 years ago

Describe the bug

Network policy does not appear to work as expected/documented.

In a UPI cluster running OpenShiftSDN in the default NetworkPolicy mode, I have two nodes:

worker-a -> 10.60.0.52
worker-b -> 10.60.0.53

After deploying a simple unprivileged webserver pod, it is scheduled to worker-b. A NodePort service is published and allocated port 32213.
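For reference, here is a minimal sketch of the kind of manifests involved. The names, image and container port are assumptions for illustration (the issue only mentions a simple unprivileged webserver pod and the allocated NodePort 32213), not the exact manifests used:

kind: Pod
apiVersion: v1
metadata:
  name: web                 # hypothetical name
  labels:
    app: web
spec:
  containers:
    - name: web
      image: nginxinc/nginx-unprivileged   # assumed unprivileged webserver image, listens on 8080
      ports:
        - containerPort: 8080
---
kind: Service
apiVersion: v1
metadata:
  name: web                 # hypothetical name
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 32213       # port reported in the issue; normally auto-allocated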

Curling either of the node IP addresses at port 32213 returns content from the pod as expected.

Next, I implement the following NetworkPolicy to deny all ingress traffic by default:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-by-default
spec:
  podSelector: {}
  ingress: []
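
For comparison, the upstream Kubernetes documentation expresses the same default-deny-ingress policy with policyTypes spelled out and the ingress list omitted; either form should deny all ingress to pods in the namespace:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-by-default
spec:
  podSelector: {}
  policyTypes:
    - Ingress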

I would now expect that curling the NodePort on either node would result in no connectivity to the pod, but what actually happens is:

Curling worker-a (which is not running the pod) exhibits the expected behaviour: the connection times out (SYN packets are seen on the node interface, but no SYN-ACK is ever sent).

Curling worker-b results in a response from the pod, which is not expected.

Version

Behaviour replicated on 4.9.0-0.okd-2021-12-12-025847 and 4.10.0-0.okd-2022-03-07-131213. Both are UPI.

How reproducible

100%

openshift-bot commented 2 years ago

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-bot commented 2 years ago

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale