aws / amazon-network-policy-controller-k8s


Ingress rule with named port not resolving to numerical port on target Pod in PolicyEndpoint #81

Open yndai opened 8 months ago

yndai commented 8 months ago

What happened:

Summary: When creating a NetworkPolicy with an ingress rule that allows traffic to a named port on a target pod, the ingress rule is not created in the resulting PolicyEndpoint (and traffic is not allowed either).

Additional details:

What you expected to happen:

When specifying a named port in a NetworkPolicy rule, the port should be mapped to the corresponding numerical port on the target Pod, if available.
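For reference, this is the standard Kubernetes named-port semantics: the name is looked up among the container ports of the Pod the policy selects, and the matching containerPort number is used. A minimal Go sketch of that lookup (a hypothetical helper built on the k8s.io/api types, not this controller's actual code):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// resolveNamedPort is a hypothetical helper (not the controller's code)
// showing the expected semantics: a named port in a NetworkPolicy rule is
// looked up in the container ports of the Pod the policy targets.
func resolveNamedPort(pod *corev1.Pod, name string) (int32, bool) {
	for _, c := range pod.Spec.Containers {
		for _, p := range c.Ports {
			if p.Name == name {
				return p.ContainerPort, true
			}
		}
	}
	return 0, false
}

func main() {
	// Target Pod from the repro below: web-service=80, s-web-service=443.
	target := &corev1.Pod{
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name: "nginx-container",
				Ports: []corev1.ContainerPort{
					{Name: "web-service", ContainerPort: 80},
					{Name: "s-web-service", ContainerPort: 443},
				},
			}},
		},
	}
	port, ok := resolveNamedPort(target, "web-service")
	fmt.Println(port, ok) // expected: 80 true
}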

How to reproduce it (as minimally and precisely as possible):

Example:

kubectl apply -f the following:

apiVersion: v1
kind: Pod
metadata:
  name: target
  labels:
    app: target
spec:
  containers:
  - name: nginx-container
    image: nginx:latest
    ports:
    - containerPort: 80
      name: web-service # Named port
    - containerPort: 443
      name: s-web-service # Named port
---
apiVersion: v1
kind: Pod
metadata:
  name: source
  labels:
    app: source
spec:
  containers:
  - name: nginx-container
    image: nginx:latest
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-traffic
spec:
  podSelector:
    matchLabels:
      app: target
  policyTypes:
  - Ingress
  ingress:
    - from:
      - ipBlock:
          cidr: 172.17.0.0/16
      ports:
      - protocol: TCP
        port: web-service # Named port
    - from:
      - podSelector:
          matchLabels:
            app: source
      ports:
      - protocol: TCP
        port: s-web-service # Named port

We expect this in the resulting PolicyEndpoint:

apiVersion: networking.k8s.aws/v1alpha1
kind: PolicyEndpoint
metadata:
  name: allow-web-traffic-fjz4q
 [...]
spec:
  ingress:
  - cidr: 172.17.0.0/16
    ports:
    - port: 80
      protocol: TCP
  - cidr: <source Pod IP>
    ports:
    - port: 443
      protocol: TCP
[...]

But instead got this:

apiVersion: networking.k8s.aws/v1alpha1
kind: PolicyEndpoint
metadata:
  name: allow-web-traffic-fjz4q
 [...]
spec:
  ingress: []
[...]

Anything else we need to know?:

Another oddity: if the source Pod itself defines the same named ports that are referenced in the ingress rule, then the rule that specifies the source Pod actually works as expected!

e.g. if you delete the source pod in the above example and apply this instead:

apiVersion: v1
kind: Pod
metadata:
  name: source
  labels:
    app: source
spec:
  containers:
  - name: nginx-container
    image: nginx:latest
    ports:
    - containerPort: 8000 # Irrelevant
      name: web-service   # Named port
    - containerPort: 8443 # Irrelevant
      name: s-web-service # Named port

We now get:

apiVersion: networking.k8s.aws/v1alpha1
kind: PolicyEndpoint
metadata:
  name: allow-web-traffic-fjz4q
 [...]
spec:
  ingress:
  - cidr: <source Pod IP>
    ports:
    - port: 443
      protocol: TCP
[...]

The CIDR (ipBlock) rule is still missing, however. This makes me think that, when mapping named ports to their numerical values, there is an incorrect check against the source Pod instead of the target Pod.
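To make that suspicion concrete, here is a minimal Go sketch (purely illustrative, with made-up variables; not the controller's actual code) of how resolving the named port against the source Pod rather than the target Pod would produce exactly the behavior observed above:

package main

import "fmt"

func main() {
	// Named container ports declared by each Pod in the repro above
	// (illustrative maps, not how the controller stores them).
	targetPorts := map[string]int32{"web-service": 80, "s-web-service": 443}
	sourcePorts := map[string]int32{} // the original source Pod declares no ports

	// Expected: resolve the rule's named port against the target Pod.
	if p, ok := targetPorts["s-web-service"]; ok {
		fmt.Println("rule emitted with port", p) // 443
	}

	// Suspected: resolve against the source Pod instead; the name is not
	// found, so the ingress rule is silently dropped (ingress: []).
	if _, ok := sourcePorts["s-web-service"]; !ok {
		fmt.Println("named port not found on source Pod -> rule dropped")
	}
	// This would also explain why adding the same port names to the source
	// Pod (even with unrelated numbers like 8443) makes the rule appear.
}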

Environment:

aballman commented 8 months ago

https://github.com/aws/amazon-network-policy-controller-k8s/issues/81 https://github.com/aws/amazon-network-policy-controller-k8s/issues/71

achevuru commented 7 months ago

Moving this to the Network Policy controller repo. The fix for this issue in the NP controller is currently being rolled out to all clusters. Will update here once the rollout is complete.

yndai commented 6 months ago

Update: I see our managed cluster has been updated to v1.27.10-eks-508b6b3 and the managed platform version to eks.13. In the reproduction example from my post, the from.podSelector rule is now working; however, the from.ipBlock rule still does not work with a named port. Replacing port: web-service with port: 80 produces a correct PolicyEndpoint.

Specifically, from the above example:

Expected:

apiVersion: networking.k8s.aws/v1alpha1
kind: PolicyEndpoint
metadata:
  name: allow-web-traffic-fjz4q
 [...]
spec:
  ingress:
  - cidr: 172.17.0.0/16
    ports:
    - port: 80
      protocol: TCP
  - cidr: <source Pod IP>
    ports:
    - port: 443
      protocol: TCP
[...]

But got:

apiVersion: networking.k8s.aws/v1alpha1
kind: PolicyEndpoint
metadata:
  name: allow-web-traffic-fjz4q
 [...]
spec:
  ingress:
  - cidr: <source Pod IP>
    ports:
    - port: 443
      protocol: TCP
[...]

achevuru commented 6 months ago

@yndai Any particular reason behind specifying a static Pod IP as opposed to using Pod/Namespace selectors? Pod IPs are ephemeral, so I'm trying to understand the use case here. Right now, we don't support named ports when Pod IPs are specified as static IPs in the NP resource (and we also don't support Service VIPs for static Pod IPs).

yndai commented 6 months ago

@achevuru I think you might be misunderstanding my example (correct me if I am wrong):

In my NetworkPolicy I have an ingress rule for a CIDR range (not a Pod IP):

  ingress:
    - from:
      - ipBlock:
          cidr: 172.17.0.0/16
      ports:
      - protocol: TCP
        port: web-service # Named port on target pod = 80

In the PolicyEndpoint I expect a rule like this:

spec:
  ingress:
  - cidr: 172.17.0.0/16
    ports:
    - port: 80
      protocol: TCP

But I get:

spec:
  ingress: []

So the CIDR ingress rule is not created at all. However, if I change the NetworkPolicy rule to target the numeric port, it works:

That is, with this ingress rule for the same CIDR range (not a Pod IP):

  ingress:
    - from:
      - ipBlock:
          cidr: 172.17.0.0/16
      ports:
      - protocol: TCP
        port: 80

Basically, the issue persists for ipBlock rules, but was fixed for podSelector rules.

achevuru commented 6 months ago

@yndai Understood. We should be able to extend named port support to ingress rules, at a minimum for ipBlock, but I would like to call out that if the CIDR range you provide is meant to capture the cluster's Pod IP range, it will run into issues if the Pod to which the NP applies tries to use the Service VIPs of those Pods (egress rules).

yndai commented 6 months ago

@achevuru Thank you, and yes, I understand the caveats.

haouc commented 6 months ago

@yndai thanks for reporting this. After discussing internally, and given the caveats with using IPs that @achevuru mentioned, we will add named port support for ipBlock rules for Ingress only. The change will be made by one of our team members soon. Thanks again for testing this so quickly!

yndai commented 6 months ago

Another note for anyone else who may run into this: for rules that allow all traffic to specific ports, e.g.

ingress:
  - ports:
    - port: https
      protocol: TCP

Under the hood this maps to a CIDR rule on ::/0, so named ports on such rules also do not work.