aws / aws-network-policy-agent


Network Policy Rule Evaluation Blocks Traffic to DNS Server #326

Open junzebao opened 3 weeks ago

junzebao commented 3 weeks ago

What happened: I created a new EKS cluster and enabled network policy in the VPC CNI, but I realized the following NetworkPolicy blocks DNS resolution requests. It worked when I used Calico as the network plugin.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: curl
  namespace: default
spec:
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
          - 10.0.0.0/8
          - 172.16.0.0/12
          - 192.168.0.0/16
  - ports:
      - protocol: UDP
        port: 53
      - port: 53
        protocol: TCP
  policyTypes:
    - Egress
  podSelector:
    matchLabels:
      app: curl

Attach logs

What you expected to happen: The first rule blocks DNS requests to kube-dns (172.20.0.10 in our case, which falls under the 172.16.0.0/12 except range), but the second rule, which allows port 53 over UDP and TCP to any destination, should allow the request.

How to reproduce it (as minimally and precisely as possible): Create a pod with the label app: curl in the default namespace, matching the NetworkPolicy above, on an EKS cluster with network policy enabled in the VPC CNI. I attached this configuration to the EKS addon: { "enableNetworkPolicy": "true" }.
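A minimal reproduction sketch (the image name and test host are illustrative choices, not from the original report):

```shell
# Create a pod matching the policy's podSelector
# (curlimages/curl is an illustrative image choice).
kubectl run curl --image=curlimages/curl --labels=app=curl \
  --restart=Never -n default -- sleep 3600

# With the policy applied, DNS resolution should fail: curl reports
# "Could not resolve host" instead of connecting.
kubectl exec -n default curl -- curl -sv https://example.com
```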

Anything else we need to know?:

Environment:

orsenthil commented 3 weeks ago

The first rule would block DNS requests to kube-dns (172.20.0.10 in our case), but the second rule should allow the request.

If you enable network policy event logs, do you see the toggle from accept to block? Looking at the event logs can shed more details on what is happening with these evaluation rules.
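For reference, policy event logs can be enabled through the VPC CNI managed addon configuration; the key names below are taken from the addon's configuration schema and are worth double-checking against your addon version:

```shell
# Enable network policy event logging on the node agent
# (configuration key assumed from the vpc-cni addon schema).
aws eks update-addon \
  --cluster-name <cluster-name> \
  --addon-name vpc-cni \
  --configuration-values '{"nodeAgent": {"enablePolicyEventLogs": "true"}}'
```

The node agent then records per-flow verdicts in /var/log/aws-routed-eni/network-policy-agent.log on each worker node.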

Pavani-Panakanti commented 6 days ago

The network policy controller currently creates policy endpoint rules such that, when rules conflict, DENY takes precedence over ALLOW. Per the upstream recommendation, such cases should resolve to ALLOW. This needs a change in the Network Policy Controller; we are looking into it and will post an update on the fix here.
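The precedence difference can be illustrated with a toy model (this is not the agent's actual eBPF logic, just the two precedence schemes applied to the reporter's two egress rules): under deny-precedence the except-CIDR match on rule 1 overrides the port-53 allow from rule 2, while under upstream semantics any matching allow wins.

```python
import ipaddress

# Toy model of the two egress rules from the reported NetworkPolicy.
# ports=None means the rule matches all ports/protocols.
RULES = [
    # Rule 1: allow all IPv4 except the private ranges.
    {"cidr": "0.0.0.0/0",
     "except": ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"],
     "ports": None},
    # Rule 2: allow DNS (UDP/TCP 53) to any destination.
    {"cidr": "0.0.0.0/0", "except": [],
     "ports": [("UDP", 53), ("TCP", 53)]},
]

def rule_verdict(rule, ip, proto, port):
    """Return 'ALLOW', 'DENY' (hit an except CIDR), or None (no match)."""
    addr = ipaddress.ip_address(ip)
    if addr not in ipaddress.ip_network(rule["cidr"]):
        return None
    if rule["ports"] is not None and (proto, port) not in rule["ports"]:
        return None
    if any(addr in ipaddress.ip_network(c) for c in rule["except"]):
        return "DENY"
    return "ALLOW"

def evaluate(ip, proto, port, deny_wins):
    """Combine per-rule verdicts under either precedence scheme."""
    verdicts = [v for r in RULES if (v := rule_verdict(r, ip, proto, port))]
    if not verdicts:
        return "DENY"  # isolated pod: default deny when nothing matches
    if deny_wins:      # current controller behavior described above
        return "DENY" if "DENY" in verdicts else "ALLOW"
    # Upstream semantics: an except CIDR only means "this rule does not
    # match", so any matching ALLOW permits the traffic.
    return "ALLOW" if "ALLOW" in verdicts else "DENY"

# kube-dns at 172.20.0.10 falls inside the 172.16.0.0/12 except block.
print(evaluate("172.20.0.10", "UDP", 53, deny_wins=True))   # DENY (observed)
print(evaluate("172.20.0.10", "UDP", 53, deny_wins=False))  # ALLOW (upstream)
```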

Policy endpoint generated for the above network policy

dev-dsk-pavanipt-2a-0981017d % kubectl describe policyendpoint curl-4mhvr
Name:         curl-4mhvr
Namespace:    test1
Labels:       <none>
Annotations:  <none>
API Version:  networking.k8s.aws/v1alpha1
Kind:         PolicyEndpoint
Metadata:
  Creation Timestamp:  2024-11-12T23:06:46Z
  Generate Name:       curl-
  Generation:          1
  Owner References:
    API Version:           networking.k8s.io/v1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  NetworkPolicy
    Name:                  curl
    UID:                   acc0d91d-e6ab-46fe-a8bb-aad64e6eaaec
  Resource Version:        1073029
  UID:                     7415e7fe-6711-4df3-8664-ba5ae73d71a1
Spec:
  Egress:
    Cidr:  ::/0
    Ports:
      Port:      53
      Protocol:  UDP
      Port:      53
      Protocol:  TCP
    Cidr:        0.0.0.0/0
    Except:
      10.0.0.0/8
      172.16.0.0/12
      192.168.0.0/16
    Ports:
      Port:      53
      Protocol:  UDP
      Port:      53
      Protocol:  TCP
  Pod Isolation:
    Egress
  Pod Selector:
    Match Labels:
      App:  tester2
  Pod Selector Endpoints:
    Host IP:    192.168.21.1
    Name:       tester2-55c7c875f-xx96w
    Namespace:  test1
    Pod IP:     192.168.15.193
  Policy Ref:
    Name:       curl
    Namespace:  test1
Events:         <none>

For your specific case, adding the explicit CIDR "10.0.0.0/8" to the second rule should fix the issue. Let us know if this works for you as a workaround.
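A sketch of that workaround applied to the second egress rule, making the DNS allow an explicit ipBlock rather than an implicit allow-all destination (illustrative only; pick the CIDR that actually contains your cluster's DNS service IP, e.g. a 172.20.0.0/16 service range would need that CIDR instead of 10.0.0.0/8):

```yaml
  # Second egress rule with an explicit destination CIDR, so the
  # ALLOW no longer conflicts with the except-based DENY in rule 1.
  - to:
    - ipBlock:
        cidr: 10.0.0.0/8
    ports:
      - protocol: UDP
        port: 53
      - protocol: TCP
        port: 53
```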