jtblin / kube2iam

kube2iam provides different AWS IAM roles for pods running on Kubernetes
BSD 3-Clause "New" or "Revised" License

kube2iam breaks network policy allowing traffic to EC2 metadata ip #308

Open · SmithJosh opened this issue 3 years ago

SmithJosh commented 3 years ago

Description:

I have a K8s cluster with a network policy that denies all traffic by default. Some pods in the cluster need access to the EC2 instance metadata, so there is a network policy that explicitly allows that. After installing kube2iam, that policy stopped working and the pods can no longer reach the EC2 instance metadata. Other policies still work fine; the problem only occurs when the policy uses the 169.254.169.254 IP.

Steps to reproduce

  1. Create a namespace
    $ kubectl create ns policytest
  2. Create default-deny network policy
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny-all
      namespace: policytest
    spec:
      podSelector: {}
      policyTypes:
      - Ingress
      - Egress
  3. Create a policy allowing access to the EC2 metadata ip
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-ec2-instance-metadata-retrieval
      namespace: policytest
    spec:
      podSelector:
        matchLabels:
          run: access
      policyTypes:
      - Egress
      # See https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html
      egress:
      - to:
        - ipBlock:
            cidr: 169.254.169.254/32
  4. Start a new pod in the policytest namespace
    $ kubectl run --namespace=policytest access --rm -ti --image busybox /bin/sh
  5. Run the following command to verify EC2 instance metadata is accessible
    # wget -O- http://169.254.169.254/

    You should see a response.

  6. Install kube2iam. I used the Helm chart with the following values
    host:
      iptables: true
      interface: "cali+"
    rbac:
      create: true
    podSecurityPolicy:
      enabled: true
  7. Open another pod (or reuse the one from step 4) and again attempt to hit the EC2 metadata IP
    # wget -O- http://169.254.169.254/

    This time the request hangs.
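
The hang in step 7 is consistent with how kube2iam intercepts metadata traffic when `host.iptables` is enabled: the agent installs a NAT rule on each node that redirects pod traffic destined for 169.254.169.254:80 to the kube2iam process on the host network. A sketch of that rule, adapted from the kube2iam README to the Calico interface used above (8181 is kube2iam's default app port; `$HOST_IP` stands for the node's primary IP):

    # Sketch of the PREROUTING rule kube2iam adds with host.iptables: true.
    # Traffic leaving pod interfaces (cali+) for the metadata IP is DNAT'ed
    # to the kube2iam agent on the node before it can reach 169.254.169.254.
    iptables \
      --table nat \
      --append PREROUTING \
      --protocol tcp \
      --destination 169.254.169.254 \
      --dport 80 \
      --in-interface cali+ \
      --jump DNAT \
      --to-destination "$HOST_IP:8181"   # $HOST_IP: the node's primary IP

Because NAT PREROUTING runs before the filter rules where Calico evaluates egress policy, the policy engine sees the rewritten destination (the host IP and port 8181), not 169.254.169.254, so an `ipBlock` allowing 169.254.169.254/32 no longer matches.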

Versions

Kubernetes: v1.19.7 (client), v1.20.4 (server)
Deployment method: RKE 1.2.6
CNI provider: Calico v3.17.2
kube2iam version: 2.6.0 (Helm chart version)

SmithJosh commented 3 years ago

Interestingly, if I change the CIDR in the allow policy to the IP of the host machine (eth0), the policy works again.

...
  egress:
    - to:
        - ipBlock:
            cidr: $HOST_IP

I assume this is related to the fact that kube2iam sets hostNetwork: true on the DaemonSet. Is there any way to get around this, either by using hostNetwork: false or by somehow still allowing the EC2 metadata IP in policies? I don't want to grant egress to the whole internet or make my policies aware of my host IPs.

YourTechBud commented 3 years ago

Any solution or recommendation to solve this issue? I'm facing a similar problem in my cluster.

aufomin commented 3 years ago

Is there any workaround for this issue?
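
Given that kube2iam DNATs metadata traffic to the agent on the node, one possible stopgap is a policy that allows egress to the node address range and kube2iam's port rather than to 169.254.169.254. This is only a sketch and does make the policy aware of host IPs, which SmithJosh wanted to avoid; 10.0.0.0/16 and 8181 are placeholders for the actual node subnet and kube2iam's configured app port:

    # Hypothetical workaround policy: names, CIDR, and port are placeholders.
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-kube2iam-metadata
      namespace: policytest
    spec:
      podSelector:
        matchLabels:
          run: access
      policyTypes:
      - Egress
      egress:
      - to:
        - ipBlock:
            cidr: 10.0.0.0/16   # placeholder: the cluster's node subnet
        ports:
        - protocol: TCP
          port: 8181            # kube2iam's default app port

Restricting the port keeps the allowance narrower than opening the whole host, though it still couples the policy to the cluster's node addressing.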