Open SmithJosh opened 3 years ago
Interestingly, if I change the CIDR in the allow policy to the ip of the host machine (eth0), the policy works again.
...
egress:
- to:
  - ipBlock:
      cidr: $HOST_IP
I assume this is related to the fact that kube2iam sets hostNetwork: true
on the daemonset. Is there any way to get around this, e.g. by using hostNetwork: false
or by somehow still supporting the EC2 metadata IP in policies? I don't want to have to grant access to the whole internet or make the policies aware of my host IPs.
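For anyone else hitting this: kube2iam (when run with --iptables=true) adds an iptables rule on each node that redirects traffic destined for 169.254.169.254 to its own pod on the host network, so by the time the CNI evaluates the policy the effective destination is the node itself. A sketch of the host-IP workaround, with a placeholder CIDR (your node subnet will differ):

```yaml
# Sketch only: allow egress to the node subnet instead of the metadata IP,
# since kube2iam redirects metadata traffic to a listener on the host network.
# 10.0.0.0/16 is a placeholder for your nodes' CIDR.
egress:
- to:
  - ipBlock:
      cidr: 10.0.0.0/16
```

The obvious downside, as noted above, is that the policy now has to know your host IP range.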
Any solution or recommendation for this issue? I'm facing a similar problem in my cluster.
Is there any workaround for this issue?
Description:
I have a K8s cluster with a network policy which denies all traffic by default. Some pods in the cluster need access to the EC2 instance metadata, so there's a network policy which explicitly allows that. After installing kube2iam, that policy stopped working and the pods can no longer access the EC2 instance metadata. Other policies still work fine. It's only an issue when using the 169.254.169.254 IP in the policy.

Steps to reproduce
policytest
namespace

You should see a response.

This time the request will hang.
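For reference, the pair of policies described in the setup above might look roughly like this; the metadata names are illustrative, not taken from the issue:

```yaml
# Illustrative default-deny egress policy for the namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
spec:
  podSelector: {}
  policyTypes:
  - Egress
---
# Illustrative allow rule for the EC2 instance metadata endpoint,
# which stops matching once kube2iam redirects the traffic on the node
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ec2-metadata
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 169.254.169.254/32
```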
Versions
Kubernetes: v1.19.7 (client), v1.20.4 (server)
Deployment method: RKE 1.2.6
CNI provider: Calico v3.17.2
kube2iam version: 2.6.0 (Helm chart version)