aws / amazon-vpc-cni-k8s

Networking plugin repository for pod networking in Kubernetes using Elastic Network Interfaces on AWS
Apache License 2.0

Issue fetching logs for pods running in different CIDR range under one VPC #2022

Closed bhargavamin closed 2 years ago

bhargavamin commented 2 years ago

I'm setting up a dual-stack VPC with multiple CIDRs, in which I created an EKS cluster using a Terraform module and self-managed nodes.

I'm facing a very peculiar issue where kubectl exec and kubectl logs only succeed against pods on nodes from one of the 3 CIDR ranges attached to the VPC. I suspect this issue could be related to iptables rules or VPC CNI settings.

CIDR ranges attached to the VPC:

The cluster and nodes are launched in private subnets. IPv6 internet traffic egresses through an egress-only internet gateway (EIGW), and IPv4 traffic goes through a NAT gateway.
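For reference, the egress routing is set up roughly like this in Terraform (resource and route table names below are placeholders, not my actual config):

    resource "aws_egress_only_internet_gateway" "eigw" {
      vpc_id = aws_vpc.main.id
    }

    # IPv6 default route from the private subnets to the EIGW
    resource "aws_route" "ipv6_egress" {
      route_table_id              = aws_route_table.private.id
      destination_ipv6_cidr_block = "::/0"
      egress_only_gateway_id      = aws_egress_only_internet_gateway.eigw.id
    }

    # IPv4 default route from the private subnets to the NAT gateway
    resource "aws_route" "ipv4_egress" {
      route_table_id         = aws_route_table.private.id
      destination_cidr_block = "0.0.0.0/0"
      nat_gateway_id         = aws_nat_gateway.nat.id
    }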

I have checked the following things:

Versions:

AWS Support Case ID: 10315548691

If required, I can provide the debug output of the VPC CNI troubleshooting script.

jayanthvn commented 2 years ago

Hi, the CNI is responsible only for setting up routing for pod-to-pod communication, so no CNI setting would impact this. This doesn't look like a CNI issue. Since you have already opened a case, we will look into why kubectl is not working for the additional CIDRs.

jayanthvn commented 2 years ago

Since you already have a case opened, we will check if the CIDR is allow-listed. Will close this issue for now.

github-actions[bot] commented 2 years ago

⚠️COMMENT VISIBILITY WARNING⚠️

Comments on closed issues are hard for our team to see. If you need more assistance, please open a new issue that references this one. If you wish to keep having a conversation with other community members under this issue feel free to do so.

bhargavamin commented 2 years ago

I was able to fix the issue. It's a node AMI config issue.

When you have self-managed nodes, you need to explicitly pass some parameters to the /etc/eks/bootstrap.sh script so that it can support an IPv6 EKS cluster.

Adding bootstrap_extra_args = "--ip-family ipv6 --service-ipv6-cidr fc00::/7" fixed the issue; see the sketch below for where it goes.
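A minimal sketch of where that lands in a terraform-aws-modules/eks config (assuming a v18-style module layout; the cluster name and node group key are placeholders):

    module "eks" {
      source = "terraform-aws-modules/eks/aws"

      cluster_name      = "my-ipv6-cluster"   # placeholder
      cluster_ip_family = "ipv6"

      self_managed_node_groups = {
        default = {
          # Without these flags bootstrap.sh configures the node for IPv4
          # only, which is what broke kubectl exec/logs on those nodes.
          bootstrap_extra_args = "--ip-family ipv6 --service-ipv6-cidr fc00::/7"
        }
      }
    }

The --service-ipv6-cidr value should match the service CIDR your cluster actually uses; fc00::/7 is what worked here.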

Ref: https://github.com/terraform-aws-modules/terraform-aws-eks/issues/1958