@joejulian this indicates an issue in the IPAM daemon. What do the IPAM logs on the node show (/var/log/aws-routed-eni/ipamd.log)? And have you tried the latest release, v1.12.6, against this?
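For reference, a minimal sketch of how those logs can be pulled from the affected node. Paths assume a standard EKS AMI layout; adjust if your install differs:

```sh
# Tail the IPAM daemon log the maintainers asked about
sudo tail -n 200 /var/log/aws-routed-eni/ipamd.log

# Or generate a full debug bundle with the CNI's bundled log collector
# (script location assumes a standard EKS AMI)
sudo bash /opt/cni/bin/aws-cni-support.sh
```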
I emailed the debug bundle. I have not tried a different CNI version.
Responded via email
Upgrading to v1.12.6 fixed this.
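For anyone landing here later, a hedged sketch of one upgrade path for a self-managed CNI add-on; the manifest URL follows the amazon-vpc-cni-k8s README convention, so verify it against the release notes for your cluster version first:

```sh
# Apply the v1.12.6 manifest for the aws-node daemonset
kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/v1.12.6/config/master/aws-k8s-cni.yaml
```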
Comments on closed issues are hard for our team to see. If you need more assistance, please open a new issue that references this one. If you wish to keep having a conversation with other community members under this issue feel free to do so.
What happened: When I reboot a worker node, the aws-cni pod for that node fails to connect.
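A quick way to confirm the symptom after a reboot (standard kubectl commands; k8s-app=aws-node is the daemonset's stock label, and the pod name below is a placeholder):

```sh
# Find the aws-node pod scheduled on the rebooted node
kubectl -n kube-system get pods -l k8s-app=aws-node -o wide

# Inspect its events and the logs from the failed container
kubectl -n kube-system describe pod <failing-aws-node-pod>
kubectl -n kube-system logs <failing-aws-node-pod> --previous
```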
What you expected to happen: I expected the CNI to start and function correctly.
How to reproduce it (as minimally and precisely as possible): Reboot a worker node running the ubuntu-eks/k8s_1.23/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20230430 AMI.
Anything else we need to know?:
Environment:
- Kubernetes version (use kubectl version):
- CNI Version: v1.10.4
- OS (e.g: cat /etc/os-release):
- Kernel (e.g. uname -a):
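If it is unclear which CNI version a cluster is actually running, it can be read off the aws-node daemonset's image tag (standard kubectl, nothing repo-specific):

```sh
# Prints something like ...amazon-k8s-cni:v1.10.4
kubectl -n kube-system get daemonset aws-node \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```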