After upgrading to EKS 1.22, we noticed multiple Kube2iam pods stuck in Pending state: the nodes are running out of CPU because the other microservices land on them first. We are not sure why the microservices win the race onto the nodes before kube2iam gets initiated.
We have tried multiple options, including upgrading the kube2iam Helm chart and implementing node taints (wish/nodetaint). Can someone please suggest any solutions for us to look into further? Thanks.
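For context, here is a minimal sketch of one mitigation along these lines: giving the kube2iam DaemonSet a dedicated high PriorityClass so the scheduler can preempt lower-priority microservice pods on a full node. The class name and value below are illustrative placeholders, not our actual config:

```yaml
# Assumption: a custom PriorityClass for kube2iam so its DaemonSet pods
# outrank ordinary workloads (which default to priority 0) and can be
# scheduled even when a node's CPU is already claimed.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: kube2iam-critical    # hypothetical name
value: 1000000               # must exceed the priority of competing workloads
globalDefault: false
description: "Lets kube2iam pods preempt ordinary microservices on full nodes."
---
# The DaemonSet pod spec would then reference it (Helm values permitting):
# spec:
#   template:
#     spec:
#       priorityClassName: kube2iam-critical
```

Whether this is set via the chart's values or a manual patch depends on the kube2iam Helm chart version in use.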
Pods getting no headroom on the node:

kube2iam-78chr 0/1 Pending 0 105m

Event:

Warning FailedScheduling 115s (x97 over 95m) default-scheduler 0/26 nodes are available: 1 Insufficient cpu, 25 node(s) didn't match Pod's node affinity/selector.
I have attached the pod-level config: kube2iam.docx