Closed: keoren3 closed this issue 2 days ago
I was able to fix this; I'll elaborate on what I did.

In the EC2NodeClass amiSelectorTerms, I selected the AMI by name instead of by ID:

Name: amazon-eks-node-1.31-*
# Instead of the id: ami-00710ab8f493e2428

To be honest, I'm not sure exactly what made it work; it may have been more than one thing.
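For reference, the change above corresponds roughly to this EC2NodeClass fragment; the metadata name and the amiFamily are illustrative, only the amiSelectorTerms values come from this issue:

```yaml
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: default            # illustrative name
spec:
  amiFamily: AL2           # assumed; use whatever your cluster actually runs
  amiSelectorTerms:
    # Select the AMI by name pattern instead of a hard-coded ID:
    - name: amazon-eks-node-1.31-*
    # - id: ami-00710ab8f493e2428   # the previous, hard-coded selector
```

Selecting by name pattern lets Karpenter pick a current AMI for the Kubernetes minor version instead of pinning one image forever.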
Description
Observed Behavior: Karpenter launches a new EC2 instance, but it never joins the EKS cluster. Instead it is stuck in status 'Unknown': "Cannot disrupt NodeClaim: state node doesn't contain both a node and a nodeclaim"
Expected Behavior: The node joins the EKS cluster
Reproduction Steps (Please include YAML): EC2NodeClass + Nodepool
Extra info: I'm trying to replace my Auto-Scaler with Karpenter. I gave the nodes the exact same:
I've added the required roles to aws-auth:
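For nodes launched by Karpenter to join, the aws-auth ConfigMap needs a mapRoles entry for the node role. A minimal sketch of what that entry looks like (the account ID and role name below are placeholders, not values from this issue):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # Placeholder ARN; substitute the instance role your EC2NodeClass uses
    - rolearn: arn:aws:iam::111122223333:role/KarpenterNodeRole-my-cluster
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
```

The rolearn must be the instance role (not an instance profile ARN), and the username/groups must match exactly; a typo here produces precisely the "node never joins" symptom described above.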
I've connected to the EKS worker node and ran:
journalctl -u kubelet
but no entries were added there. I tried changing the role's name, adding permissions to the role, and adding permissions to the security group. Nothing helped; the nodes just refuse to connect.
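When debugging this kind of stuck NodeClaim, a few diagnostic commands usually narrow it down; these are a sketch (they need access to the cluster and the instance, and `<nodeclaim-name>` is a placeholder):

```shell
# Inspect the stuck NodeClaim and its status conditions/events
kubectl get nodeclaims -o wide
kubectl describe nodeclaim <nodeclaim-name>

# On the instance itself, the bootstrap log is often more telling
# than kubelet's journal when the node never registers:
sudo journalctl -u kubelet --no-pager
sudo cat /var/log/cloud-init-output.log
```

If the kubelet journal is empty, the bootstrap likely failed before kubelet ever started, which points at the AMI/user data rather than at IAM or tags.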
Karpenter logs:
(No errors AFAIK)
The auto-scaler still works, though: when I raise its deployment's replicas to 1, everything behaves as expected.
Is this a bug, or am I missing something?
I've looked at all the other topics about this; every solution amounts to "oh, I missed some tag". I've checked all the tags again and again, and that's not the issue.
Any help would be great.
Versions:
Chart Version: 1.0.8
Kubernetes Version (kubectl version): 1.31.3 (EKS)