ariretiarno opened this issue 1 month ago
Do you have logs that you can share from around when the terminations happen, showing what Karpenter is "marking the node as" when it rolls them? Are the nodes getting marked as drifted? Are they expiring? It's tough to tell from what you shared above, since those are only the launch logs, so a more verbose, longer log dump would help out a lot here.
This issue has been inactive for 14 days. StaleBot will close this stale issue after 14 more days of inactivity.
Description
Observed Behavior: Karpenter replaces all of the nodes in my node pool at 11:50 PM (UTC), which takes my app down. Even after changing the node pool's disruption settings to `whenEmpty` and `consolidateAfter: Never`, Karpenter still replaces all of the nodes.

Expected Behavior: Karpenter should not replace all of the nodes.
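For reference, the disruption settings described above would sit in the NodePool spec roughly like this. This is only a sketch against the `v1beta1` API that ships with Karpenter v0.32; the NodePool/EC2NodeClass names are placeholders, and the `expireAfter: Never` line is an assumption about how one might rule out node expiry as the cause of the nightly replacement:

```yaml
# Sketch of a v1beta1 NodePool disruption block (Karpenter v0.32).
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default                  # placeholder name
spec:
  disruption:
    consolidationPolicy: WhenEmpty
    consolidateAfter: Never      # do not consolidate, even empty nodes
    expireAfter: Never           # assumption: disables expiry-driven rotation
  template:
    spec:
      nodeClassRef:
        name: default            # hypothetical EC2NodeClass name
```

Note that even with consolidation disabled, drift and expiry can still cause Karpenter to replace nodes, which is why the controller logs around the replacement window matter here.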
Reproduction Steps (Please include YAML):
EC2 Nodeclass
Logs
Versions:
Chart Version: karpenter-v0.32.9
Kubernetes Version (`kubectl version`): v1.28.9-eks-036c24b

- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- Please do not leave "+1" or "me too" comments; they generate extra noise for issue followers and do not help prioritize the request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment