Open · diranged opened this issue 5 days ago
This issue is currently awaiting triage.
If Karpenter contributors determine this is a relevant issue, they will accept it by applying the `triage/accepted` label and provide further guidance.
The `triage/accepted` label can be added by org members by writing `/triage accepted` in a comment.
Description
What problem are you trying to solve? Sometimes nodes simply become `NotReady` for a variety of reasons (bad cloud-provider instance, unresponsive kubelet, etc.). When a node has been in a `Ready` state and then transitions into `NotReady`, I think Karpenter should have another disruption controller that monitors for these nodes and terminates them. Third-party controllers, such as the Spot.io Ocean product and the Cluster Autoscaler, both handle nodes that become `NotReady` automatically; Karpenter should be able to do the same thing. (Note: we have also raised this with our AWS TAM via a support ticket, and we were recommended to open a feature request here.)
Related: https://github.com/kubernetes-sigs/karpenter/issues/1573
How important is this feature to you?
This is actually a blocker for us migrating off of our current tools: we launch enough nodes, and see enough failures throughout the day, that we cannot fully migrate unless we have a completely automated self-healing system where these nodes get cycled out once they become `NotReady`. (Separate but related is the ongoing discussion at https://github.com/bottlerocket-os/bottlerocket/issues/4075 about EKS nodes becoming unready under heavy memory pressure.)