Open fullykubed opened 1 month ago
This issue is currently awaiting triage.
If Karpenter contributors determine this is a relevant issue, they will accept it by applying the `triage/accepted` label and provide further guidance.
The `triage/accepted` label can be added by org members by writing `/triage accepted` in a comment.
Also see https://github.com/kubernetes/enhancements/issues/4212 (Declarative node maintenance)
Description
What problem are you trying to solve?
Core Kubernetes supports graceful node shutdown, which provides some ordering to node draining during a shutdown operation. This is helpful when you have `system-node-critical` pods that provide key capabilities, such as log collection or networking, to all the other pods on the node. Kubernetes' graceful node shutdown logic provides a mechanism to ensure that all normal pods are terminated before the critical pods are terminated.

Karpenter implements its own draining logic (configurable via the NodePool's `terminationGracePeriod` field) which differs from how Kubernetes shuts down nodes, and this can lead to differences in how pods are terminated when a node is disrupted. This is important to address for two reasons:
- Right now it is more difficult than necessary to create an orderly shutdown using Karpenter, as all pods are terminated at once regardless of their priority class.
- Because this logic differs from how Kubernetes shuts down nodes, users must maintain multiple mental models for how node termination works, which adds operational complexity.
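For reference, the kubelet expresses this ordering with two fields in its `KubeletConfiguration`; the values below are illustrative only:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Total time the node delays shutdown to drain pods.
shutdownGracePeriod: 60s
# Portion of that window reserved for critical pods: regular pods
# are terminated during the first 40s, critical pods in the final 20s.
shutdownGracePeriodCriticalPods: 20s
```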
I'd recommend the following:

Since the NodePool's `terminationGracePeriod` field already loosely maps to the kubelet's `shutdownGracePeriod`, add another field called `terminationGracePeriodCriticalPods` that maps to the kubelet's `shutdownGracePeriodCriticalPods`.

Additionally, I'd recommend that the implementation leave the door open to potentially adding the enhanced pod-priority-based graceful node shutdown in the future.
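A rough sketch of what the proposed NodePool API could look like; note that `terminationGracePeriodCriticalPods` is hypothetical here (it is the field this issue proposes, not something Karpenter supports today):

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      # Existing field: total time Karpenter allows for draining
      # before force-terminating remaining pods.
      terminationGracePeriod: 60s
      # Proposed (hypothetical) field: time reserved at the end of the
      # drain for system-node-critical pods, mirroring the kubelet's
      # shutdownGracePeriodCriticalPods.
      terminationGracePeriodCriticalPods: 20s
```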
How important is this feature to you?