kubernetes-sigs / karpenter

Karpenter is a Kubernetes Node Autoscaler built for flexibility, performance, and simplicity.
Apache License 2.0

Pod eviction will cause service interruption #1674

Open andyblog opened 2 months ago

andyblog commented 2 months ago

Description

Observed Behavior:

When all replicas of a Deployment are on the same node, for example a Deployment with 2 pods both scheduled on that node, the 2 pods are evicted together when the node is terminated. From the time the 2 pods are evicted until the replacement pods are created and running on a new node, the Deployment has no pods to serve traffic. The same thing happens when a Deployment has only one replica.
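
For illustration only, a minimal Deployment like the one below (names and image are hypothetical) can end up with both replicas on one node, since nothing constrains how the scheduler spreads them:

```yaml
# Hypothetical example: a 2-replica Deployment with no spread constraints
# or PodDisruptionBudget. Both pods may land on the same node, so
# terminating that node evicts every replica at once.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: app
          image: nginx   # placeholder image
```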

Expected Behavior:

During eviction, a check could be made here: if all replicas of the Deployment are on this node, or if the Deployment has only one replica, restarting the Deployment would be more graceful than evicting its pods. A restart first creates a pod on the new node, waits for the new pod to become ready, and only then terminates the old pod, which reduces the service interruption time.
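
As a sketch of what can already limit this today (a workaround, not the requested feature): a PodDisruptionBudget makes the eviction API refuse to evict the last remaining replica until a replacement is ready elsewhere, and a topology spread constraint makes co-locating both replicas less likely in the first place. Resource names below are hypothetical:

```yaml
# Hypothetical mitigation: a PodDisruptionBudget so at most one replica
# can be disrupted at a time; evicting the remaining pod is blocked
# until a replacement pod is ready on another node.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-app-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: example-app
```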

Reproduction Steps (Please include YAML):

Versions:

k8s-ci-robot commented 2 months ago

This issue is currently awaiting triage.

If a Karpenter contributor determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.
njtran commented 2 months ago

> This operation will first create a pod on the new node, wait for the new pod to run successfully, and then terminate the old pod, which will reduce service interruption time.

As I understand it, this is how it currently works. Can you share reproduction steps that show what you're describing?

andyblog commented 2 months ago

The current behavior is:

  1. A Spot or on-demand instance is terminated for various reasons
  2. The Node starts to be deleted and the finalizer logic begins to run
  3. Pods on that node start to be evicted

I think that when all replicas of a Deployment are on this node, restarting is more graceful than evicting, because the service is not interrupted during a restart; a sketch of that create-before-terminate behavior follows.
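
Assuming a standard Deployment rolling update (not a Karpenter-specific mechanism), `maxSurge: 1` with `maxUnavailable: 0` gives the create-first-then-terminate ordering described above when the Deployment is restarted, e.g. with `kubectl rollout restart deployment/example-app`. The Deployment name and image are hypothetical:

```yaml
# Hypothetical rolling-update strategy illustrating "create first, then
# terminate": the Deployment controller surges one new pod and waits for
# it to become ready before taking any old pod down.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # allow one extra pod during the rollout
      maxUnavailable: 0  # never drop below the desired replica count
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: app
          image: nginx   # placeholder image
```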