kubernetes-sigs / karpenter

Karpenter is a Kubernetes Node Autoscaler built for flexibility, performance, and simplicity.
Apache License 2.0

Custom drain flow #740

Closed dorsegal closed 2 months ago

dorsegal commented 1 year ago

Tell us about your request

Add a rollout flag when using drain. It would be used once consolidation and the native termination handler (https://github.com/aws/karpenter/pull/2546) are ready. The custom drain flow would look like this (see the sketch after the list):

  1. Cordon the node
  2. Do a rolling restart of the deployments that have pods running on the node.
  3. Drain the node.
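
A minimal sketch of that flow in bash, assuming kubectl access and a node name passed as an argument (deriving the Deployment name from the owning ReplicaSet is a rough heuristic, not part of Karpenter):

```bash
#!/usr/bin/env bash
# Hypothetical sketch of the requested flow; not something Karpenter provides.
set -euo pipefail
NODE="$1"

# 1. Cordon the node so nothing new schedules onto it.
kubectl cordon "$NODE"

# 2. Rolling-restart every deployment that owns a pod on this node, so fresh
#    replicas come up elsewhere before the old ones are evicted.
kubectl get pods --all-namespaces --field-selector spec.nodeName="$NODE" \
  -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.ownerReferences[0].name}{"\n"}{end}' \
  | sort -u | while read -r ns rs; do
      deploy="${rs%-*}"   # strip the pod-template hash from the ReplicaSet name
      kubectl -n "$ns" rollout restart deployment "$deploy" || true
      kubectl -n "$ns" rollout status deployment "$deploy" --timeout=300s || true
    done

# 3. Drain the node once the restarts have surged new replicas.
kubectl drain "$NODE" --ignore-daemonsets --delete-emptydir-data
```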

Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?

Currently, when using the consolidation feature or aws-node-termination-handler, we can end up with downtime or heavy performance degradation because of the current implementation of kubectl drain.

The current drain terminates all workloads on a node; the scheduler then tries to recreate those workloads on available nodes, and if none have capacity, Karpenter provisions a new node. Even with a PDB there is some level of degradation.

Are you currently working around this issue?

We have a custom bash script that implements an alternative to kubectl drain:

https://gist.github.com/juliohm1978/1f24f9259399e1e1edf092f1e2c7b089

Additional Context

kubectl drain leads to downtime even with a PodDisruptionBudget https://github.com/kubernetes/kubernetes/issues/48307

tzneal commented 1 year ago

Sorry, I'm not following with respect to consolidation; it always pre-spins a replacement node, so you should never need to wait for a node to provision.

Regarding PDBs, why are they not sufficient? They will slow the rate at which the pods are evicted.

dorsegal commented 1 year ago

There are cases where the application takes time to load, so even if you pre-spin a node, the application takes time to become available. PDBs have the same problem: a pod is terminated first, and only then does K8s schedule a new one. If PDBs are defined with 99% availability, or only allow a small number of pod disruptions, they can slow the rate at which pods are evicted as well.

We want to achieve as close to 100% uptime as possible using spot instances, and currently the drain behavior is what is holding us back.

tzneal commented 1 year ago

It sounds like you're using maxSurge on the restart to temporarily launch more pods. Instead, you can permanently scale the deployment to your desired baseline plus whatever surge you want, then use a PDB to limit maxUnavailable for that deployment to the surge amount. This ensures you always have your baseline desired capacity without incurring extra restarts.
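
A rough illustration of that suggestion, assuming a hypothetical deployment `my-app` with a baseline of 5 replicas and a surge of 2:

```bash
# Hypothetical example: names and numbers are assumptions, not Karpenter features.
# Run permanently at baseline (5) + surge (2) replicas.
kubectl scale deployment my-app --replicas=7

# Allow at most the surge amount (2 pods) to be disrupted at once,
# so at least the baseline of 5 stays available during drains.
cat <<'EOF' | kubectl apply -f -
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  maxUnavailable: 2
  selector:
    matchLabels:
      app: my-app
EOF
```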

bwagner5 commented 1 year ago

You could also try catching SIGTERM within your pod and keep it from shutting down immediately so that the new pod has time to initialize if they are spinning up while the other pod is terminating.
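
If it helps, a minimal sketch of that idea as a wrapper entrypoint script (the 30-second delay and the forwarding logic are assumptions, and the delay must fit inside the pod's terminationGracePeriodSeconds):

```bash
#!/usr/bin/env bash
# Hypothetical wrapper entrypoint: delay shutdown on SIGTERM so replacement
# pods have time to become ready, then forward the signal to the real app.

"$@" &                      # start the real application
APP_PID=$!

on_term() {
  sleep 30                  # keep serving while new pods start
  kill -TERM "$APP_PID"     # then let the app shut down gracefully
}
trap on_term TERM

# wait returns when the trap fires, so wait again until the app actually exits
wait "$APP_PID"
wait "$APP_PID"
```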

dorsegal commented 1 year ago

> You could also try catching SIGTERM within your pod and keep it from shutting down immediately so that the new pod has time to initialize if they are spinning up while the other pod is terminating.

We thought about it. The problem is that when using 3rd-party images, it would require changing the source code of every application we use. Plus, it is recommended to handle SIGTERM as a graceful shutdown, not to suspend your application until k8s kills it.

This request would provide a solution that works for all pods.

We had a new idea for a custom flow that does not use rollouts: change the labels on all pods on the node to detach them from their controllers (ReplicaSets), so the controllers immediately create replacement pods while the detached ones keep running.

The new drain flow would look like this (a rough sketch follows the list):

  1. Cordon the node
  2. Change the labels of all pods on that node
  3. Wait 90 seconds (when a spot instance terminates we need to handle this in no more than 120 seconds)
  4. Drain the node.

It's not perfect but will reduce the impact of draining nodes.
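
A rough bash sketch of that flow (the node name and the `app` label key are assumptions; the label overwritten must be one the ReplicaSet selector matches on, and it should not be part of any Service selector or the old pods will stop receiving traffic):

```bash
#!/usr/bin/env bash
# Hypothetical sketch of the label-detach drain flow; not part of Karpenter.
set -euo pipefail
NODE="$1"

# 1. Cordon the node.
kubectl cordon "$NODE"

# 2. Overwrite the selector label on every pod on the node; the owning
#    ReplicaSets no longer match these pods and spin up replacements,
#    while the detached pods keep running.
kubectl get pods --all-namespaces --field-selector spec.nodeName="$NODE" \
  -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' \
  | while read -r ns pod; do
      kubectl -n "$ns" label pod "$pod" app=detached --overwrite || true
    done

# 3. Give the replacements time to start (spot termination allows ~120s total).
sleep 90

# 4. Drain the node, evicting the now-orphaned pods.
kubectl drain "$NODE" --ignore-daemonsets --delete-emptydir-data --force
```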

ellistarn commented 1 year ago

What about a pre-stop command? https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
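
For illustration only (deployment and container names are hypothetical), a preStop sleep can be added with a strategic merge patch:

```bash
# Hypothetical: delay container termination with a preStop sleep.
kubectl patch deployment my-app --type=strategic -p '{
  "spec": {"template": {"spec": {"containers": [
    {"name": "my-app",
     "lifecycle": {"preStop": {"exec": {"command": ["sleep", "30"]}}}}
  ]}}}
}'
```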

dorsegal commented 1 year ago

> What about a pre-stop command? https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/

Since a preStop hook does not put the container into a terminating state, the k8s scheduler does not know to spin up a new pod.

ellistarn commented 1 year ago

IIUC, it should go into terminating, which will trigger the pod's replicaset to create a new one.

> PreStop hooks are not executed asynchronously from the signal to stop the Container; the hook must complete its execution before the TERM signal can be sent. If a PreStop hook hangs during execution, the Pod's phase will be Terminating and remain there until the Pod is killed after its terminationGracePeriodSeconds expires.

dorsegal commented 1 year ago

> IIUC, it should go into terminating, which will trigger the pod's replicaset to create a new one.
>
> PreStop hooks are not executed asynchronously from the signal to stop the Container; the hook must complete its execution before the TERM signal can be sent. If a PreStop hook hangs during execution, the Pod's phase will be Terminating and remain there until the Pod is killed after its terminationGracePeriodSeconds expires.

It even makes it worse :) Since the pod is terminating, requests no longer reach that pod, which means we get degradation until new pods are available.

The idea is to not terminate pods until new pods are available, just like a rollout restart.
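
For reference, a rollout restart only behaves that way when the deployment's rolling-update strategy allows a surge and forbids unavailability; with hypothetical names:

```bash
# Hypothetical: make rollout restart surge a new pod before taking an old one down.
kubectl patch deployment my-app --type=strategic -p \
  '{"spec": {"strategy": {"rollingUpdate": {"maxSurge": 1, "maxUnavailable": 0}}}}'

# Each old pod is then only terminated after its surged replacement is Ready.
kubectl rollout restart deployment my-app
kubectl rollout status deployment my-app
```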

tath81 commented 1 year ago

This is a similar issue we're also running into, where the node(s) terminate before the rescheduled pod is in a running state on the new node(s).

sftim commented 1 year ago

I actually think a better approach here is to move https://www.medik8s.io/maintenance-node/ to be an official (out of tree, but official) Kubernetes API, and then use that when it's available in a cluster.

You could customize behavior by using your own controller rather than the default one, and keep the API the same for other parties such as kubectl and Karpenter.

Yes, it's a big change. However, it's easier than solving the n-to-m relationship between all the things that might either drain a node or watch a drain happen.

k8s-triage-robot commented 4 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 3 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 2 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 2 months ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes-sigs/karpenter/issues/740#issuecomment-2118352472):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
>
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
>
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.

Bharath509 commented 1 month ago

I'm also facing the same problem. Please reopen this issue.

k8s-ci-robot commented 1 month ago

@jsamuel1: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to [this](https://github.com/kubernetes-sigs/karpenter/issues/740#issuecomment-2172834802):

> /reopen

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.