Closed. dorsegal closed this issue 2 months ago.
Sorry, I'm not following with respect to consolidation: it always pre-spins a replacement node, so you should never need to wait for a node to provision.
Regarding PDBs, why are they not sufficient? They will slow the rate at which the pods are evicted.
There are cases where an application takes time to load, so even if you pre-spin a node, the application still takes time to become available. PDBs have the same problem: Kubernetes will first terminate a pod (or pods) and only then schedule a new one. If PDBs are defined with 99% availability, or only allow a small number of pod disruptions, they can slow the rate at which the pods are evicted as well.
We want to achieve as close to 100% uptime as possible using spot instances, and currently the drain behavior is what is holding us back.
It sounds like you're using the max surge on the restart to temporarily launch more pods. Instead, you can permanently scale the deployment to your desired baseline plus whatever surge you want, then use a PDB to limit maxUnavailable for that deployment to the surge amount. This ensures you always have your baseline desired capacity without incurring extra restarts.
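A minimal sketch of that setup, assuming a deployment called my-app with a baseline of 10 pods and a surge of 2 (the names and numbers are illustrative, not from the thread):

```bash
# Permanently run baseline + surge replicas (10 + 2 in this example).
kubectl scale deployment/my-app --replicas=12

# Cap voluntary disruptions (drain/eviction) at the surge amount, so the
# 10-pod baseline stays available while nodes are drained or consolidated.
kubectl create poddisruptionbudget my-app-pdb \
  --selector=app=my-app \
  --max-unavailable=2
```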
You could also try catching SIGTERM within your pod and keeping it from shutting down immediately, so that the new pod has time to initialize while the other pod is terminating.
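A rough sketch of what that could look like as an entrypoint wrapper, assuming you control the image's entrypoint (the binary path and delay are placeholders):

```bash
#!/bin/bash
# Hypothetical entrypoint wrapper: on SIGTERM, wait before forwarding the
# signal, so a replacement pod has time to start and become Ready elsewhere.
forward_term() {
  sleep 30                       # keep this below terminationGracePeriodSeconds
  kill -TERM "$child" 2>/dev/null
}
trap forward_term TERM

/usr/local/bin/my-app &          # placeholder for the real application binary
child=$!

wait "$child"                    # interrupted when the trap fires...
wait "$child"                    # ...so wait again for the child to actually exit
```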
We thought about it. The problem is that with third-party images it would require changing the source code of every application we use. Plus, it is recommended to handle SIGTERM as a graceful shutdown, not to suspend your application until Kubernetes kills it.
This request makes it a granular solution that works for all pods.
We had a new idea for a custom flow that does not use rollouts: use labels to detach pods from their controllers (ReplicaSets) by adding/removing a label on all pods on the node being drained.
It's not perfect, but it will reduce the impact of draining nodes.
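A rough sketch of what such a label-detach drain could look like (a guess at the approach, not the exact flow proposed above; names are placeholders, and it assumes the ReplicaSet selector includes pod-template-hash while the Service selects only on app labels):

```bash
NODE=ip-10-0-0-1.example.internal   # placeholder node name

# 1. Keep new pods off the node.
kubectl cordon "$NODE"

# 2. Detach every pod on the node from its ReplicaSet by rewriting the
#    pod-template-hash label. The ReplicaSet immediately creates replacements
#    elsewhere, while the old pods keep running and keep receiving traffic.
for pod in $(kubectl get pods --field-selector spec.nodeName="$NODE" -o name); do
  kubectl label "$pod" pod-template-hash=detached --overwrite
done

# 3. Once the replacements are Ready, remove the detached pods.
kubectl wait deployment/my-app --for=condition=Available --timeout=5m
kubectl delete pods --field-selector spec.nodeName="$NODE" --grace-period=60
```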
What about a pre-stop command? https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
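For reference, a preStop hook can be added without rebuilding the image, for example by patching the deployment (the deployment name and sleep duration are made up, and the container image must include a sleep binary):

```bash
# Hypothetical: delay SIGTERM by sleeping in a preStop hook, giving endpoint
# controllers and load balancers time to react before the container stops.
kubectl patch deployment my-app --type=json -p='[
  {"op": "add",
   "path": "/spec/template/spec/containers/0/lifecycle",
   "value": {"preStop": {"exec": {"command": ["sleep", "30"]}}}}
]'
```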
Since pre-stop does not put the container in the Terminating state, the Kubernetes scheduler does not know to spin up a new pod.
IIUC, it should go into terminating, which will trigger the pod's replicaset to create a new one.
PreStop hooks are not executed asynchronously from the signal to stop the Container; the hook must complete its execution before the TERM signal can be sent. If a PreStop hook hangs during execution, the Pod's phase will be Terminating and remain there until the Pod is killed after its terminationGracePeriodSeconds expires.
It even makes it worse :) since the pod is terminating, requests no longer reach that pod, which means we get degradation until the new pods are available.
The idea is to not terminate pods until new pods are available, just like rollout restart.
This is a similar issue we're also running into, where the node(s) will terminate before the scheduled pod is in a running state on the new node(s).
I actually think a better approach here is to move https://www.medik8s.io/maintenance-node/ to be an official (out of tree, but official) Kubernetes API, and then use that when it's available in a cluster.
You could customize behavior by using your own controller rather than the default one, and keep the API the same for other parties such as kubectl and Karpenter.
Yes, it's a big change. However, it's easier than solving the n-to-m relationship between all the things that might either drain a node or watch a drain happen.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
I'm also facing the same problem. Please reopen this issue.
@jsamuel1: You can't reopen an issue/PR unless you authored it or you are a collaborator.
Tell us about your request
Add a rollout flag when using drain. It will be used when consolidation and the native termination handler (https://github.com/aws/karpenter/pull/2546) are ready. The custom drain flow is like this:
Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?
Currently, when using the consolidation feature or aws-node-termination-handler, we can end up with downtime or heavy performance degradation because of the current implementation of kubectl drain. The current drain will terminate all workloads on a node, and the scheduler will try to recreate those workloads on the available nodes; if there aren't any, Karpenter will provision a new node. Even with a PDB there is some level of degradation.
Are you currently working around this issue?
Having a custom bash script that implements an alternative to kubectl drain: https://gist.github.com/juliohm1978/1f24f9259399e1e1edf092f1e2c7b089
Additional Context
kubectl drain leads to downtime even with a PodDisruptionBudget https://github.com/kubernetes/kubernetes/issues/48307
Attachments
No response
Community Note