kubernetes / autoscaler

Autoscaling components for Kubernetes

[EKS] safely evict pods on scale down #6147

Open aaadipop opened 11 months ago

aaadipop commented 11 months ago

Which component are you using?: cluster-autoscaler

What version of the component are you using?: v1.27.2

What k8s version are you using (kubectl version)?: v1.22.12

What environment is this in?: AWS EKS

What did you expect to happen?: safely evict all pods from node before scaling down

What happened instead?: on scale down, the pods are killed rather than safely evicted from the node, which results in downtime until the new pods become available

How to reproduce it (as minimally and precisely as possible): trigger a scale-down of an EKS node pool. I have set the extraArgs.cordon-node-before-terminating flag to true

Anything else we need to know?: I saw the FAQ about graceful termination in scale-down and also this issue :)
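
For context, cluster-autoscaler drains nodes through the Kubernetes eviction API, so pod-level disruption settings apply during scale-down. A minimal sketch of a PodDisruptionBudget that keeps at least one replica of a multi-replica app available while a node is drained (the name and labels are illustrative assumptions, not taken from this issue):

```yaml
# Hypothetical PDB; app name and labels are illustrative.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 1          # eviction is refused if it would drop ready replicas below 1
  selector:
    matchLabels:
      app: my-app          # must match the pods you want protected
```

With such a PDB, an eviction that would take the app below minAvailable is refused until another replica is ready, which avoids the downtime described above; for single-replica workloads a PDB alone cannot prevent downtime, since there is no second replica to fall back on. Separately, CA bounds how long it waits for pods to terminate via --max-graceful-termination-sec (default 600 seconds).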

Shubham82 commented 9 months ago

/area provider/aws

tumaf33 commented 8 months ago

Has this issue been solved yet? I am experiencing the same behavior with a single Jenkins controller replica. Whenever CA scales down nodes, it scales down the node where the Jenkins controller pod is running, which causes the application to be down until Kubernetes restarts it on another node.

Autoscaler log shows the following for the same node running the Jenkins controller pod:

1 klogx.go:87] Node ip-123-45-67-89.ec2.internal - cpu utilization 0.049087
1 cluster.go:178] ip-123-45-67-89.ec2.internal may be removed
I1219 20:35:13.918724 1 nodes.go:84] ip-123-45-67-89.ec2.internal is unneeded since 2023-12-19 20:35:13.91132046 +0000 UTC m=+19353.207298775 duration 0s
I1219 20:35:13.919046 1 nodes.go:126] ip-123-45-67-89.ec2.internal was unneeded for 0s

here is a snippet of the CA deployment:

command:
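
For illustration, the relevant part of a cluster-autoscaler command block with the flag discussed in this thread might look like the following; the values are assumptions, not the actual configuration from this deployment:

```yaml
command:
  - ./cluster-autoscaler
  - --cloud-provider=aws
  - --cordon-node-before-terminating=true    # cordon the node before draining it
  - --scale-down-utilization-threshold=0.5   # illustrative; nodes below this utilization become scale-down candidates
  - --scale-down-unneeded-time=10m           # illustrative; how long a node must stay unneeded before removal
```

The log above shows a CPU utilization of roughly 0.05, well below the default threshold of 0.5, which is why that node is marked as unneeded and considered for removal.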

I would be more than happy for any advice or workaround to fix this behavior.

P.S. I use CA version 1.28.
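
One documented cluster-autoscaler mechanism for singleton pods like a Jenkins controller is the safe-to-evict annotation, which excludes the node hosting the pod from scale-down. A sketch, where the deployment name, labels, and image are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-controller            # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins-controller
  template:
    metadata:
      labels:
        app: jenkins-controller
      annotations:
        # Documented CA annotation: nodes running a pod with this
        # annotation are excluded from scale-down.
        cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
    spec:
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts  # illustrative image
```

The trade-off is that the hosting node is never scaled down while the pod runs there, so this keeps the controller up at the cost of one permanently retained node.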

k8s-triage-robot commented 5 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 4 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

jjmerri commented 3 months ago

/remove-lifecycle rotten

k8s-triage-robot commented 4 weeks ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale