Open ut0mt8 opened 5 months ago
I'm having the same issue as well; this is the related code: https://github.com/kubernetes/autoscaler/blob/3fd892a37b50a885eaceaa9619a1a3e153548dc9/cluster-autoscaler/core/scaledown/eligibility/eligibility.go#L187
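For anyone landing here, the linked check is essentially a utilization-threshold gate. Below is a deliberately simplified Go sketch of that kind of check (the struct, names, and CPU-only calculation are illustrative, not the actual autoscaler code): a node is only a scale-down candidate if the sum of pod resource requests divided by the node's allocatable stays below `--scale-down-utilization-threshold`, which is why a node that looks "empty" can still be rejected if DaemonSet/system pod requests push it over the threshold.

```go
package main

import "fmt"

// nodeInfo is a hypothetical, simplified stand-in for the data the real
// eligibility check works with.
type nodeInfo struct {
	Name           string
	RequestedCPU   int64 // sum of pod CPU requests on the node, in millicores
	AllocatableCPU int64 // node allocatable CPU, in millicores
}

// utilization returns requested/allocatable for a single resource (CPU here);
// the real autoscaler considers CPU and memory and takes the dominant one.
func utilization(n nodeInfo) float64 {
	if n.AllocatableCPU == 0 {
		return 0
	}
	return float64(n.RequestedCPU) / float64(n.AllocatableCPU)
}

func main() {
	threshold := 0.5 // default value of --scale-down-utilization-threshold
	n := nodeInfo{Name: "ip-10-0-1-23", RequestedCPU: 1400, AllocatableCPU: 2000}

	if u := utilization(n); u >= threshold {
		// DaemonSet/system pod requests count toward utilization by default,
		// so an otherwise "empty" node can still end up above the threshold.
		fmt.Printf("Node %s unremovable: utilization %.2f is above the threshold %.2f\n", n.Name, u, threshold)
	} else {
		fmt.Printf("Node %s is a scale-down candidate\n", n.Name)
	}
}
```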
/area provider/aws /area cluster-autoscaler
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Which component are you using?:
cluster-autoscaler
Component version:
v1.29.0
What k8s version are you using (kubectl version)?:
What environment is this in?:
In EKS/AWS, launched with args like this:
What did you expect to happen?:
When nodes are empty (meaning no pods from a Deployment are running on them), scale-down should happen.
What happened instead?:
Something prevents the nodes from scaling down; see this spurious log on one of the candidate nodes:
How to reproduce it (as minimally and precisely as possible):
Nothing more to add; the config below should be sufficient.
Anything else we need to know?:
Setting scale-down-utilization-threshold to 0.01 seems to work, but it's a bit counter-intuitive. What we actually want is for cluster-autoscaler to not care about resource utilization and simply scale down empty nodes. I wonder why such a complex heuristic is needed?
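For reference, the workaround described above would look roughly like this in the cluster-autoscaler launch flags; only the threshold line reflects the value discussed in this thread, the other flags are illustrative placeholders:

```sh
# Illustrative launch flags; adjust to your own deployment.
./cluster-autoscaler \
  --cloud-provider=aws \
  --scale-down-enabled=true \
  --scale-down-utilization-threshold=0.01
```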