Open pawcykca opened 11 months ago
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

/remove-lifecycle stale
Which component are you using?: cluster-autoscaler
What version of the component are you using?: cluster-autoscaler:v1.27.1
What k8s version are you using (kubectl version)?: (kubectl version output collapsed in the original issue)
What environment is this in?: OpenStack with Magnum
What did you expect to happen?:
A rapid scale-down of a few nodes from the same nodegroup should be performed. A few possible solutions:
- Wait until the previous Heat stack operation reaches a *_COMPLETED status, so that the previous scale up/down operation is not cancelled or broken
- Make this behaviour configurable by a new parameter, for backward compatibility

What happened instead?:
When Cluster Autoscaler tries to scale down the same nodegroup too fast (performing a scale-down operation on nodes in this nodegroup every 5-15 seconds), the previous scale-down operation (an OpenStack Heat stack update) is cancelled by OpenStack Heat. This mainly applies to scaling of the default-worker nodegroup, which shares the same Heat stack as the default-master nodegroup, because an update of this shared stack (a scale operation) first checks all resources in the default-master nodegroup and then updates the resources in the default-worker nodegroup.

How to reproduce it (as minimally and precisely as possible):
1. Have a default-worker nodegroup with all Pods in Running status
2. Trigger a rapid scale-down of several nodes in the default-worker nodegroup

Anything else we need to know?:
- Cluster Autoscaler configuration (collapsed in the original issue)
- Cluster Autoscaler logs (collapsed in the original issue)
- Removing two nodes from the same nodegroup one by one, not in a batch (one operation)
- OpenStack Heat events (collapsed in the original issue)
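The failure mode and the fix proposed above can be sketched in plain Python. This is a minimal simulation, not real cluster-autoscaler or Heat code: HeatStackSim, wait_for_complete and resize_nodegroup are hypothetical names for illustration, and the status strings only mimic Heat's *_IN_PROGRESS / *_COMPLETE convention.

```python
# Sketch: serialize nodegroup resizes so a new Heat stack update never
# cancels one that is still in progress, and batch node removals into
# a single update. All names here are hypothetical.
import time

class HeatStackSim:
    """Simulates a Heat stack that cancels an in-progress update
    when a new update arrives (the failure mode reported above)."""
    def __init__(self):
        self.status = "CREATE_COMPLETE"
        self._finish_at = 0.0
        self.cancelled_updates = 0

    def update(self, node_count, duration=0.05):
        # A new update while the previous one is IN_PROGRESS cancels it.
        if self.status.endswith("IN_PROGRESS"):
            self.cancelled_updates += 1
        self.status = "UPDATE_IN_PROGRESS"
        self._finish_at = time.monotonic() + duration
        self.node_count = node_count

    def refresh(self):
        if self.status.endswith("IN_PROGRESS") and time.monotonic() >= self._finish_at:
            self.status = "UPDATE_COMPLETE"
        return self.status

def wait_for_complete(stack, poll=0.01, timeout=5.0):
    """Proposed fix: poll until the stack reaches a *_COMPLETE status
    before issuing the next update."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if stack.refresh().endswith("_COMPLETE"):
            return True
        time.sleep(poll)
    return False

def resize_nodegroup(stack, target_counts, batch=True):
    """Issue resizes; with batch=True, collapse them into one update."""
    if batch:
        target_counts = [target_counts[-1]]   # one batch operation
    for count in target_counts:
        wait_for_complete(stack)              # don't cancel a running update
        stack.update(count)
    wait_for_complete(stack)

# Removing 3 nodes (5 -> 2): batched and serialized, nothing is cancelled.
stack = HeatStackSim()
resize_nodegroup(stack, [4, 3, 2])
print(stack.cancelled_updates, stack.node_count)  # -> 0 2
```

Issuing the three updates back-to-back without waiting (the current behaviour) would cancel each in-progress stack update, which is exactly what Heat reports in the events above.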