kubernetes / autoscaler

Autoscaling components for Kubernetes
Apache License 2.0

cluster autoscaler should consider availability zone balancing during scaledown #3693

Open rittneje opened 4 years ago

rittneje commented 4 years ago

We are running a cluster in AWS EKS that uses nodes from auto-scaling groups. We have noticed that whenever the autoscaler terminates a node during scaledown, the auto-scaling group triggers an availability zone rebalancing shortly thereafter. This in turn leads to a spike in errors. It would be preferable if the cluster autoscaler properly considered availability zones during scaledown, shuffling pods between nodes as necessary to preemptively avoid a rebalancing.
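One way to confirm that the error spikes line up with AZ rebalancing rather than with the cluster autoscaler's own scale-down is to look at the ASG's recent scaling activities around the time a node is removed. A minimal sketch with the AWS CLI, where my-node-asg is a placeholder for the node group's ASG name:

    # List recent scaling activities for the node group's ASG.
    # AZ rebalancing shows up with a cause like "...to balance instances in zones...".
    aws autoscaling describe-scaling-activities \
        --auto-scaling-group-name my-node-asg \
        --max-items 20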

knkarthik commented 4 years ago

We are also seeing this, even after removing --balance-similar-node-groups as suggested in https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md#common-notes-and-gotchas.

Maybe we'll give the Suspended processes setting in the ASG console a try.
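For anyone trying that route, suspending AZ rebalancing is configured per ASG. A minimal sketch with the AWS CLI, again assuming a single ASG named my-node-asg (placeholder) backs the node group:

    # Stop the ASG from launching/terminating instances on its own to rebalance across AZs,
    # leaving the cluster autoscaler as the only thing scaling this group down.
    aws autoscaling suspend-processes \
        --auto-scaling-group-name my-node-asg \
        --scaling-processes AZRebalance

The trade-off is that the group can drift out of AZ balance over time, which is exactly what this issue asks the cluster autoscaler to manage during scale-down.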

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

rittneje commented 3 years ago

/remove-lifecycle stale

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

rittneje commented 3 years ago

/remove-lifecycle stale

k8s-triage-robot commented 3 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

rittneje commented 3 years ago

/remove-lifecycle stale

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

rittneje commented 2 years ago

/remove-lifecycle stale

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

rittneje commented 2 years ago

/remove-lifecycle stale

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

rittneje commented 2 years ago

/remove-lifecycle stale

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

rittneje commented 2 years ago

/remove-lifecycle stale

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

rittneje commented 1 year ago

/remove-lifecycle stale

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

rittneje commented 1 year ago

/remove-lifecycle stale

maxgio92 commented 1 year ago

Hi all, is there any news in the meantime? I think we're experiencing this issue: AWS Auto Scaling rebalances and then scales down the resulting overprovisioning, independently of the cluster-autoscaler's work:

MidTerminatingLifecycleAction
    Terminating EC2 instance: i-XYZ At 2023-05-08T12:08:31Z instances were launched to balance instances in zones eu-west-1b eu-west-1a with other zones resulting in more than desired number of instances in the group.
    At 2023-05-08T12:08:42Z an instance was taken out of service in response to a difference between desired and actual capacity, shrinking the capacity from 12 to 11.
    At 2023-05-08T12:08:42Z instance i-XYZ was selected for termination.

The result is that AWS Auto Scaling spawns new instances to balance availability across the AZs, but because this exceeds the desired instance count (which is managed by the cluster-autoscaler), it then terminates an instance to get back to the desired count, obviously bypassing the cluster-autoscaler.

Furthermore, I have workloads that can't be evicted (and for which I set the cluster-autoscaler.kubernetes.io/safe-to-evict: "false" annotation), and AWS Auto Scaling is obviously agnostic to that.

Am I missing something?
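For context, the annotation mentioned above is set on the pods themselves; a minimal sketch of applying it to an existing pod with kubectl (the pod name is a placeholder):

    # Tell the cluster autoscaler not to evict this pod when considering nodes for scale-down.
    # This only influences the cluster autoscaler; the ASG's own AZ rebalancing terminates
    # instances without consulting it, as described above.
    kubectl annotate pod my-unevictable-pod \
        "cluster-autoscaler.kubernetes.io/safe-to-evict=false"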

k8s-triage-robot commented 10 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

rittneje commented 10 months ago

/remove-lifecycle stale

zioproto commented 8 months ago

Pinging repo approvers about this feature request. @mwielgus @MaciekPytel @gjtempleton

This seems to be a valid issue. It is documented in this repo's FAQ here:

https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#im-running-cluster-with-nodes-in-multiple-zones-for-ha-purposes-is-that-supported-by-cluster-autoscaler

Currently the balancing is only done at scale-up. Cluster Autoscaler will still scale down underutilized nodes regardless of the relative sizes of underlying node groups. We plan to take balancing into account in scale-down in the future.

Is the sentence "We plan to take balancing into account in scale-down in the future" still valid?

Is there a roadmap published on GitHub?

Why has this been blocked for so long? Is there a lack of interest in implementing it, or does it require a massive code refactoring that is not worth the effort?

Please let the community know what would help here. Thank you!
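For reference, the scale-up balancing that the FAQ refers to is opt-in via a flag on the cluster-autoscaler binary. A minimal sketch of the relevant arguments (the cluster name and auto-discovery tags are placeholders, not a complete configuration):

    # Balance node counts across similar node groups (e.g. one ASG per AZ) at scale-up.
    # There is currently no equivalent behaviour at scale-down, which is what this issue requests.
    cluster-autoscaler \
        --cloud-provider=aws \
        --balance-similar-node-groups=true \
        --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/my-cluster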

k8s-triage-robot commented 5 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

rittneje commented 5 months ago

/remove-lifecycle stale

k8s-triage-robot commented 2 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

rittneje commented 2 months ago

/remove-lifecycle stale