kubernetes / autoscaler

Autoscaling components for Kubernetes
Apache License 2.0

[cluster-autoscaler] More quickly mark spot ASG in AWS as unavailable if InsufficientInstanceCapacity #3241

Closed · cep21 closed this issue 2 weeks ago

cep21 commented 4 years ago

I have two ASGs: a spot ASG and an on-demand ASG. They are GPU nodes, so spot instances frequently aren't available. AWS tells us very quickly that a spot instance is unavailable: we can see "Could not launch Spot Instances. InsufficientInstanceCapacity - There is no Spot capacity available that matches your request. Launching EC2 instance failed" in the ASG logs.

The current behavior is that cluster-autoscaler tries to use the spot ASG for 15 minutes (my current timeout) before it gives up and tries a non-spot ASG. Ideally, it would notice that the reason the ASG did not scale up, InsufficientInstanceCapacity, is unlikely to go away in the next 15 minutes, mark that group as unable to scale up, and fall back to the on-demand ASG.

qqshfox commented 4 years ago

Having the same issue here.

https://github.com/kubernetes/autoscaler/blob/852ea800914cae101824687a71236f7688ee653d/cluster-autoscaler/cloudprovider/aws/auto_scaling_groups.go#L220

SetDesiredCapacity will not return any error related to InsufficientInstanceCapacity according to its docs. We might need to check the scaling activities by calling DescribeScalingActivities instead (see the sketch after the example activity below).

```json
{
    "Activities": [
        {
            "ActivityId": "ee05cf07-241b-2f28-2be4-3b60f77a76e9",
            "AutoScalingGroupName": "nodes-gpu-spot-cn-north-1a.aws-cn-north-1.prod-1.k8s.local",
            "Description": "Launching a new EC2 instance.  Status Reason: There is no Spot capacity available that matches your request. Launching EC2 instance failed.",
            "Cause": "At 2020-08-06T03:20:39Z an instance was started in response to a difference between desired and actual capacity, increasing the capacity from 0 to 1.",
            "StartTime": "2020-08-06T03:20:43.979Z",
            "EndTime": "2020-08-06T03:20:43Z",
            "StatusCode": "Failed",
            "StatusMessage": "There is no Spot capacity available that matches your request. Launching EC2 instance failed.",
            "Progress": 100,
            "Details": "{\"Subnet ID\":\"subnet-5d6fb339\",\"Availability Zone\":\"cn-north-1a\"}"
        },
        ...
    ]
}
```
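
For illustration, a minimal sketch of that idea using aws-sdk-go v1 (this is not the actual cluster-autoscaler code): list the recent scaling activities of the ASG and look for a failed activity whose status message points at missing Spot capacity. The ASG name is copied from the example activity above; the substring check is an assumption about the message format.

```go
package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/autoscaling"
)

func main() {
	// ASG name taken from the example activity above.
	asgName := "nodes-gpu-spot-cn-north-1a.aws-cn-north-1.prod-1.k8s.local"
	svc := autoscaling.New(session.Must(session.NewSession()))

	// Fetch the most recent scaling activities for this ASG.
	out, err := svc.DescribeScalingActivities(&autoscaling.DescribeScalingActivitiesInput{
		AutoScalingGroupName: aws.String(asgName),
		MaxRecords:           aws.Int64(10),
	})
	if err != nil {
		log.Fatal(err)
	}

	// A failed activity whose message mentions missing Spot capacity is a strong
	// hint that this ASG cannot scale up right now.
	for _, a := range out.Activities {
		if aws.StringValue(a.StatusCode) == "Failed" &&
			strings.Contains(aws.StringValue(a.StatusMessage), "no Spot capacity available") {
			fmt.Printf("ASG %s looks unavailable: %s\n", asgName, aws.StringValue(a.StatusMessage))
		}
	}
}
```
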
JacobHenner commented 4 years ago

I think the title of this issue should be amended to include other holding states. For example, I'm running into a similar issue with price-too-low. If the maximum spot price for my ASGs is below the current spot prices, cluster-autoscaler waits quite a while before it attempts to use a non-spot ASG.

cep21 commented 4 years ago

It's not just spot. Another example: you can hit your account limit on the number of instances of a specific instance type; that is also unlikely to change in the next 15 minutes, and it's best to try another ASG.

A general understanding of failure states that are unlikely to change could be very helpful.

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

cep21 commented 3 years ago

Super important! /remove-lifecycle stale

klebediev commented 3 years ago

Looking at the AWS API, it seems like there is no reliable way to find out that the scale-out for a particular SetDesiredCapacity call has failed. If SetDesiredCapacity returned an ActivityId for the scaling activity, that would work. Otherwise, personally I can't come up with anything better than parsing autoscaling activities "younger" than my SetDesiredCapacity API call. This doesn't feel production-ready. Any better ideas?

cep21 commented 3 years ago

I wouldn't expect anything that ties back to a single SetDesiredCapacity since it's async and there could be multiple calls.

> parsing autoscaling activities "younger" than my SetDesiredCapacity API call

Maybe look at the last activity (rather than all of them); if it's recent (for some definition of recent), then assume the capacity isn't able to change right now and quickly fail over any scaling operation. A rough sketch of that check follows.
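
A rough sketch of that check (aws-sdk-go v1, not cluster-autoscaler code), assuming DescribeScalingActivities returns the newest activity first; the function name, the hypothetical ASG name, and the ten-minute window are made up for illustration:

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/autoscaling"
)

// asgLooksUnavailable reports whether the latest scaling activity of the ASG failed
// within the given window, i.e. a scale-up attempt right now would likely fail too.
func asgLooksUnavailable(svc *autoscaling.AutoScaling, asgName string, window time.Duration) (bool, error) {
	out, err := svc.DescribeScalingActivities(&autoscaling.DescribeScalingActivitiesInput{
		AutoScalingGroupName: aws.String(asgName),
		MaxRecords:           aws.Int64(1), // only the latest activity
	})
	if err != nil || len(out.Activities) == 0 {
		return false, err
	}
	latest := out.Activities[0]
	failed := aws.StringValue(latest.StatusCode) == "Failed"
	recent := latest.StartTime != nil && time.Since(aws.TimeValue(latest.StartTime)) < window
	return failed && recent, nil
}

func main() {
	svc := autoscaling.New(session.Must(session.NewSession()))
	// "my-spot-asg" is a hypothetical ASG name.
	skip, err := asgLooksUnavailable(svc, "my-spot-asg", 10*time.Minute)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("treat ASG as unable to scale up for now:", skip)
}
```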

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

cep21 commented 3 years ago

Super important! /remove-lifecycle stale

itssimon commented 3 years ago

This is important for us too, same use case as OP.

k8s-triage-robot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

azhurbilo commented 3 years ago

/remove-lifecycle stale

orsher commented 3 years ago

Any updates regarding this? It's super important for us, and I'm sure for many others. Also, where is this magic number "15 min" set? Is it configurable?

atze234 commented 2 years ago

I think the 15-minute magic number is set by "--max-node-provision-time". It would certainly be better, and a nice feature, to scan the scaling events and instantly mark the ASG as dead for the next x minutes.

klebediev commented 2 years ago

What if we improve detection of the "ASG can't be scaled up" activity by sending "fails to launch" notifications to an SNS topic, like:

 $ aws autoscaling put-notification-configuration --auto-scaling-group-name <value> --topic-arn <value> --notification-types "autoscaling:EC2_INSTANCE_LAUNCH_ERROR"

Then we can subscribe an SQS queue to this topic, and cluster-autoscaler can start polling this SQS queue after initiating a "scale up" activity.

As this approach requires some configuration effort, it should be disabled by default. But for use cases where fast detection of launch failures is useful, like with spot ASGs, users can configure the corresponding infrastructure (SNS, SQS, ASG notifications) and enable this "fail fast" detection method. A rough sketch of the polling side follows.
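
To illustrate the polling side, a hedged sketch in Go (aws-sdk-go v1) that assumes an SQS queue is already subscribed to the ASG notification topic; the queue URL is a placeholder, and the notification field names follow the standard EC2 Auto Scaling SNS message format rather than anything cluster-autoscaler ships today:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sqs"
)

// snsEnvelope is the outer SNS message delivered to the SQS queue.
type snsEnvelope struct {
	Message string `json:"Message"`
}

// asgNotification holds the Auto Scaling notification fields we care about.
type asgNotification struct {
	Event                string `json:"Event"`
	AutoScalingGroupName string `json:"AutoScalingGroupName"`
	StatusMessage        string `json:"StatusMessage"`
}

func main() {
	// Placeholder queue URL; in the proposal this would be configured by the user.
	queueURL := "https://sqs.us-east-1.amazonaws.com/123456789012/asg-launch-errors"
	svc := sqs.New(session.Must(session.NewSession()))

	// Long-poll for notifications forwarded from the SNS topic.
	out, err := svc.ReceiveMessage(&sqs.ReceiveMessageInput{
		QueueUrl:            aws.String(queueURL),
		MaxNumberOfMessages: aws.Int64(10),
		WaitTimeSeconds:     aws.Int64(10),
	})
	if err != nil {
		log.Fatal(err)
	}

	for _, m := range out.Messages {
		var env snsEnvelope
		if err := json.Unmarshal([]byte(aws.StringValue(m.Body)), &env); err != nil {
			continue
		}
		var n asgNotification
		if err := json.Unmarshal([]byte(env.Message), &n); err != nil {
			continue
		}
		// A launch error means the ASG cannot scale up right now, so the caller
		// could mark that node group as temporarily unavailable and fall back.
		if n.Event == "autoscaling:EC2_INSTANCE_LAUNCH_ERROR" {
			fmt.Printf("ASG %s failed to launch: %s\n", n.AutoScalingGroupName, n.StatusMessage)
		}
	}
}
```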

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

itssimon commented 2 years ago

/remove-lifecycle rotten

piotrwielgolaski-tomtom commented 2 years ago

With the recent changes making use of eks:DescribeNodegroup (https://github.com/kubernetes/autoscaler/commit/b4cadfb4e25b6660c41dbe2b73e66e9a2f3a2cc9), can we use the health information from the node group (https://docs.aws.amazon.com/cli/latest/reference/eks/describe-nodegroup.html)? An unhealthy node group should be excluded from the calculation, along the lines of the sketch below.
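
For illustration only, a minimal sketch (aws-sdk-go v1, not cluster-autoscaler code) of reading node group health from eks:DescribeNodegroup; the cluster and node group names are placeholders:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/eks"
)

func main() {
	svc := eks.New(session.Must(session.NewSession()))

	out, err := svc.DescribeNodegroup(&eks.DescribeNodegroupInput{
		ClusterName:   aws.String("prod-cluster"),   // placeholder
		NodegroupName: aws.String("gpu-spot-group"), // placeholder
	})
	if err != nil {
		log.Fatal(err)
	}

	// A node group with reported health issues could be excluded from scale-up
	// calculations until the issues clear.
	if health := out.Nodegroup.Health; health != nil && len(health.Issues) > 0 {
		for _, issue := range health.Issues {
			fmt.Printf("unhealthy: %s - %s\n", aws.StringValue(issue.Code), aws.StringValue(issue.Message))
		}
	}
}
```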

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

miadabrin commented 1 year ago

/remove-lifecycle rotten

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

theintz commented 1 year ago

/remove-lifecycle rotten

decipher27 commented 1 year ago

We are using the "priority" expander in our autoscaler config, which doesn't solve this case. When a rebalance recommendation happens on an ASG (we have 2 AZs), sometimes Spot is unavailable in one AZ, but it doesn't fall back to the on-demand node group. Is there a way we can make the fallback to on-demand happen?

decipher27 commented 1 year ago

Any updates on the fix for this case?

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

RamazanKara commented 1 year ago

/remove-lifecycle rotten

ntkach commented 1 year ago

Or at least a workaround? I can also verify it's not just spot. We're getting the same issue with a k8s cluster running on regular EC2 instances. We currently have 3 autoscaling groups using us-east-2a, us-east-2b, and us-east-2c that are stuck bouncing back and forth between max and max-1 because a zone rebalance failed on capacity in that zone.

ddelange commented 9 months ago

was this not fixed by https://github.com/kubernetes/autoscaler/pull/4489 released as of cluster-autoscaler-1.24.0?

Shubham82 commented 9 months ago

/remove-lifecycle stale

ddelange commented 9 months ago

there is also another related PR open: #5756

k8s-triage-robot commented 6 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 5 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

Shubham82 commented 4 months ago

/remove-lifecycle rotten

k8s-triage-robot commented 1 month ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

ddelange commented 1 month ago

was this not fixed by #4489 released as of cluster-autoscaler-1.24.0?

cc @drmorr0 @gjtempleton can you confirm this can be closed?

k8s-triage-robot commented 2 weeks ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

drmorr0 commented 2 weeks ago

Yes, I believe this can be closed; that PR should resolve this.

drmorr0 commented 2 weeks ago

/close

k8s-ci-robot commented 2 weeks ago

@drmorr0: Closing this issue.

In response to [this](https://github.com/kubernetes/autoscaler/issues/3241#issuecomment-2329373860):

> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.