Closed cep21 closed 2 weeks ago
Having the same issue here. According to its documentation, `SetDesiredCapacity` will not return any error related to `InsufficientInstanceCapacity`. We might need to check the scaling activities by calling `DescribeScalingActivities`.
```json
{
    "Activities": [
        {
            "ActivityId": "ee05cf07-241b-2f28-2be4-3b60f77a76e9",
            "AutoScalingGroupName": "nodes-gpu-spot-cn-north-1a.aws-cn-north-1.prod-1.k8s.local",
            "Description": "Launching a new EC2 instance. Status Reason: There is no Spot capacity available that matches your request. Launching EC2 instance failed.",
            "Cause": "At 2020-08-06T03:20:39Z an instance was started in response to a difference between desired and actual capacity, increasing the capacity from 0 to 1.",
            "StartTime": "2020-08-06T03:20:43.979Z",
            "EndTime": "2020-08-06T03:20:43Z",
            "StatusCode": "Failed",
            "StatusMessage": "There is no Spot capacity available that matches your request. Launching EC2 instance failed.",
            "Progress": 100,
            "Details": "{\"Subnet ID\":\"subnet-5d6fb339\",\"Availability Zone\":\"cn-north-1a\"}"
        },
        ...
    ]
}
```
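The check described above could be sketched as a helper that scans `DescribeScalingActivities` output for failures whose reasons are unlikely to clear within the scale-up timeout. This is an illustrative sketch, not cluster-autoscaler's actual implementation; the function names and the list of failure strings are assumptions:

```python
from datetime import datetime, timedelta, timezone

# Failure reasons unlikely to resolve within the scale-up timeout.
# This list is illustrative and would need tuning in practice.
PERSISTENT_FAILURES = (
    "InsufficientInstanceCapacity",
    "There is no Spot capacity available",
    "price-too-low",
)

def is_persistent_failure(activity: dict) -> bool:
    """Return True if a scaling activity failed for a reason that is
    unlikely to change in the next few minutes."""
    if activity.get("StatusCode") != "Failed":
        return False
    message = activity.get("StatusMessage", "")
    return any(reason in message for reason in PERSISTENT_FAILURES)

def recent_persistent_failures(activities: list, window_minutes: int = 15) -> list:
    """Filter DescribeScalingActivities records down to recent, persistent failures."""
    cutoff = datetime.now(timezone.utc) - timedelta(minutes=window_minutes)
    out = []
    for a in activities:
        start = datetime.strptime(
            a["StartTime"], "%Y-%m-%dT%H:%M:%S.%fZ"
        ).replace(tzinfo=timezone.utc)
        if start >= cutoff and is_persistent_failure(a):
            out.append(a)
    return out
```

A caller would fetch the activities with the AWS SDK and treat any ASG with a recent persistent failure as unable to scale up, falling back to another group.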
I think the title of this issue should be amended to include other holding states. For example, I'm running into a similar issue with `price-too-low`. If the maximum spot price for my ASGs is below the current spot prices, cluster-autoscaler waits quite a while before it attempts to use a non-spot ASG.
It's not just spot. Another example: you can hit your account limit on the number of instances of a specific instance type. That is also unlikely to change in the next 15 minutes, and it's best to try another ASG.
A general understanding of failure states that are unlikely to change could be very helpful.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Super important! /remove-lifecycle stale
Looking at the AWS API, it seems there is no reliable way to find out that the scale-out for a particular `SetDesiredCapacity` call has failed. If `SetDesiredCapacity` returned an `ActivityId` for the scaling activity, that would work. Otherwise, I personally can't come up with anything better than parsing autoscaling activities newer than my `SetDesiredCapacity` API call, and I don't feel that approach is production-ready. Any better ideas?
I wouldn't expect anything that ties back to a single `SetDesiredCapacity`, since it's async and there could be multiple calls.

> parsing autoscaling activities "younger" than my `SetDesiredCapacity` API call

Maybe look at the last activity (rather than all of them); if it's recent (for some definition of recent), then assume the capacity isn't able to change right now and quickly fail over any scaling operation.
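The last-activity heuristic suggested above could look something like the following sketch; the function name and the "recent" window are assumptions for illustration only:

```python
from datetime import datetime, timedelta, timezone

def should_back_off(last_activity: dict,
                    recent_window: timedelta = timedelta(minutes=5)) -> bool:
    """If the most recent scaling activity failed within `recent_window`,
    assume the ASG cannot scale right now and fail over to another group.
    EndTime uses the format seen in DescribeScalingActivities output."""
    if last_activity.get("StatusCode") != "Failed":
        return False
    end = datetime.strptime(
        last_activity["EndTime"], "%Y-%m-%dT%H:%M:%SZ"
    ).replace(tzinfo=timezone.utc)
    return datetime.now(timezone.utc) - end < recent_window
```

This avoids correlating activities to a specific `SetDesiredCapacity` call: any recent failure on the group is treated as a signal to back off.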
/lifecycle stale
Super important! /remove-lifecycle stale
This is important for us too, same use case as OP.
/lifecycle stale
/remove-lifecycle stale
Any updates regarding this? It's super important for us, and I'm sure for many others. Also, where is this magic number of 15 minutes set? Is it configurable?
I think the 15-minute magic number is set by `--max-node-provision-time`. It would certainly be a nice feature to scan the scaling events and instantly mark the ASG as dead for the next x minutes.
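For reference, that timeout can be changed on the cluster-autoscaler command line; the value shown here is just an example:

```shell
# Shorten how long cluster-autoscaler waits for a requested node to appear
# before it considers the scale-up failed (the default is 15 minutes).
cluster-autoscaler \
  --cloud-provider=aws \
  --max-node-provision-time=5m
```

Lowering this shortens the wait before falling back to another ASG, but it applies globally, not per failure reason.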
What if we improve detection of "ASG can't be scaled up" by sending launch-failure notifications to an SNS topic, like:

```shell
aws autoscaling put-notification-configuration \
  --auto-scaling-group-name <value> \
  --topic-arn <value> \
  --notification-types "autoscaling:EC2_INSTANCE_LAUNCH_ERROR"
```

Then we can subscribe an SQS queue to this topic, and cluster-autoscaler can start polling that SQS queue after initiating a scale-up activity. As this approach requires some configuration effort, it should be disabled by default. But for use cases where fast detection of launch failures is useful, like with spot ASGs, users can configure the corresponding infrastructure (SNS, SQS, ASG notifications) and enable this "fail fast" detection method.
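The consumer side of that proposal could be sketched as a parser for the SNS-wrapped notification an SQS poller would receive. The polling loop itself (e.g. via an AWS SDK) is omitted, and the helper name is illustrative:

```python
import json

def parse_launch_error(sqs_body: str):
    """Return (asg_name, status_message) if the SQS message body carries an
    autoscaling:EC2_INSTANCE_LAUNCH_ERROR event, else None.

    SNS delivers to SQS as a JSON envelope whose "Message" field holds the
    ASG notification payload as a JSON string."""
    envelope = json.loads(sqs_body)
    event = json.loads(envelope["Message"])
    if event.get("Event") != "autoscaling:EC2_INSTANCE_LAUNCH_ERROR":
        return None
    return event.get("AutoScalingGroupName"), event.get("StatusMessage", "")
```

On a match, the autoscaler could immediately mark that ASG as unable to scale up instead of waiting out the provision timeout.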
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Mark this issue as rotten with `/lifecycle rotten`
- Close this issue with `/close`

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/lifecycle rotten
/remove-lifecycle rotten
With the recent changes making use of `eks:DescribeNodegroup` (https://github.com/kubernetes/autoscaler/commit/b4cadfb4e25b6660c41dbe2b73e66e9a2f3a2cc9), can we use the health information from the nodegroup (https://docs.aws.amazon.com/cli/latest/reference/eks/describe-nodegroup.html)? An unhealthy node group should be excluded from the calculation.
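A minimal sketch of that health check, assuming the `DescribeNodegroup` response shape documented for EKS (the `nodegroup.health.issues` list); the helper name is made up for illustration:

```python
def is_nodegroup_healthy(describe_nodegroup_response: dict) -> bool:
    """Inspect the health.issues list from an eks:DescribeNodegroup response.
    An empty issues list is treated as healthy; any reported issue
    (e.g. a launch-failure code) marks the nodegroup unhealthy."""
    nodegroup = describe_nodegroup_response.get("nodegroup", {})
    issues = nodegroup.get("health", {}).get("issues", [])
    return len(issues) == 0
```

Nodegroups failing this check could then be skipped when picking a group to scale up, rather than waiting for the provision timeout.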
/lifecycle stale
/lifecycle rotten
/remove-lifecycle rotten
/lifecycle stale
/lifecycle rotten
/remove-lifecycle rotten
We are using the "priority" expander in our autoscaler config, which doesn't solve this case. If a rebalance recommendation happens on an ASG (we have two AZs), sometimes spot is unavailable in one AZ, but the autoscaler doesn't fall back to the on-demand node group. Is there a way we can make that fallback to on-demand happen?
Any updates on the fix for this case?
/lifecycle stale
/remove-lifecycle rotten
Or at least a workaround? I can also verify it's not just spot. We're getting the same issue with a k8s cluster running on regular EC2 instances. We currently have three autoscaling groups in us-east-2a, us-east-2b, and us-east-2c that are stuck bouncing back and forth between max and max-1 because a zone rebalancing failed based on capacity in that zone.
Was this not fixed by https://github.com/kubernetes/autoscaler/pull/4489, released as of cluster-autoscaler-1.24.0?
/remove-lifecycle stale
There is also another related PR open: #5756
/lifecycle stale
/lifecycle rotten
/remove-lifecycle rotten
/lifecycle stale
Was this not fixed by #4489, released as of cluster-autoscaler-1.24.0?
cc @drmorr0 @gjtempleton can you confirm this can be closed?
/lifecycle rotten
Yes, I believe this can be closed, that PR should resolve this.
/close
@drmorr0: Closing this issue.
I have two ASGs: a spot ASG and an on-demand ASG. They are GPU nodes, so spot instances frequently aren't available. AWS tells us very quickly that a spot instance is unavailable: we can see "Could not launch Spot Instances. InsufficientInstanceCapacity - There is no Spot capacity available that matches your request. Launching EC2 instance failed" in the ASG logs.
The current behavior is that the autoscaler tries to use the spot ASG for 15 minutes (my current timeout) before it gives up and tries a non-spot ASG. Ideally, it could notice that the reason the ASG did not scale up (InsufficientInstanceCapacity) is unlikely to go away in the next 15 minutes, mark that group as unable to scale up, and fall back to the on-demand ASG.