Closed: mjvankampen closed this issue 8 months ago.
Could it be:
Log event 36, 2024-02-22T17:36:00.244+01:00, from log stream [instance/6168b02e21fc4dbc93e9ac5400c6cc00](https://eu-west-1.console.aws.amazon.com/cloudwatch/home?region=eu-west-1#logsV2:log-groups/log-group/$252Faws$252Fapprunner$252FRunsOnService-5MKOLtIFO5yh$252Fd2b9ee3c564b4e94aa7ba84120b959f6$252Fapplication/log-events/instance$252F6168b02e21fc4dbc93e9ac5400c6cc00$3Fstart$3D2024-02-22T16$253A36$253A00.244Z):

Field | Value
-- | --
@ingestionTime | 1708619764947
@log | 398481473716:/aws/apprunner/RunsOnService-5MKOLtIFO5yh/d2b9ee3c564b4e94aa7ba84120b959f6/application
@logStream | instance/6168b02e21fc4dbc93e9ac5400c6cc00
@message | WARN (probot): ⚠️ Failed to create instance with type m7a.large: InsufficientInstanceCapacity: We currently do not have sufficient m7a.large capacity in the Availability Zone you requested (eu-west-1b). Our system will be working on provisioning additional capacity. You can currently get m7a.large capacity by not specifying an Availability Zone in your request or choosing eu-west-1a, eu-west-1c.
@timestamp | 1708619760244
@mjvankampen this issue (incorrect error message) was fixed in v1.6.3, please upgrade :)
But yes, the underlying error is likely the one you see in the logs. I would recommend adding a fallback family type (e.g. `family=m7a+m6a`) to your runs-on definition, to ensure you can always get an instance. The `m7a` family is pretty constrained these days.
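Based on the label format that appears later in this thread, a job with a fallback family might look like the following sketch (the runner size, image, and steps are illustrative, and exact label keys may vary by RunsOn version):

```yaml
# Hypothetical GitHub Actions job using RunsOn labels with a fallback
# instance family: try m7a first, fall back to m6a when capacity is short.
jobs:
  build:
    runs-on: runs-on,runner=2cpu-linux,family=m7a+m6a,image=ubuntu22-base-x64
    steps:
      - uses: actions/checkout@v4
```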
Ah cool, thank you! Will let you know if this helps. Is it also possible to not specify an availability zone?
Thanks for your awesome work btw!
For now it's not possible. The best way is to find a zone where instances are available (using the spot pricing history tool in the EC2 dashboard, under Spot), and configure the CloudFormation stack to use that one.
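To compare zones from the command line, the real `aws ec2 describe-spot-price-history` command can be used; zones that return entries at lower prices generally have more spare capacity. A minimal sketch (the credential check is only there so the snippet degrades gracefully when AWS access is not configured):

```shell
# Compare recent spot prices for m7a.large across eu-west-1 zones.
INSTANCE_TYPE="m7a.large"
REGION="eu-west-1"

if aws sts get-caller-identity >/dev/null 2>&1; then
  # Print one row per price point, grouped by Availability Zone.
  aws ec2 describe-spot-price-history \
    --region "$REGION" \
    --instance-types "$INSTANCE_TYPE" \
    --product-descriptions "Linux/UNIX" \
    --query 'SpotPriceHistory[].{AZ:AvailabilityZone,Price:SpotPrice}' \
    --output table
else
  echo "AWS credentials not configured; run 'aws configure' first."
fi
```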
Alright, I switched to eu-west-1a instead of 1b by following the update guide. But now nothing seems to run anymore, although I see requests in the logs (sorry, bit of an AWS noob).
I got this error after changing the Availability Zone by upgrading the stack, resulting in none of the instances being launched:

{"level":40,"time":1708664731881,"pid":127,"hostname":"ip-10-0-172-66.eu-west-1.compute.internal","name":"probot","msg":"⚠️ Failed to create instance with type m7a.large: InvalidSubnetID.NotFound: The subnet ID 'subnet-0af3e6652276be6b5' does not exist."}

Ps. a note: I'm deleting the stack now to make it again. But to delete it you need to delete the S3 buckets manually. No biggie for me, but fyi.
> I got this error after changing the Availability Zone by upgrading the stack, resulting in none of the instances being launched: {"level":40,"time":1708664731881,"pid":127,"hostname":"ip-10-0-172-66.eu-west-1.compute.internal","name":"probot","msg":"⚠️ Failed to create instance with type m7a.large: InvalidSubnetID.NotFound: The subnet ID 'subnet-0af3e6652276be6b5' does not exist."}
You are correct, I think there is a little race condition in the way the RunsOn service retrieves its parameters from the stack outputs. If the service launches before the stack converges, it can fetch the old stack outputs instead of the new ones.
I'll fix that today, sorry for the hassle.
> Ps. a note: I'm deleting the stack now to make it again. But to delete it you need to delete the S3 buckets manually. No biggie for me, but fyi.
The bucket is not automatically deleted because it contains your GitHub App credentials, so that you can move the `runs-on/.env` file contained within into a new bucket, in case you need to re-create the stack but don't want to re-register the GitHub App.
So yes, for uninstalling you need to select "Retain bucket" after the first deletion attempt, then delete the stack again. A new documentation site will also be pushed today, with an uninstall page.
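A sketch of moving the credentials file between buckets with the standard `aws s3 cp` command (both bucket names are hypothetical placeholders, not taken from this thread):

```shell
# Copy the retained GitHub App credentials into the bucket created by
# the new stack, so the app does not need to be re-registered.
# Substitute your actual bucket names; the copy will fail if either
# bucket does not exist.
OLD_BUCKET="runs-on-old-stack-bucket"
NEW_BUCKET="runs-on-new-stack-bucket"

if aws sts get-caller-identity >/dev/null 2>&1; then
  aws s3 cp "s3://$OLD_BUCKET/runs-on/.env" "s3://$NEW_BUCKET/runs-on/.env"
else
  echo "AWS credentials not configured; skipping copy."
fi
```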
v1.6.4 has been released to fix the issue with outdated stack outputs being used by the app.
Awesome, everything is up and running again. Will let you know how it goes!
@mjvankampen great! If you're interested in joining a support slack channel, let me know cyril@runs-on.com
I get this error every now and then through the RunsOn Alerts notification service. But I am not sure what is going on. The label I use is:
`runs-on, runner=1cpu-linux, image=ubuntu22-base-x64`

Does anyone have any pointers?