kubernetes / autoscaler

Autoscaling components for Kubernetes
Apache License 2.0

Hetzner resource_unavailable error although resource available via UI #7038

Open karsten42 opened 4 months ago

karsten42 commented 4 months ago

Which component are you using?: cluster-autoscaler

What version of the component are you using?:

Component version: v1.30.1

What environment is this in?: Hetzner

What did you expect to happen?: A new node being created and added to the cluster

What happened instead?: An error saying that the node cannot be provisioned, although it is possible to create an instance with the same specs and in the same location via the UI. Error:

hetzner_node_group.go:120] failed to create error: could not create server type ccx23 in region nbg1: we are unable to provision servers for this location, try with a different location or try later (resource_unavailable)

adrianmoisey commented 4 months ago

/area cluster-autoscaler

apricote commented 4 months ago

/area provider/hetzner

Certain server types might be restricted for periods of time. During that time, your previous orders are taken into account to evaluate whether you can create more servers of these types. Depending on the account you used and the precise timing, this can always happen.

This is not a bug in the Hetzner Provider in cluster-autoscaler, but inherent behavior of our platform.

karsten42 commented 4 months ago

I see. Thanks for the explanation. Is there anything one could do to become unrestricted? This issue persisted for multiple hours, which is very problematic if you have pods stuck in Pending.

restuhaqza commented 3 months ago

Hi, I want to ask about this issue. Is there any documentation about the restriction? The restriction really hurts cluster stability.

apricote commented 3 months ago

There is a status message about the limited availability of cx plans.

I would recommend using multiple node groups with different types/locations and the priority expander to try your preferred one, falling back to other types/locations if the preferred one is not available.
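
For illustration, here is a minimal sketch of that setup. It assumes two hypothetical Hetzner node groups (pool-ccx23-nbg1 and pool-cpx41-fsn1 are made-up names) defined via the Hetzner provider's --nodes flags, and that the autoscaler is started with --expander=priority; the priority expander reads its configuration from a ConfigMap named cluster-autoscaler-priority-expander in the namespace the autoscaler runs in:

```yaml
# Sketch: priority expander configuration for cluster-autoscaler.
# The node group names below are hypothetical; they must match the names used in the
# Hetzner provider's --nodes=<min>:<max>:<instance-type>:<location>:<name> flags,
# e.g. --nodes=1:10:ccx23:nbg1:pool-ccx23-nbg1 and --nodes=1:10:cpx41:fsn1:pool-cpx41-fsn1.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-autoscaler-priority-expander
  namespace: kube-system          # same namespace as the cluster-autoscaler deployment
data:
  priorities: |-
    # Higher number = higher priority; entries are regexes matched against node group names.
    20:
      - pool-ccx23-nbg1           # preferred type/location
    10:
      - pool-cpx41-fsn1           # fallback type/location
```

With something like this in place, scale-ups should try the preferred node group first and fall back to the other group when the preferred one cannot provision (for example during a resource_unavailable period), so pending pods are not stuck waiting on a single unavailable type/location.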

k8s-triage-robot commented 2 weeks ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale