Closed: kalamba closed this issue 2 years ago
@kalamba Reproduced on my Hetzner Cloud deployment. That looks really bad, because it can prevent the cluster from scaling in some cases.
curl -I -H "Authorization: Bearer $HCLOUD_TOKEN" https://api.hetzner.cloud/v1/servers
HTTP/2 429
date: Thu, 08 Jul 2021 18:27:17 GMT
content-type: application/json
ratelimit-limit: 3600
ratelimit-remaining: 0
ratelimit-reset: 1625772437
x-correlation-id: bbbc20a1-ac8a-4ef3-8b78-2158d7bbba44
strict-transport-security: max-age=15724800; includeSubDomains
access-control-allow-origin: *
access-control-allow-credentials: true
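If it helps others watching this issue: the same check can be scripted to alert before the quota reaches zero. Below is a minimal Go sketch, not part of cluster-autoscaler, that mirrors the curl call above and prints the RateLimit headers. Note that the probe itself also costs one request against the limit.

```go
package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Same request as the curl example: one authenticated call to /v1/servers,
	// reading only the rate-limit headers from the response.
	req, err := http.NewRequest(http.MethodGet, "https://api.hetzner.cloud/v1/servers", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+os.Getenv("HCLOUD_TOKEN"))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	fmt.Println("status:   ", resp.Status)
	fmt.Println("limit:    ", resp.Header.Get("RateLimit-Limit"))
	fmt.Println("remaining:", resp.Header.Get("RateLimit-Remaining"))
	fmt.Println("reset:    ", resp.Header.Get("RateLimit-Reset"))
}
```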
Ping @LKaemmerling @kalamba. Is there a workaround for this?
@jawabuu I don't know, but in my current setup it doesn't prevent CA from working, so I still use it. That may just be my case, though, because my cluster doesn't scale very frequently.
@sergeyshevch CA does work, but it exhausts your available limits. So if you need to make other API calls, e.g. from Terraform, they will fail. Also, depending on the size of your cluster/nodes and how soon you need scale-up events to occur, autoscaling itself will be affected.
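For anyone needing a stopgap until the provider throttles itself: if you build cluster-autoscaler (or any hcloud-go consumer) yourself, you can wrap the client's HTTP transport with a client-side limiter so requests are spaced out instead of bursting. This is only a sketch, not the provider's actual code; throttledTransport and newThrottledClient are hypothetical names, and 0.5 req/s is an arbitrary safety margin below the 1 req/s average budget.

```go
package main

import (
	"net/http"
	"time"

	"github.com/hetznercloud/hcloud-go/hcloud"
	"golang.org/x/time/rate"
)

// throttledTransport delays each outgoing request until the client-side
// limiter grants a token, keeping total usage under the API budget.
type throttledTransport struct {
	base    http.RoundTripper
	limiter *rate.Limiter
}

func (t *throttledTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	// Wait blocks until a token is available or the request context is done.
	if err := t.limiter.Wait(req.Context()); err != nil {
		return nil, err
	}
	return t.base.RoundTrip(req)
}

func newThrottledClient(token string) *hcloud.Client {
	// 3600 requests/hour averages to 1 request/second; 0.5 req/s with a
	// small burst leaves headroom for other tools like Terraform.
	httpClient := &http.Client{
		Timeout: 30 * time.Second,
		Transport: &throttledTransport{
			base:    http.DefaultTransport,
			limiter: rate.NewLimiter(rate.Limit(0.5), 5),
		},
	}
	return hcloud.NewClient(
		hcloud.WithToken(token),
		hcloud.WithHTTPClient(httpClient),
	)
}
```

The trade-off is slower scale-up detection in exchange for leaving quota free for other API consumers.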
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
@teksuo: You can't reopen an issue/PR unless you authored it or you are a collaborator.
Which component are you using?:
cluster-autoscaler (cloud provider hetzner)
What version of the component are you using?:
Component version: 1.21
What k8s version are you using (kubectl version)?: v1.20.5
What environment is this in?: Hetzner Cloud
What did you expect to happen?:
What happened instead?: When I ran cluster-autoscaler in my k8s cluster in Hetzner Cloud, I saw that the API request limits were running out. The Hetzner Cloud API is rate-limited to 3600 requests per hour:
https://docs.hetzner.cloud/#rate-limiting
Cluster-autoscaler with default settings uses up the entire available API request budget (ratelimit-remaining: 0):
curl -I -H "Authorization: Bearer $HCLOUD_TOKEN" https://api.hetzner.cloud/v1/servers
How to reproduce it (as minimally and precisely as possible):
curl -I -H "Authorization: Bearer $HCLOUD_TOKEN" https://api.hetzner.cloud/v1/servers
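For scale: 3600 requests per hour works out to an average budget of exactly one request per second, while cluster-autoscaler's default --scan-interval is 10 seconds. So any provider refresh that makes more than roughly ten API calls per scan loop will exhaust the hourly budget on its own, before any other API consumer gets a request in.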
Anything else we need to know?:
This happens even when I run cluster-autoscaler without a node group configured (i.e. without - --nodes=1:10:CPX11:FSN1:pool1).
cluster-autoscaler.yaml
tcpdump in the cluster-autoscaler container shows about 5 packets per second to the api.hetzner.cloud IP (213.239.246.1):
tcpdump from cluster-autoscaler to api.hetzner.cloud (about 5 pps)
Hetzner Cloud API IP:
dig api.hetzner.cloud (resolves to 213.239.246.1)
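An alternative to tcpdump for seeing exactly which endpoints are being polled: hcloud-go can log every request and response when given a debug writer. A minimal sketch, assuming a recent hcloud-go build you control; the standalone main and the example call are just for illustration, in the autoscaler the provider would be making its own calls:

```go
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/hetznercloud/hcloud-go/hcloud"
)

func main() {
	// WithDebugWriter makes hcloud-go dump every request and response to
	// stderr, which shows which endpoints are hit without needing tcpdump.
	client := hcloud.NewClient(
		hcloud.WithToken(os.Getenv("HCLOUD_TOKEN")),
		hcloud.WithDebugWriter(os.Stderr),
	)

	// One example call; each page it fetches appears in the debug output.
	servers, err := client.Server.All(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Println("servers:", len(servers))
}
```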