kubernetes / autoscaler

Autoscaling components for Kubernetes
Apache License 2.0

cluster-autoscaler clusterapi provider performance degrades when there are a high number of node groups #6784

Open elmiko opened 7 months ago

elmiko commented 7 months ago

Which component are you using?:

cluster-autoscaler

What version of the component are you using?:

Component version: all versions up to and including 1.30.0

What k8s version are you using (kubectl version)?:

this affects all kubernetes versions that are compatible with the cluster autoscaler

What environment is this in?:

clusterapi provider, with more than 50 node groups (e.g. MachineDeployments, MachineSets, MachinePools)

What did you expect to happen?:

expect cluster autoscaler to operate as normal

What happened instead?:

as the number of node groups increases, the performance of the autoscaler appears to degrade. it takes longer and longer to process the scan interval and in some cases (when node groups are in the 100s) it can take more than 40 minutes to add a new node when pods are pending.

How to reproduce it (as minimally and precisely as possible):

  1. setup a cluster with clusterapi and cluster autoscaler
  2. create 100 machinedeployments
  3. configure autoscaler to recognize all 100 machinedeployments as node groups
  4. create a pending job on the cluster
  5. observe the autoscaler behavior
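For step 4, one way to get pods that stay Pending is a Job whose resource request exceeds any existing node, which forces the autoscaler to consider a scale-up. The names and request size below are illustrative, not taken from this issue:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pending-job          # illustrative name
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: pause
        image: registry.k8s.io/pause:3.9
        resources:
          requests:
            cpu: "64"        # larger than any current node, so the pod stays Pending
```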

Anything else we need to know?:

this problem appears related to how the clusterapi provider interacts with the api server. when assessing activity in the cluster, the provider will query the api server for all the node groups, then query again for the scalable resources, and potentially a third time for the infrastructure machine template. i have a feeling that this interaction is causing the issues.

i think it's possible that extending the scan interval time might alleviate some of the issues, but i have not confirmed anything yet.
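For context, the loop interval is controlled by the cluster autoscaler's `--scan-interval` flag (default 10s); extending it might look like this (the flag value is illustrative):

```shell
# Raise the main loop interval from the 10s default; 60s is an illustrative value.
cluster-autoscaler \
  --cloud-provider=clusterapi \
  --scan-interval=60s
```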

enxebre commented 7 months ago

/area provider/cluster-api

elmiko commented 7 months ago

i've been hacking on a PR to add some timing metrics on the NodeGroups interface function. i believe we spend the most time in this function and have been trying to prove out how the number of node groups affects the time that this call takes.

https://github.com/elmiko/kubernetes-autoscaler/commit/1a5d9cdd9e91754caac78ca87730cd2715dcf765

enxebre commented 6 months ago

I don't think kube-apiserver (kas) calls are the main bottleneck, but rather the cloudprovider.NodeGroups function implementation. Currently it takes ~20 seconds with ~90 MachineSets. This PR https://github.com/kubernetes/autoscaler/pull/6796 avoids an expensive loop by copying pointers instead of whole structs, bringing each NodeGroups call down to ~5 seconds.
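The kind of change described there can be sketched abstractly; the type and field sizes below are stand-ins, not the actual MachineSet type or the PR's code:

```go
package main

import "fmt"

// machineSet stands in for a heavyweight API object; the real
// MachineSet type carries a full spec and status.
type machineSet struct {
	name string
	spec [1024]byte // placeholder for a large embedded struct
}

// copyAll models the expensive pattern: each append copies the
// whole struct by value.
func copyAll(sets []machineSet) []machineSet {
	out := make([]machineSet, 0, len(sets))
	for _, s := range sets {
		out = append(out, s) // full struct copy per element
	}
	return out
}

// pointerAll models the cheaper pattern: collect pointers to the
// existing items instead of copying them.
func pointerAll(sets []machineSet) []*machineSet {
	out := make([]*machineSet, 0, len(sets))
	for i := range sets {
		out = append(out, &sets[i]) // copies only a pointer
	}
	return out
}

func main() {
	sets := []machineSet{{name: "ms-a"}, {name: "ms-b"}}
	ptrs := pointerAll(sets)
	fmt.Println(len(ptrs), ptrs[0].name) // 2 ms-a
}
```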

elmiko commented 6 months ago

it seems we might have multiple areas for improvement. when i observed behavior with 50 to 75 node groups, i could see the performance becoming worse over time. it appeared that we might have inefficiencies in the way we handle all the various cluster-api CRs.

adrianmoisey commented 4 months ago

/area cluster-autoscaler

songminglong commented 3 months ago

I have encountered a situation where the startup of the autoscaler (with the alicloud provider) takes a very long time. The reason is that MixedTemplateNodeInfoProvider.Process() builds a nodeinfo for each node group, and during initialization it makes a DescribeScalingInstances request to the alicloud API for each one; each request takes ~2s. My cluster has 4k nodes, so the startup phase takes a very long time.

I don't know whether this issue is caused by the long startup process of CA, or by the long loop phase after startup?

songminglong commented 3 months ago

The cluster-api provider could support a node group cache for caching node groups, like the AWS provider's AwsManager does.

songminglong commented 3 months ago

/cc

k8s-triage-robot commented 6 days ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

Mark this issue as fresh with /remove-lifecycle stale
Close this issue with /close
Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

elmiko commented 5 days ago

i think this issue is still important but will require more research

/remove-lifecycle stale