Open elmiko opened 7 months ago
/area provider/cluster-api
i've been hacking on a PR to add some timing metrics on the NodeGroups
interface function. i believe we spend the most time in this function and have been trying to prove out how the number of node groups affects the time that this call takes.
https://github.com/elmiko/kubernetes-autoscaler/commit/1a5d9cdd9e91754caac78ca87730cd2715dcf765
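roughly, the instrumentation just wraps the call with a timer and records the duration in a histogram. a minimal sketch of the idea (the metric name and wiring below are illustrative, not the exact code in that commit):

```go
// sketch: timing a NodeGroups-style call with a prometheus histogram.
// the metric name and registration here are illustrative only.
package main

import (
	"fmt"
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

var nodeGroupsDuration = prometheus.NewHistogram(prometheus.HistogramOpts{
	Name:    "cluster_autoscaler_node_groups_duration_seconds",
	Help:    "Time spent listing node groups from the cloud provider.",
	Buckets: prometheus.ExponentialBuckets(0.01, 2, 12),
})

func init() {
	prometheus.MustRegister(nodeGroupsDuration)
}

// timedNodeGroups wraps a NodeGroups-style call and records its duration.
func timedNodeGroups(list func() []string) []string {
	start := time.Now()
	groups := list()
	nodeGroupsDuration.Observe(time.Since(start).Seconds())
	return groups
}

func main() {
	groups := timedNodeGroups(func() []string {
		time.Sleep(50 * time.Millisecond) // stand-in for the real provider call
		return []string{"machinedeployment-a", "machineset-b"}
	})
	fmt.Println(groups)
}
```

with a metric like this it should be straightforward to graph NodeGroups latency against the number of node groups.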
I don't think kube-apiserver calls are the main bottleneck, but rather the cloud provider's NodeGroups() implementation. Currently it takes ~20 seconds with ~90 MachineSets. https://github.com/kubernetes/autoscaler/pull/6796 avoids an expensive copy loop by working with pointers instead, bringing each NodeGroups call down to ~5 seconds.
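The pattern is roughly this (an illustration of the idea only, not the actual diff in that PR): return pointers into an existing slice instead of copying every element on each call.

```go
// Illustration only: returning pointers to cached objects instead of copying
// each element on every NodeGroups() call. Not the actual change in the PR above.
package main

import "fmt"

type machineSet struct {
	name     string
	replicas int32
	// real MachineSet objects carry a full spec/status, so copying them is costly.
}

// copies every element each time it is called.
func nodeGroupsByValue(cache []machineSet) []machineSet {
	out := make([]machineSet, 0, len(cache))
	for _, ms := range cache {
		out = append(out, ms) // full struct copy per node group
	}
	return out
}

// returns pointers into the cache, avoiding the per-element copy.
func nodeGroupsByPointer(cache []machineSet) []*machineSet {
	out := make([]*machineSet, 0, len(cache))
	for i := range cache {
		out = append(out, &cache[i])
	}
	return out
}

func main() {
	cache := []machineSet{{name: "ms-a", replicas: 3}, {name: "ms-b", replicas: 5}}
	fmt.Println(len(nodeGroupsByValue(cache)), len(nodeGroupsByPointer(cache)))
}
```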
it seems we might have multiple areas for improvement. when i observed behavior with 50 to 75 node groups, i could see the performance becoming worse over time. it appears we might have some inefficiency in the way we handle all the various cluster-api CRs.
/area cluster-autoscaler
I have encountered a situation where the startup of the autoscaler (provider is alicloud) takes a very long time. The reason is that MixedTemplateNodeInfoProvider.Process() builds a nodeInfo for each node group, and during that it makes a DescribeScalingInstances request to the Alibaba Cloud API, which takes ~2s each time. My cluster has 4k nodes, so the startup phase takes a very long time.
I don't know whether this issue is caused by the long startup process of CA, or by the long loop phase after startup.
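To illustrate the scaling (the function names below are made up, not the actual alicloud provider code): one synchronous ~2s cloud request per node group means startup time grows linearly with the number of node groups.

```go
// Illustration only: why startup scales with the number of node groups when
// building each template node info requires a synchronous cloud API call.
// Names are made up; this is not the alicloud provider code.
package main

import (
	"fmt"
	"time"
)

// Stand-in for a DescribeScalingInstances-style request that takes ~2s.
func describeScalingInstances(group string) {
	time.Sleep(2 * time.Second)
}

func buildTemplateNodeInfos(groups []string) {
	start := time.Now()
	for _, g := range groups {
		describeScalingInstances(g) // one blocking cloud call per node group
	}
	// With N node groups this loop costs roughly 2*N seconds before the first scan.
	fmt.Printf("built %d node infos in %s\n", len(groups), time.Since(start))
}

func main() {
	groups := []string{"sg-1", "sg-2", "sg-3", "sg-4", "sg-5"}
	buildTemplateNodeInfos(groups)
}
```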
The cluster-api provider could add a node group cache, similar to what the AWS provider does with AwsManager.
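A minimal sketch of what such a cache could look like (this is the general shape only, not the AwsManager implementation): refresh the node group list from the management API only when the cached copy is older than a TTL.

```go
// Minimal sketch of a node-group cache with a refresh interval, similar in
// spirit to the cache kept by the AWS provider. Not the actual AwsManager code.
package main

import (
	"fmt"
	"sync"
	"time"
)

type nodeGroupCache struct {
	mu          sync.Mutex
	groups      []string
	lastRefresh time.Time
	ttl         time.Duration
	list        func() []string // expensive call to the management API
}

func (c *nodeGroupCache) NodeGroups() []string {
	c.mu.Lock()
	defer c.mu.Unlock()
	if time.Since(c.lastRefresh) > c.ttl {
		c.groups = c.list()
		c.lastRefresh = time.Now()
	}
	return c.groups
}

func main() {
	calls := 0
	cache := &nodeGroupCache{
		ttl: time.Minute,
		list: func() []string {
			calls++
			return []string{"machinedeployment-a", "machineset-b"}
		},
	}
	cache.NodeGroups()
	cache.NodeGroups() // served from cache; the expensive list runs only once
	fmt.Println("expensive list calls:", calls)
}
```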
/cc
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
i think this issue is still important but will require more research
/remove-lifecycle stale
Which component are you using?:
cluster-autoscaler
What version of the component are you using?:
Component version: all versions up to and including 1.30.0
What k8s version are you using (kubectl version)?:
this affects all kubernetes versions that are compatible with the cluster autoscaler
What environment is this in?:
clusterapi provider, with more than 50 node groups (e.g. MachineDeployments, MachineSets, MachinePools)
What did you expect to happen?:
expect cluster autoscaler to operate as normal
What happened instead?:
as the number of node groups increases, the performance of the autoscaler appears to degrade. each scan interval takes longer and longer to process, and in some cases (when node groups number in the hundreds) it can take more than 40 minutes to add a new node when pods are pending.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
this problem appears related to how the clusterapi provider interacts with the api server. when assessing activity in the cluster, the provider queries the api server for all the node groups, then queries again for the scalable resources, and potentially a third time for the infrastructure machine templates. i have a feeling that this interaction is causing the issues.
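as a rough picture of what i mean (function names here are illustrative, not the provider's actual code), each loop can end up issuing on the order of one list plus two gets per node group:

```go
// Rough picture of the per-loop API traffic described above. Function names
// are illustrative only; this is not the clusterapi provider's actual code.
package main

import "fmt"

// one LIST for the node groups (MachineDeployments/MachineSets/MachinePools)
func listNodeGroups() []string { return []string{"md-1", "md-2", "md-3"} }

// one GET per node group for its scalable resource
func getScalableResource(ng string) int { return 1 }

// potentially one more GET per node group for the infrastructure machine template
func getInfraMachineTemplate(ng string) int { return 1 }

func main() {
	calls := 1 // the initial list of node groups
	for _, ng := range listNodeGroups() {
		calls += getScalableResource(ng)
		calls += getInfraMachineTemplate(ng)
	}
	// with N node groups this is roughly 1 + 2N uncached requests per scan
	// interval, which adds up quickly when N is in the hundreds.
	fmt.Println("api server requests this loop:", calls)
}
```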
i think it's possible that extending the scan interval time might alleviate some of the issues, but i have not confirmed anything yet.
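for anyone who wants to experiment, the interval is controlled by the --scan-interval flag (10s by default); the value below is just an example, not a recommendation:

```
cluster-autoscaler --cloud-provider=clusterapi --scan-interval=60s
```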