jonathanbeber opened this issue 1 year ago
The client used to write the configmap seems to come from https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/main.go#L392, set in https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/main.go#L457 for AutoscalerOptions. That being said, as --cloud-config is not a kubernetes client for other cloud providers, I believe there's no way for this change to be made only in the ClusterAPI provider.
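To make the wiring concrete, here is a minimal toy sketch of the situation described above. This is not the actual autoscaler code; the struct and function names are illustrative stand-ins for the real AutoscalerOptions and status-writer, and only model the fact that the status writer uses the single client built from --kubeconfig while --cloud-config stays an opaque provider option:

```go
package main

import "fmt"

// kubeClient is a stand-in for the client-go clientset that the
// autoscaler builds from its --kubeconfig flag. Illustrative only.
type kubeClient struct {
	cluster string // which cluster this client talks to
}

// autoscalerOptions loosely mirrors AutoscalerOptions as populated in
// main.go: one Kubernetes client shared by the core components.
type autoscalerOptions struct {
	KubeClient kubeClient
	// CloudConfig is an opaque path handed to the cloud provider; for
	// most providers it is not a kubeconfig at all, which is why the
	// status writer cannot assume it yields a usable Kubernetes client.
	CloudConfig string
}

// writeStatusConfigMap models the status writer: it always goes through
// the shared KubeClient, i.e. the cluster named by --kubeconfig.
func writeStatusConfigMap(opts autoscalerOptions) string {
	return opts.KubeClient.cluster
}

func main() {
	opts := autoscalerOptions{
		KubeClient:  kubeClient{cluster: "workload"}, // from --kubeconfig
		CloudConfig: "mgmt.kubeconfig",               // from --cloud-config
	}
	// The status configmap lands in the workload cluster, not the
	// management cluster, because only KubeClient is a usable client.
	fmt.Println("status configmap written to:", writeStatusConfigMap(opts))
}
```

Changing this only for ClusterAPI would mean teaching the shared status writer that, for one provider, --cloud-config happens to be a second kubeconfig, which is exactly the coupling the comment above says is hard to justify.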
thanks for reporting this, it sounds like this might be really difficult to fix from the capi provider.
perhaps we should start with a docs update so that users know the configmap will be created in the cluster specified by the --kubeconfig
parameter?
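For such a docs update, a flag layout along these lines could make the behavior explicit. The mount paths below are illustrative assumptions; --kubeconfig and --cloud-config are the real flags used by the clusterapi provider's separate-cluster topology:

```yaml
# Illustrative container args for cluster-autoscaler running in the
# management cluster with a separate workload cluster (paths are examples).
args:
  - --cloud-provider=clusterapi
  # Workload cluster: where the nodes live. NOTE: the status configmap
  # is currently created here, via this client.
  - --kubeconfig=/mnt/workload/kubeconfig
  # Management cluster: where the Cluster API resources live.
  - --cloud-config=/mnt/management/kubeconfig
```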
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
i think we still need to deal with this somehow
/remove-lifecycle stale
/lifecycle stale
this needs fixing
/remove-lifecycle stale
/help-wanted
/help
@elmiko: This request has been marked as needing help from a contributor.
Please ensure that the issue body includes answers to the following questions:
For more details on the requirements of such an issue, please see here and ensure that they are met.
If this request no longer meets these requirements, the label can be removed by commenting with the /remove-help command.
/triage accepted
/lifecycle frozen
Which component are you using?: cluster-autoscaler for Cluster API.
What version of the component are you using?:
Component version: v1.27.2
What k8s version are you using (kubectl version)?: v1.27.4
What environment is this in?: AWS managed by CAPA.
What did you expect to happen?:
The status configmap should be created in the cluster reachable via the client built from the --cloud-config parameter.
What happened instead?:
The status configmap is created in the workload cluster.
How to reproduce it (as minimally and precisely as possible):
Deploy cluster-autoscaler with the "Autoscaler running in management cluster" topology, using service account credentials and a separate workload cluster, then check that the status configmap is created in the workload cluster.
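As a concrete check (the kubeconfig paths and namespace below are assumptions for a typical setup; cluster-autoscaler-status is the default name of the status configmap):

```shell
# Observed (per this report): the configmap exists in the workload cluster...
kubectl --kubeconfig workload.kubeconfig -n kube-system \
  get configmap cluster-autoscaler-status
# ...and is absent from the management cluster, even though the autoscaler
# runs there and --cloud-config points at it.
kubectl --kubeconfig management.kubeconfig -n kube-system \
  get configmap cluster-autoscaler-status
```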
Anything else we need to know?:
I would like to work on this task.