Closed: mtavaresmedeiros closed this issue 1 year ago
From my perspective this can't be a bug in the autoscaler; it works for me. We inject the ConfigMap into the environment like this:
containers:
  - image: k8s.gcr.io/autoscaling/cluster-autoscaler:v1.23.1
    name: cluster-autoscaler
    command:
      - ./cluster-autoscaler
      - --v=4
      - --stderrthreshold=$(IAC_CAS_LOG_LEVEL)
      - --cloud-provider=$(IAC_CAS_CLOUD_PROVIDER)
      - --skip-nodes-with-local-storage=$(IAC_CAS_SKIP_NODES_WITH_LOCAL_STORAGE)
      - --expander=$(IAC_CAS_EXPANDER)
      - --balance-similar-node-groups=$(IAC_CAS_BALANCE_SIMILAR_NODE_GROUPS)
      - --scale-down-enabled=$(IAC_CAS_SCALE_DOWN_ENABLED)
      - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/$(IAC_CLUSTER_NAME)
      - --scan-interval=$(IAC_CAS_SCAN_INTERVAL)
      - --skip-nodes-with-system-pods=$(IAC_CAS_SKIP_NODES_WITH_SYSTEM_PODS)
    envFrom:
      - configMapRef:
          name: iac-cluster-autoscaler
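For reference, the injected ConfigMap might look like the sketch below. The key names are taken from the `$(...)` references in the args above; the values are illustrative placeholders, not taken from the original thread:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: iac-cluster-autoscaler
data:
  # All values are examples; substitute your own settings.
  IAC_CAS_LOG_LEVEL: "info"
  IAC_CAS_CLOUD_PROVIDER: "aws"
  IAC_CAS_SKIP_NODES_WITH_LOCAL_STORAGE: "false"
  IAC_CAS_EXPANDER: "least-waste"
  IAC_CAS_BALANCE_SIMILAR_NODE_GROUPS: "true"
  IAC_CAS_SCALE_DOWN_ENABLED: "true"
  IAC_CAS_SCAN_INTERVAL: "10s"
  IAC_CAS_SKIP_NODES_WITH_SYSTEM_PODS: "true"
  IAC_CLUSTER_NAME: "my-cluster"
```

With `envFrom`, every key in the ConfigMap becomes an environment variable in the container, which Kubernetes then substitutes into the `$(VAR)` references in `command`.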
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Which component are you using?: cluster-autoscaler
What version of the component are you using?:
Cluster Autoscaler: 1.21.1
Helm Chart Version: 9.13.1
Component version: v1.21.14-eks-6d3986b
kubectl version output:
What environment is this in?: EKS - AWS
What did you expect to happen?: Use Environment Variables
What happened instead?: Environment variables are not substituted into the command arguments.
How to reproduce it (as minimally and precisely as possible): Try to reference an environment variable in the command arguments: --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/$EKS_CLUSTER_NAME
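Note on the reproduction above: Kubernetes only expands environment variables in `command`/`args` when they are written with the `$(VAR)` syntax; a shell-style `$EKS_CLUSTER_NAME` is passed through literally, because the container entrypoint is executed without a shell. A minimal sketch of the working form, assuming `EKS_CLUSTER_NAME` is defined in the container's environment (the variable name is kept from the report):

```yaml
command:
  - ./cluster-autoscaler
  # $(VAR) is expanded by Kubernetes; $VAR would be passed through literally.
  - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/$(EKS_CLUSTER_NAME)
```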