ojundt opened this issue 4 months ago · Open
/area cluster-autoscaler
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
Which component are you using?: cluster-autoscaler

What version of the component are you using?: 1.29.0

What k8s version are you using (kubectl version)?: 1.29

What environment is this in?: AWS EKS

What did you expect to happen?: When I schedule a pod with nodeSelector kubernetes.io/os: windows targeting a Windows-based EKS Managed Node Group that is currently running at 0 nodes, I expect cluster-autoscaler to scale the node group up from zero.

What happened instead?: Cluster-autoscaler did not scale the node group up from zero. The logs indicate that the node group does not match the pod's nodeSelector.
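For reference, a minimal manifest of the kind of pod described above (the pod name and image are illustrative, not from the original report):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: windows-smoke-test        # illustrative name
spec:
  nodeSelector:
    kubernetes.io/os: windows     # the selector that should trigger scale-up from zero
  containers:
    - name: pause
      # assumed Windows-capable pause image; any image with a Windows
      # variant works for this repro, since the pod never schedules anyway
      image: mcr.microsoft.com/oss/kubernetes/pause:3.9
```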
My hunch: cluster-autoscaler doesn't know that the EKS Managed Node Group will inject the kubernetes.io/os=windows node label, because that label is not part of the DescribeNodegroup response that is used to populate the cache.
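To illustrate the hunch, here is an abbreviated sketch of a DescribeNodegroup response for such a node group, rendered as YAML for readability; the field names follow the EKS API, the values are placeholders:

```yaml
nodegroup:
  nodegroupName: windows-ng        # placeholder
  amiType: WINDOWS_CORE_2022_x86_64
  scalingConfig:
    minSize: 0
    maxSize: 2
    desiredSize: 0
  labels: {}
  # Only labels explicitly configured on the node group appear in "labels".
  # kubernetes.io/os=windows is applied on the node itself at registration
  # time, so it never shows up in this response, and a cache fed from it
  # cannot match the pod's nodeSelector while the group is at 0 nodes.
```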
How to reproduce it (as minimally and precisely as possible):
1. Create an EKS Managed Node Group with a Windows AMI type (e.g. WINDOWS_CORE_2022_x86_64) and a min count of 0.
2. Schedule a pod with nodeSelector kubernetes.io/os: windows and observe that cluster-autoscaler does not scale the node group up from zero.

Anything else we need to know?:
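A possible workaround, assuming the node-template tag convention from the cluster-autoscaler AWS documentation applies here: tag the node group's backing Auto Scaling Group to declare which labels its nodes will carry. A sketch as an eksctl config (the cluster and nodegroup names are placeholders, and propagateASGTags is assumed to be supported by your eksctl version):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster          # placeholder
  region: eu-west-1         # placeholder
managedNodeGroups:
  - name: windows-ng        # placeholder
    amiFamily: WindowsServer2022CoreContainer
    minSize: 0
    maxSize: 2
    desiredCapacity: 0
    # copy the tags below onto the backing Auto Scaling Group so
    # cluster-autoscaler can read them
    propagateASGTags: true
    tags:
      # declares the label nodes will carry, so cluster-autoscaler can
      # match the pod's nodeSelector when scaling from zero
      "k8s.io/cluster-autoscaler/node-template/label/kubernetes.io/os": windows
```

With the tag in place, cluster-autoscaler can match the pod's nodeSelector against the declared label even while the node group is at 0 nodes, independently of what DescribeNodegroup returns.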