Closed: greglangford closed this issue 3 years ago
I am closing this issue; I now realise that the label being on the node is not actually needed. It would still be nice if Hetzner instance labels were imported, but I think that falls under the remit of the hcloud-cloud-controller-manager.

My reason for opening this ticket was that, without the label on the node, it appeared impossible to use nodeSelector or nodeAffinity to schedule workloads on specific pools. It turns out all that is needed is the following, replacing insecure-workload with the name of your pool. Using the hcloud/node-group key lets the autoscaler's scale-up simulation, which runs before the node starts, work out whether the pod being scheduled would fit on the pool named in the nodeAffinity.
```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: hcloud/node-group
          operator: In
          values:
          - insecure-workload
```
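For a simple equality match like this, the same constraint can also be written with the shorter nodeSelector field, which the scheduler (and the autoscaler's simulation) treats equivalently. A minimal sketch, assuming the same hcloud/node-group key and pool name:

```yaml
# Equivalent nodeSelector form of the affinity rule above
# (a sketch; assumes the same hcloud/node-group key and pool name).
nodeSelector:
  hcloud/node-group: insecure-workload
```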
Which component are you using?: cluster-autoscaler
What version of the component are you using?: 1.21.0
Component version: k8s.gcr.io/autoscaling/cluster-autoscaler:v1.21.0
What k8s version are you using (`kubectl version`)?: 1.21.4
What environment is this in?: Hetzner Cloud
What did you expect to happen?: Expected the node-group label to appear in the Kubernetes node labels of the scaled instance.
What happened instead?: The node-group label is not present.
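A quick way to see whether the label made it onto the nodes is to ask kubectl to print it as a column; this assumes the hcloud/node-group key used above:

```console
# Shows a NODE-GROUP column, which is empty when the label is missing
kubectl get nodes -L hcloud/node-group
```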
How to reproduce it (as minimally and precisely as possible): Install cluster using kubeadm, install hcloud-cloud-controller-manager, install cluster autoscaler and specify a couple of different pools in the configuration. Force a scale up event by scheduling pods. Once the scaled instance has started and joined the cluster check its labels by using kubectl get node xxx -o yaml
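For reference, the pools in step 3 are defined on the cluster-autoscaler container as repeated --nodes flags. This is a sketch based on the Hetzner provider's min:max:instance-type:region:name format; the pool sizes, instance types, region, and the second pool name are illustrative, only insecure-workload comes from above:

```yaml
# Fragment of the cluster-autoscaler Deployment spec (sketch; values illustrative)
command:
- ./cluster-autoscaler
- --cloud-provider=hetzner
- --nodes=1:5:CPX21:FSN1:insecure-workload
- --nodes=1:5:CPX31:FSN1:trusted-workload
```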
Anything else we need to know?: None