kubernetes / autoscaler

Autoscaling components for Kubernetes
Apache License 2.0

Hetzner cloud node group label not added to node in kubernetes #4310

Closed: greglangford closed this issue 3 years ago

greglangford commented 3 years ago

Which component are you using?: cluster-autoscaler

What version of the component are you using?: 1.21.0

Component version: k8s.gcr.io/autoscaling/cluster-autoscaler:v1.21.0

What k8s version are you using (kubectl version)?: 1.21.4

kubectl version Output
kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.4", GitCommit:"3cce4a82b44f032d0cd1a1790e6d2f5a55d20aae", GitTreeState:"clean", BuildDate:"2021-08-11T18:16:05Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.4", GitCommit:"3cce4a82b44f032d0cd1a1790e6d2f5a55d20aae", GitTreeState:"clean", BuildDate:"2021-08-11T18:10:22Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"}

What environment is this in?: Hetzner Cloud

What did you expect to happen?: I expected to see the node group label among the Kubernetes node labels of the scaled-up instance.

What happened instead?: The node group label is not present on the node.

How to reproduce it (as minimally and precisely as possible): Install a cluster using kubeadm, install hcloud-cloud-controller-manager, then install cluster-autoscaler and specify a couple of different pools in the configuration. Force a scale-up event by scheduling pods. Once the scaled instance has started and joined the cluster, check its labels with kubectl get node xxx -o yaml.
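
For reference, by "a couple of different pools" I mean the --nodes flags on the cluster-autoscaler deployment. A minimal sketch follows; the pool names, server types, locations and secret name/key below are placeholders rather than my exact values, and the precise flag format is documented in the Hetzner provider README:

      containers:
        - name: cluster-autoscaler
          image: k8s.gcr.io/autoscaling/cluster-autoscaler:v1.21.0
          command:
            - ./cluster-autoscaler
            - --cloud-provider=hetzner
            # each --nodes entry defines one pool: min:max:server-type:location:pool-name
            - --nodes=1:5:CPX31:FSN1:insecure-workload
            - --nodes=1:5:CPX31:FSN1:default-pool
          env:
            # Hetzner Cloud API token, read from a pre-created secret (placeholder name/key)
            - name: HCLOUD_TOKEN
              valueFrom:
                secretKeyRef:
                  name: hcloud
                  key: token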

Anything else we need to know?: None

greglangford commented 3 years ago

I am closing this issue. I realise now that having the label on the node is not actually needed, although it would be nice if Hetzner instance labels were imported; I think that falls under the remit of the hcloud-cloud-controller-manager.

Essentially, my reason for opening this ticket was that, without the label on the node, it appeared impossible to use nodeSelector or nodeAffinity to schedule workloads on specific pools. It turns out all that is needed is the following, replacing insecure-workload with the name of your pool. Using the hcloud/node-group key allows the scheduling simulation, which runs before the node starts, to figure out whether the pod being scheduled would fit on the pool defined within the nodeAffinity.

  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: hcloud/node-group
                operator: In
                values:
                  - insecure-workload
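
Since nodeSelector was also mentioned above: for this exact-match case, a plain nodeSelector on the same key should behave the same way, as both it and the requiredDuringSchedulingIgnoredDuringExecution affinity above are hard scheduling requirements. This is a sketch assuming the autoscaler exposes hcloud/node-group on its template nodes for nodeSelector just as it does for nodeAffinity:

  nodeSelector:
    # replace insecure-workload with the name of your pool
    hcloud/node-group: insecure-workload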