k0sproject / k0sctl

A bootstrapping and management tool for k0s clusters.

Allow specifying node labels in configuration. #175

Open vs49688 opened 3 years ago

vs49688 commented 3 years ago

Specifically for cases where using installFlags: ["--labels=whatever"] isn't acceptable, e.g.

Kubelets can't set the node-role.kubernetes.io/master="" label on themselves for security reasons; it has to be done via an API client (e.g. kubectl). See https://github.com/kubernetes/kubernetes/issues/84912#issuecomment-551362981

This could be added to the configuration as follows:

spec:
  hosts:
  - role: controller+worker
    labels:
    - "node-role.kubernetes.io/master="

or

spec:
  hosts:
  - role: controller+worker
    labels:
    - key: node-role.kubernetes.io/master
      value: ""
kke commented 3 years ago

I think it needs to know which labels were set by k0sctl so it can remove the ones that no longer exist in k0sctl.yaml.

Maybe some k0sctl.k0sproject.io/node-labels annotation listing the label keys 🤔

vs49688 commented 3 years ago

So something like this?

metadata:
  annotations:
    k0sctl.k0sproject.io/node-labels: "node-role.kubernetes.io/master,label1,label2"

That's probably the nicest way to do it, at least that I can think of.

kke commented 3 years ago

Maybe a ConfigMap
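
Roughly something like this, i.e. a ConfigMap that records which label keys k0sctl manages per node (the name, namespace and node key here are just placeholders, nothing is decided):

apiVersion: v1
kind: ConfigMap
metadata:
  name: k0sctl-node-labels
  namespace: kube-system
data:
  worker-1: "node-role.kubernetes.io/master,label1,label2"

On the next apply, k0sctl could diff the keys stored here against the ones in k0sctl.yaml and remove labels that are no longer listed.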

sjdrc commented 2 years ago

This would be a super useful feature to have. For now I'm working around this by adding

    installFlags:
    - --labels="machine-type=train"

but this only takes effect on freshly provisioned hosts. It would be great if this were implemented in a way that also updates labels on existing clusters, using a mechanism like one of those suggested above.

redzioch commented 1 year ago

+1

pinghe commented 1 year ago

+1

kke commented 1 year ago

It would be a bit simpler to just have something like:

spec:
  hosts:
    - role: controller+worker
      labels:
        apply:
          - node-role.kubernetes.io/control-plane=
        delete:
          - node-role.kubernetes.io/master
      # or:
      labels:
        - apply: node-role.kubernetes.io/control-plane=
        - delete: node-role.kubernetes.io/master     

Then it wouldn't need to keep track of anything, and there would possibly be less room for error too. The same could be done for taints while at it.
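
For taints that could look something along these lines (just a sketch, the keys and effects are only examples):

spec:
  hosts:
    - role: controller+worker
      taints:
        apply:
          - dedicated=gpu:NoSchedule
        delete:
          - node-role.kubernetes.io/master:NoSchedule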

till commented 1 year ago

This reads slightly nicer:

spec:
  hosts:
    - role: controller+worker
      labels:
        apply:
          - node-role.kubernetes.io/control-plane=
        delete:
          - node-role.kubernetes.io/master

How do you plan on merging this with installFlags? I'm just curious what the path forward is, since it's going to be messy to support multiple places.

kke commented 1 year ago

As installFlags is already conveniently named "install flags", I think anything you have there will only be used to modify k0s install flags like before. The labels section would be applied once the node is up and rechecked on every apply.

It would be possible to allow something like:

spec:
  hosts:
    - role: controller+worker
      labels:
        install:
          - node.kubernetes.io/out-of-service=NoExecute
        apply:
          - node-role.kubernetes.io/control-plane=
        delete:
          - node-role.kubernetes.io/master

The install ones would then get merged into the installFlags behind the scenes.
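
So the example above would, as far as install flags go, roughly amount to something like this (how exactly the flag gets merged is still open):

spec:
  hosts:
    - role: controller+worker
      installFlags:
        - --labels="node.kubernetes.io/out-of-service=NoExecute"

while the apply/delete entries would be handled against the API after the node has joined.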