There is neither a special label nor a special annotation on the nodes.

**master node**
annotations:
  cluster.x-k8s.io/cluster-name: kube-carvi
  cluster.x-k8s.io/cluster-namespace: magnum-system
  cluster.x-k8s.io/machine: kube-carvi-9tjnd-vvtlc
  cluster.x-k8s.io/owner-kind: KubeadmControlPlane
  cluster.x-k8s.io/owner-name: kube-carvi-9tjnd
  csi.volume.kubernetes.io/nodeid: '{"cinder.csi.openstack.org":"1d08fcd0-a2bb-4b62-b237-c5409e8ed691","manila.csi.openstack.org":"kube-carvi-control-plane-dxsqm-kvzvm","nfs.csi.k8s.io":"kube-carvi-control-plane-dxsqm-kv>
  kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock
  node.alpha.kubernetes.io/ttl: "0"
  projectcalico.org/IPv4Address: 10.0.0.127/24
  projectcalico.org/IPv4IPIPTunnelAddr: 10.100.93.192
  volumes.kubernetes.io/controller-managed-attach-detach: "true"
labels:
  beta.kubernetes.io/arch: amd64
  beta.kubernetes.io/instance-type: m1.medium
  beta.kubernetes.io/os: linux
  failure-domain.beta.kubernetes.io/region: RegionOne
  failure-domain.beta.kubernetes.io/zone: nova
  kubernetes.io/arch: amd64
  kubernetes.io/hostname: kube-carvi-control-plane-dxsqm-kvzvm
  kubernetes.io/os: linux
  node-role.kubernetes.io/control-plane: ""
  node.kubernetes.io/exclude-from-external-load-balancers: ""
  node.kubernetes.io/instance-type: m1.medium
  topology.cinder.csi.openstack.org/zone: nova
  topology.kubernetes.io/region: RegionOne
  topology.kubernetes.io/zone: nova
**worker node (node group without any specific role)**
annotations:
  cluster.x-k8s.io/cluster-name: kube-carvi
  cluster.x-k8s.io/cluster-namespace: magnum-system
  cluster.x-k8s.io/machine: kube-carvi-default-worker-wrjbf-5b8588cb4f-tw765
  cluster.x-k8s.io/owner-kind: MachineSet
  cluster.x-k8s.io/owner-name: kube-carvi-default-worker-wrjbf-5b8588cb4f
  csi.volume.kubernetes.io/nodeid: '{"cinder.csi.openstack.org":"3c648120-e4c0-4fc0-b974-c4658e437b01","manila.csi.openstack.org":"kube-carvi-default-worker-infra-t2f8r-kjl88","nfs.csi.k8s.io":"kube-carvi-default-worker->
  kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock
  node.alpha.kubernetes.io/ttl: "0"
  projectcalico.org/IPv4Address: 10.0.0.182/24
  projectcalico.org/IPv4IPIPTunnelAddr: 10.100.161.128
  volumes.kubernetes.io/controller-managed-attach-detach: "true"
labels:
  beta.kubernetes.io/arch: amd64
  beta.kubernetes.io/instance-type: m1.medium
  beta.kubernetes.io/os: linux
  failure-domain.beta.kubernetes.io/region: RegionOne
  failure-domain.beta.kubernetes.io/zone: nova
  kubernetes.io/arch: amd64
  kubernetes.io/hostname: kube-carvi-default-worker-infra-t2f8r-kjl88
  kubernetes.io/os: linux
  node.kubernetes.io/instance-type: m1.medium
  topology.cinder.csi.openstack.org/zone: nova
  topology.kubernetes.io/region: RegionOne
  topology.kubernetes.io/zone: nova
$ o coe nodegroup list 4e1910e1-35d1-4d99-913d-2ee5765156b2
+--------------------------------------+----------------+-----------+--------------------------------------+------------+-----------------+--------+
| uuid | name | flavor_id | image_id | node_count | status | role |
+--------------------------------------+----------------+-----------+--------------------------------------+------------+-----------------+--------+
| 81d69894-8696-4802-9ce9-04608582680d | default-master | m1.medium | ef107f29-8f26-474e-8f5f-80d269c7d2cd | 1 | CREATE_COMPLETE | master |
| c61acc4b-41f6-4ea9-834d-e0e04914a96b | default-worker | m1.medium | ef107f29-8f26-474e-8f5f-80d269c7d2cd | 1 | UPDATE_COMPLETE | worker |
+--------------------------------------+----------------+-----------+--------------------------------------+------------+-----------------+--------+
$ kubectl get nodes -L magnum.openstack.org/role
NAME STATUS ROLES AGE VERSION ROLE
kube-carvi-control-plane-dxsqm-kvzvm Ready control-plane 21d v1.25.3
kube-carvi-default-worker-infra-t2f8r-kjl88 Ready <none> 21d v1.25.3
Metadata propagation flow: https://cluster-api.sigs.k8s.io/images/metadata-propagation.jpg
Limitations of Machine-to-Node propagation: https://cluster-api.sigs.k8s.io/developer/architecture/controllers/metadata-propagation.html#machine
Top-level labels that meet specific criteria are propagated to the Node labels, while top-level annotations are not propagated:

- `.labels.[label-meets-criteria]` => `Node.labels`
- `.annotations` => not propagated

A label is propagated to the Node only if it meets one of the following criteria:

- has `node-role.kubernetes.io` as a prefix
- belongs to the `node-restriction.kubernetes.io` domain
- belongs to the `node.cluster.x-k8s.io` domain

So we cannot use CAPI metadata propagation for `magnum.openstack.org` labels.
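To make those criteria concrete, here is a minimal sketch (the Machine name and label values are illustrative, not taken from the cluster above) of which top-level Machine labels would and would not reach the Node:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Machine
metadata:
  name: kube-carvi-default-worker-example   # illustrative name
  namespace: magnum-system
  labels:
    node-role.kubernetes.io/worker: ""          # propagated: node-role.kubernetes.io prefix
    node.cluster.x-k8s.io/pool: default-worker  # propagated: node.cluster.x-k8s.io domain
    magnum.openstack.org/role: worker           # NOT propagated: matches none of the criteria
```

In practice such labels would be set under a MachineDeployment's `spec.template.metadata.labels` (or the KubeadmControlPlane's machine template metadata) so that they flow down to each Machine before propagation to the Node.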
Another option is `kubeletExtraArgs.node-labels` in the KubeadmConfigTemplate. Kubelet allows the following labels out of the box, and any others are discouraged (see the sketch after this list):

- `kubernetes.io/hostname`
- `kubernetes.io/instance-type`
- `kubernetes.io/os`
- `kubernetes.io/arch`
- `beta.kubernetes.io/instance-type`, `beta.kubernetes.io/os`, `beta.kubernetes.io/arch`
- `failure-domain.beta.kubernetes.io/zone`, `failure-domain.beta.kubernetes.io/region`
- `failure-domain.kubernetes.io/zone`, `failure-domain.kubernetes.io/region`
- `[*.]kubelet.kubernetes.io/*`, `[*.]node.kubernetes.io/*`
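As a concrete illustration of that workaround, here is a minimal sketch of a KubeadmConfigTemplate passing the Magnum role through kubelet (the template name is hypothetical; `magnum-system` matches the namespace from the dumps above):

```yaml
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
  name: kube-carvi-default-worker   # hypothetical name for illustration
  namespace: magnum-system
spec:
  template:
    spec:
      joinConfiguration:
        nodeRegistration:
          kubeletExtraArgs:
            # magnum.openstack.org is outside the kubelet allow-list above,
            # so it is discouraged but still accepted.
            node-labels: magnum.openstack.org/role=worker
```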
**i.e. `magnum.openstack.org` is discouraged (from a security perspective) but still possible, so we can try using `kubeletExtraArgs.node-labels` in the KubeadmConfigTemplate.**
The other concern is that this workaround only applies the labels at cluster creation time, but I don't think we have a use case for changing node group roles on the fly, so that is acceptable.
**Another option is to request a node label name change in Magnum upstream, or to use different labels in the mcapi project and document them.**
@mnaser what is your opinion?
@okozachenko1203 I think we can diverge from the Magnum role and instead use the native Kubernetes one, so I would like for us to propose the following:
`node-role.kubernetes.io/NODEGROUPNAME=""`
This is a much cleaner and more native way of doing it than what Magnum was doing, and we can cover it in our documentation as well. It will have the added useful feature of showing the role when doing `kubectl get nodes` too :)

and it's very easy :)
More information can be found here: https://docs.openstack.org/magnum/latest/user/index.html#roles
We should figure out the best way to attach this; if it's not available out of the box, it might be good to know/see what labels Cluster API adds by default.
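If it helps the discussion, here is a rough sketch of one possible way to attach it (names are illustrative and most required fields are elided): since `node-role.kubernetes.io` is one of the prefixes Cluster API already propagates from Machine to Node per the criteria quoted above, and it is not on kubelet's `node-labels` allow-list, the label could be set on the worker Machine template:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: kube-carvi-default-worker   # illustrative name
  namespace: magnum-system
spec:
  # clusterName, replicas, selector, bootstrap/infrastructure refs elided
  template:
    metadata:
      labels:
        # propagated Machine -> Node by Cluster API (node-role.kubernetes.io prefix)
        node-role.kubernetes.io/default-worker: ""
```

With a label like that in place, `kubectl get nodes` would show `default-worker` under ROLES for that node group's nodes.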