Closed: d2orbc closed this issue 1 month ago
Oh, yeah, I created a new node in the cluster and it got the labels right away.
I thought the labels would be applied to existing nodes and kept updated, for example if the node gets migrated to another host.
I see now this isn't true.
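(For reference, one quick way to check whether the standard topology.kubernetes.io labels landed on a node; talos61 is just the node name from this issue:)

# show the node's labels and filter for the well-known topology keys
kubectl get node talos61 --show-labels | tr ',' '\n' | grep topology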
Kubernetes has a lot of immutable values. There is no simple way to move an instance from one zone/region to another. It is better to use drain/cordon techniques...
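(A sketch of that drain/cordon approach, assuming a node named talos61 and that the remaining nodes can absorb its workloads; adjust the drain flags for your pods:)

kubectl cordon talos61    # mark the node unschedulable
kubectl drain talos61 --ignore-daemonsets --delete-emptydir-data    # evict workloads
kubectl delete node talos61    # remove the Node object
# after the machine is migrated/recreated and rejoins the cluster, a fresh
# Node object is created, and (as noted above) new nodes get labels right away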
Labels aren't being applied. For some reason, they did get applied to one of the nodes in some past configuration. I'm not sure what changed such that the other nodes aren't getting labels now.
All nodes have providerID set (as you can see from the
kubectl describe node talos61
output below). Do I need to recreate these nodes? I thought the labels would be applied to existing nodes and kept updated, for example if the node gets migrated to another host.
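(A quick way to confirm providerID across all nodes at once, using plain kubectl:)

# print each node's name next to its spec.providerID
kubectl get nodes -o custom-columns=NAME:.metadata.name,PROVIDER_ID:.spec.providerID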
Logs
Environment
kubectl describe node <node>
cat /etc/os-release
Ubuntu 20.04.6 LTS