Open wallrj opened 9 months ago
kubeadm (used by kind) enables this controller: https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#noderestriction
The NodeRestriction admission plugin prevents kubelets from deleting their Node API object, and enforces kubelet modification of labels under the kubernetes.io/ or k8s.io/ prefixes as follows:
node-role.kubernetes.io/* labels are not allowed for a kubelet to self-apply.
Thanks @neolit123, that explains what's causing the failure.

I wanted to add node-role.kubernetes.io/${ROLE_NAME}: "" to all the nodes so that kubectl get nodes would show the roles of all nodes in a multi-node cluster (not just the control-plane).

The NodeRestriction admission plugin prevents kubelets from deleting their Node API object, and enforces kubelet modification of labels under the kubernetes.io/ or k8s.io/ prefixes as follows:
Prevents kubelets from adding/removing/updating labels with a node-restriction.kubernetes.io/ prefix. This label prefix is reserved for administrators to label their Node objects for workload isolation purposes, and kubelets will not be allowed to modify labels with that prefix.
Use of any other labels under the kubernetes.io or k8s.io prefixes by kubelets is reserved, and may be disallowed or allowed by the NodeRestriction admission plugin in the future.
If Kind is going to continue to use the kubelet to apply the node labels, I'd like Kind to fail early with a clear error message if I choose labels that are going to be rejected by the NodeRestriction admission plugin.
Yes, at this point we should add validation to fail early on attempts to set kubelet-disallowed labels.
At this point the restriction exists in all kind-supported releases (that hasn't always been the case).
OR Kind could allow any labels to be supplied in the Node config and apply them in some other way, to make life simpler for people who want to use a multi-node Kind cluster for testing node affinity settings.
We should not attempt to circumvent the controls in Kubernetes. KIND is all about conformant Kubernetes, and these controls exist for a reason: the API namespace is owned by the Kubernetes project, and only expected, API-approved usage should exist.
But instead, you can add some other label to your nodes, like foo.dev/role.
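For illustration, a minimal sketch of that approach (the foo.dev/role key, its value, and the single-worker layout are just example choices here, not something kind prescribes):

```bash
# Labels outside the kubernetes.io / k8s.io namespaces are not restricted by
# NodeRestriction, so the kubelet can self-apply them at registration time.
cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
  labels:
    foo.dev/role: worker
EOF

# Show the custom label as a column when listing nodes:
kubectl get nodes -L foo.dev/role
```

Workloads can then target those nodes with a nodeSelector or node affinity on foo.dev/role.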
We likely won't do something like that (applying otherwise-disallowed labels through some other mechanism) out of the box because, again, that would actively encourage workloads not to be based on conformant Kubernetes.
As for this label: in general you shouldn't need to use it; the control-plane nodes will be tainted for scheduling purposes.
If you're just listing nodes interactively, kubectl get nodes will have the roles in the names of all the kind nodes.
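Roughly (illustrative output; the node names are the defaults for a two-worker cluster named kind):

```bash
kubectl get nodes
# NAME                 STATUS   ROLES           AGE   VERSION
# kind-control-plane   Ready    control-plane   ...   ...
# kind-worker          Ready    <none>          ...   ...
# kind-worker2         Ready    <none>          ...   ...
```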
5. And perhaps I should create an issue to update that page on the Kubernetes website, because it misled me into thinking that node-role.kubernetes.io would be allowed. Right now it specifically says:
The ticket exists: https://github.com/kubernetes/website/issues/31992
I forgot we added this section: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#managed-node-labels
We should also link to that from the kind docs; I thought we had, but we clearly haven't: https://kind.sigs.k8s.io/docs/user/configuration/#extra-labels
We could validate this early in kind, but the behavior may change in Kubernetes, so I'm a bit hesitant to re-encode the permitted list.
I thought about just blocking use of all k8s.io / kubernetes.io labels early, but some of the permitted ones may be very useful to set, so maybe we should just copy the list of permitted labels into kind and error on configs with any other k8s.io / kubernetes.io labels.
I attempted to use this feature to set a meaningful role name for the worker nodes. Here's a demonstration of manually labelling the nodes and the effect it has on the output of
kubectl get nodes
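The demonstration itself isn't reproduced above; the commands would have been along these lines (a sketch, assuming a default kind cluster with a single worker node):

```bash
# Labelling as cluster-admin is not blocked by NodeRestriction, which only
# restricts what a node's own kubelet credentials may set.
kubectl label node kind-worker node-role.kubernetes.io/worker=""

kubectl get nodes
# NAME                 STATUS   ROLES           AGE   VERSION
# kind-control-plane   Ready    control-plane   ...   ...
# kind-worker          Ready    worker          ...   ...
```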
But with the following config file, kind create cluster just crashes after a number of minutes.

Originally posted by @wallrj in https://github.com/kubernetes-sigs/kind/issues/1926#issuecomment-1969030825
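The config file itself isn't included above; a minimal sketch of the sort of config that runs into this (the two-worker layout and the worker role name are assumptions) would be:

```bash
# kind applies these per-node labels through the kubelet at registration time,
# and the NodeRestriction admission plugin rejects node-role.kubernetes.io/*
# labels from a kubelet, so registration fails and kind create cluster
# eventually gives up after a few minutes.
cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
  labels:
    node-role.kubernetes.io/worker: ""
- role: worker
  labels:
    node-role.kubernetes.io/worker: ""
EOF
```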