kubernetes-sigs / kind

Kubernetes IN Docker - local clusters for testing Kubernetes
https://kind.sigs.k8s.io/
Apache License 2.0

Node label `app.kubernetes.io/part-of` breaks cluster #2932

Closed · brumhard closed this issue 2 years ago

brumhard commented 2 years ago

What happened:

I tried to add a label to the control-plane node so it can match existing nodeSelectors. A sample label like `test: testing` works like a charm, but the actual label I'm trying to set (`app.kubernetes.io/part-of: testing`) does not. `kind create cluster --config ./kind.yaml` gets stuck in the "Starting control-plane" step and eventually fails with:

```
ERROR: failed to create cluster: failed to init node with kubeadm: command "docker exec --privileged kind-control-plane kubeadm init --skip-phases=preflight --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 1
...
I0919 10:27:08.823130     125 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
```
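To dig into the underlying kubelet failure, one option is kind's own log export; a minimal sketch (`--retain` keeps the failed node around for inspection, `./kind-logs` is an arbitrary output directory):

```sh
kind create cluster --config ./kind.yaml --retain  # don't delete the node on failure
kind export logs ./kind-logs                       # collects kubelet and journal logs from the node
```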

What you expected to happen:

A new cluster is created with the node label set.

How to reproduce it (as minimally and precisely as possible):

Use the following config with `kind create cluster --config ./kind.yaml`:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    labels:
      "app.kubernetes.io/part-of": testing
```

Anything else we need to know?:

I also tried setting the label via kubeadm's `InitConfiguration`, with the same result.
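For reference, a sketch of the kind of patch I mean (assuming kubeadm's v1beta3 `InitConfiguration` and the kubelet's `node-labels` flag; the exact patch I used may have differed):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    # per-node kubeadm patch: ask the kubelet to self-assign the label
    kubeadmConfigPatches:
      - |
        kind: InitConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "app.kubernetes.io/part-of=testing"
```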

Environment:

- Runtime info (from `docker info`):

```
Server:
 Containers: 2
  Running: 1
  Paused: 0
  Stopped: 1
 Images: 30
 Server Version: 20.10.17
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runtime.v1.linux runc io.containerd.runc.v2
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1
 runc version: v1.1.2-0-ga916309
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
  cgroupns
 Kernel Version: 5.10.104-linuxkit
 Operating System: Docker Desktop
 OSType: linux
 Architecture: aarch64
 CPUs: 4
 Total Memory: 7.667GiB
 Name: docker-desktop
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 HTTP Proxy: http.docker.internal:3128
 HTTPS Proxy: http.docker.internal:3128
 No Proxy: hubproxy.docker.internal
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  hubproxy.docker.internal:5000
  127.0.0.0/8
 Live Restore Enabled: false
```


- OS (e.g. from `/etc/os-release`): macOS Monterey 12.1 21C52 arm64
brumhard commented 2 years ago

Btw, applying `kubectl label nodes kind-control-plane "app.kubernetes.io/part-of"=testing` to an existing cluster works.
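That suggests a workaround: drop the label from the kind config and apply it through the API server after creation. A minimal sketch (node name assumes the default cluster name `kind`):

```sh
kind create cluster   # config without the reserved label
kubectl label nodes kind-control-plane "app.kubernetes.io/part-of"=testing
```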

BenTheElder commented 2 years ago

The `kubernetes.io` and `k8s.io` label namespaces are reserved by Kubernetes, and the kubelet will refuse to start when asked to self-assign labels under them (outside of a small allowlist such as `kubernetes.io/hostname`). That's also why your `kubectl label` works: the restriction applies to kubelet self-labeling, not to labels set through the API server.

BenTheElder commented 2 years ago

You should use a node label that isn't in the Kubernetes-reserved namespaces. I don't think the `app.kubernetes.io/*` labels were intended to be used on nodes like this either.
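For example, a label outside the reserved namespaces should go through; a minimal sketch (the `example.com/` prefix is an arbitrary stand-in for your own domain):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    labels:
      "example.com/part-of": testing
```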

brumhard commented 2 years ago

Mmh okay, I see. Someone introduced these node labels for some workload; I'm not sure why either. But thank you very much for the explanation.