kubernetes-sigs / kind

Kubernetes IN Docker - local clusters for testing Kubernetes
https://kind.sigs.k8s.io/
Apache License 2.0

Failed to create cluster: failed to init node with kubeadm #3762

Closed · vortegatorres closed this issue 1 month ago

vortegatorres commented 1 month ago

What happened:

Running `kind create cluster -v 999 --retain`, I get the following output:

kind create cluster -v 999 --retain
Creating cluster "kind" ...
DEBUG: docker/images.go:58] Image: kindest/node:v1.31.0@sha256:53df588e04085fd41ae12de0c3fe4c72f7013bba32a20e7325357a1ac94ba865 present locally
 βœ“ Ensuring node image (kindest/node:v1.31.0) πŸ–Ό
 βœ“ Preparing nodes πŸ“¦
DEBUG: config/config.go:96] Using the following kubeadm config for node kind-control-plane:
apiServer:
  certSANs:
  - localhost
  - 127.0.0.1
  extraArgs:
    runtime-config: ""
apiVersion: kubeadm.k8s.io/v1beta3
clusterName: kind
controlPlaneEndpoint: kind-control-plane:6443
controllerManager:
  extraArgs:
    enable-hostpath-provisioner: "true"
kind: ClusterConfiguration
kubernetesVersion: v1.31.0
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/16
scheduler:
  extraArgs: null
---
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- token: abcdef.0123456789abcdef
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.18.0.2
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  kubeletExtraArgs:
    node-ip: 172.18.0.2
    node-labels: ""
    provider-id: kind://docker/kind/kind-control-plane
skipPhases:
- preflight
---
apiVersion: kubeadm.k8s.io/v1beta3
controlPlane:
  localAPIEndpoint:
    advertiseAddress: 172.18.0.2
    bindPort: 6443
discovery:
  bootstrapToken:
    apiServerEndpoint: kind-control-plane:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  kubeletExtraArgs:
    node-ip: 172.18.0.2
    node-labels: ""
    provider-id: kind://docker/kind/kind-control-plane
skipPhases:
- preflight
---
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
cgroupRoot: /kubelet
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
  nodefs.inodesFree: 0%
failSwapOn: false
imageGCHighThresholdPercent: 100
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
conntrack:
  maxPerCore: 0
iptables:
  minSyncPeriod: 1s
kind: KubeProxyConfiguration
mode: iptables
 βœ“ Writing configuration πŸ“œ
DEBUG: kubeadminit/init.go:96] I1024 15:11:46.752221     144 initconfiguration.go:261] loading configuration from "/kind/kubeadm.conf"
W1024 15:11:46.762000     144 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1024 15:11:46.767782     144 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1024 15:11:46.768447     144 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "JoinConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1024 15:11:46.770194     144 initconfiguration.go:361] [config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta3, Kind=JoinConfiguration
[init] Using Kubernetes version: v1.31.0
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I1024 15:11:46.777715     144 certs.go:112] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I1024 15:11:47.027448     144 certs.go:473] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kind-control-plane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 172.18.0.2 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
I1024 15:11:47.493595     144 certs.go:112] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I1024 15:11:47.574484     144 certs.go:473] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I1024 15:11:47.731523     144 certs.go:112] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I1024 15:11:47.995066     144 certs.go:473] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.18.0.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.18.0.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I1024 15:11:48.767884     144 certs.go:78] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1024 15:11:49.060618     144 kubeconfig.go:111] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I1024 15:11:49.398093     144 kubeconfig.go:111] creating kubeconfig file for super-admin.conf
[kubeconfig] Writing "super-admin.conf" kubeconfig file
I1024 15:11:49.540775     144 kubeconfig.go:111] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I1024 15:11:49.672773     144 kubeconfig.go:111] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1024 15:11:49.827461     144 kubeconfig.go:111] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1024 15:11:50.293955     144 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I1024 15:11:50.294109     144 manifests.go:103] [control-plane] getting StaticPodSpecs
I1024 15:11:50.295792     144 certs.go:473] validating certificate period for CA certificate
I1024 15:11:50.296198     144 manifests.go:129] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I1024 15:11:50.296211     144 manifests.go:129] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
I1024 15:11:50.296215     144 manifests.go:129] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I1024 15:11:50.296219     144 manifests.go:129] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
I1024 15:11:50.296221     144 manifests.go:129] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
I1024 15:11:50.296876     144 manifests.go:158] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I1024 15:11:50.297027     144 manifests.go:103] [control-plane] getting StaticPodSpecs
I1024 15:11:50.297184     144 manifests.go:129] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I1024 15:11:50.297197     144 manifests.go:129] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
I1024 15:11:50.297202     144 manifests.go:129] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I1024 15:11:50.297206     144 manifests.go:129] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I1024 15:11:50.297209     144 manifests.go:129] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I1024 15:11:50.297213     144 manifests.go:129] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
I1024 15:11:50.297217     144 manifests.go:129] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
I1024 15:11:50.297749     144 manifests.go:158] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
I1024 15:11:50.297764     144 manifests.go:103] [control-plane] getting StaticPodSpecs
[control-plane] Creating static Pod manifest for "kube-scheduler"
I1024 15:11:50.297865     144 manifests.go:129] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I1024 15:11:50.298162     144 manifests.go:158] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
I1024 15:11:50.298254     144 kubelet.go:68] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
I1024 15:11:50.891389     144 loader.go:395] Config loaded from file:  /etc/kubernetes/admin.conf
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001271567s

Unfortunately, an error has occurred:
    The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
    - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
could not initialize a Kubernetes cluster
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase.func1
    k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:112
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
    k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:122
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
    k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:259
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
    k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
    k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
    k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:128
github.com/spf13/cobra.(*Command).execute
    github.com/spf13/cobra@v1.8.1/command.go:985
github.com/spf13/cobra.(*Command).ExecuteC
    github.com/spf13/cobra@v1.8.1/command.go:1117
github.com/spf13/cobra.(*Command).Execute
    github.com/spf13/cobra@v1.8.1/command.go:1041
k8s.io/kubernetes/cmd/kubeadm/app.Run
    k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:47
main.main
    k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
    runtime/proc.go:271
runtime.goexit
    runtime/asm_amd64.s:1695
error execution phase wait-control-plane
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
    k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:260
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
    k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
    k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
    k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:128
github.com/spf13/cobra.(*Command).execute
    github.com/spf13/cobra@v1.8.1/command.go:985
github.com/spf13/cobra.(*Command).ExecuteC
    github.com/spf13/cobra@v1.8.1/command.go:1117
github.com/spf13/cobra.(*Command).Execute
    github.com/spf13/cobra@v1.8.1/command.go:1041
k8s.io/kubernetes/cmd/kubeadm/app.Run
    k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:47
main.main
    k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
    runtime/proc.go:271
runtime.goexit
    runtime/asm_amd64.s:1695
 βœ— Starting control-plane πŸ•ΉοΈ
ERROR: failed to create cluster: failed to init node with kubeadm: command "docker exec --privileged kind-control-plane kubeadm init --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 1
Command Output: (same kubeadm init output as shown above, omitted here)
Stack Trace:
sigs.k8s.io/kind/pkg/errors.WithStack
    sigs.k8s.io/kind/pkg/errors/errors.go:59
sigs.k8s.io/kind/pkg/exec.(*LocalCmd).Run
    sigs.k8s.io/kind/pkg/exec/local.go:124
sigs.k8s.io/kind/pkg/cluster/internal/providers/docker.(*nodeCmd).Run
    sigs.k8s.io/kind/pkg/cluster/internal/providers/docker/node.go:146
sigs.k8s.io/kind/pkg/exec.CombinedOutputLines
    sigs.k8s.io/kind/pkg/exec/helpers.go:67
sigs.k8s.io/kind/pkg/cluster/internal/create/actions/kubeadminit.(*action).Execute
    sigs.k8s.io/kind/pkg/cluster/internal/create/actions/kubeadminit/init.go:95
sigs.k8s.io/kind/pkg/cluster/internal/create.Cluster
    sigs.k8s.io/kind/pkg/cluster/internal/create/create.go:135
sigs.k8s.io/kind/pkg/cluster.(*Provider).Create
    sigs.k8s.io/kind/pkg/cluster/provider.go:192
sigs.k8s.io/kind/pkg/cmd/kind/create/cluster.runE
    sigs.k8s.io/kind/pkg/cmd/kind/create/cluster/createcluster.go:110
sigs.k8s.io/kind/pkg/cmd/kind/create/cluster.NewCommand.func1
    sigs.k8s.io/kind/pkg/cmd/kind/create/cluster/createcluster.go:54
github.com/spf13/cobra.(*Command).execute
    github.com/spf13/cobra@v1.8.0/command.go:983
github.com/spf13/cobra.(*Command).ExecuteC
    github.com/spf13/cobra@v1.8.0/command.go:1115
github.com/spf13/cobra.(*Command).Execute
    github.com/spf13/cobra@v1.8.0/command.go:1039
sigs.k8s.io/kind/cmd/kind/app.Run
    sigs.k8s.io/kind/cmd/kind/app/main.go:53
sigs.k8s.io/kind/cmd/kind/app.Main
    sigs.k8s.io/kind/cmd/kind/app/main.go:35
main.main
    sigs.k8s.io/kind/main.go:25
runtime.main
    runtime/proc.go:272
runtime.goexit
    runtime/asm_arm64.s:1223

Logs: docker-info.txt, kind-version.txt, alternatives.log, containerd.log, images.log, inspect.json, journal.log, kubelet.log, kubernetes-version.txt, serial.log

What you expected to happen:

The cluster is created without errors.

How to reproduce it (as minimally and precisely as possible):

kind create cluster -v 999 --retain

Anything else we need to know?:

Environment:

Server:
 Containers: 5
  Running: 2
  Paused: 0
  Stopped: 3
 Images: 231
 Server Version: 27.2.0
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 8fc6bcff51318944179630522a095cc9dbf9f353
 runc version: v1.1.13-0-g58aa920
 init version: de40ad0
 Security Options:
  seccomp
   Profile: unconfined
  cgroupns
 Kernel Version: 6.10.4-linuxkit
 Operating System: Docker Desktop
 OSType: linux
 Architecture: aarch64
 CPUs: 10
 Total Memory: 24.17GiB
 Name: docker-desktop
 ID: b1d232ef-3cb7-4ac7-a42c-7cd608c09f29
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 HTTP Proxy: http.docker.internal:3128
 HTTPS Proxy: http.docker.internal:3128
 No Proxy: hubproxy.docker.internal
 Labels:
  com.docker.desktop.address=unix:///Users/vicenteortegatorres/Library/Containers/com.docker.docker/Data/docker-cli.sock
 Experimental: false
 Insecure Registries:
  hubproxy.docker.internal:5555
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: daemon is not using the default seccomp profile

- OS (e.g. from `/etc/os-release`): macOS Sequoia 15.0.1
- Kubernetes version (use `kubectl version`): Client Version: v1.30.2, Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
- Any proxies or other special environment settings?:

BenTheElder commented 1 month ago

In the future can you upload the logs folder as described rather than the individual files? It's easier to manage.
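
(For reference, that single logs folder comes from kind's log export, which `--retain` makes possible after a failed create. A minimal sketch, with an arbitrary output path:)

```sh
# Export all node logs from the retained (failed) cluster into one folder
kind export logs ./kind-logs
# Attach the folder as a single archive instead of individual files
tar -czf kind-logs.tar.gz ./kind-logs
```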

from serial.log:

Detected architecture x86-64.

from docker info:

Architecture: aarch64

You can't run cross-architecture like this. #2718
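
One way to confirm the mismatch yourself, assuming the node image tag from the log above:

```sh
# Architecture of the Docker daemon's host (aarch64 per docker info)
docker info --format '{{.Architecture}}'
# Architecture of the cached node image (reported as amd64 here,
# matching the "Detected architecture x86-64" in serial.log)
docker image inspect kindest/node:v1.31.0 --format '{{.Architecture}}'
```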

/remove-kind bug
/kind support
/close

k8s-ci-robot commented 1 month ago

@BenTheElder: Closing this issue.

In response to [this](https://github.com/kubernetes-sigs/kind/issues/3762#issuecomment-2435904318):

> In the future can you upload the logs folder as described rather than the individual files? It's easier to manage.
>
> from serial.log:
>
> > Detected architecture x86-64.
>
> from docker info:
>
> > Architecture: aarch64
>
> You can't run cross-platform like this. #2718
>
> /remove-kind bug
> /kind support
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.
BenTheElder commented 1 month ago

https://github.com/kubernetes-sigs/kind/issues/2718 has more details; unfortunately, this won't work. It looks likely that this happened via the environment variable rather than by explicitly requesting a cross-architecture image. You'll have to make sure kind is called with that variable unset, or set to match the actual host architecture.
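
A sketch of that fix, assuming the variable in play is `DOCKER_DEFAULT_PLATFORM` (a common cause of implicit cross-arch image pulls; not confirmed from the logs here):

```sh
# See whether a platform override is set (e.g. linux/amd64 on an arm64 host)
echo "${DOCKER_DEFAULT_PLATFORM:-<not set>}"
# Unset it (or set it to the native arch, e.g. linux/arm64), then recreate
unset DOCKER_DEFAULT_PLATFORM
kind delete cluster
kind create cluster
```

If the variable keeps reappearing, check your shell profile and any CI or wrapper scripts that export it.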