kubernetes-sigs / kind

Kubernetes IN Docker - local clusters for testing Kubernetes
https://kind.sigs.k8s.io/
Apache License 2.0

Trying to accelerate kind cluster creation by using a committed image #3652

Closed SnappyLarry closed 2 weeks ago

SnappyLarry commented 3 weeks ago

Hi, I'm currently unable to start a kind cluster that uses a committed image as the control-plane image. Before committing, I had just installed Crossplane and a couple of providers. Am I missing something?

Here are the details:

command:

kind create cluster --name tester1 --config=integrated-tests/kind-config.yaml -v 5

kind-config.yaml:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    image: kindest/node:v1.29.2-pinpin
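One detail worth pulling out of the log below: kubeadm warns that the kubeconfig files already present inside the committed image still point at the original cluster's control plane, which is presumably why init never converges. A small script to extract the expected vs. actual endpoints from one of those warning lines (plain POSIX shell; the warning line is copied verbatim from the log below):

```shell
# Parse one "unexpected API Server URL" warning from the kind log.
# The line is copied verbatim from the kubeadm output; sed pulls out
# the two endpoints for comparison.
line='a kubeconfig file "/etc/kubernetes/admin.conf" exists already but has an unexpected API Server URL: expected: https://tester1-control-plane:6443, got: https://crossplane-perlinator-control-plane:6443'

expected=$(printf '%s\n' "$line" | sed -n 's/.*expected: \([^,]*\),.*/\1/p')
actual=$(printf '%s\n' "$line" | sed -n 's/.*got: \(.*\)$/\1/p')

echo "expected=$expected"   # endpoint the new cluster (tester1) wants
echo "actual=$actual"       # endpoint baked into the committed image
```

The mismatch suggests the `docker commit` snapshot carried over `/etc/kubernetes` state (kubeconfigs, and likely certificates) from the cluster the image was committed from.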

Logs:

Creating cluster "tester1" ...
DEBUG: docker/images.go:58] Image: kindest/node:v1.29.2-pinpin present locally
 ✓ Ensuring node image (kindest/node:v1.29.2-pinpin) 🖼
 ✓ Preparing nodes 📦
DEBUG: config/config.go:96] Using the following kubeadm config for node tester1-control-plane:
apiServer:
  certSANs:
  - localhost
  - 127.0.0.1
  extraArgs:
    runtime-config: ""
apiVersion: kubeadm.k8s.io/v1beta3
clusterName: tester1
controlPlaneEndpoint: tester1-control-plane:6443
controllerManager:
  extraArgs:
    enable-hostpath-provisioner: "true"
kind: ClusterConfiguration
kubernetesVersion: v1.29.2
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/16
scheduler:
  extraArgs: null
---
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- token: abcdef.0123456789abcdef
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.20.0.2
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  kubeletExtraArgs:
    node-ip: 172.20.0.2
    node-labels: ""
    provider-id: kind://docker/tester1/tester1-control-plane
---
apiVersion: kubeadm.k8s.io/v1beta3
controlPlane:
  localAPIEndpoint:
    advertiseAddress: 172.20.0.2
    bindPort: 6443
discovery:
  bootstrapToken:
    apiServerEndpoint: tester1-control-plane:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  kubeletExtraArgs:
    node-ip: 172.20.0.2
    node-labels: ""
    provider-id: kind://docker/tester1/tester1-control-plane
---
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
cgroupRoot: /kubelet
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
  nodefs.inodesFree: 0%
failSwapOn: false
imageGCHighThresholdPercent: 100
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
conntrack:
  maxPerCore: 0
iptables:
  minSyncPeriod: 1s
kind: KubeProxyConfiguration
mode: iptables
 ✓ Writing configuration 📜
DEBUG: kubeadminit/init.go:82] I0611 01:52:24.552665     221 initconfiguration.go:260] loading configuration from "/kind/kubeadm.conf"
W0611 01:52:24.553471     221 initconfiguration.go:341] [config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta3, Kind=JoinConfiguration
I0611 01:52:24.556661     221 certs.go:519] validating certificate period for CA certificate
I0611 01:52:24.556714     221 certs.go:519] validating certificate period for front-proxy CA certificate
[init] Using Kubernetes version: v1.29.2
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0611 01:52:24.556906     221 certs.go:519] validating certificate period for ca certificate
[certs] Using existing ca certificate authority
I0611 01:52:24.557142     221 certs.go:519] validating certificate period for apiserver certificate
[certs] Using existing apiserver certificate and key on disk
I0611 01:52:24.557373     221 certs.go:519] validating certificate period for apiserver-kubelet-client certificate
[certs] Using existing apiserver-kubelet-client certificate and key on disk
I0611 01:52:24.557504     221 certs.go:519] validating certificate period for front-proxy-ca certificate
[certs] Using existing front-proxy-ca certificate authority
I0611 01:52:24.557719     221 certs.go:519] validating certificate period for front-proxy-client certificate
[certs] Using existing front-proxy-client certificate and key on disk
I0611 01:52:24.557893     221 certs.go:519] validating certificate period for etcd/ca certificate
[certs] Using existing etcd/ca certificate authority
I0611 01:52:24.558066     221 certs.go:519] validating certificate period for etcd/server certificate
[certs] Using existing etcd/server certificate and key on disk
I0611 01:52:24.558238     221 certs.go:519] validating certificate period for etcd/peer certificate
[certs] Using existing etcd/peer certificate and key on disk
I0611 01:52:24.558451     221 certs.go:519] validating certificate period for etcd/healthcheck-client certificate
[certs] Using existing etcd/healthcheck-client certificate and key on disk
I0611 01:52:24.558665     221 certs.go:519] validating certificate period for apiserver-etcd-client certificate
[certs] Using existing apiserver-etcd-client certificate and key on disk
I0611 01:52:24.558807     221 certs.go:78] creating new public/private key files for signing service account users
I0611 01:52:24.558949     221 kubeconfig.go:112] creating kubeconfig file for admin.conf
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
I0611 01:52:24.670238     221 loader.go:395] Config loaded from file:  /etc/kubernetes/admin.conf
W0611 01:52:24.670259     221 kubeconfig.go:273] a kubeconfig file "/etc/kubernetes/admin.conf" exists already but has an unexpected API Server URL: expected: https://tester1-control-plane:6443, got: https://crossplane-perlinator-control-plane:6443
I0611 01:52:24.670267     221 kubeconfig.go:112] creating kubeconfig file for super-admin.conf
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/super-admin.conf"
I0611 01:52:24.721667     221 loader.go:395] Config loaded from file:  /etc/kubernetes/super-admin.conf
W0611 01:52:24.721713     221 kubeconfig.go:273] a kubeconfig file "/etc/kubernetes/super-admin.conf" exists already but has an unexpected API Server URL: expected: https://tester1-control-plane:6443, got: https://crossplane-perlinator-control-plane:6443
I0611 01:52:24.721730     221 kubeconfig.go:112] creating kubeconfig file for kubelet.conf
I0611 01:52:24.810273     221 loader.go:395] Config loaded from file:  /etc/kubernetes/kubelet.conf
W0611 01:52:24.810301     221 kubeconfig.go:273] a kubeconfig file "/etc/kubernetes/kubelet.conf" exists already but has an unexpected API Server URL: expected: https://tester1-control-plane:6443, got: https://crossplane-perlinator-control-plane:6443
I0611 01:52:24.810309     221 kubeconfig.go:112] creating kubeconfig file for controller-manager.conf
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
I0611 01:52:25.103725     221 loader.go:395] Config loaded from file:  /etc/kubernetes/controller-manager.conf
I0611 01:52:25.103762     221 kubeconfig.go:112] creating kubeconfig file for scheduler.conf
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
I0611 01:52:25.183900     221 loader.go:395] Config loaded from file:  /etc/kubernetes/scheduler.conf
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0611 01:52:25.185401     221 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I0611 01:52:25.185425     221 manifests.go:102] [control-plane] getting StaticPodSpecs
I0611 01:52:25.185546     221 manifests.go:128] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0611 01:52:25.185559     221 manifests.go:128] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
I0611 01:52:25.185561     221 manifests.go:128] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0611 01:52:25.185563     221 manifests.go:128] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
I0611 01:52:25.185565     221 manifests.go:128] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0611 01:52:25.185911     221 manifests.go:157] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
I0611 01:52:25.185927     221 manifests.go:102] [control-plane] getting StaticPodSpecs
I0611 01:52:25.186001     221 manifests.go:128] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0611 01:52:25.186013     221 manifests.go:128] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
I0611 01:52:25.186015     221 manifests.go:128] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0611 01:52:25.186017     221 manifests.go:128] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0611 01:52:25.186019     221 manifests.go:128] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0611 01:52:25.186021     221 manifests.go:128] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
I0611 01:52:25.186023     221 manifests.go:128] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
I0611 01:52:25.186325     221 manifests.go:157] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
I0611 01:52:25.186343     221 manifests.go:102] [control-plane] getting StaticPodSpecs
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0611 01:52:25.186402     221 manifests.go:128] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0611 01:52:25.186608     221 manifests.go:157] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
I0611 01:52:25.186625     221 kubelet.go:68] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
I0611 01:52:25.288283     221 waitcontrolplane.go:83] [wait-control-plane] Waiting for the API server to be healthy
I0611 01:52:25.288545     221 loader.go:395] Config loaded from file:  /etc/kubernetes/admin.conf
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0611 01:52:26.362093     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1072 milliseconds
I0611 01:52:27.903999     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1041 milliseconds
I0611 01:52:29.411088     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1048 milliseconds
I0611 01:52:30.919581     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1056 milliseconds
I0611 01:52:32.457267     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1093 milliseconds
I0611 01:52:33.908567     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1045 milliseconds
I0611 01:52:35.408104     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1044 milliseconds
I0611 01:52:36.912540     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1049 milliseconds
I0611 01:52:38.453110     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1090 milliseconds
I0611 01:52:39.911829     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1048 milliseconds
I0611 01:52:41.458767     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1095 milliseconds
I0611 01:52:42.918032     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1054 milliseconds
I0611 01:52:44.454394     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1090 milliseconds
I0611 01:52:45.913017     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1049 milliseconds
I0611 01:52:47.409049     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1045 milliseconds
I0611 01:52:48.905051     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1041 milliseconds
I0611 01:52:50.465298     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1101 milliseconds
I0611 01:52:51.911363     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1048 milliseconds
I0611 01:52:53.402748     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1039 milliseconds
I0611 01:52:54.921106     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1057 milliseconds
I0611 01:52:56.437826     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1074 milliseconds
I0611 01:52:57.909728     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1045 milliseconds
I0611 01:52:59.402793     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1039 milliseconds
I0611 01:53:00.915893     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1052 milliseconds
I0611 01:53:02.442688     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1079 milliseconds
I0611 01:53:03.899783     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1036 milliseconds
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
I0611 01:53:05.402409     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1038 milliseconds
I0611 01:53:06.905805     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1042 milliseconds
I0611 01:53:08.458188     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1094 milliseconds
I0611 01:53:09.905564     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1042 milliseconds
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
I0611 01:53:11.409353     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1045 milliseconds
I0611 01:53:12.908959     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1045 milliseconds
I0611 01:53:14.453747     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1090 milliseconds
I0611 01:53:15.941514     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1077 milliseconds
I0611 01:53:17.402783     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1039 milliseconds
I0611 01:53:18.907569     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1043 milliseconds
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
I0611 01:53:20.429875     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1067 milliseconds
I0611 01:53:21.898830     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1035 milliseconds
I0611 01:53:23.412565     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1048 milliseconds
I0611 01:53:24.907556     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1044 milliseconds
I0611 01:53:26.436571     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1073 milliseconds
I0611 01:53:27.908318     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1045 milliseconds
I0611 01:53:29.396910     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1033 milliseconds
I0611 01:53:30.903665     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1039 milliseconds
I0611 01:53:32.458437     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1095 milliseconds
I0611 01:53:33.910400     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1047 milliseconds
I0611 01:53:35.402251     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1039 milliseconds
I0611 01:53:36.904880     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1041 milliseconds
I0611 01:53:38.444489     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1080 milliseconds
I0611 01:53:39.914093     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1050 milliseconds
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
I0611 01:53:41.399420     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1036 milliseconds
I0611 01:53:42.905078     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1042 milliseconds
I0611 01:53:44.445375     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1082 milliseconds
I0611 01:53:45.914560     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1050 milliseconds
I0611 01:53:47.409306     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1046 milliseconds
I0611 01:53:48.907498     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1044 milliseconds
I0611 01:53:50.434288     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1071 milliseconds
I0611 01:53:51.913640     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1050 milliseconds
I0611 01:53:53.414734     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1051 milliseconds
I0611 01:53:54.913991     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1050 milliseconds
I0611 01:53:56.450406     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1086 milliseconds
I0611 01:53:57.962225     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1099 milliseconds
I0611 01:53:59.408697     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1045 milliseconds
I0611 01:54:00.904210     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1040 milliseconds
I0611 01:54:02.455484     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1092 milliseconds
I0611 01:54:03.908306     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1045 milliseconds
I0611 01:54:05.412399     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1049 milliseconds
I0611 01:54:06.908968     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1045 milliseconds
I0611 01:54:08.449957     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1086 milliseconds
I0611 01:54:09.905503     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1041 milliseconds
I0611 01:54:11.408958     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1045 milliseconds
I0611 01:54:12.911199     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1047 milliseconds
I0611 01:54:14.454303     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1091 milliseconds
I0611 01:54:15.904002     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1041 milliseconds
I0611 01:54:17.420473     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1057 milliseconds
I0611 01:54:18.907312     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1044 milliseconds
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
        - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
couldn't initialize a Kubernetes cluster
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
        cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:109
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:259
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
        cmd/kubeadm/app/cmd/init.go:124
github.com/spf13/cobra.(*Command).execute
        vendor/github.com/spf13/cobra/command.go:940
github.com/spf13/cobra.(*Command).ExecuteC
        vendor/github.com/spf13/cobra/command.go:1068
github.com/spf13/cobra.(*Command).Execute
        vendor/github.com/spf13/cobra/command.go:992
k8s.io/kubernetes/cmd/kubeadm/app.Run
        cmd/kubeadm/app/kubeadm.go:50
main.main
        cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:267
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1650
error execution phase wait-control-plane
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:260
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
        cmd/kubeadm/app/cmd/init.go:124
github.com/spf13/cobra.(*Command).execute
        vendor/github.com/spf13/cobra/command.go:940
github.com/spf13/cobra.(*Command).ExecuteC
        vendor/github.com/spf13/cobra/command.go:1068
github.com/spf13/cobra.(*Command).Execute
        vendor/github.com/spf13/cobra/command.go:992
k8s.io/kubernetes/cmd/kubeadm/app.Run
        cmd/kubeadm/app/kubeadm.go:50
main.main
        cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:267
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1650
 ✗ Starting control-plane 🕹️
Deleted nodes: ["tester1-control-plane"]
ERROR: failed to create cluster: failed to init node with kubeadm: command "docker exec --privileged tester1-control-plane kubeadm init --skip-phases=preflight --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 1
Command Output: [identical to the kubeadm init output shown above]
I0611 01:52:25.186023     221 manifests.go:128] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
I0611 01:52:25.186325     221 manifests.go:157] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
I0611 01:52:25.186343     221 manifests.go:102] [control-plane] getting StaticPodSpecs
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0611 01:52:25.186402     221 manifests.go:128] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0611 01:52:25.186608     221 manifests.go:157] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
I0611 01:52:25.186625     221 kubelet.go:68] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
I0611 01:52:25.288283     221 waitcontrolplane.go:83] [wait-control-plane] Waiting for the API server to be healthy
I0611 01:52:25.288545     221 loader.go:395] Config loaded from file:  /etc/kubernetes/admin.conf
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0611 01:52:26.362093     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1072 milliseconds
        ... (identical healthz probe against https://crossplane-perlinator-control-plane:6443 repeated every ~1.5s for the next ~75s) ...
I0611 01:53:03.899783     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1036 milliseconds
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
I0611 01:53:05.402409     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1038 milliseconds
I0611 01:53:06.905805     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1042 milliseconds
I0611 01:53:08.458188     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1094 milliseconds
I0611 01:53:09.905564     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1042 milliseconds
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
I0611 01:53:11.409353     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1045 milliseconds
I0611 01:53:12.908959     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1045 milliseconds
I0611 01:53:14.453747     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1090 milliseconds
I0611 01:53:15.941514     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1077 milliseconds
I0611 01:53:17.402783     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1039 milliseconds
I0611 01:53:18.907569     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1043 milliseconds
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
        ... (probe loop continues) ...
I0611 01:53:39.914093     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1050 milliseconds
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
        ... (probe loop continues) ...
I0611 01:54:18.907312     221 round_trippers.go:553] GET https://crossplane-perlinator-control-plane:6443/healthz?timeout=10s  in 1044 milliseconds
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
        - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
couldn't initialize a Kubernetes cluster
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
        cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:109
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:259
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
        cmd/kubeadm/app/cmd/init.go:124
github.com/spf13/cobra.(*Command).execute
        vendor/github.com/spf13/cobra/command.go:940
github.com/spf13/cobra.(*Command).ExecuteC
        vendor/github.com/spf13/cobra/command.go:1068
github.com/spf13/cobra.(*Command).Execute
        vendor/github.com/spf13/cobra/command.go:992
k8s.io/kubernetes/cmd/kubeadm/app.Run
        cmd/kubeadm/app/kubeadm.go:50
main.main
        cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:267
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1650
error execution phase wait-control-plane
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:260
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
        cmd/kubeadm/app/cmd/init.go:124
github.com/spf13/cobra.(*Command).execute
        vendor/github.com/spf13/cobra/command.go:940
github.com/spf13/cobra.(*Command).ExecuteC
        vendor/github.com/spf13/cobra/command.go:1068
github.com/spf13/cobra.(*Command).Execute
        vendor/github.com/spf13/cobra/command.go:992
k8s.io/kubernetes/cmd/kubeadm/app.Run
        cmd/kubeadm/app/kubeadm.go:50
main.main
        cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:267
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1650
Stack Trace: 
sigs.k8s.io/kind/pkg/errors.WithStack
        sigs.k8s.io/kind/pkg/errors/errors.go:59
sigs.k8s.io/kind/pkg/exec.(*LocalCmd).Run
        sigs.k8s.io/kind/pkg/exec/local.go:124
sigs.k8s.io/kind/pkg/cluster/internal/providers/docker.(*nodeCmd).Run
        sigs.k8s.io/kind/pkg/cluster/internal/providers/docker/node.go:146
sigs.k8s.io/kind/pkg/exec.CombinedOutputLines
        sigs.k8s.io/kind/pkg/exec/helpers.go:67
sigs.k8s.io/kind/pkg/cluster/internal/create/actions/kubeadminit.(*action).Execute
        sigs.k8s.io/kind/pkg/cluster/internal/create/actions/kubeadminit/init.go:81
sigs.k8s.io/kind/pkg/cluster/internal/create.Cluster
        sigs.k8s.io/kind/pkg/cluster/internal/create/create.go:135
sigs.k8s.io/kind/pkg/cluster.(*Provider).Create
        sigs.k8s.io/kind/pkg/cluster/provider.go:181
sigs.k8s.io/kind/pkg/cmd/kind/create/cluster.runE
        sigs.k8s.io/kind/pkg/cmd/kind/create/cluster/createcluster.go:110
sigs.k8s.io/kind/pkg/cmd/kind/create/cluster.NewCommand.func1
        sigs.k8s.io/kind/pkg/cmd/kind/create/cluster/createcluster.go:54
github.com/spf13/cobra.(*Command).execute
        github.com/spf13/cobra@v1.4.0/command.go:856
github.com/spf13/cobra.(*Command).ExecuteC
        github.com/spf13/cobra@v1.4.0/command.go:974
github.com/spf13/cobra.(*Command).Execute
        github.com/spf13/cobra@v1.4.0/command.go:902
sigs.k8s.io/kind/cmd/kind/app.Run
        sigs.k8s.io/kind/cmd/kind/app/main.go:53
sigs.k8s.io/kind/cmd/kind/app.Main
        sigs.k8s.io/kind/cmd/kind/app/main.go:35
main.main
        sigs.k8s.io/kind/main.go:25
runtime.main
        runtime/proc.go:267
runtime.goexit
        runtime/asm_amd64.s:1650
stmcginnis commented 3 weeks ago

Does cluster creation work if you are not using a custom image? If so, can you describe the process you used to create this custom image? That would point to a problem with how that image is configured, but it's hard to say without knowing a little more about it.

SnappyLarry commented 3 weeks ago

Yes, it does work with the original image.

To create the custom image, I do the following:

- create a cluster from the stock node image
- install crossplane using a helm chart
- install crossplane required providers and functions (yaml manifests)
- docker commit the control-plane container and tag it as kindest/node:v1.29.2-pinpin

Now I try to create my cluster with this config:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    image: kindest/node:v1.29.2-pinpin

and this command:

kind create cluster --name tester1 --config=integrated-tests/kind-config.yaml -v 5

It fails (with the output I provided earlier).

Hope this helps you understand my situation better.
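For context, the commit step in a workflow like this looks roughly as follows (a sketch, not a supported flow; crossplane-perlinator is the original cluster name, inferred from the API server URLs in the logs above):

```shell
# Create a normal cluster and install everything into it first
kind create cluster --name crossplane-perlinator

# ...helm install crossplane, apply provider/function manifests...

# Snapshot the running node container as a new "node image"
docker commit crossplane-perlinator-control-plane kindest/node:v1.29.2-pinpin
```

As the rest of the thread explains, the committed container carries per-cluster state (certs, kubeconfigs, the old cluster name), so it does not work as a clean base node image.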

stmcginnis commented 3 weeks ago

I think the easiest will probably be to run kind create cluster --name tester1 --config=integrated-tests/kind-config.yaml --retain to keep the node container around after the failure. Then you can exec into the container to inspect the log files. Or kind export logs --name tester1 to extract everything locally.
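Put together, that debugging pass might look like this (a sketch using the names from this thread):

```shell
# Reproduce the failure but keep the node container around for inspection
kind create cluster --name tester1 --config=integrated-tests/kind-config.yaml --retain

# Dump all node logs (kubelet journal, containerd, static pod logs) to a local dir
kind export logs --name tester1

# Or poke around inside the retained node container directly
docker exec -it tester1-control-plane journalctl -u kubelet --no-pager | tail -50
```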

SnappyLarry commented 3 weeks ago

I exported the logs with kind export logs. The only thing that seems odd is the following error looping in journal.log:

Jun 11 12:44:00 tester1-control-plane kubelet[260]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 11 12:44:00 tester1-control-plane kubelet[260]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jun 11 12:44:00 tester1-control-plane kubelet[260]: I0611 12:44:00.512696     260 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jun 11 12:44:00 tester1-control-plane kubelet[260]: I0611 12:44:00.624631     260 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jun 11 12:44:00 tester1-control-plane kubelet[260]: I0611 12:44:00.624674     260 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jun 11 12:44:00 tester1-control-plane kubelet[260]: I0611 12:44:00.624836     260 server.go:919] "Client rotation is on, will bootstrap in background"
Jun 11 12:44:00 tester1-control-plane kubelet[260]: E0611 12:44:00.625374     260 bootstrap.go:241] unable to read existing bootstrap client config from /etc/kubernetes/kubelet.conf: invalid configuration: [unable to read client-cert /var/lib/kubelet/pki/kubelet-client-current.pem for system:node:crossplane-perlinator-control-plane due to open /var/lib/kubelet/pki/kubelet-client-current.pem: no such file or directory, unable to read client-key /var/lib/kubelet/pki/kubelet-client-current.pem for system:node:crossplane-perlinator-control-plane due to open /var/lib/kubelet/pki/kubelet-client-current.pem: no such file or directory]
Jun 11 12:44:00 tester1-control-plane kubelet[260]: E0611 12:44:00.625418     260 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
Jun 11 12:44:00 tester1-control-plane systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 11 12:44:00 tester1-control-plane systemd[1]: kubelet.service: Failed with result 'exit-code'.
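The two paths in those errors can be checked directly in the retained node container (a sketch; both are generated per-cluster by kubeadm during init/TLS bootstrap, so they would not be expected to carry over cleanly from a commit of a different cluster):

```shell
# Client cert/key the kubelet expects after TLS bootstrap
docker exec tester1-control-plane ls -l /var/lib/kubelet/pki/

# Kubeconfigs kubeadm writes during init; note the stale cluster name inside
docker exec tester1-control-plane ls -l /etc/kubernetes/
docker exec tester1-control-plane grep server /etc/kubernetes/kubelet.conf
```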
aojea commented 3 weeks ago

Jun 11 12:44:00 tester1-control-plane kubelet[260]: E0611 12:44:00.625418 260 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"

an important file is missing for the kubelet

SnappyLarry commented 3 weeks ago

Yes, but I don't understand how it somehow got omitted when I did the docker commit?

stmcginnis commented 3 weeks ago

install crossplane using a helm chart
install crossplane required providers and functions (yaml manifests)

This would imply that you've fully created a cluster and are now trying to reuse it, right? That's similar to just stopping and restarting the cluster. But you're trying to deploy it again using kind, so it's attempting to use your image as a base node image to create a new cluster. This will likely cause conflicts.

I think what you're hoping to do falls under #3508.

SnappyLarry commented 3 weeks ago

Yep, seems like it.

BenTheElder commented 3 weeks ago

trying to accelerate Kind cluster creation by using a commited image

isn't supported, see https://github.com/kubernetes-sigs/kind/issues/3508 for some discussion.

I recommend alternative performance improvements instead: e.g. you could kind load the container images to avoid pulling them, or better yet run a local registry and move things there (see the docs).
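The kind load route looks like this (image name and tag here are hypothetical examples):

```shell
# Pull (or build) the images once on the host...
docker pull crossplane/crossplane:v1.16.0

# ...then copy them into the new cluster's containerd image store,
# so the kubelet never has to pull them over the network
kind load docker-image crossplane/crossplane:v1.16.0 --name tester1
```

This keeps cluster creation itself stock (so kubeadm init works) while still skipping the slow image pulls.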