labring / sealos

Sealos is a production-ready Kubernetes distribution. You can run any Docker image on Sealos, start high-availability databases such as MySQL, PostgreSQL, Redis, and MongoDB, and develop applications in any programming language.
https://cloud.sealos.io
Apache License 2.0

Error on first installation #2810

Closed txl7771328 closed 1 year ago

txl7771328 commented 1 year ago

Detailed description of the question.

WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
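(Editor's note: these warnings come from docker login being driven with --password on the command line; they do not stop the install. If you want to silence them, docker can read the password from stdin instead. A minimal sketch, using placeholder REGISTRY_USER/REGISTRY_PASSWORD variables that are not taken from this log:

# hypothetical credentials; docker reads the password from stdin, so nothing lands in shell history
echo "$REGISTRY_PASSWORD" | docker login sealos.hub:5000 --username "$REGISTRY_USER" --password-stdin
)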

Login Succeeded
Image is up to date for sealos.hub:5000/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f
2023-03-15T18:07:26 info domain apiserver.cluster.local:192.17.33.42 append success
W0315 18:07:26.908934 1537765 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubelet.config.k8s.io", Version:"v1beta1", Kind:"KubeletConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "enableDebugFlagsHandler"
W0315 18:07:26.909366 1537765 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeproxy.config.k8s.io", Version:"v1alpha1", Kind:"KubeProxyConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "detectLocal"
W0315 18:07:26.910681 1537765 utils.go:69] The recommended value for "healthzBindAddress" in "KubeletConfiguration" is: 127.0.0.1; the provided value is: 0.0.0.0
[init] Using Kubernetes version: v1.20.5
[preflight] Running pre-flight checks
[WARNING FileExisting-socat]: socat not found in system path
[WARNING Hostname]: hostname "k8s-m1" could not be reached
[WARNING Hostname]: hostname "k8s-m1": lookup k8s-m1 on 192.17.33.38:53: read udp 192.17.33.42:33136->192.17.33.38:53: read: connection refused
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
W0315 18:07:57.474038 1537765 kubeconfig.go:246] a kubeconfig file "/etc/kubernetes/controller-manager.conf" exists already but has an unexpected API Server URL: expected: https://192.17.33.42:6443, got: https://apiserver.cluster.local:6443
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
W0315 18:07:57.746213 1537765 kubeconfig.go:246] a kubeconfig file "/etc/kubernetes/scheduler.conf" exists already but has an unexpected API Server URL: expected: https://192.17.33.42:6443, got: https://apiserver.cluster.local:6443
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
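(Editor's note: the two hostname warnings show that k8s-m1 cannot be resolved through the DNS server 192.17.33.38 (the lookup is refused), and socat is missing from the path. Neither is fatal by itself, but both are easy to clear before retrying. A minimal sketch, run on every node, assuming the name/IP pairs seen in this log; k8s-m3 for 192.17.33.44 is a guess, since only k8s-m1 and k8s-m2 are named here:

# pin the cluster hostnames locally so kubeadm does not depend on 192.17.33.38
cat >> /etc/hosts <<'EOF'
192.17.33.42 k8s-m1
192.17.33.43 k8s-m2
192.17.33.44 k8s-m3
EOF

# install socat to clear the preflight warning (use whichever package manager applies)
yum install -y socat || apt-get install -y socat
)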

Any reference materials you have seen.

No response

txl7771328 commented 1 year ago

Started with version 4.1.3; switching to 4.1.4 afterwards made no difference.

txl7771328 commented 1 year ago

2023-03-15T18:10:06 info start to copy kubeadm join config to master: 192.17.33.44:22 2023-03-15T18:10:07 info start to copy kubeadm join config to master: 192.17.33.43:22 2023-03-15T18:10:08 info start to join 192.17.33.43:22 as master1, 8 it/s) 2023-03-15T18:10:08 info registry auth in node 192.17.33.43:22 192.17.33.43:22: 2023-03-15T18:10:13 info domain sealos.hub:192.17.33.42 append success 192.17.33.43:22: WARNING! Using --password via the CLI is insecure. Use --password-stdin. 192.17.33.43:22: WARNING! Your password will be stored unencrypted in /root/.docker/config.json. 192.17.33.43:22: Configure a credential helper to remove this warning. See 192.17.33.43:22: https://docs.docker.com/engine/reference/commandline/login/#credentials-store 192.17.33.43:22: 192.17.33.43:22: Login Succeeded 192.17.33.43:22: Image is up to date for sealos.hub:5000/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f 2023-03-15T18:10:09 info start to generator cert 192.17.33.43:22 as master 192.17.33.43:22: 2023-03-15T18:10:14 info apiserver altNames : {map[apiserver.cluster.local:apiserver.cluster.local k8s-m2:k8s-m2 kubernetes:kubernetes kubernetes.default:kubernetes.default kubernetes.default.svc:kubernetes.default.svc kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local localhost:localhost] map[10.103.97.2:10.103.97.2 10.96.0.1:10.96.0.1 127.0.0.1:127.0.0.1 192.17.33.42:192.17.33.42 192.17.33.43:192.17.33.43 192.17.33.44:192.17.33.44]} 192.17.33.43:22: 2023-03-15T18:10:14 info Etcd altnames : {map[k8s-m2:k8s-m2 localhost:localhost] map[127.0.0.1:127.0.0.1 192.17.33.43:192.17.33.43 ::1:::1]}, commonName : k8s-m2 192.17.33.43:22: 2023-03-15T18:10:14 info sa.key sa.pub already exist 192.17.33.43:22: 2023-03-15T18:10:15 info domain apiserver.cluster.local:192.17.33.42 append success 192.17.33.43:22: [preflight] Running pre-flight checks 192.17.33.43:22: [WARNING FileExisting-socat]: socat not found in system path 192.17.33.43:22: [WARNING Hostname]: hostname "k8s-m2" could not be reached 192.17.33.43:22: [WARNING Hostname]: hostname "k8s-m2": lookup k8s-m2 on 192.17.33.38:53: read udp 192.17.33.43:60357->192.17.33.38:53: read: connection refused 192.17.33.43:22: error execution phase preflight: [preflight] Some fatal errors occurred: 192.17.33.43:22: [ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists 192.17.33.43:22: [preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=... 192.17.33.43:22: To see the stack trace of this error execute with --v=5 or higher 2023-03-15T18:10:31 error Applied to cluster error: exec kubeadm join in 192.17.33.43:22 failed failed to execute command(kubeadm join --config=/root/.sealos/default/etc/kubeadm-join-master.yaml -v 0 --ignore-preflight-errors=SystemVerification) on host(192.17.33.43:22): output([preflight] Running pre-flight checks [WARNING FileExisting-socat]: socat not found in system path [WARNING Hostname]: hostname "k8s-m2" could not be reached [WARNING Hostname]: hostname "k8s-m2": lookup k8s-m2 on 192.17.33.38:53: read udp 192.17.33.43:60357->192.17.33.38:53: read: connection refused error execution phase preflight: [preflight] Some fatal errors occurred: [ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists [preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=... 
To see the stack trace of this error execute with --v=5 or higher), error(Process exited with status 1) Error: exec kubeadm join in 192.17.33.43:22 failed failed to execute command(kubeadm join --config=/root/.sealos/default/etc/kubeadm-join-master.yaml -v 0 --ignore-preflight-errors=SystemVerification) on host(192.17.33.43:22): output([preflight] Running pre-flight checks [WARNING FileExisting-socat]: socat not found in system path [WARNING Hostname]: hostname "k8s-m2" could not be reached [WARNING Hostname]: hostname "k8s-m2": lookup k8s-m2 on 192.17.33.38:53: read udp 192.17.33.43:60357->192.17.33.38:53: read: connection refused error execution phase preflight: [preflight] Some fatal errors occurred: [ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists [preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=... To see the stack trace of this error execute with --v=5 or higher), error(Process exited with status 1) exec kubeadm join in 192.17.33.43:22 failed failed to execute command(kubeadm join --config=/root/.sealos/default/etc/kubeadm-join-master.yaml -v 0 --ignore-preflight-errors=SystemVerification) on host(192.17.33.43:22): output([preflight] Running pre-flight checks [WARNING FileExisting-socat]: socat not found in system path [WARNING Hostname]: hostname "k8s-m2" could not be reached [WARNING Hostname]: hostname "k8s-m2": lookup k8s-m2 on 192.17.33.38:53: read udp 192.17.33.43:60357->192.17.33.38:53: read: connection refused error execution phase preflight: [preflight] Some fatal errors occurred: [ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists [preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=... To see the stack trace of this error execute with --v=5 or higher), error(Process exited with status 1)
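(Editor's note: the fatal part of this join failure is [ERROR FileAvailable--etc-kubernetes-kubelet.conf]: the joining master 192.17.33.43 still carries state from an earlier attempt, so kubeadm refuses to proceed. One way to return that node to a clean slate before retrying; this is a sketch with standard kubeadm tooling, not sealos's own cleanup path, which is sealos reset for the whole cluster:

# on 192.17.33.43: wipe the leftover kubeadm state from the previous attempt
kubeadm reset -f
# kubeadm reset leaves CNI config and iptables rules behind; clear the CNI config too
rm -rf /etc/cni/net.d
)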

cuisongliu commented 1 year ago

Using 4.1.7 fixed this.
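(Editor's note: that is, upgrade the sealos binary to v4.1.7 or later and re-run the installation. A rough sketch of the retry, assuming the cluster is torn down first; the labring/kubernetes image tag below is an example, not taken from this issue, and the master IPs are the ones from the log:

# tear down the partially built cluster on all nodes
sealos reset

# re-create it with the upgraded sealos binary; pick the Kubernetes image tag you actually need
sealos run labring/kubernetes:v1.25.0 \
    --masters 192.17.33.42,192.17.33.43,192.17.33.44 \
    -p <ssh-password>
)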

txl7771328 commented 1 year ago

W0316 13:35:21.118498 57543 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubelet.config.k8s.io", Version:"v1beta1", Kind:"KubeletConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "enableDebugFlagsHandler"
W0316 13:35:21.119000 57543 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeproxy.config.k8s.io", Version:"v1alpha1", Kind:"KubeProxyConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "detectLocal"
W0316 13:35:21.120431 57543 utils.go:69] The recommended value for "healthzBindAddress" in "KubeletConfiguration" is: 127.0.0.1; the provided value is: 0.0.0.0
[init] Using Kubernetes version: v1.20.5
[preflight] Running pre-flight checks
[WARNING FileExisting-socat]: socat not found in system path
[WARNING Hostname]: hostname "k8s-m1" could not be reached
[WARNING Hostname]: hostname "k8s-m1": lookup k8s-m1 on 192.17.33.38:53: read udp 192.17.33.42:50382->192.17.33.38:53: i/o timeout
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
W0316 13:36:01.243211 57543 kubeconfig.go:246] a kubeconfig file "/etc/kubernetes/controller-manager.conf" exists already but has an unexpected API Server URL: expected: https://192.17.33.42:6443, got: https://apiserver.cluster.local:6443
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
W0316 13:36:01.378013 57543 kubeconfig.go:246] a kubeconfig file "/etc/kubernetes/scheduler.conf" exists already but has an unexpected API Server URL: expected: https://192.17.33.42:6443, got: https://apiserver.cluster.local:6443
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 177.501863 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-m1 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-m1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
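(Editor's note: once the kubeconfig is in place, a quick sanity check with plain kubectl, nothing sealos-specific, confirms the control plane is answering:

kubectl get nodes
kubectl get pods -n kube-system
)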

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/
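(Editor's note: with sealos the pod network is normally delivered as another cluster image rather than a raw manifest applied by hand. A sketch of that route, where the calico image tag is an example and not taken from this issue:

# layers a CNI onto the running cluster; other labring CNI images work the same way
sealos run labring/calico:v3.24.1
)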

You can now join any number of control-plane nodes by copying certificate authorities and service account keys on each node and then running the following as root:

kubeadm join apiserver.cluster.local:6443 --token \
    --discovery-token-ca-cert-hash sha256:ddb9e783e25d2731f0a9473ee7dacb9eb2821b5b353e5f335f0305ae42b5628a \
    --control-plane --certificate-key
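(Editor's note: the token and certificate-key values are not shown above. If they have expired by the time another master is joined, both can be regenerated on the first master with standard kubeadm commands:

# prints a fresh join command with a new bootstrap token
kubeadm token create --print-join-command
# re-uploads the control-plane certificates and prints the matching --certificate-key
kubeadm init phase upload-certs --upload-certs
)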

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join apiserver.cluster.local:6443 --token \ --discovery-token-ca-cert-hash sha256:ddb9e783e25d2731f0a9473ee7dacb9eb2821b5b353e5f335f0305ae42b5628a 2023-03-16T13:39:18 info Executing pipeline Join in CreateProcessor. 2023-03-16T13:39:18 info [192.17.33.43:22 192.17.33.44:22] will be added as master 2023-03-16T13:39:18 info start to init filesystem join masters... 2023-03-16T13:39:18 info start to copy static files to masters 2023-03-16T13:39:18 info start to copy kubeconfig files to masters 2023-03-16T13:39:19 info start to copy etc pki files to masters/1, 8 it/s) [1/1]copying files to 192.17.33.43:22 100% [===============] (0/22, 0 it/min)2023-03-16T13:39:22 info start to get kubernetes token... 2023-03-16T13:39:32 info start to copy kubeadm join config to master: 192.17.33.44:22 2023-03-16T13:39:33 info start to copy kubeadm join config to master: 192.17.33.43:22 2023-03-16T13:39:34 info start to join 192.17.33.43:22 as master1, 9 it/s) 2023-03-16T13:39:34 info registry auth in node 192.17.33.43:22 192.17.33.43:22: 2023-03-16T13:39:34 info domain sealos.hub:192.17.33.42 append success 192.17.33.43:22: WARNING! Using --password via the CLI is insecure. Use --password-stdin. 192.17.33.43:22: WARNING! Your password will be stored unencrypted in /root/.docker/config.json. 192.17.33.43:22: Configure a credential helper to remove this warning. See 192.17.33.43:22: https://docs.docker.com/engine/reference/commandline/login/#credentials-store 192.17.33.43:22: 192.17.33.43:22: Login Succeeded 192.17.33.43:22: Image is up to date for sealos.hub:5000/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f 2023-03-16T13:39:35 info start to generator cert 192.17.33.43:22 as master 192.17.33.43:22: 2023-03-16T13:39:35 info apiserver altNames : {map[apiserver.cluster.local:apiserver.cluster.local k8s-m2:k8s-m2 kubernetes:kubernetes kubernetes.default:kubernetes.default kubernetes.default.svc:kubernetes.default.svc kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local localhost:localhost] map[10.103.97.2:10.103.97.2 10.96.0.1:10.96.0.1 127.0.0.1:127.0.0.1 192.17.33.42:192.17.33.42 192.17.33.43:192.17.33.43 192.17.33.44:192.17.33.44]} 192.17.33.43:22: 2023-03-16T13:39:35 info Etcd altnames : {map[k8s-m2:k8s-m2 localhost:localhost] map[127.0.0.1:127.0.0.1 192.17.33.43:192.17.33.43 ::1:::1]}, commonName : k8s-m2 192.17.33.43:22: 2023-03-16T13:39:35 info sa.key sa.pub already exist 192.17.33.43:22: 2023-03-16T13:39:37 info domain apiserver.cluster.local:192.17.33.42 append success 192.17.33.43:22: [preflight] Running pre-flight checks 192.17.33.43:22: [WARNING FileExisting-socat]: socat not found in system path 192.17.33.43:22: [WARNING Hostname]: hostname "k8s-m2" could not be reached 192.17.33.43:22: [WARNING Hostname]: hostname "k8s-m2": lookup k8s-m2 on 192.17.33.38:53: read udp 192.17.33.43:43855->192.17.33.38:53: i/o timeout 192.17.33.43:22: [preflight] Reading configuration from the cluster... 
192.17.33.43:22: [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml' 192.17.33.43:22: W0316 13:40:07.626697 48331 utils.go:69] The recommended value for "healthzBindAddress" in "KubeletConfiguration" is: 127.0.0.1; the provided value is: 0.0.0.0 192.17.33.43:22: [preflight] Running pre-flight checks before initializing the new control plane instance 192.17.33.43:22: [preflight] Pulling images required for setting up a Kubernetes cluster 192.17.33.43:22: [preflight] This might take a minute or two, depending on the speed of your internet connection 192.17.33.43:22: [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' 192.17.33.43:22: [download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace 192.17.33.43:22: [certs] Using certificateDir folder "/etc/kubernetes/pki" 192.17.33.43:22: [certs] Using the existing "apiserver" certificate and key 192.17.33.43:22: [certs] Using the existing "apiserver-kubelet-client" certificate and key 192.17.33.43:22: [certs] Using the existing "front-proxy-client" certificate and key 192.17.33.43:22: [certs] Using the existing "etcd/server" certificate and key 192.17.33.43:22: [certs] Using the existing "etcd/peer" certificate and key 192.17.33.43:22: [certs] Using the existing "apiserver-etcd-client" certificate and key 192.17.33.43:22: [certs] Using the existing "etcd/healthcheck-client" certificate and key 192.17.33.43:22: [certs] Valid certificates and keys now exist in "/etc/kubernetes/pki" 192.17.33.43:22: [certs] Using the existing "sa" key 192.17.33.43:22: [kubeconfig] Generating kubeconfig files 192.17.33.43:22: [kubeconfig] Using kubeconfig folder "/etc/kubernetes" 192.17.33.43:22: [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf" 192.17.33.43:22: W0316 13:40:18.961889 48331 kubeconfig.go:246] a kubeconfig file "/etc/kubernetes/controller-manager.conf" exists already but has an unexpected API Server URL: expected: https://192.17.33.43:6443, got: https://apiserver.cluster.local:6443 192.17.33.43:22: [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf" 192.17.33.43:22: W0316 13:40:19.111622 48331 kubeconfig.go:246] a kubeconfig file "/etc/kubernetes/scheduler.conf" exists already but has an unexpected API Server URL: expected: https://192.17.33.43:6443, got: https://apiserver.cluster.local:6443 192.17.33.43:22: [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf" 192.17.33.43:22: [control-plane] Using manifest folder "/etc/kubernetes/manifests" 192.17.33.43:22: [control-plane] Creating static Pod manifest for "kube-apiserver" 192.17.33.43:22: [control-plane] Creating static Pod manifest for "kube-controller-manager" 192.17.33.43:22: [control-plane] Creating static Pod manifest for "kube-scheduler" 192.17.33.43:22: [check-etcd] Checking that the etcd cluster is healthy 192.17.33.43:22: [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" 192.17.33.43:22: [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" 192.17.33.43:22: [kubelet-start] Starting the kubelet 192.17.33.43:22: [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap... 192.17.33.43:22: [kubelet-check] Initial timeout of 40s passed. 192.17.33.43:22: [kubelet-check] It seems like the kubelet isn't running or healthy. 
192.17.33.43:22: [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused. 192.17.33.43:22: [kubelet-check] It seems like the kubelet isn't running or healthy. 192.17.33.43:22: [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused. 192.17.33.43:22: [kubelet-check] It seems like the kubelet isn't running or healthy. 192.17.33.43:22: [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused. 192.17.33.43:22: error execution phase kubelet-start: error uploading crisocket: timed out waiting for the condition 192.17.33.43:22: To see the stack trace of this error execute with --v=5 or higher 2023-03-16T13:42:24 error Applied to cluster error: exec kubeadm join in 192.17.33.43:22 failed failed to execute command(kubeadm join --config=/root/.sealos/default/etc/kubeadm-join-master.yaml -v 0 --ignore-preflight-errors=SystemVerification) on host(192.17.33.43:22): output([preflight] Running pre-flight checks [WARNING FileExisting-socat]: socat not found in system path [WARNING Hostname]: hostname "k8s-m2" could not be reached [WARNING Hostname]: hostname "k8s-m2": lookup k8s-m2 on 192.17.33.38:53: read udp 192.17.33.43:43855->192.17.33.38:53: i/o timeout [preflight] Reading configuration from the cluster... [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml' W0316 13:40:07.626697 48331 utils.go:69] The recommended value for "healthzBindAddress" in "KubeletConfiguration" is: 127.0.0.1; the provided value is: 0.0.0.0 [preflight] Running pre-flight checks before initializing the new control plane instance [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace [certs] Using certificateDir folder "/etc/kubernetes/pki" [certs] Using the existing "apiserver" certificate and key [certs] Using the existing "apiserver-kubelet-client" certificate and key [certs] Using the existing "front-proxy-client" certificate and key [certs] Using the existing "etcd/server" certificate and key [certs] Using the existing "etcd/peer" certificate and key [certs] Using the existing "apiserver-etcd-client" certificate and key [certs] Using the existing "etcd/healthcheck-client" certificate and key [certs] Valid certificates and keys now exist in "/etc/kubernetes/pki" [certs] Using the existing "sa" key [kubeconfig] Generating kubeconfig files [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf" W0316 13:40:18.961889 48331 kubeconfig.go:246] a kubeconfig file "/etc/kubernetes/controller-manager.conf" exists already but has an unexpected API Server URL: expected: https://192.17.33.43:6443, got: https://apiserver.cluster.local:6443 [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf" W0316 13:40:19.111622 48331 kubeconfig.go:246] a kubeconfig file 
"/etc/kubernetes/scheduler.conf" exists already but has an unexpected API Server URL: expected: https://192.17.33.43:6443, got: https://apiserver.cluster.local:6443 [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf" [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [check-etcd] Checking that the etcd cluster is healthy [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Starting the kubelet [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap... [kubelet-check] Initial timeout of 40s passed. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused. error execution phase kubelet-start: error uploading crisocket: timed out waiting for the condition To see the stack trace of this error execute with --v=5 or higher), error(Process exited with status 1) Error: exec kubeadm join in 192.17.33.43:22 failed failed to execute command(kubeadm join --config=/root/.sealos/default/etc/kubeadm-join-master.yaml -v 0 --ignore-preflight-errors=SystemVerification) on host(192.17.33.43:22): output([preflight] Running pre-flight checks [WARNING FileExisting-socat]: socat not found in system path [WARNING Hostname]: hostname "k8s-m2" could not be reached [WARNING Hostname]: hostname "k8s-m2": lookup k8s-m2 on 192.17.33.38:53: read udp 192.17.33.43:43855->192.17.33.38:53: i/o timeout [preflight] Reading configuration from the cluster... 
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml' W0316 13:40:07.626697 48331 utils.go:69] The recommended value for "healthzBindAddress" in "KubeletConfiguration" is: 127.0.0.1; the provided value is: 0.0.0.0 [preflight] Running pre-flight checks before initializing the new control plane instance [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace [certs] Using certificateDir folder "/etc/kubernetes/pki" [certs] Using the existing "apiserver" certificate and key [certs] Using the existing "apiserver-kubelet-client" certificate and key [certs] Using the existing "front-proxy-client" certificate and key [certs] Using the existing "etcd/server" certificate and key [certs] Using the existing "etcd/peer" certificate and key [certs] Using the existing "apiserver-etcd-client" certificate and key [certs] Using the existing "etcd/healthcheck-client" certificate and key [certs] Valid certificates and keys now exist in "/etc/kubernetes/pki" [certs] Using the existing "sa" key [kubeconfig] Generating kubeconfig files [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf" W0316 13:40:18.961889 48331 kubeconfig.go:246] a kubeconfig file "/etc/kubernetes/controller-manager.conf" exists already but has an unexpected API Server URL: expected: https://192.17.33.43:6443, got: https://apiserver.cluster.local:6443 [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf" W0316 13:40:19.111622 48331 kubeconfig.go:246] a kubeconfig file "/etc/kubernetes/scheduler.conf" exists already but has an unexpected API Server URL: expected: https://192.17.33.43:6443, got: https://apiserver.cluster.local:6443 [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf" [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [check-etcd] Checking that the etcd cluster is healthy [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Starting the kubelet [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap... [kubelet-check] Initial timeout of 40s passed. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. 
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused. error execution phase kubelet-start: error uploading crisocket: timed out waiting for the condition To see the stack trace of this error execute with --v=5 or higher), error(Process exited with status 1)
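(Editor's note: this second failure is different from the first. The join gets past preflight, but the kubelet on 192.17.33.43 never becomes healthy, so the healthz probe on port 10248 is refused and kubeadm times out while uploading the crisocket. The kubelet's own logs on that node usually say why (swap enabled, cgroup driver mismatch, container runtime down, and so on). A sketch of the usual first checks on 192.17.33.43:

# is the kubelet running at all, and why did it last exit?
systemctl status kubelet
journalctl -u kubelet --no-pager | tail -n 100

# two common culprits on a fresh node: swap still on, container runtime not healthy
swapoff -a
systemctl status containerd 2>/dev/null || systemctl status docker
)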