Closed. Reported by kernelsky; closed 2 years ago.
Can you paste some KubeKey logs? By default, kk will generate the /etc/cni/net.d directory. The log file is in ./kubekey/logs.
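For reference, a minimal way to collect that information (a sketch; it assumes the default ./kubekey work dir created next to the kk binary, as mentioned above):

```bash
# KubeKey log files (work dir assumed to be ./kubekey).
ls -l ./kubekey/logs/
tail -n 200 ./kubekey/logs/*.log

# The CNI config directory that should be populated once the network plugin starts.
ls -l /etc/cni/net.d/
```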
The logs above all look normal; these are the abnormal logs:
16:33:28 CST [InitKubernetesModule] Generate kubeadm config 16:33:28 CST skipped: [sh5dnewoa-a-0143] 16:33:28 CST skipped: [sh5dnewoa-a-0144] 16:33:28 CST success: [sh5dnewoa-a-0145] 16:33:28 CST [InitKubernetesModule] Init cluster using kubeadm 16:38:00 CST stdout: [sh5dnewoa-a-0145] W0319 16:33:29.009616 8715 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10] W0319 16:33:29.047278 8715 kubelet.go:215] detected "cgroupfs" as the Docker cgroup driver, the provided value "systemd" in "KubeletConfiguration" will be overrided [init] Using Kubernetes version: v1.21.5 [preflight] Running pre-flight checks [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using certificateDir folder "/etc/kubernetes/pki" [certs] Generating "ca" certificate and key [certs] Generating "apiserver" certificate and key [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost sh5dnewoa-a-0143 sh5dnewoa-a-0143.cluster.local sh5dnewoa-a-0144 sh5dnewoa-a-0144.cluster.local sh5dnewoa-a-0145 sh5dnewoa-a-0145.cluster.local sh5dnewoa-w-0121 sh5dnewoa-w-0121.cluster.local sh5dnewoa-w-0122 sh5dnewoa-w-0122.cluster.local sh5dnewoa-w-0123 sh5dnewoa-w-0123.cluster.local sh5dnewoa-w-0124 sh5dnewoa-w-0124.cluster.local sh5dnewoa-w-0125 sh5dnewoa-w-0125.cluster.local sh5dnewoa-w-0126 sh5dnewoa-w-0126.cluster.local sh5dnewoa-w-0127 sh5dnewoa-w-0127.cluster.local sh5dnewoa-w-0128 sh5dnewoa-w-0128.cluster.local sh5dnewoa-w-0129 sh5dnewoa-w-0129.cluster.local sh5dnewoa-w-0130 sh5dnewoa-w-0130.cluster.local sh5dnewoa-w-0131 sh5dnewoa-w-0131.cluster.local sh5dnewoa-w-0132 sh5dnewoa-w-0132.cluster.local sh5dnewoa-w-0133 sh5dnewoa-w-0133.cluster.local sh5dnewoa-w-0134 sh5dnewoa-w-0134.cluster.local sh5dnewoa-w-0135 sh5dnewoa-w-0135.cluster.local sh5dnewoa-w-0136 sh5dnewoa-w-0136.cluster.local sh5dnewoa-w-0137 sh5dnewoa-w-0137.cluster.local sh5dnewoa-w-0138 sh5dnewoa-w-0138.cluster.local sh5dnewoa-w-0139 sh5dnewoa-w-0139.cluster.local sh5dnewoa-w-0140 sh5dnewoa-w-0140.cluster.local sh5dnewoa-w-0141 sh5dnewoa-w-0141.cluster.local sh5dnewoa-w-0142 sh5dnewoa-w-0142.cluster.local] and IPs [10.233.0.1 172.40.2.38 127.0.0.1 172.40.2.64 172.40.2.232 172.40.2.187 172.40.2.57 172.40.2.141 172.40.2.113 172.40.2.96 172.40.2.19 172.40.2.29 172.40.2.248 172.40.2.249 172.40.2.204 172.40.2.220 172.40.2.210 172.40.2.161 172.40.2.233 172.40.2.203 172.40.2.188 172.40.2.103 172.40.2.165 172.40.2.212 172.40.2.17 172.40.2.148 172.40.1.116 172.40.1.163] [certs] Generating "apiserver-kubelet-client" certificate and key [certs] Generating "front-proxy-ca" certificate and key [certs] Generating "front-proxy-client" certificate and key [certs] External etcd mode: Skipping etcd/ca certificate authority generation [certs] External etcd mode: Skipping etcd/server certificate generation [certs] External etcd mode: Skipping etcd/peer certificate generation [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation [certs] External 
etcd mode: Skipping apiserver-etcd-client certificate generation [certs] Generating "sa" key and public key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher 16:38:51 CST stdout: [sh5dnewoa-a-0145] [reset] Reading configuration from the cluster... [reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml' W0319 16:38:40.439038 10813 reset.go:99] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get "https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) [preflight] Running pre-flight checks W0319 16:38:40.439279 10813 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory [reset] No etcd config found. Assuming external etcd [reset] Please, manually reset etcd to prevent further issues [reset] Stopping the kubelet service [reset] Unmounting mounted directories in "/var/lib/kubelet" [reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki] [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf] [reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables. If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar) to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually. Please, check the contents of the $HOME/.kube/config file. 16:38:51 CST message: [sh5dnewoa-a-0145] init kubernetes cluster failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl" W0319 16:33:29.009616 8715 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10] W0319 16:33:29.047278 8715 kubelet.go:215] detected "cgroupfs" as the Docker cgroup driver, the provided value "systemd" in "KubeletConfiguration" will be overrided [init] Using Kubernetes version: v1.21.5 [preflight] Running pre-flight checks [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using certificateDir folder "/etc/kubernetes/pki" [certs] Generating "ca" certificate and key [certs] Generating "apiserver" certificate and key [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost sh5dnewoa-a-0143 sh5dnewoa-a-0143.cluster.local sh5dnewoa-a-0144 sh5dnewoa-a-0144.cluster.local sh5dnewoa-a-0145 sh5dnewoa-a-0145.cluster.local sh5dnewoa-w-0121 sh5dnewoa-w-0121.cluster.local sh5dnewoa-w-0122 sh5dnewoa-w-0122.cluster.local sh5dnewoa-w-0123 sh5dnewoa-w-0123.cluster.local sh5dnewoa-w-0124 sh5dnewoa-w-0124.cluster.local sh5dnewoa-w-0125 sh5dnewoa-w-0125.cluster.local sh5dnewoa-w-0126 sh5dnewoa-w-0126.cluster.local sh5dnewoa-w-0127 sh5dnewoa-w-0127.cluster.local sh5dnewoa-w-0128 sh5dnewoa-w-0128.cluster.local sh5dnewoa-w-0129 sh5dnewoa-w-0129.cluster.local sh5dnewoa-w-0130 sh5dnewoa-w-0130.cluster.local sh5dnewoa-w-0131 sh5dnewoa-w-0131.cluster.local sh5dnewoa-w-0132 sh5dnewoa-w-0132.cluster.local sh5dnewoa-w-0133 sh5dnewoa-w-0133.cluster.local sh5dnewoa-w-0134 sh5dnewoa-w-0134.cluster.local sh5dnewoa-w-0135 sh5dnewoa-w-0135.cluster.local sh5dnewoa-w-0136 sh5dnewoa-w-0136.cluster.local sh5dnewoa-w-0137 sh5dnewoa-w-0137.cluster.local sh5dnewoa-w-0138 sh5dnewoa-w-0138.cluster.local sh5dnewoa-w-0139 sh5dnewoa-w-0139.cluster.local sh5dnewoa-w-0140 sh5dnewoa-w-0140.cluster.local sh5dnewoa-w-0141 sh5dnewoa-w-0141.cluster.local sh5dnewoa-w-0142 sh5dnewoa-w-0142.cluster.local] and IPs [10.233.0.1 172.40.2.38 127.0.0.1 172.40.2.64 172.40.2.232 172.40.2.187 172.40.2.57 172.40.2.141 172.40.2.113 172.40.2.96 172.40.2.19 172.40.2.29 172.40.2.248 172.40.2.249 172.40.2.204 172.40.2.220 172.40.2.210 172.40.2.161 172.40.2.233 172.40.2.203 172.40.2.188 172.40.2.103 172.40.2.165 172.40.2.212 172.40.2.17 172.40.2.148 172.40.1.116 172.40.1.163] [certs] Generating "apiserver-kubelet-client" certificate and key [certs] Generating "front-proxy-ca" certificate and key [certs] Generating "front-proxy-client" certificate and key [certs] External etcd mode: Skipping etcd/ca certificate authority generation [certs] External etcd mode: Skipping etcd/server certificate generation [certs] External etcd mode: Skipping etcd/peer certificate generation [certs] External 
etcd mode: Skipping etcd/healthcheck-client certificate generation [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation [certs] Generating "sa" key and public key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1 16:38:51 CST retry: [sh5dnewoa-a-0145] 16:43:26 CST stdout: [sh5dnewoa-a-0145] W0319 16:38:56.286248 11234 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10] W0319 16:38:56.322809 11234 kubelet.go:215] detected "cgroupfs" as the Docker cgroup driver, the provided value "systemd" in "KubeletConfiguration" will be overrided [init] Using Kubernetes version: v1.21.5 [preflight] Running pre-flight checks [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using certificateDir folder "/etc/kubernetes/pki" [certs] Generating "ca" certificate and key [certs] Generating "apiserver" certificate and key [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost sh5dnewoa-a-0143 sh5dnewoa-a-0143.cluster.local sh5dnewoa-a-0144 sh5dnewoa-a-0144.cluster.local sh5dnewoa-a-0145 sh5dnewoa-a-0145.cluster.local sh5dnewoa-w-0121 sh5dnewoa-w-0121.cluster.local sh5dnewoa-w-0122 sh5dnewoa-w-0122.cluster.local sh5dnewoa-w-0123 sh5dnewoa-w-0123.cluster.local sh5dnewoa-w-0124 sh5dnewoa-w-0124.cluster.local sh5dnewoa-w-0125 sh5dnewoa-w-0125.cluster.local sh5dnewoa-w-0126 sh5dnewoa-w-0126.cluster.local sh5dnewoa-w-0127 sh5dnewoa-w-0127.cluster.local sh5dnewoa-w-0128 sh5dnewoa-w-0128.cluster.local sh5dnewoa-w-0129 sh5dnewoa-w-0129.cluster.local sh5dnewoa-w-0130 sh5dnewoa-w-0130.cluster.local sh5dnewoa-w-0131 sh5dnewoa-w-0131.cluster.local sh5dnewoa-w-0132 sh5dnewoa-w-0132.cluster.local sh5dnewoa-w-0133 sh5dnewoa-w-0133.cluster.local sh5dnewoa-w-0134 sh5dnewoa-w-0134.cluster.local sh5dnewoa-w-0135 sh5dnewoa-w-0135.cluster.local sh5dnewoa-w-0136 sh5dnewoa-w-0136.cluster.local sh5dnewoa-w-0137 sh5dnewoa-w-0137.cluster.local sh5dnewoa-w-0138 sh5dnewoa-w-0138.cluster.local sh5dnewoa-w-0139 sh5dnewoa-w-0139.cluster.local sh5dnewoa-w-0140 sh5dnewoa-w-0140.cluster.local sh5dnewoa-w-0141 sh5dnewoa-w-0141.cluster.local sh5dnewoa-w-0142 sh5dnewoa-w-0142.cluster.local] and IPs [10.233.0.1 172.40.2.38 127.0.0.1 172.40.2.64 172.40.2.232 172.40.2.187 172.40.2.57 172.40.2.141 172.40.2.113 172.40.2.96 172.40.2.19 172.40.2.29 172.40.2.248 172.40.2.249 172.40.2.204 172.40.2.220 172.40.2.210 172.40.2.161 172.40.2.233 172.40.2.203 172.40.2.188 172.40.2.103 172.40.2.165 172.40.2.212 172.40.2.17 172.40.2.148 172.40.1.116 172.40.1.163] [certs] Generating "apiserver-kubelet-client" certificate and key [certs] Generating "front-proxy-ca" certificate and key [certs] Generating "front-proxy-client" certificate and key [certs] External etcd mode: Skipping etcd/ca certificate authority generation [certs] External etcd mode: Skipping etcd/server certificate generation [certs] External etcd mode: Skipping etcd/peer certificate generation [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation [certs] External etcd mode: Skipping apiserver-etcd-client 
certificate generation [certs] Generating "sa" key and public key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher 16:44:09 CST stdout: [sh5dnewoa-a-0145] [reset] Reading configuration from the cluster... [reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml' W0319 16:44:07.394409 13317 reset.go:99] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get "https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) [preflight] Running pre-flight checks W0319 16:44:07.394613 13317 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory [reset] No etcd config found. Assuming external etcd [reset] Please, manually reset etcd to prevent further issues [reset] Stopping the kubelet service [reset] Unmounting mounted directories in "/var/lib/kubelet" [reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki] [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf] [reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables. If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar) to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually. Please, check the contents of the $HOME/.kube/config file. 16:44:09 CST message: [sh5dnewoa-a-0145] init kubernetes cluster failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl" W0319 16:38:56.286248 11234 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10] W0319 16:38:56.322809 11234 kubelet.go:215] detected "cgroupfs" as the Docker cgroup driver, the provided value "systemd" in "KubeletConfiguration" will be overrided [init] Using Kubernetes version: v1.21.5 [preflight] Running pre-flight checks [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using certificateDir folder "/etc/kubernetes/pki" [certs] Generating "ca" certificate and key [certs] Generating "apiserver" certificate and key [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost sh5dnewoa-a-0143 sh5dnewoa-a-0143.cluster.local sh5dnewoa-a-0144 sh5dnewoa-a-0144.cluster.local sh5dnewoa-a-0145 sh5dnewoa-a-0145.cluster.local sh5dnewoa-w-0121 sh5dnewoa-w-0121.cluster.local sh5dnewoa-w-0122 sh5dnewoa-w-0122.cluster.local sh5dnewoa-w-0123 sh5dnewoa-w-0123.cluster.local sh5dnewoa-w-0124 sh5dnewoa-w-0124.cluster.local sh5dnewoa-w-0125 sh5dnewoa-w-0125.cluster.local sh5dnewoa-w-0126 sh5dnewoa-w-0126.cluster.local sh5dnewoa-w-0127 sh5dnewoa-w-0127.cluster.local sh5dnewoa-w-0128 sh5dnewoa-w-0128.cluster.local sh5dnewoa-w-0129 sh5dnewoa-w-0129.cluster.local sh5dnewoa-w-0130 sh5dnewoa-w-0130.cluster.local sh5dnewoa-w-0131 sh5dnewoa-w-0131.cluster.local sh5dnewoa-w-0132 sh5dnewoa-w-0132.cluster.local sh5dnewoa-w-0133 sh5dnewoa-w-0133.cluster.local sh5dnewoa-w-0134 sh5dnewoa-w-0134.cluster.local sh5dnewoa-w-0135 sh5dnewoa-w-0135.cluster.local sh5dnewoa-w-0136 sh5dnewoa-w-0136.cluster.local sh5dnewoa-w-0137 sh5dnewoa-w-0137.cluster.local sh5dnewoa-w-0138 sh5dnewoa-w-0138.cluster.local sh5dnewoa-w-0139 sh5dnewoa-w-0139.cluster.local sh5dnewoa-w-0140 sh5dnewoa-w-0140.cluster.local sh5dnewoa-w-0141 sh5dnewoa-w-0141.cluster.local sh5dnewoa-w-0142 sh5dnewoa-w-0142.cluster.local] and IPs [10.233.0.1 172.40.2.38 127.0.0.1 172.40.2.64 172.40.2.232 172.40.2.187 172.40.2.57 172.40.2.141 172.40.2.113 172.40.2.96 172.40.2.19 172.40.2.29 172.40.2.248 172.40.2.249 172.40.2.204 172.40.2.220 172.40.2.210 172.40.2.161 172.40.2.233 172.40.2.203 172.40.2.188 172.40.2.103 172.40.2.165 172.40.2.212 172.40.2.17 172.40.2.148 172.40.1.116 172.40.1.163] [certs] Generating "apiserver-kubelet-client" certificate and key [certs] Generating "front-proxy-ca" certificate and key [certs] Generating "front-proxy-client" certificate and key [certs] External etcd mode: Skipping etcd/ca certificate authority generation [certs] External etcd mode: Skipping etcd/server certificate generation [certs] External etcd mode: Skipping etcd/peer certificate generation [certs] External 
etcd mode: Skipping etcd/healthcheck-client certificate generation [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation [certs] Generating "sa" key and public key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1 16:44:09 CST retry: [sh5dnewoa-a-0145] 16:48:47 CST stdout: [sh5dnewoa-a-0145] W0319 16:44:14.447169 13773 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10] W0319 16:44:14.484371 13773 kubelet.go:215] detected "cgroupfs" as the Docker cgroup driver, the provided value "systemd" in "KubeletConfiguration" will be overrided [init] Using Kubernetes version: v1.21.5 [preflight] Running pre-flight checks [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using certificateDir folder "/etc/kubernetes/pki" [certs] Generating "ca" certificate and key [certs] Generating "apiserver" certificate and key [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost sh5dnewoa-a-0143 sh5dnewoa-a-0143.cluster.local sh5dnewoa-a-0144 sh5dnewoa-a-0144.cluster.local sh5dnewoa-a-0145 sh5dnewoa-a-0145.cluster.local sh5dnewoa-w-0121 sh5dnewoa-w-0121.cluster.local sh5dnewoa-w-0122 sh5dnewoa-w-0122.cluster.local sh5dnewoa-w-0123 sh5dnewoa-w-0123.cluster.local sh5dnewoa-w-0124 sh5dnewoa-w-0124.cluster.local sh5dnewoa-w-0125 sh5dnewoa-w-0125.cluster.local sh5dnewoa-w-0126 sh5dnewoa-w-0126.cluster.local sh5dnewoa-w-0127 sh5dnewoa-w-0127.cluster.local sh5dnewoa-w-0128 sh5dnewoa-w-0128.cluster.local sh5dnewoa-w-0129 sh5dnewoa-w-0129.cluster.local sh5dnewoa-w-0130 sh5dnewoa-w-0130.cluster.local sh5dnewoa-w-0131 sh5dnewoa-w-0131.cluster.local sh5dnewoa-w-0132 sh5dnewoa-w-0132.cluster.local sh5dnewoa-w-0133 sh5dnewoa-w-0133.cluster.local sh5dnewoa-w-0134 sh5dnewoa-w-0134.cluster.local sh5dnewoa-w-0135 sh5dnewoa-w-0135.cluster.local sh5dnewoa-w-0136 sh5dnewoa-w-0136.cluster.local sh5dnewoa-w-0137 sh5dnewoa-w-0137.cluster.local sh5dnewoa-w-0138 sh5dnewoa-w-0138.cluster.local sh5dnewoa-w-0139 sh5dnewoa-w-0139.cluster.local sh5dnewoa-w-0140 sh5dnewoa-w-0140.cluster.local sh5dnewoa-w-0141 sh5dnewoa-w-0141.cluster.local sh5dnewoa-w-0142 sh5dnewoa-w-0142.cluster.local] and IPs [10.233.0.1 172.40.2.38 127.0.0.1 172.40.2.64 172.40.2.232 172.40.2.187 172.40.2.57 172.40.2.141 172.40.2.113 172.40.2.96 172.40.2.19 172.40.2.29 172.40.2.248 172.40.2.249 172.40.2.204 172.40.2.220 172.40.2.210 172.40.2.161 172.40.2.233 172.40.2.203 172.40.2.188 172.40.2.103 172.40.2.165 172.40.2.212 172.40.2.17 172.40.2.148 172.40.1.116 172.40.1.163] [certs] Generating "apiserver-kubelet-client" certificate and key [certs] Generating "front-proxy-ca" certificate and key [certs] Generating "front-proxy-client" certificate and key [certs] External etcd mode: Skipping etcd/ca certificate authority generation [certs] External etcd mode: Skipping etcd/server certificate generation [certs] External etcd mode: Skipping etcd/peer certificate generation [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation [certs] External etcd mode: Skipping apiserver-etcd-client 
certificate generation [certs] Generating "sa" key and public key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher 16:49:29 CST stdout: [sh5dnewoa-a-0145] [reset] Reading configuration from the cluster... [reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml' W0319 16:49:27.925884 15819 reset.go:99] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get "https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) [preflight] Running pre-flight checks W0319 16:49:27.926079 15819 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory [reset] No etcd config found. Assuming external etcd [reset] Please, manually reset etcd to prevent further issues [reset] Stopping the kubelet service [reset] Unmounting mounted directories in "/var/lib/kubelet" [reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki] [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf] [reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables. If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar) to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually. Please, check the contents of the $HOME/.kube/config file. 16:49:29 CST message: [sh5dnewoa-a-0145] init kubernetes cluster failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl" W0319 16:44:14.447169 13773 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10] W0319 16:44:14.484371 13773 kubelet.go:215] detected "cgroupfs" as the Docker cgroup driver, the provided value "systemd" in "KubeletConfiguration" will be overrided [init] Using Kubernetes version: v1.21.5 [preflight] Running pre-flight checks [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using certificateDir folder "/etc/kubernetes/pki" [certs] Generating "ca" certificate and key [certs] Generating "apiserver" certificate and key [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost sh5dnewoa-a-0143 sh5dnewoa-a-0143.cluster.local sh5dnewoa-a-0144 sh5dnewoa-a-0144.cluster.local sh5dnewoa-a-0145 sh5dnewoa-a-0145.cluster.local sh5dnewoa-w-0121 sh5dnewoa-w-0121.cluster.local sh5dnewoa-w-0122 sh5dnewoa-w-0122.cluster.local sh5dnewoa-w-0123 sh5dnewoa-w-0123.cluster.local sh5dnewoa-w-0124 sh5dnewoa-w-0124.cluster.local sh5dnewoa-w-0125 sh5dnewoa-w-0125.cluster.local sh5dnewoa-w-0126 sh5dnewoa-w-0126.cluster.local sh5dnewoa-w-0127 sh5dnewoa-w-0127.cluster.local sh5dnewoa-w-0128 sh5dnewoa-w-0128.cluster.local sh5dnewoa-w-0129 sh5dnewoa-w-0129.cluster.local sh5dnewoa-w-0130 sh5dnewoa-w-0130.cluster.local sh5dnewoa-w-0131 sh5dnewoa-w-0131.cluster.local sh5dnewoa-w-0132 sh5dnewoa-w-0132.cluster.local sh5dnewoa-w-0133 sh5dnewoa-w-0133.cluster.local sh5dnewoa-w-0134 sh5dnewoa-w-0134.cluster.local sh5dnewoa-w-0135 sh5dnewoa-w-0135.cluster.local sh5dnewoa-w-0136 sh5dnewoa-w-0136.cluster.local sh5dnewoa-w-0137 sh5dnewoa-w-0137.cluster.local sh5dnewoa-w-0138 sh5dnewoa-w-0138.cluster.local sh5dnewoa-w-0139 sh5dnewoa-w-0139.cluster.local sh5dnewoa-w-0140 sh5dnewoa-w-0140.cluster.local sh5dnewoa-w-0141 sh5dnewoa-w-0141.cluster.local sh5dnewoa-w-0142 sh5dnewoa-w-0142.cluster.local] and IPs [10.233.0.1 172.40.2.38 127.0.0.1 172.40.2.64 172.40.2.232 172.40.2.187 172.40.2.57 172.40.2.141 172.40.2.113 172.40.2.96 172.40.2.19 172.40.2.29 172.40.2.248 172.40.2.249 172.40.2.204 172.40.2.220 172.40.2.210 172.40.2.161 172.40.2.233 172.40.2.203 172.40.2.188 172.40.2.103 172.40.2.165 172.40.2.212 172.40.2.17 172.40.2.148 172.40.1.116 172.40.1.163] [certs] Generating "apiserver-kubelet-client" certificate and key [certs] Generating "front-proxy-ca" certificate and key [certs] Generating "front-proxy-client" certificate and key [certs] External etcd mode: Skipping etcd/ca certificate authority generation [certs] External etcd mode: Skipping etcd/server certificate generation [certs] External etcd mode: Skipping etcd/peer certificate generation [certs] External 
etcd mode: Skipping etcd/healthcheck-client certificate generation [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation [certs] Generating "sa" key and public key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1 16:49:29 CST skipped: [sh5dnewoa-a-0143] 16:49:29 CST skipped: [sh5dnewoa-a-0144] 16:49:29 CST failed: [sh5dnewoa-a-0145]
Can you paste the log for the ConfigureOSModule section? And can you paste your kk config here? I don't know whether your config is correct.
Or, I think you can delete the cluster first, recreate it, and then watch whether the /etc/cni/net.d directory gets created; a rough sequence is sketched below.
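A minimal sketch of that delete/recreate cycle (config-sample.yaml is a placeholder for whatever config file the cluster was created from):

```bash
# Tear the cluster down and rebuild it from the same config file.
./kk delete cluster -f config-sample.yaml
./kk create cluster -f config-sample.yaml

# On the first master, watch whether Calico writes its CNI config.
watch -n 5 'ls -l /etc/cni/net.d/'
```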
I have already deleted the cluster and retried, and got the same error. I also tried KubeKey 1.2.1 and hit the same error.
apiVersion: kubekey.kubesphere.io/v1alpha2 kind: Cluster metadata: name: sample spec: hosts:
{name: sh5dnewoa-w-0121, address: 10.50.1.163, internalAddress: 10.50.1.163, user: user, password: "test130"} roleGroups: etcd:
domain: lb.kubesphere.local address: "10.50.2.64" port: 6443 kubernetes: version: v1.21.5 clusterName: cluster.local network: plugin: calico kubePodsCIDR: 10.233.64.0/18 kubeServiceCIDR: 10.233.0.0/18
multusCNI: enabled: false registry: plainHTTP: false privateRegistry: "hub.test.com" namespaceOverride: "" registryMirrors: [] insecureRegistries: [] addons: []
apiVersion: installer.kubesphere.io/v1alpha1 kind: ClusterConfiguration metadata: name: ks-installer namespace: kubesphere-system labels: version: v3.2.1 spec: persistence: storageClass: "" authentication: jwtSecret: "" local_registry: "" namespace_override: ""
etcd: monitoring: false endpointIps: localhost port: 2379 tlsEnable: true common: core: console: enableMultiLogin: true port: 30880 type: NodePort
# resources: {}
# controllerManager:
# resources: {}
redis:
enabled: false
volumeSize: 2Gi
openldap:
enabled: false
volumeSize: 2Gi
minio:
volumeSize: 20Gi
monitoring:
# type: external
endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
GPUMonitoring:
enabled: false
gpu:
kinds:
- resourceName: "nvidia.com/gpu"
resourceType: "GPU"
default: true
es:
# master:
# volumeSize: 4Gi
# replicas: 1
# resources: {}
# data:
# volumeSize: 20Gi
# replicas: 1
# resources: {}
logMaxAge: 7
elkPrefix: logstash
basicAuth:
enabled: false
username: ""
password: ""
externalElasticsearchHost: ""
externalElasticsearchPort: ""
alerting: enabled: false
# replicas: 1
# resources: {}
auditing: enabled: false
# resources: {}
# webhook:
# resources: {}
devops: enabled: false jenkinsMemoryLim: 2Gi jenkinsMemoryReq: 1500Mi jenkinsVolumeSize: 8Gi jenkinsJavaOpts_Xms: 512m jenkinsJavaOpts_Xmx: 512m jenkinsJavaOpts_MaxRAM: 2g events: enabled: false
# resources: {}
# exporter:
# resources: {}
# ruler:
# enabled: true
# replicas: 2
# resources: {}
logging: enabled: false containerruntime: docker logsidecar: enabled: true replicas: 2
metrics_server: enabled: true monitoring: storageClass: ""
# resources: {}
# kube_state_metrics:
# resources: {}
# prometheus:
# replicas: 1
# volumeSize: 20Gi
# resources: {}
# operator:
# resources: {}
# adapter:
# resources: {}
# node_exporter:
# resources: {}
# alertmanager:
# replicas: 1
# resources: {}
# notification_manager:
# resources: {}
# operator:
# resources: {}
# proxy:
# resources: {}
gpu:
nvidia_dcgm_exporter:
enabled: false
# resources: {}
multicluster:
clusterRole: none
network:
networkpolicy:
enabled: false
ippool:
type: none
topology:
type: none
openpitrix:
store:
enabled: false
servicemesh:
enabled: false
kubeedge:
enabled: false
cloudCore:
nodeSelector: {"node-role.kubernetes.io/worker": ""}
tolerations: []
cloudhubPort: "10000"
cloudhubQuicPort: "10001"
cloudhubHttpsPort: "10002"
cloudstreamPort: "10003"
tunnelPort: "10004"
cloudHub:
advertiseAddress:
Does your host user "user" have sudo privileges?
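One way to verify that on the target node (a sketch; run it as that user):

```bash
# List the sudo rules granted to this user; passwordless sudo shows NOPASSWD entries.
sudo -l

# Mirror the way kk invokes commands, to confirm non-interactive sudo works.
sudo -E /bin/bash -c 'id'
```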
Yes, it has passwordless sudo privileges. etcd, the apiserver, and the other components all start up normally; it is only stuck at the CNI plugin and the kubelet.
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost sh5dnewoa-a-0143 sh5dnewoa-a-0143.cluster.local sh5dnewoa-a-0144 sh5dnewoa-a-0144.cluster.local sh5dnewoa-a-0145 sh5dnewoa-a-0145.cluster.local sh5dnewoa-w-0121 sh5dnewoa-w-0121.cluster.local sh5dnewoa-w-0122 sh5dnewoa-w-0122.cluster.local sh5dnewoa-w-0123 sh5dnewoa-w-0123.cluster.local sh5dnewoa-w-0124 sh5dnewoa-w-0124.cluster.local sh5dnewoa-w-0125 sh5dnewoa-w-0125.cluster.local sh5dnewoa-w-0126 sh5dnewoa-w-0126.cluster.local sh5dnewoa-w-0127 sh5dnewoa-w-0127.cluster.local sh5dnewoa-w-0128 sh5dnewoa-w-0128.cluster.local sh5dnewoa-w-0129 sh5dnewoa-w-0129.cluster.local sh5dnewoa-w-0130 sh5dnewoa-w-0130.cluster.local sh5dnewoa-w-0131 sh5dnewoa-w-0131.cluster.local sh5dnewoa-w-0132 sh5dnewoa-w-0132.cluster.local sh5dnewoa-w-0133 sh5dnewoa-w-0133.cluster.local sh5dnewoa-w-0134 sh5dnewoa-w-0134.cluster.local sh5dnewoa-w-0135 sh5dnewoa-w-0135.cluster.local sh5dnewoa-w-0136 sh5dnewoa-w-0136.cluster.local sh5dnewoa-w-0137 sh5dnewoa-w-0137.cluster.local sh5dnewoa-w-0138 sh5dnewoa-w-0138.cluster.local sh5dnewoa-w-0139 sh5dnewoa-w-0139.cluster.local sh5dnewoa-w-0140 sh5dnewoa-w-0140.cluster.local sh5dnewoa-w-0141 sh5dnewoa-w-0141.cluster.local sh5dnewoa-w-0142 sh5dnewoa-w-0142.cluster.local] and IPs [10.233.0.1 172.40.2.38 127.0.0.1 172.40.2.64 172.40.2.232 172.40.2.187 172.40.2.57 172.40.2.141 172.40.2.113 172.40.2.96 172.40.2.19 172.40.2.29 172.40.2.248 172.40.2.249 172.40.2.204 172.40.2.220 172.40.2.210 172.40.2.161 172.40.2.233 172.40.2.203 172.40.2.188 172.40.2.103 172.40.2.165 172.40.2.212 172.40.2.17 172.40.2.148 172.40.1.116 172.40.1.163]
I noticed that the domain-to-IP mapping shown in the logs is different from the host value 10.50.2.xx in your config.
You can try removing the kk work dir ./kubekey (kk may be reusing the old cert files and kubeadm config file) and then recreate the cluster; a minimal sketch follows below.
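A sketch of that cleanup (it assumes the ./kubekey work dir sits next to the kk binary, and config-sample.yaml is a placeholder for your config file):

```bash
# Drop the cached certs, kubeadm configs and other generated artifacts.
rm -rf ./kubekey

# Recreate the cluster from the corrected config.
./kk create cluster -f config-sample.yaml
```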
When I pasted the config just now I had sanitized the IPs and the password (172.40 was changed to 10.50). Here is the config file with the original IPs:
apiVersion: kubekey.kubesphere.io/v1alpha2 kind: Cluster metadata: name: sample spec: hosts:
{name: sh5dnewoa-w-0121, address: 172.40.1.163, internalAddress: 172.40.1.163, user: user, password: "test130"} roleGroups: etcd:
domain: lb.kubesphere.local address: "172.40.2.64" port: 6443 kubernetes: version: v1.21.5 clusterName: cluster.local network: plugin: calico kubePodsCIDR: 10.233.64.0/18 kubeServiceCIDR: 10.233.0.0/18
multusCNI: enabled: false registry: plainHTTP: false privateRegistry: "hub.test.com" namespaceOverride: "" registryMirrors: [] insecureRegistries: [] addons: []
apiVersion: installer.kubesphere.io/v1alpha1 kind: ClusterConfiguration metadata: name: ks-installer namespace: kubesphere-system labels: version: v3.2.1 spec: persistence: storageClass: "" authentication: jwtSecret: "" local_registry: "" namespace_override: ""
etcd: monitoring: false endpointIps: localhost port: 2379 tlsEnable: true common: core: console: enableMultiLogin: true port: 30880 type: NodePort
# resources: {}
# controllerManager:
# resources: {}
redis:
enabled: false
volumeSize: 2Gi
openldap:
enabled: false
volumeSize: 2Gi
minio:
volumeSize: 20Gi
monitoring:
# type: external
endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
GPUMonitoring:
enabled: false
gpu:
kinds:
- resourceName: "nvidia.com/gpu"
resourceType: "GPU"
default: true
es:
# master:
# volumeSize: 4Gi
# replicas: 1
# resources: {}
# data:
# volumeSize: 20Gi
# replicas: 1
# resources: {}
logMaxAge: 7
elkPrefix: logstash
basicAuth:
enabled: false
username: ""
password: ""
externalElasticsearchHost: ""
externalElasticsearchPort: ""
alerting: enabled: false
# replicas: 1
# resources: {}
auditing: enabled: false
# resources: {}
# webhook:
# resources: {}
devops: enabled: false jenkinsMemoryLim: 2Gi jenkinsMemoryReq: 1500Mi jenkinsVolumeSize: 8Gi jenkinsJavaOpts_Xms: 512m jenkinsJavaOpts_Xmx: 512m jenkinsJavaOpts_MaxRAM: 2g events: enabled: false
# resources: {}
# exporter:
# resources: {}
# ruler:
# enabled: true
# replicas: 2
# resources: {}
logging: enabled: false containerruntime: docker logsidecar: enabled: true replicas: 2
metrics_server: enabled: true monitoring: storageClass: ""
# resources: {}
# kube_state_metrics:
# resources: {}
# prometheus:
# replicas: 1
# volumeSize: 20Gi
# resources: {}
# operator:
# resources: {}
# adapter:
# resources: {}
# node_exporter:
# resources: {}
# alertmanager:
# replicas: 1
# resources: {}
# notification_manager:
# resources: {}
# operator:
# resources: {}
# proxy:
# resources: {}
gpu:
nvidia_dcgm_exporter:
enabled: false
# resources: {}
multicluster:
clusterRole: none
network:
networkpolicy:
enabled: false
ippool:
type: none
topology:
type: none
openpitrix:
store:
enabled: false
servicemesh:
enabled: false
kubeedge:
enabled: false
cloudCore:
nodeSelector: {"node-role.kubernetes.io/worker": ""}
tolerations: []
cloudhubPort: "10000"
cloudhubQuicPort: "10001"
cloudhubHttpsPort: "10002"
cloudstreamPort: "10003"
tunnelPort: "10004"
cloudHub:
advertiseAddress:
Is your environment set up with http_proxy or https_proxy? I found the same issue. See: https://github.com/kubernetes/kubernetes/issues/54918#issuecomment-385162637
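A quick check for that (a sketch; it also looks at Docker's systemd drop-ins, since a proxy configured there can break access to the apiserver):

```bash
# Proxy variables in the current shell environment?
env | grep -i proxy

# Proxy configured for the Docker daemon via systemd?
systemctl show docker --property=Environment
grep -ri proxy /etc/systemd/system/docker.service.d/ 2>/dev/null
```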
No, nothing is set. It is a pure intranet environment with freshly installed machines, so no proxy configuration is needed.
Execute journalctl -f -u kubelet and look for fatal-level log entries; an example is sketched below.
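For example (a sketch; the grep pattern is only a starting point):

```bash
# Scan the kubelet journal for the most severe messages.
journalctl -u kubelet --no-pager | grep -iE 'fatal|error' | tail -n 50

# Or follow it live while kubeadm init is retrying.
journalctl -f -u kubelet
```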
The problem is solved. When installing the master nodes, one node is installed first and the other masters join afterwards. Once that first node is up, its kubelet accesses the apiserver through the LB, but the LB in this cluster has a bug: when a node accesses itself through the LB, the traffic loops back and the request never gets through, and the problem could not be worked around temporarily by binding hosts entries either. The cluster has now been installed successfully. Many thanks to leo li and pixiake for their help, and thanks again to everyone.
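For anyone hitting a similar symptom, a quick way to check whether the control-plane endpoint loops back correctly from the first master (a sketch; lb.kubesphere.local:6443 is the endpoint shown in the logs above):

```bash
# How does the LB domain resolve on this node?
getent hosts lb.kubesphere.local

# Can this master reach the apiserver through the LB? An LB loopback/hairpin
# bug typically shows up here as a connection timeout rather than any response.
curl -k --connect-timeout 5 https://lb.kubesphere.local:6443/healthz
```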
@kernelsky Thanks for sharing your experience.
And I will close this issue. Feel free to reopen it. /close
@24sama: Closing this issue.
Your current KubeKey version
2.0.0
Describe this feature
Using the default versions installed by KubeKey 2.0: KubeSphere 3.2.1 and Kubernetes 1.21.5.
Docker 20.10.3, on CentOS 7.6.
The etcd cluster starts normally, and the apiserver and the other master components are all fine, but the kubelet and the CNI plugin are not working. The kubelet log reports "Unable to update cni config" err="no networks found in /etc/cni/net.d". No Calico-related Docker containers are running, and there are no files under /etc/cni/net.d.
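The checks behind that description look roughly like this (a sketch; Docker is the container runtime in this setup):

```bash
# The CNI config that Calico should write; empty in this case.
ls -l /etc/cni/net.d/

# Any Calico or kube-system containers started by the runtime?
docker ps -a | grep -E 'calico|kube' | grep -v pause

# The kubelet message quoted above, straight from its journal.
journalctl -u kubelet --no-pager | grep 'no networks found' | tail -n 5
```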
Describe the solution you'd like
Mar 19 14:42:39 sh5dnewoa-a-0145 dbus[8517]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service'
Mar 19 14:42:39 sh5dnewoa-a-0145 systemd: Starting Hostname Service...
Mar 19 14:42:39 sh5dnewoa-a-0145 dbus[8517]: [system] Successfully activated service 'org.freedesktop.hostname1'
Mar 19 14:42:39 sh5dnewoa-a-0145 systemd: Started Hostname Service.
Mar 19 14:42:39 sh5dnewoa-a-0145 systemd-hostnamed: Changed static host name to 'sh5dnewoa-a-0145'
Mar 19 14:42:39 sh5dnewoa-a-0145 NetworkManager[8589]: [1647672159.5520] hostname: hostname changed from "sh5dnewoa-a-0145.novalocal" to "sh5dnewoa-a-0145"
Mar 19 14:42:39 sh5dnewoa-a-0145 dbus[8517]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service'
Mar 19 14:42:39 sh5dnewoa-a-0145 systemd-hostnamed: Changed host name to 'sh5dnewoa-a-0145'
Mar 19 14:42:39 sh5dnewoa-a-0145 NetworkManager[8589]: [1647672159.5568] policy: set-hostname: set hostname to 'sh5dnewoa-a-0145' (from system configuration)
Mar 19 14:42:39 sh5dnewoa-a-0145 systemd: Starting Network Manager Script Dispatcher Service...
Mar 19 14:42:39 sh5dnewoa-a-0145 dbus[8517]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher'
Mar 19 14:42:39 sh5dnewoa-a-0145 systemd: Started Network Manager Script Dispatcher Service.
Mar 19 14:42:39 sh5dnewoa-a-0145 nm-dispatcher: find-scripts: Cannot execute '/etc/NetworkManager/dispatcher.d/hook-network-manager': writable by group or other, or set-UID.
Mar 19 14:42:39 sh5dnewoa-a-0145 nm-dispatcher: req:1 'hostname': new request (3 scripts)
Mar 19 14:42:39 sh5dnewoa-a-0145 nm-dispatcher: req:1 'hostname': start running ordered scripts...
Mar 19 14:42:39 sh5dnewoa-a-0145 nm-dispatcher: find-scripts: Cannot execute '/etc/NetworkManager/dispatcher.d/hook-network-manager': writable by group or other, or set-UID.
Mar 19 14:42:39 sh5dnewoa-a-0145 nm-dispatcher: req:2 'hostname': new request (3 scripts)
Mar 19 14:42:39 sh5dnewoa-a-0145 nm-dispatcher: req:2 'hostname': start running ordered scripts...
Mar 19 14:42:41 sh5dnewoa-a-0145 systemd: Reloading.
Mar 19 14:42:41 sh5dnewoa-a-0145 kernel: IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP)
Mar 19 14:42:41 sh5dnewoa-a-0145 kernel: IPVS: Connection hash table configured (size=4096, memory=64Kbytes)
Mar 19 14:42:41 sh5dnewoa-a-0145 kernel: IPVS: Creating netns size=2200 id=0
Mar 19 14:42:41 sh5dnewoa-a-0145 kernel: IPVS: Creating netns size=2200 id=1
Mar 19 14:42:41 sh5dnewoa-a-0145 kernel: IPVS: ipvs loaded.
Mar 19 14:42:41 sh5dnewoa-a-0145 kernel: IPVS: [rr] scheduler registered.
Mar 19 14:42:41 sh5dnewoa-a-0145 kernel: IPVS: [wrr] scheduler registered.
Mar 19 14:42:41 sh5dnewoa-a-0145 kernel: IPVS: [sh] scheduler registered.
Mar 19 14:42:42 sh5dnewoa-a-0145 kernel: bash (176974): drop_caches: 3
Mar 19 14:43:40 sh5dnewoa-a-0145 systemd-logind: Removed session 374.
Mar 19 14:43:40 sh5dnewoa-a-0145 systemd: Removed slice User Slice of user.
Mar 19 14:50:01 sh5dnewoa-a-0145 systemd: Created slice User Slice of root.
Mar 19 14:50:01 sh5dnewoa-a-0145 systemd: Started Session 375 of user root.
Mar 19 14:50:01 sh5dnewoa-a-0145 systemd: Removed slice User Slice of root.
Mar 19 14:51:27 sh5dnewoa-a-0145 systemd: Created slice User Slice of user.
Mar 19 14:51:27 sh5dnewoa-a-0145 systemd-logind: New session 376 of user user.
Mar 19 14:51:27 sh5dnewoa-a-0145 systemd: Started Session 376 of user user.
Mar 19 14:51:39 sh5dnewoa-a-0145 dbus[8517]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service'
Mar 19 14:51:39 sh5dnewoa-a-0145 systemd: Starting Hostname Service...
Mar 19 14:51:39 sh5dnewoa-a-0145 kernel: IPVS: Creating netns size=2200 id=2
Mar 19 14:51:39 sh5dnewoa-a-0145 dbus[8517]: [system] Successfully activated service 'org.freedesktop.hostname1'
Mar 19 14:51:39 sh5dnewoa-a-0145 systemd: Started Hostname Service.
Mar 19 14:51:41 sh5dnewoa-a-0145 systemd: Reloading.
Mar 19 14:51:42 sh5dnewoa-a-0145 kernel: bash (2305): drop_caches: 3
Mar 19 14:52:13 sh5dnewoa-a-0145 systemd: Reloading.
Mar 19 14:52:13 sh5dnewoa-a-0145 systemd: Starting etcd...
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: recognized and used environment variable ETCD_ADVERTISE_CLIENT_URLS=https://172.40.2.38:2379
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: recognized and used environment variable ETCD_AUTO_COMPACTION_RETENTION=8
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: recognized and used environment variable ETCD_CERT_FILE=/etc/ssl/etcd/ssl/member-sh5dnewoa-a-0145.pem
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: recognized and used environment variable ETCD_CLIENT_CERT_AUTH=true
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: recognized and used environment variable ETCD_DATA_DIR=/var/lib/etcd
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: recognized and used environment variable ETCD_ELECTION_TIMEOUT=5000
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: recognized and used environment variable ETCD_ENABLE_V2=true
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: recognized and used environment variable ETCD_HEARTBEAT_INTERVAL=250
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: recognized and used environment variable ETCD_INITIAL_ADVERTISE_PEER_URLS=https://172.40.2.38:2380
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: recognized and used environment variable ETCD_INITIAL_CLUSTER=etcd-sh5dnewoa-a-0145=https://172.40.2.38:2380,etcd-sh5dnewoa-a-0144=https://172.40.2.232:2380,etcd-sh5dnewoa-a-0143=https://172.40.2.187:2380
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: recognized and used environment variable ETCD_INITIAL_CLUSTER_STATE=new
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: recognized and used environment variable ETCD_INITIAL_CLUSTER_TOKEN=k8s_etcd
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: recognized and used environment variable ETCD_KEY_FILE=/etc/ssl/etcd/ssl/member-sh5dnewoa-a-0145-key.pem
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: recognized and used environment variable ETCD_LISTEN_CLIENT_URLS=https://172.40.2.38:2379,https://127.0.0.1:2379
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: recognized and used environment variable ETCD_LISTEN_PEER_URLS=https://172.40.2.38:2380
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: recognized and used environment variable ETCD_METRICS=basic
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: recognized and used environment variable ETCD_NAME=etcd-sh5dnewoa-a-0145
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: recognized and used environment variable ETCD_PEER_CERT_FILE=/etc/ssl/etcd/ssl/member-sh5dnewoa-a-0145.pem
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: recognized and used environment variable ETCD_PEER_CLIENT_CERT_AUTH=True
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: recognized and used environment variable ETCD_PEER_KEY_FILE=/etc/ssl/etcd/ssl/member-sh5dnewoa-a-0145-key.pem
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: recognized and used environment variable ETCD_PEER_TRUSTED_CA_FILE=/etc/ssl/etcd/ssl/ca.pem
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: recognized and used environment variable ETCD_PROXY=off
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: recognized and used environment variable ETCD_SNAPSHOT_COUNT=10000
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: recognized and used environment variable ETCD_TRUSTED_CA_FILE=/etc/ssl/etcd/ssl/ca.pem
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: etcd Version: 3.4.13
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: Git SHA: ae9734ed2
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: Go Version: go1.12.17
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: Go OS/Arch: linux/amd64
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: setting maximum number of CPUs to 16, total number of available CPUs is 16
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: peerTLS: cert = /etc/ssl/etcd/ssl/member-sh5dnewoa-a-0145.pem, key = /etc/ssl/etcd/ssl/member-sh5dnewoa-a-0145-key.pem, trusted-ca = /etc/ssl/etcd/ssl/ca.pem, client-cert-auth = true, crl-file =
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: name = etcd-sh5dnewoa-a-0145
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: data dir = /var/lib/etcd
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: member dir = /var/lib/etcd/member
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: heartbeat = 250ms
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: election = 5000ms
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: snapshot count = 10000
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: advertise client URLs = https://172.40.2.38:2379
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: check file permission: directory "/var/lib/etcd" exist, but the permission is "drwxr-xr-x". The recommended permission is "-rwx------" to prevent possible unprivileged access to the data.
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: starting member 641485789e9cc567 in cluster 52acf62c614d9c0b
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: raft2022/03/19 14:52:13 INFO: 641485789e9cc567 switched to configuration voters=()
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: raft2022/03/19 14:52:13 INFO: 641485789e9cc567 became follower at term 0
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: raft2022/03/19 14:52:13 INFO: newRaft 641485789e9cc567 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: raft2022/03/19 14:52:13 INFO: 641485789e9cc567 became follower at term 1
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: raft2022/03/19 14:52:13 INFO: 641485789e9cc567 switched to configuration voters=(646936831818417772)
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: raft2022/03/19 14:52:13 INFO: 641485789e9cc567 switched to configuration voters=(646936831818417772 5096715789367712600)
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: raft2022/03/19 14:52:13 INFO: 641485789e9cc567 switched to configuration voters=(646936831818417772 5096715789367712600 7211535656430650727)
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: simple token is not cryptographically signed
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: starting peer 8fa619ef4a7466c...
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: started HTTP pipelining with peer 8fa619ef4a7466c
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: started streaming with peer 8fa619ef4a7466c (writer)
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: started streaming with peer 8fa619ef4a7466c (writer)
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: started peer 8fa619ef4a7466c
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: added peer 8fa619ef4a7466c
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: starting peer 46bb2c01c2631b58...
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: started HTTP pipelining with peer 46bb2c01c2631b58
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: started peer 46bb2c01c2631b58
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: started streaming with peer 8fa619ef4a7466c (stream MsgApp v2 reader)
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: started streaming with peer 8fa619ef4a7466c (stream Message reader)
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: started streaming with peer 46bb2c01c2631b58 (writer)
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: started streaming with peer 46bb2c01c2631b58 (stream MsgApp v2 reader)
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: added peer 46bb2c01c2631b58
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: starting server... [version: 3.4.13, cluster version: to_be_decided]
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: started streaming with peer 46bb2c01c2631b58 (stream Message reader)
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: started streaming with peer 46bb2c01c2631b58 (writer)
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: raft2022/03/19 14:52:13 INFO: 641485789e9cc567 switched to configuration voters=(646936831818417772 5096715789367712600 7211535656430650727)
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: added member 8fa619ef4a7466c [https://172.40.2.187:2380] to cluster 52acf62c614d9c0b
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: raft2022/03/19 14:52:13 INFO: 641485789e9cc567 switched to configuration voters=(646936831818417772 5096715789367712600 7211535656430650727)
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: raft2022/03/19 14:52:13 INFO: 641485789e9cc567 switched to configuration voters=(646936831818417772 5096715789367712600 7211535656430650727)
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: added member 46bb2c01c2631b58 [https://172.40.2.232:2380] to cluster 52acf62c614d9c0b
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: added member 641485789e9cc567 [https://172.40.2.38:2380] to cluster 52acf62c614d9c0b
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: ClientTLS: cert = /etc/ssl/etcd/ssl/member-sh5dnewoa-a-0145.pem, key = /etc/ssl/etcd/ssl/member-sh5dnewoa-a-0145-key.pem, trusted-ca = /etc/ssl/etcd/ssl/ca.pem, client-cert-auth = true, crl-file =
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: listening for peers on 172.40.2.38:2380
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: peer 8fa619ef4a7466c became active
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: established a TCP streaming connection with peer 8fa619ef4a7466c (stream Message writer)
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: established a TCP streaming connection with peer 8fa619ef4a7466c (stream MsgApp v2 writer)
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: peer 46bb2c01c2631b58 became active
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: established a TCP streaming connection with peer 46bb2c01c2631b58 (stream Message writer)
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: established a TCP streaming connection with peer 46bb2c01c2631b58 (stream MsgApp v2 writer)
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: 641485789e9cc567 initialized peer connection; fast-forwarding 18 ticks (election ticks 20) with 2 active peer(s)
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: established a TCP streaming connection with peer 8fa619ef4a7466c (stream MsgApp v2 reader)
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: established a TCP streaming connection with peer 46bb2c01c2631b58 (stream MsgApp v2 reader)
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: established a TCP streaming connection with peer 8fa619ef4a7466c (stream Message reader)
Mar 19 14:52:13 sh5dnewoa-a-0145 etcd: established a TCP streaming connection with peer 46bb2c01c2631b58 (stream Message reader)
Mar 19 14:52:14 sh5dnewoa-a-0145 etcd: raft2022/03/19 14:52:14 INFO: 641485789e9cc567 [term: 1] received a MsgVote message with higher term from 46bb2c01c2631b58 [term: 2]
Mar 19 14:52:14 sh5dnewoa-a-0145 etcd: raft2022/03/19 14:52:14 INFO: 641485789e9cc567 became follower at term 2
Mar 19 14:52:14 sh5dnewoa-a-0145 etcd: raft2022/03/19 14:52:14 INFO: 641485789e9cc567 [logterm: 1, index: 3, vote: 0] cast MsgVote for 46bb2c01c2631b58 [logterm: 1, index: 3] at term 2
Mar 19 14:52:14 sh5dnewoa-a-0145 etcd: raft2022/03/19 14:52:14 INFO: raft.node: 641485789e9cc567 elected leader 46bb2c01c2631b58 at term 2
Mar 19 14:52:14 sh5dnewoa-a-0145 etcd: published {Name:etcd-sh5dnewoa-a-0145 ClientURLs:[https://172.40.2.38:2379]} to cluster 52acf62c614d9c0b
Mar 19 14:52:14 sh5dnewoa-a-0145 etcd: ready to serve client requests
Mar 19 14:52:14 sh5dnewoa-a-0145 etcd: ready to serve client requests
Mar 19 14:52:14 sh5dnewoa-a-0145 systemd: Started etcd.
Mar 19 14:52:14 sh5dnewoa-a-0145 etcd: serving client requests on 127.0.0.1:2379
Mar 19 14:52:14 sh5dnewoa-a-0145 etcd: serving client requests on 172.40.2.38:2379
Mar 19 14:52:14 sh5dnewoa-a-0145 etcd: set the initial cluster version to 3.4
Mar 19 14:52:14 sh5dnewoa-a-0145 etcd: enabled capabilities for version 3.4
Mar 19 14:52:14 sh5dnewoa-a-0145 systemd: Reloading.
Mar 19 14:52:14 sh5dnewoa-a-0145 etcd: /health OK (status code 200)
Mar 19 14:52:14 sh5dnewoa-a-0145 etcd: /health OK (status code 200)
Mar 19 14:52:14 sh5dnewoa-a-0145 etcd: /health OK (status code 200)
Mar 19 14:52:15 sh5dnewoa-a-0145 etcd: /health OK (status code 200)
Mar 19 14:52:15 sh5dnewoa-a-0145 etcd: /health OK (status code 200)
Mar 19 14:52:15 sh5dnewoa-a-0145 etcd: /health OK (status code 200)
Mar 19 14:52:18 sh5dnewoa-a-0145 etcd: sending database snapshot to client 20 kB [20480 bytes]
Mar 19 14:52:18 sh5dnewoa-a-0145 etcd: sending database sha256 checksum to client [32 bytes]
Mar 19 14:52:18 sh5dnewoa-a-0145 etcd: successfully sent database snapshot to client 20 kB [20480 bytes]
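Up to this point etcd on this member looks fine: all peers connected, a leader elected at term 2, and /health returning 200. If you want to confirm the three-member cluster independently of kubeadm, here is a rough sketch with etcdctl, reusing the certificate paths from the environment variables in the log above (it assumes etcdctl is available on the node; adjust endpoints and paths to your setup):

```bash
# Query each etcd member over TLS with the member cert shown in the log above.
export ETCDCTL_API=3
etcdctl \
  --endpoints=https://172.40.2.38:2379,https://172.40.2.232:2379,https://172.40.2.187:2379 \
  --cacert=/etc/ssl/etcd/ssl/ca.pem \
  --cert=/etc/ssl/etcd/ssl/member-sh5dnewoa-a-0145.pem \
  --key=/etc/ssl/etcd/ssl/member-sh5dnewoa-a-0145-key.pem \
  endpoint health

# List the members the cluster actually registered.
etcdctl \
  --endpoints=https://172.40.2.38:2379 \
  --cacert=/etc/ssl/etcd/ssl/ca.pem \
  --cert=/etc/ssl/etcd/ssl/member-sh5dnewoa-a-0145.pem \
  --key=/etc/ssl/etcd/ssl/member-sh5dnewoa-a-0145-key.pem \
  member list
```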
Mar 19 14:52:51 sh5dnewoa-a-0145 systemd: Reloading.
Mar 19 14:52:51 sh5dnewoa-a-0145 systemd: Reloading.
Mar 19 14:52:57 sh5dnewoa-a-0145 systemd: Reloading.
Mar 19 14:52:57 sh5dnewoa-a-0145 systemd: Started kubelet: The Kubernetes Node Agent.
Mar 19 14:52:57 sh5dnewoa-a-0145 kubelet: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 19 14:52:57 sh5dnewoa-a-0145 kubelet: Flag --network-plugin has been deprecated, will be removed along with dockershim.
Mar 19 14:52:57 sh5dnewoa-a-0145 kubelet: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 19 14:52:57 sh5dnewoa-a-0145 kubelet: Flag --network-plugin has been deprecated, will be removed along with dockershim.
Mar 19 14:52:58 sh5dnewoa-a-0145 systemd: Started Kubernetes systemd probe.
Mar 19 14:52:58 sh5dnewoa-a-0145 kubelet: I0319 14:52:58.096200 7596 server.go:440] "Kubelet version" kubeletVersion="v1.21.5"
Mar 19 14:52:58 sh5dnewoa-a-0145 kubelet: I0319 14:52:58.096584 7596 server.go:851] "Client rotation is on, will bootstrap in background"
Mar 19 14:52:58 sh5dnewoa-a-0145 kubelet: I0319 14:52:58.100874 7596 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt
Mar 19 14:52:58 sh5dnewoa-a-0145 kubelet: I0319 14:52:58.187565 7596 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 19 14:52:58 sh5dnewoa-a-0145 kubelet: I0319 14:52:58.187841 7596 container_manager_linux.go:278] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 19 14:52:58 sh5dnewoa-a-0145 kubelet: I0319 14:52:58.187962 7596 container_manager_linux.go:283] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[cpu:{i:{value:200 scale:-3} d:{Dec:} s:200m Format:DecimalSI} memory:{i:{value:262144000 scale:0} d:{Dec:} s:250Mi Format:BinarySI}] SystemReserved:map[cpu:{i:{value:200 scale:-3} d:{Dec:} s:200m Format:DecimalSI} memory:{i:{value:262144000 scale:0} d:{Dec:} s:250Mi Format:BinarySI}] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:pid.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:1000 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
Mar 19 14:52:58 sh5dnewoa-a-0145 kubelet: I0319 14:52:58.187991 7596 topology_manager.go:120] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Mar 19 14:52:58 sh5dnewoa-a-0145 kubelet: I0319 14:52:58.188004 7596 container_manager_linux.go:314] "Initializing Topology Manager" policy="none" scope="container"
Mar 19 14:52:58 sh5dnewoa-a-0145 kubelet: I0319 14:52:58.188012 7596 container_manager_linux.go:319] "Creating device plugin manager" devicePluginEnabled=true
Mar 19 14:52:58 sh5dnewoa-a-0145 kubelet: I0319 14:52:58.188111 7596 kubelet.go:307] "Using dockershim is deprecated, please consider using a full-fledged CRI implementation"
Mar 19 14:52:58 sh5dnewoa-a-0145 kubelet: I0319 14:52:58.188158 7596 client.go:78] "Connecting to docker on the dockerEndpoint" endpoint="unix:///var/run/docker.sock"
Mar 19 14:52:58 sh5dnewoa-a-0145 kubelet: I0319 14:52:58.188173 7596 client.go:97] "Start docker client with request timeout" timeout="2m0s"
Mar 19 14:52:58 sh5dnewoa-a-0145 kubelet: I0319 14:52:58.198176 7596 docker_service.go:566] "Hairpin mode is set but kubenet is not enabled, falling back to HairpinVeth" hairpinMode=promiscuous-bridge
Mar 19 14:52:58 sh5dnewoa-a-0145 kubelet: I0319 14:52:58.198200 7596 docker_service.go:242] "Hairpin mode is set" hairpinMode=hairpin-veth
Mar 19 14:52:58 sh5dnewoa-a-0145 kubelet: I0319 14:52:58.200291 7596 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
Mar 19 14:52:58 sh5dnewoa-a-0145 kubelet: I0319 14:52:58.228978 7596 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
Mar 19 14:52:58 sh5dnewoa-a-0145 kubelet: I0319 14:52:58.229070 7596 docker_service.go:257] "Docker cri networking managed by the network plugin" networkPluginName="cni"
Mar 19 14:52:58 sh5dnewoa-a-0145 kubelet: I0319 14:52:58.229221 7596 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
Mar 19 14:52:58 sh5dnewoa-a-0145 kubelet: I0319 14:52:58.237898 7596 docker_service.go:264] "Docker Info" dockerInfo=&{ID:GR5H:CMMO:YBJE:BE3A:KTQ7:MW7T:2RRC:NHXN:LCEL:BXIX:UPLW:RADV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:40 SystemTime:2022-03-19T14:52:58.229738982+08:00 LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:1 NEventsListener:0 KernelVersion:4.4.249-1.el7.elrepo.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSVersion:7 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0009ec0e0 NCPU:16 MemTotal:33701453824 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:sh5dnewoa-a-0145 Labels:[] ExperimentalBuild:false ServerVersion:20.10.13 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:} io.containerd.runtime.v1.linux:{Path:runc Args:[] Shim:} runc:{Path:runc Args:[] Shim:}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster: Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc Expected:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc} RuncCommit:{ID:v1.0.3-0-gf46b6ba2 Expected:v1.0.3-0-gf46b6ba2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine DefaultAddressPools:[] Warnings:[]}
Mar 19 14:52:58 sh5dnewoa-a-0145 kubelet: I0319 14:52:58.237936 7596 docker_service.go:277] "Setting cgroupDriver" cgroupDriver="cgroupfs"
Mar 19 14:52:58 sh5dnewoa-a-0145 kubelet: I0319 14:52:58.248952 7596 remote_runtime.go:62] parsed scheme: ""
Mar 19 14:52:58 sh5dnewoa-a-0145 kubelet: I0319 14:52:58.248975 7596 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
Mar 19 14:52:58 sh5dnewoa-a-0145 kubelet: I0319 14:52:58.249028 7596 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0 }] }
Mar 19 14:52:58 sh5dnewoa-a-0145 kubelet: I0319 14:52:58.249040 7596 clientconn.go:948] ClientConn switching balancer to "pick_first"
Mar 19 14:52:58 sh5dnewoa-a-0145 kubelet: I0319 14:52:58.249096 7596 remote_image.go:50] parsed scheme: ""
Mar 19 14:52:58 sh5dnewoa-a-0145 kubelet: I0319 14:52:58.249103 7596 remote_image.go:50] scheme "" not registered, fallback to default scheme
Mar 19 14:52:58 sh5dnewoa-a-0145 kubelet: I0319 14:52:58.249111 7596 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0 }] }
Mar 19 14:52:58 sh5dnewoa-a-0145 kubelet: I0319 14:52:58.249116 7596 clientconn.go:948] ClientConn switching balancer to "pick_first"
Mar 19 14:52:58 sh5dnewoa-a-0145 kubelet: I0319 14:52:58.249205 7596 kubelet.go:404] "Attempting to sync node with API server"
Mar 19 14:52:58 sh5dnewoa-a-0145 kubelet: I0319 14:52:58.249232 7596 kubelet.go:272] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 19 14:52:58 sh5dnewoa-a-0145 kubelet: I0319 14:52:58.249260 7596 kubelet.go:283] "Adding apiserver pod source"
Mar 19 14:52:58 sh5dnewoa-a-0145 kubelet: I0319 14:52:58.249277 7596 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 19 14:52:58 sh5dnewoa-a-0145 kubelet: I0319 14:52:58.266152 7596 kuberuntime_manager.go:222] "Container runtime initialized" containerRuntime="docker" version="20.10.13" apiVersion="1.41.0"
Mar 19 14:53:03 sh5dnewoa-a-0145 kubelet: I0319 14:53:03.229982 7596 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
Mar 19 14:53:06 sh5dnewoa-a-0145 kubelet: E0319 14:53:06.675623 7596 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
Mar 19 14:53:06 sh5dnewoa-a-0145 kubelet: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Mar 19 14:53:06 sh5dnewoa-a-0145 kubelet: I0319 14:53:06.680340 7596 server.go:1190] "Started kubelet"
Mar 19 14:53:06 sh5dnewoa-a-0145 kubelet: E0319 14:53:06.680486 7596 kubelet.go:1306] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Mar 19 14:53:06 sh5dnewoa-a-0145 kubelet: I0319 14:53:06.680658 7596 server.go:149] "Starting to listen" address="0.0.0.0" port=10250
Mar 19 14:53:06 sh5dnewoa-a-0145 kubelet: I0319 14:53:06.681289 7596 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 19 14:53:06 sh5dnewoa-a-0145 kubelet: I0319 14:53:06.681419 7596 volume_manager.go:271] "Starting Kubelet Volume Manager"
Mar 19 14:53:06 sh5dnewoa-a-0145 kubelet: I0319 14:53:06.681482 7596 desired_state_of_world_populator.go:141] "Desired state populator starts to run"
Mar 19 14:53:06 sh5dnewoa-a-0145 kubelet: I0319 14:53:06.683810 7596 server.go:409] "Adding debug handlers to kubelet server"
Mar 19 14:53:06 sh5dnewoa-a-0145 kubelet: E0319 14:53:06.694934 7596 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
Mar 19 14:53:06 sh5dnewoa-a-0145 kubelet: E0319 14:53:06.782236 7596 kubelet.go:2291] "Error getting node" err="node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:06 sh5dnewoa-a-0145 kubelet: I0319 14:53:06.812050 7596 cpu_manager.go:199] "Starting CPU manager" policy="none"
Mar 19 14:53:06 sh5dnewoa-a-0145 kubelet: I0319 14:53:06.812078 7596 cpu_manager.go:200] "Reconciling" reconcilePeriod="10s"
Mar 19 14:53:06 sh5dnewoa-a-0145 kubelet: I0319 14:53:06.812095 7596 state_mem.go:36] "Initialized new in-memory state store"
Mar 19 14:53:06 sh5dnewoa-a-0145 kubelet: I0319 14:53:06.815504 7596 policy_none.go:44] "None policy: Start"
Mar 19 14:53:06 sh5dnewoa-a-0145 kubelet: I0319 14:53:06.818510 7596 kubelet_node_status.go:71] "Attempting to register node" node="sh5dnewoa-a-0145"
Mar 19 14:53:06 sh5dnewoa-a-0145 kubelet: W0319 14:53:06.818524 7596 container.go:586] Failed to update stats for container "/kubepods": /sys/fs/cgroup/cpuset/kubepods/cpuset.mems found to be empty, continuing to push stats
Mar 19 14:53:06 sh5dnewoa-a-0145 kubelet: W0319 14:53:06.821102 7596 container.go:586] Failed to update stats for container "/kubepods/burstable": /sys/fs/cgroup/cpuset/kubepods/burstable/cpuset.mems found to be empty, continuing to push stats
Mar 19 14:53:06 sh5dnewoa-a-0145 kubelet: W0319 14:53:06.823242 7596 container.go:586] Failed to update stats for container "/kubepods/besteffort": /sys/fs/cgroup/cpuset/kubepods/besteffort/cpuset.mems found to be empty, continuing to push stats
Mar 19 14:53:06 sh5dnewoa-a-0145 kubelet: I0319 14:53:06.825154 7596 manager.go:600] "Failed to retrieve checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 19 14:53:06 sh5dnewoa-a-0145 kubelet: I0319 14:53:06.825350 7596 plugin_manager.go:114] "Starting Kubelet Plugin Manager"
Mar 19 14:53:06 sh5dnewoa-a-0145 kubelet: E0319 14:53:06.825701 7596 eviction_manager.go:255] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:06 sh5dnewoa-a-0145 kubelet: I0319 14:53:06.863934 7596 kubelet_network_linux.go:56] "Initialized protocol iptables rules." protocol=IPv4
Mar 19 14:53:06 sh5dnewoa-a-0145 kubelet: E0319 14:53:06.882575 7596 kubelet.go:2291] "Error getting node" err="node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:06 sh5dnewoa-a-0145 kernel: ip6_tables: (C) 2000-2006 Netfilter Core Team
Mar 19 14:53:06 sh5dnewoa-a-0145 kubelet: I0319 14:53:06.961095 7596 kubelet_network_linux.go:56] "Initialized protocol iptables rules." protocol=IPv6
Mar 19 14:53:06 sh5dnewoa-a-0145 kubelet: I0319 14:53:06.961121 7596 status_manager.go:157] "Starting to sync pod status with apiserver"
Mar 19 14:53:06 sh5dnewoa-a-0145 kubelet: I0319 14:53:06.961144 7596 kubelet.go:1846] "Starting kubelet main sync loop"
Mar 19 14:53:06 sh5dnewoa-a-0145 kubelet: E0319 14:53:06.961186 7596 kubelet.go:1870] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Mar 19 14:53:06 sh5dnewoa-a-0145 kubelet: E0319 14:53:06.983483 7596 kubelet.go:2291] "Error getting node" err="node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:07 sh5dnewoa-a-0145 kubelet: I0319 14:53:07.061740 7596 topology_manager.go:187] "Topology Admit Handler"
Mar 19 14:53:07 sh5dnewoa-a-0145 kubelet: E0319 14:53:07.083611 7596 kubelet.go:2291] "Error getting node" err="node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:07 sh5dnewoa-a-0145 kubelet: I0319 14:53:07.083664 7596 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs-0\" (UniqueName: \"kubernetes.io/host-path/a456dd0781cd530ff8ecbd778ccbea9f-etcd-certs-0\") pod \"kube-apiserver-sh5dnewoa-a-0145\" (UID: \"a456dd0781cd530ff8ecbd778ccbea9f\") "
Mar 19 14:53:07 sh5dnewoa-a-0145 kubelet: I0319 14:53:07.083702 7596 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a456dd0781cd530ff8ecbd778ccbea9f-k8s-certs\") pod \"kube-apiserver-sh5dnewoa-a-0145\" (UID: \"a456dd0781cd530ff8ecbd778ccbea9f\") "
Mar 19 14:53:07 sh5dnewoa-a-0145 kubelet: I0319 14:53:07.083725 7596 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a456dd0781cd530ff8ecbd778ccbea9f-ca-certs\") pod \"kube-apiserver-sh5dnewoa-a-0145\" (UID: \"a456dd0781cd530ff8ecbd778ccbea9f\") "
Mar 19 14:53:07 sh5dnewoa-a-0145 kubelet: I0319 14:53:07.083748 7596 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-pki\" (UniqueName: \"kubernetes.io/host-path/a456dd0781cd530ff8ecbd778ccbea9f-etc-pki\") pod \"kube-apiserver-sh5dnewoa-a-0145\" (UID: \"a456dd0781cd530ff8ecbd778ccbea9f\") "
Mar 19 14:53:07 sh5dnewoa-a-0145 kubelet: I0319 14:53:07.091441 7596 topology_manager.go:187] "Topology Admit Handler"
Mar 19 14:53:07 sh5dnewoa-a-0145 kubelet: I0319 14:53:07.130058 7596 topology_manager.go:187] "Topology Admit Handler"
Mar 19 14:53:07 sh5dnewoa-a-0145 kubelet: E0319 14:53:07.183693 7596 kubelet.go:2291] "Error getting node" err="node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:07 sh5dnewoa-a-0145 kubelet: I0319 14:53:07.184547 7596 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3ff24d656b000bd743012e274a97a603-kubeconfig\") pod \"kube-scheduler-sh5dnewoa-a-0145\" (UID: \"3ff24d656b000bd743012e274a97a603\") "
Mar 19 14:53:07 sh5dnewoa-a-0145 kubelet: I0319 14:53:07.184650 7596 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dbb464fe8d9281a5ab3d118409dee17a-flexvolume-dir\") pod \"kube-controller-manager-sh5dnewoa-a-0145\" (UID: \"dbb464fe8d9281a5ab3d118409dee17a\") "
Mar 19 14:53:07 sh5dnewoa-a-0145 kubelet: I0319 14:53:07.184674 7596 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-time\" (UniqueName: \"kubernetes.io/host-path/dbb464fe8d9281a5ab3d118409dee17a-host-time\") pod \"kube-controller-manager-sh5dnewoa-a-0145\" (UID: \"dbb464fe8d9281a5ab3d118409dee17a\") "
Mar 19 14:53:07 sh5dnewoa-a-0145 kubelet: I0319 14:53:07.184700 7596 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dbb464fe8d9281a5ab3d118409dee17a-k8s-certs\") pod \"kube-controller-manager-sh5dnewoa-a-0145\" (UID: \"dbb464fe8d9281a5ab3d118409dee17a\") "
Mar 19 14:53:07 sh5dnewoa-a-0145 kubelet: I0319 14:53:07.184723 7596 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dbb464fe8d9281a5ab3d118409dee17a-kubeconfig\") pod \"kube-controller-manager-sh5dnewoa-a-0145\" (UID: \"dbb464fe8d9281a5ab3d118409dee17a\") "
Mar 19 14:53:07 sh5dnewoa-a-0145 kubelet: I0319 14:53:07.184771 7596 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dbb464fe8d9281a5ab3d118409dee17a-ca-certs\") pod \"kube-controller-manager-sh5dnewoa-a-0145\" (UID: \"dbb464fe8d9281a5ab3d118409dee17a\") "
Mar 19 14:53:07 sh5dnewoa-a-0145 kubelet: I0319 14:53:07.184794 7596 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-pki\" (UniqueName: \"kubernetes.io/host-path/dbb464fe8d9281a5ab3d118409dee17a-etc-pki\") pod \"kube-controller-manager-sh5dnewoa-a-0145\" (UID: \"dbb464fe8d9281a5ab3d118409dee17a\") "
Mar 19 14:53:07 sh5dnewoa-a-0145 kubelet: W0319 14:53:07.185846 7596 container.go:586] Failed to update stats for container "/kubepods/burstable/pod3ff24d656b000bd743012e274a97a603": /sys/fs/cgroup/cpuset/kubepods/burstable/pod3ff24d656b000bd743012e274a97a603/cpuset.mems found to be empty, continuing to push stats
Mar 19 14:53:07 sh5dnewoa-a-0145 kubelet: E0319 14:53:07.284563 7596 kubelet.go:2291] "Error getting node" err="node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:07 sh5dnewoa-a-0145 kubelet: E0319 14:53:07.384792 7596 kubelet.go:2291] "Error getting node" err="node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:07 sh5dnewoa-a-0145 kubelet: E0319 14:53:07.485128 7596 kubelet.go:2291] "Error getting node" err="node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:07 sh5dnewoa-a-0145 kubelet: E0319 14:53:07.586054 7596 kubelet.go:2291] "Error getting node" err="node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:07 sh5dnewoa-a-0145 kubelet: E0319 14:53:07.686669 7596 kubelet.go:2291] "Error getting node" err="node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:07 sh5dnewoa-a-0145 kubelet: E0319 14:53:07.787597 7596 kubelet.go:2291] "Error getting node" err="node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:07 sh5dnewoa-a-0145 dockerd: time="2022-03-19T14:53:07.808106371+08:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ae6d8f442c0ffb182b9874bf96340a0a245100200cd5921a54dcfd478c4d71cd pid=8015
Mar 19 14:53:07 sh5dnewoa-a-0145 dockerd: time="2022-03-19T14:53:07.809811368+08:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/97869587e8babfa88a5a641ab65586ab104a3ee514e05cbda412b852a3c13e95 pid=8014
Mar 19 14:53:07 sh5dnewoa-a-0145 dockerd: time="2022-03-19T14:53:07.810721665+08:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f5e7f06948244e1a7b143de41ab75f203bcdd69e663eaf798c4cfceec12913e8 pid=8018
Mar 19 14:53:07 sh5dnewoa-a-0145 kubelet: E0319 14:53:07.887887 7596 kubelet.go:2291] "Error getting node" err="node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:07 sh5dnewoa-a-0145 kubelet: E0319 14:53:07.988047 7596 kubelet.go:2291] "Error getting node" err="node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:08 sh5dnewoa-a-0145 kubelet: I0319 14:53:08.004197 7596 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="ae6d8f442c0ffb182b9874bf96340a0a245100200cd5921a54dcfd478c4d71cd"
Mar 19 14:53:08 sh5dnewoa-a-0145 kubelet: I0319 14:53:08.030191 7596 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="f5e7f06948244e1a7b143de41ab75f203bcdd69e663eaf798c4cfceec12913e8"
Mar 19 14:53:08 sh5dnewoa-a-0145 dockerd: time="2022-03-19T14:53:08.055580502+08:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/cdae0ed9050cef2c7d7e19335f9ca550fcf027e1d35ec31d04e175dddab7405b pid=8214
Mar 19 14:53:08 sh5dnewoa-a-0145 kubelet: I0319 14:53:08.062945 7596 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="97869587e8babfa88a5a641ab65586ab104a3ee514e05cbda412b852a3c13e95"
Mar 19 14:53:08 sh5dnewoa-a-0145 dockerd: time="2022-03-19T14:53:08.063502767+08:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/322d91a6c1792f59ba8349331cb2a56d2fbd6311a73b0d3e19c14b96ac88d77c pid=8237
Mar 19 14:53:08 sh5dnewoa-a-0145 dockerd: time="2022-03-19T14:53:08.069084453+08:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/2b7d159448d5b5e6424c7e6d25acc75d23e49a8e2f38a382733e1b7a0756da08 pid=8256
Mar 19 14:53:08 sh5dnewoa-a-0145 kubelet: E0319 14:53:08.089117 7596 kubelet.go:2291] "Error getting node" err="node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:08 sh5dnewoa-a-0145 kubelet: E0319 14:53:08.189488 7596 kubelet.go:2291] "Error getting node" err="node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:08 sh5dnewoa-a-0145 kubelet: I0319 14:53:08.230588 7596 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
Mar 19 14:53:08 sh5dnewoa-a-0145 kubelet: E0319 14:53:08.290136 7596 kubelet.go:2291] "Error getting node" err="node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:08 sh5dnewoa-a-0145 kubelet: E0319 14:53:08.390191 7596 kubelet.go:2291] "Error getting node" err="node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:11 sh5dnewoa-a-0145 kubelet: E0319 14:53:11.838311 7596 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
Mar 19 14:53:11 sh5dnewoa-a-0145 kubelet: E0319 14:53:11.913649 7596 kubelet.go:2291] "Error getting node" err="node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:13 sh5dnewoa-a-0145 kubelet: I0319 14:53:13.231663 7596 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
Mar 19 14:53:16 sh5dnewoa-a-0145 kubelet: E0319 14:53:16.685618 7596 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://lb.kubesphere.local:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/sh5dnewoa-a-0145?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Mar 19 14:53:16 sh5dnewoa-a-0145 kubelet: E0319 14:53:16.741088 7596 kubelet.go:2291] "Error getting node" err="node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:16 sh5dnewoa-a-0145 kubelet: E0319 14:53:16.826126 7596 eviction_manager.go:255] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:16 sh5dnewoa-a-0145 kubelet: E0319 14:53:16.841868 7596 kubelet.go:2291] "Error getting node" err="node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:16 sh5dnewoa-a-0145 kubelet: E0319 14:53:16.849572 7596 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
Mar 19 14:53:16 sh5dnewoa-a-0145 kubelet: E0319 14:53:16.942213 7596 kubelet.go:2291] "Error getting node" err="node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:17 sh5dnewoa-a-0145 kubelet: E0319 14:53:17.042982 7596 kubelet.go:2291] "Error getting node" err="node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:21 sh5dnewoa-a-0145 kubelet: E0319 14:53:21.860318 7596 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
Mar 19 14:53:21 sh5dnewoa-a-0145 kubelet: E0319 14:53:21.871459 7596 kubelet.go:2291] "Error getting node" err="node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:23 sh5dnewoa-a-0145 kubelet: I0319 14:53:23.232963 7596 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
Mar 19 14:53:23 sh5dnewoa-a-0145 kubelet: E0319 14:53:23.281275 7596 kubelet.go:2291] "Error getting node" err="node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:26 sh5dnewoa-a-0145 kubelet: E0319 14:53:26.827213 7596 eviction_manager.go:255] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:26 sh5dnewoa-a-0145 kubelet: E0319 14:53:26.870807 7596 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
Mar 19 14:53:26 sh5dnewoa-a-0145 kubelet: E0319 14:53:26.886278 7596 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: Get "https://lb.kubesphere.local:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/sh5dnewoa-a-0145?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Mar 19 14:53:26 sh5dnewoa-a-0145 kubelet: E0319 14:53:26.902362 7596 kubelet.go:2291] "Error getting node" err="node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:27 sh5dnewoa-a-0145 kubelet: E0319 14:53:27.003082 7596 kubelet.go:2291] "Error getting node" err="node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:28 sh5dnewoa-a-0145 kubelet: E0319 14:53:28.111116 7596 certificate_manager.go:437] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post "https://lb.kubesphere.local:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.40.2.64:6443: i/o timeout
Mar 19 14:53:28 sh5dnewoa-a-0145 kubelet: E0319 14:53:28.211529 7596 kubelet.go:2291] "Error getting node" err="node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:28 sh5dnewoa-a-0145 kubelet: I0319 14:53:28.233904 7596 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
Mar 19 14:53:28 sh5dnewoa-a-0145 kubelet: I0319 14:53:28.250239 7596 trace.go:205] Trace[189269584]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (19-Mar-2022 14:52:58.249) (total time: 30000ms):
Mar 19 14:53:28 sh5dnewoa-a-0145 kubelet: Trace[189269584]: [30.000444822s] [30.000444822s] END
Mar 19 14:53:28 sh5dnewoa-a-0145 kubelet: E0319 14:53:28.250263 7596 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.Service: failed to list v1.Service: Get "https://lb.kubesphere.local:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.40.2.64:6443: i/o timeout
Mar 19 14:53:28 sh5dnewoa-a-0145 kubelet: I0319 14:53:28.250287 7596 trace.go:205] Trace[477519539]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (19-Mar-2022 14:52:58.249) (total time: 30000ms):
Mar 19 14:53:28 sh5dnewoa-a-0145 kubelet: Trace[477519539]: [30.000632525s] [30.000632525s] END
Mar 19 14:53:28 sh5dnewoa-a-0145 kubelet: E0319 14:53:28.250328 7596 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.Node: failed to list v1.Node: Get "https://lb.kubesphere.local:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsh5dnewoa-a-0145&limit=500&resourceVersion=0": dial tcp 172.40.2.64:6443: i/o timeout
Mar 19 14:53:28 sh5dnewoa-a-0145 kubelet: E0319 14:53:28.312170 7596 kubelet.go:2291] "Error getting node" err="node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:28 sh5dnewoa-a-0145 kubelet: E0319 14:53:28.412609 7596 kubelet.go:2291] "Error getting node" err="node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:28 sh5dnewoa-a-0145 kubelet: E0319 14:53:28.513052 7596 kubelet.go:2291] "Error getting node" err="node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:31 sh5dnewoa-a-0145 kubelet: E0319 14:53:31.883646 7596 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
Mar 19 14:53:31 sh5dnewoa-a-0145 kubelet: E0319 14:53:31.933766 7596 kubelet.go:2291] "Error getting node" err="node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:32 sh5dnewoa-a-0145 kubelet: E0319 14:53:32.034410 7596 kubelet.go:2291] "Error getting node" err="node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:33 sh5dnewoa-a-0145 kubelet: I0319 14:53:33.234964 7596 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
Mar 19 14:53:33 sh5dnewoa-a-0145 kubelet: E0319 14:53:33.241143 7596 kubelet.go:2291] "Error getting node" err="node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:33 sh5dnewoa-a-0145 kubelet: E0319 14:53:33.341391 7596 kubelet.go:2291] "Error getting node" err="node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:36 sh5dnewoa-a-0145 kubelet: E0319 14:53:36.681570 7596 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"sh5dnewoa-a-0145.16ddb5ab74284237", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"sh5dnewoa-a-0145", UID:"sh5dnewoa-a-0145", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"sh5dnewoa-a-0145"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0857d54a88c8e37, ext:9295881998, loc:(time.Location)(0x74f4aa0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0857d54a88c8e37, ext:9295881998, loc:(time.Location)(0x74f4aa0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, Series:(v1.EventSeries)(nil), Action:"", Related:(v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://lb.kubesphere.local:6443/api/v1/namespaces/default/events": dial tcp 172.40.2.64:6443: i/o timeout'(may retry after sleeping)
Mar 19 14:53:36 sh5dnewoa-a-0145 kubelet: I0319 14:53:36.684934 7596 trace.go:205] Trace[103060332]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (19-Mar-2022 14:53:06.681) (total time: 30003ms):
Mar 19 14:53:36 sh5dnewoa-a-0145 kubelet: Trace[103060332]: [30.003224349s] [30.003224349s] END
Mar 19 14:53:36 sh5dnewoa-a-0145 kubelet: E0319 14:53:36.684960 7596 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.CSIDriver: failed to list v1.CSIDriver: Get "https://lb.kubesphere.local:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.40.2.64:6443: i/o timeout
Mar 19 14:53:36 sh5dnewoa-a-0145 kubelet: E0319 14:53:36.762687 7596 kubelet.go:2291] "Error getting node" err="node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:36 sh5dnewoa-a-0145 kubelet: E0319 14:53:36.819216 7596 kubelet_node_status.go:93] "Unable to register node with API server" err="Post \"https://lb.kubesphere.local:6443/api/v1/nodes\": dial tcp 172.40.2.64:6443: i/o timeout" node="sh5dnewoa-a-0145"
Mar 19 14:53:36 sh5dnewoa-a-0145 kubelet: E0319 14:53:36.827333 7596 eviction_manager.go:255] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:36 sh5dnewoa-a-0145 kubelet: E0319 14:53:36.863615 7596 kubelet.go:2291] "Error getting node" err="node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:36 sh5dnewoa-a-0145 kubelet: E0319 14:53:36.893706 7596 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
Mar 19 14:53:36 sh5dnewoa-a-0145 kubelet: I0319 14:53:36.962071 7596 trace.go:205] Trace[1698303353]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (19-Mar-2022 14:53:06.961) (total time: 30000ms):
Mar 19 14:53:36 sh5dnewoa-a-0145 kubelet: Trace[1698303353]: [30.000697596s] [30.000697596s] END
Mar 19 14:53:36 sh5dnewoa-a-0145 kubelet: E0319 14:53:36.962099 7596 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.RuntimeClass: failed to list v1.RuntimeClass: Get "https://lb.kubesphere.local:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.40.2.64:6443: i/o timeout
Mar 19 14:53:36 sh5dnewoa-a-0145 kubelet: E0319 14:53:36.964034 7596 kubelet.go:2291] "Error getting node" err="node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:37 sh5dnewoa-a-0145 kubelet: I0319 14:53:37.048903 7596 kubelet_node_status.go:71] "Attempting to register node" node="sh5dnewoa-a-0145"
Mar 19 14:53:37 sh5dnewoa-a-0145 kubelet: E0319 14:53:37.064988 7596 kubelet.go:2291] "Error getting node" err="node \"sh5dnewoa-a-0145\" not found"
Mar 19 14:53:37 sh5dnewoa-a-0145 kubelet: I0319 14:53:37.122410 7596 status_manager.go:566] "Failed to get status for pod" podUID=a456dd0781cd530ff8ecbd778ccbea9f pod="kube-system/kube-apiserver-sh5dnewoa-a-0145" error="Get \"https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-sh5dnewoa-a-0145\": dial tcp 172.40.2.64:6443: i/o timeout"
Mar 19 14:53:37 sh5dnewoa-a-0145 kubelet: E0319 14:53:37.165459 7596 kubelet.go:2291] "Error getting node" err="node \"sh5dnewoa-a-0145\" not found"
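What repeats through the whole kubelet log is `dial tcp 172.40.2.64:6443: i/o timeout` against `https://lb.kubesphere.local:6443`: node registration, the CSR post, and every informer list/watch time out, which is why kubeadm eventually gives up. A quick sketch to narrow down where that path breaks, run from this master (assumes curl, nc and ss are available):

```bash
# How does lb.kubesphere.local resolve on this node? (KubeKey normally pins it in /etc/hosts)
getent hosts lb.kubesphere.local
grep lb.kubesphere.local /etc/hosts

# Is anything answering on the load-balancer address and port seen in the log?
nc -vz 172.40.2.64 6443

# Is kube-apiserver actually running locally and listening on 6443?
docker ps -a | grep kube-apiserver
ss -lntp | grep 6443

# Does the API server answer through the LB address? (-k skips cert verification for the probe)
curl -k https://lb.kubesphere.local:6443/healthz
```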
Additional information
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.

Here is one example how you may list all Kubernetes containers running in docker:

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1
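The kubeadm failure points at the kubelet and the control-plane containers as the place to look, and the log above also shows Docker running with the "cgroupfs" driver while the kubelet config expects "systemd". A rough troubleshooting sketch along those lines, run on the failing master (Docker is the runtime here; the daemon.json step overwrites the file, so merge it into any existing settings first):

```bash
# 1. The checks kubeadm's message refers to: kubelet state and its journal.
sudo systemctl status kubelet
sudo journalctl -xeu kubelet

# 2. List the control-plane containers Docker started and inspect any that exited.
docker ps -a | grep kube | grep -v pause
# docker logs <CONTAINER_ID>   # substitute the ID of a failing container

# 3. Optional: align Docker's cgroup driver with the kubelet's expected "systemd".
echo '{ "exec-opts": ["native.cgroupdriver=systemd"] }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
docker info | grep -i "cgroup driver"   # should now report: systemd

# 4. Reset the half-initialized control plane before retrying kubeadm/KubeKey.
sudo kubeadm reset -f
```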