kubesphere / kubekey

Install Kubernetes/K3s only, or both Kubernetes/K3s and KubeSphere, plus related cloud-native add-ons. It supports all-in-one, multi-node, and HA deployments 🔥 ⎈ 🐳
https://kubesphere.io
Apache License 2.0

KubeKey fails to install Kubernetes 1.25.3 #1600

Open · bfbz opened this issue 1 year ago

bfbz commented 1 year ago

Which version of KubeKey has the issue?

v3.0.0

What is your OS environment?

debian11

KubeKey config file

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: k8s
spec:
  hosts:
  - {name: k8s-master-1-192-168-8-31, address: 192.168.8.31, internalAddress: 192.168.8.31, user: root, password: "123456"}
  - {name: k8s-master-2-192-168-8-32, address: 192.168.8.32, internalAddress: 192.168.8.32, user: root, password: "123456"}
  - {name: k8s-master-3-192-168-8-33, address: 192.168.8.33, internalAddress: 192.168.8.33, user: root, password: "123456"}
  - {name: k8s-node-1-192-168-8-34, address: 192.168.8.34, internalAddress: 192.168.8.34, user: root, password: "123456"}
  roleGroups:
    etcd:
    - k8s-master-1-192-168-8-31
    - k8s-master-2-192-168-8-32
    - k8s-master-3-192-168-8-33
    master:
    - k8s-master-1-192-168-8-31
    - k8s-master-2-192-168-8-32
    - k8s-master-3-192-168-8-33
    worker:
    - k8s-node-1-192-168-8-34
  controlPlaneEndpoint:
    internalLoadbalancer: kube-vip #Internal loadbalancer for apiservers. Support: haproxy, kube-vip [Default: ""]
    domain: lb.kubesphere.local
    address: "192.168.8.30"      # The IP address of your load balancer. If you use internalLoadblancer in "kube-vip" mode, a VIP is required here.
    port: 6443
  system:
    ntpServers: #  The ntp servers of chrony.
      - time1.cloud.tencent.com
      - ntp.aliyun.com
    timezone: "Asia/Shanghai"
    #rpms: # Specify additional packages to be installed. The ISO file which is contained in the artifact is required.
    #  - nfs-utils
    #debs: # Specify additional packages to be installed. The ISO file which is contained in the artifact is required.
    #  - nfs-common
    #preInstall:  # Specify custom init shell scripts for each node, executed in the listed order.
    #  - name: format and mount disk
    #    bash: /bin/bash -x setup-disk.sh
    #    materials: # scripts can have dependency materials; these will be copied to the node
    #      - ./setup-disk.sh # the script that the shell executes
    #      - xxx             # other tools or materials needed by this script
    #postInstall: # Specify custom cleanup shell scripts for each node, run after the Kubernetes install.
    #  - name: clean tmps files
    #    bash: |
    #       rm -fr /tmp/kubekey/*

  kubernetes:
    version: v1.25.3
    imageRepo: kubesphere
    containerManager: containerd
    clusterName: cluster.local
    autoRenewCerts: true # Whether to install a script which can automatically renew the Kubernetes control plane certificates. [Default: false]
    masqueradeAll: false  # masqueradeAll tells kube-proxy to SNAT everything if using the pure iptables proxy mode. [Default: false].
    maxPods: 110  # maxPods is the number of Pods that can run on this Kubelet. [Default: 110]
    podPidsLimit: 10000 # podPidsLimit is the maximum number of PIDs in any pod. [Default: 10000]
    nodeCidrMaskSize: 24  # The internal network node size allocation. This is the size allocated to each node on your network. [Default: 24]
    proxyMode: ipvs  # Specify which proxy mode to use. [Default: ipvs]
    featureGates: # enable featureGates, [Default: {"ExpandCSIVolumes":true,"RotateKubeletServerCertificate": true,"CSIStorageCapacity":true, "TTLAfterFinished":true}]
      CSIStorageCapacity: true
      ExpandCSIVolumes: true
      RotateKubeletServerCertificate: true
      TTLAfterFinished: true
    ## support kata and NFD
    # kata:
    #   enabled: true
    # nodeFeatureDiscovery:
    #   enabled: true

  etcd:
    type: kubeadm  # Specify the type of etcd used by the cluster. When the cluster type is k3s, setting this parameter to kubeadm is invalid. [kubekey | kubeadm | external] [Default: kubekey]
    ## The following parameters need to be added only when the type is set to external.
    ## caFile, certFile and keyFile need not be set, if TLS authentication is not enabled for the existing etcd.
    # external:
    #   endpoints:
    #     - https://192.168.6.6:2379
    #   caFile: /pki/etcd/ca.crt
    #   certFile: /pki/etcd/etcd.crt
    #   keyFile: /pki/etcd/etcd.key

  network:
    plugin: calico
    calico:
      ipipMode: Always  # IPIP Mode to use for the IPv4 POOL created at start up. If set to a value other than Never, vxlanMode should be set to "Never". [Always | CrossSubnet | Never] [Default: Always]
      vxlanMode: Never  # VXLAN Mode to use for the IPv4 POOL created at start up. If set to a value other than Never, ipipMode should be set to "Never". [Always | CrossSubnet | Never] [Default: Never]
      vethMTU: 0  # The maximum transmission unit (MTU) setting determines the largest packet size that can be transmitted through your network. By default, MTU is auto-detected. [Default: 0]
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18

  storage:
    openebs:
      basePath: /opt/openebs/local # base path of the local PV provisioner

  registry:
    #registryMirrors: []
    #insecureRegistries: []
    #privateRegistry: ""
    #namespaceOverride: ""
    #auths: # for docker, added via `docker login`; for containerd, appended to `/etc/containerd/config.toml`
    #  "dockerhub.kubekey.local":
    #    username: "xxx"
    #    password: "***"
    #    skipTLSVerify: false # Allow contacting registries over HTTPS with failed TLS verification.
    #    plainHTTP: false # Allow contacting registries over HTTP.
    #    certsPath: "/etc/docker/certs.d/dockerhub.kubekey.local" # Use certificates at path (*.crt, *.cert, *.key) to connect to the registry.

A clear and concise description of what happened.

Running ./kk create cluster -f config-k8s.yaml fails with the error below.

Relevant log output

[root@k8s-master-1-192-168-8-31:~/kubekey]# ./kk create cluster  -f config-k8s.yaml 

 _   __      _          _   __           
| | / /     | |        | | / /           
| |/ / _   _| |__   ___| |/ /  ___ _   _ 
|    \| | | | '_ \ / _ \    \ / _ \ | | |
| |\  \ |_| | |_) |  __/ |\  \  __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
                                    __/ |
                                   |___/

21:38:55 CST [GreetingsModule] Greetings
21:38:55 CST message: [k8s-node-1-192-168-8-34]
sudo: unable to resolve host k8s-node-1-192-168-8-34: Name or service not known
Greetings, KubeKey!
21:38:56 CST message: [k8s-master-2-192-168-8-32]
sudo: unable to resolve host k8s-master-2-192-168-8-32: Name or service not known
Greetings, KubeKey!
21:38:56 CST message: [k8s-master-1-192-168-8-31]
sudo: unable to resolve host k8s-master-1-192-168-8-31: Name or service not known
Greetings, KubeKey!
21:38:56 CST message: [k8s-master-3-192-168-8-33]
sudo: unable to resolve host k8s-master-3-192-168-8-33: Name or service not known
Greetings, KubeKey!
21:38:56 CST success: [k8s-node-1-192-168-8-34]
21:38:56 CST success: [k8s-master-2-192-168-8-32]
21:38:56 CST success: [k8s-master-1-192-168-8-31]
21:38:56 CST success: [k8s-master-3-192-168-8-33]
21:38:56 CST [NodePreCheckModule] A pre-check on nodes
21:39:57 CST success: [k8s-master-2-192-168-8-32]
21:39:57 CST success: [k8s-node-1-192-168-8-34]
21:39:57 CST success: [k8s-master-3-192-168-8-33]
21:39:57 CST success: [k8s-master-1-192-168-8-31]
21:39:57 CST [ConfirmModule] Display confirmation form
+---------------------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| name                      | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker | containerd | nfs client | ceph client | glusterfs client | time         |
+---------------------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| k8s-master-1-192-168-8-31 | y    | y    | y       | y        | y     | y     | y       | y         | y      |        |            | y          |             |                  | CST 21:39:57 |
| k8s-master-2-192-168-8-32 | y    | y    | y       | y        | y     | y     | y       | y         | y      |        |            | y          |             |                  | CST 21:39:57 |
| k8s-master-3-192-168-8-33 | y    | y    | y       | y        | y     | y     | y       | y         | y      |        |            | y          |             |                  | CST 21:39:57 |
| k8s-node-1-192-168-8-34   | y    | y    | y       | y        | y     | y     | y       | y         | y      |        |            | y          |             |                  | CST 21:39:57 |
+---------------------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: yes
21:40:02 CST success: [LocalHost]
21:40:02 CST [NodeBinariesModule] Download installation binaries
21:40:02 CST message: [localhost]
downloading amd64 kubeadm v1.25.3 ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 41.7M  100 41.7M    0     0  1014k      0  0:00:42  0:00:42 --:--:-- 1007k
21:40:44 CST message: [localhost]
downloading amd64 kubelet v1.25.3 ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  108M  100  108M    0     0  1014k      0  0:01:49  0:01:49 --:--:-- 1052k
21:42:35 CST message: [localhost]
downloading amd64 kubectl v1.25.3 ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 42.9M  100 42.9M    0     0  1019k      0  0:00:43  0:00:43 --:--:-- 1091k
21:43:19 CST message: [localhost]
downloading amd64 helm v3.9.0 ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 44.0M  100 44.0M    0     0  1021k      0  0:00:44  0:00:44 --:--:-- 1052k
21:44:03 CST message: [localhost]
downloading amd64 kubecni v0.9.1 ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 37.9M  100 37.9M    0     0  1021k      0  0:00:38  0:00:38 --:--:-- 1052k
21:44:41 CST message: [localhost]
downloading amd64 crictl v1.24.0 ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 13.8M  100 13.8M    0     0  1021k      0  0:00:13  0:00:13 --:--:-- 1054k
21:44:55 CST message: [localhost]
downloading amd64 etcd v3.4.13 ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 16.5M  100 16.5M    0     0  1012k      0  0:00:16  0:00:16 --:--:-- 1066k
21:45:12 CST message: [localhost]
downloading amd64 containerd 1.6.4 ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 42.3M  100 42.3M    0     0  1016k      0  0:00:42  0:00:42 --:--:-- 1036k
21:45:55 CST message: [localhost]
downloading amd64 runc v1.1.1 ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 9194k  100 9194k    0     0  1005k      0  0:00:09  0:00:09 --:--:-- 1042k
21:46:04 CST success: [LocalHost]
21:46:04 CST [ConfigureOSModule] Get OS release
panic: runtime error: index out of range [1] with length 1

goroutine 262 [running]:
github.com/dominodatalab/os-release.Parse({0xc00088e160, 0x15d})
    github.com/dominodatalab/os-release@v0.0.0-20190522011736-bcdb4a3e3c2f/osrelease.go:37 +0x3c9
github.com/kubesphere/kubekey/cmd/kk/pkg/bootstrap/os.(*GetOSData).Execute(0x23865c0?, {0x27a56e8, 0xc00010ea80})
    github.com/kubesphere/kubekey/cmd/kk/pkg/bootstrap/os/tasks.go:265 +0x7f
github.com/kubesphere/kubekey/cmd/kk/pkg/core/task.(*RemoteTask).ExecuteWithRetry(0xc0001be410, {0x27a56e8, 0xc00010ea80})
    github.com/kubesphere/kubekey/cmd/kk/pkg/core/task/remote_task.go:217 +0x119
github.com/kubesphere/kubekey/cmd/kk/pkg/core/task.(*RemoteTask).Run(0xc0001be410, {0x27a56e8, 0xc00010ea80}, {0x27abc78, 0xc0004e9680}, 0x276dfc0?, 0xc0006d8840?)
    github.com/kubesphere/kubekey/cmd/kk/pkg/core/task/remote_task.go:153 +0x1c5
created by github.com/kubesphere/kubekey/cmd/kk/pkg/core/task.(*RemoteTask).RunWithTimeout
    github.com/kubesphere/kubekey/cmd/kk/pkg/core/task/remote_task.go:110 +0x16d
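
For reference, the panic happens while KubeKey parses the OS release information it reads from each node (the [ConfigureOSModule] Get OS release step); "index out of range [1] with length 1" suggests the parser hit a captured line that is not a KEY=VALUE pair. A plausible culprit, consistent with the fix described in the next comment, is the "sudo: unable to resolve host ..." warning leaking into the captured output. A hedged way to check what a combined stdout/stderr capture would contain (the exact remote command KubeKey runs may differ):

# Run on each node. Real os-release lines are all KEY=VALUE pairs, e.g. on
# Debian 11: PRETTY_NAME="Debian GNU/Linux 11 (bullseye)", ID=debian, VERSION_ID="11".
# Any line printed by the grep below contains no '=' and would break a KEY=VALUE parser.
sudo -E /bin/bash -c "cat /etc/os-release" 2>&1 | grep -v '='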

Additional information

No response

bfbz commented 1 year ago

The issue above is resolved: on Debian, /etc/hosts needs an entry that resolves the node's hostname, and it must not point to localhost (see the example entries below). However, continuing the run still fails, as follows:
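
For reference, a minimal sketch of the kind of /etc/hosts entry described above, using the hostnames and IPs from the config file (add the matching line on each node):

# /etc/hosts: map each node's own hostname to its LAN IP, not to 127.0.0.1/localhost,
# so that sudo can resolve it.
192.168.8.31  k8s-master-1-192-168-8-31
192.168.8.32  k8s-master-2-192-168-8-32
192.168.8.33  k8s-master-3-192-168-8-33
192.168.8.34  k8s-node-1-192-168-8-34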

[root@k8s-master-1-192-168-8-31:~/kubekey]# ./kk create cluster -f config-k8s.yaml --yes

 _   __      _          _   __           
| | / /     | |        | | / /           
| |/ / _   _| |__   ___| |/ /  ___ _   _ 
|    \| | | | '_ \ / _ \    \ / _ \ | | |
| |\  \ |_| | |_) |  __/ |\  \  __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
                                    __/ |
                                   |___/

11:06:56 CST [GreetingsModule] Greetings
11:06:56 CST message: [k8s-node-1-192-168-8-34]
Greetings, KubeKey!
11:06:56 CST message: [k8s-master-2-192-168-8-32]
Greetings, KubeKey!
11:06:56 CST message: [k8s-master-1-192-168-8-31]
Greetings, KubeKey!
11:06:56 CST message: [k8s-master-3-192-168-8-33]
Greetings, KubeKey!
11:06:56 CST success: [k8s-node-1-192-168-8-34]
11:06:56 CST success: [k8s-master-2-192-168-8-32]
11:06:56 CST success: [k8s-master-1-192-168-8-31]
11:06:56 CST success: [k8s-master-3-192-168-8-33]
11:06:56 CST [NodePreCheckModule] A pre-check on nodes
11:06:57 CST success: [k8s-master-1-192-168-8-31]
11:06:57 CST success: [k8s-node-1-192-168-8-34]
11:06:57 CST success: [k8s-master-3-192-168-8-33]
11:06:57 CST success: [k8s-master-2-192-168-8-32]
11:06:57 CST [NodeBinariesModule] Download installation binaries
11:06:57 CST message: [localhost]
downloading amd64 kubeadm v1.25.3 ...
11:06:57 CST message: [localhost]
kubeadm is existed
11:06:57 CST message: [localhost]
downloading amd64 kubelet v1.25.3 ...
11:06:59 CST message: [localhost]
kubelet is existed
11:06:59 CST message: [localhost]
downloading amd64 kubectl v1.25.3 ...
11:06:59 CST message: [localhost]
kubectl is existed
11:06:59 CST message: [localhost]
downloading amd64 helm v3.9.0 ...
11:06:59 CST message: [localhost]
helm is existed
11:06:59 CST message: [localhost]
downloading amd64 kubecni v0.9.1 ...
11:07:00 CST message: [localhost]
kubecni is existed
11:07:00 CST message: [localhost]
downloading amd64 crictl v1.24.0 ...
11:07:00 CST message: [localhost]
crictl is existed
11:07:00 CST message: [localhost]
downloading amd64 etcd v3.4.13 ...
11:07:00 CST message: [localhost]
etcd is existed
11:07:00 CST message: [localhost]
downloading amd64 containerd 1.6.4 ...
11:07:00 CST message: [localhost]
containerd is existed
11:07:00 CST message: [localhost]
downloading amd64 runc v1.1.1 ...
11:07:00 CST message: [localhost]
runc is existed
11:07:00 CST success: [LocalHost]
11:07:00 CST [ConfigureOSModule] Get OS release
11:07:00 CST success: [k8s-master-1-192-168-8-31]
11:07:00 CST success: [k8s-node-1-192-168-8-34]
11:07:00 CST success: [k8s-master-3-192-168-8-33]
11:07:00 CST success: [k8s-master-2-192-168-8-32]
11:07:00 CST [ConfigureOSModule] Prepare to init OS
11:07:00 CST success: [k8s-node-1-192-168-8-34]
11:07:00 CST success: [k8s-master-1-192-168-8-31]
11:07:00 CST success: [k8s-master-2-192-168-8-32]
11:07:00 CST success: [k8s-master-3-192-168-8-33]
11:07:00 CST [ConfigureOSModule] Generate init os script
11:07:01 CST success: [k8s-node-1-192-168-8-34]
11:07:01 CST success: [k8s-master-1-192-168-8-31]
11:07:01 CST success: [k8s-master-3-192-168-8-33]
11:07:01 CST success: [k8s-master-2-192-168-8-32]
11:07:01 CST [ConfigureOSModule] Exec init os script
11:07:01 CST stdout: [k8s-master-2-192-168-8-32]
net.ipv4.ip_forward = 1
fs.nr_open = 10000000
fs.file-max = 11000000
fs.inotify.max_user_watches = 11000000
vm.dirty_writeback_centisecs = 100
vm.dirty_expire_centisecs = 100
vm.overcommit_memory = 1
vm.max_map_count = 262144
net.ipv4.ip_local_port_range = 10000    65001
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 4000000
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 2048
vm.swappiness = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
11:07:01 CST stdout: [k8s-master-3-192-168-8-33]
net.ipv4.ip_forward = 1
fs.nr_open = 10000000
fs.file-max = 11000000
fs.inotify.max_user_watches = 11000000
vm.dirty_writeback_centisecs = 100
vm.dirty_expire_centisecs = 100
vm.overcommit_memory = 1
vm.max_map_count = 262144
net.ipv4.ip_local_port_range = 10000    65001
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 4000000
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 2048
vm.swappiness = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
11:07:01 CST stdout: [k8s-node-1-192-168-8-34]
net.ipv4.ip_forward = 1
fs.nr_open = 10000000
fs.file-max = 11000000
fs.inotify.max_user_watches = 11000000
vm.dirty_writeback_centisecs = 100
vm.dirty_expire_centisecs = 100
vm.overcommit_memory = 1
vm.max_map_count = 262144
net.ipv4.ip_local_port_range = 10000    65001
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 4000000
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 2048
vm.swappiness = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
11:07:01 CST stdout: [k8s-master-1-192-168-8-31]
net.ipv4.ip_forward = 1
fs.nr_open = 10000000
fs.file-max = 11000000
fs.inotify.max_user_watches = 11000000
vm.dirty_writeback_centisecs = 100
vm.dirty_expire_centisecs = 100
vm.overcommit_memory = 1
vm.max_map_count = 262144
net.ipv4.ip_local_port_range = 10000    65001
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 4000000
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 2048
vm.swappiness = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
11:07:01 CST success: [k8s-master-2-192-168-8-32]
11:07:01 CST success: [k8s-master-3-192-168-8-33]
11:07:01 CST success: [k8s-node-1-192-168-8-34]
11:07:01 CST success: [k8s-master-1-192-168-8-31]
11:07:01 CST [ConfigureOSModule] configure the ntp server for each node
11:07:02 CST stdout: [k8s-master-3-192-168-8-33]
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
11:07:02 CST stdout: [k8s-node-1-192-168-8-34]
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
11:07:02 CST stdout: [k8s-master-2-192-168-8-32]
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^? 203.107.6.88                  0   6     0     -     +0ns[   +0ns] +/-    0ns
^? 139.199.215.251               0   6     0     -     +0ns[   +0ns] +/-    0ns
11:07:02 CST stdout: [k8s-master-1-192-168-8-31]
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^? 203.107.6.88                  0   6     0     -     +0ns[   +0ns] +/-    0ns
^? 139.199.215.251               0   6     0     -     +0ns[   +0ns] +/-    0ns
^? 119.28.206.193                0   6     0     -     +0ns[   +0ns] +/-    0ns
^? ntp1.flashdance.cx            0   6     0     -     +0ns[   +0ns] +/-    0ns
^? time.cloudflare.com           0   6     0     -     +0ns[   +0ns] +/-    0ns
11:07:02 CST success: [k8s-master-3-192-168-8-33]
11:07:02 CST success: [k8s-node-1-192-168-8-34]
11:07:02 CST success: [k8s-master-2-192-168-8-32]
11:07:02 CST success: [k8s-master-1-192-168-8-31]
11:07:02 CST [KubernetesStatusModule] Get kubernetes cluster status
11:07:02 CST success: [k8s-master-1-192-168-8-31]
11:07:02 CST success: [k8s-master-2-192-168-8-32]
11:07:02 CST success: [k8s-master-3-192-168-8-33]
11:07:02 CST [InstallContainerModule] Sync containerd binaries
11:07:02 CST skipped: [k8s-master-1-192-168-8-31]
11:07:02 CST skipped: [k8s-master-3-192-168-8-33]
11:07:02 CST skipped: [k8s-node-1-192-168-8-34]
11:07:02 CST skipped: [k8s-master-2-192-168-8-32]
11:07:02 CST [InstallContainerModule] Sync crictl binaries
11:07:02 CST skipped: [k8s-master-2-192-168-8-32]
11:07:02 CST skipped: [k8s-master-1-192-168-8-31]
11:07:02 CST skipped: [k8s-node-1-192-168-8-34]
11:07:02 CST skipped: [k8s-master-3-192-168-8-33]
11:07:02 CST [InstallContainerModule] Generate containerd service
11:07:02 CST skipped: [k8s-node-1-192-168-8-34]
11:07:02 CST skipped: [k8s-master-3-192-168-8-33]
11:07:02 CST skipped: [k8s-master-1-192-168-8-31]
11:07:02 CST skipped: [k8s-master-2-192-168-8-32]
11:07:02 CST [InstallContainerModule] Generate containerd config
11:07:02 CST skipped: [k8s-master-3-192-168-8-33]
11:07:02 CST skipped: [k8s-master-1-192-168-8-31]
11:07:02 CST skipped: [k8s-node-1-192-168-8-34]
11:07:02 CST skipped: [k8s-master-2-192-168-8-32]
11:07:02 CST [InstallContainerModule] Generate crictl config
11:07:02 CST skipped: [k8s-master-1-192-168-8-31]
11:07:02 CST skipped: [k8s-master-2-192-168-8-32]
11:07:02 CST skipped: [k8s-master-3-192-168-8-33]
11:07:02 CST skipped: [k8s-node-1-192-168-8-34]
11:07:02 CST [InstallContainerModule] Enable containerd
11:07:02 CST skipped: [k8s-master-1-192-168-8-31]
11:07:02 CST skipped: [k8s-node-1-192-168-8-34]
11:07:02 CST skipped: [k8s-master-3-192-168-8-33]
11:07:02 CST skipped: [k8s-master-2-192-168-8-32]
11:07:02 CST [PullModule] Start to pull images on all nodes
11:07:02 CST message: [k8s-master-2-192-168-8-32]
downloading image: kubesphere/etcd:v3.4.13
11:07:02 CST message: [k8s-node-1-192-168-8-34]
downloading image: kubesphere/pause:3.7
11:07:02 CST message: [k8s-master-3-192-168-8-33]
downloading image: kubesphere/etcd:v3.4.13
11:07:02 CST message: [k8s-master-1-192-168-8-31]
downloading image: kubesphere/etcd:v3.4.13
11:07:04 CST message: [k8s-master-2-192-168-8-32]
downloading image: kubesphere/pause:3.7
11:07:04 CST message: [k8s-master-1-192-168-8-31]
downloading image: kubesphere/pause:3.7
11:07:04 CST message: [k8s-master-3-192-168-8-33]
downloading image: kubesphere/pause:3.7
11:07:04 CST message: [k8s-node-1-192-168-8-34]
downloading image: kubesphere/kube-proxy:v1.25.3
11:07:06 CST message: [k8s-master-2-192-168-8-32]
downloading image: kubesphere/kube-apiserver:v1.25.3
11:07:07 CST message: [k8s-master-3-192-168-8-33]
downloading image: kubesphere/kube-apiserver:v1.25.3
11:07:07 CST message: [k8s-master-1-192-168-8-31]
downloading image: kubesphere/kube-apiserver:v1.25.3
11:07:07 CST message: [k8s-node-1-192-168-8-34]
downloading image: coredns/coredns:1.8.6
11:07:09 CST message: [k8s-master-2-192-168-8-32]
downloading image: kubesphere/kube-controller-manager:v1.25.3
11:07:09 CST message: [k8s-master-1-192-168-8-31]
downloading image: kubesphere/kube-controller-manager:v1.25.3
11:07:09 CST message: [k8s-master-3-192-168-8-33]
downloading image: kubesphere/kube-controller-manager:v1.25.3
11:07:09 CST message: [k8s-node-1-192-168-8-34]
downloading image: kubesphere/k8s-dns-node-cache:1.15.12
11:07:11 CST message: [k8s-master-2-192-168-8-32]
downloading image: kubesphere/kube-scheduler:v1.25.3
11:07:11 CST message: [k8s-master-3-192-168-8-33]
downloading image: kubesphere/kube-scheduler:v1.25.3
11:07:12 CST message: [k8s-master-1-192-168-8-31]
downloading image: kubesphere/kube-scheduler:v1.25.3
11:07:12 CST message: [k8s-node-1-192-168-8-34]
downloading image: cilium/cilium:v1.11.6
11:07:14 CST message: [k8s-master-2-192-168-8-32]
downloading image: kubesphere/kube-proxy:v1.25.3
11:07:14 CST message: [k8s-master-3-192-168-8-33]
downloading image: kubesphere/kube-proxy:v1.25.3
11:07:14 CST message: [k8s-master-1-192-168-8-31]
downloading image: kubesphere/kube-proxy:v1.25.3
11:07:14 CST message: [k8s-node-1-192-168-8-34]
downloading image: cilium/operator-generic:v1.11.6
11:07:16 CST message: [k8s-master-2-192-168-8-32]
downloading image: coredns/coredns:1.8.6
11:07:16 CST message: [k8s-master-3-192-168-8-33]
downloading image: coredns/coredns:1.8.6
11:07:16 CST message: [k8s-master-1-192-168-8-31]
downloading image: coredns/coredns:1.8.6
11:07:18 CST message: [k8s-master-2-192-168-8-32]
downloading image: kubesphere/k8s-dns-node-cache:1.15.12
11:07:18 CST message: [k8s-master-1-192-168-8-31]
downloading image: kubesphere/k8s-dns-node-cache:1.15.12
11:07:19 CST message: [k8s-master-3-192-168-8-33]
downloading image: kubesphere/k8s-dns-node-cache:1.15.12
11:07:21 CST message: [k8s-master-2-192-168-8-32]
downloading image: cilium/cilium:v1.11.6
11:07:21 CST message: [k8s-master-1-192-168-8-31]
downloading image: cilium/cilium:v1.11.6
11:07:21 CST message: [k8s-master-3-192-168-8-33]
downloading image: cilium/cilium:v1.11.6
11:07:23 CST message: [k8s-master-1-192-168-8-31]
downloading image: cilium/operator-generic:v1.11.6
11:07:23 CST message: [k8s-master-2-192-168-8-32]
downloading image: cilium/operator-generic:v1.11.6
11:07:23 CST message: [k8s-master-3-192-168-8-33]
downloading image: cilium/operator-generic:v1.11.6
11:07:25 CST message: [k8s-master-3-192-168-8-33]
downloading image: plndr/kube-vip:v0.5.0
11:07:25 CST message: [k8s-master-1-192-168-8-31]
downloading image: plndr/kube-vip:v0.5.0
11:07:25 CST message: [k8s-master-2-192-168-8-32]
downloading image: plndr/kube-vip:v0.5.0
11:07:28 CST success: [k8s-node-1-192-168-8-34]
11:07:28 CST success: [k8s-master-3-192-168-8-33]
11:07:28 CST success: [k8s-master-2-192-168-8-32]
11:07:28 CST success: [k8s-master-1-192-168-8-31]
11:07:28 CST [InstallKubeBinariesModule] Synchronize kubernetes binaries
11:07:41 CST success: [k8s-master-1-192-168-8-31]
11:07:41 CST success: [k8s-node-1-192-168-8-34]
11:07:41 CST success: [k8s-master-3-192-168-8-33]
11:07:41 CST success: [k8s-master-2-192-168-8-32]
11:07:41 CST [InstallKubeBinariesModule] Synchronize kubelet
11:07:41 CST success: [k8s-master-3-192-168-8-33]
11:07:41 CST success: [k8s-master-2-192-168-8-32]
11:07:41 CST success: [k8s-master-1-192-168-8-31]
11:07:41 CST success: [k8s-node-1-192-168-8-34]
11:07:41 CST [InstallKubeBinariesModule] Generate kubelet service
11:07:41 CST success: [k8s-node-1-192-168-8-34]
11:07:41 CST success: [k8s-master-2-192-168-8-32]
11:07:41 CST success: [k8s-master-1-192-168-8-31]
11:07:41 CST success: [k8s-master-3-192-168-8-33]
11:07:41 CST [InstallKubeBinariesModule] Enable kubelet service
11:07:42 CST success: [k8s-master-2-192-168-8-32]
11:07:42 CST success: [k8s-node-1-192-168-8-34]
11:07:42 CST success: [k8s-master-3-192-168-8-33]
11:07:42 CST success: [k8s-master-1-192-168-8-31]
11:07:42 CST [InstallKubeBinariesModule] Generate kubelet env
11:07:42 CST success: [k8s-node-1-192-168-8-34]
11:07:42 CST success: [k8s-master-2-192-168-8-32]
11:07:42 CST success: [k8s-master-1-192-168-8-31]
11:07:42 CST success: [k8s-master-3-192-168-8-33]
11:07:42 CST [InternalLoadbalancerModule] Check VIP Address
11:07:42 CST skipped: [k8s-master-3-192-168-8-33]
11:07:42 CST success: [k8s-master-1-192-168-8-31]
11:07:42 CST skipped: [k8s-master-2-192-168-8-32]
11:07:42 CST [InternalLoadbalancerModule] Get Node Interface
11:07:42 CST success: [k8s-master-2-192-168-8-32]
11:07:42 CST success: [k8s-master-3-192-168-8-33]
11:07:42 CST success: [k8s-master-1-192-168-8-31]
11:07:42 CST [InternalLoadbalancerModule] Generate kubevip manifest at first master
11:07:42 CST skipped: [k8s-master-3-192-168-8-33]
11:07:42 CST skipped: [k8s-master-2-192-168-8-32]
11:07:42 CST success: [k8s-master-1-192-168-8-31]
11:07:42 CST [InitKubernetesModule] Generate kubeadm config
11:07:42 CST skipped: [k8s-master-3-192-168-8-33]
11:07:42 CST skipped: [k8s-master-2-192-168-8-32]
11:07:42 CST success: [k8s-master-1-192-168-8-31]
11:07:42 CST [InitKubernetesModule] Init cluster using kubeadm
11:09:46 CST stdout: [k8s-master-1-192-168-8-31]
W1111 11:07:42.449239    1308 common.go:84] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1111 11:07:42.450637    1308 common.go:84] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1111 11:07:42.452720    1308 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.25.3
[preflight] Running pre-flight checks
    [WARNING SystemVerification]: missing optional cgroups: blkio
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-1-192-168-8-31 k8s-master-1-192-168-8-31.cluster.local k8s-master-2-192-168-8-32 k8s-master-2-192-168-8-32.cluster.local k8s-master-3-192-168-8-33 k8s-master-3-192-168-8-33.cluster.local k8s-node-1-192-168-8-34 k8s-node-1-192-168-8-34.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.cluster.local localhost] and IPs [10.233.0.1 192.168.8.31 127.0.0.1 192.168.8.30 192.168.8.32 192.168.8.33 192.168.8.34]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-1-192-168-8-31 localhost] and IPs [192.168.8.31 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-1-192-168-8-31 localhost] and IPs [192.168.8.31 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
    - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
11:09:57 CST stdout: [k8s-master-1-192-168-8-31]
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1111 11:09:57.643988    1470 reset.go:103] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get "https://lb.cluster.local:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": dial tcp 192.168.8.30:6443: connect: no route to host
[preflight] Running pre-flight checks
W1111 11:09:57.644432    1470 removeetcdmember.go:85] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
11:09:57 CST message: [k8s-master-1-192-168-8-31]
init kubernetes cluster failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl,ImagePull" 
W1111 11:07:42.449239    1308 common.go:84] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1111 11:07:42.450637    1308 common.go:84] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1111 11:07:42.452720    1308 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.25.3
[preflight] Running pre-flight checks
    [WARNING SystemVerification]: missing optional cgroups: blkio
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-1-192-168-8-31 k8s-master-1-192-168-8-31.cluster.local k8s-master-2-192-168-8-32 k8s-master-2-192-168-8-32.cluster.local k8s-master-3-192-168-8-33 k8s-master-3-192-168-8-33.cluster.local k8s-node-1-192-168-8-34 k8s-node-1-192-168-8-34.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.cluster.local localhost] and IPs [10.233.0.1 192.168.8.31 127.0.0.1 192.168.8.30 192.168.8.32 192.168.8.33 192.168.8.34]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-1-192-168-8-31 localhost] and IPs [192.168.8.31 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-1-192-168-8-31 localhost] and IPs [192.168.8.31 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
    - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1
11:09:57 CST retry: [k8s-master-1-192-168-8-31]
11:11:59 CST stdout: [k8s-master-1-192-168-8-31]
W1111 11:10:02.683121    1488 common.go:84] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1111 11:10:02.684292    1488 common.go:84] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1111 11:10:02.685209    1488 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.25.3
[preflight] Running pre-flight checks
    [WARNING SystemVerification]: missing optional cgroups: blkio
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-1-192-168-8-31 k8s-master-1-192-168-8-31.cluster.local k8s-master-2-192-168-8-32 k8s-master-2-192-168-8-32.cluster.local k8s-master-3-192-168-8-33 k8s-master-3-192-168-8-33.cluster.local k8s-node-1-192-168-8-34 k8s-node-1-192-168-8-34.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.cluster.local localhost] and IPs [10.233.0.1 192.168.8.31 127.0.0.1 192.168.8.30 192.168.8.32 192.168.8.33 192.168.8.34]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-1-192-168-8-31 localhost] and IPs [192.168.8.31 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-1-192-168-8-31 localhost] and IPs [192.168.8.31 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
    - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
11:12:10 CST stdout: [k8s-master-1-192-168-8-31]
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1111 11:12:10.572192    1642 reset.go:103] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get "https://lb.cluster.local:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": dial tcp 192.168.8.30:6443: connect: no route to host
[preflight] Running pre-flight checks
W1111 11:12:10.572297    1642 removeetcdmember.go:85] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
11:12:10 CST message: [k8s-master-1-192-168-8-31]
init kubernetes cluster failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl,ImagePull" 
W1111 11:10:02.683121    1488 common.go:84] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1111 11:10:02.684292    1488 common.go:84] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1111 11:10:02.685209    1488 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.25.3
[preflight] Running pre-flight checks
    [WARNING SystemVerification]: missing optional cgroups: blkio
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-1-192-168-8-31 k8s-master-1-192-168-8-31.cluster.local k8s-master-2-192-168-8-32 k8s-master-2-192-168-8-32.cluster.local k8s-master-3-192-168-8-33 k8s-master-3-192-168-8-33.cluster.local k8s-node-1-192-168-8-34 k8s-node-1-192-168-8-34.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.cluster.local localhost] and IPs [10.233.0.1 192.168.8.31 127.0.0.1 192.168.8.30 192.168.8.32 192.168.8.33 192.168.8.34]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-1-192-168-8-31 localhost] and IPs [192.168.8.31 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-1-192-168-8-31 localhost] and IPs [192.168.8.31 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
    - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1
11:12:10 CST retry: [k8s-master-1-192-168-8-31]
11:14:12 CST stdout: [k8s-master-1-192-168-8-31]
W1111 11:12:15.610528    1659 common.go:84] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1111 11:12:15.611279    1659 common.go:84] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1111 11:12:15.613186    1659 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.25.3
[preflight] Running pre-flight checks
    [WARNING SystemVerification]: missing optional cgroups: blkio
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-1-192-168-8-31 k8s-master-1-192-168-8-31.cluster.local k8s-master-2-192-168-8-32 k8s-master-2-192-168-8-32.cluster.local k8s-master-3-192-168-8-33 k8s-master-3-192-168-8-33.cluster.local k8s-node-1-192-168-8-34 k8s-node-1-192-168-8-34.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.cluster.local localhost] and IPs [10.233.0.1 192.168.8.31 127.0.0.1 192.168.8.30 192.168.8.32 192.168.8.33 192.168.8.34]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-1-192-168-8-31 localhost] and IPs [192.168.8.31 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-1-192-168-8-31 localhost] and IPs [192.168.8.31 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
    - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
11:14:23 CST stdout: [k8s-master-1-192-168-8-31]
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1111 11:14:23.788099    1806 reset.go:103] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get "https://lb.cluster.local:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": dial tcp 192.168.8.30:6443: connect: no route to host
[preflight] Running pre-flight checks
W1111 11:14:23.788171    1806 removeetcdmember.go:85] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
11:14:23 CST message: [k8s-master-1-192-168-8-31]
init kubernetes cluster failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl,ImagePull" 
W1111 11:12:15.610528    1659 common.go:84] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1111 11:12:15.611279    1659 common.go:84] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1111 11:12:15.613186    1659 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.25.3
[preflight] Running pre-flight checks
    [WARNING SystemVerification]: missing optional cgroups: blkio
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-1-192-168-8-31 k8s-master-1-192-168-8-31.cluster.local k8s-master-2-192-168-8-32 k8s-master-2-192-168-8-32.cluster.local k8s-master-3-192-168-8-33 k8s-master-3-192-168-8-33.cluster.local k8s-node-1-192-168-8-34 k8s-node-1-192-168-8-34.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.cluster.local localhost] and IPs [10.233.0.1 192.168.8.31 127.0.0.1 192.168.8.30 192.168.8.32 192.168.8.33 192.168.8.34]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-1-192-168-8-31 localhost] and IPs [192.168.8.31 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-1-192-168-8-31 localhost] and IPs [192.168.8.31 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
    - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1
11:14:23 CST skipped: [k8s-master-3-192-168-8-33]
11:14:23 CST skipped: [k8s-master-2-192-168-8-32]
11:14:23 CST failed: [k8s-master-1-192-168-8-31]
error: Pipeline[CreateClusterPipeline] execute failed: Module[InitKubernetesModule] exec failed: 
failed: [k8s-master-1-192-168-8-31] [KubeadmInit] exec failed after 3 retires: init kubernetes cluster failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl,ImagePull" 
W1111 11:12:15.610528    1659 common.go:84] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1111 11:12:15.611279    1659 common.go:84] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1111 11:12:15.613186    1659 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.25.3
[preflight] Running pre-flight checks
    [WARNING SystemVerification]: missing optional cgroups: blkio
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-1-192-168-8-31 k8s-master-1-192-168-8-31.cluster.local k8s-master-2-192-168-8-32 k8s-master-2-192-168-8-32.cluster.local k8s-master-3-192-168-8-33 k8s-master-3-192-168-8-33.cluster.local k8s-node-1-192-168-8-34 k8s-node-1-192-168-8-34.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.cluster.local localhost] and IPs [10.233.0.1 192.168.8.31 127.0.0.1 192.168.8.30 192.168.8.32 192.168.8.33 192.168.8.34]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-1-192-168-8-31 localhost] and IPs [192.168.8.31 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-1-192-168-8-31 localhost] and IPs [192.168.8.31 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
    - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1

How should I handle this?

24sama commented 1 year ago

This may be a bug in v3.0.0, see #1581. You can try to use the latest version: https://github.com/kubesphere/kubekey/releases/tag/v3.0.1-alpha.0

bfbz commented 1 year ago

This may be a bug in v3.0.0, see #1581. You can try to use the latest version: https://github.com/kubesphere/kubekey/releases/tag/v3.0.1-alpha.0

It still doesn't work. The errors are as follows:


[root@k8s-master-1-192-168-8-31:~/kubekey]# ./kk create cluster -f config-k8s.yaml --yes

 _   __      _          _   __           
| | / /     | |        | | / /           
| |/ / _   _| |__   ___| |/ /  ___ _   _ 
|    \| | | | '_ \ / _ \    \ / _ \ | | |
| |\  \ |_| | |_) |  __/ |\  \  __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
                                    __/ |
                                   |___/

12:50:27 CST [GreetingsModule] Greetings
12:50:27 CST message: [k8s-node-1-192-168-8-34]
Greetings, KubeKey!
12:50:27 CST message: [k8s-master-2-192-168-8-32]
Greetings, KubeKey!
12:50:27 CST message: [k8s-master-1-192-168-8-31]
Greetings, KubeKey!
12:50:27 CST message: [k8s-master-3-192-168-8-33]
Greetings, KubeKey!
12:50:27 CST success: [k8s-node-1-192-168-8-34]
12:50:27 CST success: [k8s-master-2-192-168-8-32]
12:50:27 CST success: [k8s-master-1-192-168-8-31]
12:50:27 CST success: [k8s-master-3-192-168-8-33]
12:50:27 CST [NodePreCheckModule] A pre-check on nodes
12:50:27 CST success: [k8s-master-1-192-168-8-31]
12:50:27 CST success: [k8s-node-1-192-168-8-34]
12:50:27 CST success: [k8s-master-3-192-168-8-33]
12:50:27 CST success: [k8s-master-2-192-168-8-32]
12:50:27 CST [ConfirmModule] Display confirmation form
+---------------------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| name                      | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker | containerd | nfs client | ceph client | glusterfs client | time         |
+---------------------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| k8s-master-1-192-168-8-31 | y    | y    | y       | y        | y     | y     | y       | y         | y      |        | v1.6.4     | y          |             |                  | CST 12:50:27 |
| k8s-master-2-192-168-8-32 | y    | y    | y       | y        | y     | y     | y       | y         | y      |        | v1.6.4     | y          |             |                  | CST 12:50:27 |
| k8s-master-3-192-168-8-33 | y    | y    | y       | y        | y     | y     | y       | y         | y      |        | v1.6.4     | y          |             |                  | CST 12:50:27 |
| k8s-node-1-192-168-8-34   | y    | y    | y       | y        | y     | y     | y       | y         | y      |        | v1.6.4     | y          |             |                  | CST 12:50:27 |
+---------------------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

12:50:27 CST success: [LocalHost]
12:50:27 CST [NodeBinariesModule] Download installation binaries
12:50:27 CST message: [localhost]
downloading amd64 kubeadm v1.25.3 ...
12:50:28 CST message: [localhost]
kubeadm is existed
12:50:28 CST message: [localhost]
downloading amd64 kubelet v1.25.3 ...
12:50:28 CST message: [localhost]
kubelet is existed
12:50:28 CST message: [localhost]
downloading amd64 kubectl v1.25.3 ...
12:50:28 CST message: [localhost]
kubectl is existed
12:50:28 CST message: [localhost]
downloading amd64 helm v3.9.0 ...
12:50:29 CST message: [localhost]
helm is existed
12:50:29 CST message: [localhost]
downloading amd64 kubecni v0.9.1 ...
12:50:29 CST message: [localhost]
kubecni is existed
12:50:29 CST message: [localhost]
downloading amd64 crictl v1.24.0 ...
12:50:29 CST message: [localhost]
crictl is existed
12:50:29 CST message: [localhost]
downloading amd64 etcd v3.4.13 ...
12:50:29 CST message: [localhost]
etcd is existed
12:50:29 CST message: [localhost]
downloading amd64 containerd 1.6.4 ...
12:50:29 CST message: [localhost]
containerd is existed
12:50:29 CST message: [localhost]
downloading amd64 runc v1.1.1 ...
12:50:29 CST message: [localhost]
runc is existed
12:50:29 CST success: [LocalHost]
12:50:29 CST [ConfigureOSModule] Get OS release
12:50:29 CST success: [k8s-node-1-192-168-8-34]
12:50:29 CST success: [k8s-master-1-192-168-8-31]
12:50:29 CST success: [k8s-master-3-192-168-8-33]
12:50:29 CST success: [k8s-master-2-192-168-8-32]
12:50:29 CST [ConfigureOSModule] Prepare to init OS
12:50:30 CST success: [k8s-node-1-192-168-8-34]
12:50:30 CST success: [k8s-master-3-192-168-8-33]
12:50:30 CST success: [k8s-master-2-192-168-8-32]
12:50:30 CST success: [k8s-master-1-192-168-8-31]
12:50:30 CST [ConfigureOSModule] Generate init os script
12:50:30 CST success: [k8s-master-1-192-168-8-31]
12:50:30 CST success: [k8s-master-3-192-168-8-33]
12:50:30 CST success: [k8s-master-2-192-168-8-32]
12:50:30 CST success: [k8s-node-1-192-168-8-34]
12:50:30 CST [ConfigureOSModule] Exec init os script
12:50:30 CST stdout: [k8s-master-3-192-168-8-33]
net.ipv4.ip_forward = 1
fs.nr_open = 10000000
fs.file-max = 11000000
fs.inotify.max_user_watches = 11000000
vm.dirty_writeback_centisecs = 100
vm.dirty_expire_centisecs = 100
vm.overcommit_memory = 1
vm.max_map_count = 262144
net.ipv4.ip_local_port_range = 10000    65001
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 4000000
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 2048
vm.swappiness = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
12:50:30 CST stdout: [k8s-node-1-192-168-8-34]
net.ipv4.ip_forward = 1
fs.nr_open = 10000000
fs.file-max = 11000000
fs.inotify.max_user_watches = 11000000
vm.dirty_writeback_centisecs = 100
vm.dirty_expire_centisecs = 100
vm.overcommit_memory = 1
vm.max_map_count = 262144
net.ipv4.ip_local_port_range = 10000    65001
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 4000000
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 2048
vm.swappiness = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
12:50:30 CST stdout: [k8s-master-2-192-168-8-32]
net.ipv4.ip_forward = 1
fs.nr_open = 10000000
fs.file-max = 11000000
fs.inotify.max_user_watches = 11000000
vm.dirty_writeback_centisecs = 100
vm.dirty_expire_centisecs = 100
vm.overcommit_memory = 1
vm.max_map_count = 262144
net.ipv4.ip_local_port_range = 10000    65001
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 4000000
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 2048
vm.swappiness = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
12:50:30 CST stdout: [k8s-master-1-192-168-8-31]
net.ipv4.ip_forward = 1
fs.nr_open = 10000000
fs.file-max = 11000000
fs.inotify.max_user_watches = 11000000
vm.dirty_writeback_centisecs = 100
vm.dirty_expire_centisecs = 100
vm.overcommit_memory = 1
vm.max_map_count = 262144
net.ipv4.ip_local_port_range = 10000    65001
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 4000000
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 2048
vm.swappiness = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
12:50:30 CST success: [k8s-master-3-192-168-8-33]
12:50:30 CST success: [k8s-node-1-192-168-8-34]
12:50:30 CST success: [k8s-master-2-192-168-8-32]
12:50:30 CST success: [k8s-master-1-192-168-8-31]
12:50:30 CST [ConfigureOSModule] configure the ntp server for each node
12:50:31 CST stdout: [k8s-node-1-192-168-8-34]
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
12:50:31 CST stdout: [k8s-master-1-192-168-8-31]
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
12:50:31 CST stdout: [k8s-master-3-192-168-8-33]
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^? 203.107.6.88                  0   6     0     -     +0ns[   +0ns] +/-    0ns
^? 139.199.215.251               0   6     0     -     +0ns[   +0ns] +/-    0ns
^? stratum2-1.ntp.mow01.ru.>     0   6     0     -     +0ns[   +0ns] +/-    0ns
^? 119.28.206.193                0   6     0     -     +0ns[   +0ns] +/-    0ns
^? ntp7.flashdance.cx            0   6     0     -     +0ns[   +0ns] +/-    0ns
12:50:31 CST stdout: [k8s-master-2-192-168-8-32]
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^? 203.107.6.88                  0   6     0     -     +0ns[   +0ns] +/-    0ns
^? 139.199.215.251               0   6     0     -     +0ns[   +0ns] +/-    0ns
^? 119.28.206.193                0   6     0     -     +0ns[   +0ns] +/-    0ns
^? stratum2-1.ntp.mow01.ru.>     0   6     0     -     +0ns[   +0ns] +/-    0ns
^? 120.25.115.20                 0   6     0     -     +0ns[   +0ns] +/-    0ns
12:50:31 CST success: [k8s-node-1-192-168-8-34]
12:50:31 CST success: [k8s-master-1-192-168-8-31]
12:50:31 CST success: [k8s-master-3-192-168-8-33]
12:50:31 CST success: [k8s-master-2-192-168-8-32]
12:50:31 CST [KubernetesStatusModule] Get kubernetes cluster status
12:50:31 CST success: [k8s-master-1-192-168-8-31]
12:50:31 CST success: [k8s-master-2-192-168-8-32]
12:50:31 CST success: [k8s-master-3-192-168-8-33]
12:50:31 CST [InstallContainerModule] Sync containerd binaries
12:50:31 CST skipped: [k8s-node-1-192-168-8-34]
12:50:31 CST skipped: [k8s-master-1-192-168-8-31]
12:50:31 CST skipped: [k8s-master-3-192-168-8-33]
12:50:31 CST skipped: [k8s-master-2-192-168-8-32]
12:50:31 CST [InstallContainerModule] Sync crictl binaries
12:50:31 CST skipped: [k8s-master-1-192-168-8-31]
12:50:31 CST skipped: [k8s-master-2-192-168-8-32]
12:50:31 CST skipped: [k8s-master-3-192-168-8-33]
12:50:31 CST skipped: [k8s-node-1-192-168-8-34]
12:50:31 CST [InstallContainerModule] Generate containerd service
12:50:31 CST skipped: [k8s-master-1-192-168-8-31]
12:50:31 CST skipped: [k8s-node-1-192-168-8-34]
12:50:31 CST skipped: [k8s-master-3-192-168-8-33]
12:50:31 CST skipped: [k8s-master-2-192-168-8-32]
12:50:31 CST [InstallContainerModule] Generate containerd config
12:50:31 CST skipped: [k8s-node-1-192-168-8-34]
12:50:31 CST skipped: [k8s-master-2-192-168-8-32]
12:50:31 CST skipped: [k8s-master-1-192-168-8-31]
12:50:31 CST skipped: [k8s-master-3-192-168-8-33]
12:50:31 CST [InstallContainerModule] Generate crictl config
12:50:31 CST skipped: [k8s-master-1-192-168-8-31]
12:50:31 CST skipped: [k8s-master-3-192-168-8-33]
12:50:31 CST skipped: [k8s-node-1-192-168-8-34]
12:50:31 CST skipped: [k8s-master-2-192-168-8-32]
12:50:31 CST [InstallContainerModule] Enable containerd
12:50:31 CST skipped: [k8s-node-1-192-168-8-34]
12:50:31 CST skipped: [k8s-master-1-192-168-8-31]
12:50:31 CST skipped: [k8s-master-3-192-168-8-33]
12:50:31 CST skipped: [k8s-master-2-192-168-8-32]
12:50:31 CST [PullModule] Start to pull images on all nodes
12:50:31 CST message: [k8s-node-1-192-168-8-34]
downloading image: kubesphere/pause:3.8
12:50:31 CST message: [k8s-master-1-192-168-8-31]
downloading image: kubesphere/etcd:v3.4.13
12:50:31 CST message: [k8s-master-3-192-168-8-33]
downloading image: kubesphere/etcd:v3.4.13
12:50:31 CST message: [k8s-master-2-192-168-8-32]
downloading image: kubesphere/etcd:v3.4.13
12:50:33 CST message: [k8s-node-1-192-168-8-34]
downloading image: kubesphere/kube-proxy:v1.25.3
12:50:33 CST message: [k8s-master-3-192-168-8-33]
downloading image: kubesphere/pause:3.8
12:50:33 CST message: [k8s-master-2-192-168-8-32]
downloading image: kubesphere/pause:3.8
12:50:33 CST message: [k8s-master-1-192-168-8-31]
downloading image: kubesphere/pause:3.8
12:50:36 CST message: [k8s-node-1-192-168-8-34]
downloading image: coredns/coredns:1.9.3
12:50:36 CST message: [k8s-master-1-192-168-8-31]
downloading image: kubesphere/kube-apiserver:v1.25.3
12:50:36 CST message: [k8s-master-2-192-168-8-32]
downloading image: kubesphere/kube-apiserver:v1.25.3
12:50:36 CST message: [k8s-master-3-192-168-8-33]
downloading image: kubesphere/kube-apiserver:v1.25.3
12:50:38 CST message: [k8s-node-1-192-168-8-34]
downloading image: kubesphere/k8s-dns-node-cache:1.15.12
12:50:38 CST message: [k8s-master-2-192-168-8-32]
downloading image: kubesphere/kube-controller-manager:v1.25.3
12:50:38 CST message: [k8s-master-1-192-168-8-31]
downloading image: kubesphere/kube-controller-manager:v1.25.3
12:50:38 CST message: [k8s-master-3-192-168-8-33]
downloading image: kubesphere/kube-controller-manager:v1.25.3
12:50:40 CST message: [k8s-master-1-192-168-8-31]
downloading image: kubesphere/kube-scheduler:v1.25.3
12:50:40 CST message: [k8s-node-1-192-168-8-34]
downloading image: cilium/cilium:v1.11.6
12:50:40 CST message: [k8s-master-2-192-168-8-32]
downloading image: kubesphere/kube-scheduler:v1.25.3
12:50:41 CST message: [k8s-master-3-192-168-8-33]
downloading image: kubesphere/kube-scheduler:v1.25.3
12:50:42 CST message: [k8s-master-1-192-168-8-31]
downloading image: kubesphere/kube-proxy:v1.25.3
12:50:43 CST message: [k8s-master-2-192-168-8-32]
downloading image: kubesphere/kube-proxy:v1.25.3
12:50:43 CST message: [k8s-node-1-192-168-8-34]
downloading image: cilium/operator-generic:v1.11.6
12:50:43 CST message: [k8s-master-3-192-168-8-33]
downloading image: kubesphere/kube-proxy:v1.25.3
12:50:45 CST message: [k8s-master-1-192-168-8-31]
downloading image: coredns/coredns:1.9.3
12:50:45 CST message: [k8s-master-2-192-168-8-32]
downloading image: coredns/coredns:1.9.3
12:50:45 CST message: [k8s-master-3-192-168-8-33]
downloading image: coredns/coredns:1.9.3
12:50:47 CST message: [k8s-master-1-192-168-8-31]
downloading image: kubesphere/k8s-dns-node-cache:1.15.12
12:50:47 CST message: [k8s-master-2-192-168-8-32]
downloading image: kubesphere/k8s-dns-node-cache:1.15.12
12:50:48 CST message: [k8s-master-3-192-168-8-33]
downloading image: kubesphere/k8s-dns-node-cache:1.15.12
12:50:49 CST message: [k8s-master-2-192-168-8-32]
downloading image: cilium/cilium:v1.11.6
12:50:50 CST message: [k8s-master-1-192-168-8-31]
downloading image: cilium/cilium:v1.11.6
12:50:50 CST message: [k8s-master-3-192-168-8-33]
downloading image: cilium/cilium:v1.11.6
12:50:52 CST message: [k8s-master-2-192-168-8-32]
downloading image: cilium/operator-generic:v1.11.6
12:50:52 CST message: [k8s-master-1-192-168-8-31]
downloading image: cilium/operator-generic:v1.11.6
12:50:53 CST message: [k8s-master-3-192-168-8-33]
downloading image: cilium/operator-generic:v1.11.6
12:50:54 CST message: [k8s-master-1-192-168-8-31]
downloading image: plndr/kube-vip:v0.5.0
12:50:54 CST message: [k8s-master-2-192-168-8-32]
downloading image: plndr/kube-vip:v0.5.0
12:50:55 CST message: [k8s-master-3-192-168-8-33]
downloading image: plndr/kube-vip:v0.5.0
12:50:58 CST success: [k8s-node-1-192-168-8-34]
12:50:58 CST success: [k8s-master-1-192-168-8-31]
12:50:58 CST success: [k8s-master-2-192-168-8-32]
12:50:58 CST success: [k8s-master-3-192-168-8-33]
12:50:58 CST [InstallKubeBinariesModule] Synchronize kubernetes binaries
12:51:12 CST success: [k8s-master-1-192-168-8-31]
12:51:12 CST success: [k8s-master-3-192-168-8-33]
12:51:12 CST success: [k8s-master-2-192-168-8-32]
12:51:12 CST success: [k8s-node-1-192-168-8-34]
12:51:12 CST [InstallKubeBinariesModule] Synchronize kubelet
12:51:12 CST success: [k8s-master-1-192-168-8-31]
12:51:12 CST success: [k8s-master-3-192-168-8-33]
12:51:12 CST success: [k8s-node-1-192-168-8-34]
12:51:12 CST success: [k8s-master-2-192-168-8-32]
12:51:12 CST [InstallKubeBinariesModule] Generate kubelet service
12:51:12 CST success: [k8s-master-3-192-168-8-33]
12:51:12 CST success: [k8s-node-1-192-168-8-34]
12:51:12 CST success: [k8s-master-2-192-168-8-32]
12:51:12 CST success: [k8s-master-1-192-168-8-31]
12:51:12 CST [InstallKubeBinariesModule] Enable kubelet service
12:51:13 CST success: [k8s-node-1-192-168-8-34]
12:51:13 CST success: [k8s-master-3-192-168-8-33]
12:51:13 CST success: [k8s-master-1-192-168-8-31]
12:51:13 CST success: [k8s-master-2-192-168-8-32]
12:51:13 CST [InstallKubeBinariesModule] Generate kubelet env
12:51:13 CST success: [k8s-master-3-192-168-8-33]
12:51:13 CST success: [k8s-master-2-192-168-8-32]
12:51:13 CST success: [k8s-master-1-192-168-8-31]
12:51:13 CST success: [k8s-node-1-192-168-8-34]
12:51:13 CST [InternalLoadbalancerModule] Check VIP Address
12:51:13 CST skipped: [k8s-master-3-192-168-8-33]
12:51:13 CST success: [k8s-master-1-192-168-8-31]
12:51:13 CST skipped: [k8s-master-2-192-168-8-32]
12:51:13 CST [InternalLoadbalancerModule] Get Node Interface
12:51:13 CST success: [k8s-master-3-192-168-8-33]
12:51:13 CST success: [k8s-master-1-192-168-8-31]
12:51:13 CST success: [k8s-master-2-192-168-8-32]
12:51:13 CST [InternalLoadbalancerModule] Generate kubevip manifest at first master
12:51:13 CST skipped: [k8s-master-3-192-168-8-33]
12:51:13 CST skipped: [k8s-master-2-192-168-8-32]
12:51:13 CST success: [k8s-master-1-192-168-8-31]
12:51:13 CST [InitKubernetesModule] Generate kubeadm config
12:51:13 CST skipped: [k8s-master-3-192-168-8-33]
12:51:13 CST skipped: [k8s-master-2-192-168-8-32]
12:51:13 CST success: [k8s-master-1-192-168-8-31]
12:51:13 CST [InitKubernetesModule] Init cluster using kubeadm
12:53:10 CST stdout: [k8s-master-1-192-168-8-31]
W1111 12:51:13.711507   15357 common.go:84] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1111 12:51:13.713724   15357 common.go:84] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1111 12:51:13.716813   15357 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.25.3
[preflight] Running pre-flight checks
    [WARNING SystemVerification]: missing optional cgroups: blkio
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-1-192-168-8-31 k8s-master-1-192-168-8-31.cluster.local k8s-master-2-192-168-8-32 k8s-master-2-192-168-8-32.cluster.local k8s-master-3-192-168-8-33 k8s-master-3-192-168-8-33.cluster.local k8s-node-1-192-168-8-34 k8s-node-1-192-168-8-34.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.cluster.local localhost] and IPs [10.233.0.1 192.168.8.31 127.0.0.1 192.168.8.30 192.168.8.32 192.168.8.33 192.168.8.34]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-1-192-168-8-31 localhost] and IPs [192.168.8.31 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-1-192-168-8-31 localhost] and IPs [192.168.8.31 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
    - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
12:53:21 CST stdout: [k8s-master-1-192-168-8-31]
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1111 12:53:21.836073   15508 reset.go:103] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get "https://lb.cluster.local:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": dial tcp 192.168.8.30:6443: connect: no route to host
[preflight] Running pre-flight checks
W1111 12:53:21.836168   15508 removeetcdmember.go:85] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
12:53:21 CST message: [k8s-master-1-192-168-8-31]
init kubernetes cluster failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl,ImagePull" 
W1111 12:51:13.711507   15357 common.go:84] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1111 12:51:13.713724   15357 common.go:84] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1111 12:51:13.716813   15357 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.25.3
[preflight] Running pre-flight checks
    [WARNING SystemVerification]: missing optional cgroups: blkio
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-1-192-168-8-31 k8s-master-1-192-168-8-31.cluster.local k8s-master-2-192-168-8-32 k8s-master-2-192-168-8-32.cluster.local k8s-master-3-192-168-8-33 k8s-master-3-192-168-8-33.cluster.local k8s-node-1-192-168-8-34 k8s-node-1-192-168-8-34.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.cluster.local localhost] and IPs [10.233.0.1 192.168.8.31 127.0.0.1 192.168.8.30 192.168.8.32 192.168.8.33 192.168.8.34]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-1-192-168-8-31 localhost] and IPs [192.168.8.31 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-1-192-168-8-31 localhost] and IPs [192.168.8.31 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
    - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1
12:53:21 CST retry: [k8s-master-1-192-168-8-31]

The kubelet errors are as follows:

[root@k8s-master-1-192-168-8-31:~]# journalctl -xeu kubelet | less
-- Journal begins at Thu 2022-08-25 18:27:44 CST, ends at Fri 2022-11-11 12:46:20 CST. --
Nov 11 12:08:38 k8s-master-1-192-168-8-31 kubelet[6402]: I1111 12:08:38.788320    6402 server.go:200] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 11 12:08:38 k8s-master-1-192-168-8-31 kubelet[6402]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 11 12:08:38 k8s-master-1-192-168-8-31 kubelet[6402]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote'
Nov 11 12:08:38 k8s-master-1-192-168-8-31 kubelet[6402]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Nov 11 12:08:38 k8s-master-1-192-168-8-31 kubelet[6402]: W1111 12:08:38.789304    6402 feature_gate.go:237] Setting GA feature gate CSIStorageCapacity=true. It will be removed in a future release.
Nov 11 12:08:38 k8s-master-1-192-168-8-31 kubelet[6402]: W1111 12:08:38.789309    6402 feature_gate.go:237] Setting GA feature gate ExpandCSIVolumes=true. It will be removed in a future release.
Nov 11 12:08:38 k8s-master-1-192-168-8-31 kubelet[6402]: E1111 12:08:38.789320    6402 run.go:74] "command failed" err="failed to set feature gates from initial flags-based config: unrecognized feature gate: TTLAfterFinished"
Nov 11 12:08:38 k8s-master-1-192-168-8-31 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░ 
░░ An ExecStart= process belonging to unit kubelet.service has exited.
░░ 
░░ The process' exit code is 'exited' and its exit status is 1.
Nov 11 12:08:38 k8s-master-1-192-168-8-31 systemd[1]: kubelet.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░ 
░░ The unit kubelet.service has entered the 'failed' state with result 'exit-code'.
Nov 11 12:08:48 k8s-master-1-192-168-8-31 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
░░ Subject: Automatic restarting of a unit has been scheduled
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░ 
24sama commented 1 year ago

Hi @bfbz, maybe you need to use ./kk delete cluster -f config.yaml to clean the environment first, and then re-create the cluster.
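For reference, the full clean-and-retry cycle this suggests looks like the following (a minimal sketch; config-k8s.yaml is the file name used elsewhere in this thread):

# Tear down the previous failed attempt on all nodes, then retry the install.
./kk delete cluster -f config-k8s.yaml --yes
./kk create cluster -f config-k8s.yaml --yes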

bfbz commented 1 year ago

Nov 11 12:08:38 k8s-master-1-192-168-8-31 kubelet[6402]: E1111 12:08:38.789320 6402 run.go:74] "command failed" err="failed to set feature gates from initial flags-based config: unrecognized feature gate: TTLAfterFinished"

That is exactly what I did; I clean the environment every time before installing. Isn't the key point this kubelet error?

Nov 11 12:08:38 k8s-master-1-192-168-8-31 kubelet[6402]: E1111 12:08:38.789320    6402 run.go:74] "command failed" err="failed to set feature gates from initial flags-based config: unrecognized feature gate: TTLAfterFinished"
24sama commented 1 year ago

Are you sure you are using kk v3.0.1-alpha.0 with a clean environment? I tested it successfully with that version of kk.

Check whether the /etc/kubernetes directory still exists on any of the nodes. If it does, remove it.
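
A quick way to do that check from the machine where kk runs (a sketch only, assuming root SSH access to the four node IPs used in this issue):

    for node in 192.168.8.31 192.168.8.32 192.168.8.33 192.168.8.34; do
      # report and delete any leftover /etc/kubernetes directory from a previous attempt
      ssh root@${node} 'ls -d /etc/kubernetes 2>/dev/null && rm -rf /etc/kubernetes'
    done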

Similar issue: https://github.com/kubesphere/kubekey/issues/1593

bfbz commented 1 year ago

Are you sure you are using kk v3.0.1-alpha.0 with a clean environment? I tested it successfully with that version of kk.

Check whether the /etc/kubernetes directory still exists on any of the nodes. If it does, remove it.

Similar issue: #1593

   77  [2022-11-11 12:38:41 root] ls
   78  [2022-11-11 12:38:52 root] wget https://gh.flyinbug.top/gh/https://github.com/kubesphere/kubekey/releases/download/v3.0.1-alpha.0/kubekey-v3.0.1-alpha.0-linux-amd64.tar.gz
   79  [2022-11-11 12:38:56 root] ls
   80  [2022-11-11 12:38:59 root] tar -xf kubekey-v3.0.1-alpha.0-linux-amd64.tar.gz 
   81  [2022-11-11 12:39:00 root] ls
   82  [2022-11-11 12:39:04 root] vim config-k8s.yaml 
   83  [2022-11-11 12:39:13 root] history | grep ver
   84  [2022-11-11 12:39:15 root] ./kk version --show-supported-k8s
   85  [2022-11-11 12:39:16 root] ls
   86  [2022-11-11 12:39:18 root] vim config-k8s.yaml 
   87  [2022-11-11 12:39:34 root] ls
   88  [2022-11-11 12:39:43 root] ./kk create cluster -f config-k8s.yaml --yes
   89  [2022-11-11 12:45:51 root] ./kk create cluster -f config-k8s.yaml --yes
   90  [2022-11-11 12:49:57 root] ./kk delete cluster -f config-k8s.yaml --yes
   91  [2022-11-11 12:50:27 root] ./kk create cluster -f config-k8s.yaml --yes
   92  [2022-11-11 13:06:11 root] ./kk version --show-supported-k8s

When I switch the version to 1.24.x, it deploys normally.

24sama commented 1 year ago

I see the problem: you specified the feature gates in config.yaml. Please remove them.

(screenshot of the featureGates section in config.yaml)
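
For anyone hitting the same error: the kubelet journal above fails with "unrecognized feature gate: TTLAfterFinished" because that gate no longer exists in Kubernetes v1.25, while CSIStorageCapacity and ExpandCSIVolumes only trigger "Setting GA feature gate" warnings. A minimal sketch of the fix (gate names taken from the kubelet output in this issue, not from the repository docs) is to delete or comment out the whole block under kubernetes in config.yaml before running kk create cluster:

      kubernetes:
        version: v1.25.3
        ...
        # featureGates:                 # remove this block when installing v1.25+
        #   CSIStorageCapacity: true    # already GA, only produces a warning
        #   ExpandCSIVolumes: true      # already GA, only produces a warning
        #   TTLAfterFinished: true      # removed in v1.25; kubelet exits with
        #                               # "unrecognized feature gate" if it is set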
bfbz commented 1 year ago

It worked, thank you very much. I set that parameter based on the sample configuration, so you may want to add a note about it in the documentation. Also, I noticed some warning messages in the logs; those should not matter, right?

[root@k8s-master-1-192-168-8-31:~/kubekey]# ./kk create cluster -f config-k8s.yaml --yes

 _   __      _          _   __           
| | / /     | |        | | / /           
| |/ / _   _| |__   ___| |/ /  ___ _   _ 
|    \| | | | '_ \ / _ \    \ / _ \ | | |
| |\  \ |_| | |_) |  __/ |\  \  __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
                                    __/ |
                                   |___/

13:57:29 CST [GreetingsModule] Greetings
13:57:30 CST message: [k8s-node-1-192-168-8-34]
Greetings, KubeKey!
13:57:30 CST message: [k8s-master-2-192-168-8-32]
Greetings, KubeKey!
13:57:30 CST message: [k8s-master-1-192-168-8-31]
Greetings, KubeKey!
13:57:30 CST message: [k8s-master-3-192-168-8-33]
Greetings, KubeKey!
13:57:30 CST success: [k8s-node-1-192-168-8-34]
13:57:30 CST success: [k8s-master-2-192-168-8-32]
13:57:30 CST success: [k8s-master-1-192-168-8-31]
13:57:30 CST success: [k8s-master-3-192-168-8-33]
13:57:30 CST [NodePreCheckModule] A pre-check on nodes
13:57:30 CST success: [k8s-node-1-192-168-8-34]
13:57:30 CST success: [k8s-master-1-192-168-8-31]
13:57:30 CST success: [k8s-master-2-192-168-8-32]
13:57:30 CST success: [k8s-master-3-192-168-8-33]
13:57:30 CST [ConfirmModule] Display confirmation form
+---------------------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| name                      | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker | containerd | nfs client | ceph client | glusterfs client | time         |
+---------------------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| k8s-master-1-192-168-8-31 | y    | y    | y       | y        | y     | y     | y       | y         | y      |        | v1.6.4     | y          |             |                  | CST 13:57:30 |
| k8s-master-2-192-168-8-32 | y    | y    | y       | y        | y     | y     | y       | y         | y      |        | v1.6.4     | y          |             |                  | CST 13:57:30 |
| k8s-master-3-192-168-8-33 | y    | y    | y       | y        | y     | y     | y       | y         | y      |        | v1.6.4     | y          |             |                  | CST 13:57:30 |
| k8s-node-1-192-168-8-34   | y    | y    | y       | y        | y     | y     | y       | y         | y      |        | v1.6.4     | y          |             |                  | CST 13:57:30 |
+---------------------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

13:57:30 CST success: [LocalHost]
13:57:30 CST [NodeBinariesModule] Download installation binaries
13:57:30 CST message: [localhost]
downloading amd64 kubeadm v1.25.3 ...
13:57:31 CST message: [localhost]
kubeadm is existed
13:57:31 CST message: [localhost]
downloading amd64 kubelet v1.25.3 ...
13:57:31 CST message: [localhost]
kubelet is existed
13:57:31 CST message: [localhost]
downloading amd64 kubectl v1.25.3 ...
13:57:31 CST message: [localhost]
kubectl is existed
13:57:31 CST message: [localhost]
downloading amd64 helm v3.9.0 ...
13:57:32 CST message: [localhost]
helm is existed
13:57:32 CST message: [localhost]
downloading amd64 kubecni v0.9.1 ...
13:57:32 CST message: [localhost]
kubecni is existed
13:57:32 CST message: [localhost]
downloading amd64 crictl v1.24.0 ...
13:57:32 CST message: [localhost]
crictl is existed
13:57:32 CST message: [localhost]
downloading amd64 etcd v3.4.13 ...
13:57:32 CST message: [localhost]
etcd is existed
13:57:32 CST message: [localhost]
downloading amd64 containerd 1.6.4 ...
13:57:32 CST message: [localhost]
containerd is existed
13:57:32 CST message: [localhost]
downloading amd64 runc v1.1.1 ...
13:57:32 CST message: [localhost]
runc is existed
13:57:32 CST success: [LocalHost]
13:57:32 CST [ConfigureOSModule] Get OS release
13:57:32 CST success: [k8s-node-1-192-168-8-34]
13:57:32 CST success: [k8s-master-1-192-168-8-31]
13:57:32 CST success: [k8s-master-2-192-168-8-32]
13:57:32 CST success: [k8s-master-3-192-168-8-33]
13:57:32 CST [ConfigureOSModule] Prepare to init OS
13:57:33 CST success: [k8s-node-1-192-168-8-34]
13:57:33 CST success: [k8s-master-3-192-168-8-33]
13:57:33 CST success: [k8s-master-1-192-168-8-31]
13:57:33 CST success: [k8s-master-2-192-168-8-32]
13:57:33 CST [ConfigureOSModule] Generate init os script
13:57:33 CST success: [k8s-node-1-192-168-8-34]
13:57:33 CST success: [k8s-master-3-192-168-8-33]
13:57:33 CST success: [k8s-master-1-192-168-8-31]
13:57:33 CST success: [k8s-master-2-192-168-8-32]
13:57:33 CST [ConfigureOSModule] Exec init os script
13:57:33 CST stdout: [k8s-master-2-192-168-8-32]
net.ipv4.ip_forward = 1
fs.nr_open = 10000000
fs.file-max = 11000000
fs.inotify.max_user_watches = 11000000
vm.dirty_writeback_centisecs = 100
vm.dirty_expire_centisecs = 100
vm.overcommit_memory = 1
vm.max_map_count = 262144
net.ipv4.ip_local_port_range = 10000    65001
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 4000000
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 2048
vm.swappiness = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
13:57:33 CST stdout: [k8s-master-3-192-168-8-33]
net.ipv4.ip_forward = 1
fs.nr_open = 10000000
fs.file-max = 11000000
fs.inotify.max_user_watches = 11000000
vm.dirty_writeback_centisecs = 100
vm.dirty_expire_centisecs = 100
vm.overcommit_memory = 1
vm.max_map_count = 262144
net.ipv4.ip_local_port_range = 10000    65001
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 4000000
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 2048
vm.swappiness = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
13:57:33 CST stdout: [k8s-node-1-192-168-8-34]
net.ipv4.ip_forward = 1
fs.nr_open = 10000000
fs.file-max = 11000000
fs.inotify.max_user_watches = 11000000
vm.dirty_writeback_centisecs = 100
vm.dirty_expire_centisecs = 100
vm.overcommit_memory = 1
vm.max_map_count = 262144
net.ipv4.ip_local_port_range = 10000    65001
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 4000000
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 2048
vm.swappiness = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
13:57:33 CST stdout: [k8s-master-1-192-168-8-31]
net.ipv4.ip_forward = 1
fs.nr_open = 10000000
fs.file-max = 11000000
fs.inotify.max_user_watches = 11000000
vm.dirty_writeback_centisecs = 100
vm.dirty_expire_centisecs = 100
vm.overcommit_memory = 1
vm.max_map_count = 262144
net.ipv4.ip_local_port_range = 10000    65001
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 4000000
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 2048
vm.swappiness = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
13:57:33 CST success: [k8s-master-2-192-168-8-32]
13:57:33 CST success: [k8s-master-3-192-168-8-33]
13:57:33 CST success: [k8s-node-1-192-168-8-34]
13:57:33 CST success: [k8s-master-1-192-168-8-31]
13:57:33 CST [ConfigureOSModule] configure the ntp server for each node
13:57:34 CST stdout: [k8s-master-2-192-168-8-32]
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^? 203.107.6.88                  0   6     0     -     +0ns[   +0ns] +/-    0ns
^? 139.199.215.251               0   6     0     -     +0ns[   +0ns] +/-    0ns
^? 119.28.183.184                0   6     0     -     +0ns[   +0ns] +/-    0ns
^? ntp1.flashdance.cx            0   6     0     -     +0ns[   +0ns] +/-    0ns
^? ntp1.ams1.nl.leaseweb.net     0   6     0     -     +0ns[   +0ns] +/-    0ns
13:57:34 CST stdout: [k8s-master-3-192-168-8-33]
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
13:57:34 CST stdout: [k8s-node-1-192-168-8-34]
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^? 203.107.6.88                  0   6     0     -     +0ns[   +0ns] +/-    0ns
^? 139.199.215.251               0   6     0     -     +0ns[   +0ns] +/-    0ns
^? ntp6.flashdance.cx            0   6     0     -     +0ns[   +0ns] +/-    0ns
^? ntp1.flashdance.cx            0   6     0     -     +0ns[   +0ns] +/-    0ns
^? 119.28.183.184                0   6     0     -     +0ns[   +0ns] +/-    0ns
13:57:34 CST stdout: [k8s-master-1-192-168-8-31]
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^? 203.107.6.88                  0   6     0     -     +0ns[   +0ns] +/-    0ns
^? 139.199.215.251               0   6     0     -     +0ns[   +0ns] +/-    0ns
^? 119.28.183.184                0   6     0     -     +0ns[   +0ns] +/-    0ns
^? ntp1.flashdance.cx            0   6     0     -     +0ns[   +0ns] +/-    0ns
^? ntp1.ams1.nl.leaseweb.net     0   6     0     -     +0ns[   +0ns] +/-    0ns
13:57:34 CST success: [k8s-master-2-192-168-8-32]
13:57:34 CST success: [k8s-master-3-192-168-8-33]
13:57:34 CST success: [k8s-node-1-192-168-8-34]
13:57:34 CST success: [k8s-master-1-192-168-8-31]
13:57:34 CST [KubernetesStatusModule] Get kubernetes cluster status
13:57:34 CST success: [k8s-master-1-192-168-8-31]
13:57:34 CST success: [k8s-master-2-192-168-8-32]
13:57:34 CST success: [k8s-master-3-192-168-8-33]
13:57:34 CST [InstallContainerModule] Sync containerd binaries
13:57:34 CST skipped: [k8s-node-1-192-168-8-34]
13:57:34 CST skipped: [k8s-master-3-192-168-8-33]
13:57:34 CST skipped: [k8s-master-2-192-168-8-32]
13:57:34 CST skipped: [k8s-master-1-192-168-8-31]
13:57:34 CST [InstallContainerModule] Sync crictl binaries
13:57:34 CST skipped: [k8s-master-1-192-168-8-31]
13:57:34 CST skipped: [k8s-master-2-192-168-8-32]
13:57:34 CST skipped: [k8s-node-1-192-168-8-34]
13:57:34 CST skipped: [k8s-master-3-192-168-8-33]
13:57:34 CST [InstallContainerModule] Generate containerd service
13:57:34 CST skipped: [k8s-node-1-192-168-8-34]
13:57:34 CST skipped: [k8s-master-1-192-168-8-31]
13:57:34 CST skipped: [k8s-master-3-192-168-8-33]
13:57:34 CST skipped: [k8s-master-2-192-168-8-32]
13:57:34 CST [InstallContainerModule] Generate containerd config
13:57:34 CST skipped: [k8s-master-3-192-168-8-33]
13:57:34 CST skipped: [k8s-master-2-192-168-8-32]
13:57:34 CST skipped: [k8s-master-1-192-168-8-31]
13:57:34 CST skipped: [k8s-node-1-192-168-8-34]
13:57:34 CST [InstallContainerModule] Generate crictl config
13:57:34 CST skipped: [k8s-master-1-192-168-8-31]
13:57:34 CST skipped: [k8s-master-3-192-168-8-33]
13:57:34 CST skipped: [k8s-master-2-192-168-8-32]
13:57:34 CST skipped: [k8s-node-1-192-168-8-34]
13:57:34 CST [InstallContainerModule] Enable containerd
13:57:34 CST skipped: [k8s-master-3-192-168-8-33]
13:57:34 CST skipped: [k8s-master-2-192-168-8-32]
13:57:34 CST skipped: [k8s-master-1-192-168-8-31]
13:57:34 CST skipped: [k8s-node-1-192-168-8-34]
13:57:34 CST [PullModule] Start to pull images on all nodes
13:57:34 CST message: [k8s-node-1-192-168-8-34]
downloading image: kubesphere/pause:3.8
13:57:34 CST message: [k8s-master-2-192-168-8-32]
downloading image: kubesphere/etcd:v3.4.13
13:57:34 CST message: [k8s-master-1-192-168-8-31]
downloading image: kubesphere/etcd:v3.4.13
13:57:34 CST message: [k8s-master-3-192-168-8-33]
downloading image: kubesphere/etcd:v3.4.13
13:57:36 CST message: [k8s-node-1-192-168-8-34]
downloading image: kubesphere/kube-proxy:v1.25.3
13:57:37 CST message: [k8s-master-3-192-168-8-33]
downloading image: kubesphere/pause:3.8
13:57:37 CST message: [k8s-master-1-192-168-8-31]
downloading image: kubesphere/pause:3.8
13:57:37 CST message: [k8s-master-2-192-168-8-32]
downloading image: kubesphere/pause:3.8
13:57:39 CST message: [k8s-node-1-192-168-8-34]
downloading image: coredns/coredns:1.9.3
13:57:39 CST message: [k8s-master-3-192-168-8-33]
downloading image: kubesphere/kube-apiserver:v1.25.3
13:57:39 CST message: [k8s-master-1-192-168-8-31]
downloading image: kubesphere/kube-apiserver:v1.25.3
13:57:40 CST message: [k8s-master-2-192-168-8-32]
downloading image: kubesphere/kube-apiserver:v1.25.3
13:57:41 CST message: [k8s-master-3-192-168-8-33]
downloading image: kubesphere/kube-controller-manager:v1.25.3
13:57:41 CST message: [k8s-master-1-192-168-8-31]
downloading image: kubesphere/kube-controller-manager:v1.25.3
13:57:42 CST message: [k8s-node-1-192-168-8-34]
downloading image: kubesphere/k8s-dns-node-cache:1.15.12
13:57:42 CST message: [k8s-master-2-192-168-8-32]
downloading image: kubesphere/kube-controller-manager:v1.25.3
13:57:44 CST message: [k8s-master-1-192-168-8-31]
downloading image: kubesphere/kube-scheduler:v1.25.3
13:57:44 CST message: [k8s-master-3-192-168-8-33]
downloading image: kubesphere/kube-scheduler:v1.25.3
13:57:44 CST message: [k8s-node-1-192-168-8-34]
downloading image: cilium/cilium:v1.11.6
13:57:44 CST message: [k8s-master-2-192-168-8-32]
downloading image: kubesphere/kube-scheduler:v1.25.3
13:57:46 CST message: [k8s-master-3-192-168-8-33]
downloading image: kubesphere/kube-proxy:v1.25.3
13:57:46 CST message: [k8s-master-1-192-168-8-31]
downloading image: kubesphere/kube-proxy:v1.25.3
13:57:46 CST message: [k8s-node-1-192-168-8-34]
downloading image: cilium/operator-generic:v1.11.6
13:57:47 CST message: [k8s-master-2-192-168-8-32]
downloading image: kubesphere/kube-proxy:v1.25.3
13:57:48 CST message: [k8s-master-3-192-168-8-33]
downloading image: coredns/coredns:1.9.3
13:57:48 CST message: [k8s-master-1-192-168-8-31]
downloading image: coredns/coredns:1.9.3
13:57:49 CST message: [k8s-master-2-192-168-8-32]
downloading image: coredns/coredns:1.9.3
13:57:50 CST message: [k8s-master-1-192-168-8-31]
downloading image: kubesphere/k8s-dns-node-cache:1.15.12
13:57:51 CST message: [k8s-master-3-192-168-8-33]
downloading image: kubesphere/k8s-dns-node-cache:1.15.12
13:57:51 CST message: [k8s-master-2-192-168-8-32]
downloading image: kubesphere/k8s-dns-node-cache:1.15.12
13:57:53 CST message: [k8s-master-1-192-168-8-31]
downloading image: cilium/cilium:v1.11.6
13:57:53 CST message: [k8s-master-3-192-168-8-33]
downloading image: cilium/cilium:v1.11.6
13:57:54 CST message: [k8s-master-2-192-168-8-32]
downloading image: cilium/cilium:v1.11.6
13:57:55 CST message: [k8s-master-3-192-168-8-33]
downloading image: cilium/operator-generic:v1.11.6
13:57:55 CST message: [k8s-master-1-192-168-8-31]
downloading image: cilium/operator-generic:v1.11.6
13:57:56 CST message: [k8s-master-2-192-168-8-32]
downloading image: cilium/operator-generic:v1.11.6
13:57:58 CST message: [k8s-master-3-192-168-8-33]
downloading image: plndr/kube-vip:v0.5.0
13:57:58 CST message: [k8s-master-1-192-168-8-31]
downloading image: plndr/kube-vip:v0.5.0
13:57:59 CST message: [k8s-master-2-192-168-8-32]
downloading image: plndr/kube-vip:v0.5.0
13:58:01 CST success: [k8s-node-1-192-168-8-34]
13:58:01 CST success: [k8s-master-3-192-168-8-33]
13:58:01 CST success: [k8s-master-1-192-168-8-31]
13:58:01 CST success: [k8s-master-2-192-168-8-32]
13:58:01 CST [InstallKubeBinariesModule] Synchronize kubernetes binaries
13:58:15 CST success: [k8s-master-1-192-168-8-31]
13:58:15 CST success: [k8s-master-3-192-168-8-33]
13:58:15 CST success: [k8s-master-2-192-168-8-32]
13:58:15 CST success: [k8s-node-1-192-168-8-34]
13:58:15 CST [InstallKubeBinariesModule] Synchronize kubelet
13:58:15 CST success: [k8s-node-1-192-168-8-34]
13:58:15 CST success: [k8s-master-3-192-168-8-33]
13:58:15 CST success: [k8s-master-1-192-168-8-31]
13:58:15 CST success: [k8s-master-2-192-168-8-32]
13:58:15 CST [InstallKubeBinariesModule] Generate kubelet service
13:58:15 CST success: [k8s-master-3-192-168-8-33]
13:58:15 CST success: [k8s-node-1-192-168-8-34]
13:58:15 CST success: [k8s-master-2-192-168-8-32]
13:58:15 CST success: [k8s-master-1-192-168-8-31]
13:58:15 CST [InstallKubeBinariesModule] Enable kubelet service
13:58:16 CST success: [k8s-master-1-192-168-8-31]
13:58:16 CST success: [k8s-node-1-192-168-8-34]
13:58:16 CST success: [k8s-master-3-192-168-8-33]
13:58:16 CST success: [k8s-master-2-192-168-8-32]
13:58:16 CST [InstallKubeBinariesModule] Generate kubelet env
13:58:16 CST success: [k8s-master-1-192-168-8-31]
13:58:16 CST success: [k8s-master-3-192-168-8-33]
13:58:16 CST success: [k8s-master-2-192-168-8-32]
13:58:16 CST success: [k8s-node-1-192-168-8-34]
13:58:16 CST [InternalLoadbalancerModule] Check VIP Address
13:58:16 CST skipped: [k8s-master-3-192-168-8-33]
13:58:16 CST success: [k8s-master-1-192-168-8-31]
13:58:16 CST skipped: [k8s-master-2-192-168-8-32]
13:58:16 CST [InternalLoadbalancerModule] Get Node Interface
13:58:16 CST success: [k8s-master-3-192-168-8-33]
13:58:16 CST success: [k8s-master-1-192-168-8-31]
13:58:16 CST success: [k8s-master-2-192-168-8-32]
13:58:16 CST [InternalLoadbalancerModule] Generate kubevip manifest at first master
13:58:16 CST skipped: [k8s-master-3-192-168-8-33]
13:58:16 CST skipped: [k8s-master-2-192-168-8-32]
13:58:16 CST success: [k8s-master-1-192-168-8-31]
13:58:16 CST [InitKubernetesModule] Generate kubeadm config
13:58:16 CST skipped: [k8s-master-3-192-168-8-33]
13:58:16 CST skipped: [k8s-master-2-192-168-8-32]
13:58:16 CST success: [k8s-master-1-192-168-8-31]
13:58:16 CST [InitKubernetesModule] Init cluster using kubeadm
13:58:26 CST stdout: [k8s-master-1-192-168-8-31]
W1111 13:58:16.917460   16794 common.go:84] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1111 13:58:16.918848   16794 common.go:84] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1111 13:58:16.920173   16794 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.25.3
[preflight] Running pre-flight checks
    [WARNING SystemVerification]: missing optional cgroups: blkio
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-1-192-168-8-31 k8s-master-1-192-168-8-31.cluster.local k8s-master-2-192-168-8-32 k8s-master-2-192-168-8-32.cluster.local k8s-master-3-192-168-8-33 k8s-master-3-192-168-8-33.cluster.local k8s-node-1-192-168-8-34 k8s-node-1-192-168-8-34.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.cluster.local localhost] and IPs [10.233.0.1 192.168.8.31 127.0.0.1 192.168.8.30 192.168.8.32 192.168.8.33 192.168.8.34]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-1-192-168-8-31 localhost] and IPs [192.168.8.31 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-1-192-168-8-31 localhost] and IPs [192.168.8.31 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.569836 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master-1-192-168-8-31 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master-1-192-168-8-31 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: fjny47.0ijl33yh9iea32w6
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join lb.cluster.local:6443 --token fjny47.0ijl33yh9iea32w6 \
    --discovery-token-ca-cert-hash sha256:a4296e7d89bbecaf78fe193a22074997a8148846d7f940362eef4a399b1d5c39 \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join lb.cluster.local:6443 --token fjny47.0ijl33yh9iea32w6 \
    --discovery-token-ca-cert-hash sha256:a4296e7d89bbecaf78fe193a22074997a8148846d7f940362eef4a399b1d5c39
13:58:26 CST skipped: [k8s-master-3-192-168-8-33]
13:58:26 CST skipped: [k8s-master-2-192-168-8-32]
13:58:26 CST success: [k8s-master-1-192-168-8-31]
13:58:26 CST [InitKubernetesModule] Copy admin.conf to ~/.kube/config
13:58:26 CST skipped: [k8s-master-3-192-168-8-33]
13:58:26 CST skipped: [k8s-master-2-192-168-8-32]
13:58:26 CST success: [k8s-master-1-192-168-8-31]
13:58:26 CST [InitKubernetesModule] Remove master taint
13:58:26 CST skipped: [k8s-master-3-192-168-8-33]
13:58:26 CST skipped: [k8s-master-1-192-168-8-31]
13:58:26 CST skipped: [k8s-master-2-192-168-8-32]
13:58:26 CST [InitKubernetesModule] Add worker label
13:58:26 CST skipped: [k8s-master-3-192-168-8-33]
13:58:26 CST skipped: [k8s-master-1-192-168-8-31]
13:58:26 CST skipped: [k8s-master-2-192-168-8-32]
13:58:26 CST [ClusterDNSModule] Generate coredns service
13:58:27 CST skipped: [k8s-master-3-192-168-8-33]
13:58:27 CST skipped: [k8s-master-2-192-168-8-32]
13:58:27 CST success: [k8s-master-1-192-168-8-31]
13:58:27 CST [ClusterDNSModule] Override coredns service
13:58:27 CST stdout: [k8s-master-1-192-168-8-31]
service "kube-dns" deleted
13:58:27 CST stdout: [k8s-master-1-192-168-8-31]
service/coredns created
Warning: resource clusterroles/system:coredns is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrole.rbac.authorization.k8s.io/system:coredns configured
13:58:27 CST skipped: [k8s-master-3-192-168-8-33]
13:58:27 CST skipped: [k8s-master-2-192-168-8-32]
13:58:27 CST success: [k8s-master-1-192-168-8-31]
13:58:27 CST [ClusterDNSModule] Generate nodelocaldns
13:58:27 CST skipped: [k8s-master-3-192-168-8-33]
13:58:27 CST skipped: [k8s-master-2-192-168-8-32]
13:58:27 CST success: [k8s-master-1-192-168-8-31]
13:58:27 CST [ClusterDNSModule] Deploy nodelocaldns
13:58:28 CST stdout: [k8s-master-1-192-168-8-31]
serviceaccount/nodelocaldns created
daemonset.apps/nodelocaldns created
13:58:28 CST skipped: [k8s-master-3-192-168-8-33]
13:58:28 CST skipped: [k8s-master-2-192-168-8-32]
13:58:28 CST success: [k8s-master-1-192-168-8-31]
13:58:28 CST [ClusterDNSModule] Generate nodelocaldns configmap
13:58:28 CST skipped: [k8s-master-3-192-168-8-33]
13:58:28 CST skipped: [k8s-master-2-192-168-8-32]
13:58:28 CST success: [k8s-master-1-192-168-8-31]
13:58:28 CST [ClusterDNSModule] Apply nodelocaldns configmap
13:58:28 CST stdout: [k8s-master-1-192-168-8-31]
configmap/nodelocaldns created
13:58:28 CST skipped: [k8s-master-3-192-168-8-33]
13:58:28 CST skipped: [k8s-master-2-192-168-8-32]
13:58:28 CST success: [k8s-master-1-192-168-8-31]
13:58:28 CST [KubernetesStatusModule] Get kubernetes cluster status
13:58:28 CST stdout: [k8s-master-1-192-168-8-31]
v1.25.3
13:58:28 CST stdout: [k8s-master-1-192-168-8-31]
k8s-master-1-192-168-8-31   v1.25.3   [map[address:192.168.8.31 type:InternalIP] map[address:k8s-master-1-192-168-8-31 type:Hostname]]
13:58:29 CST stdout: [k8s-master-1-192-168-8-31]
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
b3e5ece1ac89ace3c06449f547f225d6850b9ea9909cf1b65e8fa263d4450a18
13:58:29 CST stdout: [k8s-master-1-192-168-8-31]
secret/kubeadm-certs patched
13:58:29 CST stdout: [k8s-master-1-192-168-8-31]
secret/kubeadm-certs patched
13:58:29 CST stdout: [k8s-master-1-192-168-8-31]
secret/kubeadm-certs patched
13:58:29 CST stdout: [k8s-master-1-192-168-8-31]
xmp4ko.zyb4h3xd96sj7a4o
13:58:29 CST success: [k8s-master-1-192-168-8-31]
13:58:29 CST success: [k8s-master-2-192-168-8-32]
13:58:29 CST success: [k8s-master-3-192-168-8-33]
13:58:29 CST [JoinNodesModule] Generate kubeadm config
13:58:29 CST skipped: [k8s-master-1-192-168-8-31]
13:58:29 CST success: [k8s-master-3-192-168-8-33]
13:58:29 CST success: [k8s-node-1-192-168-8-34]
13:58:29 CST success: [k8s-master-2-192-168-8-32]
13:58:29 CST [JoinNodesModule] Join control-plane node
13:58:59 CST stdout: [k8s-master-3-192-168-8-33]
W1111 13:58:29.978262   13726 common.go:84] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
[preflight] Running pre-flight checks
    [WARNING SystemVerification]: missing optional cgroups: blkio
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1111 13:58:42.427816   13726 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-3-192-168-8-33 localhost] and IPs [192.168.8.33 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-3-192-168-8-33 localhost] and IPs [192.168.8.33 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-1-192-168-8-31 k8s-master-1-192-168-8-31.cluster.local k8s-master-2-192-168-8-32 k8s-master-2-192-168-8-32.cluster.local k8s-master-3-192-168-8-33 k8s-master-3-192-168-8-33.cluster.local k8s-node-1-192-168-8-34 k8s-node-1-192-168-8-34.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.cluster.local localhost] and IPs [10.233.0.1 192.168.8.33 127.0.0.1 192.168.8.30 192.168.8.31 192.168.8.32 192.168.8.34]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node k8s-master-3-192-168-8-33 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master-3-192-168-8-33 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
13:59:05 CST stdout: [k8s-master-2-192-168-8-32]
W1111 13:58:29.975593   14486 common.go:84] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
[preflight] Running pre-flight checks
    [WARNING SystemVerification]: missing optional cgroups: blkio
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1111 13:58:42.425197   14486 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-2-192-168-8-32 localhost] and IPs [192.168.8.32 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-2-192-168-8-32 localhost] and IPs [192.168.8.32 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-1-192-168-8-31 k8s-master-1-192-168-8-31.cluster.local k8s-master-2-192-168-8-32 k8s-master-2-192-168-8-32.cluster.local k8s-master-3-192-168-8-33 k8s-master-3-192-168-8-33.cluster.local k8s-node-1-192-168-8-34 k8s-node-1-192-168-8-34.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.cluster.local localhost] and IPs [10.233.0.1 192.168.8.32 127.0.0.1 192.168.8.30 192.168.8.31 192.168.8.33 192.168.8.34]
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
{"level":"warn","ts":"2022-11-11T13:58:58.590+0800","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000373180/192.168.8.31:2379","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: unhealthy cluster"}
{"level":"warn","ts":"2022-11-11T13:58:58.701+0800","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0002c3180/192.168.8.31:2379","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: unhealthy cluster"}
{"level":"warn","ts":"2022-11-11T13:58:58.864+0800","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000372a80/192.168.8.31:2379","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: unhealthy cluster"}
{"level":"warn","ts":"2022-11-11T13:58:59.111+0800","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0002c2e00/192.168.8.31:2379","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: unhealthy cluster"}
{"level":"warn","ts":"2022-11-11T13:58:59.463+0800","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0002c3340/192.168.8.31:2379","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: unhealthy cluster"}
{"level":"warn","ts":"2022-11-11T13:58:59.984+0800","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000373500/192.168.8.31:2379","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: unhealthy cluster"}
{"level":"warn","ts":"2022-11-11T13:59:00.758+0800","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0002c2a80/192.168.8.31:2379","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: unhealthy cluster"}
{"level":"warn","ts":"2022-11-11T13:59:01.939+0800","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0002c3180/192.168.8.31:2379","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: unhealthy cluster"}
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node k8s-master-2-192-168-8-32 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master-2-192-168-8-32 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
13:59:05 CST skipped: [k8s-master-1-192-168-8-31]
13:59:05 CST success: [k8s-master-3-192-168-8-33]
13:59:05 CST success: [k8s-master-2-192-168-8-32]
13:59:05 CST [JoinNodesModule] Join worker node
13:59:19 CST stdout: [k8s-node-1-192-168-8-34]
W1111 13:59:05.358063   12000 common.go:84] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
[preflight] Running pre-flight checks
    [WARNING SystemVerification]: missing optional cgroups: blkio
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1111 13:59:05.486532   12000 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
13:59:19 CST success: [k8s-node-1-192-168-8-34]
13:59:19 CST [JoinNodesModule] Copy admin.conf to ~/.kube/config
13:59:19 CST skipped: [k8s-master-1-192-168-8-31]
13:59:19 CST success: [k8s-master-2-192-168-8-32]
13:59:19 CST success: [k8s-master-3-192-168-8-33]
13:59:19 CST [JoinNodesModule] Remove master taint
13:59:19 CST skipped: [k8s-master-3-192-168-8-33]
13:59:19 CST skipped: [k8s-master-1-192-168-8-31]
13:59:19 CST skipped: [k8s-master-2-192-168-8-32]
13:59:19 CST [JoinNodesModule] Add worker label to master
13:59:19 CST skipped: [k8s-master-3-192-168-8-33]
13:59:19 CST skipped: [k8s-master-1-192-168-8-31]
13:59:19 CST skipped: [k8s-master-2-192-168-8-32]
13:59:19 CST [JoinNodesModule] Synchronize kube config to worker
13:59:19 CST success: [k8s-node-1-192-168-8-34]
13:59:19 CST [JoinNodesModule] Add worker label to worker
13:59:19 CST stdout: [k8s-node-1-192-168-8-34]
node/k8s-node-1-192-168-8-34 labeled
13:59:19 CST success: [k8s-node-1-192-168-8-34]
13:59:19 CST [InternalLoadbalancerModule] Check VIP Address
13:59:19 CST skipped: [k8s-master-3-192-168-8-33]
13:59:19 CST skipped: [k8s-master-2-192-168-8-32]
13:59:19 CST success: [k8s-master-1-192-168-8-31]
13:59:19 CST [InternalLoadbalancerModule] Get Node Interface
13:59:19 CST success: [k8s-master-3-192-168-8-33]
13:59:19 CST success: [k8s-master-2-192-168-8-32]
13:59:19 CST success: [k8s-master-1-192-168-8-31]
13:59:19 CST [InternalLoadbalancerModule] Generate kubevip manifest at other master
13:59:19 CST skipped: [k8s-master-1-192-168-8-31]
13:59:19 CST success: [k8s-master-2-192-168-8-32]
13:59:19 CST success: [k8s-master-3-192-168-8-33]
13:59:19 CST [DeployNetworkPluginModule] Generate cilium chart
13:59:19 CST success: [LocalHost]
13:59:19 CST [DeployNetworkPluginModule] Synchronize kubernetes binaries
13:59:19 CST skipped: [k8s-master-3-192-168-8-33]
13:59:19 CST skipped: [k8s-master-2-192-168-8-32]
13:59:19 CST success: [k8s-master-1-192-168-8-31]
13:59:19 CST [DeployNetworkPluginModule] Deploy cilium
13:59:20 CST stdout: [k8s-master-1-192-168-8-31]
Release "cilium" does not exist. Installing it now.
W1111 13:59:20.216780   18092 warnings.go:70] spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[1].matchExpressions[0].key: beta.kubernetes.io/os is deprecated since v1.14; use "kubernetes.io/os" instead
NAME: cilium
LAST DEPLOYED: Fri Nov 11 13:59:19 2022
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
You have successfully installed Cilium with Hubble.

Your release version is 1.11.6.

For any further help, visit https://docs.cilium.io/en/v1.11/gettinghelp
13:59:20 CST skipped: [k8s-master-3-192-168-8-33]
13:59:20 CST skipped: [k8s-master-2-192-168-8-32]
13:59:20 CST success: [k8s-master-1-192-168-8-31]
13:59:20 CST [ConfigureKubernetesModule] Configure kubernetes
13:59:20 CST success: [k8s-node-1-192-168-8-34]
13:59:20 CST success: [k8s-master-3-192-168-8-33]
13:59:20 CST success: [k8s-master-2-192-168-8-32]
13:59:20 CST success: [k8s-master-1-192-168-8-31]
13:59:20 CST [ChownModule] Chown user $HOME/.kube dir
13:59:20 CST success: [k8s-node-1-192-168-8-34]
13:59:20 CST success: [k8s-master-1-192-168-8-31]
13:59:20 CST success: [k8s-master-2-192-168-8-32]
13:59:20 CST success: [k8s-master-3-192-168-8-33]
13:59:20 CST [AutoRenewCertsModule] Generate k8s certs renew script
13:59:20 CST success: [k8s-master-3-192-168-8-33]
13:59:20 CST success: [k8s-master-2-192-168-8-32]
13:59:20 CST success: [k8s-master-1-192-168-8-31]
13:59:20 CST [AutoRenewCertsModule] Generate k8s certs renew service
13:59:20 CST success: [k8s-master-2-192-168-8-32]
13:59:20 CST success: [k8s-master-3-192-168-8-33]
13:59:20 CST success: [k8s-master-1-192-168-8-31]
13:59:20 CST [AutoRenewCertsModule] Generate k8s certs renew timer
13:59:20 CST success: [k8s-master-3-192-168-8-33]
13:59:20 CST success: [k8s-master-1-192-168-8-31]
13:59:20 CST success: [k8s-master-2-192-168-8-32]
13:59:20 CST [AutoRenewCertsModule] Enable k8s certs renew service
13:59:21 CST success: [k8s-master-1-192-168-8-31]
13:59:21 CST success: [k8s-master-3-192-168-8-33]
13:59:21 CST success: [k8s-master-2-192-168-8-32]
13:59:21 CST [SaveKubeConfigModule] Save kube config as a configmap
13:59:21 CST success: [LocalHost]
13:59:21 CST [AddonsModule] Install addons
13:59:21 CST success: [LocalHost]
13:59:21 CST Pipeline[CreateClusterPipeline] execute successfully
Installation is complete.

Please check the result using the command:

    kubectl get pod -A

[root@k8s-master-1-192-168-8-31:~/kubekey]# kubectl get nodes
NAME                        STATUS     ROLES           AGE   VERSION
k8s-master-1-192-168-8-31   Ready      control-plane   71s   v1.25.3
k8s-master-2-192-168-8-32   Ready      control-plane   37s   v1.25.3
k8s-master-3-192-168-8-33   Ready      control-plane   37s   v1.25.3
k8s-node-1-192-168-8-34     NotReady   worker          16s   v1.25.3
24sama commented 1 year ago

Yeah, would you be interested in doing that? You are welcome to create a PR to fix it.

xiaods commented 1 year ago

@bfbz Please help update the sample configuration documentation based on your experience. Thanks.

Black-Gold commented 11 months ago

I see the problem: you specified the feature gates in config.yaml. Please remove them.

@24sama @xiaods I think the sample config documentation should be updated! Please add a comment explaining why the feature gates should be removed and when they should be removed. https://github.com/kubesphere/kubekey/blob/master/docs/config-example.md
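
For illustration only, one possible wording for such a note in the sample config documentation (hypothetical, not taken from the repository):

    featureGates:                # Optional. Only list feature gates that still exist in your target
                                 # Kubernetes version. A removed gate (e.g. TTLAfterFinished in v1.25)
                                 # makes kubelet exit at startup with "unrecognized feature gate".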