kubesphere / kubekey

Install Kubernetes/K3s only, or both Kubernetes/K3s and KubeSphere, plus related cloud-native add-ons. Supports all-in-one, multi-node, and HA deployments 🔥 ⎈ 🐳
https://kubesphere.io
Apache License 2.0

Install default cluster is not available #1637

Open · ghost opened this issue 2 years ago

ghost commented 2 years ago

Which version of KubeKey has the issue?

v3.0.2

What is your OS environment?

CentOS 7.9

KubeKey config file

No response

A clear and concise description of what happened.

Installing the default cluster does not work. Why are the files not copied from the local directory, but moved from /tmp instead?

tree kubekey/
kubekey/
├── cni
│   └── v0.9.1
│       └── amd64
│           └── cni-plugins-linux-amd64-v0.9.1.tgz
├── config-sample
├── crictl
│   └── v1.24.0
│       └── amd64
│           └── crictl-v1.24.0-linux-amd64.tar.gz
├── docker
│   └── 20.10.8
│       └── amd64
│           └── docker-20.10.8.tgz
├── etcd
│   └── v3.4.13
│       └── amd64
│           └── etcd-v3.4.13-linux-amd64.tar.gz
├── helm
│   └── v3.9.0
│       └── amd64
│           └── helm
├── kube
│   └── v1.23.10
│       └── amd64
│           ├── kubeadm
│           ├── kubectl
│           └── kubelet
├── logs
│   ├── kubekey.log -> kubekey.log.20221127
│   └── kubekey.log.20221127
├── master1
│   ├── 10-kubeadm.conf
│   ├── backup-etcd.service
│   ├── backup-etcd.timer
│   ├── etcd-backup.sh
│   ├── etcd.env
│   ├── etcd.service
│   ├── initOS.sh
│   ├── kubeadm-config.yaml
│   ├── kubelet.service
│   └── nodelocaldns.yaml
├── node1
│   ├── 10-kubeadm.conf
│   ├── initOS.sh
│   ├── kubeadm-config.yaml
│   └── kubelet.service
└── pki
    └── etcd
        ├── admin-master1-key.pem
        ├── admin-master1.pem
        ├── ca-key.pem
        ├── ca.pem
        ├── member-master1-key.pem
        ├── member-master1.pem
        ├── node-master1-key.pem
        └── node-master1.pem


Relevant log output

./kk create cluster -f config-sample.yaml

 _   __      _          _   __
| | / /     | |        | | / /
| |/ / _   _| |__   ___| |/ /  ___ _   _
|    \| | | | '_ \ / _ \    \ / _ \ | | |
| |\  \ |_| | |_) |  __/ |\  \  __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
                                    __/ |
                                   |___/

01:39:55 CST [GreetingsModule] Greetings
01:39:56 CST message: [node1]
Greetings, KubeKey!
01:39:56 CST message: [master1]
Greetings, KubeKey!
01:39:56 CST success: [node1]
01:39:56 CST success: [master1]
01:39:56 CST [NodePreCheckModule] A pre-check on nodes
01:39:56 CST success: [master1]
01:39:56 CST success: [node1]
01:39:56 CST [ConfirmModule] Display confirmation form
+---------+------+------+---------+----------+-------+-------+---------+-----------+--------+----------+------------+------------+-------------+------------------+--------------+
| name    | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker   | containerd | nfs client | ceph client | glusterfs client | time         |
+---------+------+------+---------+----------+-------+-------+---------+-----------+--------+----------+------------+------------+-------------+------------------+--------------+
| master1 | y    | y    | y       | y        | y     | y     | y       | y         | y      | 20.10.21 | 1.6.10     | y          |             |                  | CST 01:39:56 |
| node1   | y    | y    | y       | y        | y     | y     | y       | y         | y      | 20.10.21 | 1.6.10     | y          |             |                  | CST 01:39:56 |
+---------+------+------+---------+----------+-------+-------+---------+-----------+--------+----------+------------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: yes
01:39:58 CST success: [LocalHost]
01:39:58 CST [NodeBinariesModule] Download installation binaries
01:39:58 CST message: [localhost]
downloading amd64 kubeadm v1.23.10 ...
01:39:58 CST message: [localhost]
kubeadm is existed
01:39:58 CST message: [localhost]
downloading amd64 kubelet v1.23.10 ...
01:39:59 CST message: [localhost]
kubelet is existed
01:39:59 CST message: [localhost]
downloading amd64 kubectl v1.23.10 ...
01:39:59 CST message: [localhost]
kubectl is existed
01:39:59 CST message: [localhost]
downloading amd64 helm v3.9.0 ...
01:39:59 CST message: [localhost]
helm is existed
01:39:59 CST message: [localhost]
downloading amd64 kubecni v0.9.1 ...
01:39:59 CST message: [localhost]
kubecni is existed
01:39:59 CST message: [localhost]
downloading amd64 crictl v1.24.0 ...
01:39:59 CST message: [localhost]
crictl is existed
01:39:59 CST message: [localhost]
downloading amd64 etcd v3.4.13 ...
01:39:59 CST message: [localhost]
etcd is existed
01:39:59 CST message: [localhost]
downloading amd64 docker 20.10.8 ...
01:40:00 CST message: [localhost]
docker is existed
01:40:00 CST success: [LocalHost]
01:40:00 CST [ConfigureOSModule] Get OS release
01:40:00 CST success: [master1]
01:40:00 CST success: [node1]
01:40:00 CST [ConfigureOSModule] Prepare to init OS
01:40:00 CST success: [node1]
01:40:00 CST success: [master1]
01:40:00 CST [ConfigureOSModule] Generate init os script
01:40:00 CST message: [node1]
scp file /root/ks-yaml/kubekey/node1/initOS.sh to remote /usr/local/bin/kube-scripts/initOS.sh failed: Failed to exec command: sudo -E /bin/bash -c "mv -f /tmp/kubekey/usr/local/bin/kube-scripts/initOS.sh /usr/local/bin/kube-scripts/initOS.sh"
mv: cannot stat ‘/tmp/kubekey/usr/local/bin/kube-scripts/initOS.sh’: No such file or directory: Process exited with status 1
01:40:00 CST retry: [node1]
01:40:05 CST success: [master1]
01:40:05 CST success: [node1]
01:40:05 CST [ConfigureOSModule] Exec init os script
01:40:05 CST stdout: [master1]
setenforce: SELinux is disabled
Disabled
vm.drop_caches = 3
fs.file-max = 655360
vm.max_map_count = 262144
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-arptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
01:40:05 CST stdout: [node1]
setenforce: SELinux is disabled
Disabled
vm.drop_caches = 3
fs.file-max = 655360
vm.max_map_count = 262144
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-arptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
01:40:05 CST success: [master1]
01:40:05 CST success: [node1]
01:40:05 CST [ConfigureOSModule] configure the ntp server for each node
01:40:05 CST skipped: [node1]
01:40:05 CST skipped: [master1]
01:40:05 CST [KubernetesStatusModule] Get kubernetes cluster status
01:40:05 CST success: [master1]
01:40:05 CST [InstallContainerModule] Sync docker binaries
01:40:05 CST skipped: [node1]
01:40:05 CST skipped: [master1]
01:40:05 CST [InstallContainerModule] Generate docker service
01:40:06 CST skipped: [node1]
01:40:06 CST skipped: [master1]
01:40:06 CST [InstallContainerModule] Generate docker config
01:40:06 CST skipped: [master1]
01:40:06 CST skipped: [node1]
01:40:06 CST [InstallContainerModule] Enable docker
01:40:06 CST skipped: [node1]
01:40:06 CST skipped: [master1]
01:40:06 CST [InstallContainerModule] Add auths to container runtime
01:40:06 CST skipped: [node1]
01:40:06 CST skipped: [master1]
01:40:06 CST [PullModule] Start to pull images on all nodes
01:40:06 CST message: [node1]
downloading image: harbor.dockerregistry.com:8080/kubesphere/pause:3.6
01:40:06 CST message: [master1]
downloading image: harbor.dockerregistry.com:8080/kubesphere/pause:3.6
01:40:06 CST message: [master1]
downloading image: harbor.dockerregistry.com:8080/kubesphere/kube-apiserver:v1.23.10
01:40:06 CST message: [node1]
downloading image: harbor.dockerregistry.com:8080/kubesphere/kube-proxy:v1.23.10
01:40:06 CST message: [master1]
downloading image: harbor.dockerregistry.com:8080/kubesphere/kube-controller-manager:v1.23.10
01:40:06 CST message: [node1]
downloading image: harbor.dockerregistry.com:8080/coredns/coredns:1.8.6
01:40:06 CST message: [node1]
downloading image: harbor.dockerregistry.com:8080/kubesphere/k8s-dns-node-cache:1.15.12
01:40:06 CST message: [master1]
downloading image: harbor.dockerregistry.com:8080/kubesphere/kube-scheduler:v1.23.10
01:40:06 CST message: [master1]
downloading image: harbor.dockerregistry.com:8080/kubesphere/kube-proxy:v1.23.10
01:40:06 CST message: [node1]
downloading image: harbor.dockerregistry.com:8080/calico/kube-controllers:v3.23.2
01:40:06 CST message: [node1]
downloading image: harbor.dockerregistry.com:8080/calico/cni:v3.23.2
01:40:06 CST message: [master1]
downloading image: harbor.dockerregistry.com:8080/coredns/coredns:1.8.6
01:40:06 CST message: [node1]
downloading image: harbor.dockerregistry.com:8080/calico/node:v3.23.2
01:40:06 CST message: [master1]
downloading image: harbor.dockerregistry.com:8080/kubesphere/k8s-dns-node-cache:1.15.12
01:40:06 CST message: [master1]
downloading image: harbor.dockerregistry.com:8080/calico/kube-controllers:v3.23.2
01:40:06 CST message: [node1]
downloading image: harbor.dockerregistry.com:8080/calico/pod2daemon-flexvol:v3.23.2
01:40:06 CST message: [master1]
downloading image: harbor.dockerregistry.com:8080/calico/cni:v3.23.2
01:40:06 CST message: [master1]
downloading image: harbor.dockerregistry.com:8080/calico/node:v3.23.2
01:40:06 CST message: [master1]
downloading image: harbor.dockerregistry.com:8080/calico/pod2daemon-flexvol:v3.23.2
01:40:06 CST success: [node1]
01:40:06 CST success: [master1]
01:40:06 CST [ETCDPreCheckModule] Get etcd status
01:40:06 CST stdout: [master1]
ETCD_NAME=etcd-master1
01:40:06 CST success: [master1]
01:40:06 CST [CertsModule] Fetch etcd certs
01:40:07 CST success: [master1]
01:40:07 CST [CertsModule] Generate etcd Certs
[certs] Using existing ca certificate authority
[certs] Using existing admin-master1 certificate and key on disk
[certs] Using existing member-master1 certificate and key on disk
[certs] Using existing node-master1 certificate and key on disk
01:40:07 CST success: [LocalHost]
01:40:07 CST [CertsModule] Synchronize certs file
01:40:07 CST success: [master1]
01:40:07 CST [CertsModule] Synchronize certs file to master
01:40:07 CST skipped: [master1]
01:40:07 CST [InstallETCDBinaryModule] Install etcd using binary
01:40:08 CST success: [master1]
01:40:08 CST [InstallETCDBinaryModule] Generate etcd service
01:40:08 CST success: [master1]
01:40:08 CST [InstallETCDBinaryModule] Generate access address
01:40:08 CST success: [master1]
01:40:08 CST [ETCDConfigureModule] Health check on exist etcd
01:40:08 CST success: [master1]
01:40:08 CST [ETCDConfigureModule] Generate etcd.env config on new etcd
01:40:08 CST skipped: [master1]
01:40:08 CST [ETCDConfigureModule] Join etcd member
01:40:08 CST skipped: [master1]
01:40:08 CST [ETCDConfigureModule] Restart etcd
01:40:08 CST skipped: [master1]
01:40:08 CST [ETCDConfigureModule] Health check on new etcd
01:40:08 CST skipped: [master1]
01:40:08 CST [ETCDConfigureModule] Check etcd member
01:40:08 CST skipped: [master1]
01:40:08 CST [ETCDConfigureModule] Refresh etcd.env config on all etcd
01:40:08 CST success: [master1]
01:40:08 CST [ETCDConfigureModule] Health check on all etcd
01:40:08 CST success: [master1]
01:40:08 CST [ETCDBackupModule] Backup etcd data regularly
01:40:08 CST success: [master1]
01:40:08 CST [ETCDBackupModule] Generate backup ETCD service
01:40:08 CST success: [master1]
01:40:08 CST [ETCDBackupModule] Generate backup ETCD timer
01:40:09 CST success: [master1]
01:40:09 CST [ETCDBackupModule] Enable backup etcd service
01:40:09 CST success: [master1]
01:40:09 CST [InstallKubeBinariesModule] Synchronize kubernetes binaries
01:40:09 CST message: [node1]
sync kube binaries failed: Failed to exec command: sudo -E /bin/bash -c "mv -f /tmp/kubekey/usr/local/bin/kubeadm /usr/local/bin/kubeadm"
mv: cannot stat ‘/tmp/kubekey/usr/local/bin/kubeadm’: No such file or directory: Process exited with status 1
01:40:09 CST retry: [node1]
01:40:18 CST success: [master1]
01:40:18 CST success: [node1]
01:40:18 CST [InstallKubeBinariesModule] Synchronize kubelet
01:40:18 CST success: [master1]
01:40:18 CST success: [node1]
01:40:18 CST [InstallKubeBinariesModule] Generate kubelet service
01:40:18 CST message: [node1]
scp file /root/ks-yaml/kubekey/node1/kubelet.service to remote /etc/systemd/system/kubelet.service failed: Failed to exec command: sudo -E /bin/bash -c "mv -f /tmp/kubekey/etc/systemd/system/kubelet.service /etc/systemd/system/kubelet.service"
mv: cannot stat ‘/tmp/kubekey/etc/systemd/system/kubelet.service’: No such file or directory: Process exited with status 1
01:40:18 CST retry: [node1]
01:40:23 CST success: [master1]
01:40:23 CST success: [node1]
01:40:23 CST [InstallKubeBinariesModule] Enable kubelet service
01:40:23 CST success: [node1]
01:40:23 CST success: [master1]
01:40:23 CST [InstallKubeBinariesModule] Generate kubelet env
01:40:23 CST message: [node1]
scp file /root/ks-yaml/kubekey/node1/10-kubeadm.conf to remote /etc/systemd/system/kubelet.service.d/10-kubeadm.conf failed: validate md5sum failed b194bf159a239b005b04f8fc8893d3d7 != b2b80aade0b0045a66e263e919fbc4eb
01:40:23 CST retry: [node1]
01:40:28 CST success: [master1]
01:40:28 CST success: [node1]
01:40:28 CST [InitKubernetesModule] Generate kubeadm config
01:40:29 CST success: [master1]
01:40:29 CST [InitKubernetesModule] Init cluster using kubeadm
01:40:39 CST stdout: [master1]
W1127 01:40:29.312240   26718 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.23.10
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost master1 master1.cluster.local node1 node1.cluster.local] and IPs [10.233.0.1 192.168.145.99 127.0.0.1 192.168.145.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 4.003352 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: qs8f7n.y4j2v20vufe8veg6
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join lb.kubesphere.local:6443 --token qs8f7n.y4j2v20vufe8veg6 \
        --discovery-token-ca-cert-hash sha256:adc91ac1ce80391dd0927823749470fb3820decb6dd2ffacdf31e6467259f1fb \
        --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join lb.kubesphere.local:6443 --token qs8f7n.y4j2v20vufe8veg6 \
        --discovery-token-ca-cert-hash sha256:adc91ac1ce80391dd0927823749470fb3820decb6dd2ffacdf31e6467259f1fb
01:40:39 CST success: [master1]
01:40:39 CST [InitKubernetesModule] Copy admin.conf to ~/.kube/config
01:40:39 CST success: [master1]
01:40:39 CST [InitKubernetesModule] Remove master taint
01:40:39 CST skipped: [master1]
01:40:39 CST [InitKubernetesModule] Add worker label
01:40:39 CST skipped: [master1]
01:40:39 CST [ClusterDNSModule] Generate coredns service
01:40:39 CST skipped: [master1]
01:40:39 CST [ClusterDNSModule] Override coredns service
01:40:39 CST skipped: [master1]
01:40:39 CST [ClusterDNSModule] Generate nodelocaldns
01:40:39 CST success: [master1]
01:40:39 CST [ClusterDNSModule] Deploy nodelocaldns
01:40:40 CST stdout: [master1]
serviceaccount/nodelocaldns unchanged
daemonset.apps/nodelocaldns unchanged
01:40:40 CST success: [master1]
01:40:40 CST [ClusterDNSModule] Generate nodelocaldns configmap
01:40:40 CST skipped: [master1]
01:40:40 CST [ClusterDNSModule] Apply nodelocaldns configmap
01:40:40 CST skipped: [master1]
01:40:40 CST [KubernetesStatusModule] Get kubernetes cluster status
01:40:40 CST stdout: [master1]
v1.23.10
01:40:40 CST stdout: [master1]
master1   v1.23.10   [map[address:192.168.145.99 type:InternalIP] map[address:master1 type:Hostname]]
01:40:49 CST stdout: [master1]
I1127 01:40:46.908758   28165 version.go:255] remote version is much newer: v1.25.4; falling back to: stable-1.23
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
3fd2c99a8391603cddc0aa7182b74e62bd2793a25887f7decfd29f65aa2304e2
01:40:49 CST stdout: [master1]
secret/kubeadm-certs patched
01:40:49 CST stdout: [master1]
secret/kubeadm-certs patched
01:40:49 CST stdout: [master1]
secret/kubeadm-certs patched
01:40:49 CST stdout: [master1]
ywtoz5.reqbc79f8ohu5e6x
01:40:49 CST success: [master1]
01:40:49 CST [JoinNodesModule] Generate kubeadm config
01:40:49 CST skipped: [master1]
01:40:49 CST success: [node1]
01:40:49 CST [JoinNodesModule] Join control-plane node
01:40:49 CST skipped: [master1]
01:40:49 CST [JoinNodesModule] Join worker node
01:40:50 CST stdout: [node1]
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
        [ERROR Port-10250]: Port 10250 is in use
        [ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
01:41:01 CST stdout: [node1]
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1127 01:40:50.579440   28634 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[preflight] Running pre-flight checks
W1127 01:40:50.581270   28634 removeetcdmember.go:80] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
01:41:01 CST message: [node1]
join node failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm join --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl,ImagePull"
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
        [ERROR Port-10250]: Port 10250 is in use
        [ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1
01:41:01 CST retry: [node1]

Additional information

No response

24sama commented 2 years ago

Please pay attention to the error reported by node1. kk first scps the binaries from the local path on the workstation to the /tmp path on the remote node, and then moves them to their final path.
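
For reference, the transfer flow described above is roughly equivalent to the shell sequence below. This is only an illustrative sketch, not KubeKey's actual implementation; the paths are taken from the failing task in the log.

  # Simplified sketch of the scp-to-/tmp-then-mv flow (illustration only).
  SRC=/root/ks-yaml/kubekey/node1/initOS.sh                 # rendered locally by kk
  TMP=/tmp/kubekey/usr/local/bin/kube-scripts/initOS.sh     # staging path on node1
  DST=/usr/local/bin/kube-scripts/initOS.sh                 # final path on node1

  # 1. Create the staging directory and copy the file to /tmp on the remote node
  #    (runs as the normal SSH user, no root needed).
  ssh node1 "mkdir -p $(dirname "$TMP")"
  scp "$SRC" node1:"$TMP"

  # 2. Move it into place with sudo, as in the failing command from the log.
  #    If the staged file under /tmp/kubekey is missing at this point, the mv
  #    fails with "No such file or directory", as seen above.
  ssh node1 "sudo mv -f $TMP $DST"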

xiaods commented 1 year ago

@freemankevin your host is already running another kubelet process. Please make sure the environment is clean before using KK to install the cluster.
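
One possible way to clean node1 before retrying, based on the preflight errors above (kubelet.conf exists, port 10250 in use, pki/ca.crt exists) and on the paths that kubeadm reset reports deleting in the log. This is a sketch; adjust it to your environment before running.

  # Run on node1 to remove the half-joined state.
  sudo kubeadm reset -f                      # wipes /etc/kubernetes manifests/pki and kubelet state
  sudo systemctl stop kubelet                # make sure nothing is still listening on 10250
  sudo rm -rf /etc/kubernetes /var/lib/kubelet /etc/cni/net.d
  sudo iptables -F                           # reset rules, as suggested by the kubeadm reset output
  sudo ipvsadm --clear                       # only needed if kube-proxy ran in IPVS mode

  # Then retry the installation from the workstation:
  ./kk create cluster -f config-sample.yaml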