labring / sealos

Sealos is a production-ready Kubernetes distribution that provides a one-stop solution for both public and private clouds. You can run any Docker image on Sealos, start highly available databases such as MySQL, PostgreSQL, Redis, and MongoDB, and develop applications in any programming language.
https://cloud.sealos.io
Apache License 2.0

BUG: sealos run with Error: failed to run checker, failed to get host, failed to create ssh session, handshake failed #4204

Closed: john-deng closed this issue 6 months ago

john-deng commented 10 months ago

Sealos Version

v4.3.5

How to reproduce the bug?

  1. sealos run labring/kubernetes:v1.25.14-4.3.5

What is the expected behavior?

installation success

What do you see instead?

2023-10-30T08:42:10 debug using file /root/.config/containers/storage.conf as container storage config
Getting image source signatures
Copying blob ea362f878063 skipped: already exists
Copying config 900f77c3b7 done
Writing manifest to image destination
Storing signatures
2023-10-30T08:42:10 debug creating new cluster
2023-10-30T08:42:10 debug start to exec arch on 192.168.1.8:22
2023-10-30T08:42:10 warn failed to get host arch: failed to create ssh session for 192.168.1.8:22: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none], no supported methods remain, defaults to amd64
2023-10-30T08:42:10 debug defaultPort: 22
2023-10-30T08:42:10 info Start to create a new cluster: master [192.168.1.8], worker [], registry 192.168.1.8
2023-10-30T08:42:10 info Executing pipeline Check in CreateProcessor.
2023-10-30T08:42:10 info checker:hostname [192.168.1.8:22]
2023-10-30T08:42:10 debug start to exec hostname on 192.168.1.8:22
Error: failed to run checker: failed to get host 192.168.1.8:22 hostname, failed to create ssh session for 192.168.1.8:22: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none], no supported methods remain
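
The key detail is "attempted methods [none]": the SSH client offered neither a key nor a password, so the node refused the session before any sealos logic ran. As a hedged aside (not from the thread), the usual remedies are passwordless key-based SSH from the sealos machine to every node, or passing credentials to sealos explicitly. A minimal sketch, assuming root access to 192.168.1.8 and the flag names printed by "sealos run --help" on v4.x (verify them on your build):

    # Option 1: key-based SSH from the sealos host to the node
    ssh-keygen -t ed25519                 # only if no key pair exists yet
    ssh-copy-id root@192.168.1.8          # install the public key on the node
    ssh root@192.168.1.8 hostname         # must succeed without a prompt

    # Option 2: hand credentials to sealos directly
    sealos run labring/kubernetes:v1.25.14-4.3.5 \
      --masters 192.168.1.8 \
      -p '<root-password>'                # or --pk /root/.ssh/id_ed25519 for a key file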

Operating environment

- Sealos version: v4.3.5
- Docker version: containerd
- Kubernetes version: 1.25.14
- Operating system: Ubuntu 22.04
- Runtime environment: n/a
- Cluster size: n/a
- Additional information:  Running sealos on Proxmox VE

Additional information

No response

cuisongliu commented 10 months ago

Use an rc or stable release; try not to use alpha builds.


john-deng commented 10 months ago

@cuisongliu

The stable release has the same problem.

sealos version
SealosVersion:
  buildDate: "2023-10-20T14:15:00Z"
  compiler: gc
  gitCommit: a2719848
  gitVersion: 4.3.6
  goVersion: go1.20.10
  platform: linux/amd64

2023-10-30T09:50:28 debug get vip is 10.103.97.2 2023-10-30T09:50:28 debug start to exec kubeadm init --config=/root/.sealos/default/etc/kubeadm-init.yaml --skip-certificate-key-print --skip-token-print -v 6 --ignore-preflight-errors=SystemVerification on 192.168.1.8:22 192.168.1.8:22 I1030 09:50:28.470797 26082 initconfiguration.go:254] loading configuration from "/root/.sealos/default/etc/kubeadm-init.yaml" 192.168.1.8:22 W1030 09:50:28.474670 26082 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration! 192.168.1.8:22 W1030 09:50:28.474720 26082 utils.go:69] The recommended value for "healthzBindAddress" in "KubeletConfiguration" is: 127.0.0.1; the provided value is: 0.0.0.0 192.168.1.8:22 I1030 09:50:28.477331 26082 certs.go:522] validating certificate period for CA certificate 192.168.1.8:22 I1030 09:50:28.477384 26082 certs.go:522] validating certificate period for front-proxy CA certificate 192.168.1.8:22 [init] Using Kubernetes version: v1.25.14 192.168.1.8:22 [preflight] Running pre-flight checks 192.168.1.8:22 I1030 09:50:28.477481 26082 checks.go:568] validating Kubernetes and kubeadm version 192.168.1.8:22 I1030 09:50:28.477493 26082 checks.go:168] validating if the firewall is enabled and active 192.168.1.8:22 I1030 09:50:28.482491 26082 checks.go:203] validating availability of port 6443 192.168.1.8:22 I1030 09:50:28.482543 26082 checks.go:203] validating availability of port 10259 192.168.1.8:22 I1030 09:50:28.482551 26082 checks.go:203] validating availability of port 10257 192.168.1.8:22 I1030 09:50:28.482559 26082 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml 192.168.1.8:22 I1030 09:50:28.482570 26082 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml 192.168.1.8:22 I1030 09:50:28.482572 26082 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml 192.168.1.8:22 I1030 09:50:28.482574 26082 checks.go:280] validating the existence of file /etc/kubernetes/manifests/etcd.yaml 192.168.1.8:22 I1030 09:50:28.482578 26082 checks.go:430] validating if the connectivity type is via proxy or direct 192.168.1.8:22 I1030 09:50:28.482593 26082 checks.go:469] validating http connectivity to first IP address in the CIDR 192.168.1.8:22 I1030 09:50:28.482603 26082 checks.go:469] validating http connectivity to first IP address in the CIDR 192.168.1.8:22 I1030 09:50:28.482609 26082 checks.go:104] validating the container runtime 192.168.1.8:22 I1030 09:50:28.498016 26082 checks.go:329] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables 192.168.1.8:22 I1030 09:50:28.498064 26082 checks.go:329] validating the contents of file /proc/sys/net/ipv4/ip_forward 192.168.1.8:22 I1030 09:50:28.498075 26082 checks.go:644] validating whether swap is enabled or not 192.168.1.8:22 I1030 09:50:28.498528 26082 checks.go:370] validating the presence of executable crictl 192.168.1.8:22 I1030 09:50:28.498547 26082 checks.go:370] validating the presence of executable conntrack 192.168.1.8:22 I1030 09:50:28.498556 26082 checks.go:370] validating the presence of executable ip 192.168.1.8:22 I1030 09:50:28.498566 26082 checks.go:370] validating the presence of executable iptables 192.168.1.8:22 I1030 09:50:28.498577 26082 checks.go:370] validating 
the presence of executable mount 192.168.1.8:22 I1030 09:50:28.498584 26082 checks.go:370] validating the presence of executable nsenter 192.168.1.8:22 I1030 09:50:28.498591 26082 checks.go:370] validating the presence of executable ebtables 192.168.1.8:22 I1030 09:50:28.498598 26082 checks.go:370] validating the presence of executable ethtool 192.168.1.8:22 [WARNING FileExisting-ethtool]: ethtool not found in system path 192.168.1.8:22 I1030 09:50:28.498627 26082 checks.go:370] validating the presence of executable socat 192.168.1.8:22 [WARNING FileExisting-socat]: socat not found in system path 192.168.1.8:22 I1030 09:50:28.498641 26082 checks.go:370] validating the presence of executable tc 192.168.1.8:22 I1030 09:50:28.498709 26082 checks.go:370] validating the presence of executable touch 192.168.1.8:22 I1030 09:50:28.498720 26082 checks.go:516] running all checks 192.168.1.8:22 [preflight] The system verification failed. Printing the output from the verification: 192.168.1.8:22 KERNEL_VERSION: 6.2.16-12-pve 192.168.1.8:22 OS: Linux 192.168.1.8:22 CGROUPS_CPU: enabled 192.168.1.8:22 CGROUPS_CPUSET: enabled 192.168.1.8:22 CGROUPS_DEVICES: enabled 192.168.1.8:22 CGROUPS_FREEZER: enabled 192.168.1.8:22 CGROUPS_MEMORY: enabled 192.168.1.8:22 CGROUPS_PIDS: enabled 192.168.1.8:22 CGROUPS_HUGETLB: enabled 192.168.1.8:22 CGROUPS_IO: enabled 192.168.1.8:22 [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.2.16-12-pve\n", err: exit status 1 192.168.1.8:22 I1030 09:50:28.501672 26082 checks.go:401] checking whether the given node name is valid and reachable using net.LookupHost 192.168.1.8:22 I1030 09:50:28.501788 26082 checks.go:610] validating kubelet version 192.168.1.8:22 I1030 09:50:28.535585 26082 checks.go:130] validating if the "kubelet" service is enabled and active 192.168.1.8:22 I1030 09:50:28.541881 26082 checks.go:203] validating availability of port 10250 192.168.1.8:22 I1030 09:50:28.541930 26082 checks.go:203] validating availability of port 2379 192.168.1.8:22 I1030 09:50:28.541948 26082 checks.go:203] validating availability of port 2380 192.168.1.8:22 I1030 09:50:28.541964 26082 checks.go:243] validating the existence and emptiness of directory /var/lib/etcd 192.168.1.8:22 [preflight] Pulling images required for setting up a Kubernetes cluster 192.168.1.8:22 [preflight] This might take a minute or two, depending on the speed of your internet connection 192.168.1.8:22 [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' 192.168.1.8:22 I1030 09:50:28.542029 26082 checks.go:832] using image pull policy: IfNotPresent 192.168.1.8:22 I1030 09:50:28.618026 26082 checks.go:849] pulling: registry.k8s.io/kube-apiserver:v1.25.14 192.168.1.8:22 I1030 09:50:30.842407 26082 checks.go:849] pulling: registry.k8s.io/kube-controller-manager:v1.25.14 192.168.1.8:22 I1030 09:50:32.426718 26082 checks.go:849] pulling: registry.k8s.io/kube-scheduler:v1.25.14 192.168.1.8:22 I1030 09:50:33.424491 26082 checks.go:849] pulling: registry.k8s.io/kube-proxy:v1.25.14 192.168.1.8:22 I1030 09:50:34.670476 26082 checks.go:841] image exists: registry.k8s.io/pause:3.8 192.168.1.8:22 I1030 09:50:34.747689 26082 checks.go:849] pulling: registry.k8s.io/etcd:3.5.6-0 192.168.1.8:22 I1030 09:50:38.663644 26082 checks.go:849] pulling: registry.k8s.io/coredns/coredns:v1.9.3 192.168.1.8:22 [certs] Using certificateDir folder "/etc/kubernetes/pki" 
192.168.1.8:22 I1030 09:50:39.601746 26082 certs.go:522] validating certificate period for ca certificate 192.168.1.8:22 [certs] Using existing ca certificate authority 192.168.1.8:22 I1030 09:50:39.602124 26082 certs.go:522] validating certificate period for apiserver certificate 192.168.1.8:22 [certs] Using existing apiserver certificate and key on disk 192.168.1.8:22 I1030 09:50:39.602678 26082 certs.go:522] validating certificate period for apiserver-kubelet-client certificate 192.168.1.8:22 [certs] Using existing apiserver-kubelet-client certificate and key on disk 192.168.1.8:22 I1030 09:50:39.603215 26082 certs.go:522] validating certificate period for front-proxy-ca certificate 192.168.1.8:22 [certs] Using existing front-proxy-ca certificate authority 192.168.1.8:22 I1030 09:50:39.603533 26082 certs.go:522] validating certificate period for front-proxy-client certificate 192.168.1.8:22 [certs] Using existing front-proxy-client certificate and key on disk 192.168.1.8:22 I1030 09:50:39.604086 26082 certs.go:522] validating certificate period for etcd/ca certificate 192.168.1.8:22 [certs] Using existing etcd/ca certificate authority 192.168.1.8:22 I1030 09:50:39.604409 26082 certs.go:522] validating certificate period for etcd/server certificate 192.168.1.8:22 [certs] Using existing etcd/server certificate and key on disk 192.168.1.8:22 I1030 09:50:39.604948 26082 certs.go:522] validating certificate period for etcd/peer certificate 192.168.1.8:22 [certs] Using existing etcd/peer certificate and key on disk 192.168.1.8:22 I1030 09:50:39.605489 26082 certs.go:522] validating certificate period for etcd/healthcheck-client certificate 192.168.1.8:22 [certs] Using existing etcd/healthcheck-client certificate and key on disk 192.168.1.8:22 I1030 09:50:39.606034 26082 certs.go:522] validating certificate period for apiserver-etcd-client certificate 192.168.1.8:22 [certs] Using existing apiserver-etcd-client certificate and key on disk 192.168.1.8:22 I1030 09:50:39.606558 26082 certs.go:78] creating new public/private key files for signing service account users 192.168.1.8:22 [certs] Using the existing "sa" key 192.168.1.8:22 [kubeconfig] Using kubeconfig folder "/etc/kubernetes" 192.168.1.8:22 I1030 09:50:39.607460 26082 kubeconfig.go:103] creating kubeconfig file for admin.conf 192.168.1.8:22 I1030 09:50:39.803507 26082 loader.go:374] Config loaded from file: /etc/kubernetes/admin.conf 192.168.1.8:22 [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf" 192.168.1.8:22 I1030 09:50:39.803530 26082 kubeconfig.go:103] creating kubeconfig file for kubelet.conf 192.168.1.8:22 I1030 09:50:40.025844 26082 loader.go:374] Config loaded from file: /etc/kubernetes/kubelet.conf 192.168.1.8:22 [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf" 192.168.1.8:22 I1030 09:50:40.025864 26082 kubeconfig.go:103] creating kubeconfig file for controller-manager.conf 192.168.1.8:22 I1030 09:50:40.246993 26082 loader.go:374] Config loaded from file: /etc/kubernetes/controller-manager.conf 192.168.1.8:22 W1030 09:50:40.247010 26082 kubeconfig.go:249] a kubeconfig file "/etc/kubernetes/controller-manager.conf" exists already but has an unexpected API Server URL: expected: https://192.168.1.8:6443, got: https://apiserver.cluster.local:6443 192.168.1.8:22 [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf" 192.168.1.8:22 I1030 09:50:40.247025 26082 kubeconfig.go:103] creating kubeconfig file for scheduler.conf 192.168.1.8:22 I1030 
09:50:40.514706 26082 loader.go:374] Config loaded from file: /etc/kubernetes/scheduler.conf 192.168.1.8:22 W1030 09:50:40.514722 26082 kubeconfig.go:249] a kubeconfig file "/etc/kubernetes/scheduler.conf" exists already but has an unexpected API Server URL: expected: https://192.168.1.8:6443, got: https://apiserver.cluster.local:6443 192.168.1.8:22 [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf" 192.168.1.8:22 I1030 09:50:40.514734 26082 kubelet.go:66] Stopping the kubelet 192.168.1.8:22 [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" 192.168.1.8:22 [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" 192.168.1.8:22 [kubelet-start] Starting the kubelet 192.168.1.8:22 [control-plane] Using manifest folder "/etc/kubernetes/manifests" 192.168.1.8:22 [control-plane] Creating static Pod manifest for "kube-apiserver" 192.168.1.8:22 I1030 09:50:40.728953 26082 manifests.go:99] [control-plane] getting StaticPodSpecs 192.168.1.8:22 I1030 09:50:40.729187 26082 manifests.go:125] [control-plane] adding volume "audit" for component "kube-apiserver" 192.168.1.8:22 I1030 09:50:40.729200 26082 manifests.go:125] [control-plane] adding volume "audit-log" for component "kube-apiserver" 192.168.1.8:22 I1030 09:50:40.729205 26082 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-apiserver" 192.168.1.8:22 I1030 09:50:40.729209 26082 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver" 192.168.1.8:22 I1030 09:50:40.729214 26082 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-apiserver" 192.168.1.8:22 I1030 09:50:40.729218 26082 manifests.go:125] [control-plane] adding volume "localtime" for component "kube-apiserver" 192.168.1.8:22 I1030 09:50:40.729223 26082 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver" 192.168.1.8:22 I1030 09:50:40.729227 26082 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver" 192.168.1.8:22 I1030 09:50:40.732795 26082 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml" 192.168.1.8:22 [control-plane] Creating static Pod manifest for "kube-controller-manager" 192.168.1.8:22 I1030 09:50:40.732820 26082 manifests.go:99] [control-plane] getting StaticPodSpecs 192.168.1.8:22 I1030 09:50:40.733107 26082 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-controller-manager" 192.168.1.8:22 I1030 09:50:40.733118 26082 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager" 192.168.1.8:22 I1030 09:50:40.733124 26082 manifests.go:125] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager" 192.168.1.8:22 I1030 09:50:40.733129 26082 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager" 192.168.1.8:22 I1030 09:50:40.733133 26082 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager" 192.168.1.8:22 I1030 09:50:40.733138 26082 manifests.go:125] [control-plane] adding volume "localtime" for component "kube-controller-manager" 192.168.1.8:22 I1030 09:50:40.733142 26082 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component 
"kube-controller-manager" 192.168.1.8:22 I1030 09:50:40.733147 26082 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager" 192.168.1.8:22 I1030 09:50:40.733779 26082 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml" 192.168.1.8:22 [control-plane] Creating static Pod manifest for "kube-scheduler" 192.168.1.8:22 I1030 09:50:40.733792 26082 manifests.go:99] [control-plane] getting StaticPodSpecs 192.168.1.8:22 I1030 09:50:40.733939 26082 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-scheduler" 192.168.1.8:22 I1030 09:50:40.733947 26082 manifests.go:125] [control-plane] adding volume "localtime" for component "kube-scheduler" 192.168.1.8:22 I1030 09:50:40.734292 26082 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml" 192.168.1.8:22 [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" 192.168.1.8:22 I1030 09:50:40.734789 26082 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml" 192.168.1.8:22 I1030 09:50:40.734798 26082 waitcontrolplane.go:83] [wait-control-plane] Waiting for the API server to be healthy 192.168.1.8:22 I1030 09:50:40.735112 26082 loader.go:374] Config loaded from file: /etc/kubernetes/admin.conf 192.168.1.8:22 [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s 192.168.1.8:22 I1030 09:50:40.739100 26082 round_trippers.go:553] GET https://apiserver.cluster.local:6443/healthz?timeout=10s in 0 milliseconds 192.168.1.8:22 I1030 09:50:41.240271 26082 round_trippers.go:553] GET https://apiserver.cluster.local:6443/healthz?timeout=10s in 0 milliseconds 192.168.1.8:22 I1030 09:50:41.739831 26082 round_trippers.go:553] GET https://apiserver.cluster.local:6443/healthz?timeout=10s in 0 milliseconds 192.168.1.8:22 I1030 09:50:42.239772 26082 round_trippers.go:553] GET https://apiserver.cluster.local:6443/healthz?timeout=10s in 0 milliseconds 192.168.1.8:22 I1030 09:50:42.739737 26082 round_trippers.go:553] GET https://apiserver.cluster.local:6443/healthz?timeout=10s in 0 milliseconds 192.168.1.8:22 I1030 09:50:43.240664 26082 round_trippers.go:553] GET https://apiserver.cluster.local:6443/healthz?timeout=10s in 0 milliseconds 192.168.1.8:22 I1030 09:50:43.740576 26082 round_trippers.go:553] GET https://apiserver.cluster.local:6443/healthz?timeout=10s in 0 milliseconds 192.168.1.8:22 I1030 09:50:44.240648 26082 round_trippers.go:553] GET https://apiserver.cluster.local:6443/healthz?timeout=10s in 0 millisecond

cuisongliu commented 10 months ago

Use 4.3.5.


john-deng commented 10 months ago

@cuisongliu I tried that and it still doesn't work.

sealos version
SealosVersion:
  buildDate: "2023-10-09T10:07:15Z"
  compiler: gc
  gitCommit: 881c10cb
  gitVersion: 4.3.5
  goVersion: go1.20.8
  platform: linux/amd64

systemctl status kubelet.service

sealos run kubernetes-v1.25.14-4.3.5.tar --debug
2023-10-30T10:21:23 debug using file /root/.config/containers/storage.conf as container storage config
Getting image source signatures
Copying blob be3120087210 skipped: already exists
Copying blob 72cb3925a8cb skipped: already exists
Copying blob f63c80dfa4f0 skipped: already exists
Copying blob 13c4f6470081 skipped: already exists
Copying config cd85c57707 done
Writing manifest to image destination
Storing signatures
2023-10-30T10:21:24 debug creating new cluster
2023-10-30T10:21:24 debug start to exec arch on 192.168.1.8:22
2023-10-30T10:21:24 debug defaultPort: 22
2023-10-30T10:21:24 debug cluster info:
apiVersion: apps.sealos.io/v1beta1
kind: Cluster
metadata:
  creationTimestamp: null
  name: default
spec:
  hosts:

2023-10-30T10:21:24 info Start to create a new cluster: master [192.168.1.8], worker [], registry 192.168.1.8 2023-10-30T10:21:24 info Executing pipeline Check in CreateProcessor. 2023-10-30T10:21:24 info checker:hostname [192.168.1.8:22] 2023-10-30T10:21:24 debug start to exec remote 192.168.1.8:22 shell: hostname 2023-10-30T10:21:24 debug start to exec hostname on 192.168.1.8:22 2023-10-30T10:21:24 info checker:timeSync [192.168.1.8:22] 2023-10-30T10:21:24 debug start to exec remote 192.168.1.8:22 shell: date +%s 2023-10-30T10:21:24 debug start to exec date +%s on 192.168.1.8:22 2023-10-30T10:21:24 info Executing pipeline PreProcess in CreateProcessor. 2023-10-30T10:21:24 debug parse reference cd85c57707094a13b0f20d40e41cc60c1ad46648478155b7d7d49dcbc7e0b405 with transport containers-storage 2023-10-30T10:21:24 debug images cd85c57707094a13b0f20d40e41cc60c1ad46648478155b7d7d49dcbc7e0b405 are pulled 2023-10-30T10:21:24 debug Pull Policy for pull [missing] 2023-10-30T10:21:24 debug parse reference cd85c57707094a13b0f20d40e41cc60c1ad46648478155b7d7d49dcbc7e0b405 with transport containers-storage 2023-10-30T10:21:24 info Executing pipeline RunConfig in CreateProcessor. 2023-10-30T10:21:24 debug clusterfile config is empty! 2023-10-30T10:21:24 info Executing pipeline MountRootfs in CreateProcessor. 2023-10-30T10:21:24 debug render env dir: /var/lib/containers/storage/overlay/350b58bec55fa718d49834b07c4eb599bd6c5bd327a5d2167b2290d40c71a80d/merged/etc 2023-10-30T10:21:24 debug render env dir: /var/lib/containers/storage/overlay/350b58bec55fa718d49834b07c4eb599bd6c5bd327a5d2167b2290d40c71a80d/merged/scripts 2023-10-30T10:21:24 debug render env dir: /var/lib/containers/storage/overlay/350b58bec55fa718d49834b07c4eb599bd6c5bd327a5d2167b2290d40c71a80d/merged/manifests 2023-10-30T10:21:25 debug send mount image, target: 192.168.1.8:22, image: docker.io/labring/kubernetes:v1.25.14-4.3.5, type: rootfs 2023-10-30T10:21:25 debug remote copy files src /var/lib/containers/storage/overlay/350b58bec55fa718d49834b07c4eb599bd6c5bd327a5d2167b2290d40c71a80d/merged/Kubefile to dst /var/lib/sealos/data/default/rootfs/Kubefile 2023-10-30T10:21:25 debug remote copy files src /var/lib/containers/storage/overlay/350b58bec55fa718d49834b07c4eb599bd6c5bd327a5d2167b2290d40c71a80d/merged/README.md to dst /var/lib/sealos/data/default/rootfs/README.md 2023-10-30T10:21:25 debug remote copy files src /var/lib/containers/storage/overlay/350b58bec55fa718d49834b07c4eb599bd6c5bd327a5d2167b2290d40c71a80d/merged/bin to dst /var/lib/sealos/data/default/rootfs/bin 2023-10-30T10:21:27 debug remote copy files src /var/lib/containers/storage/overlay/350b58bec55fa718d49834b07c4eb599bd6c5bd327a5d2167b2290d40c71a80d/merged/cri to dst /var/lib/sealos/data/default/rootfs/cri 2023-10-30T10:21:28 debug remote copy files src /var/lib/containers/storage/overlay/350b58bec55fa718d49834b07c4eb599bd6c5bd327a5d2167b2290d40c71a80d/merged/etc to dst /var/lib/sealos/data/default/rootfs/etc 2023-10-30T10:21:28 debug remote copy files src /var/lib/containers/storage/overlay/350b58bec55fa718d49834b07c4eb599bd6c5bd327a5d2167b2290d40c71a80d/merged/images to dst /var/lib/sealos/data/default/rootfs/images 2023-10-30T10:21:28 debug remote copy files src /var/lib/containers/storage/overlay/350b58bec55fa718d49834b07c4eb599bd6c5bd327a5d2167b2290d40c71a80d/merged/opt to dst /var/lib/sealos/data/default/rootfs/opt 2023-10-30T10:21:28 debug remote copy files src 
/var/lib/containers/storage/overlay/350b58bec55fa718d49834b07c4eb599bd6c5bd327a5d2167b2290d40c71a80d/merged/scripts to dst /var/lib/sealos/data/default/rootfs/scripts 2023-10-30T10:21:28 debug remote copy files src /var/lib/containers/storage/overlay/350b58bec55fa718d49834b07c4eb599bd6c5bd327a5d2167b2290d40c71a80d/merged/statics to dst /var/lib/sealos/data/default/rootfs/statics 2023-10-30T10:21:28 info Executing pipeline MirrorRegistry in CreateProcessor. 2023-10-30T10:21:28 debug registry nodes is: [192.168.1.8:22] 2023-10-30T10:21:28 info trying default http mode to sync images to hosts [192.168.1.8:22] 2023-10-30T10:21:28 debug checking if endpoint http://192.168.1.8:5050 is alive 2023-10-30T10:21:28 debug running temporary registry on host 192.168.1.8:22 2023-10-30T10:21:28 debug start to exec /var/lib/sealos/data/default/rootfs/opt/sealctl registry serve filesystem -p 5050 --disable-logging=true /var/lib/sealos/data/default/rootfs/registry on 192.168.1.8:22 2023-10-30T10:21:28 debug http endpoint http://192.168.1.8:5050 is alive 2023-10-30T10:21:28 debug checking if endpoint http://127.0.0.1:32955 is alive 2023-10-30T10:21:28 debug http endpoint http://127.0.0.1:32955 is alive 2023-10-30T10:21:28 debug syncing repos [{coredns/coredns 0 false false false} {etcd 0 false false false} {kube-apiserver 0 false false false} {kube-controller-manager 0 false false false} {kube-proxy 0 false false false} {kube-scheduler 0 false false false} {labring/lvscare 0 false false false} {pause 0 false false false}] from 127.0.0.1:32955 to 192.168.1.8:5050 2023-10-30T10:21:28 debug syncing 192.168.1.8:5050/coredns/coredns:v1.9.3 with selection 1 2023-10-30T10:21:29 debug syncing 192.168.1.8:5050/coredns/coredns:v1.9.3 with selection 0 2023-10-30T10:21:29 debug syncing 192.168.1.8:5050/etcd:3.5.6-0 with selection 1 2023-10-30T10:21:29 debug syncing 192.168.1.8:5050/etcd:3.5.6-0 with selection 0 2023-10-30T10:21:29 debug syncing 192.168.1.8:5050/kube-apiserver:v1.25.14 with selection 1 2023-10-30T10:21:29 debug syncing 192.168.1.8:5050/kube-apiserver:v1.25.14 with selection 0 2023-10-30T10:21:29 debug syncing 192.168.1.8:5050/kube-controller-manager:v1.25.14 with selection 1 2023-10-30T10:21:30 debug syncing 192.168.1.8:5050/kube-controller-manager:v1.25.14 with selection 0 2023-10-30T10:21:30 debug syncing 192.168.1.8:5050/kube-proxy:v1.25.14 with selection 1 2023-10-30T10:21:30 debug syncing 192.168.1.8:5050/kube-proxy:v1.25.14 with selection 0 2023-10-30T10:21:30 debug syncing 192.168.1.8:5050/kube-scheduler:v1.25.14 with selection 1 2023-10-30T10:21:30 debug syncing 192.168.1.8:5050/kube-scheduler:v1.25.14 with selection 0 2023-10-30T10:21:30 debug syncing 192.168.1.8:5050/labring/lvscare:v4.3.5 with selection 1 2023-10-30T10:21:30 debug syncing 192.168.1.8:5050/labring/lvscare:v4.3.5 with selection 0 2023-10-30T10:21:30 debug syncing 192.168.1.8:5050/pause:3.8 with selection 1 2023-10-30T10:21:30 debug syncing 192.168.1.8:5050/pause:3.8 with selection 0 2023-10-30T10:21:30 info Executing pipeline Bootstrap in CreateProcessor 2023-10-30T10:21:30 debug apply [default_checker registry_host_applier registry_applier initializer] on hosts [192.168.1.8:22] 2023-10-30T10:21:30 debug apply default_checker on host 192.168.1.8:22 2023-10-30T10:21:30 debug start to exec cd /var/lib/sealos/data/default/rootfs/scripts && export registryDomain="sealos.hub" defaultVIP="10.103.97.2" registryUsername="admin" SEALOS_SYS_SEALOS_VERSION="4.3.5" SEALOS_SYS_IMAGE_ENDPOINT="/var/run/image-cri-shim.sock" 
criData="/var/lib/containerd" SEALOS_SYS_KUBE_VERSION="v1.25.14" registryPort="5000" registryData="/var/lib/registry" SEALOS_SYS_CRI_ENDPOINT="/var/run/containerd/containerd.sock" registryPassword="passw0rd" registryConfig="/etc/registry" sandboxImage="pause:3.8" disableApparmor="false" ; bash check.sh $registryData on 192.168.1.8:22 192.168.1.8:22 INFO [2023-10-30 10:21:31] >> Check port kubelet port 10249..10259, reserved port 5050..5054 inuse. Please wait... 192.168.1.8:22 INFO [2023-10-30 10:21:31] >> check root,port,cri success 2023-10-30T10:21:31 debug apply registry_host_applier on host 192.168.1.8:22 2023-10-30T10:21:31 debug start to exec cat /var/lib/sealos/data/default/rootfs/etc/registry.yml on 192.168.1.8:22 2023-10-30T10:21:31 debug registry config data info: # Copyright © 2022 sealos. #

Licensed under the Apache License, Version 2.0 (the "License");

you may not use this file except in compliance with the License.

You may obtain a copy of the License at

#

http://www.apache.org/licenses/LICENSE-2.0

#

Unless required by applicable law or agreed to in writing, software

distributed under the License is distributed on an "AS IS" BASIS,

WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

See the License for the specific language governing permissions and

limitations under the License.

domain: sealos.hub port: "5000" username: "admin" password: "passw0rd" data: "/var/lib/registry"

2023-10-30T10:21:31 debug show registry info, IP: 192.168.1.8:22, Domain: sealos.hub, Data: /var/lib/registry
2023-10-30T10:21:31 debug start to exec /var/lib/sealos/data/default/rootfs/opt/sealctl hosts add --ip 192.168.1.8 --domain sealos.hub on 192.168.1.8:22
192.168.1.8:22 2023-10-30T10:21:31 info domain sealos.hub:192.168.1.8 append success
2023-10-30T10:21:31 debug apply registry_applier on host 192.168.1.8:22
2023-10-30T10:21:31 debug start to exec cat /var/lib/sealos/data/default/rootfs/etc/registry.yml on 192.168.1.8:22
2023-10-30T10:21:31 debug registry config data info:
# Copyright © 2022 sealos.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

domain: sealos.hub
port: "5000"
username: "admin"
password: "passw0rd"
data: "/var/lib/registry"
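
As an aside not taken from the thread: once init-registry.sh reports the registry running (below), a quick sanity check against the config above is a plain Docker Registry v2 API call with the same credentials (generic curl, nothing sealos-specific):

    curl -su admin:passw0rd http://sealos.hub:5000/v2/_catalog   # should return a JSON list of repositories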

2023-10-30T10:21:31 debug show registry info, IP: 192.168.1.8:22, Domain: sealos.hub, Data: /var/lib/registry 2023-10-30T10:21:31 debug make soft link: rm -rf /var/lib/registry && ln -s /var/lib/sealos/data/default/rootfs/registry /var/lib/registry 2023-10-30T10:21:31 debug start to exec rm -rf /var/lib/registry && ln -s /var/lib/sealos/data/default/rootfs/registry /var/lib/registry on 192.168.1.8:22 2023-10-30T10:21:31 debug remote copy files src /root/.sealos/default/etc/registry_htpasswd to dst /var/lib/sealos/data/default/rootfs/etc/registry_htpasswd 2023-10-30T10:21:31 debug start to exec cd /var/lib/sealos/data/default/rootfs/scripts && export sandboxImage="pause:3.8" defaultVIP="10.103.97.2" registryPort="5000" SEALOS_SYS_CRI_ENDPOINT="/var/run/containerd/containerd.sock" SEALOS_SYS_KUBE_VERSION="v1.25.14" criData="/var/lib/containerd" SEALOS_SYS_IMAGE_ENDPOINT="/var/run/image-cri-shim.sock" registryConfig="/etc/registry" registryDomain="sealos.hub" registryPassword="passw0rd" registryData="/var/lib/registry" SEALOS_SYS_SEALOS_VERSION="4.3.5" disableApparmor="false" registryUsername="admin" ; bash init-registry.sh $registryData $registryConfig on 192.168.1.8:22 192.168.1.8:22 Created symlink /etc/systemd/system/multi-user.target.wants/registry.service -> /etc/systemd/system/registry.service. 192.168.1.8:22 INFO [2023-10-30 10:21:32] >> Health check registry! 192.168.1.8:22 INFO [2023-10-30 10:21:32] >> registry is running 192.168.1.8:22 INFO [2023-10-30 10:21:32] >> init registry success 2023-10-30T10:21:32 debug apply initializer on host 192.168.1.8:22 2023-10-30T10:21:32 debug start to exec cd /var/lib/sealos/data/default/rootfs/scripts && export registryPort="5000" registryPassword="passw0rd" SEALOS_SYS_KUBE_VERSION="v1.25.14" registryData="/var/lib/registry" SEALOS_SYS_IMAGE_ENDPOINT="/var/run/image-cri-shim.sock" SEALOS_SYS_CRI_ENDPOINT="/var/run/containerd/containerd.sock" registryConfig="/etc/registry" SEALOS_SYS_SEALOS_VERSION="4.3.5" registryDomain="sealos.hub" registryUsername="admin" criData="/var/lib/containerd" sandboxImage="pause:3.8" defaultVIP="10.103.97.2" disableApparmor="false" ; bash init-cri.sh $registryDomain $registryPort && bash init.sh on 192.168.1.8:22 192.168.1.8:22 Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service -> /etc/systemd/system/containerd.service. 192.168.1.8:22 INFO [2023-10-30 10:21:33] >> Health check containerd! 192.168.1.8:22 INFO [2023-10-30 10:21:33] >> containerd is running 192.168.1.8:22 INFO [2023-10-30 10:21:33] >> init containerd success 192.168.1.8:22 Created symlink /etc/systemd/system/multi-user.target.wants/image-cri-shim.service -> /etc/systemd/system/image-cri-shim.service. 192.168.1.8:22 INFO [2023-10-30 10:21:34] >> Health check image-cri-shim! 192.168.1.8:22 INFO [2023-10-30 10:21:34] >> image-cri-shim is running 192.168.1.8:22 INFO [2023-10-30 10:21:34] >> init shim success 192.168.1.8:22 127.0.0.1 localhost 192.168.1.8:22 ::1 localhost ip6-localhost ip6-loopback 192.168.1.8:22 Firewall stopped and disabled on system startup 192.168.1.8:22 modprobe: FATAL: Module ip_vs not found in directory /lib/modules/6.2.16-12-pve 192.168.1.8:22 modprobe: FATAL: Module br_netfilter not found in directory /lib/modules/6.2.16-12-pve 192.168.1.8:22 modprobe: FATAL: Module nf_conntrack_ipv4 not found in directory /lib/modules/6.2.16-12-pve 192.168.1.8:22 Applying /etc/sysctl.d/10-console-messages.conf ... 
192.168.1.8:22 sysctl: setting key "kernel.printk", ignoring: Read-only file system 192.168.1.8:22 Applying /etc/sysctl.d/10-ipv6-privacy.conf ... 192.168.1.8:22 net.ipv6.conf.all.use_tempaddr = 2 192.168.1.8:22 net.ipv6.conf.default.use_tempaddr = 2 192.168.1.8:22 Applying /etc/sysctl.d/10-kernel-hardening.conf ... 192.168.1.8:22 sysctl: setting key "kernel.kptr_restrict", ignoring: Read-only file system 192.168.1.8:22 Applying /etc/sysctl.d/10-magic-sysrq.conf ... 192.168.1.8:22 sysctl: setting key "kernel.sysrq", ignoring: Read-only file system 192.168.1.8:22 Applying /etc/sysctl.d/10-network-security.conf ... 192.168.1.8:22 net.ipv4.conf.default.rp_filter = 2 192.168.1.8:22 net.ipv4.conf.all.rp_filter = 2 192.168.1.8:22 Applying /etc/sysctl.d/10-ptrace.conf ... 192.168.1.8:22 sysctl: setting key "kernel.yama.ptrace_scope", ignoring: Read-only file system 192.168.1.8:22 Applying /etc/sysctl.d/10-zeropage.conf ... 192.168.1.8:22 sysctl: setting key "vm.mmap_min_addr", ignoring: Read-only file system 192.168.1.8:22 Applying /usr/lib/sysctl.d/50-default.conf ... 192.168.1.8:22 sysctl: setting key "kernel.core_uses_pid", ignoring: Read-only file system 192.168.1.8:22 net.ipv4.conf.default.rp_filter = 2 192.168.1.8:22 net.ipv4.conf.default.accept_source_route = 0 192.168.1.8:22 sysctl: setting key "net.ipv4.conf.all.accept_source_route": Invalid argument 192.168.1.8:22 net.ipv4.conf.default.promote_secondaries = 1 192.168.1.8:22 sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument 192.168.1.8:22 sysctl: setting key "net.ipv4.ping_group_range": Invalid argument 192.168.1.8:22 sysctl: setting key "fs.protected_hardlinks", ignoring: Read-only file system 192.168.1.8:22 sysctl: setting key "fs.protected_symlinks", ignoring: Read-only file system 192.168.1.8:22 sysctl: setting key "fs.protected_regular", ignoring: Read-only file system 192.168.1.8:22 sysctl: setting key "fs.protected_fifos", ignoring: Read-only file system 192.168.1.8:22 Applying /usr/lib/sysctl.d/50-pid-max.conf ... 192.168.1.8:22 sysctl: setting key "kernel.pid_max", ignoring: Read-only file system 192.168.1.8:22 Applying /usr/lib/sysctl.d/99-protect-links.conf ... 192.168.1.8:22 sysctl: setting key "fs.protected_fifos", ignoring: Read-only file system 192.168.1.8:22 sysctl: setting key "fs.protected_hardlinks", ignoring: Read-only file system 192.168.1.8:22 sysctl: setting key "fs.protected_regular", ignoring: Read-only file system 192.168.1.8:22 sysctl: setting key "fs.protected_symlinks", ignoring: Read-only file system 192.168.1.8:22 Applying /etc/sysctl.d/99-sysctl.conf ... 192.168.1.8:22 sysctl: setting key "fs.file-max", ignoring: Read-only file system 192.168.1.8:22 net.bridge.bridge-nf-call-ip6tables = 1 # sealos 192.168.1.8:22 net.bridge.bridge-nf-call-iptables = 1 # sealos 192.168.1.8:22 net.ipv4.conf.all.rp_filter = 0 # sealos 192.168.1.8:22 net.ipv4.ip_forward = 1 # sealos 192.168.1.8:22 net.ipv4.ip_local_port_range = 1024 65535 # sealos 192.168.1.8:22 net.ipv4.tcp_keepalive_intvl = 30 # sealos 192.168.1.8:22 net.ipv4.tcp_keepalive_time = 600 # sealos 192.168.1.8:22 net.ipv6.conf.all.forwarding = 1 # sealos 192.168.1.8:22 Applying /etc/sysctl.conf ... 
192.168.1.8:22 sysctl: setting key "fs.file-max", ignoring: Read-only file system 192.168.1.8:22 net.bridge.bridge-nf-call-ip6tables = 1 # sealos 192.168.1.8:22 net.bridge.bridge-nf-call-iptables = 1 # sealos 192.168.1.8:22 net.ipv4.conf.all.rp_filter = 0 # sealos 192.168.1.8:22 net.ipv4.ip_forward = 1 # sealos 192.168.1.8:22 net.ipv4.ip_local_port_range = 1024 65535 # sealos 192.168.1.8:22 net.ipv4.tcp_keepalive_intvl = 30 # sealos 192.168.1.8:22 net.ipv4.tcp_keepalive_time = 600 # sealos 192.168.1.8:22 net.ipv6.conf.all.forwarding = 1 # sealos 192.168.1.8:22 INFO [2023-10-30 10:21:34] >> pull pause image sealos.hub:5000/pause:3.8 192.168.1.8:22 Image is up to date for sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517 192.168.1.8:22 Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service -> /etc/systemd/system/kubelet.service. 192.168.1.8:22 INFO [2023-10-30 10:21:35] >> init kubelet success 192.168.1.8:22 INFO [2023-10-30 10:21:35] >> init rootfs success 2023-10-30T10:21:35 info Executing pipeline Init in CreateProcessor. 2023-10-30T10:21:35 info start to copy kubeadm config to master0 2023-10-30T10:21:35 debug using default kubeadm config 2023-10-30T10:21:35 debug skip merging kubeadm configs from cause file /var/lib/sealos/data/default/rootfs/etc/kubeadm.yml not exists 2023-10-30T10:21:35 debug renderTextFromEnv: replaces: map[$(SEALOS_SYS_CRI_ENDPOINT):/var/run/containerd/containerd.sock $(SEALOS_SYS_IMAGE_ENDPOINT):/var/run/image-cri-shim.sock $(criData):/var/lib/containerd $(defaultVIP):10.103.97.2 $(disableApparmor):false $(registryConfig):/etc/registry $(registryData):/var/lib/registry $(registryDomain):sealos.hub $(registryPassword):passw0rd $(registryPort):5000 $(registryUsername):admin $(sandboxImage):pause:3.8 $SEALOS_SYS_CRI_ENDPOINT:/var/run/containerd/containerd.sock $SEALOS_SYS_IMAGE_ENDPOINT:/var/run/image-cri-shim.sock $criData:/var/lib/containerd $defaultVIP:10.103.97.2 $disableApparmor:false $registryConfig:/etc/registry $registryData:/var/lib/registry $registryDomain:sealos.hub $registryPassword:passw0rd $registryPort:5000 $registryUsername:admin $sandboxImage:pause:3.8 ${SEALOS_SYS_CRI_ENDPOINT}:/var/run/containerd/containerd.sock ${SEALOS_SYS_IMAGE_ENDPOINT}:/var/run/image-cri-shim.sock ${criData}:/var/lib/containerd ${defaultVIP}:10.103.97.2 ${disableApparmor}:false ${registryConfig}:/etc/registry ${registryData}:/var/lib/registry ${registryDomain}:sealos.hub ${registryPassword}:passw0rd ${registryPort}:5000 ${registryUsername}:admin ${sandboxImage}:pause:3.8] ; text: $defaultVIP 2023-10-30T10:21:35 debug get vip is 10.103.97.2 2023-10-30T10:21:35 debug start to exec remote 192.168.1.8:22 shell: /var/lib/sealos/data/default/rootfs/opt/sealctl cri socket 2023-10-30T10:21:35 debug start to exec /var/lib/sealos/data/default/rootfs/opt/sealctl cri socket on 192.168.1.8:22 2023-10-30T10:21:35 debug get nodes [192.168.1.8:22] cri socket is [/run/containerd/containerd.sock] 2023-10-30T10:21:35 debug node: 192.168.1.8:22 , criSocket: /run/containerd/containerd.sock 2023-10-30T10:21:35 debug start to exec remote 192.168.1.8:22 shell: /var/lib/sealos/data/default/rootfs/opt/sealctl cri cgroup-driver --short 2023-10-30T10:21:35 debug start to exec /var/lib/sealos/data/default/rootfs/opt/sealctl cri cgroup-driver --short on 192.168.1.8:22 2023-10-30T10:21:35 debug get nodes [192.168.1.8:22] cgroup driver is [systemd] 2023-10-30T10:21:35 debug node: 192.168.1.8:22 , cGroupDriver: systemd 2023-10-30T10:21:35 debug renderTextFromEnv: 
replaces: map[$(SEALOS_SYS_CRI_ENDPOINT):/var/run/containerd/containerd.sock $(SEALOS_SYS_IMAGE_ENDPOINT):/var/run/image-cri-shim.sock $(criData):/var/lib/containerd $(defaultVIP):10.103.97.2 $(disableApparmor):false $(registryConfig):/etc/registry $(registryData):/var/lib/registry $(registryDomain):sealos.hub $(registryPassword):passw0rd $(registryPort):5000 $(registryUsername):admin $(sandboxImage):pause:3.8 $SEALOS_SYS_CRI_ENDPOINT:/var/run/containerd/containerd.sock $SEALOS_SYS_IMAGE_ENDPOINT:/var/run/image-cri-shim.sock $criData:/var/lib/containerd $defaultVIP:10.103.97.2 $disableApparmor:false $registryConfig:/etc/registry $registryData:/var/lib/registry $registryDomain:sealos.hub $registryPassword:passw0rd $registryPort:5000 $registryUsername:admin $sandboxImage:pause:3.8 ${SEALOS_SYS_CRI_ENDPOINT}:/var/run/containerd/containerd.sock ${SEALOS_SYS_IMAGE_ENDPOINT}:/var/run/image-cri-shim.sock ${criData}:/var/lib/containerd ${defaultVIP}:10.103.97.2 ${disableApparmor}:false ${registryConfig}:/etc/registry ${registryData}:/var/lib/registry ${registryDomain}:sealos.hub ${registryPassword}:passw0rd ${registryPort}:5000 ${registryUsername}:admin ${sandboxImage}:pause:3.8] ; text: $defaultVIP 2023-10-30T10:21:35 debug get vip is 10.103.97.2 2023-10-30T10:21:35 debug renderTextFromEnv: replaces: map[$(SEALOS_SYS_CRI_ENDPOINT):/var/run/containerd/containerd.sock $(SEALOS_SYS_IMAGE_ENDPOINT):/var/run/image-cri-shim.sock $(criData):/var/lib/containerd $(defaultVIP):10.103.97.2 $(disableApparmor):false $(registryConfig):/etc/registry $(registryData):/var/lib/registry $(registryDomain):sealos.hub $(registryPassword):passw0rd $(registryPort):5000 $(registryUsername):admin $(sandboxImage):pause:3.8 $SEALOS_SYS_CRI_ENDPOINT:/var/run/containerd/containerd.sock $SEALOS_SYS_IMAGE_ENDPOINT:/var/run/image-cri-shim.sock $criData:/var/lib/containerd $defaultVIP:10.103.97.2 $disableApparmor:false $registryConfig:/etc/registry $registryData:/var/lib/registry $registryDomain:sealos.hub $registryPassword:passw0rd $registryPort:5000 $registryUsername:admin $sandboxImage:pause:3.8 ${SEALOS_SYS_CRI_ENDPOINT}:/var/run/containerd/containerd.sock ${SEALOS_SYS_IMAGE_ENDPOINT}:/var/run/image-cri-shim.sock ${criData}:/var/lib/containerd ${defaultVIP}:10.103.97.2 ${disableApparmor}:false ${registryConfig}:/etc/registry ${registryData}:/var/lib/registry ${registryDomain}:sealos.hub ${registryPassword}:passw0rd ${registryPort}:5000 ${registryUsername}:admin ${sandboxImage}:pause:3.8] ; text: $defaultVIP 2023-10-30T10:21:35 debug get vip is 10.103.97.2 2023-10-30T10:21:35 debug override defaults of kubelet configuration 2023-10-30T10:21:35 debug remote copy files src /root/.sealos/default/tmp/kubeadm-init.yaml to dst /root/.sealos/default/etc/kubeadm-init.yaml 2023-10-30T10:21:35 info start to generate cert and kubeConfig..., 1819 it/s) 2023-10-30T10:21:35 debug start to exec rm -rf /etc/kubernetes/admin.conf on 192.168.1.8:22 2023-10-30T10:21:35 info start to generator cert and copy to masters... 
2023-10-30T10:21:35 debug start to exec remote 192.168.1.8:22 shell: /var/lib/sealos/data/default/rootfs/opt/sealctl hostname 2023-10-30T10:21:35 debug start to exec /var/lib/sealos/data/default/rootfs/opt/sealctl hostname on 192.168.1.8:22 2023-10-30T10:21:35 info apiserver altNames : {map[apiserver.cluster.local:apiserver.cluster.local kubernetes:kubernetes kubernetes.default:kubernetes.default kubernetes.default.svc:kubernetes.default.svc kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local localhost:localhost dev:dev] map[10.103.97.2:10.103.97.2 10.96.0.1:10.96.0.1 127.0.0.1:127.0.0.1 192.168.1.8:192.168.1.8]} 2023-10-30T10:21:35 info Etcd altnames : {map[localhost:localhost dev:dev] map[127.0.0.1:127.0.0.1 192.168.1.8:192.168.1.8 ::1:::1]}, commonName : dev 2023-10-30T10:21:36 debug cert.GenerateCert getServiceCIDR 10.96.0.0/22 2023-10-30T10:21:36 debug cert.GenerateCert param: /root/.sealos/default/pki /root/.sealos/default/pki/etcd [127.0.0.1 apiserver.cluster.local 10.103.97.2 192.168.1.8] 192.168.1.8 dev 10.96.0.0/22 cluster.local 2023-10-30T10:21:36 info start to copy etc pki files to masters 2023-10-30T10:21:36 debug remote copy files src /root/.sealos/default/pki to dst /etc/kubernetes/pki 2023-10-30T10:21:36 info start to copy etc pki files to masters
2023-10-30T10:21:36 debug remote copy files src /root/.sealos/default/pki to dst /etc/kubernetes/pki 2023-10-30T10:21:36 info start to create kubeconfig...
2023-10-30T10:21:36 debug start to exec remote 192.168.1.8:22 shell: /var/lib/sealos/data/default/rootfs/opt/sealctl hostname 2023-10-30T10:21:36 debug start to exec /var/lib/sealos/data/default/rootfs/opt/sealctl hostname on 192.168.1.8:22 2023-10-30T10:21:37 debug [kubeconfig] Writing "admin.conf" kubeconfig file

2023-10-30T10:21:37 debug [kubeconfig] Writing "controller-manager.conf" kubeconfig file

2023-10-30T10:21:37 debug [kubeconfig] Writing "scheduler.conf" kubeconfig file

2023-10-30T10:21:37 debug [kubeconfig] Writing "kubelet.conf" kubeconfig file

2023-10-30T10:21:37 info start to copy kubeconfig files to masters 2023-10-30T10:21:37 debug remote copy files src /root/.sealos/default/etc/admin.conf to dst /etc/kubernetes/admin.conf 2023-10-30T10:21:37 debug remote copy files src /root/.sealos/default/etc/controller-manager.conf to dst /etc/kubernetes/controller-manager.conf 2023-10-30T10:21:37 debug remote copy files src /root/.sealos/default/etc/scheduler.conf to dst /etc/kubernetes/scheduler.conf 2023-10-30T10:21:37 debug remote copy files src /root/.sealos/default/etc/kubelet.conf to dst /etc/kubernetes/kubelet.conf 2023-10-30T10:21:37 info start to copy static files to masters1/1, 2930 it/s) 2023-10-30T10:21:37 debug start to exec mkdir -p /etc/kubernetes && cp -f /var/lib/sealos/data/default/rootfs/statics/audit-policy.yml /etc/kubernetes/audit-policy.yml on 192.168.1.8:22 2023-10-30T10:21:37 info start to init master0... 2023-10-30T10:21:37 debug start to exec /var/lib/sealos/data/default/rootfs/opt/sealctl hosts add --ip 192.168.1.8 --domain apiserver.cluster.local on 192.168.1.8:22 192.168.1.8:22 2023-10-30T10:21:37 info domain apiserver.cluster.local:192.168.1.8 append success 2023-10-30T10:21:37 debug renderTextFromEnv: replaces: map[$(SEALOS_SYS_CRI_ENDPOINT):/var/run/containerd/containerd.sock $(SEALOS_SYS_IMAGE_ENDPOINT):/var/run/image-cri-shim.sock $(criData):/var/lib/containerd $(defaultVIP):10.103.97.2 $(disableApparmor):false $(registryConfig):/etc/registry $(registryData):/var/lib/registry $(registryDomain):sealos.hub $(registryPassword):passw0rd $(registryPort):5000 $(registryUsername):admin $(sandboxImage):pause:3.8 $SEALOS_SYS_CRI_ENDPOINT:/var/run/containerd/containerd.sock $SEALOS_SYS_IMAGE_ENDPOINT:/var/run/image-cri-shim.sock $criData:/var/lib/containerd $defaultVIP:10.103.97.2 $disableApparmor:false $registryConfig:/etc/registry $registryData:/var/lib/registry $registryDomain:sealos.hub $registryPassword:passw0rd $registryPort:5000 $registryUsername:admin $sandboxImage:pause:3.8 ${SEALOS_SYS_CRI_ENDPOINT}:/var/run/containerd/containerd.sock ${SEALOS_SYS_IMAGE_ENDPOINT}:/var/run/image-cri-shim.sock ${criData}:/var/lib/containerd ${defaultVIP}:10.103.97.2 ${disableApparmor}:false ${registryConfig}:/etc/registry ${registryData}:/var/lib/registry ${registryDomain}:sealos.hub ${registryPassword}:passw0rd ${registryPort}:5000 ${registryUsername}:admin ${sandboxImage}:pause:3.8] ; text: $defaultVIP 2023-10-30T10:21:37 debug get vip is 10.103.97.2 2023-10-30T10:21:37 debug start to exec kubeadm init --config=/root/.sealos/default/etc/kubeadm-init.yaml --skip-certificate-key-print --skip-token-print -v 6 --ignore-preflight-errors=SystemVerification on 192.168.1.8:22 192.168.1.8:22 I1030 10:21:37.801046 1190 initconfiguration.go:254] loading configuration from "/root/.sealos/default/etc/kubeadm-init.yaml" 192.168.1.8:22 W1030 10:21:37.805399 1190 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration! 
192.168.1.8:22 W1030 10:21:37.805435 1190 utils.go:69] The recommended value for "healthzBindAddress" in "KubeletConfiguration" is: 127.0.0.1; the provided value is: 0.0.0.0 192.168.1.8:22 I1030 10:21:37.807583 1190 certs.go:522] validating certificate period for CA certificate 192.168.1.8:22 I1030 10:21:37.807615 1190 certs.go:522] validating certificate period for front-proxy CA certificate 192.168.1.8:22 [init] Using Kubernetes version: v1.25.14 192.168.1.8:22 [preflight] Running pre-flight checks 192.168.1.8:22 I1030 10:21:37.807681 1190 checks.go:568] validating Kubernetes and kubeadm version 192.168.1.8:22 I1030 10:21:37.807690 1190 checks.go:168] validating if the firewall is enabled and active 192.168.1.8:22 I1030 10:21:37.811655 1190 checks.go:203] validating availability of port 6443 192.168.1.8:22 I1030 10:21:37.811726 1190 checks.go:203] validating availability of port 10259 192.168.1.8:22 I1030 10:21:37.811736 1190 checks.go:203] validating availability of port 10257 192.168.1.8:22 I1030 10:21:37.811745 1190 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml 192.168.1.8:22 I1030 10:21:37.811756 1190 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml 192.168.1.8:22 I1030 10:21:37.811759 1190 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml 192.168.1.8:22 I1030 10:21:37.811761 1190 checks.go:280] validating the existence of file /etc/kubernetes/manifests/etcd.yaml 192.168.1.8:22 I1030 10:21:37.811765 1190 checks.go:430] validating if the connectivity type is via proxy or direct 192.168.1.8:22 I1030 10:21:37.811779 1190 checks.go:469] validating http connectivity to first IP address in the CIDR 192.168.1.8:22 I1030 10:21:37.811791 1190 checks.go:469] validating http connectivity to first IP address in the CIDR 192.168.1.8:22 I1030 10:21:37.811797 1190 checks.go:104] validating the container runtime 192.168.1.8:22 I1030 10:21:37.826864 1190 checks.go:329] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables 192.168.1.8:22 I1030 10:21:37.826914 1190 checks.go:329] validating the contents of file /proc/sys/net/ipv4/ip_forward 192.168.1.8:22 I1030 10:21:37.826926 1190 checks.go:644] validating whether swap is enabled or not 192.168.1.8:22 I1030 10:21:37.827468 1190 checks.go:370] validating the presence of executable crictl 192.168.1.8:22 I1030 10:21:37.827508 1190 checks.go:370] validating the presence of executable conntrack 192.168.1.8:22 I1030 10:21:37.827541 1190 checks.go:370] validating the presence of executable ip 192.168.1.8:22 I1030 10:21:37.829484 1190 checks.go:370] validating the presence of executable iptables 192.168.1.8:22 I1030 10:21:37.829506 1190 checks.go:370] validating the presence of executable mount 192.168.1.8:22 I1030 10:21:37.829539 1190 checks.go:370] validating the presence of executable nsenter 192.168.1.8:22 I1030 10:21:37.829567 1190 checks.go:370] validating the presence of executable ebtables 192.168.1.8:22 I1030 10:21:37.829602 1190 checks.go:370] validating the presence of executable ethtool 192.168.1.8:22 [WARNING FileExisting-ethtool]: ethtool not found in system path 192.168.1.8:22 I1030 10:21:37.829707 1190 checks.go:370] validating the presence of executable socat 192.168.1.8:22 [WARNING FileExisting-socat]: socat not found in system path 192.168.1.8:22 I1030 10:21:37.829748 1190 checks.go:370] validating the presence of executable tc 192.168.1.8:22 I1030 10:21:37.829767 1190 
checks.go:370] validating the presence of executable touch
192.168.1.8:22 I1030 10:21:37.829790 1190 checks.go:516] running all checks
192.168.1.8:22 [preflight] The system verification failed. Printing the output from the verification:
192.168.1.8:22 KERNEL_VERSION: 6.2.16-12-pve
192.168.1.8:22 OS: Linux
192.168.1.8:22 CGROUPS_CPU: enabled
192.168.1.8:22 CGROUPS_CPUSET: enabled
192.168.1.8:22 CGROUPS_DEVICES: enabled
192.168.1.8:22 CGROUPS_FREEZER: enabled
192.168.1.8:22 CGROUPS_MEMORY: enabled
192.168.1.8:22 CGROUPS_PIDS: enabled
192.168.1.8:22 CGROUPS_HUGETLB: enabled
192.168.1.8:22 CGROUPS_IO: enabled
192.168.1.8:22 [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.2.16-12-pve\n", err: exit status 1
192.168.1.8:22 I1030 10:21:37.832540 1190 checks.go:401] checking whether the given node name is valid and reachable using net.LookupHost
192.168.1.8:22 [WARNING Hostname]: hostname "dev" could not be reached
192.168.1.8:22 [WARNING Hostname]: hostname "dev": lookup dev on 192.168.10.1:53: server misbehaving
192.168.1.8:22 I1030 10:21:46.441312 1190 checks.go:610] validating kubelet version
192.168.1.8:22 I1030 10:21:46.474597 1190 checks.go:130] validating if the "kubelet" service is enabled and active
192.168.1.8:22 I1030 10:21:46.489217 1190 checks.go:203] validating availability of port 10250
192.168.1.8:22 I1030 10:21:46.489277 1190 checks.go:203] validating availability of port 2379
192.168.1.8:22 I1030 10:21:46.489297 1190 checks.go:203] validating availability of port 2380
192.168.1.8:22 I1030 10:21:46.489313 1190 checks.go:243] validating the existence and emptiness of directory /var/lib/etcd
192.168.1.8:22 [preflight] Pulling images required for setting up a Kubernetes cluster
192.168.1.8:22 [preflight] This might take a minute or two, depending on the speed of your internet connection
192.168.1.8:22 [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
192.168.1.8:22 I1030 10:21:46.489386 1190 checks.go:832] using image pull policy: IfNotPresent
192.168.1.8:22 I1030 10:21:46.554887 1190 checks.go:849] pulling: registry.k8s.io/kube-apiserver:v1.25.14
192.168.1.8:22 I1030 10:21:48.302705 1190 checks.go:849] pulling: registry.k8s.io/kube-controller-manager:v1.25.14
192.168.1.8:22 I1030 10:21:49.531928 1190 checks.go:849] pulling: registry.k8s.io/kube-scheduler:v1.25.14
192.168.1.8:22 I1030 10:21:50.339447 1190 checks.go:849] pulling: registry.k8s.io/kube-proxy:v1.25.14
192.168.1.8:22 I1030 10:21:51.362069 1190 checks.go:841] image exists: registry.k8s.io/pause:3.8
192.168.1.8:22 I1030 10:21:51.428693 1190 checks.go:849] pulling: registry.k8s.io/etcd:3.5.6-0
192.168.1.8:22 I1030 10:21:54.370245 1190 checks.go:849] pulling: registry.k8s.io/coredns/coredns:v1.9.3
192.168.1.8:22 [certs] Using certificateDir folder "/etc/kubernetes/pki"
192.168.1.8:22 I1030 10:21:55.149197 1190 certs.go:522] validating certificate period for ca certificate
192.168.1.8:22 [certs] Using existing ca certificate authority
192.168.1.8:22 I1030 10:21:55.149532 1190 certs.go:522] validating certificate period for apiserver certificate
192.168.1.8:22 [certs] Using existing apiserver certificate and key on disk
192.168.1.8:22 I1030 10:21:55.149769 1190 certs.go:522] validating certificate period for apiserver-kubelet-client certificate
192.168.1.8:22 [certs] Using existing apiserver-kubelet-client certificate and key on disk
192.168.1.8:22 I1030 10:21:55.149985 1190 certs.go:522] validating certificate period for front-proxy-ca certificate
192.168.1.8:22 [certs] Using existing front-proxy-ca certificate authority
192.168.1.8:22 I1030 10:21:55.150236 1190 certs.go:522] validating certificate period for front-proxy-client certificate
192.168.1.8:22 [certs] Using existing front-proxy-client certificate and key on disk
192.168.1.8:22 I1030 10:21:55.150446 1190 certs.go:522] validating certificate period for etcd/ca certificate
192.168.1.8:22 [certs] Using existing etcd/ca certificate authority
192.168.1.8:22 I1030 10:21:55.150656 1190 certs.go:522] validating certificate period for etcd/server certificate
192.168.1.8:22 [certs] Using existing etcd/server certificate and key on disk
192.168.1.8:22 I1030 10:21:55.150873 1190 certs.go:522] validating certificate period for etcd/peer certificate
192.168.1.8:22 [certs] Using existing etcd/peer certificate and key on disk
192.168.1.8:22 I1030 10:21:55.151097 1190 certs.go:522] validating certificate period for etcd/healthcheck-client certificate
192.168.1.8:22 [certs] Using existing etcd/healthcheck-client certificate and key on disk
192.168.1.8:22 I1030 10:21:55.151311 1190 certs.go:522] validating certificate period for apiserver-etcd-client certificate
192.168.1.8:22 [certs] Using existing apiserver-etcd-client certificate and key on disk
192.168.1.8:22 I1030 10:21:55.151510 1190 certs.go:78] creating new public/private key files for signing service account users
192.168.1.8:22 [certs] Using the existing "sa" key
192.168.1.8:22 [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
192.168.1.8:22 I1030 10:21:55.152156 1190 kubeconfig.go:103] creating kubeconfig file for admin.conf
192.168.1.8:22 I1030 10:21:55.289457 1190 loader.go:374] Config loaded from file: /etc/kubernetes/admin.conf
192.168.1.8:22 [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
192.168.1.8:22 I1030 10:21:55.289474 1190 kubeconfig.go:103] creating kubeconfig file for kubelet.conf
192.168.1.8:22 I1030 10:21:55.491120 1190 loader.go:374] Config loaded from file: /etc/kubernetes/kubelet.conf
192.168.1.8:22 [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
192.168.1.8:22 I1030 10:21:55.491136 1190 kubeconfig.go:103] creating kubeconfig file for controller-manager.conf
192.168.1.8:22 I1030 10:21:55.618807 1190 loader.go:374] Config loaded from file: /etc/kubernetes/controller-manager.conf
192.168.1.8:22 W1030 10:21:55.618818 1190 kubeconfig.go:249] a kubeconfig file "/etc/kubernetes/controller-manager.conf" exists already but has an unexpected API Server URL: expected: https://192.168.1.8:6443, got: https://apiserver.cluster.local:6443
192.168.1.8:22 [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
192.168.1.8:22 I1030 10:21:55.618853 1190 kubeconfig.go:103] creating kubeconfig file for scheduler.conf
192.168.1.8:22 I1030 10:21:55.657980 1190 loader.go:374] Config loaded from file: /etc/kubernetes/scheduler.conf
192.168.1.8:22 W1030 10:21:55.658003 1190 kubeconfig.go:249] a kubeconfig file "/etc/kubernetes/scheduler.conf" exists already but has an unexpected API Server URL: expected: https://192.168.1.8:6443, got: https://apiserver.cluster.local:6443
192.168.1.8:22 [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
192.168.1.8:22 I1030 10:21:55.658012 1190 kubelet.go:66] Stopping the kubelet
192.168.1.8:22 [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
192.168.1.8:22 [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
192.168.1.8:22 [kubelet-start] Starting the kubelet
192.168.1.8:22 [control-plane] Using manifest folder "/etc/kubernetes/manifests"
192.168.1.8:22 [control-plane] Creating static Pod manifest for "kube-apiserver"
192.168.1.8:22 I1030 10:21:55.845230 1190 manifests.go:99] [control-plane] getting StaticPodSpecs
192.168.1.8:22 I1030 10:21:55.845502 1190 manifests.go:125] [control-plane] adding volume "audit" for component "kube-apiserver"
192.168.1.8:22 I1030 10:21:55.845511 1190 manifests.go:125] [control-plane] adding volume "audit-log" for component "kube-apiserver"
192.168.1.8:22 I1030 10:21:55.845515 1190 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
192.168.1.8:22 I1030 10:21:55.845519 1190 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
192.168.1.8:22 I1030 10:21:55.845523 1190 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
192.168.1.8:22 I1030 10:21:55.845528 1190 manifests.go:125] [control-plane] adding volume "localtime" for component "kube-apiserver"
192.168.1.8:22 I1030 10:21:55.845533 1190 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
192.168.1.8:22 I1030 10:21:55.845537 1190 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
192.168.1.8:22 I1030 10:21:55.847569 1190 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
192.168.1.8:22 [control-plane] Creating static Pod manifest for "kube-controller-manager"
192.168.1.8:22 I1030 10:21:55.847584 1190 manifests.go:99] [control-plane] getting StaticPodSpecs
192.168.1.8:22 I1030 10:21:55.847723 1190 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
192.168.1.8:22 I1030 10:21:55.847728 1190 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
192.168.1.8:22 I1030 10:21:55.847731 1190 manifests.go:125] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
192.168.1.8:22 I1030 10:21:55.847734 1190 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
192.168.1.8:22 I1030 10:21:55.847737 1190 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
192.168.1.8:22 I1030 10:21:55.847741 1190 manifests.go:125] [control-plane] adding volume "localtime" for component "kube-controller-manager"
192.168.1.8:22 I1030 10:21:55.847745 1190 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
192.168.1.8:22 I1030 10:21:55.847748 1190 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
192.168.1.8:22 I1030 10:21:55.848222 1190 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
192.168.1.8:22 [control-plane] Creating static Pod manifest for "kube-scheduler"
192.168.1.8:22 I1030 10:21:55.848230 1190 manifests.go:99] [control-plane] getting StaticPodSpecs
192.168.1.8:22 I1030 10:21:55.848334 1190 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
192.168.1.8:22 I1030 10:21:55.848338 1190 manifests.go:125] [control-plane] adding volume "localtime" for component "kube-scheduler"
192.168.1.8:22 I1030 10:21:55.849094 1190 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
192.168.1.8:22 [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
192.168.1.8:22 I1030 10:21:55.850380 1190 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
192.168.1.8:22 I1030 10:21:55.850395 1190 waitcontrolplane.go:83] [wait-control-plane] Waiting for the API server to be healthy
192.168.1.8:22 I1030 10:21:55.850627 1190 loader.go:374] Config loaded from file: /etc/kubernetes/admin.conf
192.168.1.8:22 [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
192.168.1.8:22 I1030 10:21:55.853529 1190 round_trippers.go:553] GET https://apiserver.cluster.local:6443/healthz?timeout=10s in 0 milliseconds
192.168.1.8:22 I1030 10:21:56.354976 1190 round_trippers.go:553] GET https://apiserver.cluster.local:6443/healthz?timeout=10s in 0 milliseconds
192.168.1.8:22 I1030 10:21:56.854890 1190 round_trippers.go:553] GET https://apiserver.cluster.local:6443/healthz?timeout=10s in 0 milliseconds
192.168.1.8:22 I1030 10:21:57.354453 1190 round_trippers.go:553] GET https://apiserver.cluster.local:6443/healthz?timeout=10s in 0 milliseconds
192.168.1.8:22 I1030 10:21:57.854294 1190 round_trippers.go:553] GET https://apiserver.cluster.local:6443/healthz?timeout=10s in 0 milliseconds
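
The init above never gets past the healthz probes against https://apiserver.cluster.local:6443, and the preflight warnings show the node cannot resolve its own hostname "dev". A minimal set of checks to run on the node, assuming the master is 192.168.1.8 as in the log (the /etc/hosts entries are an illustrative workaround, not a confirmed fix):

# Both names must resolve locally before kubeadm can finish:
getent hosts dev apiserver.cluster.local

# sealos normally pins apiserver.cluster.local in /etc/hosts; if either
# lookup fails, pinning the names manually is one way to test the theory:
echo "192.168.1.8 dev" >> /etc/hosts
echo "192.168.1.8 apiserver.cluster.local" >> /etc/hosts

# The same probe kubeadm keeps retrying:
curl -k https://apiserver.cluster.local:6443/healthz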

john-deng commented 10 months ago

I tried to install k3s, and it also failed.

sealos version
CriVersionInfo:
  RuntimeApiVersion: v1
  RuntimeName: containerd
  RuntimeVersion: v1.7.6-k3s1
  Version: 0.1.0
SealosVersion:
  buildDate: "2023-10-17T05:06:27Z"
  compiler: gc
  gitCommit: 33aee733
  gitVersion: 4.4.0-beta2
  goVersion: go1.20.8
  platform: linux/amd64

WARNING: Failed to get kubernetes version. Check kubernetes status or use command "sealos run" to launch kubernetes

root@dev:~# systemctl status k3s.service

Oct 30 10:32:11 dev k3s[4162]: time="2023-10-30T10:32:11Z" level=info msg="Starting temporary etcd to reconcile with datastore"
Oct 30 10:32:11 dev k3s[4162]: {"level":"info","ts":"2023-10-30T10:32:11.541Z","caller":"embed/etcd.go:131","msg":"configuring peer listeners","listen-peer-urls":["http://127.0.0.1:2400"]}
Oct 30 10:32:11 dev k3s[4162]: {"level":"info","ts":"2023-10-30T10:32:11.541Z","caller":"embed/etcd.go:139","msg":"configuring client listeners","listen-client-urls":["http://127.0.0.1:2399"]}
Oct 30 10:32:11 dev k3s[4162]: {"level":"info","ts":"2023-10-30T10:32:11.541Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.3","git-sha":"Not provided (use ./build instead of go build)","go-version":"go1.20.8","go-os":"linux","go-arch":"amd64","max-cpu-set":4,"max-cpu-availab>
Oct 30 10:32:11 dev k3s[4162]: {"level":"info","ts":"2023-10-30T10:32:11.542Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/rancher/k3s/server/db/etcd-tmp/member/snap/db","took":"882.787s"}
Oct 30 10:32:11 dev k3s[4162]: {"level":"info","ts":"2023-10-30T10:32:11.542Z","caller":"etcdserver/server.go:529","msg":"No snapshot found. Recovering WAL from scratch!"}
Oct 30 10:32:11 dev k3s[4162]: {"level":"info","ts":"2023-10-30T10:32:11.548Z","caller":"etcdserver/raft.go:556","msg":"forcing restart member","cluster-id":"21995869da60973e","local-member-id":"33959c7aa75b2c7b","commit-index":219}
Oct 30 10:32:11 dev k3s[4162]: {"level":"info","ts":"2023-10-30T10:32:11.548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"33959c7aa75b2c7b switched to configuration voters=()"}
Oct 30 10:32:11 dev k3s[4162]: {"level":"info","ts":"2023-10-30T10:32:11.548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"33959c7aa75b2c7b became follower at term 3"}
Oct 30 10:32:11 dev k3s[4162]: {"level":"info","ts":"2023-10-30T10:32:11.548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 33959c7aa75b2c7b [peers: [], term: 3, commit: 219, applied: 0, lastindex: 219, lastterm: 3]"}
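
The replay above is only k3s's embedded etcd recovering its write-ahead log; the actual fatal error is usually a few lines further down in the journal. A rough way to dig it out and retry from a clean slate, assuming a disposable single-node setup (both `sealos reset` and the `rm -rf` are destructive):

# Show the end of the k3s journal, where the real failure usually is:
journalctl -u k3s --no-pager -n 100

# Start over, assuming the cluster state can be thrown away:
systemctl stop k3s
sealos reset                     # removes what sealos installed (destructive)
rm -rf /var/lib/rancher/k3s      # clears the embedded etcd data being replayed above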

john-deng commented 10 months ago

@cuisongliu I just noticed that I missed an important piece of information: Sealos was running on a Proxmox VE host, which may not be supported yet. If that is the case, do you have any plans to support it?

Here is the kernel information:

uname -a
Linux dev 6.2.16-12-pve #1 SMP PREEMPT_DYNAMIC PMX 6.2.16-12 (2023-09-04T13:21Z) x86_64 x86_64 x86_64 GNU/Linux
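
For what it's worth, the earlier `[WARNING SystemVerification]: ... Module configs not found` comes from kubeadm trying to read the kernel config: it typically looks for /proc/config.gz (provided by the `configs` module on some distros) or /boot/config-$(uname -r). Proxmox's pve kernels do not ship the `configs` module, so the modprobe fails, but that by itself should only be a warning as long as a config file exists. A quick check, under those assumptions:

# kubeadm's system verification typically reads one of these:
ls -l /proc/config.gz /boot/config-$(uname -r) 2>/dev/null

# Expected to fail on pve kernels; harmless if a config file was listed above:
modprobe configs || true
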
bxy4543 commented 10 months ago

I didn't see what the final error message was. Could you share it? https://github.com/labring/sealos/issues/4204#issuecomment-1784892489

stale[bot] commented 8 months ago

This issue has been automatically closed because we haven't heard back for more than 60 days; please reopen this issue if necessary.

alfuckk commented 1 month ago

The error message is:

W0811 13:19:14.503367 3902 utils.go:69] The recommended value for "healthzBindAddress" in "KubeletConfiguration" is: 127.0.0.1; the provided value is: 0.0.0.0
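
Note that a W-prefixed kubeadm line is a warning about a non-recommended default, not a fatal error; the failing step should appear later in the log. If the goal is just to silence this warning, sealos supports overriding kubelet settings by appending a KubeletConfiguration document to a Clusterfile. A sketch, assuming the sealos v4 Clusterfile schema (verify field names against your sealos version; the host IP and image below are placeholders taken from this thread):

# Hypothetical Clusterfile; adjust hosts, image, and SSH settings to your environment.
cat > Clusterfile <<'EOF'
apiVersion: apps.sealos.io/v1beta1
kind: Cluster
metadata:
  name: default
spec:
  hosts:
    - ips: ["192.168.1.8:22"]
      roles: ["master"]
  image:
    - labring/kubernetes:v1.25.14-4.3.5
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
healthzBindAddress: 127.0.0.1
EOF
sealos apply -f Clusterfile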