labring / sealos

Sealos is a production-ready Kubernetes distribution. You can run any Docker image on Sealos, start high-availability databases such as mysql/pgsql/redis/mongo, and develop applications in any programming language.
https://cloud.sealos.io
Apache License 2.0
13.85k stars 2.07k forks

sealos 4.2.0: setting up a 3-master Kubernetes cluster reports containerd-related errors and "Invalid argument", and the setup ultimately fails #3109

Closed. SsLaLal closed this issue 1 year ago.

SsLaLal commented 1 year ago

Detailed description of the issue.

root@k8s-master01:~/tempCloud# sealos run labring/kubernetes:v1.24.7 labring/calico:v3.22.1 labring/metrics-server:v0.6.1 labring/helm:v3.8.2 --masters 10.251.80.125,10.251.80.126,10.251.80.127 --nodes 10.251.80.128,10.251.80.129 --port 22 -p 'xxxx'
2023-05-18T17:20:04 info Start to create a new cluster: master [10.251.80.125 10.251.80.126 10.251.80.127], worker [10.251.80.128 10.251.80.129], registry 10.251.80.125
2023-05-18T17:20:04 info Executing pipeline Check in CreateProcessor.
2023-05-18T17:20:04 info checker:hostname []
2023-05-18T17:20:04 info checker:timeSync []
2023-05-18T17:20:04 info Executing pipeline PreProcess in CreateProcessor.
2023-05-18T17:20:05 info Executing pipeline RunConfig in CreateProcessor.
2023-05-18T17:20:05 info Executing pipeline MountRootfs in CreateProcessor.
2023-05-18T17:23:17 info Executing pipeline MirrorRegistry in CreateProcessor.
INFO [2023-05-18 17:23:18] >> untar-registry.sh was not found, skip decompression registry or execute sealos run labring/registry:untar
INFO [2023-05-18 17:23:26] >> untar-registry.sh was not found, skip decompression registry or execute sealos run labring/registry:untar
INFO [2023-05-18 17:23:53] >> untar-registry.sh was not found, skip decompression registry or execute sealos run labring/registry:untar
INFO [2023-05-18 17:24:12] >> untar-registry.sh was not found, skip decompression registry or execute sealos run labring/registry:untar
2023-05-18T17:24:13 info Executing pipeline Bootstrap in CreateProcessor
10.251.80.126:22 INFO [2023-05-18 17:24:14] >> check root,port,cri success
10.251.80.129:22 INFO [2023-05-18 17:24:14] >> check root,port,cri success
10.251.80.127:22 INFO [2023-05-18 17:24:14] >> check root,port,cri success
10.251.80.128:22 INFO [2023-05-18 17:24:14] >> check root,port,cri success
INFO [2023-05-18 17:24:20] >> check root,port,cri success
2023-05-18T17:24:20 info domain sealos.hub delete success
2023-05-18T17:24:20 info domain sealos.hub:10.251.80.125 append success
10.251.80.126:22 2023-05-18T17:24:21 info domain sealos.hub delete success
10.251.80.126:22 2023-05-18T17:24:21 info domain sealos.hub:10.251.80.125 append success
10.251.80.127:22 2023-05-18T17:24:21 info domain sealos.hub delete success
10.251.80.127:22 2023-05-18T17:24:21 info domain sealos.hub:10.251.80.125 append success
10.251.80.129:22 2023-05-18T17:24:21 info domain sealos.hub delete success
10.251.80.129:22 2023-05-18T17:24:21 info domain sealos.hub:10.251.80.125 append success
10.251.80.128:22 2023-05-18T17:24:21 info domain sealos.hub delete success
10.251.80.128:22 2023-05-18T17:24:21 info domain sealos.hub:10.251.80.125 append success
INFO [2023-05-18 17:24:22] >> Health check registry!
INFO [2023-05-18 17:24:22] >> registry is running
INFO [2023-05-18 17:24:22] >> init registry success
Failed to enable unit: Unit file /etc/systemd/system/containerd.service is masked.
Failed to restart containerd.service: Unit containerd.service is masked.
INFO [2023-05-18 17:24:25] >> Health check containerd!
ERROR [2023-05-18 17:24:25] >> containerd status is error
ERROR [2023-05-18 17:24:25] >> ====init containerd failed!====
10.251.80.126:22 INFO [2023-05-18 17:24:26] >> Health check containerd!
10.251.80.126:22 INFO [2023-05-18 17:24:26] >> containerd is running
10.251.80.126:22 INFO [2023-05-18 17:24:26] >> init containerd success
10.251.80.127:22 INFO [2023-05-18 17:24:27] >> Health check containerd!
10.251.80.127:22 INFO [2023-05-18 17:24:27] >> containerd is running
10.251.80.127:22 INFO [2023-05-18 17:24:27] >> init containerd success
10.251.80.129:22 INFO [2023-05-18 17:24:27] >> Health check containerd!
10.251.80.129:22 INFO [2023-05-18 17:24:27] >> containerd is running
10.251.80.129:22 INFO [2023-05-18 17:24:27] >> init containerd success
10.251.80.128:22 INFO [2023-05-18 17:24:27] >> Health check containerd!
10.251.80.128:22 INFO [2023-05-18 17:24:27] >> containerd is running
10.251.80.128:22 INFO [2023-05-18 17:24:27] >> init containerd success
10.251.80.126:22 INFO [2023-05-18 17:24:27] >> Health check image-cri-shim!
10.251.80.126:22 INFO [2023-05-18 17:24:27] >> image-cri-shim is running
10.251.80.126:22 INFO [2023-05-18 17:24:27] >> init shim success
10.251.80.126:22 127.0.0.1 localhost
10.251.80.126:22 ::1 ip6-localhost ip6-loopback
10.251.80.126:22 Applying /etc/sysctl.d/10-console-messages.conf ...
10.251.80.126:22 kernel.printk = 4 4 1 7 10.251.80.126:22 Applying /etc/sysctl.d/10-ipv6-privacy.conf ... 10.251.80.126:22 net.ipv6.conf.all.use_tempaddr = 2 10.251.80.126:22 net.ipv6.conf.default.use_tempaddr = 2 10.251.80.126:22 Applying /etc/sysctl.d/10-kernel-hardening.conf ... 10.251.80.126:22 kernel.kptr_restrict = 1 10.251.80.126:22 Applying /etc/sysctl.d/10-link-restrictions.conf ... 10.251.80.126:22 fs.protected_hardlinks = 1 10.251.80.126:22 fs.protected_symlinks = 1 10.251.80.126:22 Applying /etc/sysctl.d/10-magic-sysrq.conf ... 10.251.80.126:22 kernel.sysrq = 176 10.251.80.126:22 Applying /etc/sysctl.d/10-network-security.conf ... 10.251.80.126:22 net.ipv4.conf.default.rp_filter = 2 10.251.80.126:22 net.ipv4.conf.all.rp_filter = 2 10.251.80.126:22 Applying /etc/sysctl.d/10-ptrace.conf ... 10.251.80.126:22 kernel.yama.ptrace_scope = 1 10.251.80.126:22 Applying /etc/sysctl.d/10-zeropage.conf ... 10.251.80.126:22 vm.mmap_min_addr = 65536 10.251.80.126:22 Applying /usr/lib/sysctl.d/50-default.conf ... 10.251.80.126:22 net.ipv4.conf.default.promote_secondaries = 1 10.251.80.126:22 sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument 10.251.80.126:22 net.ipv4.ping_group_range = 0 2147483647 10.251.80.126:22 net.core.default_qdisc = fq_codel 10.251.80.126:22 fs.protected_regular = 1 10.251.80.126:22 fs.protected_fifos = 1 10.251.80.126:22 Applying /usr/lib/sysctl.d/50-pid-max.conf ... 10.251.80.126:22 kernel.pid_max = 4194304 10.251.80.126:22 Applying /etc/sysctl.d/99-sysctl.conf ... 10.251.80.126:22 Applying /usr/lib/sysctl.d/protect-links.conf ... 10.251.80.126:22 fs.protected_fifos = 1 10.251.80.126:22 fs.protected_hardlinks = 1 10.251.80.126:22 fs.protected_regular = 2 10.251.80.126:22 fs.protected_symlinks = 1 10.251.80.126:22 Applying /etc/sysctl.d/sealos-k8s.conf ... 10.251.80.126:22 net.bridge.bridge-nf-call-ip6tables = 1 10.251.80.126:22 net.bridge.bridge-nf-call-iptables = 1 10.251.80.126:22 net.ipv4.conf.all.rp_filter = 0 10.251.80.126:22 net.ipv4.ip_forward = 1 10.251.80.126:22 sysctl: setting key "net.ipv4.ip_local_port_range": Invalid argument 10.251.80.126:22 net.core.somaxconn = 65535 10.251.80.126:22 fs.file-max = 1048576 10.251.80.126:22 Applying /etc/sysctl.conf ... 10.251.80.127:22 INFO [2023-05-18 17:24:28] >> Health check image-cri-shim! 10.251.80.127:22 INFO [2023-05-18 17:24:28] >> image-cri-shim is running 10.251.80.127:22 INFO [2023-05-18 17:24:28] >> init shim success 10.251.80.127:22 127.0.0.1 localhost 10.251.80.127:22 ::1 ip6-localhost ip6-loopback 10.251.80.127:22 Applying /etc/sysctl.d/10-console-messages.conf ... 10.251.80.127:22 kernel.printk = 4 4 1 7 10.251.80.127:22 Applying /etc/sysctl.d/10-ipv6-privacy.conf ... 10.251.80.127:22 net.ipv6.conf.all.use_tempaddr = 2 10.251.80.127:22 net.ipv6.conf.default.use_tempaddr = 2 10.251.80.127:22 Applying /etc/sysctl.d/10-kernel-hardening.conf ... 10.251.80.127:22 kernel.kptr_restrict = 1 10.251.80.127:22 Applying /etc/sysctl.d/10-link-restrictions.conf ... 10.251.80.127:22 fs.protected_hardlinks = 1 10.251.80.127:22 fs.protected_symlinks = 1 10.251.80.127:22 Applying /etc/sysctl.d/10-magic-sysrq.conf ... 10.251.80.127:22 kernel.sysrq = 176 10.251.80.127:22 Applying /etc/sysctl.d/10-network-security.conf ... 10.251.80.127:22 net.ipv4.conf.default.rp_filter = 2 10.251.80.127:22 net.ipv4.conf.all.rp_filter = 2 10.251.80.127:22 Applying /etc/sysctl.d/10-ptrace.conf ... 10.251.80.127:22 kernel.yama.ptrace_scope = 1 10.251.80.127:22 Applying /etc/sysctl.d/10-zeropage.conf ... 
10.251.80.127:22 vm.mmap_min_addr = 65536 10.251.80.127:22 Applying /usr/lib/sysctl.d/50-default.conf ... 10.251.80.127:22 net.ipv4.conf.default.promote_secondaries = 1 10.251.80.127:22 sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument 10.251.80.127:22 net.ipv4.ping_group_range = 0 2147483647 10.251.80.127:22 net.core.default_qdisc = fq_codel 10.251.80.127:22 fs.protected_regular = 1 10.251.80.127:22 fs.protected_fifos = 1 10.251.80.127:22 Applying /usr/lib/sysctl.d/50-pid-max.conf ... 10.251.80.127:22 kernel.pid_max = 4194304 10.251.80.127:22 Applying /etc/sysctl.d/99-sysctl.conf ... 10.251.80.127:22 Applying /usr/lib/sysctl.d/protect-links.conf ... 10.251.80.127:22 fs.protected_fifos = 1 10.251.80.127:22 fs.protected_hardlinks = 1 10.251.80.127:22 fs.protected_regular = 2 10.251.80.127:22 fs.protected_symlinks = 1 10.251.80.127:22 Applying /etc/sysctl.d/sealos-k8s.conf ... 10.251.80.127:22 net.bridge.bridge-nf-call-ip6tables = 1 10.251.80.127:22 net.bridge.bridge-nf-call-iptables = 1 10.251.80.127:22 net.ipv4.conf.all.rp_filter = 0 10.251.80.127:22 net.ipv4.ip_forward = 1 10.251.80.127:22 sysctl: setting key "net.ipv4.ip_local_port_range": Invalid argument 10.251.80.127:22 net.core.somaxconn = 65535 10.251.80.127:22 fs.file-max = 1048576 10.251.80.127:22 Applying /etc/sysctl.conf ... 10.251.80.129:22 INFO [2023-05-18 17:24:28] >> Health check image-cri-shim! 10.251.80.129:22 INFO [2023-05-18 17:24:28] >> image-cri-shim is running 10.251.80.129:22 INFO [2023-05-18 17:24:28] >> init shim success 10.251.80.129:22 127.0.0.1 localhost 10.251.80.129:22 ::1 ip6-localhost ip6-loopback 10.251.80.129:22 Applying /etc/sysctl.d/10-console-messages.conf ... 10.251.80.129:22 kernel.printk = 4 4 1 7 10.251.80.129:22 Applying /etc/sysctl.d/10-ipv6-privacy.conf ... 10.251.80.129:22 net.ipv6.conf.all.use_tempaddr = 2 10.251.80.129:22 net.ipv6.conf.default.use_tempaddr = 2 10.251.80.129:22 Applying /etc/sysctl.d/10-kernel-hardening.conf ... 10.251.80.129:22 kernel.kptr_restrict = 1 10.251.80.129:22 Applying /etc/sysctl.d/10-link-restrictions.conf ... 10.251.80.129:22 fs.protected_hardlinks = 1 10.251.80.129:22 fs.protected_symlinks = 1 10.251.80.129:22 Applying /etc/sysctl.d/10-magic-sysrq.conf ... 10.251.80.129:22 kernel.sysrq = 176 10.251.80.129:22 Applying /etc/sysctl.d/10-network-security.conf ... 10.251.80.129:22 net.ipv4.conf.default.rp_filter = 2 10.251.80.129:22 net.ipv4.conf.all.rp_filter = 2 10.251.80.129:22 Applying /etc/sysctl.d/10-ptrace.conf ... 10.251.80.129:22 kernel.yama.ptrace_scope = 1 10.251.80.129:22 Applying /etc/sysctl.d/10-zeropage.conf ... 10.251.80.129:22 vm.mmap_min_addr = 65536 10.251.80.129:22 Applying /usr/lib/sysctl.d/50-default.conf ... 10.251.80.129:22 net.ipv4.conf.default.promote_secondaries = 1 10.251.80.129:22 sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument 10.251.80.129:22 net.ipv4.ping_group_range = 0 2147483647 10.251.80.129:22 net.core.default_qdisc = fq_codel 10.251.80.129:22 fs.protected_regular = 1 10.251.80.129:22 fs.protected_fifos = 1 10.251.80.129:22 Applying /usr/lib/sysctl.d/50-pid-max.conf ... 10.251.80.129:22 kernel.pid_max = 4194304 10.251.80.129:22 Applying /etc/sysctl.d/99-sysctl.conf ... 10.251.80.129:22 Applying /usr/lib/sysctl.d/protect-links.conf ... 10.251.80.129:22 fs.protected_fifos = 1 10.251.80.129:22 fs.protected_hardlinks = 1 10.251.80.129:22 fs.protected_regular = 2 10.251.80.129:22 fs.protected_symlinks = 1 10.251.80.129:22 Applying /etc/sysctl.d/sealos-k8s.conf ... 
10.251.80.129:22 net.bridge.bridge-nf-call-ip6tables = 1 10.251.80.129:22 net.bridge.bridge-nf-call-iptables = 1 10.251.80.129:22 net.ipv4.conf.all.rp_filter = 0 10.251.80.129:22 net.ipv4.ip_forward = 1 10.251.80.129:22 sysctl: setting key "net.ipv4.ip_local_port_range": Invalid argument 10.251.80.129:22 net.core.somaxconn = 65535 10.251.80.129:22 fs.file-max = 1048576 10.251.80.129:22 Applying /etc/sysctl.conf ... 10.251.80.126:22 Firewall stopped and disabled on system startup 10.251.80.127:22 Firewall stopped and disabled on system startup 10.251.80.129:22 Firewall stopped and disabled on system startup 10.251.80.128:22 INFO [2023-05-18 17:24:28] >> Health check image-cri-shim! 10.251.80.128:22 INFO [2023-05-18 17:24:28] >> image-cri-shim is running 10.251.80.128:22 INFO [2023-05-18 17:24:28] >> init shim success 10.251.80.128:22 127.0.0.1 localhost 10.251.80.128:22 ::1 ip6-localhost ip6-loopback 10.251.80.128:22 Applying /etc/sysctl.d/10-console-messages.conf ... 10.251.80.128:22 kernel.printk = 4 4 1 7 10.251.80.128:22 Applying /etc/sysctl.d/10-ipv6-privacy.conf ... 10.251.80.128:22 net.ipv6.conf.all.use_tempaddr = 2 10.251.80.128:22 net.ipv6.conf.default.use_tempaddr = 2 10.251.80.128:22 Applying /etc/sysctl.d/10-kernel-hardening.conf ... 10.251.80.128:22 kernel.kptr_restrict = 1 10.251.80.128:22 Applying /etc/sysctl.d/10-link-restrictions.conf ... 10.251.80.128:22 fs.protected_hardlinks = 1 10.251.80.128:22 fs.protected_symlinks = 1 10.251.80.128:22 Applying /etc/sysctl.d/10-magic-sysrq.conf ... 10.251.80.128:22 kernel.sysrq = 176 10.251.80.128:22 Applying /etc/sysctl.d/10-network-security.conf ... 10.251.80.128:22 net.ipv4.conf.default.rp_filter = 2 10.251.80.128:22 net.ipv4.conf.all.rp_filter = 2 10.251.80.128:22 Applying /etc/sysctl.d/10-ptrace.conf ... 10.251.80.128:22 kernel.yama.ptrace_scope = 1 10.251.80.128:22 Applying /etc/sysctl.d/10-zeropage.conf ... 10.251.80.128:22 vm.mmap_min_addr = 65536 10.251.80.128:22 Applying /usr/lib/sysctl.d/50-default.conf ... 10.251.80.128:22 net.ipv4.conf.default.promote_secondaries = 1 10.251.80.128:22 sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument 10.251.80.128:22 net.ipv4.ping_group_range = 0 2147483647 10.251.80.128:22 net.core.default_qdisc = fq_codel 10.251.80.128:22 fs.protected_regular = 1 10.251.80.128:22 fs.protected_fifos = 1 10.251.80.128:22 Applying /usr/lib/sysctl.d/50-pid-max.conf ... 10.251.80.128:22 kernel.pid_max = 4194304 10.251.80.128:22 Applying /etc/sysctl.d/99-sysctl.conf ... 10.251.80.128:22 Applying /usr/lib/sysctl.d/protect-links.conf ... 10.251.80.128:22 fs.protected_fifos = 1 10.251.80.128:22 fs.protected_hardlinks = 1 10.251.80.128:22 fs.protected_regular = 2 10.251.80.128:22 fs.protected_symlinks = 1 10.251.80.128:22 Applying /etc/sysctl.d/sealos-k8s.conf ... 10.251.80.128:22 net.bridge.bridge-nf-call-ip6tables = 1 10.251.80.128:22 net.bridge.bridge-nf-call-iptables = 1 10.251.80.128:22 net.ipv4.conf.all.rp_filter = 0 10.251.80.128:22 net.ipv4.ip_forward = 1 10.251.80.128:22 sysctl: setting key "net.ipv4.ip_local_port_range": Invalid argument 10.251.80.128:22 net.core.somaxconn = 65535 10.251.80.128:22 fs.file-max = 1048576 10.251.80.128:22 Applying /etc/sysctl.conf ... 
10.251.80.128:22 Firewall stopped and disabled on system startup
10.251.80.127:22 Image is up to date for sha256:221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165
10.251.80.127:22 INFO [2023-05-18 17:24:47] >> init kubelet success
10.251.80.127:22 INFO [2023-05-18 17:24:47] >> init rootfs success
10.251.80.126:22 Image is up to date for sha256:221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165
10.251.80.126:22 INFO [2023-05-18 17:25:45] >> init kubelet success
10.251.80.126:22 INFO [2023-05-18 17:25:45] >> init rootfs success
10.251.80.128:22 Image is up to date for sha256:221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165
10.251.80.129:22 Image is up to date for sha256:221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165
10.251.80.128:22 INFO [2023-05-18 17:25:53] >> init kubelet success
10.251.80.128:22 INFO [2023-05-18 17:25:53] >> init rootfs success
10.251.80.129:22 INFO [2023-05-18 17:25:53] >> init kubelet success
10.251.80.129:22 INFO [2023-05-18 17:25:53] >> init rootfs success
2023-05-18T17:25:53 error Applied to cluster error: exit status 1
Error: exit status 1

Reference materials you have seen.

No response

cuisongliu commented 1 year ago

Failed to enable unit: Unit file /etc/systemd/system/containerd.service is masked.
Failed to restart containerd.service: Unit containerd.service is masked.
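For anyone hitting the same lines: a masked unit is a symlink to /dev/null that tells systemd to refuse all start requests, so sealos cannot bring its containerd up on that host. A minimal sketch of inspecting and clearing the mask on the failing master, assuming a root shell:

```bash
# "masked" in this output confirms the unit is blocked.
systemctl status containerd.service

# The mask is typically a symlink to /dev/null at this path.
ls -l /etc/systemd/system/containerd.service

# Remove the mask, reload unit files, then enable and start containerd.
systemctl unmask containerd.service
systemctl daemon-reload
systemctl enable --now containerd.service
```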

SsLaLal commented 1 year ago

The machines had containerd and Docker preinstalled; I have now uninstalled all of the preinstalled software.
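Note that uninstalling the packages does not necessarily remove masks or leftover unit files under /etc/systemd/system. One possible way to verify the hosts are clean before rerunning sealos, assuming systemd-based machines:

```bash
# Any unit still listed as "masked" here will block sealos' containerd setup.
systemctl list-unit-files | grep -E 'containerd|docker'

# Unmask anything left behind by the previous installs, then reload systemd.
systemctl unmask containerd.service docker.service 2>/dev/null
systemctl daemon-reload
```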

SsLaLal commented 1 year ago

After I updated the machine environment, the sealos run log still reports many lines with keywords such as error, not found, and Invalid argument. Do I need to pay attention to these? Will they affect the outcome of the cluster setup? @cuisongliu

The log is as follows:

2023-05-19T09:02:26 info Start to create a new cluster: master [10.251.80.125 10.251.80.126 10.251.80.127], worker [10.251.80.128 10.251.80.129], registry 10.251.80.125
2023-05-19T09:02:26 info Executing pipeline Check in CreateProcessor.
2023-05-19T09:02:27 info checker:hostname []
2023-05-19T09:02:27 info checker:timeSync []
2023-05-19T09:02:27 info Executing pipeline PreProcess in CreateProcessor.
2023-05-19T09:02:29 info Executing pipeline RunConfig in CreateProcessor.
2023-05-19T09:02:29 info Executing pipeline MountRootfs in CreateProcessor.
2023-05-19T09:05:09 info Executing pipeline MirrorRegistry in CreateProcessor.
INFO [2023-05-19 09:05:09] >> untar-registry.sh was not found, skip decompression registry or execute sealos run labring/registry:untar
INFO [2023-05-19 09:05:09] >> untar-registry.sh was not found, skip decompression registry or execute sealos run labring/registry:untar
INFO [2023-05-19 09:05:09] >> untar-registry.sh was not found, skip decompression registry or execute sealos run labring/registry:untar
INFO [2023-05-19 09:05:09] >> untar-registry.sh was not found, skip decompression registry or execute sealos run labring/registry:untar
2023-05-19T09:05:09 info Executing pipeline Bootstrap in CreateProcessor
10.251.80.127:22 INFO [2023-05-19 09:05:10] >> check root,port,cri success
10.251.80.129:22 INFO [2023-05-19 09:05:10] >> check root,port,cri success
10.251.80.128:22 INFO [2023-05-19 09:05:10] >> check root,port,cri success
10.251.80.126:22 INFO [2023-05-19 09:05:10] >> check root,port,cri success
INFO [2023-05-19 09:05:17] >> check root,port,cri success
2023-05-19T09:05:17 info domain sealos.hub:10.251.80.125 append success
10.251.80.126:22 2023-05-19T09:05:17 info domain sealos.hub delete success
10.251.80.126:22 2023-05-19T09:05:17 info domain sealos.hub:10.251.80.125 append success
10.251.80.129:22 2023-05-19T09:05:17 info domain sealos.hub delete success
10.251.80.129:22 2023-05-19T09:05:17 info domain sealos.hub:10.251.80.125 append success
10.251.80.127:22 2023-05-19T09:05:17 info domain sealos.hub delete success
10.251.80.127:22 2023-05-19T09:05:17 info domain sealos.hub:10.251.80.125 append success
10.251.80.128:22 2023-05-19T09:05:18 info domain sealos.hub delete success
10.251.80.128:22 2023-05-19T09:05:18 info domain sealos.hub:10.251.80.125 append success
Created symlink /etc/systemd/system/multi-user.target.wants/registry.service → /etc/systemd/system/registry.service.
INFO [2023-05-19 09:05:27] >> Health check registry!
INFO [2023-05-19 09:05:28] >> registry is running
INFO [2023-05-19 09:05:28] >> init registry success
10.251.80.126:22 INFO [2023-05-19 09:05:34] >> Health check containerd!
10.251.80.126:22 INFO [2023-05-19 09:05:34] >> containerd is running
10.251.80.126:22 INFO [2023-05-19 09:05:34] >> init containerd success
10.251.80.128:22 INFO [2023-05-19 09:05:34] >> Health check containerd!
10.251.80.128:22 INFO [2023-05-19 09:05:34] >> containerd is running
10.251.80.128:22 INFO [2023-05-19 09:05:34] >> init containerd success
10.251.80.129:22 INFO [2023-05-19 09:05:34] >> Health check containerd!
10.251.80.129:22 INFO [2023-05-19 09:05:34] >> containerd is running
10.251.80.129:22 INFO [2023-05-19 09:05:34] >> init containerd success
10.251.80.127:22 INFO [2023-05-19 09:05:34] >> Health check containerd!
10.251.80.127:22 INFO [2023-05-19 09:05:35] >> containerd is running
10.251.80.127:22 INFO [2023-05-19 09:05:35] >> init containerd success
10.251.80.127:22 INFO [2023-05-19 09:05:36] >> Health check image-cri-shim!
10.251.80.127:22 INFO [2023-05-19 09:05:36] >> image-cri-shim is running
10.251.80.127:22 INFO [2023-05-19 09:05:36] >> init shim success
10.251.80.127:22 127.0.0.1 localhost
10.251.80.127:22 ::1 ip6-localhost ip6-loopback
10.251.80.127:22 Applying /etc/sysctl.d/10-console-messages.conf ...
10.251.80.127:22 kernel.printk = 4 4 1 7
10.251.80.127:22 Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
10.251.80.127:22 net.ipv6.conf.all.use_tempaddr = 2
10.251.80.127:22 net.ipv6.conf.default.use_tempaddr = 2
10.251.80.127:22 Applying /etc/sysctl.d/10-kernel-hardening.conf ...
10.251.80.127:22 kernel.kptr_restrict = 1 10.251.80.127:22 Applying /etc/sysctl.d/10-link-restrictions.conf ... 10.251.80.127:22 fs.protected_hardlinks = 1 10.251.80.127:22 fs.protected_symlinks = 1 10.251.80.127:22 Applying /etc/sysctl.d/10-magic-sysrq.conf ... 10.251.80.127:22 kernel.sysrq = 176 10.251.80.127:22 Applying /etc/sysctl.d/10-network-security.conf ... 10.251.80.127:22 net.ipv4.conf.default.rp_filter = 2 10.251.80.127:22 net.ipv4.conf.all.rp_filter = 2 10.251.80.127:22 Applying /etc/sysctl.d/10-ptrace.conf ... 10.251.80.127:22 kernel.yama.ptrace_scope = 1 10.251.80.127:22 Applying /etc/sysctl.d/10-zeropage.conf ... 10.251.80.127:22 vm.mmap_min_addr = 65536 10.251.80.127:22 Applying /usr/lib/sysctl.d/50-default.conf ... 10.251.80.127:22 net.ipv4.conf.default.promote_secondaries = 1 10.251.80.127:22 sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument 10.251.80.127:22 net.ipv4.ping_group_range = 0 2147483647 10.251.80.127:22 net.core.default_qdisc = fq_codel 10.251.80.127:22 fs.protected_regular = 1 10.251.80.127:22 fs.protected_fifos = 1 10.251.80.127:22 Applying /usr/lib/sysctl.d/50-pid-max.conf ... 10.251.80.127:22 kernel.pid_max = 4194304 10.251.80.127:22 Applying /etc/sysctl.d/99-sysctl.conf ... 10.251.80.127:22 Applying /usr/lib/sysctl.d/protect-links.conf ... 10.251.80.127:22 fs.protected_fifos = 1 10.251.80.127:22 fs.protected_hardlinks = 1 10.251.80.127:22 fs.protected_regular = 2 10.251.80.127:22 fs.protected_symlinks = 1 10.251.80.127:22 Applying /etc/sysctl.d/sealos-k8s.conf ... 10.251.80.127:22 net.bridge.bridge-nf-call-ip6tables = 1 10.251.80.127:22 net.bridge.bridge-nf-call-iptables = 1 10.251.80.127:22 net.ipv4.conf.all.rp_filter = 0 10.251.80.127:22 net.ipv4.ip_forward = 1 10.251.80.127:22 sysctl: setting key "net.ipv4.ip_local_port_range": Invalid argument 10.251.80.127:22 net.core.somaxconn = 65535 10.251.80.127:22 fs.file-max = 1048576 10.251.80.127:22 Applying /etc/sysctl.conf ... 10.251.80.128:22 INFO [2023-05-19 09:05:37] >> Health check image-cri-shim! 10.251.80.128:22 INFO [2023-05-19 09:05:37] >> image-cri-shim is running 10.251.80.128:22 INFO [2023-05-19 09:05:37] >> init shim success 10.251.80.128:22 127.0.0.1 localhost 10.251.80.128:22 ::1 ip6-localhost ip6-loopback 10.251.80.128:22 Applying /etc/sysctl.d/10-console-messages.conf ... 10.251.80.128:22 kernel.printk = 4 4 1 7 10.251.80.128:22 Applying /etc/sysctl.d/10-ipv6-privacy.conf ... 10.251.80.128:22 net.ipv6.conf.all.use_tempaddr = 2 10.251.80.128:22 net.ipv6.conf.default.use_tempaddr = 2 10.251.80.128:22 Applying /etc/sysctl.d/10-kernel-hardening.conf ... 10.251.80.128:22 kernel.kptr_restrict = 1 10.251.80.128:22 Applying /etc/sysctl.d/10-link-restrictions.conf ... 10.251.80.128:22 fs.protected_hardlinks = 1 10.251.80.128:22 fs.protected_symlinks = 1 10.251.80.128:22 Applying /etc/sysctl.d/10-magic-sysrq.conf ... 10.251.80.128:22 kernel.sysrq = 176 10.251.80.128:22 Applying /etc/sysctl.d/10-network-security.conf ... 10.251.80.128:22 net.ipv4.conf.default.rp_filter = 2 10.251.80.128:22 net.ipv4.conf.all.rp_filter = 2 10.251.80.128:22 Applying /etc/sysctl.d/10-ptrace.conf ... 10.251.80.128:22 kernel.yama.ptrace_scope = 1 10.251.80.128:22 Applying /etc/sysctl.d/10-zeropage.conf ... 10.251.80.128:22 vm.mmap_min_addr = 65536 10.251.80.128:22 Applying /usr/lib/sysctl.d/50-default.conf ... 
10.251.80.128:22 net.ipv4.conf.default.promote_secondaries = 1 10.251.80.128:22 sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument 10.251.80.128:22 net.ipv4.ping_group_range = 0 2147483647 10.251.80.128:22 net.core.default_qdisc = fq_codel 10.251.80.128:22 fs.protected_regular = 1 10.251.80.128:22 fs.protected_fifos = 1 10.251.80.128:22 Applying /usr/lib/sysctl.d/50-pid-max.conf ... 10.251.80.128:22 kernel.pid_max = 4194304 10.251.80.128:22 Applying /etc/sysctl.d/99-sysctl.conf ... 10.251.80.128:22 Applying /usr/lib/sysctl.d/protect-links.conf ... 10.251.80.128:22 fs.protected_fifos = 1 10.251.80.128:22 fs.protected_hardlinks = 1 10.251.80.128:22 fs.protected_regular = 2 10.251.80.128:22 fs.protected_symlinks = 1 10.251.80.128:22 Applying /etc/sysctl.d/sealos-k8s.conf ... 10.251.80.128:22 net.bridge.bridge-nf-call-ip6tables = 1 10.251.80.128:22 net.bridge.bridge-nf-call-iptables = 1 10.251.80.128:22 net.ipv4.conf.all.rp_filter = 0 10.251.80.128:22 net.ipv4.ip_forward = 1 10.251.80.128:22 sysctl: setting key "net.ipv4.ip_local_port_range": Invalid argument 10.251.80.128:22 net.core.somaxconn = 65535 10.251.80.128:22 fs.file-max = 1048576 10.251.80.128:22 Applying /etc/sysctl.conf ... 10.251.80.127:22 Firewall stopped and disabled on system startup 10.251.80.129:22 INFO [2023-05-19 09:05:37] >> Health check image-cri-shim! 10.251.80.129:22 INFO [2023-05-19 09:05:37] >> image-cri-shim is running 10.251.80.129:22 INFO [2023-05-19 09:05:37] >> init shim success 10.251.80.129:22 127.0.0.1 localhost 10.251.80.129:22 ::1 ip6-localhost ip6-loopback 10.251.80.128:22 Firewall stopped and disabled on system startup 10.251.80.129:22 Applying /etc/sysctl.d/10-console-messages.conf ... 10.251.80.129:22 kernel.printk = 4 4 1 7 10.251.80.129:22 Applying /etc/sysctl.d/10-ipv6-privacy.conf ... 10.251.80.129:22 net.ipv6.conf.all.use_tempaddr = 2 10.251.80.129:22 net.ipv6.conf.default.use_tempaddr = 2 10.251.80.129:22 Applying /etc/sysctl.d/10-kernel-hardening.conf ... 10.251.80.129:22 kernel.kptr_restrict = 1 10.251.80.129:22 Applying /etc/sysctl.d/10-link-restrictions.conf ... 10.251.80.129:22 fs.protected_hardlinks = 1 10.251.80.129:22 fs.protected_symlinks = 1 10.251.80.129:22 Applying /etc/sysctl.d/10-magic-sysrq.conf ... 10.251.80.129:22 kernel.sysrq = 176 10.251.80.129:22 Applying /etc/sysctl.d/10-network-security.conf ... 10.251.80.129:22 net.ipv4.conf.default.rp_filter = 2 10.251.80.129:22 net.ipv4.conf.all.rp_filter = 2 10.251.80.129:22 Applying /etc/sysctl.d/10-ptrace.conf ... 10.251.80.129:22 kernel.yama.ptrace_scope = 1 10.251.80.129:22 Applying /etc/sysctl.d/10-zeropage.conf ... 10.251.80.129:22 vm.mmap_min_addr = 65536 10.251.80.129:22 Applying /usr/lib/sysctl.d/50-default.conf ... 10.251.80.129:22 net.ipv4.conf.default.promote_secondaries = 1 10.251.80.129:22 sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument 10.251.80.129:22 net.ipv4.ping_group_range = 0 2147483647 10.251.80.129:22 net.core.default_qdisc = fq_codel 10.251.80.129:22 fs.protected_regular = 1 10.251.80.129:22 fs.protected_fifos = 1 10.251.80.129:22 Applying /usr/lib/sysctl.d/50-pid-max.conf ... 10.251.80.129:22 kernel.pid_max = 4194304 10.251.80.129:22 Applying /etc/sysctl.d/99-sysctl.conf ... 10.251.80.129:22 Applying /usr/lib/sysctl.d/protect-links.conf ... 
10.251.80.129:22 fs.protected_fifos = 1 10.251.80.129:22 fs.protected_hardlinks = 1 10.251.80.129:22 fs.protected_regular = 2 10.251.80.129:22 fs.protected_symlinks = 1 10.251.80.129:22 Applying /etc/sysctl.d/sealos-k8s.conf ... 10.251.80.129:22 net.bridge.bridge-nf-call-ip6tables = 1 10.251.80.129:22 net.bridge.bridge-nf-call-iptables = 1 10.251.80.129:22 net.ipv4.conf.all.rp_filter = 0 10.251.80.129:22 net.ipv4.ip_forward = 1 10.251.80.129:22 sysctl: setting key "net.ipv4.ip_local_port_range": Invalid argument 10.251.80.129:22 net.core.somaxconn = 65535 10.251.80.129:22 fs.file-max = 1048576 10.251.80.129:22 Applying /etc/sysctl.conf ... 10.251.80.126:22 INFO [2023-05-19 09:05:37] >> Health check image-cri-shim! 10.251.80.126:22 INFO [2023-05-19 09:05:37] >> image-cri-shim is running 10.251.80.126:22 INFO [2023-05-19 09:05:37] >> init shim success 10.251.80.126:22 127.0.0.1 localhost 10.251.80.126:22 ::1 ip6-localhost ip6-loopback 10.251.80.126:22 Applying /etc/sysctl.d/10-console-messages.conf ... 10.251.80.126:22 kernel.printk = 4 4 1 7 10.251.80.126:22 Applying /etc/sysctl.d/10-ipv6-privacy.conf ... 10.251.80.126:22 net.ipv6.conf.all.use_tempaddr = 2 10.251.80.126:22 net.ipv6.conf.default.use_tempaddr = 2 10.251.80.126:22 Applying /etc/sysctl.d/10-kernel-hardening.conf ... 10.251.80.126:22 kernel.kptr_restrict = 1 10.251.80.126:22 Applying /etc/sysctl.d/10-link-restrictions.conf ... 10.251.80.126:22 fs.protected_hardlinks = 1 10.251.80.126:22 fs.protected_symlinks = 1 10.251.80.126:22 Applying /etc/sysctl.d/10-magic-sysrq.conf ... 10.251.80.126:22 kernel.sysrq = 176 10.251.80.126:22 Applying /etc/sysctl.d/10-network-security.conf ... 10.251.80.126:22 net.ipv4.conf.default.rp_filter = 2 10.251.80.126:22 net.ipv4.conf.all.rp_filter = 2 10.251.80.126:22 Applying /etc/sysctl.d/10-ptrace.conf ... 10.251.80.126:22 kernel.yama.ptrace_scope = 1 10.251.80.126:22 Applying /etc/sysctl.d/10-zeropage.conf ... 10.251.80.126:22 vm.mmap_min_addr = 65536 10.251.80.126:22 Applying /usr/lib/sysctl.d/50-default.conf ... 10.251.80.126:22 net.ipv4.conf.default.promote_secondaries = 1 10.251.80.126:22 sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument 10.251.80.126:22 net.ipv4.ping_group_range = 0 2147483647 10.251.80.126:22 net.core.default_qdisc = fq_codel 10.251.80.126:22 fs.protected_regular = 1 10.251.80.126:22 fs.protected_fifos = 1 10.251.80.126:22 Applying /usr/lib/sysctl.d/50-pid-max.conf ... 10.251.80.126:22 kernel.pid_max = 4194304 10.251.80.126:22 Applying /etc/sysctl.d/99-sysctl.conf ... 10.251.80.126:22 Applying /usr/lib/sysctl.d/protect-links.conf ... 10.251.80.126:22 fs.protected_fifos = 1 10.251.80.126:22 fs.protected_hardlinks = 1 10.251.80.126:22 fs.protected_regular = 2 10.251.80.126:22 fs.protected_symlinks = 1 10.251.80.126:22 Applying /etc/sysctl.d/sealos-k8s.conf ... 10.251.80.126:22 net.bridge.bridge-nf-call-ip6tables = 1 10.251.80.126:22 net.bridge.bridge-nf-call-iptables = 1 10.251.80.126:22 net.ipv4.conf.all.rp_filter = 0 10.251.80.126:22 net.ipv4.ip_forward = 1 10.251.80.126:22 sysctl: setting key "net.ipv4.ip_local_port_range": Invalid argument 10.251.80.126:22 net.core.somaxconn = 65535 10.251.80.126:22 fs.file-max = 1048576 10.251.80.126:22 Applying /etc/sysctl.conf ... 10.251.80.129:22 Firewall stopped and disabled on system startup 10.251.80.126:22 Firewall stopped and disabled on system startup Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /etc/systemd/system/containerd.service. 
10.251.80.127:22 Image is up to date for sha256:221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165
10.251.80.127:22 INFO [2023-05-19 09:06:02] >> init kubelet success
10.251.80.127:22 INFO [2023-05-19 09:06:02] >> init rootfs success
10.251.80.128:22 Image is up to date for sha256:221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165
10.251.80.126:22 Image is up to date for sha256:221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165
10.251.80.126:22 INFO [2023-05-19 09:06:07] >> init kubelet success
10.251.80.126:22 INFO [2023-05-19 09:06:07] >> init rootfs success
10.251.80.129:22 Image is up to date for sha256:221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165
10.251.80.128:22 INFO [2023-05-19 09:06:10] >> init kubelet success
10.251.80.128:22 INFO [2023-05-19 09:06:10] >> init rootfs success
10.251.80.129:22 INFO [2023-05-19 09:06:15] >> init kubelet success
10.251.80.129:22 INFO [2023-05-19 09:06:15] >> init rootfs success
INFO [2023-05-19 09:06:28] >> Health check containerd!
INFO [2023-05-19 09:06:28] >> containerd is running
INFO [2023-05-19 09:06:28] >> init containerd success
Created symlink /etc/systemd/system/multi-user.target.wants/image-cri-shim.service → /etc/systemd/system/image-cri-shim.service.
INFO [2023-05-19 09:06:33] >> Health check image-cri-shim!
INFO [2023-05-19 09:06:34] >> image-cri-shim is running
INFO [2023-05-19 09:06:34] >> init shim success
127.0.0.1 localhost
::1 ip6-localhost ip6-loopback

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities and service account keys on each node and then running the following as root:

kubeadm join apiserver.cluster.local:6443 --token \
  --discovery-token-ca-cert-hash sha256:3c461bcef72cda53555b17e464ad91e2e7861172b0ab4cc7f5d5e19ea4665b87 \
  --control-plane --certificate-key

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join apiserver.cluster.local:6443 --token \
  --discovery-token-ca-cert-hash sha256:3c461bcef72cda53555b17e464ad91e2e7861172b0ab4cc7f5d5e19ea4665b87
2023-05-19T09:10:26 info Executing pipeline Join in CreateProcessor.
2023-05-19T09:10:26 info [10.251.80.126:22 10.251.80.127:22] will be added as master
2023-05-19T09:10:26 info start to init filesystem join masters...
2023-05-19T09:10:27 info start to copy static files to masters
2023-05-19T09:10:28 info start to copy kubeconfig files to masters
2023-05-19T09:10:29 info start to copy etc pki files to masters (1/1, 250 it/s)
2023-05-19T09:10:30 info start to get kubernetes token...
2023-05-19T09:10:32 info start to copy kubeadm join config to master: 10.251.80.127:22 2023-05-19T09:10:34 info start to copy kubeadm join config to master: 10.251.80.126:22 2023-05-19T09:10:37 info fetch certSANs from kubeadm configmap(1/1, 325 it/s) 2023-05-19T09:10:37 info start to join 10.251.80.126:22 as master 2023-05-19T09:10:37 info start to generator cert 10.251.80.126:22 as master 10.251.80.126:22 2023-05-19T09:10:38 info apiserver altNames : {map[apiserver.cluster.local:apiserver.cluster.local k8s-master02:k8s-master02 kubernetes:kubernetes kubernetes.default:kubernetes.default kubernetes.default.svc:kubernetes.default.svc kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local localhost:localhost] map[10.103.97.2:10.103.97.2 10.251.80.125:10.251.80.125 10.251.80.126:10.251.80.126 10.251.80.127:10.251.80.127 10.96.0.1:10.96.0.1 127.0.0.1:127.0.0.1]} 10.251.80.126:22 2023-05-19T09:10:38 info Etcd altnames : {map[k8s-master02:k8s-master02 localhost:localhost] map[10.251.80.126:10.251.80.126 127.0.0.1:127.0.0.1 ::1:::1]}, commonName : k8s-master02 10.251.80.126:22 2023-05-19T09:10:38 info sa.key sa.pub already exist 10.251.80.126:22 2023-05-19T09:10:40 info domain apiserver.cluster.local:10.251.80.125 append success 10.251.80.126:22 W0519 09:10:41.255530 51912 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration! 10.251.80.126:22 [preflight] Running pre-flight checks 10.251.80.126:22 [WARNING FileExisting-socat]: socat not found in system path 10.251.80.126:22 [preflight] Reading configuration from the cluster... 10.251.80.126:22 [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml' 10.251.80.126:22 W0519 09:10:41.607382 51912 utils.go:69] The recommended value for "healthzBindAddress" in "KubeletConfiguration" is: 127.0.0.1; the provided value is: 0.0.0.0 10.251.80.126:22 [preflight] Running pre-flight checks before initializing the new control plane instance 10.251.80.126:22 [preflight] Pulling images required for setting up a Kubernetes cluster 10.251.80.126:22 [preflight] This might take a minute or two, depending on the speed of your internet connection 10.251.80.126:22 [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' 10.251.80.126:22 [download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace 10.251.80.126:22 [certs] Using certificateDir folder "/etc/kubernetes/pki" 10.251.80.126:22 [certs] Using the existing "apiserver" certificate and key 10.251.80.126:22 [certs] Using the existing "apiserver-kubelet-client" certificate and key 10.251.80.126:22 [certs] Using the existing "etcd/server" certificate and key 10.251.80.126:22 [certs] Using the existing "etcd/peer" certificate and key 10.251.80.126:22 [certs] Using the existing "etcd/healthcheck-client" certificate and key 10.251.80.126:22 [certs] Using the existing "apiserver-etcd-client" certificate and key 10.251.80.126:22 [certs] Using the existing "front-proxy-client" certificate and key 10.251.80.126:22 [certs] Valid certificates and keys now exist in "/etc/kubernetes/pki" 10.251.80.126:22 [certs] Using the existing "sa" key 10.251.80.126:22 [kubeconfig] Generating kubeconfig files 10.251.80.126:22 [kubeconfig] Using kubeconfig folder "/etc/kubernetes" 
10.251.80.126:22 [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf" 10.251.80.126:22 W0519 09:12:07.472373 51912 kubeconfig.go:249] a kubeconfig file "/etc/kubernetes/controller-manager.conf" exists already but has an unexpected API Server URL: expected: https://10.251.80.126:6443, got: https://apiserver.cluster.local:6443 10.251.80.126:22 [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf" 10.251.80.126:22 W0519 09:12:07.752040 51912 kubeconfig.go:249] a kubeconfig file "/etc/kubernetes/scheduler.conf" exists already but has an unexpected API Server URL: expected: https://10.251.80.126:6443, got: https://apiserver.cluster.local:6443 10.251.80.126:22 [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf" 10.251.80.126:22 [control-plane] Using manifest folder "/etc/kubernetes/manifests" 10.251.80.126:22 [control-plane] Creating static Pod manifest for "kube-apiserver" 10.251.80.126:22 [control-plane] Creating static Pod manifest for "kube-controller-manager" 10.251.80.126:22 [control-plane] Creating static Pod manifest for "kube-scheduler" 10.251.80.126:22 [check-etcd] Checking that the etcd cluster is healthy 10.251.80.126:22 [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" 10.251.80.126:22 [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" 10.251.80.126:22 [kubelet-start] Starting the kubelet 10.251.80.126:22 [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap... 10.251.80.126:22 [etcd] Announced new etcd member joining to the existing etcd cluster 10.251.80.126:22 [etcd] Creating static Pod manifest for "etcd" 10.251.80.126:22 [etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s 10.251.80.126:22 The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation 10.251.80.126:22 [mark-control-plane] Marking the node k8s-master02 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers] 10.251.80.126:22 [mark-control-plane] Marking the node k8s-master02 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule] 10.251.80.126:22 10.251.80.126:22 This node has joined the cluster and a new control plane instance was created: 10.251.80.126:22 10.251.80.126:22 Certificate signing request was sent to apiserver and approval was received. 10.251.80.126:22 The Kubelet was informed of the new secure connection details. 10.251.80.126:22 Control plane label and taint were applied to the new node. 10.251.80.126:22 The Kubernetes control plane instances scaled up. 10.251.80.126:22 A new etcd member was added to the local/stacked etcd cluster. 10.251.80.126:22 10.251.80.126:22 To start administering your cluster from this node, you need to run the following as a regular user: 10.251.80.126:22 10.251.80.126:22 mkdir -p $HOME/.kube 10.251.80.126:22 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 10.251.80.126:22 sudo chown $(id -u):$(id -g) $HOME/.kube/config 10.251.80.126:22 10.251.80.126:22 Run 'kubectl get nodes' to see this node join the cluster. 
10.251.80.126:22 10.251.80.126:22 2023-05-19T09:12:39 info domain apiserver.cluster.local delete success 10.251.80.126:22 2023-05-19T09:12:39 info domain apiserver.cluster.local:10.251.80.126 append success 2023-05-19T09:12:40 info succeeded in joining 10.251.80.126:22 as master 2023-05-19T09:12:40 info start to join 10.251.80.127:22 as master 2023-05-19T09:12:40 info start to generator cert 10.251.80.127:22 as master 10.251.80.127:22 2023-05-19T09:12:41 info apiserver altNames : {map[apiserver.cluster.local:apiserver.cluster.local k8s-master03:k8s-master03 kubernetes:kubernetes kubernetes.default:kubernetes.default kubernetes.default.svc:kubernetes.default.svc kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local localhost:localhost] map[10.103.97.2:10.103.97.2 10.251.80.125:10.251.80.125 10.251.80.126:10.251.80.126 10.251.80.127:10.251.80.127 10.96.0.1:10.96.0.1 127.0.0.1:127.0.0.1]} 10.251.80.127:22 2023-05-19T09:12:41 info Etcd altnames : {map[k8s-master03:k8s-master03 localhost:localhost] map[10.251.80.127:10.251.80.127 127.0.0.1:127.0.0.1 ::1:::1]}, commonName : k8s-master03 10.251.80.127:22 2023-05-19T09:12:41 info sa.key sa.pub already exist 10.251.80.127:22 2023-05-19T09:12:44 info domain apiserver.cluster.local:10.251.80.125 append success 10.251.80.127:22 W0519 09:12:45.247107 48125 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration! 10.251.80.127:22 [preflight] Running pre-flight checks 10.251.80.127:22 [WARNING FileExisting-socat]: socat not found in system path 10.251.80.127:22 [preflight] Reading configuration from the cluster... 
10.251.80.127:22 [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml' 10.251.80.127:22 W0519 09:12:48.634467 48125 utils.go:69] The recommended value for "healthzBindAddress" in "KubeletConfiguration" is: 127.0.0.1; the provided value is: 0.0.0.0 10.251.80.127:22 [preflight] Running pre-flight checks before initializing the new control plane instance 10.251.80.127:22 [preflight] Pulling images required for setting up a Kubernetes cluster 10.251.80.127:22 [preflight] This might take a minute or two, depending on the speed of your internet connection 10.251.80.127:22 [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' 10.251.80.127:22 [download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace 10.251.80.127:22 [certs] Using certificateDir folder "/etc/kubernetes/pki" 10.251.80.127:22 [certs] Using the existing "apiserver" certificate and key 10.251.80.127:22 [certs] Using the existing "apiserver-kubelet-client" certificate and key 10.251.80.127:22 [certs] Using the existing "front-proxy-client" certificate and key 10.251.80.127:22 [certs] Using the existing "apiserver-etcd-client" certificate and key 10.251.80.127:22 [certs] Using the existing "etcd/server" certificate and key 10.251.80.127:22 [certs] Using the existing "etcd/peer" certificate and key 10.251.80.127:22 [certs] Using the existing "etcd/healthcheck-client" certificate and key 10.251.80.127:22 [certs] Valid certificates and keys now exist in "/etc/kubernetes/pki" 10.251.80.127:22 [certs] Using the existing "sa" key 10.251.80.127:22 [kubeconfig] Generating kubeconfig files 10.251.80.127:22 [kubeconfig] Using kubeconfig folder "/etc/kubernetes" 10.251.80.127:22 [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf" 10.251.80.127:22 W0519 09:14:18.247887 48125 kubeconfig.go:249] a kubeconfig file "/etc/kubernetes/controller-manager.conf" exists already but has an unexpected API Server URL: expected: https://10.251.80.127:6443, got: https://apiserver.cluster.local:6443 10.251.80.127:22 [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf" 10.251.80.127:22 W0519 09:14:18.488017 48125 kubeconfig.go:249] a kubeconfig file "/etc/kubernetes/scheduler.conf" exists already but has an unexpected API Server URL: expected: https://10.251.80.127:6443, got: https://apiserver.cluster.local:6443 10.251.80.127:22 [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf" 10.251.80.127:22 [control-plane] Using manifest folder "/etc/kubernetes/manifests" 10.251.80.127:22 [control-plane] Creating static Pod manifest for "kube-apiserver" 10.251.80.127:22 [control-plane] Creating static Pod manifest for "kube-controller-manager" 10.251.80.127:22 [control-plane] Creating static Pod manifest for "kube-scheduler" 10.251.80.127:22 [check-etcd] Checking that the etcd cluster is healthy 10.251.80.127:22 [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" 10.251.80.127:22 [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" 10.251.80.127:22 [kubelet-start] Starting the kubelet 10.251.80.127:22 [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap... 
10.251.80.127:22 [etcd] Announced new etcd member joining to the existing etcd cluster 10.251.80.127:22 [etcd] Creating static Pod manifest for "etcd" 10.251.80.127:22 [etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s 10.251.80.127:22 The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation 10.251.80.127:22 [mark-control-plane] Marking the node k8s-master03 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers] 10.251.80.127:22 [mark-control-plane] Marking the node k8s-master03 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule] 10.251.80.127:22 10.251.80.127:22 This node has joined the cluster and a new control plane instance was created: 10.251.80.127:22 10.251.80.127:22 Certificate signing request was sent to apiserver and approval was received. 10.251.80.127:22 The Kubelet was informed of the new secure connection details. 10.251.80.127:22 Control plane label and taint were applied to the new node. 10.251.80.127:22 The Kubernetes control plane instances scaled up. 10.251.80.127:22 A new etcd member was added to the local/stacked etcd cluster. 10.251.80.127:22 10.251.80.127:22 To start administering your cluster from this node, you need to run the following as a regular user: 10.251.80.127:22 10.251.80.127:22 mkdir -p $HOME/.kube 10.251.80.127:22 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 10.251.80.127:22 sudo chown $(id -u):$(id -g) $HOME/.kube/config 10.251.80.127:22 10.251.80.127:22 Run 'kubectl get nodes' to see this node join the cluster. 10.251.80.127:22 10.251.80.127:22 2023-05-19T09:14:52 info domain apiserver.cluster.local delete success 10.251.80.127:22 2023-05-19T09:14:52 info domain apiserver.cluster.local:10.251.80.127 append success 2023-05-19T09:14:52 info succeeded in joining 10.251.80.127:22 as master 2023-05-19T09:14:52 info [10.251.80.128:22 10.251.80.129:22] will be added as worker 2023-05-19T09:14:53 info fetch certSANs from kubeadm configmap 2023-05-19T09:14:54 info start to join 10.251.80.129:22 as worker 2023-05-19T09:14:54 info start to copy kubeadm join config to node: 10.251.80.129:22 2023-05-19T09:14:54 info start to join 10.251.80.128:22 as worker 2023-05-19T09:14:57 info start to copy kubeadm join config to node: 10.251.80.128:22 10.251.80.129:22 2023-05-19T09:14:58 info domain apiserver.cluster.local:10.103.97.2 append success 10.251.80.129:22 2023-05-19T09:14:59 info domain lvscare.node.ip:10.251.80.129 append success 2023-05-19T09:14:59 info run ipvs once module: 10.251.80.129:22 10.251.80.129:22 2023-05-19T09:14:59 info Trying to add route 10.251.80.129:22 2023-05-19T09:14:59 info success to set route.(host:10.103.97.2, gateway:10.251.80.129) 2023-05-19T09:15:00 info start join node: 10.251.80.129:22==] (1/1, 344 it/s) 10.251.80.128:22 2023-05-19T09:15:00 info domain apiserver.cluster.local:10.103.97.2 append success 10.251.80.129:22 W0519 09:15:00.961252 47330 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration! 
10.251.80.129:22 [preflight] Running pre-flight checks 10.251.80.129:22 [WARNING FileExisting-socat]: socat not found in system path 10.251.80.129:22 [preflight] Reading configuration from the cluster... 10.251.80.129:22 [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml' 10.251.80.128:22 2023-05-19T09:15:01 info domain lvscare.node.ip:10.251.80.128 append success 2023-05-19T09:15:01 info run ipvs once module: 10.251.80.128:22 10.251.80.128:22 2023-05-19T09:15:02 info Trying to add route 10.251.80.128:22 2023-05-19T09:15:02 info success to set route.(host:10.103.97.2, gateway:10.251.80.128) 2023-05-19T09:15:02 info start join node: 10.251.80.128:22 10.251.80.129:22 W0519 09:15:02.511864 47330 utils.go:69] The recommended value for "healthzBindAddress" in "KubeletConfiguration" is: 127.0.0.1; the provided value is: 0.0.0.0 10.251.80.128:22 W0519 09:15:02.953669 45779 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration! 10.251.80.128:22 [preflight] Running pre-flight checks 10.251.80.128:22 [WARNING FileExisting-socat]: socat not found in system path 10.251.80.128:22 [preflight] Reading configuration from the cluster... 10.251.80.128:22 [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml' 10.251.80.129:22 [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" 10.251.80.129:22 [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" 10.251.80.129:22 [kubelet-start] Starting the kubelet 10.251.80.129:22 [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap... 10.251.80.128:22 W0519 09:15:05.546583 45779 utils.go:69] The recommended value for "healthzBindAddress" in "KubeletConfiguration" is: 127.0.0.1; the provided value is: 0.0.0.0 10.251.80.128:22 [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" 10.251.80.128:22 [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" 10.251.80.128:22 [kubelet-start] Starting the kubelet 10.251.80.128:22 [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap... 10.251.80.129:22 10.251.80.129:22 This node has joined the cluster: 10.251.80.129:22 Certificate signing request was sent to apiserver and a response was received. 10.251.80.129:22 The Kubelet was informed of the new secure connection details. 10.251.80.129:22 10.251.80.129:22 Run 'kubectl get nodes' on the control-plane to see this node join the cluster. 10.251.80.129:22 2023-05-19T09:15:20 info succeeded in joining 10.251.80.129:22 as worker 10.251.80.128:22 10.251.80.128:22 This node has joined the cluster: 10.251.80.128:22 Certificate signing request was sent to apiserver and a response was received. 10.251.80.128:22 The Kubelet was informed of the new secure connection details. 10.251.80.128:22 10.251.80.128:22 Run 'kubectl get nodes' on the control-plane to see this node join the cluster. 
10.251.80.128:22 2023-05-19T09:15:21 info succeeded in joining 10.251.80.128:22 as worker
2023-05-19T09:15:21 info start to sync lvscare static pod to node: 10.251.80.128:22 master: [10.251.80.125:6443 10.251.80.126:6443 10.251.80.127:6443]
2023-05-19T09:15:21 info start to sync lvscare static pod to node: 10.251.80.129:22 master: [10.251.80.125:6443 10.251.80.126:6443 10.251.80.127:6443]
10.251.80.128:22 2023-05-19T09:15:22 info generator lvscare static pod is success
10.251.80.129:22 2023-05-19T09:15:22 info generator lvscare static pod is success
2023-05-19T09:15:23 info Executing pipeline RunGuest in CreateProcessor.
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
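Since this second run completes the Join and RunGuest pipelines and every control plane reports success, one way to sanity-check the result from master01 is the following, assuming the default admin kubeconfig written by kubeadm:

```bash
# On master01 (10.251.80.125); kubeadm writes the admin kubeconfig here.
export KUBECONFIG=/etc/kubernetes/admin.conf

# All three masters and two workers should appear and eventually become Ready.
kubectl get nodes -o wide

# calico, kube-proxy and the control-plane pods should reach Running.
kubectl -n kube-system get pods -o wide
```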

cuisongliu commented 1 year ago

If those kernel parameters do not matter for your setup, this is fine; some of the parameters are probably just not supported by your system's kernel.
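A quick way to check that guess on any of the hosts: a sysctl key only exists if the running kernel exposes it under /proc/sys, so the failing keys from the log can be probed directly with standard procps tooling, for example:

```bash
# Check whether the kernel exposes the failing key at all.
ls -l /proc/sys/net/ipv4/conf/all/promote_secondaries

# Re-apply one failing key by hand to reproduce the exact error message.
sysctl -w net.ipv4.conf.all.promote_secondaries=1

# Re-apply every sysctl config file and surface only the failures.
sysctl --system 2>&1 | grep -iE 'invalid|error'
```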