labring / sealos

Sealos is a production-ready Kubernetes distribution. You can run any Docker image on Sealos, start high-availability databases such as MySQL, PostgreSQL, Redis, and MongoDB, and develop applications in any programming language.
https://cloud.sealos.io
Apache License 2.0

Cluster installation fails with: resource mapping not found for name: "tigera-operator" namespace: "" from "manifests/tigera-operator.yaml": no matches for kind "PodSecurityPolicy" in version "policy/v1beta1" #1691

Closed — itxiao6 closed this issue 2 years ago

itxiao6 commented 2 years ago

Environment

CentOS 7, kernel 5.19.6-1.el7.elrepo.x86_64

Clusterfile contents

[root@master1 ~]# cat Clusterfile 
apiVersion: apps.sealos.io/v1beta1
kind: Cluster
metadata:
  creationTimestamp: null
  name: default
spec:
  hosts:
  - ips:
    - 192.168.52.11:22
    roles:
    - master
    - amd64
  - ips:
    - 192.168.52.21:22
    - 192.168.52.22:22
    - 192.168.52.23:22
    - 192.168.52.24:22
    - 192.168.52.25:22
    - 192.168.52.26:22
    roles:
    - node
    - amd64
  image:
  - labring/kubernetes:v1.25.0
  - labring/calico:v3.22.1
  ssh:
    passwd: "123456"
    pk: /root/.ssh/id_rsa
    port: 22
status: {}

Installation log

[root@master1 ~]# sealos apply -f Clusterfile
2022-09-03T10:43:43 info Start to create a new cluster: master [192.168.52.11], worker [192.168.52.21 192.168.52.22 192.168.52.23 192.168.52.24 192.168.52.25 192.168.52.26]
2022-09-03T10:43:43 info Executing pipeline Check in CreateProcessor.
2022-09-03T10:43:44 info checker:hostname [192.168.52.11:22 192.168.52.21:22 192.168.52.22:22 192.168.52.23:22 192.168.52.24:22 192.168.52.25:22 192.168.52.26:22]
2022-09-03T10:43:46 info checker:timeSync [192.168.52.11:22 192.168.52.21:22 192.168.52.22:22 192.168.52.23:22 192.168.52.24:22 192.168.52.25:22 192.168.52.26:22]
2022-09-03T10:43:48 info Executing pipeline PreProcess in CreateProcessor.
Resolving "labring/kubernetes" using unqualified-search registries (/etc/containers/registries.conf)
Trying to pull docker.io/labring/kubernetes:v1.25.0...
Getting image source signatures
Copying blob c96c5e7299a2 done  
Copying config bb62b3fff8 done  
Writing manifest to image destination
Storing signatures
bb62b3fff8c2dc0cff559a975cdc7b656b6949054f6058c9418a7e4f76b922aa
Resolving "labring/calico" using unqualified-search registries (/etc/containers/registries.conf)
Trying to pull docker.io/labring/calico:v3.22.1...
Getting image source signatures
Copying blob 64700354ceba done  
Copying config 29516dc98b done  
Writing manifest to image destination
Storing signatures
29516dc98b4b2d4fc899c9b27dfd004e75ee73ef9071ad6835f9ff97f156f58c
default-knerifox
default-jtzj3syq
2022-09-03T10:45:56 info Executing pipeline RunConfig in CreateProcessor.
2022-09-03T10:45:56 info Executing pipeline MountRootfs in CreateProcessor.
which: no docker in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin)
 INFO [2022-09-03 10:47:12] >> check root,port,cri success 
Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /etc/systemd/system/containerd.service.
[1/1]copying files to 192.168.52.24:22  75% [==========>    ] (3/4, 29 it/min) [11s:2s] INFO [2022-09-03 10:47:51] >> Health check containerd! 
 INFO [2022-09-03 10:47:53] >> containerd is running                                   
 INFO [2022-09-03 10:47:53] >> init containerd success 
Created symlink from /etc/systemd/system/multi-user.target.wants/image-cri-shim.service to /etc/systemd/system/image-cri-shim.service.
 INFO [2022-09-03 10:47:53] >> Health check image-cri-shim! 
 INFO [2022-09-03 10:47:53] >> image-cri-shim is running 
 INFO [2022-09-03 10:47:53] >> init shim success 
[1/1]copying files to 192.168.52.26:22   5% [               ] (1/17, 2 it/s) [0s:7s]* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.conf.all.rp_filter = 0
* Applying /etc/sysctl.conf ...
net.ipv4.ip_forward = 1
[1/1]copying files to 192.168.52.24:22   5% [               ] (1/17, 2 it/s) [0s:9s]Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
SELINUX=enforcing
[1/1]copying files to 192.168.52.26:22  11% [>              ] (2/17, 20 it/min) [6s:45s]Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
 INFO [2022-09-03 10:48:02] >> init kube success 
 INFO [2022-09-03 10:48:02] >> init containerd rootfs success 
192.168.52.26:22: which: no docker in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin)  
192.168.52.26:22:  INFO [2022-09-03 10:50:29] >> check root,port,cri success 
192.168.52.23:22: which: no docker in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin)  
192.168.52.23:22:  INFO [2022-09-03 10:50:30] >> check root,port,cri success 
192.168.52.26:22: Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /etc/systemd/system/containerd.service.
192.168.52.26:22:  INFO [2022-09-03 10:50:32] >> Health check containerd! 
192.168.52.26:22:  INFO [2022-09-03 10:50:32] >> containerd is running 
192.168.52.26:22:  INFO [2022-09-03 10:50:32] >> init containerd success 
192.168.52.26:22: Created symlink from /etc/systemd/system/multi-user.target.wants/image-cri-shim.service to /etc/systemd/system/image-cri-shim.service.
192.168.52.26:22:  INFO [2022-09-03 10:50:33] >> Health check image-cri-shim! 
192.168.52.26:22:  INFO [2022-09-03 10:50:33] >> image-cri-shim is running 
192.168.52.26:22:  INFO [2022-09-03 10:50:33] >> init shim success 
192.168.52.26:22: * Applying /usr/lib/sysctl.d/00-system.conf ...
192.168.52.26:22: net.bridge.bridge-nf-call-ip6tables = 0
192.168.52.26:22: net.bridge.bridge-nf-call-iptables = 0
192.168.52.26:22: net.bridge.bridge-nf-call-arptables = 0
192.168.52.26:22: * Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
192.168.52.26:22: * Applying /usr/lib/sysctl.d/50-default.conf ...
192.168.52.26:22: kernel.sysrq = 16
192.168.52.26:22: kernel.core_uses_pid = 1
192.168.52.26:22: net.ipv4.conf.default.rp_filter = 1
192.168.52.26:22: net.ipv4.conf.all.rp_filter = 1
192.168.52.26:22: net.ipv4.conf.default.accept_source_route = 0
192.168.52.26:22: net.ipv4.conf.all.accept_source_route = 0
192.168.52.26:22: net.ipv4.conf.default.promote_secondaries = 1
192.168.52.26:22: net.ipv4.conf.all.promote_secondaries = 1
192.168.52.26:22: fs.protected_hardlinks = 1
192.168.52.26:22: fs.protected_symlinks = 1
192.168.52.26:22: * Applying /etc/sysctl.d/99-sysctl.conf ...
192.168.52.26:22: * Applying /etc/sysctl.d/k8s.conf ...
192.168.52.26:22: net.bridge.bridge-nf-call-ip6tables = 1
192.168.52.26:22: net.bridge.bridge-nf-call-iptables = 1
192.168.52.26:22: net.ipv4.conf.all.rp_filter = 0
192.168.52.26:22: * Applying /etc/sysctl.conf ...
192.168.52.26:22: net.ipv4.ip_forward = 1
192.168.52.23:22: Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /etc/systemd/system/containerd.service.
192.168.52.23:22:  INFO [2022-09-03 10:50:34] >> Health check containerd! 
192.168.52.23:22:  INFO [2022-09-03 10:50:34] >> containerd is running 
192.168.52.23:22:  INFO [2022-09-03 10:50:34] >> init containerd success 
192.168.52.23:22: Created symlink from /etc/systemd/system/multi-user.target.wants/image-cri-shim.service to /etc/systemd/system/image-cri-shim.service.
192.168.52.23:22:  INFO [2022-09-03 10:50:34] >> Health check image-cri-shim! 
192.168.52.23:22:  INFO [2022-09-03 10:50:34] >> image-cri-shim is running 
192.168.52.23:22:  INFO [2022-09-03 10:50:34] >> init shim success 
192.168.52.23:22: * Applying /usr/lib/sysctl.d/00-system.conf ...
192.168.52.23:22: net.bridge.bridge-nf-call-ip6tables = 0
192.168.52.23:22: net.bridge.bridge-nf-call-iptables = 0
192.168.52.23:22: net.bridge.bridge-nf-call-arptables = 0
192.168.52.23:22: * Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
192.168.52.23:22: * Applying /usr/lib/sysctl.d/50-default.conf ...
192.168.52.23:22: kernel.sysrq = 16
192.168.52.23:22: kernel.core_uses_pid = 1
192.168.52.23:22: net.ipv4.conf.default.rp_filter = 1
192.168.52.23:22: net.ipv4.conf.all.rp_filter = 1
192.168.52.23:22: net.ipv4.conf.default.accept_source_route = 0
192.168.52.23:22: net.ipv4.conf.all.accept_source_route = 0
192.168.52.23:22: net.ipv4.conf.default.promote_secondaries = 1
192.168.52.23:22: net.ipv4.conf.all.promote_secondaries = 1
192.168.52.23:22: fs.protected_hardlinks = 1
192.168.52.23:22: fs.protected_symlinks = 1
192.168.52.23:22: * Applying /etc/sysctl.d/99-sysctl.conf ...
192.168.52.23:22: * Applying /etc/sysctl.d/k8s.conf ...
192.168.52.23:22: net.bridge.bridge-nf-call-ip6tables = 1
192.168.52.23:22: net.bridge.bridge-nf-call-iptables = 1
192.168.52.23:22: net.ipv4.conf.all.rp_filter = 0
192.168.52.23:22: * Applying /etc/sysctl.conf ...
192.168.52.23:22: net.ipv4.ip_forward = 1
192.168.52.26:22: Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
192.168.52.26:22: Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
192.168.52.26:22: SELINUX=enforcing:22 100% [===============] (1/1, 11 it/min)
192.168.52.26:22: Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
192.168.52.26:22:  INFO [2022-09-03 10:50:36] >> init kube success 
192.168.52.26:22:  INFO [2022-09-03 10:50:36] >> init containerd rootfs success 
192.168.52.23:22: Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
192.168.52.23:22: Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
192.168.52.21:22: which: no docker in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin)
192.168.52.21:22:  INFO [2022-09-03 10:50:36] >> check root,port,cri success 
192.168.52.23:22: SELINUX=enforcing
192.168.52.24:22: which: no docker in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin)
192.168.52.24:22:  INFO [2022-09-03 10:50:37] >> check root,port,cri success 
192.168.52.23:22: Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
192.168.52.23:22:  INFO [2022-09-03 10:50:37] >> init kube success 
192.168.52.23:22:  INFO [2022-09-03 10:50:37] >> init containerd rootfs success 
192.168.52.22:22: which: no docker in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin)
192.168.52.22:22:  INFO [2022-09-03 10:50:37] >> check root,port,cri success 
192.168.52.25:22: which: no docker in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin)
192.168.52.25:22:  INFO [2022-09-03 10:50:39] >> check root,port,cri success 
192.168.52.21:22: Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /etc/systemd/system/containerd.service.
192.168.52.24:22: Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /etc/systemd/system/containerd.service.
192.168.52.21:22:  INFO [2022-09-03 10:50:49] >> Health check containerd! 
192.168.52.21:22:  INFO [2022-09-03 10:50:49] >> containerd is running 
192.168.52.21:22:  INFO [2022-09-03 10:50:49] >> init containerd success 
192.168.52.21:22: Created symlink from /etc/systemd/system/multi-user.target.wants/image-cri-shim.service to /etc/systemd/system/image-cri-shim.service.
192.168.52.24:22:  INFO [2022-09-03 10:50:49] >> Health check containerd! 
192.168.52.24:22:  INFO [2022-09-03 10:50:49] >> containerd is running 
192.168.52.24:22:  INFO [2022-09-03 10:50:49] >> init containerd success 
192.168.52.24:22: Created symlink from /etc/systemd/system/multi-user.target.wants/image-cri-shim.service to /etc/systemd/system/image-cri-shim.service.
192.168.52.21:22:  INFO [2022-09-03 10:50:49] >> Health check image-cri-shim! 
192.168.52.21:22:  INFO [2022-09-03 10:50:49] >> image-cri-shim is running 
192.168.52.21:22:  INFO [2022-09-03 10:50:49] >> init shim success 
192.168.52.21:22: * Applying /usr/lib/sysctl.d/00-system.conf ...
192.168.52.21:22: net.bridge.bridge-nf-call-ip6tables = 0
192.168.52.21:22: net.bridge.bridge-nf-call-iptables = 0
192.168.52.21:22: net.bridge.bridge-nf-call-arptables = 0
192.168.52.21:22: * Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
192.168.52.21:22: * Applying /usr/lib/sysctl.d/50-default.conf ...
192.168.52.21:22: kernel.sysrq = 16
192.168.52.21:22: kernel.core_uses_pid = 1
192.168.52.21:22: net.ipv4.conf.default.rp_filter = 1
192.168.52.21:22: net.ipv4.conf.all.rp_filter = 1
192.168.52.21:22: net.ipv4.conf.default.accept_source_route = 0
192.168.52.21:22: net.ipv4.conf.all.accept_source_route = 0
192.168.52.21:22: net.ipv4.conf.default.promote_secondaries = 1
192.168.52.21:22: net.ipv4.conf.all.promote_secondaries = 1
192.168.52.21:22: fs.protected_hardlinks = 1
192.168.52.21:22: fs.protected_symlinks = 1
192.168.52.21:22: * Applying /etc/sysctl.d/99-sysctl.conf ...
192.168.52.21:22: * Applying /etc/sysctl.d/k8s.conf ...
192.168.52.21:22: net.bridge.bridge-nf-call-ip6tables = 1
192.168.52.21:22: net.bridge.bridge-nf-call-iptables = 1
192.168.52.21:22: net.ipv4.conf.all.rp_filter = 0
192.168.52.21:22: * Applying /etc/sysctl.conf ...
192.168.52.21:22: net.ipv4.ip_forward = 1
192.168.52.24:22:  INFO [2022-09-03 10:50:49] >> Health check image-cri-shim! 
192.168.52.24:22:  INFO [2022-09-03 10:50:49] >> image-cri-shim is running 
192.168.52.24:22:  INFO [2022-09-03 10:50:49] >> init shim success 
192.168.52.24:22: * Applying /usr/lib/sysctl.d/00-system.conf ...
192.168.52.24:22: net.bridge.bridge-nf-call-ip6tables = 0
192.168.52.24:22: net.bridge.bridge-nf-call-iptables = 0
192.168.52.24:22: net.bridge.bridge-nf-call-arptables = 0
192.168.52.24:22: * Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
192.168.52.24:22: * Applying /usr/lib/sysctl.d/50-default.conf ...
192.168.52.24:22: kernel.sysrq = 16
192.168.52.24:22: kernel.core_uses_pid = 1
192.168.52.24:22: net.ipv4.conf.default.rp_filter = 1
192.168.52.24:22: net.ipv4.conf.all.rp_filter = 1
192.168.52.24:22: net.ipv4.conf.default.accept_source_route = 0
192.168.52.24:22: net.ipv4.conf.all.accept_source_route = 0
192.168.52.24:22: net.ipv4.conf.default.promote_secondaries = 1
192.168.52.24:22: net.ipv4.conf.all.promote_secondaries = 1
192.168.52.24:22: fs.protected_hardlinks = 1
192.168.52.24:22: fs.protected_symlinks = 1
192.168.52.24:22: * Applying /etc/sysctl.d/99-sysctl.conf ...
192.168.52.24:22: * Applying /etc/sysctl.d/k8s.conf ...
192.168.52.24:22: net.bridge.bridge-nf-call-ip6tables = 1
192.168.52.24:22: net.bridge.bridge-nf-call-iptables = 1
192.168.52.24:22: net.ipv4.conf.all.rp_filter = 0
192.168.52.24:22: * Applying /etc/sysctl.conf ...
192.168.52.24:22: net.ipv4.ip_forward = 1
192.168.52.25:22: Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /etc/systemd/system/containerd.service.
192.168.52.22:22: Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /etc/systemd/system/containerd.service.
192.168.52.25:22:  INFO [2022-09-03 10:50:50] >> Health check containerd! 
192.168.52.25:22:  INFO [2022-09-03 10:50:50] >> containerd is running 
192.168.52.25:22:  INFO [2022-09-03 10:50:50] >> init containerd success 
192.168.52.22:22:  INFO [2022-09-03 10:50:50] >> Health check containerd! 
192.168.52.22:22:  INFO [2022-09-03 10:50:50] >> containerd is running 
192.168.52.22:22:  INFO [2022-09-03 10:50:50] >> init containerd success 
192.168.52.25:22: Created symlink from /etc/systemd/system/multi-user.target.wants/image-cri-shim.service to /etc/systemd/system/image-cri-shim.service.
192.168.52.22:22: Created symlink from /etc/systemd/system/multi-user.target.wants/image-cri-shim.service to /etc/systemd/system/image-cri-shim.service.
192.168.52.25:22:  INFO [2022-09-03 10:50:50] >> Health check image-cri-shim! 
192.168.52.25:22:  INFO [2022-09-03 10:50:51] >> image-cri-shim is running 
192.168.52.25:22:  INFO [2022-09-03 10:50:51] >> init shim success 
192.168.52.22:22:  INFO [2022-09-03 10:50:51] >> Health check image-cri-shim! 
192.168.52.22:22:  INFO [2022-09-03 10:50:51] >> image-cri-shim is running 
192.168.52.22:22:  INFO [2022-09-03 10:50:51] >> init shim success 
192.168.52.25:22: * Applying /usr/lib/sysctl.d/00-system.conf ...
192.168.52.25:22: net.bridge.bridge-nf-call-ip6tables = 0
192.168.52.25:22: net.bridge.bridge-nf-call-iptables = 0
192.168.52.25:22: net.bridge.bridge-nf-call-arptables = 0
192.168.52.25:22: * Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
192.168.52.25:22: * Applying /usr/lib/sysctl.d/50-default.conf ...
192.168.52.25:22: kernel.sysrq = 16
192.168.52.25:22: kernel.core_uses_pid = 1
192.168.52.25:22: net.ipv4.conf.default.rp_filter = 1
192.168.52.25:22: net.ipv4.conf.all.rp_filter = 1
192.168.52.25:22: net.ipv4.conf.default.accept_source_route = 0
192.168.52.25:22: net.ipv4.conf.all.accept_source_route = 0
192.168.52.25:22: net.ipv4.conf.default.promote_secondaries = 1
192.168.52.25:22: net.ipv4.conf.all.promote_secondaries = 1
192.168.52.25:22: fs.protected_hardlinks = 1
192.168.52.25:22: fs.protected_symlinks = 1
192.168.52.25:22: * Applying /etc/sysctl.d/99-sysctl.conf ...
192.168.52.25:22: * Applying /etc/sysctl.d/k8s.conf ...
192.168.52.25:22: net.bridge.bridge-nf-call-ip6tables = 1
192.168.52.25:22: net.bridge.bridge-nf-call-iptables = 1
192.168.52.25:22: net.ipv4.conf.all.rp_filter = 0
192.168.52.25:22: * Applying /etc/sysctl.conf ...
192.168.52.25:22: net.ipv4.ip_forward = 1
192.168.52.22:22: * Applying /usr/lib/sysctl.d/00-system.conf ...
192.168.52.22:22: net.bridge.bridge-nf-call-ip6tables = 0
192.168.52.22:22: net.bridge.bridge-nf-call-iptables = 0
192.168.52.22:22: net.bridge.bridge-nf-call-arptables = 0
192.168.52.22:22: * Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
192.168.52.22:22: * Applying /usr/lib/sysctl.d/50-default.conf ...
192.168.52.22:22: kernel.sysrq = 16
192.168.52.22:22: kernel.core_uses_pid = 1
192.168.52.22:22: net.ipv4.conf.default.rp_filter = 1
192.168.52.22:22: net.ipv4.conf.all.rp_filter = 1
192.168.52.22:22: net.ipv4.conf.default.accept_source_route = 0
192.168.52.22:22: net.ipv4.conf.all.accept_source_route = 0
192.168.52.22:22: net.ipv4.conf.default.promote_secondaries = 1
192.168.52.22:22: net.ipv4.conf.all.promote_secondaries = 1
192.168.52.22:22: fs.protected_hardlinks = 1
192.168.52.22:22: fs.protected_symlinks = 1
192.168.52.22:22: * Applying /etc/sysctl.d/99-sysctl.conf ...
192.168.52.22:22: * Applying /etc/sysctl.d/k8s.conf ...
192.168.52.22:22: net.bridge.bridge-nf-call-ip6tables = 1
192.168.52.22:22: net.bridge.bridge-nf-call-iptables = 1
192.168.52.22:22: net.ipv4.conf.all.rp_filter = 0
192.168.52.22:22: * Applying /etc/sysctl.conf ...
192.168.52.22:22: net.ipv4.ip_forward = 1
192.168.52.21:22: Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
192.168.52.21:22: Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
192.168.52.21:22: SELINUX=enforcing
192.168.52.24:22: Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
192.168.52.24:22: Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
192.168.52.24:22: SELINUX=enforcing
192.168.52.21:22: Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
192.168.52.21:22:  INFO [2022-09-03 10:50:52] >> init kube success 
192.168.52.21:22:  INFO [2022-09-03 10:50:52] >> init containerd rootfs success 
192.168.52.24:22: Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
192.168.52.24:22:  INFO [2022-09-03 10:50:52] >> init kube success 
192.168.52.24:22:  INFO [2022-09-03 10:50:52] >> init containerd rootfs success 
192.168.52.22:22: Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
192.168.52.22:22: Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
192.168.52.22:22: SELINUX=enforcing
192.168.52.25:22: Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
192.168.52.25:22: Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
192.168.52.25:22: SELINUX=enforcing
192.168.52.22:22: Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
192.168.52.22:22:  INFO [2022-09-03 10:50:53] >> init kube success 
192.168.52.22:22:  INFO [2022-09-03 10:50:53] >> init containerd rootfs success 
192.168.52.25:22: Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
192.168.52.25:22:  INFO [2022-09-03 10:50:54] >> init kube success 
192.168.52.25:22:  INFO [2022-09-03 10:50:54] >> init containerd rootfs success 
2022-09-03T10:50:55 info Executing pipeline Init in CreateProcessor.
2022-09-03T10:50:55 info start to copy kubeadm config to master0
2022-09-03T10:51:02 info start to generate cert and kubeConfig...
2022-09-03T10:51:02 info start to generator cert and copy to masters...
2022-09-03T10:51:02 info apiserver altNames : {map[apiserver.cluster.local:apiserver.cluster.local kubernetes:kubernetes kubernetes.default:kubernetes.default kubernetes.default.svc:kubernetes.default.svc kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local localhost:localhost master1:master1] map[10.103.97.2:10.103.97.2 10.96.0.1:10.96.0.1 127.0.0.1:127.0.0.1 192.168.52.11:192.168.52.11]}
2022-09-03T10:51:02 info Etcd altnames : {map[localhost:localhost master1:master1] map[127.0.0.1:127.0.0.1 192.168.52.11:192.168.52.11 ::1:::1]}, commonName : master1
2022-09-03T10:51:06 info start to copy etc pki files to masters
2022-09-03T10:51:06 info start to create kubeconfig...
2022-09-03T10:51:08 info start to copy kubeconfig files to masters
2022-09-03T10:51:08 info start to copy static files to masters
2022-09-03T10:51:08 info start to apply registry
unpacking docker.io/library/registry:2.7.1 (sha256:49bd6b1420deba16b51bd073977ea6ae4000b816a356b12e805a699c4e5d3dba)...done
e1170ac0bb42056a593c023d3b855f197f7ea02c4fa137ddcd80180264ff4232
 INFO [2022-09-03 10:54:07] >> init registry success 
2022-09-03T10:54:07 info start to init master0...
2022-09-03T10:54:07 info registry auth in node 192.168.52.11:22
2022-09-03T10:54:07 info domain sealos.hub:192.168.52.11 append success
2022-09-03T10:54:07 info domain apiserver.cluster.local:192.168.52.11 append success
W0903 10:54:07.783607    2193 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[init] Using Kubernetes version: v1.25.0
[preflight] Running pre-flight checks
    [WARNING FileExisting-socat]: socat not found in system path
    [WARNING Hostname]: hostname "master1" could not be reached
    [WARNING Hostname]: hostname "master1": lookup master1 on 202.102.227.68:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
W0903 10:54:39.092721    2193 kubeconfig.go:249] a kubeconfig file "/etc/kubernetes/controller-manager.conf" exists already but has an unexpected API Server URL: expected: https://192.168.52.11:6443, got: https://apiserver.cluster.local:6443
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
W0903 10:54:39.561122    2193 kubeconfig.go:249] a kubeconfig file "/etc/kubernetes/scheduler.conf" exists already but has an unexpected API Server URL: expected: https://192.168.52.11:6443, got: https://apiserver.cluster.local:6443
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 47.505894 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master1 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join apiserver.cluster.local:6443 --token <value withheld> \
    --discovery-token-ca-cert-hash sha256:07a0e5ef82b79ac4ec497daef66dca1dab344d1772193ac7190d5a55867806cc \
    --control-plane --certificate-key <value withheld>

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join apiserver.cluster.local:6443 --token <value withheld> \
    --discovery-token-ca-cert-hash sha256:07a0e5ef82b79ac4ec497daef66dca1dab344d1772193ac7190d5a55867806cc 
2022-09-03T10:55:28 info Executing pipeline Join in CreateProcessor.
2022-09-03T10:55:28 info [192.168.52.21:22 192.168.52.22:22 192.168.52.23:22 192.168.52.24:22 192.168.52.25:22 192.168.52.26:22] will be added as worker
2022-09-03T10:55:29 info start to get kubernetes token...
2022-09-03T10:55:35 info start to join 192.168.52.26:22 as worker
2022-09-03T10:55:35 info start to join 192.168.52.22:22 as worker
2022-09-03T10:55:35 info start to join 192.168.52.24:22 as worker
2022-09-03T10:55:35 info start to join 192.168.52.25:22 as worker
2022-09-03T10:55:35 info start to join 192.168.52.21:22 as worker
2022-09-03T10:55:35 info start to copy kubeadm join config to node: 192.168.52.24:22
2022-09-03T10:55:35 info start to join 192.168.52.23:22 as worker
2022-09-03T10:55:35 info start to copy kubeadm join config to node: 192.168.52.21:22
2022-09-03T10:55:35 info start to copy kubeadm join config to node: 192.168.52.22:22
2022-09-03T10:55:35 info start to copy kubeadm join config to node: 192.168.52.23:22
2022-09-03T10:55:35 info start to copy kubeadm join config to node: 192.168.52.26:22
2022-09-03T10:55:35 info start to copy kubeadm join config to node: 192.168.52.25:22
192.168.52.23:22: 2022-09-03T10:55:43 info domain apiserver.cluster.local:10.103.97.2 append success
192.168.52.25:22: 2022-09-03T10:55:43 info domain apiserver.cluster.local:10.103.97.2 append success
192.168.52.26:22: 2022-09-03T10:55:43 info domain apiserver.cluster.local:10.103.97.2 append success
192.168.52.21:22: 2022-09-03T10:55:43 info domain apiserver.cluster.local:10.103.97.2 append success
192.168.52.24:22: 2022-09-03T10:55:43 info domain apiserver.cluster.local:10.103.97.2 append success
192.168.52.22:22: 2022-09-03T10:55:43 info domain apiserver.cluster.local:10.103.97.2 append success
192.168.52.23:22: 2022-09-03T10:55:43 info domain lvscare.node.ip:192.168.52.23 append success
2022-09-03T10:55:43 info registry auth in node 192.168.52.23:22
192.168.52.25:22: 2022-09-03T10:55:43 info domain lvscare.node.ip:192.168.52.25 append success
2022-09-03T10:55:43 info registry auth in node 192.168.52.25:22
192.168.52.26:22: 2022-09-03T10:55:43 info domain lvscare.node.ip:192.168.52.26 append success
2022-09-03T10:55:43 info registry auth in node 192.168.52.26:22
192.168.52.21:22: 2022-09-03T10:55:43 info domain lvscare.node.ip:192.168.52.21 append success
2022-09-03T10:55:43 info registry auth in node 192.168.52.21:22
192.168.52.24:22: 2022-09-03T10:55:43 info domain lvscare.node.ip:192.168.52.24 append success
2022-09-03T10:55:43 info registry auth in node 192.168.52.24:22
192.168.52.22:22: 2022-09-03T10:55:43 info domain lvscare.node.ip:192.168.52.22 append success
2022-09-03T10:55:43 info registry auth in node 192.168.52.22:22
192.168.52.23:22: 2022-09-03T10:55:49 info domain sealos.hub:192.168.52.11 append success
192.168.52.25:22: 2022-09-03T10:55:49 info domain sealos.hub:192.168.52.11 append success
192.168.52.26:22: 2022-09-03T10:55:49 info domain sealos.hub:192.168.52.11 append success
192.168.52.21:22: 2022-09-03T10:55:49 info domain sealos.hub:192.168.52.11 append success
192.168.52.24:22: 2022-09-03T10:55:49 info domain sealos.hub:192.168.52.11 append success
192.168.52.22:22: 2022-09-03T10:55:49 info domain sealos.hub:192.168.52.11 append success
2022-09-03T10:55:49 info run ipvs once module: 192.168.52.23:22
2022-09-03T10:55:49 info run ipvs once module: 192.168.52.25:22
2022-09-03T10:55:49 info run ipvs once module: 192.168.52.26:22
2022-09-03T10:55:49 info run ipvs once module: 192.168.52.21:22
2022-09-03T10:55:49 info run ipvs once module: 192.168.52.24:22
2022-09-03T10:55:49 info run ipvs once module: 192.168.52.22:22
192.168.52.25:22: 2022-09-03T10:55:50 info Trying to add route
192.168.52.25:22: 2022-09-03T10:55:50 info success to set route.(host:10.103.97.2, gateway:192.168.52.25)
2022-09-03T10:55:50 info start join node: 192.168.52.25:22
192.168.52.23:22: 2022-09-03T10:55:50 info Trying to add route
192.168.52.23:22: 2022-09-03T10:55:50 info success to set route.(host:10.103.97.2, gateway:192.168.52.23)
2022-09-03T10:55:50 info start join node: 192.168.52.23:22
192.168.52.21:22: 2022-09-03T10:55:50 info Trying to add route
192.168.52.21:22: 2022-09-03T10:55:50 info success to set route.(host:10.103.97.2, gateway:192.168.52.21)
2022-09-03T10:55:50 info start join node: 192.168.52.21:22
192.168.52.26:22: 2022-09-03T10:55:50 info Trying to add route
192.168.52.26:22: 2022-09-03T10:55:50 info success to set route.(host:10.103.97.2, gateway:192.168.52.26)
2022-09-03T10:55:50 info start join node: 192.168.52.26:22
192.168.52.24:22: 2022-09-03T10:55:50 info Trying to add route
192.168.52.24:22: 2022-09-03T10:55:50 info success to set route.(host:10.103.97.2, gateway:192.168.52.24)
2022-09-03T10:55:50 info start join node: 192.168.52.24:22
192.168.52.22:22: 2022-09-03T10:55:50 info Trying to add route
192.168.52.22:22: 2022-09-03T10:55:50 info success to set route.(host:10.103.97.2, gateway:192.168.52.22)
2022-09-03T10:55:50 info start join node: 192.168.52.22:22
192.168.52.25:22: W0903 10:55:50.571804    4015 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
192.168.52.25:22: [preflight] Running pre-flight checks
192.168.52.23:22: W0903 10:55:50.580986    4024 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
192.168.52.23:22: [preflight] Running pre-flight checks
192.168.52.25:22:   [WARNING FileExisting-socat]: socat not found in system path
192.168.52.23:22:   [WARNING FileExisting-socat]: socat not found in system path
192.168.52.25:22:   [WARNING Hostname]: hostname "wrok5" could not be reached
192.168.52.25:22:   [WARNING Hostname]: hostname "wrok5": lookup wrok5 on 202.102.227.68:53: no such host
192.168.52.26:22: W0903 10:55:50.770884    4016 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
192.168.52.26:22: [preflight] Running pre-flight checks
192.168.52.24:22: W0903 10:55:50.787892    4016 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
192.168.52.24:22: [preflight] Running pre-flight checks
192.168.52.21:22: W0903 10:55:50.796095    4006 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
192.168.52.21:22: [preflight] Running pre-flight checks
192.168.52.26:22:   [WARNING FileExisting-socat]: socat not found in system path
192.168.52.24:22:   [WARNING FileExisting-socat]: socat not found in system path
192.168.52.25:22: [preflight] Reading configuration from the cluster...
192.168.52.25:22: [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
192.168.52.23:22:   [WARNING Hostname]: hostname "work3" could not be reached
192.168.52.23:22:   [WARNING Hostname]: hostname "work3": lookup work3 on 202.102.227.68:53: no such host
192.168.52.21:22:   [WARNING FileExisting-socat]: socat not found in system path
192.168.52.22:22: W0903 10:55:50.880696    4020 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
192.168.52.22:22: [preflight] Running pre-flight checks
192.168.52.25:22: [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
192.168.52.25:22: [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
192.168.52.25:22: [kubelet-start] Starting the kubelet
192.168.52.22:22:   [WARNING FileExisting-socat]: socat not found in system path
192.168.52.21:22:   [WARNING Hostname]: hostname "work1" could not be reached
192.168.52.21:22:   [WARNING Hostname]: hostname "work1": lookup work1 on 202.102.227.68:53: no such host
192.168.52.22:22:   [WARNING Hostname]: hostname "work2" could not be reached
192.168.52.22:22:   [WARNING Hostname]: hostname "work2": lookup work2 on 202.102.227.68:53: no such host
192.168.52.23:22: [preflight] Reading configuration from the cluster...
192.168.52.23:22: [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
192.168.52.23:22: [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
192.168.52.23:22: [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
192.168.52.23:22: [kubelet-start] Starting the kubelet
192.168.52.21:22: [preflight] Reading configuration from the cluster...
192.168.52.21:22: [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
192.168.52.25:22: [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
192.168.52.24:22:   [WARNING Hostname]: hostname "wrok4" could not be reached
192.168.52.24:22:   [WARNING Hostname]: hostname "wrok4": lookup wrok4 on 202.102.227.68:53: no such host
192.168.52.21:22: [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
192.168.52.21:22: [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
192.168.52.21:22: [kubelet-start] Starting the kubelet
192.168.52.22:22: [preflight] Reading configuration from the cluster...
192.168.52.22:22: [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
192.168.52.22:22: [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
192.168.52.22:22: [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
192.168.52.22:22: [kubelet-start] Starting the kubelet
192.168.52.23:22: [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
192.168.52.24:22: [preflight] Reading configuration from the cluster...
192.168.52.24:22: [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
192.168.52.24:22: [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
192.168.52.24:22: [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
192.168.52.24:22: [kubelet-start] Starting the kubelet
192.168.52.21:22: [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
192.168.52.22:22: [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
192.168.52.24:22: [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
192.168.52.26:22:   [WARNING Hostname]: hostname "work6" could not be reached
192.168.52.26:22:   [WARNING Hostname]: hostname "work6": lookup work6 on 202.102.227.68:53: no such host
192.168.52.26:22: [preflight] Reading configuration from the cluster...
192.168.52.26:22: [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
192.168.52.26:22: [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
192.168.52.26:22: [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
192.168.52.26:22: [kubelet-start] Starting the kubelet
192.168.52.26:22: [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
192.168.52.23:22: 
192.168.52.23:22: This node has joined the cluster:
192.168.52.23:22: * Certificate signing request was sent to apiserver and a response was received.
192.168.52.23:22: * The Kubelet was informed of the new secure connection details.
192.168.52.23:22: 
192.168.52.23:22: Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
192.168.52.23:22: 
2022-09-03T10:55:56 info succeeded in joining 192.168.52.23:22 as worker
192.168.52.24:22: 
192.168.52.24:22: This node has joined the cluster:
192.168.52.24:22: * Certificate signing request was sent to apiserver and a response was received.
192.168.52.24:22: * The Kubelet was informed of the new secure connection details.
192.168.52.24:22: 
192.168.52.24:22: Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
192.168.52.24:22: 
2022-09-03T10:55:57 info succeeded in joining 192.168.52.24:22 as worker
192.168.52.25:22: 
192.168.52.25:22: This node has joined the cluster:
192.168.52.25:22: * Certificate signing request was sent to apiserver and a response was received.
192.168.52.25:22: * The Kubelet was informed of the new secure connection details.
192.168.52.25:22: 
192.168.52.25:22: Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
192.168.52.25:22: 
2022-09-03T10:56:05 info succeeded in joining 192.168.52.25:22 as worker
192.168.52.21:22: 
192.168.52.21:22: This node has joined the cluster:
192.168.52.21:22: * Certificate signing request was sent to apiserver and a response was received.
192.168.52.21:22: * The Kubelet was informed of the new secure connection details.
192.168.52.21:22: 
192.168.52.21:22: Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
192.168.52.21:22: 
2022-09-03T10:56:05 info succeeded in joining 192.168.52.21:22 as worker
192.168.52.22:22: 
192.168.52.22:22: This node has joined the cluster:
192.168.52.22:22: * Certificate signing request was sent to apiserver and a response was received.
192.168.52.22:22: * The Kubelet was informed of the new secure connection details.
192.168.52.22:22: 
192.168.52.22:22: Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
192.168.52.22:22: 
2022-09-03T10:56:05 info succeeded in joining 192.168.52.22:22 as worker
192.168.52.26:22: 
192.168.52.26:22: This node has joined the cluster:
192.168.52.26:22: * Certificate signing request was sent to apiserver and a response was received.
192.168.52.26:22: * The Kubelet was informed of the new secure connection details.
192.168.52.26:22: 
192.168.52.26:22: Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
192.168.52.26:22: 
2022-09-03T10:56:07 info succeeded in joining 192.168.52.26:22 as worker
2022-09-03T10:56:08 info start to sync lvscare static pod to node: 192.168.52.21:22 master: [192.168.52.11:6443]
2022-09-03T10:56:08 info start to sync lvscare static pod to node: 192.168.52.22:22 master: [192.168.52.11:6443]
2022-09-03T10:56:08 info start to sync lvscare static pod to node: 192.168.52.26:22 master: [192.168.52.11:6443]
2022-09-03T10:56:08 info start to sync lvscare static pod to node: 192.168.52.23:22 master: [192.168.52.11:6443]
2022-09-03T10:56:08 info start to sync lvscare static pod to node: 192.168.52.25:22 master: [192.168.52.11:6443]
2022-09-03T10:56:08 info start to sync lvscare static pod to node: 192.168.52.24:22 master: [192.168.52.11:6443]
192.168.52.23:22: 2022-09-03T10:56:08 info generator lvscare static pod is success
192.168.52.25:22: 2022-09-03T10:56:08 info generator lvscare static pod is success
192.168.52.22:22: 2022-09-03T10:56:08 info generator lvscare static pod is success
192.168.52.26:22: 2022-09-03T10:56:08 info generator lvscare static pod is success
192.168.52.24:22: 2022-09-03T10:56:09 info generator lvscare static pod is success
192.168.52.21:22: 2022-09-03T10:56:09 info generator lvscare static pod is success
2022-09-03T10:56:09 info Executing pipeline RunGuest in CreateProcessor.
2022-09-03T10:56:09 info guest cmd is kubectl apply -f manifests/tigera-operator.yaml
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
namespace/tigera-operator created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created
error: resource mapping not found for name: "tigera-operator" namespace: "" from "manifests/tigera-operator.yaml": no matches for kind "PodSecurityPolicy" in version "policy/v1beta1"
ensure CRDs are installed first
2022-09-03T10:56:10 error Applied to cluster error: exit status 1
2022-09-03T10:56:10 info 
      ___           ___           ___           ___       ___           ___
     /\  \         /\  \         /\  \         /\__\     /\  \         /\  \
    /::\  \       /::\  \       /::\  \       /:/  /    /::\  \       /::\  \
   /:/\ \  \     /:/\:\  \     /:/\:\  \     /:/  /    /:/\:\  \     /:/\ \  \
  _\:\~\ \  \   /::\~\:\  \   /::\~\:\  \   /:/  /    /:/  \:\  \   _\:\~\ \  \
 /\ \:\ \ \__\ /:/\:\ \:\__\ /:/\:\ \:\__\ /:/__/    /:/__/ \:\__\ /\ \:\ \ \__\
 \:\ \:\ \/__/ \:\~\:\ \/__/ \/__\:\/:/  / \:\  \    \:\  \ /:/  / \:\ \:\ \/__/
  \:\ \:\__\    \:\ \:\__\        \::/  /   \:\  \    \:\  /:/  /   \:\ \:\__\
   \:\/:/  /     \:\ \/__/        /:/  /     \:\  \    \:\/:/  /     \:\/:/  /
    \::/  /       \:\__\         /:/  /       \:\__\    \::/  /       \::/  /
     \/__/         \/__/         \/__/         \/__/     \/__/         \/__/

                  Website :https://www.sealos.io/
                  Address :github.com/labring/sealos
cuisongliu commented 2 years ago

https://github.com/labring/sealos/issues/1625

You need to upgrade the calico cloud image.
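
A minimal sketch of the fix (assuming a newer calico cluster image, e.g. labring/calico:v3.24.1, has been published and its tigera-operator manifest no longer uses the PodSecurityPolicy API that was removed in Kubernetes v1.25): point the Clusterfile at the newer image and re-apply, or run the newer image directly on the existing cluster.

# Option 1: update the image list in the Clusterfile, then re-apply
#   image:
#   - labring/kubernetes:v1.25.0
#   - labring/calico:v3.24.1   # assumed newer tag, check Docker Hub for available versions
sealos apply -f Clusterfile

# Option 2: apply the newer calico image to the already-created cluster
sealos run labring/calico:v3.24.1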

itxiao6 commented 2 years ago

#1625

You need to upgrade the calico cloud image.

As far as I can see, the only available version is labring/calico:v3.22.1; there are no other versions. If I need to upgrade it, how do I do that? Do I just modify the calico YAML file directly?