labring / sealos

Sealos is a production-ready Kubernetes distribution. You can run any Docker image on Sealos, start high-availability databases like mysql/pgsql/redis/mongo, and develop applications in any programming language.
https://cloud.sealos.io
Apache License 2.0

BUG: kubernetes:v1.26.1 installation fails #2459

Closed: SpringHgui closed this issue 1 year ago

SpringHgui commented 1 year ago

Sealos Version

4.1.5~alpha1-1

How to reproduce the bug?

  1. Fresh installation of CentOS 7.9 on Alibaba Cloud
  2. Install Sealos 4.1.5~alpha1-1 via yum
  3. Run the following (a consolidated shell sketch of steps 2-3 follows this list):
    sealos run labring/kubernetes:v1.26.1 labring/helm:v3.10.3 labring/calico:v3.25.0 --single
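
For completeness, a minimal, non-authoritative shell sketch of steps 2-3 on a fresh CentOS 7.9 host. The labring yum repository baseurl and the exact package version string are assumptions taken from the Sealos install docs, not part of the original report:

# Assumption: Sealos RPMs are served from the labring fury.io yum repo (per the Sealos install docs).
cat > /etc/yum.repos.d/labring.repo << 'EOF'
[fury]
name=labring Yum Repo
baseurl=https://yum.fury.io/labring/
enabled=1
gpgcheck=0
EOF

# Install the alpha build reported in this issue (package version string is an assumption).
yum clean all && yum install -y sealos-4.1.5~alpha1-1

# Single-node cluster with Kubernetes v1.26.1, Helm, and Calico, as in the report.
sealos run labring/kubernetes:v1.26.1 labring/helm:v3.10.3 labring/calico:v3.25.0 --single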

What is the expected behavior?

The cluster is installed successfully.

What do you see instead?

> sealos run labring/kubernetes:v1.26.1 labring/helm:v3.10.3 labring/calico:v3.25.0 --single
2023-01-30T10:45:00 info Start to create a new cluster: master [172.16.51.7], worker [], registry 172.16.51.7
2023-01-30T10:45:00 info Executing pipeline Check in CreateProcessor.
2023-01-30T10:45:00 info checker:hostname [172.16.51.7:22]
2023-01-30T10:45:00 info checker:timeSync [172.16.51.7:22]
2023-01-30T10:45:00 info Executing pipeline PreProcess in CreateProcessor.
Resolving "labring/helm" using unqualified-search registries (/etc/containers/registries.conf)
Trying to pull docker.io/labring/helm:v3.10.3...
Getting image source signatures
Copying blob 6b142d0fe6c9 done  
Copying config 71b74a953c done  
Writing manifest to image destination
Storing signatures
Resolving "labring/calico" using unqualified-search registries (/etc/containers/registries.conf)
Trying to pull docker.io/labring/calico:v3.25.0...
Getting image source signatures
Copying blob c02ac8d19a0b done  
Copying config 89e401f61a done  
Writing manifest to image destination
Storing signatures
2023-01-30T10:47:35 info Executing pipeline RunConfig in CreateProcessor.
2023-01-30T10:47:35 info Executing pipeline MountRootfs in CreateProcessor.
2023-01-30T10:47:45 info Executing pipeline MirrorRegistry in CreateProcessor.
2023-01-30T10:47:45 info Executing pipeline Bootstrap in CreateProcessor
which: no docker in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin)
 WARN [2023-01-30 10:47:47] >> Replace disable_apparmor = false to disable_apparmor = true 
 INFO [2023-01-30 10:47:47] >> check root,port,cri success 
Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /etc/systemd/system/containerd.service.
 INFO [2023-01-30 10:47:49] >> Health check containerd! 
 INFO [2023-01-30 10:47:49] >> containerd is running 
 INFO [2023-01-30 10:47:49] >> init containerd success 
Created symlink from /etc/systemd/system/multi-user.target.wants/image-cri-shim.service to /etc/systemd/system/image-cri-shim.service.
 INFO [2023-01-30 10:47:49] >> Health check image-cri-shim! 
 INFO [2023-01-30 10:47:49] >> image-cri-shim is running 
 INFO [2023-01-30 10:47:49] >> init shim success 
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
kernel.kptr_restrict = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
vm.swappiness = 0
kernel.sysrq = 1
net.ipv4.neigh.default.gc_stale_time = 120
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_announce = 2
net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 1024
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_slow_start_after_idle = 0
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.conf.all.rp_filter = 0
* Applying /etc/sysctl.conf ...
vm.swappiness = 0
kernel.sysrq = 1
net.ipv4.neigh.default.gc_stale_time = 120
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_announce = 2
net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 1024
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_slow_start_after_idle = 0
net.ipv4.ip_forward = 1
 INFO [2023-01-30 10:47:49] >> init kube success 
 INFO [2023-01-30 10:47:49] >> init rootfs success 
Created symlink from /etc/systemd/system/multi-user.target.wants/registry.service to /etc/systemd/system/registry.service.
 INFO [2023-01-30 10:47:49] >> Health check registry! 
 INFO [2023-01-30 10:47:49] >> registry is running 
 INFO [2023-01-30 10:47:49] >> init registry success 
2023-01-30T10:47:49 info Executing pipeline Init in CreateProcessor.
2023-01-30T10:47:49 info start to copy kubeadm config to master0
2023-01-30T10:47:50 info start to generate cert and kubeConfig...
2023-01-30T10:47:50 info start to generator cert and copy to masters...
2023-01-30T10:47:50 info apiserver altNames : {map[apiserver.cluster.local:apiserver.cluster.local kubernetes:kubernetes kubernetes.default:kubernetes.default kubernetes.default.svc:kubernetes.default.svc kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local localhost:localhost svrhz10-1:svrhz10-1] map[10.103.97.2:10.103.97.2 10.96.0.1:10.96.0.1 127.0.0.1:127.0.0.1 172.16.51.7:172.16.51.7]}
2023-01-30T10:47:50 info Etcd altnames : {map[localhost:localhost svrhz10-1:svrhz10-1] map[127.0.0.1:127.0.0.1 172.16.51.7:172.16.51.7 ::1:::1]}, commonName : svrhz10-1
2023-01-30T10:47:54 info start to copy etc pki files to masters
2023-01-30T10:47:54 info start to copy etc pki files to masters
2023-01-30T10:47:54 info start to create kubeconfig...
2023-01-30T10:47:55 info start to copy kubeconfig files to masters
2023-01-30T10:47:55 info start to copy static files to masters
2023-01-30T10:47:55 info start to init master0...
2023-01-30T10:47:55 info registry auth in node 172.16.51.7:22
2023-01-30T10:47:55 info domain sealos.hub:172.16.51.7 append success
2023-01-30T10:47:56 info domain apiserver.cluster.local:172.16.51.7 append success
W0130 10:47:56.086293   21394 initconfiguration.go:305] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeproxy.config.k8s.io", Version:"v1alpha1", Kind:"KubeProxyConfiguration"}: strict decoding error: unknown field "udpIdleTimeout"
W0130 10:47:56.089160   21394 configset.go:177] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeproxy.config.k8s.io", Version:"v1alpha1", Kind:"KubeProxyConfiguration"}: strict decoding error: unknown field "udpIdleTimeout"
W0130 10:47:56.090614   21394 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
W0130 10:47:56.090658   21394 utils.go:69] The recommended value for "healthzBindAddress" in "KubeletConfiguration" is: 127.0.0.1; the provided value is: 0.0.0.0
[init] Using Kubernetes version: v1.26.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
W0130 10:48:13.454816   21394 kubeconfig.go:264] a kubeconfig file "/etc/kubernetes/controller-manager.conf" exists already but has an unexpected API Server URL: expected: https://172.16.51.7:6443, got: https://apiserver.cluster.local:6443
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
W0130 10:48:13.755124   21394 kubeconfig.go:264] a kubeconfig file "/etc/kubernetes/scheduler.conf" exists already but has an unexpected API Server URL: expected: https://172.16.51.7:6443, got: https://apiserver.cluster.local:6443
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
        - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
2023-01-30T10:52:13 error Applied to cluster error: failed to init init master0 failed, error: exit status 1. Please clean and reinstall
Error: failed to init init master0 failed, error: exit status 1. Please clean and reinstall
failed to init init master0 failed, error: exit status 1. Please clean and reinstall

Operating environment

- Sealos version:
- Docker version:
- Kubernetes version:
- Operating system:
- Runtime environment:
- Cluster size:
- Additional information:

Additional information

No response

SpringHgui commented 1 year ago

Tested: with the same command, sealos 4.1.4 installs the cluster successfully.
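
For readers who want to try the same downgrade, a hedged sketch of installing the v4.1.4 binary directly; the release-asset naming (sealos_<version>_linux_<arch>.tar.gz) follows the pattern in the project's install docs and is an assumption here:

# Assumption: GitHub release assets are named sealos_<version>_linux_<arch>.tar.gz.
VERSION=4.1.4
ARCH=amd64   # use arm64 on ARM hosts such as the one in the second report below
wget "https://github.com/labring/sealos/releases/download/v${VERSION}/sealos_${VERSION}_linux_${ARCH}.tar.gz"
tar -zxvf "sealos_${VERSION}_linux_${ARCH}.tar.gz" sealos
chmod +x sealos && mv sealos /usr/bin/

# Re-run the command that fails under 4.1.5~alpha1-1.
sealos run labring/kubernetes:v1.26.1 labring/helm:v3.10.3 labring/calico:v3.25.0 --single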

xiao-jay commented 1 year ago

I have the same problem with Sealos 4.1.5~alpha1-1:

root@yyj-test:/home/ubuntu# sealos run labring/kubernetes:v1.26.1 labring/helm:v3.10.3 labring/calico:v3.25.0 --single --debug
2023-01-30T11:37:59 debug creating new cluster
2023-01-30T11:37:59 debug host 192.168.64.92 is local, command via exec
2023-01-30T11:37:59 debug cmd for bash in host:  arch
2023-01-30T11:37:59 debug cluster info: apiVersion: apps.sealos.io/v1beta1
kind: Cluster
metadata:
  creationTimestamp: null
  name: default
spec:
  hosts:
  - ips:
    - 192.168.64.92:22
    roles:
    - master
    - arm64
  image:
  - labring/kubernetes:v1.26.1
  - labring/helm:v3.10.3
  - labring/calico:v3.25.0
  ssh:
    pk: /root/.ssh/id_rsa
    port: 22
status: {}

2023-01-30T11:37:59 info Start to create a new cluster: master [192.168.64.92], worker [], registry 192.168.64.92
2023-01-30T11:37:59 info Executing pipeline Check in CreateProcessor.
2023-01-30T11:37:59 debug host 192.168.64.92:22 is local, ping is always true
2023-01-30T11:37:59 info checker:hostname [192.168.64.92:22]
2023-01-30T11:37:59 debug host 192.168.64.92:22 is local, command via exec
2023-01-30T11:37:59 debug cmd for bash in host:  hostname
2023-01-30T11:37:59 info checker:timeSync [192.168.64.92:22]
2023-01-30T11:37:59 debug host 192.168.64.92:22 is local, command via exec
2023-01-30T11:37:59 debug cmd for bash in host:  date +%s
2023-01-30T11:37:59 info Executing pipeline PreProcess in CreateProcessor.
2023-01-30T11:38:09 debug images ccfe6105d78d7f9915c42644e87ce55375b07cd09218147f2ada1e96519a0ccc, 0a09b1755103c2151c270ce84a53c2a99c7a81892d23febcaf7cfe5aef9b3f0f, 59c5ff452436484cbd41556364772a91895e4f64f27db50137c381fe1616d566 are pulled
2023-01-30T11:38:09 debug Pull Policy for pull [missing]
2023-01-30T11:38:09 debug Pull Policy for pull [missing]
2023-01-30T11:38:09 debug Pull Policy for pull [missing]
2023-01-30T11:38:09 debug sync cluster status is: apiVersion: apps.sealos.io/v1beta1
kind: Cluster
metadata:
  creationTimestamp: null
  name: default
spec:
  hosts:
  - ips:
    - 192.168.64.92:22
    roles:
    - master
    - arm64
  image:
  - labring/kubernetes:v1.26.1
  - labring/helm:v3.10.3
  - labring/calico:v3.25.0
  ssh:
    pk: /root/.ssh/id_rsa
    port: 22
    user: root
status:
  mounts:
  - env:
      criData: /var/lib/containerd
      defaultVIP: 10.103.97.2
      disableApparmor: "false"
      registryConfig: /etc/registry
      registryData: /var/lib/registry
      registryDomain: sealos.hub
      registryPassword: passw0rd
      registryPort: "5000"
      registryUsername: admin
      sandboxImage: pause:3.9
    imageName: labring/kubernetes:v1.26.1
    labels:
      auth: auth.sh
      check: check.sh $registryData
      clean: clean.sh && bash clean-cri.sh $criData
      clean-registry: clean-registry.sh $registryData $registryConfig
      image: ghcr.io/labring/lvscare:v4.1.4
      init: init-cri.sh $registryDomain $registryPort && bash init.sh
      init-registry: init-registry.sh $registryData $registryConfig
      io.buildah.version: 1.28.1
      org.opencontainers.image.description: kubernetes-v1.26.1 container image
      org.opencontainers.image.licenses: MIT
      org.opencontainers.image.source: https://github.com/labring-actions/cache
      sealos.io.type: rootfs
      sealos.io.version: v1beta1
      version: v1.26.1
      vip: $defaultVIP
    mountPoint: /var/lib/containers/storage/overlay/1eec575c93fed3abfcf67bb95ecc17bb5b5152f0d3120762e1cd8262b9c8222e/merged
    name: default-usydkt3p
    type: rootfs
  - cmd:
    - cp -a opt/helm /usr/bin/
    imageName: labring/helm:v3.10.3
    labels:
      io.buildah.version: 1.28.1
    mountPoint: /var/lib/containers/storage/overlay/9daf2b145c3d2e116202b7ecd14a61b1659ee7bd1fbc54bb4548988b75265046/merged
    name: default-eicimif5
    type: application
  - cmd:
    - helm upgrade -i calico charts/calico -f charts/calico.values.yaml -n tigera-operator
      --create-namespace
    imageName: labring/calico:v3.25.0
    labels:
      io.buildah.version: 1.28.1
    mountPoint: /var/lib/containers/storage/overlay/a8c682eaef150bab0e41f31ba89818633a579ff398ed7596307fea6a7ffebb7c/merged
    name: default-gepwkmnt
    type: application
  phase: ClusterInProcess

2023-01-30T11:38:09 debug renderTextFromEnv: replaces: map[$(SEALOS_SYS_KUBE_VERSION):v1.26.1 $(criData):/var/lib/containerd $(defaultVIP):10.103.97.2 $(disableApparmor):false $(registryConfig):/etc/registry $(registryData):/var/lib/registry $(registryDomain):sealos.hub $(registryPassword):passw0rd $(registryPort):5000 $(registryUsername):admin $(sandboxImage):pause:3.9 $SEALOS_SYS_KUBE_VERSION:v1.26.1 $criData:/var/lib/containerd $defaultVIP:10.103.97.2 $disableApparmor:false $registryConfig:/etc/registry $registryData:/var/lib/registry $registryDomain:sealos.hub $registryPassword:passw0rd $registryPort:5000 $registryUsername:admin $sandboxImage:pause:3.9 ${SEALOS_SYS_KUBE_VERSION}:v1.26.1 ${criData}:/var/lib/containerd ${defaultVIP}:10.103.97.2 ${disableApparmor}:false ${registryConfig}:/etc/registry ${registryData}:/var/lib/registry ${registryDomain}:sealos.hub ${registryPassword}:passw0rd ${registryPort}:5000 ${registryUsername}:admin ${sandboxImage}:pause:3.9] ; text: $defaultVIP
2023-01-30T11:38:09 debug get vip is 10.103.97.2
2023-01-30T11:38:09 info Executing pipeline RunConfig in CreateProcessor.
2023-01-30T11:38:09 debug clusterfile config is empty!
2023-01-30T11:38:09 debug clusterfile config is empty!
2023-01-30T11:38:09 debug clusterfile config is empty!
2023-01-30T11:38:09 info Executing pipeline MountRootfs in CreateProcessor.
2023-01-30T11:38:09 debug render env dir: /var/lib/containers/storage/overlay/a8c682eaef150bab0e41f31ba89818633a579ff398ed7596307fea6a7ffebb7c/merged/etc
2023-01-30T11:38:09 debug render env dir: /var/lib/containers/storage/overlay/a8c682eaef150bab0e41f31ba89818633a579ff398ed7596307fea6a7ffebb7c/merged/scripts
2023-01-30T11:38:09 debug render env dir: /var/lib/containers/storage/overlay/a8c682eaef150bab0e41f31ba89818633a579ff398ed7596307fea6a7ffebb7c/merged/manifests
2023-01-30T11:38:09 debug render env dir: /var/lib/containers/storage/overlay/1eec575c93fed3abfcf67bb95ecc17bb5b5152f0d3120762e1cd8262b9c8222e/merged/etc
2023-01-30T11:38:09 debug render env dir: /var/lib/containers/storage/overlay/1eec575c93fed3abfcf67bb95ecc17bb5b5152f0d3120762e1cd8262b9c8222e/merged/scripts
2023-01-30T11:38:09 debug render env dir: /var/lib/containers/storage/overlay/1eec575c93fed3abfcf67bb95ecc17bb5b5152f0d3120762e1cd8262b9c8222e/merged/manifests
2023-01-30T11:38:09 debug render env dir: /var/lib/containers/storage/overlay/9daf2b145c3d2e116202b7ecd14a61b1659ee7bd1fbc54bb4548988b75265046/merged/etc
2023-01-30T11:38:09 debug render env dir: /var/lib/containers/storage/overlay/9daf2b145c3d2e116202b7ecd14a61b1659ee7bd1fbc54bb4548988b75265046/merged/scripts
2023-01-30T11:38:09 debug render env dir: /var/lib/containers/storage/overlay/9daf2b145c3d2e116202b7ecd14a61b1659ee7bd1fbc54bb4548988b75265046/merged/manifests
2023-01-30T11:38:09 debug cmd for bash in host:  cd /var/lib/containers/storage/overlay/9daf2b145c3d2e116202b7ecd14a61b1659ee7bd1fbc54bb4548988b75265046/merged && chmod -R 0755 *
2023-01-30T11:38:10 debug cmd for bash in host:  cd /var/lib/containers/storage/overlay/a8c682eaef150bab0e41f31ba89818633a579ff398ed7596307fea6a7ffebb7c/merged && chmod -R 0755 *
2023-01-30T11:38:10 debug cmd for bash in host:  cd /var/lib/containers/storage/overlay/1eec575c93fed3abfcf67bb95ecc17bb5b5152f0d3120762e1cd8262b9c8222e/merged && chmod -R 0755 *
2023-01-30T11:38:12 debug send mount image, ip: 192.168.64.92:22, image name: labring/kubernetes:v1.26.1, image type: rootfs
2023-01-30T11:38:12 debug local 192.168.64.92:22 copy files src /var/lib/containers/storage/overlay/1eec575c93fed3abfcf67bb95ecc17bb5b5152f0d3120762e1cd8262b9c8222e/merged/Kubefile to dst /var/lib/sealos/data/default/rootfs/Kubefile
2023-01-30T11:38:12 debug local 192.168.64.92:22 copy files src /var/lib/containers/storage/overlay/1eec575c93fed3abfcf67bb95ecc17bb5b5152f0d3120762e1cd8262b9c8222e/merged/README.md to dst /var/lib/sealos/data/default/rootfs/README.md
2023-01-30T11:38:12 debug local 192.168.64.92:22 copy files src /var/lib/containers/storage/overlay/1eec575c93fed3abfcf67bb95ecc17bb5b5152f0d3120762e1cd8262b9c8222e/merged/bin to dst /var/lib/sealos/data/default/rootfs/bin
2023-01-30T11:38:12 debug local 192.168.64.92:22 copy files src /var/lib/containers/storage/overlay/1eec575c93fed3abfcf67bb95ecc17bb5b5152f0d3120762e1cd8262b9c8222e/merged/cri to dst /var/lib/sealos/data/default/rootfs/cri
2023-01-30T11:38:12 debug local 192.168.64.92:22 copy files src /var/lib/containers/storage/overlay/1eec575c93fed3abfcf67bb95ecc17bb5b5152f0d3120762e1cd8262b9c8222e/merged/etc to dst /var/lib/sealos/data/default/rootfs/etc
2023-01-30T11:38:12 debug local 192.168.64.92:22 copy files src /var/lib/containers/storage/overlay/1eec575c93fed3abfcf67bb95ecc17bb5b5152f0d3120762e1cd8262b9c8222e/merged/images to dst /var/lib/sealos/data/default/rootfs/images
2023-01-30T11:38:12 debug local 192.168.64.92:22 copy files src /var/lib/containers/storage/overlay/1eec575c93fed3abfcf67bb95ecc17bb5b5152f0d3120762e1cd8262b9c8222e/merged/opt to dst /var/lib/sealos/data/default/rootfs/opt
2023-01-30T11:38:12 debug local 192.168.64.92:22 copy files src /var/lib/containers/storage/overlay/1eec575c93fed3abfcf67bb95ecc17bb5b5152f0d3120762e1cd8262b9c8222e/merged/scripts to dst /var/lib/sealos/data/default/rootfs/scripts
2023-01-30T11:38:12 debug local 192.168.64.92:22 copy files src /var/lib/containers/storage/overlay/1eec575c93fed3abfcf67bb95ecc17bb5b5152f0d3120762e1cd8262b9c8222e/merged/statics to dst /var/lib/sealos/data/default/rootfs/statics
2023-01-30T11:38:12 debug send app mount images, ip: 192.168.64.92:22, image name: labring/calico:v3.25.0, image type: application
2023-01-30T11:38:12 debug local 192.168.64.92:22 copy files src /var/lib/containers/storage/overlay/a8c682eaef150bab0e41f31ba89818633a579ff398ed7596307fea6a7ffebb7c/merged/Kubefile to dst /var/lib/sealos/data/default/applications/default-gepwkmnt/workdir/Kubefile
2023-01-30T11:38:12 debug send app mount images, ip: 192.168.64.92:22, image name: labring/helm:v3.10.3, image type: application
2023-01-30T11:38:12 debug local 192.168.64.92:22 copy files src /var/lib/containers/storage/overlay/a8c682eaef150bab0e41f31ba89818633a579ff398ed7596307fea6a7ffebb7c/merged/charts to dst /var/lib/sealos/data/default/applications/default-gepwkmnt/workdir/charts
2023-01-30T11:38:12 debug local 192.168.64.92:22 copy files src /var/lib/containers/storage/overlay/9daf2b145c3d2e116202b7ecd14a61b1659ee7bd1fbc54bb4548988b75265046/merged/opt to dst /var/lib/sealos/data/default/applications/default-eicimif5/workdir/opt
2023-01-30T11:38:12 debug local 192.168.64.92:22 copy files src /var/lib/containers/storage/overlay/a8c682eaef150bab0e41f31ba89818633a579ff398ed7596307fea6a7ffebb7c/merged/images to dst /var/lib/sealos/data/default/applications/default-gepwkmnt/workdir/images
2023-01-30T11:38:12 debug local 192.168.64.92:22 copy files src /var/lib/containers/storage/overlay/a8c682eaef150bab0e41f31ba89818633a579ff398ed7596307fea6a7ffebb7c/merged/init.sh to dst /var/lib/sealos/data/default/applications/default-gepwkmnt/workdir/init.sh
2023-01-30T11:38:12 info Executing pipeline MirrorRegistry in CreateProcessor.
2023-01-30T11:38:12 debug local 192.168.64.92:22 copy files src /var/lib/containers/storage/overlay/a8c682eaef150bab0e41f31ba89818633a579ff398ed7596307fea6a7ffebb7c/merged/registry to dst /var/lib/sealos/data/default/rootfs/registry
2023-01-30T11:38:12 debug local 192.168.64.92:22 copy files src /var/lib/containers/storage/overlay/1eec575c93fed3abfcf67bb95ecc17bb5b5152f0d3120762e1cd8262b9c8222e/merged/registry to dst /var/lib/sealos/data/default/rootfs/registry
2023-01-30T11:38:13 info Executing pipeline Bootstrap in CreateProcessor
2023-01-30T11:38:13 debug apply default checker on host 192.168.64.92:22
2023-01-30T11:38:13 debug host 192.168.64.92 is local, command via exec
2023-01-30T11:38:13 debug cmd for bash in host:  cat /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.yaml
2023-01-30T11:38:13 debug image shim data info: # Copyright © 2022 sealos.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

shim: /var/run/image-cri-shim.sock
cri: /run/containerd/containerd.sock
address: http://sealos.hub:5000
force: true
debug: false
image: /var/lib/image-cri-shim
version: v1
timeout: 15m
auth: admin:passw0rd

2023-01-30T11:38:13 debug show image shim info, image dir : /var/lib/image-cri-shim 
2023-01-30T11:38:13 debug host 192.168.64.92:22 is local, command via exec
2023-01-30T11:38:13 debug cmd for pipe in host:  bash -c registryConfig=(/etc/registry) registryData=(/var/lib/registry) sandboxImage=(pause:3.9) registryPort=(5000) registryPassword=(passw0rd) SEALOS_SYS_KUBE_VERSION=(v1.26.1) registryUsername=(admin) defaultVIP=(10.103.97.2) criData=(/var/lib/containerd) registryDomain=(sealos.hub) disableApparmor=(false) && cd /var/lib/sealos/data/default/rootfs/scripts && bash check.sh $registryData
 INFO [2023-01-30 11:38:23] >> check root,port,cri success 
2023-01-30T11:38:23 debug cmd for pipe in host:  bash -c mkdir -p /var/lib/image-cri-shim && cp -rf  /var/lib/sealos/data/default/rootfs/images/shim/* /var/lib/image-cri-shim/
2023-01-30T11:38:23 debug apply default initializer on host 192.168.64.92:22
2023-01-30T11:38:23 debug host 192.168.64.92:22 is local, command via exec
2023-01-30T11:38:23 debug cmd for pipe in host:  bash -c registryPassword=(passw0rd) criData=(/var/lib/containerd) registryDomain=(sealos.hub) sandboxImage=(pause:3.9) disableApparmor=(false) registryConfig=(/etc/registry) SEALOS_SYS_KUBE_VERSION=(v1.26.1) registryUsername=(admin) defaultVIP=(10.103.97.2) registryPort=(5000) registryData=(/var/lib/registry) && cd /var/lib/sealos/data/default/rootfs/scripts && bash init-cri.sh $registryDomain $registryPort && bash init.sh
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /etc/systemd/system/containerd.service.
 INFO [2023-01-30 11:38:24] >> Health check containerd! 
 INFO [2023-01-30 11:38:24] >> containerd is running 
 INFO [2023-01-30 11:38:24] >> init containerd success 
Created symlink /etc/systemd/system/multi-user.target.wants/image-cri-shim.service → /etc/systemd/system/image-cri-shim.service.
 INFO [2023-01-30 11:38:25] >> Health check image-cri-shim! 
 INFO [2023-01-30 11:38:25] >> image-cri-shim is running 
 INFO [2023-01-30 11:38:25] >> init shim success 
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
* Applying /etc/sysctl.d/10-console-messages.conf ...
kernel.printk = 4 4 1 7
* Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
net.ipv6.conf.all.use_tempaddr = 2
net.ipv6.conf.default.use_tempaddr = 2
* Applying /etc/sysctl.d/10-kernel-hardening.conf ...
kernel.kptr_restrict = 1
* Applying /etc/sysctl.d/10-magic-sysrq.conf ...
kernel.sysrq = 176
* Applying /etc/sysctl.d/10-network-security.conf ...
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.all.rp_filter = 2
* Applying /etc/sysctl.d/10-ptrace.conf ...
kernel.yama.ptrace_scope = 1
* Applying /etc/sysctl.d/10-zeropage.conf ...
vm.mmap_min_addr = 32768
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.default.accept_source_route = 0
sysctl: setting key "net.ipv4.conf.all.accept_source_route": Invalid argument
net.ipv4.conf.default.promote_secondaries = 1
sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument
net.ipv4.ping_group_range = 0 2147483647
net.core.default_qdisc = fq_codel
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
fs.protected_regular = 1
fs.protected_fifos = 1
* Applying /usr/lib/sysctl.d/50-pid-max.conf ...
kernel.pid_max = 4194304
* Applying /etc/sysctl.d/99-cloudimg-ipv6.conf ...
net.ipv6.conf.all.use_tempaddr = 0
net.ipv6.conf.default.use_tempaddr = 0
* Applying /usr/lib/sysctl.d/99-protect-links.conf ...
fs.protected_fifos = 1
fs.protected_hardlinks = 1
fs.protected_regular = 2
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.conf.all.rp_filter = 0
* Applying /etc/sysctl.conf ...
net.ipv4.ip_forward = 1
Firewall stopped and disabled on system startup
 INFO [2023-01-30 11:38:26] >> init kube success 
 INFO [2023-01-30 11:38:26] >> init rootfs success 
2023-01-30T11:38:26 debug apply registry addon applier on host 192.168.64.92:22
2023-01-30T11:38:26 debug host 192.168.64.92:22 is local, command via exec
2023-01-30T11:38:26 debug cmd for bash in host:  cat /var/lib/sealos/data/default/rootfs/etc/registry.yml
2023-01-30T11:38:26 debug image shim data info: # Copyright © 2022 sealos.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

domain: sealos.hub
port: "5000"
username: "admin"
password: "passw0rd"
data: "/var/lib/registry"

2023-01-30T11:38:26 debug show registry info, IP: 192.168.64.92:22, Domain: sealos.hub, Data: /var/lib/registry
2023-01-30T11:38:26 debug make soft link: ln -s /var/lib/sealos/data/default/rootfs/registry /var/lib/registry
2023-01-30T11:38:26 debug host 192.168.64.92:22 is local, command via exec
2023-01-30T11:38:26 debug cmd for pipe in host:  bash -c ln -s /var/lib/sealos/data/default/rootfs/registry /var/lib/registry
2023-01-30T11:38:26 debug local 192.168.64.92:22 copy files src /root/.sealos/default/etc/registry_htpasswd to dst /var/lib/sealos/data/default/rootfs/etc/registry_htpasswd
2023-01-30T11:38:26 debug host 192.168.64.92:22 is local, command via exec
2023-01-30T11:38:26 debug cmd for pipe in host:  bash -c registryData=(/var/lib/registry) registryPort=(5000) sandboxImage=(pause:3.9) registryPassword=(passw0rd) SEALOS_SYS_KUBE_VERSION=(v1.26.1) registryConfig=(/etc/registry) disableApparmor=(false) defaultVIP=(10.103.97.2) registryUsername=(admin) criData=(/var/lib/containerd) registryDomain=(sealos.hub) && cd /var/lib/sealos/data/default/rootfs/scripts && bash init-registry.sh $registryData $registryConfig
Created symlink /etc/systemd/system/multi-user.target.wants/registry.service → /etc/systemd/system/registry.service.
 INFO [2023-01-30 11:38:27] >> Health check registry! 
 INFO [2023-01-30 11:38:27] >> registry is running 
 INFO [2023-01-30 11:38:27] >> init registry success 
2023-01-30T11:38:27 info Executing pipeline Init in CreateProcessor.
2023-01-30T11:38:27 info start to copy kubeadm config to master0
2023-01-30T11:38:27 debug override defaults of kubelet configuration
2023-01-30T11:38:27 debug renderTextFromEnv: replaces: map[$(SEALOS_SYS_KUBE_VERSION):v1.26.1 $(criData):/var/lib/containerd $(defaultVIP):10.103.97.2 $(disableApparmor):false $(registryConfig):/etc/registry $(registryData):/var/lib/registry $(registryDomain):sealos.hub $(registryPassword):passw0rd $(registryPort):5000 $(registryUsername):admin $(sandboxImage):pause:3.9 $SEALOS_SYS_KUBE_VERSION:v1.26.1 $criData:/var/lib/containerd $defaultVIP:10.103.97.2 $disableApparmor:false $registryConfig:/etc/registry $registryData:/var/lib/registry $registryDomain:sealos.hub $registryPassword:passw0rd $registryPort:5000 $registryUsername:admin $sandboxImage:pause:3.9 ${SEALOS_SYS_KUBE_VERSION}:v1.26.1 ${criData}:/var/lib/containerd ${defaultVIP}:10.103.97.2 ${disableApparmor}:false ${registryConfig}:/etc/registry ${registryData}:/var/lib/registry ${registryDomain}:sealos.hub ${registryPassword}:passw0rd ${registryPort}:5000 ${registryUsername}:admin ${sandboxImage}:pause:3.9] ; text: $defaultVIP
2023-01-30T11:38:27 debug get vip is 10.103.97.2
2023-01-30T11:38:27 debug renderTextFromEnv: replaces: map[$(SEALOS_SYS_KUBE_VERSION):v1.26.1 $(criData):/var/lib/containerd $(defaultVIP):10.103.97.2 $(disableApparmor):false $(registryConfig):/etc/registry $(registryData):/var/lib/registry $(registryDomain):sealos.hub $(registryPassword):passw0rd $(registryPort):5000 $(registryUsername):admin $(sandboxImage):pause:3.9 $SEALOS_SYS_KUBE_VERSION:v1.26.1 $criData:/var/lib/containerd $defaultVIP:10.103.97.2 $disableApparmor:false $registryConfig:/etc/registry $registryData:/var/lib/registry $registryDomain:sealos.hub $registryPassword:passw0rd $registryPort:5000 $registryUsername:admin $sandboxImage:pause:3.9 ${SEALOS_SYS_KUBE_VERSION}:v1.26.1 ${criData}:/var/lib/containerd ${defaultVIP}:10.103.97.2 ${disableApparmor}:false ${registryConfig}:/etc/registry ${registryData}:/var/lib/registry ${registryDomain}:sealos.hub ${registryPassword}:passw0rd ${registryPort}:5000 ${registryUsername}:admin ${sandboxImage}:pause:3.9] ; text: $defaultVIP
2023-01-30T11:38:27 debug get vip is 10.103.97.2
2023-01-30T11:38:27 debug start to exec remote 192.168.64.92:22 shell: /var/lib/sealos/data/default/rootfs/opt/sealctl cri socket
2023-01-30T11:38:27 debug host 192.168.64.92:22 is local, command via exec
2023-01-30T11:38:27 debug cmd for bash in host:  /var/lib/sealos/data/default/rootfs/opt/sealctl cri socket
2023-01-30T11:38:27 debug get nodes [192.168.64.92:22] cri socket is [/run/containerd/containerd.sock]
2023-01-30T11:38:27 debug start to exec remote 192.168.64.92:22 shell: /var/lib/sealos/data/default/rootfs/opt/sealctl cri cgroup-driver --short
2023-01-30T11:38:27 debug host 192.168.64.92:22 is local, command via exec
2023-01-30T11:38:27 debug cmd for bash in host:  /var/lib/sealos/data/default/rootfs/opt/sealctl cri cgroup-driver --short
2023-01-30T11:38:27 debug get nodes [192.168.64.92:22] cgroup driver is [systemd]
2023-01-30T11:38:27 debug local 192.168.64.92:22 copy files src /root/.sealos/default/tmp/kubeadm-init.yaml to dst /root/.sealos/default/etc/kubeadm-init.yaml
2023-01-30T11:38:27 info start to generate cert and kubeConfig...
2023-01-30T11:38:27 debug host 192.168.64.92:22 is local, command via exec
2023-01-30T11:38:27 debug cmd for pipe in host:  bash -c rm -rf /etc/kubernetes/admin.conf
2023-01-30T11:38:27 info start to generator cert and copy to masters...
2023-01-30T11:38:27 debug start to exec remote 192.168.64.92:22 shell: /var/lib/sealos/data/default/rootfs/opt/sealctl hostname
2023-01-30T11:38:27 debug host 192.168.64.92:22 is local, command via exec
2023-01-30T11:38:27 debug cmd for bash in host:  /var/lib/sealos/data/default/rootfs/opt/sealctl hostname
2023-01-30T11:38:28 info apiserver altNames : {map[apiserver.cluster.local:apiserver.cluster.local kubernetes:kubernetes kubernetes.default:kubernetes.default kubernetes.default.svc:kubernetes.default.svc kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local localhost:localhost yyj-test:yyj-test] map[10.103.97.2:10.103.97.2 10.96.0.1:10.96.0.1 127.0.0.1:127.0.0.1 192.168.64.92:192.168.64.92]}
2023-01-30T11:38:28 info Etcd altnames : {map[localhost:localhost yyj-test:yyj-test] map[127.0.0.1:127.0.0.1 192.168.64.92:192.168.64.92 ::1:::1]}, commonName : yyj-test
2023-01-30T11:38:29 debug cert.GenerateCert getServiceCIDR  10.96.0.0/22
2023-01-30T11:38:29 debug cert.GenerateCert param: /root/.sealos/default/pki /root/.sealos/default/pki/etcd [127.0.0.1 apiserver.cluster.local 10.103.97.2 192.168.64.92] 192.168.64.92 yyj-test 10.96.0.0/22 cluster.local
2023-01-30T11:38:29 info start to copy etc pki files to masters
2023-01-30T11:38:29 debug local 192.168.64.92:22 copy files src /root/.sealos/default/pki to dst /etc/kubernetes/pki
2023-01-30T11:38:29 info start to copy etc pki files to masters
2023-01-30T11:38:29 debug local 192.168.64.92:22 copy files src /root/.sealos/default/pki to dst /etc/kubernetes/pki
2023-01-30T11:38:29 info start to create kubeconfig...
2023-01-30T11:38:29 debug start to exec remote 192.168.64.92:22 shell: /var/lib/sealos/data/default/rootfs/opt/sealctl hostname
2023-01-30T11:38:29 debug host 192.168.64.92:22 is local, command via exec
2023-01-30T11:38:29 debug cmd for bash in host:  /var/lib/sealos/data/default/rootfs/opt/sealctl hostname
2023-01-30T11:38:29 debug [kubeconfig] Writing "admin.conf" kubeconfig file

2023-01-30T11:38:30 debug [kubeconfig] Writing "controller-manager.conf" kubeconfig file

2023-01-30T11:38:30 debug [kubeconfig] Writing "scheduler.conf" kubeconfig file

2023-01-30T11:38:30 debug [kubeconfig] Writing "kubelet.conf" kubeconfig file

2023-01-30T11:38:30 info start to copy kubeconfig files to masters
2023-01-30T11:38:30 debug local 192.168.64.92:22 copy files src /root/.sealos/default/etc/admin.conf to dst /etc/kubernetes/admin.conf
2023-01-30T11:38:30 debug local 192.168.64.92:22 copy files src /root/.sealos/default/etc/controller-manager.conf to dst /etc/kubernetes/controller-manager.conf
2023-01-30T11:38:30 debug local 192.168.64.92:22 copy files src /root/.sealos/default/etc/scheduler.conf to dst /etc/kubernetes/scheduler.conf
2023-01-30T11:38:30 debug local 192.168.64.92:22 copy files src /root/.sealos/default/etc/kubelet.conf to dst /etc/kubernetes/kubelet.conf
2023-01-30T11:38:30 info start to copy static files to masters
2023-01-30T11:38:30 debug host 192.168.64.92:22 is local, command via exec
2023-01-30T11:38:30 debug cmd for pipe in host:  bash -c mkdir -p /etc/kubernetes && cp -f /var/lib/sealos/data/default/rootfs/statics/audit-policy.yml /etc/kubernetes/audit-policy.yml
2023-01-30T11:38:30 info start to init master0...
2023-01-30T11:38:30 info registry auth in node 192.168.64.92:22
2023-01-30T11:38:30 debug host 192.168.64.92:22 is local, command via exec
2023-01-30T11:38:30 debug cmd for bash in host:  cat /var/lib/sealos/data/default/rootfs/etc/registry.yml
2023-01-30T11:38:30 debug image shim data info: # Copyright © 2022 sealos.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

domain: sealos.hub
port: "5000"
username: "admin"
password: "passw0rd"
data: "/var/lib/registry"

2023-01-30T11:38:30 debug show registry info, IP: 192.168.64.92:22, Domain: sealos.hub, Data: /var/lib/registry
2023-01-30T11:38:30 debug start to exec remote 192.168.64.92:22 shell: /var/lib/sealos/data/default/rootfs/opt/sealctl  hosts add --ip 192.168.64.92  --domain sealos.hub
2023-01-30T11:38:30 debug host 192.168.64.92:22 is local, command via exec
2023-01-30T11:38:30 debug cmd for pipe in host:  bash -c /var/lib/sealos/data/default/rootfs/opt/sealctl  hosts add --ip 192.168.64.92  --domain sealos.hub
2023-01-30T11:38:30 info domain sealos.hub:192.168.64.92 append success
2023-01-30T11:38:30 debug host 192.168.64.92:22 is local, command via exec
2023-01-30T11:38:30 debug cmd for pipe in host:  bash -c registryPassword=(passw0rd) disableApparmor=(false) defaultVIP=(10.103.97.2) sandboxImage=(pause:3.9) SEALOS_SYS_KUBE_VERSION=(v1.26.1) registryData=(/var/lib/registry) registryPort=(5000) registryConfig=(/etc/registry) registryUsername=(admin) criData=(/var/lib/containerd) registryDomain=(sealos.hub) && cd /var/lib/sealos/data/default/rootfs/scripts && bash auth.sh
2023-01-30T11:38:30 debug start to exec remote 192.168.64.92:22 shell: /var/lib/sealos/data/default/rootfs/opt/sealctl  hosts add --ip 192.168.64.92  --domain apiserver.cluster.local
2023-01-30T11:38:30 debug host 192.168.64.92:22 is local, command via exec
2023-01-30T11:38:30 debug cmd for pipe in host:  bash -c /var/lib/sealos/data/default/rootfs/opt/sealctl  hosts add --ip 192.168.64.92  --domain apiserver.cluster.local
2023-01-30T11:38:30 info domain apiserver.cluster.local:192.168.64.92 append success
2023-01-30T11:38:30 debug renderTextFromEnv: replaces: map[$(SEALOS_SYS_KUBE_VERSION):v1.26.1 $(criData):/var/lib/containerd $(defaultVIP):10.103.97.2 $(disableApparmor):false $(registryConfig):/etc/registry $(registryData):/var/lib/registry $(registryDomain):sealos.hub $(registryPassword):passw0rd $(registryPort):5000 $(registryUsername):admin $(sandboxImage):pause:3.9 $SEALOS_SYS_KUBE_VERSION:v1.26.1 $criData:/var/lib/containerd $defaultVIP:10.103.97.2 $disableApparmor:false $registryConfig:/etc/registry $registryData:/var/lib/registry $registryDomain:sealos.hub $registryPassword:passw0rd $registryPort:5000 $registryUsername:admin $sandboxImage:pause:3.9 ${SEALOS_SYS_KUBE_VERSION}:v1.26.1 ${criData}:/var/lib/containerd ${defaultVIP}:10.103.97.2 ${disableApparmor}:false ${registryConfig}:/etc/registry ${registryData}:/var/lib/registry ${registryDomain}:sealos.hub ${registryPassword}:passw0rd ${registryPort}:5000 ${registryUsername}:admin ${sandboxImage}:pause:3.9] ; text: $defaultVIP
2023-01-30T11:38:30 debug get vip is 10.103.97.2
2023-01-30T11:38:30 debug host 192.168.64.92:22 is local, command via exec
2023-01-30T11:38:30 debug cmd for pipe in host:  bash -c kubeadm init --config=/root/.sealos/default/etc/kubeadm-init.yaml --skip-certificate-key-print --skip-token-print -v 6 --ignore-preflight-errors=SystemVerification
I0130 11:38:30.833405    8980 initconfiguration.go:254] loading configuration from "/root/.sealos/default/etc/kubeadm-init.yaml"
W0130 11:38:30.835500    8980 initconfiguration.go:305] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeproxy.config.k8s.io", Version:"v1alpha1", Kind:"KubeProxyConfiguration"}: strict decoding error: unknown field "udpIdleTimeout"
W0130 11:38:30.835820    8980 configset.go:177] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeproxy.config.k8s.io", Version:"v1alpha1", Kind:"KubeProxyConfiguration"}: strict decoding error: unknown field "udpIdleTimeout"
W0130 11:38:30.836645    8980 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
W0130 11:38:30.836665    8980 utils.go:69] The recommended value for "healthzBindAddress" in "KubeletConfiguration" is: 127.0.0.1; the provided value is: 0.0.0.0
I0130 11:38:30.840333    8980 certs.go:519] validating certificate period for CA certificate
I0130 11:38:30.840401    8980 certs.go:519] validating certificate period for front-proxy CA certificate
[init] Using Kubernetes version: v1.26.1
[preflight] Running pre-flight checks
I0130 11:38:30.840680    8980 checks.go:568] validating Kubernetes and kubeadm version
I0130 11:38:30.840870    8980 checks.go:168] validating if the firewall is enabled and active
I0130 11:38:30.853138    8980 checks.go:203] validating availability of port 6443
I0130 11:38:30.855053    8980 checks.go:203] validating availability of port 10259
I0130 11:38:30.855201    8980 checks.go:203] validating availability of port 10257
I0130 11:38:30.855249    8980 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I0130 11:38:30.855365    8980 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I0130 11:38:30.855370    8980 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I0130 11:38:30.855376    8980 checks.go:280] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I0130 11:38:30.855394    8980 checks.go:430] validating if the connectivity type is via proxy or direct
I0130 11:38:30.855407    8980 checks.go:469] validating http connectivity to first IP address in the CIDR
I0130 11:38:30.855422    8980 checks.go:469] validating http connectivity to first IP address in the CIDR
I0130 11:38:30.855427    8980 checks.go:104] validating the container runtime
I0130 11:38:30.873210    8980 checks.go:329] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0130 11:38:30.873271    8980 checks.go:329] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0130 11:38:30.873299    8980 checks.go:644] validating whether swap is enabled or not
I0130 11:38:30.873327    8980 checks.go:370] validating the presence of executable crictl
I0130 11:38:30.873347    8980 checks.go:370] validating the presence of executable conntrack
I0130 11:38:30.873362    8980 checks.go:370] validating the presence of executable ip
I0130 11:38:30.873443    8980 checks.go:370] validating the presence of executable iptables
I0130 11:38:30.873470    8980 checks.go:370] validating the presence of executable mount
I0130 11:38:30.873479    8980 checks.go:370] validating the presence of executable nsenter
I0130 11:38:30.873487    8980 checks.go:370] validating the presence of executable ebtables
I0130 11:38:30.873505    8980 checks.go:370] validating the presence of executable ethtool
I0130 11:38:30.873513    8980 checks.go:370] validating the presence of executable socat
    [WARNING FileExisting-socat]: socat not found in system path
I0130 11:38:30.873578    8980 checks.go:370] validating the presence of executable tc
I0130 11:38:30.873588    8980 checks.go:370] validating the presence of executable touch
I0130 11:38:30.873596    8980 checks.go:516] running all checks
I0130 11:38:30.882346    8980 checks.go:401] checking whether the given node name is valid and reachable using net.LookupHost
I0130 11:38:30.882723    8980 checks.go:610] validating kubelet version
I0130 11:38:30.946520    8980 checks.go:130] validating if the "kubelet" service is enabled and active
I0130 11:38:30.955184    8980 checks.go:203] validating availability of port 10250
I0130 11:38:30.955234    8980 checks.go:203] validating availability of port 2379
I0130 11:38:30.955278    8980 checks.go:203] validating availability of port 2380
I0130 11:38:30.955366    8980 checks.go:243] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0130 11:38:30.955417    8980 checks.go:832] using image pull policy: IfNotPresent
I0130 11:38:31.143980    8980 checks.go:849] pulling: registry.k8s.io/kube-apiserver:v1.26.1
I0130 11:38:32.932387    8980 checks.go:849] pulling: registry.k8s.io/kube-controller-manager:v1.26.1
I0130 11:38:34.411842    8980 checks.go:849] pulling: registry.k8s.io/kube-scheduler:v1.26.1
I0130 11:38:35.883313    8980 checks.go:849] pulling: registry.k8s.io/kube-proxy:v1.26.1
I0130 11:38:37.234234    8980 checks.go:849] pulling: registry.k8s.io/pause:3.9
I0130 11:38:38.066067    8980 checks.go:849] pulling: registry.k8s.io/etcd:3.5.6-0
I0130 11:38:40.818246    8980 checks.go:849] pulling: registry.k8s.io/coredns/coredns:v1.9.3
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0130 11:38:41.833751    8980 certs.go:519] validating certificate period for ca certificate
[certs] Using existing ca certificate authority
I0130 11:38:41.834335    8980 certs.go:519] validating certificate period for apiserver certificate
[certs] Using existing apiserver certificate and key on disk
I0130 11:38:41.834613    8980 certs.go:519] validating certificate period for apiserver-kubelet-client certificate
[certs] Using existing apiserver-kubelet-client certificate and key on disk
I0130 11:38:41.834739    8980 certs.go:519] validating certificate period for front-proxy-ca certificate
[certs] Using existing front-proxy-ca certificate authority
I0130 11:38:41.834819    8980 certs.go:519] validating certificate period for front-proxy-client certificate
[certs] Using existing front-proxy-client certificate and key on disk
I0130 11:38:41.834940    8980 certs.go:519] validating certificate period for etcd/ca certificate
[certs] Using existing etcd/ca certificate authority
I0130 11:38:41.835098    8980 certs.go:519] validating certificate period for etcd/server certificate
[certs] Using existing etcd/server certificate and key on disk
I0130 11:38:41.835209    8980 certs.go:519] validating certificate period for etcd/peer certificate
[certs] Using existing etcd/peer certificate and key on disk
I0130 11:38:41.835319    8980 certs.go:519] validating certificate period for etcd/healthcheck-client certificate
[certs] Using existing etcd/healthcheck-client certificate and key on disk
I0130 11:38:41.835454    8980 certs.go:519] validating certificate period for apiserver-etcd-client certificate
[certs] Using existing apiserver-etcd-client certificate and key on disk
I0130 11:38:41.835587    8980 certs.go:78] creating new public/private key files for signing service account users
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0130 11:38:41.835721    8980 kubeconfig.go:103] creating kubeconfig file for admin.conf
I0130 11:38:41.920592    8980 loader.go:373] Config loaded from file:  /etc/kubernetes/admin.conf
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
I0130 11:38:41.920637    8980 kubeconfig.go:103] creating kubeconfig file for kubelet.conf
I0130 11:38:41.985632    8980 loader.go:373] Config loaded from file:  /etc/kubernetes/kubelet.conf
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
I0130 11:38:41.985658    8980 kubeconfig.go:103] creating kubeconfig file for controller-manager.conf
I0130 11:38:42.120444    8980 loader.go:373] Config loaded from file:  /etc/kubernetes/controller-manager.conf
W0130 11:38:42.120471    8980 kubeconfig.go:264] a kubeconfig file "/etc/kubernetes/controller-manager.conf" exists already but has an unexpected API Server URL: expected: https://192.168.64.92:6443, got: https://apiserver.cluster.local:6443
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
I0130 11:38:42.120496    8980 kubeconfig.go:103] creating kubeconfig file for scheduler.conf
I0130 11:38:42.186783    8980 loader.go:373] Config loaded from file:  /etc/kubernetes/scheduler.conf
W0130 11:38:42.186809    8980 kubeconfig.go:264] a kubeconfig file "/etc/kubernetes/scheduler.conf" exists already but has an unexpected API Server URL: expected: https://192.168.64.92:6443, got: https://apiserver.cluster.local:6443
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
I0130 11:38:42.186832    8980 kubelet.go:67] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0130 11:38:42.461318    8980 manifests.go:99] [control-plane] getting StaticPodSpecs
I0130 11:38:42.461626    8980 manifests.go:125] [control-plane] adding volume "audit" for component "kube-apiserver"
I0130 11:38:42.461634    8980 manifests.go:125] [control-plane] adding volume "audit-log" for component "kube-apiserver"
I0130 11:38:42.461637    8980 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0130 11:38:42.461640    8980 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
I0130 11:38:42.461642    8980 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0130 11:38:42.461645    8980 manifests.go:125] [control-plane] adding volume "localtime" for component "kube-apiserver"
I0130 11:38:42.461648    8980 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
I0130 11:38:42.461650    8980 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
I0130 11:38:42.463322    8980 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0130 11:38:42.463343    8980 manifests.go:99] [control-plane] getting StaticPodSpecs
I0130 11:38:42.463466    8980 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0130 11:38:42.463487    8980 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
I0130 11:38:42.463493    8980 manifests.go:125] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0130 11:38:42.463498    8980 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0130 11:38:42.463502    8980 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0130 11:38:42.463507    8980 manifests.go:125] [control-plane] adding volume "localtime" for component "kube-controller-manager"
I0130 11:38:42.463511    8980 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
I0130 11:38:42.463687    8980 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
I0130 11:38:42.464546    8980 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0130 11:38:42.464583    8980 manifests.go:99] [control-plane] getting StaticPodSpecs
I0130 11:38:42.465245    8980 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0130 11:38:42.465257    8980 manifests.go:125] [control-plane] adding volume "localtime" for component "kube-scheduler"
I0130 11:38:42.465608    8980 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0130 11:38:42.467218    8980 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I0130 11:38:42.467242    8980 waitcontrolplane.go:83] [wait-control-plane] Waiting for the API server to be healthy
I0130 11:38:42.467770    8980 loader.go:373] Config loaded from file:  /etc/kubernetes/admin.conf
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0130 11:38:42.481868    8980 round_trippers.go:553] GET https://apiserver.cluster.local:6443/healthz?timeout=10s  in 0 milliseconds
I0130 11:38:42.985107    8980 round_trippers.go:553] GET https://apiserver.cluster.local:6443/healthz?timeout=10s  in 0 milliseconds
I0130 11:38:43.483350    8980 round_trippers.go:553] GET https://apiserver.cluster.local:6443/healthz?timeout=10s  in 0 milliseconds
I0130 11:38:44.001794    8980 round_trippers.go:553] GET https://apiserver.cluster.local:6443/healthz?timeout=10s  in 0 milliseconds
I0130 11:38:44.486553    8980 round_trippers.go:553] GET https://apiserver.cluster.local:6443/healthz?timeout=10s  in 0 milliseconds
I0130 11:38:44.998147    8980 round_trippers.go:553] GET https://apiserver.cluster.local:6443/healthz?timeout=10s  in 0 milliseconds
I0130 11:38:45.498417    8980 round_trippers.go:553] GET https://apiserver.cluster.local:6443/healthz?timeout=10s  in 0 milliseconds
I0130 11:38:45.992656    8980 round_trippers.go:553] GET https://apiserver.cluster.local:6443/healthz?timeout=10s  in 0 milliseconds
I0130 11:38:46.485068    8980 round_trippers.go:553] GET https://apiserver.cluster.local:6443/healthz?timeout=10s  in 1
Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
    - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
couldn't initialize a Kubernetes cluster
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
    cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:108
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
    cmd/kubeadm/app/cmd/phases/workflow/runner.go:259
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
    cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
    cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
    cmd/kubeadm/app/cmd/init.go:112
github.com/spf13/cobra.(*Command).execute
    vendor/github.com/spf13/cobra/command.go:916
github.com/spf13/cobra.(*Command).ExecuteC
    vendor/github.com/spf13/cobra/command.go:1040
github.com/spf13/cobra.(*Command).Execute
    vendor/github.com/spf13/cobra/command.go:968
k8s.io/kubernetes/cmd/kubeadm/app.Run
    cmd/kubeadm/app/kubeadm.go:50
main.main
    cmd/kubeadm/kubeadm.go:25
runtime.main
    /usr/local/go/src/runtime/proc.go:250
runtime.goexit
    /usr/local/go/src/runtime/asm_arm64.s:1172
error execution phase wait-control-plane
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
    cmd/kubeadm/app/cmd/phases/workflow/runner.go:260
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
    cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
    cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
    cmd/kubeadm/app/cmd/init.go:112
github.com/spf13/cobra.(*Command).execute
    vendor/github.com/spf13/cobra/command.go:916
github.com/spf13/cobra.(*Command).ExecuteC
    vendor/github.com/spf13/cobra/command.go:1040
github.com/spf13/cobra.(*Command).Execute
    vendor/github.com/spf13/cobra/command.go:968
k8s.io/kubernetes/cmd/kubeadm/app.Run
    cmd/kubeadm/app/kubeadm.go:50
main.main
    cmd/kubeadm/kubeadm.go:25
runtime.main
    /usr/local/go/src/runtime/proc.go:250
runtime.goexit
    /usr/local/go/src/runtime/asm_arm64.s:1172
2023-01-30T11:42:42 error Applied to cluster error: failed to init init master0 failed, error: exit status 1. Please clean and reinstall
2023-01-30T11:42:42 debug write cluster file to local storage: /root/.sealos/default/Clusterfile
Error: failed to init init master0 failed, error: exit status 1. Please clean and reinstall
failed to init init master0 failed, error: exit status 1. Please clean and reinstall
root@yyj-test:/home/ubuntu# journalctl -xeu kubelet --all
Jan 30 11:48:20 yyj-test kubelet[9182]: E0130 11:48:20.923781    9182 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime >
Jan 30 11:48:20 yyj-test kubelet[9182]: E0130 11:48:20.923837    9182 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime >
Jan 30 11:48:20 yyj-test kubelet[9182]: E0130 11:48:20.923918    9182 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-yyj-test_kube-system(6220df38f6dd0b1dfc9d8f9dcc7fe593)\" with CreatePodSandboxError: \"Failed to create sandbox f>
Jan 30 11:48:21 yyj-test kubelet[9182]: E0130 11:48:21.035282    9182 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"yyj-test.173ef907ac0f0602", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", Resource>
Jan 30 11:48:22 yyj-test kubelet[9182]: W0130 11:48:22.395685    9182 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://apiserver.cluster.local:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192>
Jan 30 11:48:22 yyj-test kubelet[9182]: E0130 11:48:22.395783    9182 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://apiserver.cluster.local:6443/apis/node.k8s.io/v1/runtimeclasses?limit=50>
Jan 30 11:48:22 yyj-test kubelet[9182]: W0130 11:48:22.697877    9182 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://apiserver.cluster.local:6443/api/v1/nodes?fieldSelector=metadata.name%3Dyyj-test&limit=500&resourceVersion=0": dia>
Jan 30 11:48:22 yyj-test kubelet[9182]: E0130 11:48:22.698000    9182 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://apiserver.cluster.local:6443/api/v1/nodes?fieldSelector=metadata.name%3Dyyj-test&limit=5>
Jan 30 11:48:22 yyj-test kubelet[9182]: E0130 11:48:22.878591    9182 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI run>
Jan 30 11:48:22 yyj-test kubelet[9182]: E0130 11:48:22.878628    9182 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime >
Jan 30 11:48:22 yyj-test kubelet[9182]: E0130 11:48:22.878641    9182 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime >
Jan 30 11:48:22 yyj-test kubelet[9182]: E0130 11:48:22.878690    9182 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-yyj-test_kube-system(eb433c640f5de58e655dfb1e718e4de6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\>
Jan 30 11:48:23 yyj-test kubelet[9182]: E0130 11:48:23.949675    9182 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI run>
Jan 30 11:48:23 yyj-test kubelet[9182]: E0130 11:48:23.949722    9182 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime >
Jan 30 11:48:23 yyj-test kubelet[9182]: E0130 11:48:23.949749    9182 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime >
Jan 30 11:48:23 yyj-test kubelet[9182]: E0130 11:48:23.949808    9182 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-yyj-test_kube-system(ab557e6bc427dc9e50925caee65aee8f)\" with CreatePodSandboxError: \"Failed to create >
Jan 30 11:48:24 yyj-test kubelet[9182]: E0130 11:48:24.077021    9182 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"yyj-test\" not found"
Jan 30 11:48:24 yyj-test kubelet[9182]: E0130 11:48:24.776278    9182 controller.go:146] failed to ensure lease exists, will retry in 7s, error: Get "https://apiserver.cluster.local:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/yyj-test?timeout=10s": dial tcp 192.168.6>
Jan 30 11:48:25 yyj-test kubelet[9182]: I0130 11:48:25.069511    9182 kubelet_node_status.go:70] "Attempting to register node" node="yyj-test"
Jan 30 11:48:25 yyj-test kubelet[9182]: E0130 11:48:25.071950    9182 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://apiserver.cluster.local:6443/api/v1/nodes\": dial tcp 192.168.64.92:6443: connect: connection refused" node="yyj-test"
Jan 30 11:48:28 yyj-test kubelet[9182]: E0130 11:48:28.909578    9182 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI run>
Jan 30 11:48:28 yyj-test kubelet[9182]: E0130 11:48:28.909669    9182 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime >
Jan 30 11:48:28 yyj-test kubelet[9182]: E0130 11:48:28.909710    9182 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime >
Jan 30 11:48:28 yyj-test kubelet[9182]: E0130 11:48:28.909768    9182 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-yyj-test_kube-system(bcfcf21cb6796021e9e79a111d35e4fb)\" with CreatePodSandboxError: \"Failed to create sandbox f>
Jan 30 11:48:31 yyj-test kubelet[9182]: E0130 11:48:31.101207    9182 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"yyj-test.173ef907ac0f0602", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", Resource>
Jan 30 11:48:31 yyj-test kubelet[9182]: W0130 11:48:31.265432    9182 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://apiserver.cluster.local:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168>
Jan 30 11:48:31 yyj-test kubelet[9182]: E0130 11:48:31.265812    9182 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://apiserver.cluster.local:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resou>
Jan 30 11:48:31 yyj-test kubelet[9182]: E0130 11:48:31.788059    9182 controller.go:146] failed to ensure lease exists, will retry in 7s, error: Get "https://apiserver.cluster.local:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/yyj-test?timeout=10s": dial tcp 192.168.6>
Jan 30 11:48:32 yyj-test kubelet[9182]: I0130 11:48:32.082448    9182 kubelet_node_status.go:70] "Attempting to register node" node="yyj-test"
Jan 30 11:48:32 yyj-test kubelet[9182]: E0130 11:48:32.084982    9182 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://apiserver.cluster.local:6443/api/v1/nodes\": dial tcp 192.168.64.92:6443: connect: connection refused" node="yyj-test"
Jan 30 11:48:34 yyj-test kubelet[9182]: E0130 11:48:34.077682    9182 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"yyj-test\" not found"
Jan 30 11:48:34 yyj-test kubelet[9182]: E0130 11:48:34.884061    9182 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI run>
Jan 30 11:48:34 yyj-test kubelet[9182]: E0130 11:48:34.884111    9182 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime >
Jan 30 11:48:34 yyj-test kubelet[9182]: E0130 11:48:34.884149    9182 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime >
Jan 30 11:48:34 yyj-test kubelet[9182]: E0130 11:48:34.884208    9182 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-yyj-test_kube-system(6220df38f6dd0b1dfc9d8f9dcc7fe593)\" with CreatePodSandboxError: \"Failed to create sandbox f>
Jan 30 11:48:34 yyj-test kubelet[9182]: E0130 11:48:34.885495    9182 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI run>
Jan 30 11:48:34 yyj-test kubelet[9182]: E0130 11:48:34.885528    9182 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime >
Jan 30 11:48:34 yyj-test kubelet[9182]: E0130 11:48:34.885546    9182 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime >
Jan 30 11:48:34 yyj-test kubelet[9182]: E0130 11:48:34.885600    9182 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-yyj-test_kube-system(eb433c640f5de58e655dfb1e718e4de6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\>
Jan 30 11:48:38 yyj-test kubelet[9182]: E0130 11:48:38.839885    9182 controller.go:146] failed to ensure lease exists, will retry in 7s, error: Get "https://apiserver.cluster.local:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/yyj-test?timeout=10s": dial tcp 192.168.6>
Jan 30 11:48:38 yyj-test kubelet[9182]: E0130 11:48:38.891557    9182 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI run>
Jan 30 11:48:38 yyj-test kubelet[9182]: E0130 11:48:38.891614    9182 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime >
Jan 30 11:48:38 yyj-test kubelet[9182]: E0130 11:48:38.891634    9182 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime >
Jan 30 11:48:38 yyj-test kubelet[9182]: E0130 11:48:38.891675    9182 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-yyj-test_kube-system(ab557e6bc427dc9e50925caee65aee8f)\" with CreatePodSandboxError: \"Failed to create >
Jan 30 11:48:39 yyj-test kubelet[9182]: I0130 11:48:39.119508    9182 kubelet_node_status.go:70] "Attempting to register node" node="yyj-test"
Jan 30 11:48:39 yyj-test kubelet[9182]: E0130 11:48:39.120536    9182 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://apiserver.cluster.local:6443/api/v1/nodes\": dial tcp 192.168.64.92:6443: connect: connection refused" node="yyj-test"
Jan 30 11:48:39 yyj-test kubelet[9182]: E0130 11:48:39.918957    9182 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI run>
Jan 30 11:48:39 yyj-test kubelet[9182]: E0130 11:48:39.919006    9182 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime >
Jan 30 11:48:39 yyj-test kubelet[9182]: E0130 11:48:39.919034    9182 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime >
Jan 30 11:48:39 yyj-test kubelet[9182]: E0130 11:48:39.919087    9182 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-yyj-test_kube-system(bcfcf21cb6796021e9e79a111d35e4fb)\" with CreatePodSandboxError: \"Failed to create sandbox f>
Jan 30 11:48:41 yyj-test kubelet[9182]: E0130 11:48:41.158119    9182 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"yyj-test.173ef907ac0f0602", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", Resource>
Jan 30 11:48:42 yyj-test kubelet[9182]: W0130 11:48:42.754964    9182 machine.go:65] Cannot read vendor id correctly, set empty.
Jan 30 11:48:44 yyj-test kubelet[9182]: E0130 11:48:44.083319    9182 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"yyj-test\" not found"
Jan 30 11:48:45 yyj-test kubelet[9182]: E0130 11:48:45.847978    9182 controller.go:146] failed to ensure lease exists, will retry in 7s, error: Get "https://apiserver.cluster.local:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/yyj-test?timeout=10s": dial tcp 192.168.6>
Jan 30 11:48:46 yyj-test kubelet[9182]: I0130 11:48:46.180178    9182 kubelet_node_status.go:70] "Attempting to register node" node="yyj-test"
Jan 30 11:48:46 yyj-test kubelet[9182]: E0130 11:48:46.181087    9182 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://apiserver.cluster.local:6443/api/v1/nodes\": dial tcp 192.168.64.92:6443: connect: connection refused" node="yyj-test"
Jan 30 11:48:46 yyj-test kubelet[9182]: E0130 11:48:46.885052    9182 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI run>
Jan 30 11:48:46 yyj-test kubelet[9182]: E0130 11:48:46.885132    9182 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime >
Jan 30 11:48:46 yyj-test kubelet[9182]: E0130 11:48:46.885152    9182 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime >
Jan 30 11:48:46 yyj-test kubelet[9182]: E0130 11:48:46.885232    9182 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-yyj-test_kube-system(6220df38f6dd0b1dfc9d8f9dcc7fe593)\" with CreatePodSandboxError: \"Failed to create sandbox f>
Jan 30 11:48:48 yyj-test kubelet[9182]: E0130 11:48:48.930163    9182 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI run>
Jan 30 11:48:48 yyj-test kubelet[9182]: E0130 11:48:48.930270    9182 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime >
Jan 30 11:48:48 yyj-test kubelet[9182]: E0130 11:48:48.930309    9182 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime >
Jan 30 11:48:48 yyj-test kubelet[9182]: E0130 11:48:48.930381    9182 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-yyj-test_kube-system(eb433c640f5de58e655dfb1e718e4de6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\>
Jan 30 11:48:50 yyj-test kubelet[9182]: E0130 11:48:50.005124    9182 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://apiserver.cluster.loc>
Jan 30 11:48:51 yyj-test kubelet[9182]: E0130 11:48:51.162675    9182 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"yyj-test.173ef907ac0f0602", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", Resource>
Jan 30 11:48:52 yyj-test kubelet[9182]: E0130 11:48:52.886110    9182 controller.go:146] failed to ensure lease exists, will retry in 7s, error: Get "https://apiserver.cluster.local:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/yyj-test?timeout=10s": dial tcp 192.168.6>
Jan 30 11:48:52 yyj-test kubelet[9182]: E0130 11:48:52.981226    9182 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI run>
Jan 30 11:48:52 yyj-test kubelet[9182]: E0130 11:48:52.981288    9182 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime >
Jan 30 11:48:52 yyj-test kubelet[9182]: E0130 11:48:52.981306    9182 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime >
Jan 30 11:48:52 yyj-test kubelet[9182]: E0130 11:48:52.981356    9182 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-yyj-test_kube-system(ab557e6bc427dc9e50925caee65aee8f)\" with CreatePodSandboxError: \"Failed to create >
Jan 30 11:48:52 yyj-test kubelet[9182]: E0130 11:48:52.985651    9182 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI run>
Jan 30 11:48:52 yyj-test kubelet[9182]: E0130 11:48:52.985684    9182 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime >
Jan 30 11:48:52 yyj-test kubelet[9182]: E0130 11:48:52.985722    9182 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime >
Jan 30 11:48:52 yyj-test kubelet[9182]: E0130 11:48:52.985751    9182 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-yyj-test_kube-system(bcfcf21cb6796021e9e79a111d35e4fb)\" with CreatePodSandboxError: \"Failed to create sandbox f>
Jan 30 11:48:53 yyj-test kubelet[9182]: I0130 11:48:53.194111    9182 kubelet_node_status.go:70] "Attempting to register node" node="yyj-test"
Jan 30 11:48:53 yyj-test kubelet[9182]: E0130 11:48:53.194398    9182 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://apiserver.cluster.local:6443/api/v1/nodes\": dial tcp 192.168.64.92:6443: connect: connection refused" node="yyj-test"
Jan 30 11:48:54 yyj-test kubelet[9182]: E0130 11:48:54.094023    9182 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"yyj-test\" not found"
Jan 30 11:48:59 yyj-test kubelet[9182]: W0130 11:48:59.013765    9182 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://apiserver.cluster.local:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.64.92:6443: connect>
Jan 30 11:48:59 yyj-test kubelet[9182]: E0130 11:48:59.013963    9182 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://apiserver.cluster.local:6443/api/v1/services?limit=500&resourceVersion=0": dial tc
willzhang commented 1 year ago

maybe releated:https://github.com/labring/sealos/issues/2447

fanux commented 1 year ago

root@sealos-offic-04:~# cat /var/lib/kubelet/config.yaml | grep cgroup
cgroupDriver: systemd

The cause is an inconsistent cgroup driver configuration; a fix is in progress.
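While the fix is pending, here is a minimal sketch of how one might check that containerd and the kubelet agree on the systemd cgroup driver. It assumes the default file locations (/etc/containerd/config.toml and /var/lib/kubelet/config.yaml); adjust the paths if your rootfs image writes them elsewhere.

```bash
# Kubelet side: this report shows cgroupDriver: systemd.
grep cgroupDriver /var/lib/kubelet/config.yaml

# Containerd side: the runc runtime options must use the same driver.
# SystemdCgroup should be true when the kubelet uses the systemd driver.
grep -A3 'runtimes.runc.options' /etc/containerd/config.toml

# If SystemdCgroup is false, align it and restart containerd so the
# static pods (etcd, kube-apiserver, ...) can be created again.
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
systemctl restart containerd
```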

SupRenekton commented 3 months ago

Has this been fixed yet? Why am I still running into this problem?
