fsckzy closed this issue 6 months ago
It looks like your containerd may not have started. Check whether containerd is reporting any errors.
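The check above can be done with a quick debugging session like the following sketch (a systemd-based host and the default unit name `containerd` are assumed; the `|| true` guards just keep the script going if a step fails):

```shell
# Check whether the containerd service is running (assumes systemd).
systemctl status containerd --no-pager || true

# Show the last 50 lines of containerd's logs to look for startup errors.
journalctl -u containerd --no-pager -n 50 || true

# The CRI socket should exist if containerd is up.
ls -l /run/containerd/containerd.sock || true
```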
apiVersion: apps.sealos.io/v1beta1
kind: Config
metadata:
  name: containerd-config
spec:
  path: etc/config.toml.tmpl
  match: registry.rootcloud.com/cloudimages/kubernetes:v1.23.10
  strategy: override
  data: |
    version = 2
    root = "{{ .criData }}"
    state = "/run/containerd"
    oom_score = 0

    [grpc]
      address = "/run/containerd/containerd.sock"
      uid = 0
      gid = 0
      max_recv_message_size = 16777216
      max_send_message_size = 16777216

    [debug]
      address = "/run/containerd/containerd-debug.sock"
      uid = 0
      gid = 0
      level = "warn"

    [timeouts]
      "io.containerd.timeout.shim.cleanup" = "5s"
      "io.containerd.timeout.shim.load" = "5s"
      "io.containerd.timeout.shim.shutdown" = "3s"
      "io.containerd.timeout.task.state" = "2s"

    [plugins]
      [plugins."io.containerd.grpc.v1.cri"]
        sandbox_image = "{{ .registryDomain }}:{{ .registryPort }}/{{ .sandboxImage }}"
        max_container_log_line_size = -1
        max_concurrent_downloads = 20
        disable_apparmor = {{ .disableApparmor }}
        [plugins."io.containerd.grpc.v1.cri".containerd]
          snapshotter = "overlayfs"
          default_runtime_name = "runc"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
            [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
              runtime_type = "io.containerd.runc.v2"
              runtime_engine = ""
              runtime_root = ""
              [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
                SystemdCgroup = false
        [plugins."io.containerd.grpc.v1.cri".registry]
          config_path = "/etc/containerd/certs.d"
          [plugins."io.containerd.grpc.v1.cri".registry.configs]
            [plugins."io.containerd.grpc.v1.cri".registry.configs."{{ .registryDomain }}:{{ .registryPort }}".auth]
              username = "{{ .registryUsername }}"
              password = "{{ .registryPassword }}"
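Since this template sets SystemdCgroup = false, the kubelet must be configured with the matching cgroupfs driver, or pods will fail to start. A minimal KubeletConfiguration fragment (the field names come from the upstream kubelet.config.k8s.io/v1beta1 API; how you deliver it to the node depends on your setup) might look like:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Must match containerd's runc options:
#   SystemdCgroup = false  ->  cgroupDriver: cgroupfs
#   SystemdCgroup = true   ->  cgroupDriver: systemd
cgroupDriver: cgroupfs
```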
Apply this configuration with the --config-file option, like sealos run --config-file=cgroupfs.yaml ...
This issue has been automatically closed because we haven't heard back for more than 60 days, please reopen this issue if necessary.
I ran into the same problem as you. Has it been solved? I am also building a k8s cluster on Kylin SP3 machines: the static pods cannot start, and kubelet reports an error that the master01 node cannot be found. If you have a solution, please let me know. Thanks.
Sealos Version
4.3.0
How to reproduce the bug?
What is the expected behavior?
What do you see instead?
Operating environment
Additional information
No response