Closed — caixiaomao closed this issue 2 years ago
If you're on CentOS, update to the image for the latest k8s version.
The operating system is Debian 11.
Take a look at the kubelet status.
Post the image-cri-shim logs.
It's best not to use a public registry like k8s.gcr.io for cluster images; in a private deployment, once disk usage goes above 85% the images get garbage-collected and you're stuck.
It shouldn't matter: even though the image keeps that name, it is actually pulled from the private registry.
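For context, the 85% figure mentioned above matches kubelet's default image garbage-collection thresholds; a sketch of the relevant KubeletConfiguration fields (values shown are the upstream defaults, not taken from this cluster):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# When disk usage crosses the high threshold, kubelet garbage-collects
# unused images until usage drops below the low threshold.
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
```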
Take a look at the kubelet status.

It didn't come up.
journalctl -xe --no-pager -u kubelet
Ran it, but there was no log output at all, so I had nothing to paste 😢
systemctl cat --no-pager kubelet
Try simulating it by hand: start kubelet in the foreground from the command line and see what error it prints.
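A minimal sketch of that manual foreground start (the unit file contents and flags below are typical kubeadm-style defaults used for illustration, not taken from this machine): recover the exact command systemd runs, then run it directly so errors land on the terminal.

```shell
# Sketch: extract the ExecStart command from a kubelet unit file.
# On the real host, inspect the actual unit with: systemctl cat --no-pager kubelet
unit="$(mktemp)"
cat > "$unit" <<'EOF'
[Service]
ExecStart=/usr/bin/kubelet --config=/var/lib/kubelet/config.yaml --v=2
EOF

# Strip the "ExecStart=" prefix to get a runnable command line.
cmd="$(sed -n 's/^ExecStart=//p' "$unit")"
echo "$cmd"

# On the real host you would now run it in the foreground:
#   sudo $cmd
```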
It looks like the firewall port for the image registry hadn't been opened; after opening it there are other errors. Logs follow. systemctl status kubelet:
journalctl -xe --no-pager -u kubelet:
Aug 01 15:10:31 tencent-81 kubelet[503854]: E0801 15:10:31.171872 503854 kubelet.go:2424] "Error getting node" err="node \"tencent-81\" not found"
[... repeated "Error getting node" err="node \"tencent-81\" not found" lines omitted ...]
Aug 01 15:10:32 tencent-81 kubelet[503854]: I0801 15:10:32.606025 503854 kubelet_node_status.go:70] "Attempting to register node" node="tencent-81"
[... repeated "Error getting node" lines omitted ...]
Aug 01 15:10:33 tencent-81 kubelet[503854]: I0801 15:10:33.749054 503854 scope.go:110] "RemoveContainer" containerID="bbc0f892a7af1b6be9bb815ab22d001aaec933f17e89f8228f607ffcecae1d97"
Aug 01 15:10:33 tencent-81 kubelet[503854]: E0801 15:10:33.749346 503854 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=etcd pod=etcd-tencent-81_kube-system(6d93d9c376f28acdcdad3369076be65d)\"" pod="kube-system/etcd-tencent-81" podUID=6d93d9c376f28acdcdad3369076be65d
[... repeated "Error getting node" lines omitted ...]
Aug 01 15:10:34 tencent-81 kubelet[503854]: E0801 15:10:34.103739 503854 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 01 15:10:34 tencent-81 kubelet[503854]: E0801 15:10:34.188260 503854 kubelet.go:2424] "Error getting node" err="node \"tencent-81\" not found"
Aug 01 15:10:34 tencent-81 kubelet[503854]: I0801 15:10:34.237688 503854 scope.go:110] "RemoveContainer" containerID="d1a9f0efc51d530f0860c5133e1df39636584f316313d2ef9b575012933dd29e"
Aug 01 15:10:34 tencent-81 kubelet[503854]: E0801 15:10:34.238187 503854 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-tencent-81_kube-system(d8e612d792c5144b9d433b6dac02724d)\"" pod="kube-system/kube-apiserver-tencent-81" podUID=d8e612d792c5144b9d433b6dac02724d
Aug 01 15:10:34 tencent-81 kubelet[503854]: E0801 15:10:34.288983 503854 kubelet.go:2424] "Error getting node" err="node \"tencent-81\" not found"
Aug 01 15:10:34 tencent-81 kubelet[503854]: I0801 15:10:34.325742 503854 scope.go:110] "RemoveContainer" containerID="bbc0f892a7af1b6be9bb815ab22d001aaec933f17e89f8228f607ffcecae1d97"
Aug 01 15:10:34 tencent-81 kubelet[503854]: E0801 15:10:34.326059 503854 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=etcd pod=etcd-tencent-81_kube-system(6d93d9c376f28acdcdad3369076be65d)\"" pod="kube-system/etcd-tencent-81" podUID=6d93d9c376f28acdcdad3369076be65d
[... repeated "Error getting node" lines omitted ...]
Aug 01 15:10:36 tencent-81 kubelet[503854]: E0801 15:10:36.518218 503854 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://apiserver.cluster.local:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/tencent-81?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
[... repeated "Error getting node" lines omitted ...]
Aug 01 15:10:39 tencent-81 kubelet[503854]: E0801 15:10:39.024683 503854 eviction_manager.go:254] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"tencent-81\" not found"
Aug 01 15:10:39 tencent-81 kubelet[503854]: E0801 15:10:39.104101 503854 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
[... repeated "Error getting node" lines omitted ...]
Aug 01 15:10:44 tencent-81 kubelet[503854]: E0801 15:10:44.105461 503854 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
[... repeated "Error getting node" lines omitted ...]
Aug 01 15:10:46 tencent-81 kubelet[503854]: I0801 15:10:46.237284 503854 scope.go:110] "RemoveContainer" containerID="d1a9f0efc51d530f0860c5133e1df39636584f316313d2ef9b575012933dd29e"
Aug 01 15:10:46 tencent-81 kubelet[503854]: E0801 15:10:46.237760 503854 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-tencent-81_kube-system(d8e612d792c5144b9d433b6dac02724d)\"" pod="kube-system/kube-apiserver-tencent-81" podUID=d8e612d792c5144b9d433b6dac02724d
[... repeated "Error getting node" lines omitted ...]
Aug 01 15:10:48 tencent-81 kubelet[503854]: W0801 15:10:48.275577 503854 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list v1.CSIDriver: Get "https://apiserver.cluster.local:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 81.70.253.133:6443: i/o timeout
Aug 01 15:10:48 tencent-81 kubelet[503854]: I0801 15:10:48.275664 503854 trace.go:205] Trace[370566960]: "Reflector ListAndWatch" name:vendor/k8s.io/client-go/informers/factory.go:134 (01-Aug-2022 15:10:18.274) (total time: 30001ms):
Aug 01 15:10:48 tencent-81 kubelet[503854]: Trace[370566960]: ---"Objects listed" error:Get "https://apiserver.cluster.local:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 81.70.253.133:6443: i/o timeout 30001ms (15:10:48.275)
Aug 01 15:10:48 tencent-81 kubelet[503854]: Trace[370566960]: [30.001431279s] [30.001431279s] END
Aug 01 15:10:48 tencent-81 kubelet[503854]: E0801 15:10:48.275676 503854 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch v1.CSIDriver: failed to list v1.CSIDriver: Get "https://apiserver.cluster.local:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 81.70.253.133:6443: i/o timeout
[... repeated "Error getting node" lines omitted ...]
Aug 01 15:10:49 tencent-81 kubelet[503854]: E0801 15:10:49.025041 503854 eviction_manager.go:254] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"tencent-81\" not found"
Aug 01 15:10:49 tencent-81 kubelet[503854]: E0801 15:10:49.075457 503854 kubelet.go:2424] "Error getting node" err="node \"tencent-81\" not found"
Aug 01 15:10:49 tencent-81 kubelet[503854]: E0801 15:10:49.106161 503854 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 01 15:10:49 tencent-81 kubelet[503854]: E0801 15:10:49.175688 503854 kubelet.go:2424] "Error getting node" err="node \"tencent-81\" not found"
Aug 01 15:10:49 tencent-81 kubelet[503854]: I0801 15:10:49.237050 503854 scope.go:110] "RemoveContainer" containerID="bbc0f892a7af1b6be9bb815ab22d001aaec933f17e89f8228f607ffcecae1d97"
Aug 01 15:10:49 tencent-81 kubelet[503854]: E0801 15:10:49.237349 503854 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=etcd pod=etcd-tencent-81_kube-system(6d93d9c376f28acdcdad3369076be65d)\"" pod="kube-system/etcd-tencent-81" podUID=6d93d9c376f28acdcdad3369076be65d
Aug 01 15:10:49 tencent-81 kubelet[503854]: E0801 15:10:49.276389 503854 kubelet.go:2424] "Error getting node" err="node \"tencent-81\" not found"
Aug 01 15:10:49 tencent-81 kubelet[503854]: E0801 15:10:49.377076 503854 kubelet.go:2424] "Error getting node" err="node \"tencent-81\" not found"
Aug 01 15:10:49 tencent-81 kubelet[503854]: E0801 15:10:49.477673 503854 kubelet.go:2424] "Error getting node" err="node \"tencent-81\" not found"
Aug 01 15:10:49 tencent-81 kubelet[503854]: E0801 15:10:49.578135 503854 kubelet.go:2424] "Error getting node" err="node \"tencent-81\" not found"
Aug 01 15:10:49 tencent-81 kubelet[503854]: E0801 15:10:49.678605 503854 kubelet.go:2424] "Error getting node" err="node \"tencent-81\" not found"
Aug 01 15:10:49 tencent-81 kubelet[503854]: E0801 15:10:49.779106 503854 kubelet.go:2424] "Error getting node" err="node \"tencent-81\" not found"
Aug 01 15:10:49 tencent-81 kubelet[503854]: E0801 15:10:49.879195 503854 kubelet.go:2424] "Error getting node" err="node \"tencent-81\" not found"
Aug 01 15:10:49 tencent-81 kubelet[503854]: E0801 15:10:49.979667 503854 kubelet.go:2424] "Error getting node" err="node \"tencent-81\" not found"
Aug 01 15:10:50 tencent-81 kubelet[503854]: E0801 15:10:50.031970 503854 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://apiserver.cluster.local:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 81.70.253.133:6443: i/o timeout
Aug 01 15:10:50 tencent-81 kubelet[503854]: E0801 15:10:50.080284 503854 kubelet.go:2424] "Error getting node" err="node \"tencent-81\" not found"
Aug 01 15:10:50 tencent-81 kubelet[503854]: E0801 15:10:50.181250 503854 kubelet.go:2424] "Error getting node" err="node \"tencent-81\" not found"
Aug 01 15:10:50 tencent-81 kubelet[503854]: E0801 15:10:50.282145 503854 kubelet.go:2424] "Error getting node" err="node \"tencent-81\" not found"
Aug 01 15:10:50 tencent-81 kubelet[503854]: E0801 15:10:50.383064 503854 kubelet.go:2424] "Error getting node" err="node \"tencent-81\" not found"
Aug 01 15:10:50 tencent-81 kubelet[503854]: E0801 15:10:50.483679 503854 kubelet.go:2424] "Error getting node" err="node \"tencent-81\" not found"
Aug 01 15:10:50 tencent-81 kubelet[503854]: E0801 15:10:50.584111 503854 kubelet.go:2424] "Error getting node" err="node \"tencent-81\" not found"
Aug 01 15:10:50 tencent-81 kubelet[503854]: E0801 15:10:50.684606 503854 kubelet.go:2424] "Error getting node" err="node \"tencent-81\" not found"
Aug 01 15:10:50 tencent-81 kubelet[503854]: E0801 15:10:50.785115 503854 kubelet.go:2424] "Error getting node" err="node \"tencent-81\" not found"
Aug 01 15:10:50 tencent-81 kubelet[503854]: E0801 15:10:50.885865 503854 kubelet.go:2424] "Error getting node" err="node \"tencent-81\" not found"
Aug 01 15:10:50 tencent-81 kubelet[503854]: E0801 15:10:50.986488 503854 kubelet.go:2424] "Error getting node" err="node \"tencent-81\" not found"
Aug 01 15:10:51 tencent-81 kubelet[503854]: E0801 15:10:51.086981 503854 kubelet.go:2424] "Error getting node" err="node \"tencent-81\" not found"
Aug 01 15:10:51 tencent-81 kubelet[503854]: E0801 15:10:51.146246 503854 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"tencent-81.1707265abc89fa66", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:
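The repeated `dial tcp 81.70.253.133:6443: i/o timeout` lines above mean the kubelet cannot even open a TCP connection to the apiserver endpoint. A minimal connectivity probe for debugging that (a generic sketch, not a sealos or kubelet tool; the host/port values in the comments are taken from the log above) could look like:

```python
import socket

def tcp_reachable(host, port, timeout=3):
    """Try to open a plain TCP connection, mirroring the failing
    'dial tcp' step in the kubelet log above."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Endpoints from the log above (run this on the affected node):
# print(tcp_reachable("apiserver.cluster.local", 6443))
# print(tcp_reachable("81.70.253.133", 6443))
```

If this returns False for both names, the problem is below Kubernetes (firewall, security group, or the apiserver container not listening), which matches the etcd findings later in this thread.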
The sealos installation log is as follows:
$ sealos run labring/kubernetes:v1.24.3 --masters 81.70.253.133 --pk /home/tools/sealos/caixiaomao
2022-08-01 14:44:09 [INFO] Start to create a new cluster: master [81.70.253.133], worker []
2022-08-01 14:44:09 [INFO] Executing pipeline Check in CreateProcessor.
2022-08-01 14:44:09 [INFO] checker:hostname [81.70.253.133:22]
2022-08-01 14:44:09 [INFO] checker:timeSync [81.70.253.133:22]
2022-08-01 14:44:09 [INFO] Executing pipeline PreProcess in CreateProcessor.
32b8fcc13f968a4054ad0e69beaa1b6e35c74e8bed6494e2af64cfbcd89db884
default-elgynbib
2022-08-01 14:44:10 [INFO] Executing pipeline RunConfig in CreateProcessor.
2022-08-01 14:44:10 [INFO] Executing pipeline MountRootfs in CreateProcessor.
81.70.253.133:22: INFO [2022-08-01 14:54:41] >> check root,port,cri success
81.70.253.133:22: Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /etc/systemd/system/containerd.service.
81.70.253.133:22: INFO [2022-08-01 14:54:43] >> Health check containerd!
81.70.253.133:22: INFO [2022-08-01 14:54:44] >> containerd is running
81.70.253.133:22: INFO [2022-08-01 14:54:44] >> init containerd success
81.70.253.133:22: Created symlink /etc/systemd/system/multi-user.target.wants/image-cri-shim.service → /etc/systemd/system/image-cri-shim.service.
81.70.253.133:22: INFO [2022-08-01 14:54:44] >> Health check image-cri-shim!
81.70.253.133:22: INFO [2022-08-01 14:54:44] >> image-cri-shim is running
81.70.253.133:22: INFO [2022-08-01 14:54:44] >> init shim success
81.70.253.133:22: Applying /usr/lib/sysctl.d/50-pid-max.conf ...
81.70.253.133:22: kernel.pid_max = 4194304
81.70.253.133:22: Applying /etc/sysctl.d/99-sysctl.conf ...
81.70.253.133:22: kernel.core_uses_pid = 1
81.70.253.133:22: net.ipv4.ip_forward = 0
81.70.253.133:22: net.ipv6.conf.all.forwarding = 0
81.70.253.133:22: net.ipv4.conf.default.rp_filter = 1
81.70.253.133:22: kernel.msgmnb = 65536
81.70.253.133:22: kernel.msgmax = 65536
81.70.253.133:22: net.ipv4.conf.default.accept_source_route = 0
81.70.253.133:22: net.ipv4.tcp_syncookies = 1
81.70.253.133:22: net.ipv6.conf.all.disable_ipv6 = 0
81.70.253.133:22: net.ipv6.conf.default.disable_ipv6 = 0
81.70.253.133:22: net.ipv6.conf.lo.disable_ipv6 = 0
81.70.253.133:22: net.ipv4.conf.all.promote_secondaries = 1
81.70.253.133:22: net.ipv4.conf.default.promote_secondaries = 1
81.70.253.133:22: net.ipv6.neigh.default.gc_thresh3 = 4096
81.70.253.133:22: net.ipv4.neigh.default.gc_thresh3 = 4096
81.70.253.133:22: kernel.printk = 5 4 1 7
81.70.253.133:22: kernel.softlockup_panic = 1
81.70.253.133:22: kernel.sysrq = 1
81.70.253.133:22: Applying /etc/sysctl.d/k8s.conf ...
81.70.253.133:22: net.bridge.bridge-nf-call-ip6tables = 1
81.70.253.133:22: net.bridge.bridge-nf-call-iptables = 1
81.70.253.133:22: net.ipv4.conf.all.rp_filter = 0
81.70.253.133:22: Applying /usr/lib/sysctl.d/protect-links.conf ...
81.70.253.133:22: fs.protected_fifos = 1
81.70.253.133:22: fs.protected_hardlinks = 1
81.70.253.133:22: fs.protected_regular = 2
81.70.253.133:22: fs.protected_symlinks = 1
81.70.253.133:22: * Applying /etc/sysctl.conf ...
81.70.253.133:22: kernel.core_uses_pid = 1
81.70.253.133:22: net.ipv4.ip_forward = 0
81.70.253.133:22: net.ipv6.conf.all.forwarding = 0
81.70.253.133:22: net.ipv4.conf.default.rp_filter = 1
81.70.253.133:22: kernel.msgmnb = 65536
81.70.253.133:22: kernel.msgmax = 65536
81.70.253.133:22: net.ipv4.conf.default.accept_source_route = 0
81.70.253.133:22: net.ipv4.tcp_syncookies = 1
81.70.253.133:22: net.ipv6.conf.all.disable_ipv6 = 0
81.70.253.133:22: net.ipv6.conf.default.disable_ipv6 = 0
81.70.253.133:22: net.ipv6.conf.lo.disable_ipv6 = 0
81.70.253.133:22: net.ipv4.conf.all.promote_secondaries = 1
81.70.253.133:22: net.ipv4.conf.default.promote_secondaries = 1
81.70.253.133:22: net.ipv6.neigh.default.gc_thresh3 = 4096
81.70.253.133:22: net.ipv4.neigh.default.gc_thresh3 = 4096
81.70.253.133:22: kernel.printk = 5 4 1 7
81.70.253.133:22: kernel.softlockup_panic = 1
81.70.253.133:22: kernel.sysrq = 1
81.70.253.133:22: net.ipv4.ip_forward = 1
81.70.253.133:22: INFO [2022-08-01 14:54:44] >> init kube success
81.70.253.133:22: INFO [2022-08-01 14:54:44] >> init containerd rootfs success
2022-08-01 14:54:44 [INFO] Executing pipeline Init in CreateProcessor.
2022-08-01 14:54:44 [INFO] start to copy kubeadm config to master0
2022-08-01 14:54:45 [INFO] start to generate cert and kubeConfig...
2022-08-01 14:54:45 [INFO] start to generator cert and copy to masters...
2022-08-01 14:54:45 [INFO] apiserver altNames : {map[apiserver.cluster.local:apiserver.cluster.local kubernetes:kubernetes kubernetes.default:kubernetes.default kubernetes.default.svc:kubernetes.default.svc kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local localhost:localhost tencent-81:tencent-81] map[10.103.97.2:10.103.97.2 10.96.0.1:10.96.0.1 127.0.0.1:127.0.0.1 81.70.253.133:81.70.253.133]}
2022-08-01 14:54:45 [INFO] Etcd altnames : {map[localhost:localhost tencent-81:tencent-81] map[127.0.0.1:127.0.0.1 81.70.253.133:81.70.253.133 ::1:::1]}, commonName : tencent-81
2022-08-01 14:54:47 [INFO] start to copy etc pki files to masters
2022-08-01 14:54:52 [INFO] start to create kubeconfig...
2022-08-01 14:54:53 [INFO] start to copy kubeconfig files to masters
2022-08-01 14:54:54 [INFO] start to copy static files to masters
2022-08-01 14:54:54 [INFO] start to apply registry
81.70.253.133:22: unpacking docker.io/library/registry:2.7.1 (sha256:49bd6b1420deba16b51bd073977ea6ae4000b816a356b12e805a699c4e5d3dba)...done
81.70.253.133:22: b138a56f4ff30fcb419f3a83a3fc418bfb12f10dd9d40abf03b55da1e60397a5
81.70.253.133:22: INFO [2022-08-01 14:54:56] >> init registry success
2022-08-01 14:54:56 [INFO] start to init master0...
2022-08-01 14:54:56 [INFO] registry auth in node 81.70.253.133:22
81.70.253.133:22: 2022-08-01T14:54:56 info domain sealos.hub:81.70.253.133 append success
81.70.253.133:22: 2022-08-01T14:54:56 info domain apiserver.cluster.local:81.70.253.133 append success
81.70.253.133:22: W0801 14:54:57.098553 502918 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
81.70.253.133:22: [init] Using Kubernetes version: v1.24.3
81.70.253.133:22: [preflight] Running pre-flight checks
81.70.253.133:22: [WARNING FileExisting-socat]: socat not found in system path
81.70.253.133:22: [WARNING SystemVerification]: missing optional cgroups: blkio
81.70.253.133:22: [preflight] Pulling images required for setting up a Kubernetes cluster
81.70.253.133:22: [preflight] This might take a minute or two, depending on the speed of your internet connection
81.70.253.133:22: [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
81.70.253.133:22: [certs] Using certificateDir folder "/etc/kubernetes/pki"
81.70.253.133:22: [certs] Using existing ca certificate authority
81.70.253.133:22: [certs] Using existing apiserver certificate and key on disk
81.70.253.133:22: [certs] Using existing apiserver-kubelet-client certificate and key on disk
81.70.253.133:22: [certs] Using existing front-proxy-ca certificate authority
81.70.253.133:22: [certs] Using existing front-proxy-client certificate and key on disk
81.70.253.133:22: [certs] Using existing etcd/ca certificate authority
81.70.253.133:22: [certs] Using existing etcd/server certificate and key on disk
81.70.253.133:22: [certs] Using existing etcd/peer certificate and key on disk
81.70.253.133:22: [certs] Using existing etcd/healthcheck-client certificate and key on disk
81.70.253.133:22: [certs] Using existing apiserver-etcd-client certificate and key on disk
81.70.253.133:22: [certs] Using the existing "sa" key
81.70.253.133:22: [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
81.70.253.133:22: [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
81.70.253.133:22: [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
81.70.253.133:22: W0801 14:59:17.879291 502918 kubeconfig.go:249] a kubeconfig file "/etc/kubernetes/controller-manager.conf" exists already but has an unexpected API Server URL: expected: https://81.70.253.133:6443, got: https://apiserver.cluster.local:6443
81.70.253.133:22: [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
81.70.253.133:22: W0801 14:59:18.010543 502918 kubeconfig.go:249] a kubeconfig file "/etc/kubernetes/scheduler.conf" exists already but has an unexpected API Server URL: expected: https://81.70.253.133:6443, got: https://apiserver.cluster.local:6443
81.70.253.133:22: [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
81.70.253.133:22: [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
81.70.253.133:22: [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
81.70.253.133:22: [kubelet-start] Starting the kubelet
81.70.253.133:22: [control-plane] Using manifest folder "/etc/kubernetes/manifests"
81.70.253.133:22: [control-plane] Creating static Pod manifest for "kube-apiserver"
81.70.253.133:22: [control-plane] Creating static Pod manifest for "kube-controller-manager"
81.70.253.133:22: [control-plane] Creating static Pod manifest for "kube-scheduler"
81.70.253.133:22: [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
81.70.253.133:22: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
81.70.253.133:22: [kubelet-check] Initial timeout of 40s passed.
81.70.253.133:22:
81.70.253.133:22: Unfortunately, an error has occurred:
81.70.253.133:22: timed out waiting for the condition
81.70.253.133:22:
81.70.253.133:22: This error is likely caused by:
81.70.253.133:22: - The kubelet is not running
81.70.253.133:22: - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
81.70.253.133:22:
81.70.253.133:22: If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
81.70.253.133:22: - 'systemctl status kubelet'
81.70.253.133:22: - 'journalctl -xeu kubelet'
81.70.253.133:22:
81.70.253.133:22: Additionally, a control plane component may have crashed or exited when started by the container runtime.
81.70.253.133:22: To troubleshoot, list all containers using your preferred container runtimes CLI.
81.70.253.133:22: Here is one example how you may list all running Kubernetes containers by using crictl:
81.70.253.133:22: - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
81.70.253.133:22: Once you have found the failing container, you can inspect its logs with:
81.70.253.133:22: - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
81.70.253.133:22: error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
81.70.253.133:22: To see the stack trace of this error execute with --v=5 or higher
2022-08-01 15:03:47 [EROR] Applied to cluster error: failed to init init master0 failed, error: failed to execute command(kubeadm init --config=/var/lib/sealos/data/default/etc/kubeadm-init.yaml --skip-certificate-key-print --skip-token-print -v 0 --ignore-preflight-errors=SystemVerification) on host(81.70.253.133:22): output(W0801 14:54:57.098553 502918 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[init] Using Kubernetes version: v1.24.3
[preflight] Running pre-flight checks
[WARNING FileExisting-socat]: socat not found in system path
[WARNING SystemVerification]: missing optional cgroups: blkio
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
W0801 14:59:17.879291 502918 kubeconfig.go:249] a kubeconfig file "/etc/kubernetes/controller-manager.conf" exists already but has an unexpected API Server URL: expected: https://81.70.253.133:6443, got: https://apiserver.cluster.local:6443
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
W0801 14:59:18.010543 502918 kubeconfig.go:249] a kubeconfig file "/etc/kubernetes/scheduler.conf" exists already but has an unexpected API Server URL: expected: https://81.70.253.133:6443, got: https://apiserver.cluster.local:6443
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Run both crictl images and crictl ps -a and post the output.
etcd failed to start; the errors in its log are as follows:
$ crictl logs 5c56895a3c89f
{"level":"info","ts":"2022-08-01T12:24:12.254Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://81.70.253.133:2379","--cert-file=/etc/kubernetes/pki/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/etcd","--experimental-initial-corrupt-check=true","--initial-advertise-peer-urls=https://81.70.253.133:2380","--initial-cluster=tencent-81=https://81.70.253.133:2380","--key-file=/etc/kubernetes/pki/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://81.70.253.133:2379","--listen-metrics-urls=http://0.0.0.0:2381","--listen-peer-urls=https://81.70.253.133:2380","--name=tencent-81","--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/etc/kubernetes/pki/etcd/peer.key","--peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt","--snapshot-count=10000","--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt"]}
{"level":"info","ts":"2022-08-01T12:24:12.254Z","caller":"embed/etcd.go:131","msg":"configuring peer listeners","listen-peer-urls":["https://81.70.253.133:2380"]}
{"level":"info","ts":"2022-08-01T12:24:12.254Z","caller":"embed/etcd.go:479","msg":"starting with peer TLS","tls-info":"cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, client-cert=, client-key=, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2022-08-01T12:24:12.254Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"tencent-81","data-dir":"/var/lib/etcd","advertise-peer-urls":["https://81.70.253.133:2380"],"advertise-client-urls":["https://81.70.253.133:2379"]}
{"level":"info","ts":"2022-08-01T12:24:12.254Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"tencent-81","data-dir":"/var/lib/etcd","advertise-peer-urls":["https://81.70.253.133:2380"],"advertise-client-urls":["https://81.70.253.133:2379"]}
{"level":"warn","ts":"2022-08-01T12:24:12.255Z","caller":"etcdmain/etcd.go:146","msg":"failed to start etcd","error":"listen tcp 81.70.253.133:2380: bind: cannot assign requested address"}
{"level":"fatal","ts":"2022-08-01T12:24:12.255Z","caller":"etcdmain/etcd.go:204","msg":"discovery failed","error":"listen tcp 81.70.253.133:2380: bind: cannot assign requested address","stacktrace":"go.etcd.io/etcd/server/v3/etcdmain.startEtcdOrProxyV2\n\t/go/src/go.etcd.io/etcd/release/etcd/server/etcdmain/etcd.go:204\ngo.etcd.io/etcd/server/v3/etcdmain.Main\n\t/go/src/go.etcd.io/etcd/release/etcd/server/etcdmain/main.go:40\nmain.main\n\t/go/src/go.etcd.io/etcd/release/etcd/server/main.go:32\nruntime.main\n\t/go/gos/go1.16.15/src/runtime/proc.go:225"}
I did some research: cloud providers' public IPs can have this kind of problem, and etcd is currently listening on the public IP. Could that be the cause?
On public clouds the public IP is NATed and is not configured on the NIC, so the generated etcd configuration is indeed the problem.
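The `bind: cannot assign requested address` failure in the etcd log is exactly what the kernel returns when a process tries to listen on an IP that is not configured on any local interface, which is the NATed-public-IP situation described here. A tiny generic illustration (not part of etcd; `192.0.2.1` is a TEST-NET address used as a stand-in for an address not present on the host):

```python
import errno
import socket

def can_bind(ip, port=0):
    """Can this host open a listening socket on `ip`? etcd's
    --listen-peer-urls setup fails in exactly this way when given
    a NATed public address."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((ip, port))
        return True, None
    except OSError as e:
        return False, errno.errorcode.get(e.errno, str(e.errno))
    finally:
        s.close()

print(can_bind("127.0.0.1"))  # (True, None): loopback is always local
print(can_bind("192.0.2.1"))  # False; on Linux the errno is EADDRNOTAVAIL
```

Running the same check against 81.70.253.133 on the affected node would reproduce etcd's error, confirming the address is not on the NIC.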
OK, I'll look into how to fix it. Thanks!
So how was this solved?
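The thread does not record the final fix, but the diagnosis above implies creating the cluster with the address actually configured on the NIC (the private IP) rather than the NATed public one. One common way to discover that address programmatically (a sketch; the UDP "connect" sends no packets and only consults the routing table, and it assumes a default route exists):

```python
import socket

def primary_interface_ip():
    """Address this host would use to reach the internet; on a NATed
    cloud VM this is the bindable private IP, not the public one."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))  # no traffic is sent for a UDP connect
        return s.getsockname()[0]
    finally:
        s.close()

# One would then pass this address instead of the public 81.70.253.133, e.g.:
#   sealos run labring/kubernetes:v1.24.3 --masters <private-ip> --pk ...
```

This is only a plausible workaround consistent with the NAT diagnosis, not a fix confirmed by the participants here.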
Which command or component sealos run labring/kubernetes:v1.24.3 --masters 81.70.253.133 --pk /home/tools/sealos/caixiaomao
The Description of the question 详细日志 sealos.log $ sealos run labring/kubernetes:v1.24.3 --masters 81.70.253.133 --pk /home/tools/sealos/caixiaomao 2022-07-31 16:55:59 [INFO] Start to create a new cluster: master [81.70.253.133], worker [] 2022-07-31 16:55:59 [INFO] Executing pipeline Check in CreateProcessor. 2022-07-31 16:55:59 [INFO] checker:hostname [81.70.253.133:22] 2022-07-31 16:55:59 [INFO] checker:timeSync [81.70.253.133:22] 2022-07-31 16:56:00 [INFO] Executing pipeline PreProcess in CreateProcessor. 32b8fcc13f968a4054ad0e69beaa1b6e35c74e8bed6494e2af64cfbcd89db884 default-wcuqxmyp 2022-07-31 16:56:00 [INFO] Executing pipeline RunConfig in CreateProcessor. 2022-07-31 16:56:00 [INFO] Executing pipeline MountRootfs in CreateProcessor. 81.70.253.133:22: INFO [2022-07-31 17:06:35] >> check root,port,cri success
81.70.253.133:22: Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /etc/systemd/system/containerd.service. 81.70.253.133:22: INFO [2022-07-31 17:06:37] >> Health check containerd! 81.70.253.133:22: INFO [2022-07-31 17:06:37] >> containerd is running 81.70.253.133:22: INFO [2022-07-31 17:06:37] >> init containerd success 81.70.253.133:22: Created symlink /etc/systemd/system/multi-user.target.wants/image-cri-shim.service → /etc/systemd/system/image-cri-shim.service. 81.70.253.133:22: INFO [2022-07-31 17:06:37] >> Health check image-cri-shim! 81.70.253.133:22: INFO [2022-07-31 17:06:37] >> image-cri-shim is running 81.70.253.133:22: INFO [2022-07-31 17:06:37] >> init shim success 81.70.253.133:22: Applying /usr/lib/sysctl.d/50-pid-max.conf ... 81.70.253.133:22: kernel.pid_max = 4194304 81.70.253.133:22: Applying /etc/sysctl.d/99-sysctl.conf ... 81.70.253.133:22: kernel.core_uses_pid = 1 81.70.253.133:22: net.ipv4.ip_forward = 0 81.70.253.133:22: net.ipv6.conf.all.forwarding = 0 81.70.253.133:22: net.ipv4.conf.default.rp_filter = 1 81.70.253.133:22: kernel.msgmnb = 65536 81.70.253.133:22: kernel.msgmax = 65536 81.70.253.133:22: net.ipv4.conf.default.accept_source_route = 0 81.70.253.133:22: net.ipv4.tcp_syncookies = 1 81.70.253.133:22: net.ipv6.conf.all.disable_ipv6 = 0 81.70.253.133:22: net.ipv6.conf.default.disable_ipv6 = 0 81.70.253.133:22: net.ipv6.conf.lo.disable_ipv6 = 0 81.70.253.133:22: net.ipv4.conf.all.promote_secondaries = 1 81.70.253.133:22: net.ipv4.conf.default.promote_secondaries = 1 81.70.253.133:22: net.ipv6.neigh.default.gc_thresh3 = 4096 81.70.253.133:22: net.ipv4.neigh.default.gc_thresh3 = 4096 81.70.253.133:22: kernel.printk = 5 4 1 7 81.70.253.133:22: kernel.softlockup_panic = 1 81.70.253.133:22: kernel.sysrq = 1 81.70.253.133:22: Applying /etc/sysctl.d/k8s.conf ... 
81.70.253.133:22: net.bridge.bridge-nf-call-ip6tables = 1 81.70.253.133:22: net.bridge.bridge-nf-call-iptables = 1 81.70.253.133:22: net.ipv4.conf.all.rp_filter = 0 81.70.253.133:22: Applying /usr/lib/sysctl.d/protect-links.conf ... 81.70.253.133:22: fs.protected_fifos = 1 81.70.253.133:22: fs.protected_hardlinks = 1 81.70.253.133:22: fs.protected_regular = 2 81.70.253.133:22: fs.protected_symlinks = 1 81.70.253.133:22: * Applying /etc/sysctl.conf ... 81.70.253.133:22: kernel.core_uses_pid = 1 81.70.253.133:22: net.ipv4.ip_forward = 0 81.70.253.133:22: net.ipv6.conf.all.forwarding = 0 81.70.253.133:22: net.ipv4.conf.default.rp_filter = 1 81.70.253.133:22: kernel.msgmnb = 65536 81.70.253.133:22: kernel.msgmax = 65536 81.70.253.133:22: net.ipv4.conf.default.accept_source_route = 0 81.70.253.133:22: net.ipv4.tcp_syncookies = 1 81.70.253.133:22: net.ipv6.conf.all.disable_ipv6 = 0 81.70.253.133:22: net.ipv6.conf.default.disable_ipv6 = 0 81.70.253.133:22: net.ipv6.conf.lo.disable_ipv6 = 0 81.70.253.133:22: net.ipv4.conf.all.promote_secondaries = 1 81.70.253.133:22: net.ipv4.conf.default.promote_secondaries = 1 81.70.253.133:22: net.ipv6.neigh.default.gc_thresh3 = 4096 81.70.253.133:22: net.ipv4.neigh.default.gc_thresh3 = 4096 81.70.253.133:22: kernel.printk = 5 4 1 7 81.70.253.133:22: kernel.softlockup_panic = 1 81.70.253.133:22: kernel.sysrq = 1 81.70.253.133:22: net.ipv4.ip_forward = 1 81.70.253.133:22: INFO [2022-07-31 17:06:37] >> init kube success 81.70.253.133:22: INFO [2022-07-31 17:06:37] >> init containerd rootfs success 2022-07-31 17:06:37 [INFO] Executing pipeline Init in CreateProcessor. 2022-07-31 17:06:37 [INFO] start to copy kubeadm config to master0 2022-07-31 17:06:38 [INFO] start to generate cert and kubeConfig... 5 it/s) 2022-07-31 17:06:38 [INFO] start to generator cert and copy to masters... 
2022-07-31 17:06:39 [INFO] apiserver altNames : {map[apiserver.cluster.local:apiserver.cluster.local kubernetes:kubernetes kubernetes.default:kubernetes.default kubernetes.default.svc:kubernetes.default.svc kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local localhost:localhost tencent-81:tencent-81] map[10.103.97.2:10.103.97.2 10.96.0.1:10.96.0.1 127.0.0.1:127.0.0.1 81.70.253.133:81.70.253.133]} 2022-07-31 17:06:39 [INFO] Etcd altnames : {map[localhost:localhost tencent-81:tencent-81] map[127.0.0.1:127.0.0.1 81.70.253.133:81.70.253.133 ::1:::1]}, commonName : tencent-81 2022-07-31 17:06:40 [INFO] start to copy etc pki files to masters 2022-07-31 17:06:45 [INFO] start to create kubeconfig...
2022-07-31 17:06:46 [INFO] start to copy kubeconfig files to masters 2022-07-31 17:06:48 [INFO] start to copy static files to masters 4 it/s) 2022-07-31 17:06:48 [INFO] start to apply registry 81.70.253.133:22: unpacking docker.io/library/registry:2.7.1 (sha256:49bd6b1420deba16b51bd073977ea6ae4000b816a356b12e805a699c4e5d3dba)...done 81.70.253.133:22: a760590eb17ee0109a6a0537d3cff9f079b2b9c2a8507846823173f47227ec90 81.70.253.133:22: INFO [2022-07-31 17:06:50] >> init registry success 2022-07-31 17:06:50 [INFO] start to init master0... 2022-07-31 17:06:50 [INFO] registry auth in node 81.70.253.133:22 81.70.253.133:22: 2022-07-31T17:06:50 info domain sealos.hub:81.70.253.133 append success 81.70.253.133:22: 2022-07-31T17:06:50 info domain apiserver.cluster.local:81.70.253.133 append success 81.70.253.133:22: W0731 17:06:50.746876 290227 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration! 
81.70.253.133:22: [init] Using Kubernetes version: v1.24.3
81.70.253.133:22: [preflight] Running pre-flight checks
81.70.253.133:22: [WARNING FileExisting-socat]: socat not found in system path
81.70.253.133:22: [WARNING SystemVerification]: missing optional cgroups: blkio
81.70.253.133:22: [preflight] Pulling images required for setting up a Kubernetes cluster
81.70.253.133:22: [preflight] This might take a minute or two, depending on the speed of your internet connection
81.70.253.133:22: [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
81.70.253.133:22: error execution phase preflight: [preflight] Some fatal errors occurred:
81.70.253.133:22: [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.24.3: output: time="2022-07-31T17:10:21+08:00" level=fatal msg="pulling image: rpc error: code = Unknown desc = failed to pull and unpack image \"k8s.gcr.io/kube-apiserver:v1.24.3\": failed to resolve reference \"k8s.gcr.io/kube-apiserver:v1.24.3\": failed to do request: Head \"https://k8s.gcr.io/v2/kube-apiserver/manifests/v1.24.3\": dial tcp 64.233.189.82:443: i/o timeout"
81.70.253.133:22: , error: exit status 1
81.70.253.133:22: [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.24.3: output: time="2022-07-31T17:13:51+08:00" level=fatal msg="pulling image: rpc error: code = Unknown desc = failed to pull and unpack image \"k8s.gcr.io/kube-controller-manager:v1.24.3\": failed to resolve reference \"k8s.gcr.io/kube-controller-manager:v1.24.3\": failed to do request: Head \"https://k8s.gcr.io/v2/kube-controller-manager/manifests/v1.24.3\": dial tcp 108.177.97.82:443: i/o timeout"
81.70.253.133:22: , error: exit status 1
81.70.253.133:22: [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.24.3: output: time="2022-07-31T17:17:21+08:00" level=fatal msg="pulling image: rpc error: code = Unknown desc = failed to pull and unpack image \"k8s.gcr.io/kube-scheduler:v1.24.3\": failed to resolve reference \"k8s.gcr.io/kube-scheduler:v1.24.3\": failed to do request: Head \"https://k8s.gcr.io/v2/kube-scheduler/manifests/v1.24.3\": dial tcp 74.125.203.82:443: i/o timeout"
81.70.253.133:22: , error: exit status 1
81.70.253.133:22: [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.24.3: output: time="2022-07-31T17:20:51+08:00" level=fatal msg="pulling image: rpc error: code = Unknown desc = failed to pull and unpack image \"k8s.gcr.io/kube-proxy:v1.24.3\": failed to resolve reference \"k8s.gcr.io/kube-proxy:v1.24.3\": failed to do request: Head \"https://k8s.gcr.io/v2/kube-proxy/manifests/v1.24.3\": dial tcp 64.233.189.82:443: i/o timeout"
81.70.253.133:22: , error: exit status 1
81.70.253.133:22: [ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.7: output: time="2022-07-31T17:24:21+08:00" level=fatal msg="pulling image: rpc error: code = Unknown desc = failed to pull and unpack image \"k8s.gcr.io/pause:3.7\": failed to resolve reference \"k8s.gcr.io/pause:3.7\": failed to do request: Head \"https://k8s.gcr.io/v2/pause/manifests/3.7\": dial tcp 108.177.97.82:443: i/o timeout"
81.70.253.133:22: , error: exit status 1
81.70.253.133:22: [ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.5.3-0: output: time="2022-07-31T17:27:51+08:00" level=fatal msg="pulling image: rpc error: code = Unknown desc = failed to pull and unpack image \"k8s.gcr.io/etcd:3.5.3-0\": failed to resolve reference \"k8s.gcr.io/etcd:3.5.3-0\": failed to do request: Head \"https://k8s.gcr.io/v2/etcd/manifests/3.5.3-0\": dial tcp 108.177.97.82:443: i/o timeout"
81.70.253.133:22: , error: exit status 1
81.70.253.133:22: [ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns/coredns:v1.8.6: output: time="2022-07-31T17:31:22+08:00" level=fatal msg="pulling image: rpc error: code = Unknown desc = failed to pull and unpack image \"k8s.gcr.io/coredns/coredns:v1.8.6\": failed to resolve reference \"k8s.gcr.io/coredns/coredns:v1.8.6\": failed to do request: Head \"https://k8s.gcr.io/v2/coredns/coredns/manifests/v1.8.6\": dial tcp 142.250.157.82:443: i/o timeout"
81.70.253.133:22: , error: exit status 1
81.70.253.133:22: [preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
81.70.253.133:22: To see the stack trace of this error execute with --v=5 or higher
2022-07-31 17:31:22 [EROR] Applied to cluster error: failed to init init master0 failed, error: failed to execute command(kubeadm init --config=/var/lib/sealos/data/default/etc/kubeadm-init.yaml --skip-certificate-key-print --skip-token-print -v 0 --ignore-preflight-errors=SystemVerification) on host(81.70.253.133:22): output(W0731 17:06:50.746876 290227 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[init] Using Kubernetes version: v1.24.3
[preflight] Running pre-flight checks
[WARNING FileExisting-socat]: socat not found in system path
[WARNING SystemVerification]: missing optional cgroups: blkio
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.24.3: output: time="2022-07-31T17:10:21+08:00" level=fatal msg="pulling image: rpc error: code = Unknown desc = failed to pull and unpack image \"k8s.gcr.io/kube-apiserver:v1.24.3\": failed to resolve reference \"k8s.gcr.io/kube-apiserver:v1.24.3\": failed to do request: Head \"https://k8s.gcr.io/v2/kube-apiserver/manifests/v1.24.3\": dial tcp 64.233.189.82:443: i/o timeout"
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.24.3: output: time="2022-07-31T17:13:51+08:00" level=fatal msg="pulling image: rpc error: code = Unknown desc = failed to pull and unpack image \"k8s.gcr.io/kube-controller-manager:v1.24.3\": failed to resolve reference \"k8s.gcr.io/kube-controller-manager:v1.24.3\": failed to do request: Head \"https://k8s.gcr.io/v2/kube-controller-manager/manifests/v1.24.3\": dial tcp 108.177.97.82:443: i/o timeout"
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.24.3: output: time="2022-07-31T17:17:21+08:00" level=fatal msg="pulling image: rpc error: code = Unknown desc = failed to pull and unpack image \"k8s.gcr.io/kube-scheduler:v1.24.3\": failed to resolve reference \"k8s.gcr.io/kube-scheduler:v1.24.3\": failed to do request: Head \"https://k8s.gcr.io/v2/kube-scheduler/manifests/v1.24.3\": dial tcp 74.125.203.82:443: i/o timeout"
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.24.3: output: time="2022-07-31T17:20:51+08:00" level=fatal msg="pulling image: rpc error: code = Unknown desc = failed to pull and unpack image \"k8s.gcr.io/kube-proxy:v1.24.3\": failed to resolve reference \"k8s.gcr.io/kube-proxy:v1.24.3\": failed to do request: Head \"https://k8s.gcr.io/v2/kube-proxy/manifests/v1.24.3\": dial tcp 64.233.189.82:443: i/o timeout"
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.7: output: time="2022-07-31T17:24:21+08:00" level=fatal msg="pulling image: rpc error: code = Unknown desc = failed to pull and unpack image \"k8s.gcr.io/pause:3.7\": failed to resolve reference \"k8s.gcr.io/pause:3.7\": failed to do request: Head \"https://k8s.gcr.io/v2/pause/manifests/3.7\": dial tcp 108.177.97.82:443: i/o timeout"
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.5.3-0: output: time="2022-07-31T17:27:51+08:00" level=fatal msg="pulling image: rpc error: code = Unknown desc = failed to pull and unpack image \"k8s.gcr.io/etcd:3.5.3-0\": failed to resolve reference \"k8s.gcr.io/etcd:3.5.3-0\": failed to do request: Head \"https://k8s.gcr.io/v2/etcd/manifests/3.5.3-0\": dial tcp 108.177.97.82:443: i/o timeout"
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns/coredns:v1.8.6: output: time="2022-07-31T17:31:22+08:00" level=fatal msg="pulling image: rpc error: code = Unknown desc = failed to pull and unpack image \"k8s.gcr.io/coredns/coredns:v1.8.6\": failed to resolve reference \"k8s.gcr.io/coredns/coredns:v1.8.6\": failed to do request: Head \"https://k8s.gcr.io/v2/coredns/coredns/manifests/v1.8.6\": dial tcp 142.250.157.82:443: i/o timeout"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher), error(Process exited with status 1). Please clean and reinstall. Some reference materials you see eg. the reference of the documentation
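Since every one of the seven pulls above times out against k8s.gcr.io, one common workaround (a sketch only, not sealos-specific advice) is to pre-pull the same tags from a registry the node can actually reach and retag them to the k8s.gcr.io names kubeadm's preflight expects. The mirror `registry.aliyuncs.com/google_containers` is an assumption here; verify it hosts these exact tags before relying on it. To stay safe, the script below only prints the `ctr` commands so they can be reviewed first, then piped to `sh` on the node:

```shell
#!/bin/sh
# Sketch, assuming containerd with the k8s.io namespace and an aliyun mirror
# (MIRROR is an assumption; substitute any registry reachable from the node).
# Prints one pull and one retag command per image kubeadm v1.24.3 needs,
# using the exact tags from the preflight errors above.
MIRROR=registry.aliyuncs.com/google_containers

gen_pull_cmds() {
  for img in kube-apiserver:v1.24.3 kube-controller-manager:v1.24.3 \
             kube-scheduler:v1.24.3 kube-proxy:v1.24.3 pause:3.7 etcd:3.5.3-0; do
    echo "ctr -n k8s.io images pull $MIRROR/$img"
    echo "ctr -n k8s.io images tag $MIRROR/$img k8s.gcr.io/$img"
  done
  # coredns lives under a sub-path upstream, so it is mapped separately
  echo "ctr -n k8s.io images pull $MIRROR/coredns:v1.8.6"
  echo "ctr -n k8s.io images tag $MIRROR/coredns:v1.8.6 k8s.gcr.io/coredns/coredns:v1.8.6"
}

gen_pull_cmds   # review the output, then run: gen_pull_cmds | sh
```

After retagging, re-running the same `kubeadm init` command should get past the ImagePull preflight checks, since the pull is skipped when the image already exists locally. The cleaner long-term fix, as noted earlier in the thread, is to avoid k8s.gcr.io entirely and serve the images from the private sealos.hub registry.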
Could anyone advise how I should handle this problem?