flannel-io / flannel

flannel is a network fabric for containers, designed for Kubernetes
Apache License 2.0

Node Not Ready even though the flannel pod is running when creating a K8s cluster with kubeadm #2039

Closed: James-Lu-none closed this issue 2 months ago

James-Lu-none commented 2 months ago

Expected Behavior

All nodes in Ready state and all pods in Running state.

Current Behavior

The flannel pod is in the Running state, the control-plane node is NotReady, and pods that require a ready control plane are stuck in Pending.

james@node01:~$ kubectl get all -A -o wide
NAMESPACE      NAME                                 READY   STATUS    RESTARTS   AGE    IP             NODE     NOMINATED NODE   READINESS GATES
kube-flannel   pod/kube-flannel-ds-jqwj2            1/1     Running   0          154m   192.168.1.26   node01   <none>           <none>
kube-system    pod/coredns-6f6b679f8f-d75kj         0/1     Pending   0          157m   <none>         <none>   <none>           <none>
kube-system    pod/coredns-6f6b679f8f-nv86r         0/1     Pending   0          157m   <none>         <none>   <none>           <none>
kube-system    pod/etcd-node01                      1/1     Running   0          157m   192.168.1.26   node01   <none>           <none>
kube-system    pod/kube-apiserver-node01            1/1     Running   0          157m   192.168.1.26   node01   <none>           <none>
kube-system    pod/kube-controller-manager-node01   1/1     Running   0          157m   192.168.1.26   node01   <none>           <none>
kube-system    pod/kube-proxy-dgnxv                 1/1     Running   0          157m   192.168.1.26   node01   <none>           <none>
kube-system    pod/kube-scheduler-node01            1/1     Running   0          157m   192.168.1.26   node01   <none>           <none>

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE    SELECTOR
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  157m   <none>
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   157m   k8s-app=kube-dns

NAMESPACE      NAME                             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE    CONTAINERS     IMAGES                               SELECTOR
kube-flannel   daemonset.apps/kube-flannel-ds   1         1         1       1            1           <none>                   154m   kube-flannel   docker.io/flannel/flannel:v0.25.5    app=flannel,k8s-app=flannel
kube-system    daemonset.apps/kube-proxy        1         1         1       1            1           kubernetes.io/os=linux   157m   kube-proxy     registry.k8s.io/kube-proxy:v1.31.0   k8s-app=kube-proxy

NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS   IMAGES                                    SELECTOR
kube-system   deployment.apps/coredns   0/2     2            0           157m   coredns      registry.k8s.io/coredns/coredns:v1.11.1   k8s-app=kube-dns

NAMESPACE     NAME                                 DESIRED   CURRENT   READY   AGE    CONTAINERS   IMAGES                                    SELECTOR
kube-system   replicaset.apps/coredns-6f6b679f8f   2         2         0       157m   coredns      registry.k8s.io/coredns/coredns:v1.11.1   k8s-app=kube-dns,pod-template-hash=6f6b679f8f
james@node01:~$ kubectl get nodes -o wide
NAME     STATUS     ROLES           AGE    VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE           KERNEL-VERSION     CONTAINER-RUNTIME
node01   NotReady   control-plane   158m   v1.31.0   192.168.1.26   <none>        Ubuntu 24.04 LTS   6.8.0-31-generic   containerd://1.7.21

Possible Solution

Steps to Reproduce (for bugs)

sudo apt update
sudo apt upgrade
sudo install -m 0755 -d /etc/apt/keyrings

# install kubelet kubeadm kubectl
# apt-transport-https may be a dummy package; if so, you can skip that package
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

sudo systemctl enable --now kubelet

# install container runtime (containerd)
wget https://github.com/containerd/containerd/releases/download/v1.7.21/containerd-1.7.21-linux-amd64.tar.gz
sudo tar Cxzvf /usr/local containerd-1.7.21-linux-amd64.tar.gz
sudo mkdir -p /usr/local/lib/systemd/system
sudo wget -O /usr/local/lib/systemd/system/containerd.service https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
sudo systemctl daemon-reload
sudo systemctl enable --now containerd
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
# set SystemdCgroup to true for runc
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
sudo systemctl restart containerd
# install runc
wget https://github.com/opencontainers/runc/releases/download/v1.1.13/runc.amd64
sudo install -m 755 runc.amd64 /usr/local/sbin/runc
# install cni plugin
wget https://github.com/containernetworking/plugins/releases/download/v1.5.1/cni-plugins-linux-amd64-v1.5.1.tgz
sudo mkdir -p /opt/cni/bin
sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.5.1.tgz

# pull kubeadm images for containerd
sudo kubeadm config images pull --cri-socket=/run/containerd/containerd.sock --kubernetes-version=v1.31.0

autoInit=false
cri=containerd

THIS_SCRIPT_PATH=$(cd "$(dirname "$0")" && pwd)
cd "$THIS_SCRIPT_PATH"

sudo swapoff -a
sudo modprobe br_netfilter
sudo modprobe overlay

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

sudo sysctl --system

sudo kubeadm init --pod-network-cidr=10.244.0.0/16 \
--upload-certs \
--kubernetes-version=v1.31.0 \
--control-plane-endpoint=192.168.1.26:6443 \
--cri-socket=/run/containerd/containerd.sock
sudo mkdir $HOME/.kube/
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
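
# sanity checks after applying the manifest (illustrative, not part of the original steps):
kubectl -n kube-flannel get pods -o wide   # flannel DaemonSet pod should reach Running
ls -l /etc/cni/net.d                       # flannel's install-cni init container should have written 10-flannel.conflist
kubectl get nodes -o wide                  # the node should eventually report Ready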

Context

I want to create a K8s HA cluster for my project.

Your Environment

rbrtbnfgl commented 2 months ago

You could check with kubectl describe why the node status is NotReady and why the pods are Pending.
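
For example, something along these lines, with the node and pod names taken from the output above:

kubectl describe node node01
kubectl -n kube-system describe pod coredns-6f6b679f8f-d75kj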

James-Lu-none commented 2 months ago

It says "Network plugin returns error: cni plugin not initialized", and the Pending pods are due to the node not being ready:

Warning  FailedScheduling  27s (x33 over 160m)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
james@node01:~$ kubectl describe node node01
Name:               node01
Roles:              control-plane
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=node01
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"02:56:7c:28:3e:59"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.1.26
                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Tue, 27 Aug 2024 16:45:36 +0800
Taints:             node-role.kubernetes.io/control-plane:NoSchedule
                    node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  node01
  AcquireTime:     <unset>
  RenewTime:       Tue, 27 Aug 2024 19:20:45 +0800
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Tue, 27 Aug 2024 16:49:21 +0800   Tue, 27 Aug 2024 16:49:21 +0800   FlannelIsUp                  Flannel is running on this node
  MemoryPressure       False   Tue, 27 Aug 2024 19:17:40 +0800   Tue, 27 Aug 2024 16:45:35 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Tue, 27 Aug 2024 19:17:40 +0800   Tue, 27 Aug 2024 16:45:35 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Tue, 27 Aug 2024 19:17:40 +0800   Tue, 27 Aug 2024 16:45:35 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                False   Tue, 27 Aug 2024 19:17:40 +0800   Tue, 27 Aug 2024 16:45:35 +0800   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Addresses:
  InternalIP:  192.168.1.26
  Hostname:    node01
Capacity:
  cpu:                12
  ephemeral-storage:  490048472Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             24449960Ki
  pods:               110
Allocatable:
  cpu:                12
  ephemeral-storage:  451628671048
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             24347560Ki
  pods:               110
System Info:
  Machine ID:                 0e3cd420f53e496aa0addfd7258f1321
  System UUID:                97afcf58-9f65-194a-96cf-bbe6e92855ff
  Boot ID:                    06872c07-4408-4e0a-853e-1a54ba66c6a0
  Kernel Version:             6.8.0-31-generic
  OS Image:                   Ubuntu 24.04 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.7.21
  Kubelet Version:            v1.31.0
  Kube-Proxy Version:         
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (6 in total)
  Namespace                   Name                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                              ------------  ----------  ---------------  -------------  ---
  kube-flannel                kube-flannel-ds-jqwj2             100m (0%)     0 (0%)      50Mi (0%)        0 (0%)         151m
  kube-system                 etcd-node01                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         155m
  kube-system                 kube-apiserver-node01             250m (2%)     0 (0%)      0 (0%)           0 (0%)         155m
  kube-system                 kube-controller-manager-node01    200m (1%)     0 (0%)      0 (0%)           0 (0%)         155m
  kube-system                 kube-proxy-dgnxv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         155m
  kube-system                 kube-scheduler-node01             100m (0%)     0 (0%)      0 (0%)           0 (0%)         155m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                750m (6%)   0 (0%)
  memory             150Mi (0%)  0 (0%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:              <none>
rbrtbnfgl commented 2 months ago

what is the content of /opt/cni/bin?

James-Lu-none commented 2 months ago
james@node01:~/workspace/On-Premise-dev$ ls -al /opt/cni/bin
total 84124
drwxr-xr-x 2 1001  127     4096 Aug 27 20:02 .
drwxr-xr-x 3 root root     4096 Aug 27 15:27 ..
-rwxr-xr-x 1 root root  4272898 Jun 17 23:51 bandwidth
-rwxr-xr-x 1 root root  4788191 Jun 17 23:51 bridge
-rwxr-xr-x 1 root root 11419738 Jun 17 23:51 dhcp
-rwxr-xr-x 1 root root  4424930 Jun 17 23:51 dummy
-rwxr-xr-x 1 root root  4943846 Jun 17 23:51 firewall
-rwxr-xr-x 1 root root  2631704 Aug 27 20:02 flannel
-rwxr-xr-x 1 root root  4345300 Jun 17 23:51 host-device
-rwxr-xr-x 1 root root  3679575 Jun 17 23:51 host-local
-rwxr-xr-x 1 root root  4443729 Jun 17 23:51 ipvlan
-rw-r--r-- 1 root root    11357 Jun 17 23:51 LICENSE
-rwxr-xr-x 1 root root  3750882 Jun 17 23:51 loopback
-rwxr-xr-x 1 root root  4480422 Jun 17 23:51 macvlan
-rwxr-xr-x 1 root root  4228332 Jun 17 23:51 portmap
-rwxr-xr-x 1 root root  4602833 Jun 17 23:51 ptp
-rw-r--r-- 1 root root     2343 Jun 17 23:51 README.md
-rwxr-xr-x 1 root root  3957166 Jun 17 23:51 sbr
-rwxr-xr-x 1 root root  3223947 Jun 17 23:51 static
-rwxr-xr-x 1 root root  4503742 Jun 17 23:51 tap
-rwxr-xr-x 1 root root  3838043 Jun 17 23:51 tuning
-rwxr-xr-x 1 root root  4440528 Jun 17 23:51 vlan
-rwxr-xr-x 1 root root  4103500 Jun 17 23:51 vrf
zhangguanzhang commented 2 months ago
ls -l /etc/cni/net.d
journalctl -xe --no-pager -u kubelet
James-Lu-none commented 2 months ago
james@node01:~/workspace/On-Premise-dev$ ls -l /etc/cni/net.d
total 4
-rw-r--r-- 1 root root 292 Aug 27 20:02 10-flannel.conflist
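
For reference, with the stock kube-flannel.yml that file should be a small JSON conflist chaining the flannel and portmap plugins; a quick sanity check (not in the original listing):

cat /etc/cni/net.d/10-flannel.conflist   # should print valid JSON naming the "flannel" and "portmap" plugin types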

let me restart the cluster real quick

James-Lu-none commented 2 months ago
░░ Subject: A stop job for unit kubelet.service has begun execution
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ A stop job for unit kubelet.service has begun execution.
░░ 
░░ The job identifier is 66676.
Aug 27 22:24:46 node01 systemd[1]: kubelet.service: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ The unit kubelet.service has successfully entered the 'dead' state.
Aug 27 22:24:46 node01 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
░░ Subject: A stop job for unit kubelet.service has finished
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ A stop job for unit kubelet.service has finished.
░░ 
░░ The job identifier is 66676 and the job result is done.
Aug 27 22:24:46 node01 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
░░ Subject: A start job for unit kubelet.service has finished successfully
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ A start job for unit kubelet.service has finished successfully.
░░ 
░░ The job identifier is 66676.
Aug 27 22:24:46 node01 kubelet[130280]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 27 22:24:46 node01 kubelet[130280]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.487901  130280 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.491709  130280 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.491721  130280 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.491864  130280 server.go:929] "Client rotation is on, will bootstrap in background"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.492647  130280 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.493870  130280 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 27 22:24:46 node01 kubelet[130280]: E0827 22:24:46.495664  130280 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.495869  130280 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.499786  130280 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.499846  130280 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.499910  130280 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.499927  130280 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"node01","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.500054  130280 topology_manager.go:138] "Creating topology manager with none policy"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.500061  130280 container_manager_linux.go:300] "Creating device plugin manager"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.500078  130280 state_mem.go:36] "Initialized new in-memory state store"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.500136  130280 kubelet.go:408] "Attempting to sync node with API server"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.500145  130280 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.500161  130280 kubelet.go:314] "Adding apiserver pod source"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.500169  130280 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.500513  130280 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.500840  130280 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.501092  130280 server.go:1269] "Started kubelet"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.501110  130280 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.501145  130280 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.501420  130280 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.502358  130280 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 27 22:24:46 node01 kubelet[130280]: E0827 22:24:46.502533  130280 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"node01\" not found"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.502590  130280 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.502620  130280 volume_manager.go:289] "Starting Kubelet Volume Manager"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.502667  130280 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.502815  130280 reconciler.go:26] "Reconciler: start to sync state"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.503041  130280 server.go:460] "Adding debug handlers to kubelet server"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.503435  130280 factory.go:221] Registration of the systemd container factory successfully
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.503515  130280 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 27 22:24:46 node01 kubelet[130280]: E0827 22:24:46.504266  130280 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.504401  130280 factory.go:221] Registration of the containerd container factory successfully
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.512232  130280 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.512983  130280 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.513004  130280 status_manager.go:217] "Starting to sync pod status with apiserver"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.513029  130280 kubelet.go:2321] "Starting kubelet main sync loop"
Aug 27 22:24:46 node01 kubelet[130280]: E0827 22:24:46.513055  130280 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.531481  130280 cpu_manager.go:214] "Starting CPU manager" policy="none"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.531493  130280 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.531505  130280 state_mem.go:36] "Initialized new in-memory state store"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.531625  130280 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.531633  130280 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.531652  130280 policy_none.go:49] "None policy: Start"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.531961  130280 memory_manager.go:170] "Starting memorymanager" policy="None"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.531971  130280 state_mem.go:35] "Initializing new in-memory state store"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.532052  130280 state_mem.go:75] "Updated machine memory state"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.534552  130280 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.534651  130280 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.534660  130280 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.534952  130280 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 27 22:24:46 node01 kubelet[130280]: E0827 22:24:46.626605  130280 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"etcd-node01\" already exists" pod="kube-system/etcd-node01"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.637869  130280 kubelet_node_status.go:72] "Attempting to register node" node="node01"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.654881  130280 kubelet_node_status.go:111] "Node was previously registered" node="node01"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.655084  130280 kubelet_node_status.go:75] "Successfully registered node" node="node01"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.704165  130280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d5dd991c3953172e76c71b6bf62cbd65-etc-ca-certificates\") pod \"kube-controller-manager-node01\" (UID: \"d5dd991c3953172e76c71b6bf62cbd65\") " pod="kube-system/kube-controller-manager-node01"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.704213  130280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d5dd991c3953172e76c71b6bf62cbd65-k8s-certs\") pod \"kube-controller-manager-node01\" (UID: \"d5dd991c3953172e76c71b6bf62cbd65\") " pod="kube-system/kube-controller-manager-node01"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.704269  130280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/eac6a626899dc659467710357e65732f-etcd-certs\") pod \"etcd-node01\" (UID: \"eac6a626899dc659467710357e65732f\") " pod="kube-system/etcd-node01"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.704305  130280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d42119b6c1dab7589d4988d8e4b856d2-etc-ca-certificates\") pod \"kube-apiserver-node01\" (UID: \"d42119b6c1dab7589d4988d8e4b856d2\") " pod="kube-system/kube-apiserver-node01"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.704349  130280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d42119b6c1dab7589d4988d8e4b856d2-usr-share-ca-certificates\") pod \"kube-apiserver-node01\" (UID: \"d42119b6c1dab7589d4988d8e4b856d2\") " pod="kube-system/kube-apiserver-node01"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.704390  130280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d5dd991c3953172e76c71b6bf62cbd65-flexvolume-dir\") pod \"kube-controller-manager-node01\" (UID: \"d5dd991c3953172e76c71b6bf62cbd65\") " pod="kube-system/kube-controller-manager-node01"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.704429  130280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d5dd991c3953172e76c71b6bf62cbd65-kubeconfig\") pod \"kube-controller-manager-node01\" (UID: \"d5dd991c3953172e76c71b6bf62cbd65\") " pod="kube-system/kube-controller-manager-node01"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.704452  130280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d5dd991c3953172e76c71b6bf62cbd65-usr-share-ca-certificates\") pod \"kube-controller-manager-node01\" (UID: \"d5dd991c3953172e76c71b6bf62cbd65\") " pod="kube-system/kube-controller-manager-node01"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.704484  130280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/02562b06283420a61a5ef8ab1e026cda-kubeconfig\") pod \"kube-scheduler-node01\" (UID: \"02562b06283420a61a5ef8ab1e026cda\") " pod="kube-system/kube-scheduler-node01"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.704515  130280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/eac6a626899dc659467710357e65732f-etcd-data\") pod \"etcd-node01\" (UID: \"eac6a626899dc659467710357e65732f\") " pod="kube-system/etcd-node01"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.704540  130280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d42119b6c1dab7589d4988d8e4b856d2-usr-local-share-ca-certificates\") pod \"kube-apiserver-node01\" (UID: \"d42119b6c1dab7589d4988d8e4b856d2\") " pod="kube-system/kube-apiserver-node01"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.704583  130280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d42119b6c1dab7589d4988d8e4b856d2-k8s-certs\") pod \"kube-apiserver-node01\" (UID: \"d42119b6c1dab7589d4988d8e4b856d2\") " pod="kube-system/kube-apiserver-node01"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.704619  130280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d42119b6c1dab7589d4988d8e4b856d2-ca-certs\") pod \"kube-apiserver-node01\" (UID: \"d42119b6c1dab7589d4988d8e4b856d2\") " pod="kube-system/kube-apiserver-node01"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.704643  130280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d5dd991c3953172e76c71b6bf62cbd65-usr-local-share-ca-certificates\") pod \"kube-controller-manager-node01\" (UID: \"d5dd991c3953172e76c71b6bf62cbd65\") " pod="kube-system/kube-controller-manager-node01"
Aug 27 22:24:46 node01 kubelet[130280]: I0827 22:24:46.704663  130280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d5dd991c3953172e76c71b6bf62cbd65-ca-certs\") pod \"kube-controller-manager-node01\" (UID: \"d5dd991c3953172e76c71b6bf62cbd65\") " pod="kube-system/kube-controller-manager-node01"
Aug 27 22:24:47 node01 kubelet[130280]: I0827 22:24:47.501205  130280 apiserver.go:52] "Watching apiserver"
Aug 27 22:24:47 node01 kubelet[130280]: E0827 22:24:47.534083  130280 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-node01\" already exists" pod="kube-system/kube-scheduler-node01"
Aug 27 22:24:47 node01 kubelet[130280]: E0827 22:24:47.534607  130280 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-node01\" already exists" pod="kube-system/kube-apiserver-node01"
Aug 27 22:24:47 node01 kubelet[130280]: E0827 22:24:47.535460  130280 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"etcd-node01\" already exists" pod="kube-system/etcd-node01"
Aug 27 22:24:47 node01 kubelet[130280]: E0827 22:24:47.537062  130280 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-node01\" already exists" pod="kube-system/kube-controller-manager-node01"
Aug 27 22:24:47 node01 kubelet[130280]: I0827 22:24:47.554242  130280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-node01" podStartSLOduration=1.554224592 podStartE2EDuration="1.554224592s" podCreationTimestamp="2024-08-27 22:24:46 +0800 CST" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-27 22:24:47.554225691 +0800 CST m=+1.086980920" watchObservedRunningTime="2024-08-27 22:24:47.554224592 +0800 CST m=+1.086979816"
Aug 27 22:24:47 node01 kubelet[130280]: I0827 22:24:47.566569  130280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-node01" podStartSLOduration=3.566558952 podStartE2EDuration="3.566558952s" podCreationTimestamp="2024-08-27 22:24:44 +0800 CST" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-27 22:24:47.561259319 +0800 CST m=+1.094014541" watchObservedRunningTime="2024-08-27 22:24:47.566558952 +0800 CST m=+1.099314169"
Aug 27 22:24:47 node01 kubelet[130280]: I0827 22:24:47.566644  130280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-node01" podStartSLOduration=1.566638083 podStartE2EDuration="1.566638083s" podCreationTimestamp="2024-08-27 22:24:46 +0800 CST" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-27 22:24:47.566500353 +0800 CST m=+1.099255574" watchObservedRunningTime="2024-08-27 22:24:47.566638083 +0800 CST m=+1.099393301"
Aug 27 22:24:47 node01 kubelet[130280]: I0827 22:24:47.570780  130280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-node01" podStartSLOduration=1.570769468 podStartE2EDuration="1.570769468s" podCreationTimestamp="2024-08-27 22:24:46 +0800 CST" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-27 22:24:47.570714135 +0800 CST m=+1.103469355" watchObservedRunningTime="2024-08-27 22:24:47.570769468 +0800 CST m=+1.103524685"
Aug 27 22:24:47 node01 kubelet[130280]: I0827 22:24:47.603572  130280 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Aug 27 22:24:52 node01 kubelet[130280]: I0827 22:24:52.472344  130280 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Aug 27 22:24:52 node01 kubelet[130280]: I0827 22:24:52.473553  130280 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Aug 27 22:24:53 node01 kubelet[130280]: I0827 22:24:53.351568  130280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c3acb71-52f8-4985-816a-6a028e988b01-xtables-lock\") pod \"kube-proxy-9s4fd\" (UID: \"8c3acb71-52f8-4985-816a-6a028e988b01\") " pod="kube-system/kube-proxy-9s4fd"
Aug 27 22:24:53 node01 kubelet[130280]: I0827 22:24:53.351714  130280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxw4t\" (UniqueName: \"kubernetes.io/projected/8c3acb71-52f8-4985-816a-6a028e988b01-kube-api-access-wxw4t\") pod \"kube-proxy-9s4fd\" (UID: \"8c3acb71-52f8-4985-816a-6a028e988b01\") " pod="kube-system/kube-proxy-9s4fd"
Aug 27 22:24:53 node01 kubelet[130280]: I0827 22:24:53.351814  130280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8c3acb71-52f8-4985-816a-6a028e988b01-kube-proxy\") pod \"kube-proxy-9s4fd\" (UID: \"8c3acb71-52f8-4985-816a-6a028e988b01\") " pod="kube-system/kube-proxy-9s4fd"
Aug 27 22:24:53 node01 kubelet[130280]: I0827 22:24:53.351899  130280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c3acb71-52f8-4985-816a-6a028e988b01-lib-modules\") pod \"kube-proxy-9s4fd\" (UID: \"8c3acb71-52f8-4985-816a-6a028e988b01\") " pod="kube-system/kube-proxy-9s4fd"
Aug 27 22:24:54 node01 kubelet[130280]: I0827 22:24:54.638166  130280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9s4fd" podStartSLOduration=1.638130828 podStartE2EDuration="1.638130828s" podCreationTimestamp="2024-08-27 22:24:53 +0800 CST" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-27 22:24:54.553886719 +0800 CST m=+8.086642027" watchObservedRunningTime="2024-08-27 22:24:54.638130828 +0800 CST m=+8.170886087"
Aug 27 22:25:06 node01 kubelet[130280]: I0827 22:25:06.746599  130280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/dfc92f54-6b75-4e57-a7c4-d63b4ab42933-cni-plugin\") pod \"kube-flannel-ds-n8lln\" (UID: \"dfc92f54-6b75-4e57-a7c4-d63b4ab42933\") " pod="kube-flannel/kube-flannel-ds-n8lln"
Aug 27 22:25:06 node01 kubelet[130280]: I0827 22:25:06.746684  130280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dfc92f54-6b75-4e57-a7c4-d63b4ab42933-xtables-lock\") pod \"kube-flannel-ds-n8lln\" (UID: \"dfc92f54-6b75-4e57-a7c4-d63b4ab42933\") " pod="kube-flannel/kube-flannel-ds-n8lln"
Aug 27 22:25:06 node01 kubelet[130280]: I0827 22:25:06.746739  130280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w546x\" (UniqueName: \"kubernetes.io/projected/dfc92f54-6b75-4e57-a7c4-d63b4ab42933-kube-api-access-w546x\") pod \"kube-flannel-ds-n8lln\" (UID: \"dfc92f54-6b75-4e57-a7c4-d63b4ab42933\") " pod="kube-flannel/kube-flannel-ds-n8lln"
Aug 27 22:25:06 node01 kubelet[130280]: I0827 22:25:06.746861  130280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/dfc92f54-6b75-4e57-a7c4-d63b4ab42933-flannel-cfg\") pod \"kube-flannel-ds-n8lln\" (UID: \"dfc92f54-6b75-4e57-a7c4-d63b4ab42933\") " pod="kube-flannel/kube-flannel-ds-n8lln"
Aug 27 22:25:06 node01 kubelet[130280]: I0827 22:25:06.746976  130280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/dfc92f54-6b75-4e57-a7c4-d63b4ab42933-run\") pod \"kube-flannel-ds-n8lln\" (UID: \"dfc92f54-6b75-4e57-a7c4-d63b4ab42933\") " pod="kube-flannel/kube-flannel-ds-n8lln"
Aug 27 22:25:06 node01 kubelet[130280]: I0827 22:25:06.747026  130280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/dfc92f54-6b75-4e57-a7c4-d63b4ab42933-cni\") pod \"kube-flannel-ds-n8lln\" (UID: \"dfc92f54-6b75-4e57-a7c4-d63b4ab42933\") " pod="kube-flannel/kube-flannel-ds-n8lln"
Aug 27 22:25:09 node01 kubelet[130280]: I0827 22:25:09.602495  130280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-n8lln" podStartSLOduration=3.602463218 podStartE2EDuration="3.602463218s" podCreationTimestamp="2024-08-27 22:25:06 +0800 CST" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-27 22:25:09.602164942 +0800 CST m=+23.134920222" watchObservedRunningTime="2024-08-27 22:25:09.602463218 +0800 CST m=+23.135218479"
Aug 27 22:25:46 node01 kubelet[130280]: E0827 22:25:46.505720  130280 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a1a1ab1f5ce306078b1ef9397bfa843e8c1b50f3eec0af2375a67eb6165cccda\": not found" containerID="a1a1ab1f5ce306078b1ef9397bfa843e8c1b50f3eec0af2375a67eb6165cccda"
Aug 27 22:25:46 node01 kubelet[130280]: I0827 22:25:46.505755  130280 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="a1a1ab1f5ce306078b1ef9397bfa843e8c1b50f3eec0af2375a67eb6165cccda" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a1a1ab1f5ce306078b1ef9397bfa843e8c1b50f3eec0af2375a67eb6165cccda\": not found"
Aug 27 22:25:46 node01 kubelet[130280]: E0827 22:25:46.506018  130280 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6b6ba4bb4a044bd1260b84d11c41d2a264159873a9c7d4b0f176c35333625361\": not found" containerID="6b6ba4bb4a044bd1260b84d11c41d2a264159873a9c7d4b0f176c35333625361"
Aug 27 22:25:46 node01 kubelet[130280]: I0827 22:25:46.506043  130280 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="6b6ba4bb4a044bd1260b84d11c41d2a264159873a9c7d4b0f176c35333625361" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6b6ba4bb4a044bd1260b84d11c41d2a264159873a9c7d4b0f176c35333625361\": not found"
Aug 27 22:25:46 node01 kubelet[130280]: E0827 22:25:46.506292  130280 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9478472555a18ff9f50de34f410c8e7ace6c551253cbddbf58ab0bf84c48d658\": not found" containerID="9478472555a18ff9f50de34f410c8e7ace6c551253cbddbf58ab0bf84c48d658"
Aug 27 22:25:46 node01 kubelet[130280]: I0827 22:25:46.506309  130280 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="9478472555a18ff9f50de34f410c8e7ace6c551253cbddbf58ab0bf84c48d658" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9478472555a18ff9f50de34f410c8e7ace6c551253cbddbf58ab0bf84c48d658\": not found"
Aug 27 22:25:46 node01 kubelet[130280]: E0827 22:25:46.506545  130280 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ba2021cc6c69ddfdad077f50602ac8560725bb7a392ef4daa7510b8f5d49998c\": not found" containerID="ba2021cc6c69ddfdad077f50602ac8560725bb7a392ef4daa7510b8f5d49998c"
Aug 27 22:25:46 node01 kubelet[130280]: I0827 22:25:46.506563  130280 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="ba2021cc6c69ddfdad077f50602ac8560725bb7a392ef4daa7510b8f5d49998c" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ba2021cc6c69ddfdad077f50602ac8560725bb7a392ef4daa7510b8f5d49998c\": not found"
Aug 27 22:26:46 node01 kubelet[130280]: E0827 22:26:46.547633  130280 kubelet_node_status.go:447] "Node not becoming ready in time after startup"
Aug 27 22:26:46 node01 kubelet[130280]: E0827 22:26:46.569084  130280 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 27 22:26:51 node01 kubelet[130280]: E0827 22:26:51.571162  130280 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 27 22:26:56 node01 kubelet[130280]: E0827 22:26:56.573108  130280 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 27 22:27:01 node01 kubelet[130280]: E0827 22:27:01.574610  130280 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 27 22:27:06 node01 kubelet[130280]: E0827 22:27:06.575279  130280 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"

After Aug 27 22:27:06, it keeps spamming the same line over and over: "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
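
A way to cross-check the same condition from the runtime side, assuming crictl is installed and pointed at the containerd socket:

# ask containerd's CRI endpoint directly; the NetworkReady condition should carry the same reason/message
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock info | grep -A4 NetworkReady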

zhangguanzhang commented 2 months ago
[root@guan ~]# cd github/kubernetes/
[root@guan  ~/github/kubernetes]# find -type f -exec grep 'cni plugin not initialized' {} \;
[root@guan  ~/github/kubernetes]# cd ../containerd/
[root@guan  ~/github/containerd]# find -type f -exec grep 'cni plugin not initialized' {} \;
    ErrCNINotInitialized = errors.New("cni plugin not initialized")
^C
[root@guan  ~/github/containerd]# find -type f -exec grep -l 'cni plugin not initialized' {} \;
./vendor/github.com/containerd/go-cni/errors.go
[root@guan  ~/github/containerd]# 

We need the containerd logs:

journalctl -xe --no-pager -u containerd
James-Lu-none commented 2 months ago
Aug 28 08:25:13 node01 containerd[1324]: time="2024-08-28T08:25:13.924230251+08:00" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE        \"/etc/cni/net.d\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 28 08:25:41 node01 containerd[1324]: time="2024-08-28T08:25:41.784506820+08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:etcd-node01,Uid:eac6a626899dc659467710357e65732f,Namespace:kube-system,Attempt:0,}"
Aug 28 08:25:41 node01 containerd[1324]: time="2024-08-28T08:25:41.798469384+08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-node01,Uid:d42119b6c1dab7589d4988d8e4b856d2,Namespace:kube-system,Attempt:0,}"
Aug 28 08:25:41 node01 containerd[1324]: time="2024-08-28T08:25:41.801483990+08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-node01,Uid:d5dd991c3953172e76c71b6bf62cbd65,Namespace:kube-system,Attempt:0,}"
Aug 28 08:25:41 node01 containerd[1324]: time="2024-08-28T08:25:41.816634168+08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-node01,Uid:02562b06283420a61a5ef8ab1e026cda,Namespace:kube-system,Attempt:0,}"
Aug 28 08:25:41 node01 containerd[1324]: time="2024-08-28T08:25:41.849085781+08:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 28 08:25:41 node01 containerd[1324]: time="2024-08-28T08:25:41.849141817+08:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 28 08:25:41 node01 containerd[1324]: time="2024-08-28T08:25:41.849159940+08:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 28 08:25:41 node01 containerd[1324]: time="2024-08-28T08:25:41.849166652+08:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 28 08:25:41 node01 containerd[1324]: time="2024-08-28T08:25:41.849223222+08:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 28 08:25:41 node01 containerd[1324]: time="2024-08-28T08:25:41.849249680+08:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 28 08:25:41 node01 containerd[1324]: time="2024-08-28T08:25:41.849275359+08:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 28 08:25:41 node01 containerd[1324]: time="2024-08-28T08:25:41.849295937+08:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 28 08:25:41 node01 containerd[1324]: time="2024-08-28T08:25:41.849336071+08:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 28 08:25:41 node01 containerd[1324]: time="2024-08-28T08:25:41.849346922+08:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 28 08:25:41 node01 containerd[1324]: time="2024-08-28T08:25:41.849333849+08:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 28 08:25:41 node01 containerd[1324]: time="2024-08-28T08:25:41.849347405+08:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 28 08:25:41 node01 containerd[1324]: time="2024-08-28T08:25:41.849540635+08:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 28 08:25:41 node01 containerd[1324]: time="2024-08-28T08:25:41.849540255+08:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 28 08:25:41 node01 containerd[1324]: time="2024-08-28T08:25:41.849547786+08:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 28 08:25:41 node01 containerd[1324]: time="2024-08-28T08:25:41.849546937+08:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 28 08:25:41 node01 containerd[1324]: time="2024-08-28T08:25:41.927871274+08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:etcd-node01,Uid:eac6a626899dc659467710357e65732f,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1ba422c13ecdde67a15934b32fc6a2814ac18b6f7fddf5c1acc1019af6da0af\""
Aug 28 08:25:41 node01 containerd[1324]: time="2024-08-28T08:25:41.927883339+08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-node01,Uid:02562b06283420a61a5ef8ab1e026cda,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8c8e2b2a6fda06df4fc8bbf666248834e0f01589a29b811f2b10bd2f0244b08\""
Aug 28 08:25:41 node01 containerd[1324]: time="2024-08-28T08:25:41.928273486+08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-node01,Uid:d5dd991c3953172e76c71b6bf62cbd65,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd987226402f08e88c6fe901c6a0713892817fb80353157dfc82b116d0731439\""
Aug 28 08:25:41 node01 containerd[1324]: time="2024-08-28T08:25:41.930500637+08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-node01,Uid:d42119b6c1dab7589d4988d8e4b856d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"e24bbc6ff0b79a77611c365972315a418413050119592ecc682552e139fca0d4\""
Aug 28 08:25:41 node01 containerd[1324]: time="2024-08-28T08:25:41.930578777+08:00" level=info msg="CreateContainer within sandbox \"a8c8e2b2a6fda06df4fc8bbf666248834e0f01589a29b811f2b10bd2f0244b08\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:4,}"
Aug 28 08:25:41 node01 containerd[1324]: time="2024-08-28T08:25:41.930639410+08:00" level=info msg="CreateContainer within sandbox \"b1ba422c13ecdde67a15934b32fc6a2814ac18b6f7fddf5c1acc1019af6da0af\" for container &ContainerMetadata{Name:etcd,Attempt:3,}"
Aug 28 08:25:41 node01 containerd[1324]: time="2024-08-28T08:25:41.930755361+08:00" level=info msg="CreateContainer within sandbox \"dd987226402f08e88c6fe901c6a0713892817fb80353157dfc82b116d0731439\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:4,}"
Aug 28 08:25:41 node01 containerd[1324]: time="2024-08-28T08:25:41.931518999+08:00" level=info msg="CreateContainer within sandbox \"e24bbc6ff0b79a77611c365972315a418413050119592ecc682552e139fca0d4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:3,}"
Aug 28 08:25:41 node01 containerd[1324]: time="2024-08-28T08:25:41.966447384+08:00" level=info msg="CreateContainer within sandbox \"dd987226402f08e88c6fe901c6a0713892817fb80353157dfc82b116d0731439\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:4,} returns container id \"1da77b17e150e1f95e31c500220069fe74f872d28bf8a0b75eb3779d0bb6708a\""
Aug 28 08:25:41 node01 containerd[1324]: time="2024-08-28T08:25:41.966877705+08:00" level=info msg="StartContainer for \"1da77b17e150e1f95e31c500220069fe74f872d28bf8a0b75eb3779d0bb6708a\""
Aug 28 08:25:41 node01 containerd[1324]: time="2024-08-28T08:25:41.967708584+08:00" level=info msg="CreateContainer within sandbox \"a8c8e2b2a6fda06df4fc8bbf666248834e0f01589a29b811f2b10bd2f0244b08\" for &ContainerMetadata{Name:kube-scheduler,Attempt:4,} returns container id \"7752747201017ce80e880f5d770b60d10f581ade2877c6825521bb5c2f1ca025\""
Aug 28 08:25:41 node01 containerd[1324]: time="2024-08-28T08:25:41.967985243+08:00" level=info msg="StartContainer for \"7752747201017ce80e880f5d770b60d10f581ade2877c6825521bb5c2f1ca025\""
Aug 28 08:25:41 node01 containerd[1324]: time="2024-08-28T08:25:41.968954306+08:00" level=info msg="CreateContainer within sandbox \"e24bbc6ff0b79a77611c365972315a418413050119592ecc682552e139fca0d4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:3,} returns container id \"68441f813731c6c6027a07ec2f52adc05d8bd8db32f9aed9be9d590a8d63cab8\""
Aug 28 08:25:41 node01 containerd[1324]: time="2024-08-28T08:25:41.969199024+08:00" level=info msg="StartContainer for \"68441f813731c6c6027a07ec2f52adc05d8bd8db32f9aed9be9d590a8d63cab8\""
Aug 28 08:25:41 node01 containerd[1324]: time="2024-08-28T08:25:41.969683687+08:00" level=info msg="CreateContainer within sandbox \"b1ba422c13ecdde67a15934b32fc6a2814ac18b6f7fddf5c1acc1019af6da0af\" for &ContainerMetadata{Name:etcd,Attempt:3,} returns container id \"6f5266a6e63c05032a194f2219bc972c6ab72239f22dd52d0e72ae555376ac71\""
Aug 28 08:25:41 node01 containerd[1324]: time="2024-08-28T08:25:41.969873618+08:00" level=info msg="StartContainer for \"6f5266a6e63c05032a194f2219bc972c6ab72239f22dd52d0e72ae555376ac71\""
Aug 28 08:25:42 node01 containerd[1324]: time="2024-08-28T08:25:42.069308340+08:00" level=info msg="StartContainer for \"7752747201017ce80e880f5d770b60d10f581ade2877c6825521bb5c2f1ca025\" returns successfully"
Aug 28 08:25:42 node01 containerd[1324]: time="2024-08-28T08:25:42.073293091+08:00" level=info msg="StartContainer for \"6f5266a6e63c05032a194f2219bc972c6ab72239f22dd52d0e72ae555376ac71\" returns successfully"
Aug 28 08:25:42 node01 containerd[1324]: time="2024-08-28T08:25:42.073302683+08:00" level=info msg="StartContainer for \"1da77b17e150e1f95e31c500220069fe74f872d28bf8a0b75eb3779d0bb6708a\" returns successfully"
Aug 28 08:25:42 node01 containerd[1324]: time="2024-08-28T08:25:42.076255369+08:00" level=info msg="StartContainer for \"68441f813731c6c6027a07ec2f52adc05d8bd8db32f9aed9be9d590a8d63cab8\" returns successfully"
Aug 28 08:25:52 node01 containerd[1324]: time="2024-08-28T08:25:52.004535885+08:00" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Aug 28 08:25:53 node01 containerd[1324]: time="2024-08-28T08:25:53.172665162+08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hdm26,Uid:72449ca3-f5b1-40ae-9ca5-ac1bffd2dd77,Namespace:kube-system,Attempt:0,}"
Aug 28 08:25:53 node01 containerd[1324]: time="2024-08-28T08:25:53.175821479+08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-b9bpv,Uid:ab136393-4be8-411b-a262-133e01a2fb7c,Namespace:kube-flannel,Attempt:0,}"
Aug 28 08:25:53 node01 containerd[1324]: time="2024-08-28T08:25:53.215327883+08:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 28 08:25:53 node01 containerd[1324]: time="2024-08-28T08:25:53.215443380+08:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 28 08:25:53 node01 containerd[1324]: time="2024-08-28T08:25:53.215474707+08:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 28 08:25:53 node01 containerd[1324]: time="2024-08-28T08:25:53.215689515+08:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 28 08:25:53 node01 containerd[1324]: time="2024-08-28T08:25:53.217128394+08:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 28 08:25:53 node01 containerd[1324]: time="2024-08-28T08:25:53.218005576+08:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 28 08:25:53 node01 containerd[1324]: time="2024-08-28T08:25:53.218035281+08:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 28 08:25:53 node01 containerd[1324]: time="2024-08-28T08:25:53.218188500+08:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 28 08:25:53 node01 containerd[1324]: time="2024-08-28T08:25:53.283608305+08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hdm26,Uid:72449ca3-f5b1-40ae-9ca5-ac1bffd2dd77,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ee7163a3c14b22003ab0cdbe2b4186af10bd314f5b176a1a39e193cb723efff\""
Aug 28 08:25:53 node01 containerd[1324]: time="2024-08-28T08:25:53.286688989+08:00" level=info msg="CreateContainer within sandbox \"9ee7163a3c14b22003ab0cdbe2b4186af10bd314f5b176a1a39e193cb723efff\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 28 08:25:53 node01 containerd[1324]: time="2024-08-28T08:25:53.297712001+08:00" level=info msg="CreateContainer within sandbox \"9ee7163a3c14b22003ab0cdbe2b4186af10bd314f5b176a1a39e193cb723efff\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"79cc2eab1b21761a85e83f4c1742e62c4d6ad2c47591fbc4374c54a21602f38f\""
Aug 28 08:25:53 node01 containerd[1324]: time="2024-08-28T08:25:53.298000672+08:00" level=info msg="StartContainer for \"79cc2eab1b21761a85e83f4c1742e62c4d6ad2c47591fbc4374c54a21602f38f\""
Aug 28 08:25:53 node01 containerd[1324]: time="2024-08-28T08:25:53.299945454+08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-b9bpv,Uid:ab136393-4be8-411b-a262-133e01a2fb7c,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"5e2686febb327a57c242e695394872b463959f8ae64db25eb967309ccd9ad70f\""
Aug 28 08:25:53 node01 containerd[1324]: time="2024-08-28T08:25:53.300851289+08:00" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.5.1-flannel2\""
Aug 28 08:25:53 node01 containerd[1324]: time="2024-08-28T08:25:53.339774520+08:00" level=info msg="StartContainer for \"79cc2eab1b21761a85e83f4c1742e62c4d6ad2c47591fbc4374c54a21602f38f\" returns successfully"
Aug 28 08:25:59 node01 containerd[1324]: time="2024-08-28T08:25:59.873880020+08:00" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.5.1-flannel2\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug 28 08:25:59 node01 containerd[1324]: time="2024-08-28T08:25:59.874669067+08:00" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.5.1-flannel2: active requests=0, bytes read=4850122"
Aug 28 08:25:59 node01 containerd[1324]: time="2024-08-28T08:25:59.875734068+08:00" level=info msg="ImageCreate event name:\"sha256:962fd97b50f9c6693b3e1ff4786c5643eef5eefafca5f43d3240ae8b87cbac2e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug 28 08:25:59 node01 containerd[1324]: time="2024-08-28T08:25:59.877792142+08:00" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:5d4fb9f90389a33b397fba4c8f371454c21aa146696aec46481214892e66c1b8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug 28 08:25:59 node01 containerd[1324]: time="2024-08-28T08:25:59.878436761+08:00" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.5.1-flannel2\" with image id \"sha256:962fd97b50f9c6693b3e1ff4786c5643eef5eefafca5f43d3240ae8b87cbac2e\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.5.1-flannel2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:5d4fb9f90389a33b397fba4c8f371454c21aa146696aec46481214892e66c1b8\", size \"4839240\" in 6.577553808s"
Aug 28 08:25:59 node01 containerd[1324]: time="2024-08-28T08:25:59.878456372+08:00" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.5.1-flannel2\" returns image reference \"sha256:962fd97b50f9c6693b3e1ff4786c5643eef5eefafca5f43d3240ae8b87cbac2e\""
Aug 28 08:25:59 node01 containerd[1324]: time="2024-08-28T08:25:59.879877959+08:00" level=info msg="CreateContainer within sandbox \"5e2686febb327a57c242e695394872b463959f8ae64db25eb967309ccd9ad70f\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Aug 28 08:25:59 node01 containerd[1324]: time="2024-08-28T08:25:59.887148143+08:00" level=info msg="CreateContainer within sandbox \"5e2686febb327a57c242e695394872b463959f8ae64db25eb967309ccd9ad70f\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"6d9e2d8196569f1dee690c3b2eb392ead8e3b8c2a2d6ef7aecc40e8edea27d52\""
Aug 28 08:25:59 node01 containerd[1324]: time="2024-08-28T08:25:59.887450140+08:00" level=info msg="StartContainer for \"6d9e2d8196569f1dee690c3b2eb392ead8e3b8c2a2d6ef7aecc40e8edea27d52\""
Aug 28 08:25:59 node01 containerd[1324]: time="2024-08-28T08:25:59.924046964+08:00" level=info msg="StartContainer for \"6d9e2d8196569f1dee690c3b2eb392ead8e3b8c2a2d6ef7aecc40e8edea27d52\" returns successfully"
Aug 28 08:26:00 node01 containerd[1324]: time="2024-08-28T08:26:00.018374200+08:00" level=info msg="shim disconnected" id=6d9e2d8196569f1dee690c3b2eb392ead8e3b8c2a2d6ef7aecc40e8edea27d52 namespace=k8s.io
Aug 28 08:26:00 node01 containerd[1324]: time="2024-08-28T08:26:00.018436174+08:00" level=warning msg="cleaning up after shim disconnected" id=6d9e2d8196569f1dee690c3b2eb392ead8e3b8c2a2d6ef7aecc40e8edea27d52 namespace=k8s.io
Aug 28 08:26:00 node01 containerd[1324]: time="2024-08-28T08:26:00.018454144+08:00" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 28 08:26:00 node01 containerd[1324]: time="2024-08-28T08:26:00.787139430+08:00" level=info msg="PullImage \"docker.io/flannel/flannel:v0.25.6\""
Aug 28 08:26:09 node01 containerd[1324]: time="2024-08-28T08:26:09.245922974+08:00" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.25.6\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug 28 08:26:09 node01 containerd[1324]: time="2024-08-28T08:26:09.246710526+08:00" level=info msg="stop pulling image docker.io/flannel/flannel:v0.25.6: active requests=0, bytes read=25206473"
Aug 28 08:26:09 node01 containerd[1324]: time="2024-08-28T08:26:09.247722221+08:00" level=info msg="ImageCreate event name:\"sha256:f7b837852a098666994810678aaa979b971cce77b32619aebafbbfd291214b7b\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug 28 08:26:09 node01 containerd[1324]: time="2024-08-28T08:26:09.250254687+08:00" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:ce2d6cb79949b33b4a9d527bce91f7da4b62a95a1bd4ea090bcdcde8840b6baf\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug 28 08:26:09 node01 containerd[1324]: time="2024-08-28T08:26:09.251241245+08:00" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.25.6\" with image id \"sha256:f7b837852a098666994810678aaa979b971cce77b32619aebafbbfd291214b7b\", repo tag \"docker.io/flannel/flannel:v0.25.6\", repo digest \"docker.io/flannel/flannel@sha256:ce2d6cb79949b33b4a9d527bce91f7da4b62a95a1bd4ea090bcdcde8840b6baf\", size \"28822026\" in 8.464074714s"
Aug 28 08:26:09 node01 containerd[1324]: time="2024-08-28T08:26:09.251258244+08:00" level=info msg="PullImage \"docker.io/flannel/flannel:v0.25.6\" returns image reference \"sha256:f7b837852a098666994810678aaa979b971cce77b32619aebafbbfd291214b7b\""
Aug 28 08:26:09 node01 containerd[1324]: time="2024-08-28T08:26:09.252906855+08:00" level=info msg="CreateContainer within sandbox \"5e2686febb327a57c242e695394872b463959f8ae64db25eb967309ccd9ad70f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Aug 28 08:26:09 node01 containerd[1324]: time="2024-08-28T08:26:09.260946724+08:00" level=info msg="CreateContainer within sandbox \"5e2686febb327a57c242e695394872b463959f8ae64db25eb967309ccd9ad70f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"178eaca89e265ffdd114fb42ec3077cf43dd3f1bd0a0af3edc2eab3cc49fec72\""
Aug 28 08:26:09 node01 containerd[1324]: time="2024-08-28T08:26:09.261320502+08:00" level=info msg="StartContainer for \"178eaca89e265ffdd114fb42ec3077cf43dd3f1bd0a0af3edc2eab3cc49fec72\""
Aug 28 08:26:09 node01 containerd[1324]: time="2024-08-28T08:26:09.339178612+08:00" level=info msg="StartContainer for \"178eaca89e265ffdd114fb42ec3077cf43dd3f1bd0a0af3edc2eab3cc49fec72\" returns successfully"
Aug 28 08:26:09 node01 containerd[1324]: time="2024-08-28T08:26:09.415049434+08:00" level=info msg="shim disconnected" id=178eaca89e265ffdd114fb42ec3077cf43dd3f1bd0a0af3edc2eab3cc49fec72 namespace=k8s.io
Aug 28 08:26:09 node01 containerd[1324]: time="2024-08-28T08:26:09.415104678+08:00" level=warning msg="cleaning up after shim disconnected" id=178eaca89e265ffdd114fb42ec3077cf43dd3f1bd0a0af3edc2eab3cc49fec72 namespace=k8s.io
Aug 28 08:26:09 node01 containerd[1324]: time="2024-08-28T08:26:09.415114916+08:00" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 28 08:26:09 node01 containerd[1324]: time="2024-08-28T08:26:09.815040875+08:00" level=info msg="CreateContainer within sandbox \"5e2686febb327a57c242e695394872b463959f8ae64db25eb967309ccd9ad70f\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Aug 28 08:26:09 node01 containerd[1324]: time="2024-08-28T08:26:09.832197214+08:00" level=info msg="CreateContainer within sandbox \"5e2686febb327a57c242e695394872b463959f8ae64db25eb967309ccd9ad70f\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"22089bd67ec5484df72b7d64b4a8c2085515fb9db9f3f478414096bf60b14e21\""
Aug 28 08:26:09 node01 containerd[1324]: time="2024-08-28T08:26:09.833280914+08:00" level=info msg="StartContainer for \"22089bd67ec5484df72b7d64b4a8c2085515fb9db9f3f478414096bf60b14e21\""
Aug 28 08:26:09 node01 containerd[1324]: time="2024-08-28T08:26:09.941232383+08:00" level=info msg="StartContainer for \"22089bd67ec5484df72b7d64b4a8c2085515fb9db9f3f478414096bf60b14e21\" returns successfully"
Aug 28 08:26:46 node01 containerd[1324]: time="2024-08-28T08:26:46.746265235+08:00" level=error msg="ContainerStatus for \"2613a5700abf3e1ea49f7fb5a0142a48de1110b673ea56e5dc3cc126292f58d7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2613a5700abf3e1ea49f7fb5a0142a48de1110b673ea56e5dc3cc126292f58d7\": not found"
Aug 28 08:26:46 node01 containerd[1324]: time="2024-08-28T08:26:46.746587354+08:00" level=error msg="ContainerStatus for \"bb3d115d18e798f2eb2240d47f2be2755b778ad3f479d6aba499b58ee4e3c97d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bb3d115d18e798f2eb2240d47f2be2755b778ad3f479d6aba499b58ee4e3c97d\": not found"
Aug 28 08:26:46 node01 containerd[1324]: time="2024-08-28T08:26:46.746852105+08:00" level=error msg="ContainerStatus for \"d49781e2e9ba0fb39c65ad0a3649b74a9e6e16bf17ed156bf3ced4a462ccedac\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d49781e2e9ba0fb39c65ad0a3649b74a9e6e16bf17ed156bf3ced4a462ccedac\": not found"
Aug 28 08:26:46 node01 containerd[1324]: time="2024-08-28T08:26:46.747212653+08:00" level=error msg="ContainerStatus for \"edddff3f88a37fc9f908f1773cfca8978e32989d7beef961319b86df70f0b800\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"edddff3f88a37fc9f908f1773cfca8978e32989d7beef961319b86df70f0b800\": not found"
zhangguanzhang commented 2 months ago
Aug 28 08:25:13 node01 containerd[1324]: time="2024-08-28T08:25:13.924230251+08:00" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE        \"/etc/cni/net.d\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"

...

Aug 28 08:25:52 node01 containerd[1324]: time="2024-08-28T08:25:52.004535885+08:00" level=info msg="No cni config template is specified, wait for other system components to drop the config."

The reason is here 😕. Try:

rm -f /etc/cni/net.d/*
kubectl -n kube-system delete pod -l k8s-app=flannel

wait about a minute, then:

kubectl get node

Or restart containerd.
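If the node still does not go Ready after the flannel pod is recreated, a minimal follow-up sketch (assuming a systemd-managed containerd and the default CNI conf dir /etc/cni/net.d) is to restart the runtime and check that it reloaded the config that flannel's install-cni container wrote back:

sudo systemctl restart containerd
ls -l /etc/cni/net.d/                                           # expect 10-flannel.conflist to be back
journalctl -u containerd --since "10 minutes ago" | grep -i cni  # look for the conf syncer picking it up
kubectl get nodes -o wide                                       # node should flip to Ready

The point is that containerd's CNI conf syncer can stay stuck after the config file is removed, so rewriting the file alone is sometimes not enough; a runtime restart forces a reload, which is what the logs above suggest happened here.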

James-Lu-none commented 2 months ago

wtf, restarting containerd solved the issue

James-Lu-none commented 2 months ago

Thank you guys for the help!! You made my day!!!

James-Lu-none commented 2 months ago

At Aug 28 16:00:17, I removed /etc/cni/net.d/10-flannel.conflist and the flannel pod; kubelet recreated the pod, but the node still wasn't Ready:

rm -f /etc/cni/net.d/*
kubectl -n kube-system delete pod -l k8s-app=flannel

At Aug 28 16:19:26, I ran

sudo systemctl restart containerd

At Aug 28 16:19:33, the CNI plugin configured the pod network, and the coredns containers were deployed and running.

I'll inspect these logs and try to figure out what happened, thank you!!
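For reference, a minimal sketch for confirming the recovery from the journal and the API server (assuming containerd logs to the systemd journal, as in the excerpts below):

journalctl -u containerd --since "16:19" | grep -E 'Start cni network conf syncer|RunPodSandbox.*coredns'
kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide   # coredns should now have pod IPs
kubectl get nodes                                             # control-plane should report Ready

The "Start cni network conf syncer for default" line and the coredns RunPodSandbox entries right after the restart are the signals that the runtime reloaded the CNI config and could finally set up pod networking.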

Aug 28 16:00:17 node01 containerd[1324]: time="2024-08-28T16:00:17.014427631+08:00" level=info msg="RemoveContainer for \"e05464004243acd5527105aa1e2d06ee37d47b4ee2c71f19a7fc602fe4fd713d\""
Aug 28 16:00:17 node01 containerd[1324]: time="2024-08-28T16:00:17.019904942+08:00" level=info msg="CreateContainer within sandbox \"99849c49986f7911a24457d727becdab2c0a7f278c42a1d2dca9030d02ba7382\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Aug 28 16:00:17 node01 containerd[1324]: time="2024-08-28T16:00:17.020232867+08:00" level=info msg="RemoveContainer for \"e05464004243acd5527105aa1e2d06ee37d47b4ee2c71f19a7fc602fe4fd713d\" returns successfully"
Aug 28 16:00:17 node01 containerd[1324]: time="2024-08-28T16:00:17.023166593+08:00" level=info msg="RemoveContainer for \"e1155518105669171bacd8f4d56753b5d02f24c6027e4c0911f3e75415a3cf35\""
Aug 28 16:00:17 node01 containerd[1324]: time="2024-08-28T16:00:17.028867292+08:00" level=info msg="RemoveContainer for \"e1155518105669171bacd8f4d56753b5d02f24c6027e4c0911f3e75415a3cf35\" returns successfully"
Aug 28 16:00:17 node01 containerd[1324]: time="2024-08-28T16:00:17.031637346+08:00" level=info msg="RemoveContainer for \"fdba61867f225126a32f477dd7b7ddcc483050bc7ce0d0ab27226db634d14e0f\""
Aug 28 16:00:17 node01 containerd[1324]: time="2024-08-28T16:00:17.037160733+08:00" level=info msg="CreateContainer within sandbox \"99849c49986f7911a24457d727becdab2c0a7f278c42a1d2dca9030d02ba7382\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"dddb2e391ff7da6442bd4e6a56b5a84f8bde5cd4b310a1a97b618233e9fc9295\""
Aug 28 16:00:17 node01 containerd[1324]: time="2024-08-28T16:00:17.037955310+08:00" level=info msg="StartContainer for \"dddb2e391ff7da6442bd4e6a56b5a84f8bde5cd4b310a1a97b618233e9fc9295\""
Aug 28 16:00:17 node01 containerd[1324]: time="2024-08-28T16:00:17.038470816+08:00" level=info msg="RemoveContainer for \"fdba61867f225126a32f477dd7b7ddcc483050bc7ce0d0ab27226db634d14e0f\" returns successfully"
Aug 28 16:00:17 node01 containerd[1324]: time="2024-08-28T16:00:17.039821846+08:00" level=error msg="ContainerStatus for \"e05464004243acd5527105aa1e2d06ee37d47b4ee2c71f19a7fc602fe4fd713d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e05464004243acd5527105aa1e2d06ee37d47b4ee2c71f19a7fc602fe4fd713d\": not found"
Aug 28 16:00:17 node01 containerd[1324]: time="2024-08-28T16:00:17.041550177+08:00" level=error msg="ContainerStatus for \"e1155518105669171bacd8f4d56753b5d02f24c6027e4c0911f3e75415a3cf35\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e1155518105669171bacd8f4d56753b5d02f24c6027e4c0911f3e75415a3cf35\": not found"
Aug 28 16:00:17 node01 containerd[1324]: time="2024-08-28T16:00:17.042566432+08:00" level=error msg="ContainerStatus for \"fdba61867f225126a32f477dd7b7ddcc483050bc7ce0d0ab27226db634d14e0f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fdba61867f225126a32f477dd7b7ddcc483050bc7ce0d0ab27226db634d14e0f\": not found"
Aug 28 16:00:17 node01 containerd[1324]: time="2024-08-28T16:00:17.103024708+08:00" level=info msg="StartContainer for \"dddb2e391ff7da6442bd4e6a56b5a84f8bde5cd4b310a1a97b618233e9fc9295\" returns successfully"
Aug 28 16:00:17 node01 containerd[1324]: time="2024-08-28T16:00:17.117021199+08:00" level=info msg="shim disconnected" id=dddb2e391ff7da6442bd4e6a56b5a84f8bde5cd4b310a1a97b618233e9fc9295 namespace=k8s.io
Aug 28 16:00:17 node01 containerd[1324]: time="2024-08-28T16:00:17.117062323+08:00" level=warning msg="cleaning up after shim disconnected" id=dddb2e391ff7da6442bd4e6a56b5a84f8bde5cd4b310a1a97b618233e9fc9295 namespace=k8s.io
Aug 28 16:00:17 node01 containerd[1324]: time="2024-08-28T16:00:17.117070874+08:00" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 28 16:00:18 node01 containerd[1324]: time="2024-08-28T16:00:18.026211259+08:00" level=info msg="CreateContainer within sandbox \"99849c49986f7911a24457d727becdab2c0a7f278c42a1d2dca9030d02ba7382\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Aug 28 16:00:18 node01 containerd[1324]: time="2024-08-28T16:00:18.043188256+08:00" level=info msg="CreateContainer within sandbox \"99849c49986f7911a24457d727becdab2c0a7f278c42a1d2dca9030d02ba7382\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"2c3abf0ba8ab5b0c1464d87660b867e2eb959d3a64b9cdff63de512507f15db4\""
Aug 28 16:00:18 node01 containerd[1324]: time="2024-08-28T16:00:18.044253595+08:00" level=info msg="StartContainer for \"2c3abf0ba8ab5b0c1464d87660b867e2eb959d3a64b9cdff63de512507f15db4\""
Aug 28 16:00:18 node01 containerd[1324]: time="2024-08-28T16:00:18.102853296+08:00" level=info msg="StartContainer for \"2c3abf0ba8ab5b0c1464d87660b867e2eb959d3a64b9cdff63de512507f15db4\" returns successfully"
Aug 28 16:00:48 node01 containerd[1324]: time="2024-08-28T16:00:48.485125712+08:00" level=info msg="StopPodSandbox for \"2c5abd29f3803c621a167b80b71c66f0aa757d57863967a0d2377164e1a7d15c\""
Aug 28 16:00:48 node01 containerd[1324]: time="2024-08-28T16:00:48.485351264+08:00" level=info msg="TearDown network for sandbox \"2c5abd29f3803c621a167b80b71c66f0aa757d57863967a0d2377164e1a7d15c\" successfully"
Aug 28 16:00:48 node01 containerd[1324]: time="2024-08-28T16:00:48.485387121+08:00" level=info msg="StopPodSandbox for \"2c5abd29f3803c621a167b80b71c66f0aa757d57863967a0d2377164e1a7d15c\" returns successfully"
Aug 28 16:00:48 node01 containerd[1324]: time="2024-08-28T16:00:48.486021507+08:00" level=info msg="RemovePodSandbox for \"2c5abd29f3803c621a167b80b71c66f0aa757d57863967a0d2377164e1a7d15c\""
Aug 28 16:00:48 node01 containerd[1324]: time="2024-08-28T16:00:48.486090523+08:00" level=info msg="Forcibly stopping sandbox \"2c5abd29f3803c621a167b80b71c66f0aa757d57863967a0d2377164e1a7d15c\""
Aug 28 16:00:48 node01 containerd[1324]: time="2024-08-28T16:00:48.486268640+08:00" level=info msg="TearDown network for sandbox \"2c5abd29f3803c621a167b80b71c66f0aa757d57863967a0d2377164e1a7d15c\" successfully"
Aug 28 16:00:48 node01 containerd[1324]: time="2024-08-28T16:00:48.490803261+08:00" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2c5abd29f3803c621a167b80b71c66f0aa757d57863967a0d2377164e1a7d15c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 28 16:00:48 node01 containerd[1324]: time="2024-08-28T16:00:48.490914339+08:00" level=info msg="RemovePodSandbox \"2c5abd29f3803c621a167b80b71c66f0aa757d57863967a0d2377164e1a7d15c\" returns successfully"
Aug 28 16:19:26 node01 systemd[1]: Stopping containerd.service - containerd container runtime...
░░ Subject: A stop job for unit containerd.service has begun execution
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ A stop job for unit containerd.service has begun execution.
░░ 
░░ The job identifier is 24453.
Aug 28 16:19:26 node01 containerd[1324]: time="2024-08-28T16:19:26.590487072+08:00" level=info msg="Stop CRI service"
Aug 28 16:19:26 node01 containerd[1324]: time="2024-08-28T16:19:26.590541530+08:00" level=info msg="Stop CRI service"
Aug 28 16:19:26 node01 containerd[1324]: time="2024-08-28T16:19:26.590578753+08:00" level=info msg="Event monitor stopped"
Aug 28 16:19:26 node01 containerd[1324]: time="2024-08-28T16:19:26.590592989+08:00" level=info msg="Stream server stopped"
Aug 28 16:19:26 node01 systemd[1]: containerd.service: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ The unit containerd.service has successfully entered the 'dead' state.
Aug 28 16:19:26 node01 systemd[1]: containerd.service: Unit process 5553 (containerd-shim) remains running after unit stopped.
Aug 28 16:19:26 node01 systemd[1]: containerd.service: Unit process 5554 (containerd-shim) remains running after unit stopped.
Aug 28 16:19:26 node01 systemd[1]: containerd.service: Unit process 5555 (containerd-shim) remains running after unit stopped.
Aug 28 16:19:26 node01 systemd[1]: containerd.service: Unit process 5556 (containerd-shim) remains running after unit stopped.
Aug 28 16:19:26 node01 systemd[1]: containerd.service: Unit process 6502 (containerd-shim) remains running after unit stopped.
Aug 28 16:19:26 node01 systemd[1]: containerd.service: Unit process 123757 (containerd-shim) remains running after unit stopped.
Aug 28 16:19:26 node01 systemd[1]: Stopped containerd.service - containerd container runtime.
░░ Subject: A stop job for unit containerd.service has finished
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ A stop job for unit containerd.service has finished.
░░ 
░░ The job identifier is 24453 and the job result is done.
Aug 28 16:19:26 node01 systemd[1]: containerd.service: Consumed 4min 8.174s CPU time, 258.0M memory peak, 0B memory swap peak.
░░ Subject: Resources consumed by unit runtime
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ The unit containerd.service completed and consumed the indicated resources.
Aug 28 16:19:26 node01 systemd[1]: containerd.service: Found left-over process 5553 (containerd-shim) in control group while starting unit. Ignoring.
Aug 28 16:19:26 node01 systemd[1]: containerd.service: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Aug 28 16:19:26 node01 systemd[1]: containerd.service: Found left-over process 5554 (containerd-shim) in control group while starting unit. Ignoring.
Aug 28 16:19:26 node01 systemd[1]: containerd.service: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Aug 28 16:19:26 node01 systemd[1]: containerd.service: Found left-over process 5555 (containerd-shim) in control group while starting unit. Ignoring.
Aug 28 16:19:26 node01 systemd[1]: containerd.service: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Aug 28 16:19:26 node01 systemd[1]: containerd.service: Found left-over process 5556 (containerd-shim) in control group while starting unit. Ignoring.
Aug 28 16:19:26 node01 systemd[1]: containerd.service: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Aug 28 16:19:26 node01 systemd[1]: containerd.service: Found left-over process 6502 (containerd-shim) in control group while starting unit. Ignoring.
Aug 28 16:19:26 node01 systemd[1]: containerd.service: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Aug 28 16:19:26 node01 systemd[1]: containerd.service: Found left-over process 123757 (containerd-shim) in control group while starting unit. Ignoring.
Aug 28 16:19:26 node01 systemd[1]: containerd.service: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Aug 28 16:19:26 node01 systemd[1]: Starting containerd.service - containerd container runtime...
░░ Subject: A start job for unit containerd.service has begun execution
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ A start job for unit containerd.service has begun execution.
░░ 
░░ The job identifier is 24453.
Aug 28 16:19:26 node01 systemd[1]: containerd.service: Found left-over process 5553 (containerd-shim) in control group while starting unit. Ignoring.
Aug 28 16:19:26 node01 systemd[1]: containerd.service: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Aug 28 16:19:26 node01 systemd[1]: containerd.service: Found left-over process 5554 (containerd-shim) in control group while starting unit. Ignoring.
Aug 28 16:19:26 node01 systemd[1]: containerd.service: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Aug 28 16:19:26 node01 systemd[1]: containerd.service: Found left-over process 5555 (containerd-shim) in control group while starting unit. Ignoring.
Aug 28 16:19:26 node01 systemd[1]: containerd.service: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Aug 28 16:19:26 node01 systemd[1]: containerd.service: Found left-over process 5556 (containerd-shim) in control group while starting unit. Ignoring.
Aug 28 16:19:26 node01 systemd[1]: containerd.service: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Aug 28 16:19:26 node01 systemd[1]: containerd.service: Found left-over process 6502 (containerd-shim) in control group while starting unit. Ignoring.
Aug 28 16:19:26 node01 systemd[1]: containerd.service: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Aug 28 16:19:26 node01 systemd[1]: containerd.service: Found left-over process 123757 (containerd-shim) in control group while starting unit. Ignoring.
Aug 28 16:19:26 node01 systemd[1]: containerd.service: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.632567374+08:00" level=info msg="starting containerd" revision=472731909fa34bd7bc9c087e4c27943f9835f111 version=v1.7.21
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.654854538+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.656824520+08:00" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.8.0-41-generic\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.656854697+08:00" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.656871378+08:00" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.656905184+08:00" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.656919228+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.656950962+08:00" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.656962516+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.657286418+08:00" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.657303087+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.657349538+08:00" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.657361967+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.657387255+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.657487949+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.657774291+08:00" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.657790116+08:00" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.657808695+08:00" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.657829487+08:00" level=info msg="metadata content store policy set" policy=shared
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.658587497+08:00" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.658645332+08:00" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.658663376+08:00" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.658678498+08:00" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.658699621+08:00" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.658759036+08:00" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.659016090+08:00" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.729281301+08:00" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.729311130+08:00" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.729337088+08:00" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.729354665+08:00" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.729368094+08:00" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.729380760+08:00" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.729409636+08:00" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.729425530+08:00" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.729460190+08:00" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.729479103+08:00" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.729492645+08:00" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.729532863+08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.729547873+08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.729559911+08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.729573893+08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.729590696+08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.729604997+08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.729626877+08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.729640236+08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.729652743+08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.729670786+08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.729684243+08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.729696155+08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.729708023+08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.729720901+08:00" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.729756158+08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.729771800+08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.729783281+08:00" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.729837398+08:00" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.729871383+08:00" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.729883560+08:00" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.729901183+08:00" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.729917950+08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.729938186+08:00" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.729983064+08:00" level=info msg="NRI interface is disabled by configuration."
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.730003609+08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.730920656+08:00" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[BinaryName: CriuImagePath: CriuPath: CriuWorkPath: IoGid:0 IoUid:0 NoNewKeyring:false NoPivotRoot:false Root: ShimCgroup: SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.731137155+08:00" level=info msg="Connect containerd service"
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.731173047+08:00" level=info msg="using legacy CRI server"
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.731185288+08:00" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.731293112+08:00" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.731850153+08:00" level=info msg="Start subscribing containerd event"
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.731907565+08:00" level=info msg="Start recovering state"
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.731925623+08:00" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.731972527+08:00" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.779692236+08:00" level=info msg="Start event monitor"
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.779718998+08:00" level=info msg="Start snapshots syncer"
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.779729534+08:00" level=info msg="Start cni network conf syncer for default"
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.779736425+08:00" level=info msg="Start streaming server"
Aug 28 16:19:26 node01 containerd[146456]: time="2024-08-28T16:19:26.779794015+08:00" level=info msg="containerd successfully booted in 0.149340s"
Aug 28 16:19:26 node01 systemd[1]: Started containerd.service - containerd container runtime.
░░ Subject: A start job for unit containerd.service has finished successfully
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ A start job for unit containerd.service has finished successfully.
░░ 
░░ The job identifier is 24453.
Aug 28 16:19:33 node01 containerd[146456]: time="2024-08-28T16:19:33.081797526+08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-b5sj9,Uid:da32f7c0-cfce-4869-a493-2c5ebf9cecb7,Namespace:kube-system,Attempt:0,}"
Aug 28 16:19:33 node01 containerd[146456]: time="2024-08-28T16:19:33.083728599+08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jzqz8,Uid:5537d490-4370-4e43-b6d8-3ea58fe11501,Namespace:kube-system,Attempt:0,}"
Aug 28 16:19:33 node01 containerd[146456]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"10.244.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xa, 0xf4, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x0, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000a4950), "name":"cbr0", "type":"bridge"}
Aug 28 16:19:33 node01 containerd[146456]: delegateAdd: netconf sent to delegate plugin:
Aug 28 16:19:33 node01 containerd[146456]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"10.244.0.0/24"}]],"routes":[{"dst":"10.244.0.0/16"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}
Aug 28 16:19:33 node01 containerd[146456]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"10.244.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xa, 0xf4, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x0, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000129b0), "name":"cbr0", "type":"bridge"}
Aug 28 16:19:33 node01 containerd[146456]: delegateAdd: netconf sent to delegate plugin:
Aug 28 16:19:33 node01 containerd[146456]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"10.244.0.0/24"}]],"routes":[{"dst":"10.244.0.0/16"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-08-28T16:19:33.174422440+08:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 28 16:19:33 node01 containerd[146456]: time="2024-08-28T16:19:33.174457527+08:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 28 16:19:33 node01 containerd[146456]: time="2024-08-28T16:19:33.174465198+08:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 28 16:19:33 node01 containerd[146456]: time="2024-08-28T16:19:33.174514327+08:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 28 16:19:33 node01 containerd[146456]: time="2024-08-28T16:19:33.174868387+08:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 28 16:19:33 node01 containerd[146456]: time="2024-08-28T16:19:33.174901004+08:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 28 16:19:33 node01 containerd[146456]: time="2024-08-28T16:19:33.174924832+08:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 28 16:19:33 node01 containerd[146456]: time="2024-08-28T16:19:33.174978872+08:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 28 16:19:33 node01 containerd[146456]: time="2024-08-28T16:19:33.213343683+08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jzqz8,Uid:5537d490-4370-4e43-b6d8-3ea58fe11501,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6429a4d7483edbb6ea628e9c7ad1370b36c25685fb2827abba7fae5d232175e\""
Aug 28 16:19:33 node01 containerd[146456]: time="2024-08-28T16:19:33.213634321+08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-b5sj9,Uid:da32f7c0-cfce-4869-a493-2c5ebf9cecb7,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a679f1f9195e3fa1f134f699fdfd4954805b323af60a7a0d3915c2b511a6c0d\""
Aug 28 16:19:33 node01 containerd[146456]: time="2024-08-28T16:19:33.214835203+08:00" level=info msg="CreateContainer within sandbox \"7a679f1f9195e3fa1f134f699fdfd4954805b323af60a7a0d3915c2b511a6c0d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug 28 16:19:33 node01 containerd[146456]: time="2024-08-28T16:19:33.214843049+08:00" level=info msg="CreateContainer within sandbox \"f6429a4d7483edbb6ea628e9c7ad1370b36c25685fb2827abba7fae5d232175e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug 28 16:19:33 node01 containerd[146456]: time="2024-08-28T16:19:33.226620032+08:00" level=info msg="CreateContainer within sandbox \"7a679f1f9195e3fa1f134f699fdfd4954805b323af60a7a0d3915c2b511a6c0d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f1d0e5dbcf5d42b15d5e99b81d8f0fc80b280dcda1f9ca0342b6c8e8d9d29d33\""
Aug 28 16:19:33 node01 containerd[146456]: time="2024-08-28T16:19:33.227050326+08:00" level=info msg="StartContainer for \"f1d0e5dbcf5d42b15d5e99b81d8f0fc80b280dcda1f9ca0342b6c8e8d9d29d33\""
Aug 28 16:19:33 node01 containerd[146456]: time="2024-08-28T16:19:33.229307053+08:00" level=info msg="CreateContainer within sandbox \"f6429a4d7483edbb6ea628e9c7ad1370b36c25685fb2827abba7fae5d232175e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1345f0bc41d656018661b08ebf726a7a362c64a08a4729bb18d1cd99f9f328de\""
Aug 28 16:19:33 node01 containerd[146456]: time="2024-08-28T16:19:33.229646806+08:00" level=info msg="StartContainer for \"1345f0bc41d656018661b08ebf726a7a362c64a08a4729bb18d1cd99f9f328de\""
Aug 28 16:19:33 node01 containerd[146456]: time="2024-08-28T16:19:33.261430348+08:00" level=info msg="StartContainer for \"f1d0e5dbcf5d42b15d5e99b81d8f0fc80b280dcda1f9ca0342b6c8e8d9d29d33\" returns successfully"
Aug 28 16:19:33 node01 containerd[146456]: time="2024-08-28T16:19:33.261430386+08:00" level=info msg="StartContainer for \"1345f0bc41d656018661b08ebf726a7a362c64a08a4729bb18d1cd99f9f328de\" returns successfully"
zhangguanzhang commented 2 months ago

I think you should report this issue to the containerd repository

James-Lu-none commented 2 months ago

ok!