k3s-io / k3s

Lightweight Kubernetes
https://k3s.io
Apache License 2.0

Flannel setup failed /run/flannel/subnet.env: no such file or directory #7028

Closed: yinheli closed this issue 1 year ago

yinheli commented 1 year ago

Environmental Info:

K3s Version:

k3s version v1.24.10+k3s1 (546a94e9)
go version go1.19.5

Node(s) CPU architecture, OS, and Version:

Linux k8s-lab-1 5.10.0-18-amd64 #1 SMP Debian 5.10.140-1 (2022-09-02) x86_64 GNU/Linux

Cluster Configuration:

1 server, 2 agents

Describe the bug:

Network setup failed on one of the agents: no flannel device was created.

Steps To Reproduce:

Server:

curl --retry 10 --retry-delay 0 --retry-all-errors -sfL https://get.k3s.io | \
  sed 's/curl -o/curl --retry 10 --retry-delay 0 --connect-timeout 5 --retry-all-errors -o/g' | \
  INSTALL_K3S_VERSION="v1.24.10+k3s1" \
  INSTALL_K3S_EXEC="--disable traefik,servicelb --flannel-backend=wireguard-native" \
  sh -s - \
   --token test \
   --cluster-init \
   --cluster-cidr=10.180.0.0/23 \
   --service-cidr=10.180.2.0/23

Agent:

curl --retry 10 --retry-delay 0 --retry-all-errors -sfL https://get.k3s.io | \
  sed 's/curl -o/curl --retry 10 --retry-delay 0 --connect-timeout 5 --retry-all-errors -o/g' | \
  INSTALL_K3S_VERSION="v1.24.10+k3s1" \
  sh -s - \
   agent --server https://192.168.4.84:6443 \
   --token test 
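
Note that the agent log below warns that this short token does not include a CA hash, so Cluster CA validation is skipped. If validation is wanted, the full token can be read from the server:

sudo cat /var/lib/rancher/k3s/server/node-token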

Flannel network setup failed on one node.
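
The flannel CNI plugin reads a subnet file that flannel writes only after the node has been assigned a pod CIDR, so a quick check on each node is:

cat /run/flannel/subnet.env
ip -d link show flannel-wg

Below, the healthy node k8s-lab-1 has cni0 and flannel-wg interfaces, while the failing node k8s-lab-3 has neither: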

root@k8s-lab-1:~# ifconfig
cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1420
        inet 10.180.1.1  netmask 255.255.255.0  broadcast 10.180.1.255
        inet6 fe80::ecdb:77ff:fe7b:46ed  prefixlen 64  scopeid 0x20<link>
        ether ee:db:77:7b:46:ed  txqueuelen 1000  (Ethernet)
        RX packets 78  bytes 6134 (5.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 76  bytes 12237 (11.9 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens18: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.4.84  netmask 255.255.255.0  broadcast 192.168.4.255
        inet6 fe80::74e9:7fff:fe05:3526  prefixlen 64  scopeid 0x20<link>
        ether 76:e9:7f:05:35:26  txqueuelen 1000  (Ethernet)
        RX packets 465261  bytes 816571901 (778.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 269836  bytes 104771151 (99.9 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel-wg: flags=209<UP,POINTOPOINT,RUNNING,NOARP>  mtu 1420
        inet 10.180.1.0  netmask 255.255.255.255  destination 10.180.1.0
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 0  (UNSPEC)
        RX packets 459  bytes 160800 (157.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 522  bytes 73444 (71.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 957196  bytes 363405547 (346.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 957196  bytes 363405547 (346.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
root@k8s-lab-3:~# ifconfig
ens18: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.4.86  netmask 255.255.255.0  broadcast 192.168.4.255
        inet6 fe80::18d5:bfff:fe11:4734  prefixlen 64  scopeid 0x20<link>
        ether 1a:d5:bf:11:47:34  txqueuelen 1000  (Ethernet)
        RX packets 270870  bytes 502115562 (478.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 174176  bytes 18936293 (18.0 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 112897  bytes 31791807 (30.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 112897  bytes 31791807 (30.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Expected behavior:

Flannel comes up on every node: each node gets a flannel-wg interface and pods can start.

Actual behavior:

On k8s-lab-3 no flannel device is created, and pod sandbox creation fails with "open /run/flannel/subnet.env: no such file or directory".

Additional context / logs:

Mar 07 18:15:48 k8s-lab-3 systemd[1]: Starting Lightweight Kubernetes...
Mar 07 18:15:48 k8s-lab-3 sh[40428]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service
Mar 07 18:15:48 k8s-lab-3 sh[40429]: Failed to get unit file state for nm-cloud-setup.service: No such file or directory
Mar 07 18:15:49 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:49+08:00" level=info msg="Starting k3s agent v1.24.10+k3s1 (546a94e9)"
Mar 07 18:15:49 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:49+08:00" level=info msg="Running load balancer k3s-agent-load-balancer 127.0.0.1:6444 -> [192.168.4.84:6443]"
Mar 07 18:15:49 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:49+08:00" level=warning msg="Cluster CA certificate is not trusted by the host CA bundle, but the token does not include a CA hash. Use the full token from the server's node-token file to enable Cluster CA validation."
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50+08:00" level=info msg="Module overlay was already loaded"
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50+08:00" level=info msg="Module nf_conntrack was already loaded"
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50+08:00" level=info msg="Module br_netfilter was already loaded"
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50+08:00" level=info msg="Module iptable_nat was already loaded"
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50+08:00" level=info msg="Module iptable_filter was already loaded"
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50+08:00" level=debug msg="getConntrackMax: using scaled conntrack-max-per-core"
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50+08:00" level=debug msg="Creating the CNI conf in directory /var/lib/rancher/k3s/agent/etc/cni/net.d"
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50+08:00" level=debug msg="Creating the flannel configuration for backend wireguard-native in file /var/lib/rancher/k3s/agent/etc/flannel/net-conf.json"
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50+08:00" level=debug msg="The flannel configuration is {\n\t\"Network\": \"10.180.0.0/23\",\n\t\"EnableIPv6\": false,\n\t\"EnableIPv4\": true,\n\t\"IPv6Network\": \"::/0\",\n\t\"Backend\": {\n\t\"Type\": \"wireguard\",\n\t\"PersistentKeepaliveInterval\": 25,\n\t\"Mode\": \"separate\"\n}\n}\n"
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50+08:00" level=debug msg="Searching for nvidia container runtime at /usr/local/nvidia/toolkit/nvidia-container-runtime"
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50+08:00" level=debug msg="nvidia container runtime not found at /usr/local/nvidia/toolkit/nvidia-container-runtime"
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50+08:00" level=debug msg="Searching for nvidia container runtime at /usr/bin/nvidia-container-runtime"
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50+08:00" level=debug msg="nvidia container runtime not found at /usr/bin/nvidia-container-runtime"
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50+08:00" level=debug msg="Searching for nvidia-experimental container runtime at /usr/local/nvidia/toolkit/nvidia-container-runtime-experimental"
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50+08:00" level=debug msg="nvidia-experimental container runtime not found at /usr/local/nvidia/toolkit/nvidia-container-runtime-experimental"
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50+08:00" level=debug msg="Searching for nvidia-experimental container runtime at /usr/bin/nvidia-container-runtime-experimental"
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50+08:00" level=debug msg="nvidia-experimental container runtime not found at /usr/bin/nvidia-container-runtime-experimental"
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50+08:00" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50+08:00" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.199568541+08:00" level=info msg="starting containerd" revision= version=v1.6.15-k3s1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.258215399+08:00" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.258438696+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.265957923+08:00" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.0-18-amd64\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.266180263+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.266603530+08:00" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.btrfs (xfs) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.266697636+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.266754420+08:00" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.266796927+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.266895260+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.267437250+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.fuse-overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.267635157+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.stargz\"..." type=io.containerd.snapshotter.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.269369431+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.269751891+08:00" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.269841844+08:00" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.269917744+08:00" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.269997304+08:00" level=info msg="metadata content store policy set" policy=shared
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.270296845+08:00" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.270392982+08:00" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.270450355+08:00" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.270577025+08:00" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.270642515+08:00" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.270696405+08:00" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.270748915+08:00" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.270807578+08:00" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.270859235+08:00" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.270909992+08:00" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.270959052+08:00" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.271012905+08:00" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.271221789+08:00" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.271429378+08:00" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.272629019+08:00" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.272873410+08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.272983959+08:00" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.273388457+08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.273530017+08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.273636943+08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.273738786+08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.274067050+08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.274366093+08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.274502077+08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.274598990+08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.274776591+08:00" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.275211400+08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.275353874+08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.275457764+08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.275587398+08:00" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.275696824+08:00" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.275811198+08:00" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.275936338+08:00" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.276232958+08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.277369575+08:00" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/var/lib/rancher/k3s/data/4cdfcad9f220e885cbc32cf86c6cb0d26b496e3949efb0aa33fb37692e11d521/bin NetworkPluginConfDir:/var/lib/rancher/k3s/agent/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:10010 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:rancher/mirrored-pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:true EnableUnprivilegedICMP:true} ContainerdRootDir:/var/lib/rancher/k3s/agent/containerd ContainerdEndpoint:/run/k3s/containerd/containerd.sock RootDir:/var/lib/rancher/k3s/agent/containerd/io.containerd.grpc.v1.cri StateDir:/run/k3s/containerd/io.containerd.grpc.v1.cri}"
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.277931055+08:00" level=info msg="Connect containerd service"
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.278144808+08:00" level=info msg="Get image filesystem path \"/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.281065367+08:00" level=info msg="Start subscribing containerd event"
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.281262967+08:00" level=info msg="Start recovering state"
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.281854337+08:00" level=info msg=serving... address=/run/k3s/containerd/containerd.sock.ttrpc
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.282225947+08:00" level=info msg=serving... address=/run/k3s/containerd/containerd.sock
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.282390048+08:00" level=info msg="containerd successfully booted in 0.084876s"
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.301502086+08:00" level=info msg="Start event monitor"
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.301684820+08:00" level=info msg="Start snapshots syncer"
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.301760183+08:00" level=info msg="Start cni network conf syncer for default"
Mar 07 18:15:50 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:50.301819813+08:00" level=info msg="Start streaming server"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:51+08:00" level=info msg="Containerd is now running"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:51+08:00" level=info msg="Getting list of apiserver endpoints from server"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:51+08:00" level=info msg="Updating load balancer k3s-agent-load-balancer default server address -> 192.168.4.84:6443"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:51+08:00" level=info msg="Connecting to proxy" url="wss://192.168.4.84:6443/v1-k3s/connect"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:51+08:00" level=debug msg="Kubelet image credential provider bin directory check failed: stat /var/lib/rancher/credentialprovider/bin: no such file or directory"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:51+08:00" level=info msg="Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.180.2.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=k8s-lab-3 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --node-labels= --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/etc/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: Flag --cloud-provider has been deprecated, will be removed in 1.24 or later, in favor of removing cloud provider code from Kubelet.
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.176338   40432 server.go:192] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:51+08:00" level=info msg="Tunnel authorizer set Kubelet Port 10250"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:51+08:00" level=info msg="Running kube-proxy --cluster-cidr=10.180.0.0/23 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=k8s-lab-3 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.192629   40432 server.go:230] "Warning, all flags other than --config, --write-config-to, and --cleanup are deprecated, please begin using a config file ASAP"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:51+08:00" level=info msg="Annotations and labels have been set successfully on node: k8s-lab-3"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:51+08:00" level=info msg="Starting flannel with backend wireguard-native"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.239486   40432 server.go:395] "Kubelet version" kubeletVersion="v1.24.10+k3s1"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.239642   40432 server.go:397] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.245241   40432 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: W0307 18:15:51.245366   40432 manager.go:159] Cannot detect current cgroup on cgroup v2
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.262640   40432 node.go:163] Successfully retrieved node IP: 192.168.4.86
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.262769   40432 server_others.go:138] "Detected node IP" address="192.168.4.86"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.280999   40432 server_others.go:206] "Using iptables Proxier"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.281179   40432 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.281230   40432 server_others.go:214] "Creating dualStackProxier for iptables"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.281313   40432 server_others.go:512] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.281438   40432 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.281838   40432 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.281887   40432 server.go:644] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.282473   40432 server.go:661] "Version info" version="v1.24.10+k3s1"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.282533   40432 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.283228   40432 container_manager_linux.go:262] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.283484   40432 container_manager_linux.go:267] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.283635   40432 topology_manager.go:133] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.283701   40432 container_manager_linux.go:302] "Creating device plugin manager" devicePluginEnabled=true
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.283893   40432 state_mem.go:36] "Initialized new in-memory state store"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.314820   40432 config.go:317] "Starting service config controller"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.314958   40432 shared_informer.go:255] Waiting for caches to sync for service config
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.315147   40432 config.go:226] "Starting endpoint slice config controller"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.315211   40432 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.317793   40432 config.go:444] "Starting node config controller"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.317917   40432 shared_informer.go:255] Waiting for caches to sync for node config
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.332471   40432 kubelet.go:376] "Attempting to sync node with API server"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.332608   40432 kubelet.go:267] "Adding static pod path" path="/var/lib/rancher/k3s/agent/pod-manifests"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.332771   40432 kubelet.go:278] "Adding apiserver pod source"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.332904   40432 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.334868   40432 kuberuntime_manager.go:239] "Container runtime initialized" containerRuntime="containerd" version="v1.6.15-k3s1" apiVersion="v1"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.337358   40432 server.go:1179] "Started kubelet"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.337739   40432 server.go:150] "Starting to listen" address="0.0.0.0" port=10250
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: E0307 18:15:51.339116   40432 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: E0307 18:15:51.339305   40432 kubelet.go:1298] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.339924   40432 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.345663   40432 server.go:410] "Adding debug handlers to kubelet server"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.346065   40432 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.346730   40432 volume_manager.go:292] "Starting Kubelet Volume Manager"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:51+08:00" level=debug msg="cgroupv2 io stats: skipping over unmappable dbytes=0 entry"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:51+08:00" level=debug msg="cgroupv2 io stats: skipping over unmappable dios=0 entry"
[the two debug lines above repeat several more times]
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.415891   40432 shared_informer.go:262] Caches are synced for endpoint slice config
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.415987   40432 shared_informer.go:262] Caches are synced for service config
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.418099   40432 shared_informer.go:262] Caches are synced for node config
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:51+08:00" level=debug msg="cgroupv2 io stats: skipping over unmappable dbytes=0 entry"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:51+08:00" level=debug msg="cgroupv2 io stats: skipping over unmappable dios=0 entry"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:51+08:00" level=debug msg="cgroupv2 io stats: skipping over unmappable dbytes=0 entry"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:51+08:00" level=debug msg="cgroupv2 io stats: skipping over unmappable dios=0 entry"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.444094   40432 kubelet_node_status.go:70] "Attempting to register node" node="k8s-lab-3"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.445374   40432 kubelet_network_linux.go:76] "Initialized protocol iptables rules." protocol=IPv4
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.450777   40432 cpu_manager.go:213] "Starting CPU manager" policy="none"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.450861   40432 cpu_manager.go:214] "Reconciling" reconcilePeriod="10s"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.450973   40432 state_mem.go:36] "Initialized new in-memory state store"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.452077   40432 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.452191   40432 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.452259   40432 policy_none.go:49] "None policy: Start"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.455514   40432 memory_manager.go:168] "Starting memorymanager" policy="None"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.455734   40432 state_mem.go:35] "Initializing new in-memory state store"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.456416   40432 state_mem.go:75] "Updated machine memory state"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.468192   40432 kubelet_node_status.go:108] "Node was previously registered" node="k8s-lab-3"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.468642   40432 kubelet_node_status.go:73] "Successfully registered node" node="k8s-lab-3"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.475935   40432 setters.go:532] "Node became not ready" node="k8s-lab-3" condition={Type:Ready Status:False LastHeartbeatTime:2023-03-07 18:15:51.475718236 +0800 CST m=+2.630194419 LastTransitionTime:2023-03-07 18:15:51.475718236 +0800 CST m=+2.630194419 Reason:KubeletNotReady Message:[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]}
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.504043   40432 manager.go:611] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.505609   40432 plugin_manager.go:114] "Starting Kubelet Plugin Manager"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:51+08:00" level=debug msg="cgroupv2 io stats: skipping over unmappable dbytes=0 entry"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:51+08:00" level=debug msg="cgroupv2 io stats: skipping over unmappable dios=0 entry"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:51+08:00" level=debug msg="cgroupv2 io stats: skipping over unmappable dbytes=0 entry"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:51+08:00" level=debug msg="cgroupv2 io stats: skipping over unmappable dios=0 entry"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.527302   40432 kubelet_network_linux.go:76] "Initialized protocol iptables rules." protocol=IPv6
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.527411   40432 status_manager.go:161] "Starting to sync pod status with apiserver"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: I0307 18:15:51.527494   40432 kubelet.go:1989] "Starting kubelet main sync loop"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: E0307 18:15:51.527727   40432 kubelet.go:2013] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:51+08:00" level=debug msg="cgroupv2 io stats: skipping over unmappable dbytes=0 entry"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:51+08:00" level=debug msg="cgroupv2 io stats: skipping over unmappable dios=0 entry"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:51+08:00" level=debug msg="cgroupv2 io stats: skipping over unmappable dbytes=0 entry"
Mar 07 18:15:51 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:51+08:00" level=debug msg="cgroupv2 io stats: skipping over unmappable dios=0 entry"
Mar 07 18:15:52 k8s-lab-3 k3s[40432]: I0307 18:15:52.334280   40432 apiserver.go:52] "Watching apiserver"
Mar 07 18:15:52 k8s-lab-3 k3s[40432]: I0307 18:15:52.344259   40432 topology_manager.go:200] "Topology Admit Handler"
Mar 07 18:15:52 k8s-lab-3 k3s[40432]: I0307 18:15:52.355327   40432 reconciler.go:352] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/80450dd3-f489-4e94-a9f1-d34344ddbe42-config-volume\") pod \"coredns-7b5bbc6644-8x248\" (UID: \"80450dd3-f489-4e94-a9f1-d34344ddbe42\") " pod="kube-system/coredns-7b5bbc6644-8x248"
Mar 07 18:15:52 k8s-lab-3 k3s[40432]: I0307 18:15:52.355568   40432 reconciler.go:352] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-config-volume\" (UniqueName: \"kubernetes.io/configmap/80450dd3-f489-4e94-a9f1-d34344ddbe42-custom-config-volume\") pod \"coredns-7b5bbc6644-8x248\" (UID: \"80450dd3-f489-4e94-a9f1-d34344ddbe42\") " pod="kube-system/coredns-7b5bbc6644-8x248"
Mar 07 18:15:52 k8s-lab-3 k3s[40432]: I0307 18:15:52.355683   40432 reconciler.go:352] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zczsp\" (UniqueName: \"kubernetes.io/projected/80450dd3-f489-4e94-a9f1-d34344ddbe42-kube-api-access-zczsp\") pod \"coredns-7b5bbc6644-8x248\" (UID: \"80450dd3-f489-4e94-a9f1-d34344ddbe42\") " pod="kube-system/coredns-7b5bbc6644-8x248"
Mar 07 18:15:52 k8s-lab-3 k3s[40432]: I0307 18:15:52.355735   40432 reconciler.go:169] "Reconciler: start to sync state"
Mar 07 18:15:52 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:52.694061722+08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7b5bbc6644-8x248,Uid:80450dd3-f489-4e94-a9f1-d34344ddbe42,Namespace:kube-system,Attempt:0,}"
Mar 07 18:15:52 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:52.769785348+08:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7b5bbc6644-8x248,Uid:80450dd3-f489-4e94-a9f1-d34344ddbe42,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ff1642d30455e9d190eb1cdeed68fe3856eae7ff18bd58e4fdef8665cbfe73d4\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory"
Mar 07 18:15:52 k8s-lab-3 k3s[40432]: E0307 18:15:52.771019   40432 remote_runtime.go:201] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff1642d30455e9d190eb1cdeed68fe3856eae7ff18bd58e4fdef8665cbfe73d4\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory"
Mar 07 18:15:52 k8s-lab-3 k3s[40432]: E0307 18:15:52.771326   40432 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff1642d30455e9d190eb1cdeed68fe3856eae7ff18bd58e4fdef8665cbfe73d4\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7b5bbc6644-8x248"
Mar 07 18:15:52 k8s-lab-3 k3s[40432]: E0307 18:15:52.771465   40432 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff1642d30455e9d190eb1cdeed68fe3856eae7ff18bd58e4fdef8665cbfe73d4\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7b5bbc6644-8x248"
Mar 07 18:15:52 k8s-lab-3 k3s[40432]: E0307 18:15:52.771778   40432 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7b5bbc6644-8x248_kube-system(80450dd3-f489-4e94-a9f1-d34344ddbe42)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7b5bbc6644-8x248_kube-system(80450dd3-f489-4e94-a9f1-d34344ddbe42)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff1642d30455e9d190eb1cdeed68fe3856eae7ff18bd58e4fdef8665cbfe73d4\\\": plugin type=\\\"flannel\\\" failed (add): open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7b5bbc6644-8x248" podUID=80450dd3-f489-4e94-a9f1-d34344ddbe42
Mar 07 18:15:56 k8s-lab-3 k3s[40432]: time="2023-03-07T18:15:56+08:00" level=debug msg="Wrote ping"
Mar 07 18:16:01 k8s-lab-3 k3s[40432]: time="2023-03-07T18:16:01+08:00" level=debug msg="Wrote ping"
Mar 07 18:16:01 k8s-lab-3 k3s[40432]: time="2023-03-07T18:16:01+08:00" level=debug msg="cgroupv2 io stats: skipping over unmappable dbytes=0 entry"
Mar 07 18:16:01 k8s-lab-3 k3s[40432]: time="2023-03-07T18:16:01+08:00" level=debug msg="cgroupv2 io stats: skipping over unmappable dios=0 entry"
[the two debug lines above repeat through 18:16:03]
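
The key error is the flannel CNI plugin failing with "open /run/flannel/subnet.env: no such file or directory". That file is generated by flannel once the node has a pod subnet; on the healthy node k8s-lab-1 it would look roughly like this (values inferred from the cni0/flannel-wg addresses above):

FLANNEL_NETWORK=10.180.0.0/23
FLANNEL_SUBNET=10.180.1.1/24
FLANNEL_MTU=1420
FLANNEL_IPMASQ=true

On k8s-lab-3 flannel never gets that far, so the file is never written.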
yinheli commented 1 year ago

I may have found why this happened!

Server node log:

failed to allocate cidr from cluster cidr at idx:0: CIDR allocation failed; there are no remaining CIDRs left to allocate in the accepted range
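
This matches the symptom: with the controller's default /24 subnet per node, the /23 cluster CIDR (10.180.0.0/23) only provides 2^(24-23) = 2 node subnets for a 3-node cluster, so the third node never receives a pod CIDR and flannel never writes /run/flannel/subnet.env. Which nodes got a subnet can be checked with:

kubectl get nodes -o custom-columns='NAME:.metadata.name,POD_CIDR:.spec.podCIDR'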
yinheli commented 1 year ago

Related issue: https://github.com/k3s-io/k3s/issues/697
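
A possible fix, assuming the default /24 per-node subnet size: reinstall the server with a cluster CIDR that holds at least as many /24s as there are nodes and does not overlap the service CIDR, for example a /22:

curl -sfL https://get.k3s.io | \
  INSTALL_K3S_VERSION="v1.24.10+k3s1" \
  INSTALL_K3S_EXEC="--disable traefik,servicelb --flannel-backend=wireguard-native" \
  sh -s - \
   --token test \
   --cluster-init \
   --cluster-cidr=10.180.4.0/22 \
   --service-cidr=10.180.2.0/23

Here 10.180.4.0/22 yields four /24 node subnets (10.180.4.0/24 through 10.180.7.0/24) and stays disjoint from the service CIDR 10.180.2.0/23.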