We don't quite have enough information to recreate the issue with Docker installed and version 1.21.6. Can you share:
- ps aux output
- rke2-killall?
- netstat -tulpn
Client: Docker Engine - Community
Version: 20.10.11
API version: 1.41
Go version: go1.16.9
Git commit: dea9396
Built: Thu Nov 18 00:37:06 2021
OS/Arch: linux/amd64
Context: default
Experimental: true
Server: Docker Engine - Community
Engine:
Version: 20.10.11
API version: 1.41 (minimum version 1.12)
Go version: go1.16.9
Git commit: 847da18
Built: Thu Nov 18 00:35:15 2021
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.4.12
GitCommit: 7b11cfaabd73bb80907dd23182b9347b4245eb5d
runc:
Version: 1.0.2
GitCommit: v1.0.2-0-g52b36a2
docker-init:
Version: 0.19.0
GitCommit: de40ad0
2. ps aux output, taken before any actions:
ps aux USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.3 0.1 170596 13120 ? Ss 04:00 0:02 /sbin/init root 2 0.0 0.0 0 0 ? S 04:00 0:00 [kthreadd] root 3 0.0 0.0 0 0 ? I< 04:00 0:00 [rcu_gp] root 4 0.0 0.0 0 0 ? I< 04:00 0:00 [rcu_par_gp] root 6 0.0 0.0 0 0 ? I< 04:00 0:00 [kworker/0:0H-kblockd] root 8 0.0 0.0 0 0 ? I 04:00 0:00 [kworker/u8:0-events_unbound] root 9 0.0 0.0 0 0 ? I< 04:00 0:00 [mm_percpu_wq] root 10 0.0 0.0 0 0 ? S 04:00 0:00 [ksoftirqd/0] root 11 0.0 0.0 0 0 ? I 04:00 0:00 [rcu_sched] root 12 0.0 0.0 0 0 ? S 04:00 0:00 [migration/0] root 13 0.0 0.0 0 0 ? S 04:00 0:00 [idle_inject/0] root 14 0.0 0.0 0 0 ? S 04:00 0:00 [cpuhp/0] root 15 0.0 0.0 0 0 ? S 04:00 0:00 [cpuhp/1] root 16 0.0 0.0 0 0 ? S 04:00 0:00 [idle_inject/1] root 17 0.0 0.0 0 0 ? S 04:00 0:00 [migration/1] root 18 0.0 0.0 0 0 ? S 04:00 0:00 [ksoftirqd/1] root 19 0.0 0.0 0 0 ? I 04:00 0:00 [kworker/1:0-mm_percpu_wq] root 20 0.0 0.0 0 0 ? I< 04:00 0:00 [kworker/1:0H-kblockd] root 21 0.0 0.0 0 0 ? S 04:00 0:00 [cpuhp/2] root 22 0.0 0.0 0 0 ? S 04:00 0:00 [idle_inject/2] root 23 0.0 0.0 0 0 ? S 04:00 0:00 [migration/2] root 24 0.0 0.0 0 0 ? S 04:00 0:00 [ksoftirqd/2] root 26 0.0 0.0 0 0 ? I< 04:00 0:00 [kworker/2:0H-kblockd] root 27 0.0 0.0 0 0 ? S 04:00 0:00 [cpuhp/3] root 28 0.0 0.0 0 0 ? S 04:00 0:00 [idle_inject/3] root 29 0.0 0.0 0 0 ? S 04:00 0:00 [migration/3] root 30 0.0 0.0 0 0 ? S 04:00 0:00 [ksoftirqd/3] root 32 0.0 0.0 0 0 ? I< 04:00 0:00 [kworker/3:0H-kblockd] root 33 0.0 0.0 0 0 ? S 04:00 0:00 [kdevtmpfs] root 34 0.0 0.0 0 0 ? I< 04:00 0:00 [netns] root 35 0.0 0.0 0 0 ? S 04:00 0:00 [rcu_tasks_kthre] root 36 0.0 0.0 0 0 ? S 04:00 0:00 [kauditd] root 37 0.0 0.0 0 0 ? S 04:00 0:00 [khungtaskd] root 38 0.0 0.0 0 0 ? S 04:00 0:00 [oom_reaper] root 39 0.0 0.0 0 0 ? I< 04:00 0:00 [writeback] root 40 0.0 0.0 0 0 ? S 04:00 0:00 [kcompactd0] root 41 0.0 0.0 0 0 ? SN 04:00 0:00 [ksmd] root 42 0.0 0.0 0 0 ? SN 04:00 0:00 [khugepaged] root 89 0.0 0.0 0 0 ? I< 04:00 0:00 [kintegrityd] root 90 0.0 0.0 0 0 ? I< 04:00 0:00 [kblockd] root 91 0.0 0.0 0 0 ? I< 04:00 0:00 [blkcg_punt_bio] root 92 0.0 0.0 0 0 ? I< 04:00 0:00 [tpm_dev_wq] root 93 0.0 0.0 0 0 ? I< 04:00 0:00 [ata_sff] root 94 0.0 0.0 0 0 ? I< 04:00 0:00 [md] root 95 0.0 0.0 0 0 ? I< 04:00 0:00 [edac-poller] root 96 0.0 0.0 0 0 ? I< 04:00 0:00 [devfreq_wq] root 97 0.0 0.0 0 0 ? S 04:00 0:00 [watchdogd] root 99 0.0 0.0 0 0 ? I 04:00 0:00 [kworker/u8:1-events_unbound] root 101 0.0 0.0 0 0 ? S 04:00 0:00 [kswapd0] root 102 0.0 0.0 0 0 ? S 04:00 0:00 [ecryptfs-kthrea] root 104 0.0 0.0 0 0 ? I< 04:00 0:00 [kthrotld] root 105 0.0 0.0 0 0 ? I< 04:00 0:00 [acpi_thermal_pm] root 106 0.0 0.0 0 0 ? S 04:00 0:00 [scsi_eh_0] root 107 0.0 0.0 0 0 ? I< 04:00 0:00 [scsi_tmf_0] root 108 0.0 0.0 0 0 ? S 04:00 0:00 [scsi_eh_1] root 109 0.0 0.0 0 0 ? I< 04:00 0:00 [scsi_tmf_1] root 111 0.0 0.0 0 0 ? I< 04:00 0:00 [vfio-irqfd-clea] root 112 0.0 0.0 0 0 ? I< 04:00 0:00 [ipv6_addrconf] root 123 0.0 0.0 0 0 ? I< 04:00 0:00 [kstrp] root 126 0.0 0.0 0 0 ? I< 04:00 0:00 [kworker/u9:0] root 139 0.0 0.0 0 0 ? I< 04:00 0:00 [charger_manager] root 188 0.0 0.0 0 0 ? S 04:00 0:00 [scsi_eh_2] root 189 0.0 0.0 0 0 ? I< 04:00 0:00 [scsi_tmf_2] root 190 0.0 0.0 0 0 ? I< 04:00 0:00 [cryptd] root 207 0.0 0.0 0 0 ? I< 04:00 0:00 [kworker/0:1H-kblockd] root 255 0.0 0.0 0 0 ? I< 04:00 0:00 [raid5wq] root 295 0.0 0.0 0 0 ? I< 04:00 0:00 [kworker/2:1H-kblockd] root 296 0.0 0.0 0 0 ? S 04:00 0:00 [jbd2/vda1-8] root 297 0.0 0.0 0 0 ? I< 04:00 0:00 [ext4-rsv-conver] root 325 0.0 0.0 0 0 ? 
I< 04:00 0:00 [kworker/1:1H-kblockd] root 339 0.0 0.0 0 0 ? I< 04:00 0:00 [kworker/3:1H-kblockd] root 372 0.0 0.1 35100 10940 ? S<s 04:00 0:00 /lib/systemd/systemd-journald root 502 0.0 0.0 0 0 ? I< 04:00 0:00 [kaluad] root 503 0.0 0.0 0 0 ? I< 04:00 0:00 [kmpath_rdacd] root 504 0.0 0.0 0 0 ? I< 04:00 0:00 [kmpathd] root 505 0.0 0.0 0 0 ? I< 04:00 0:00 [kmpath_handlerd] root 506 0.0 0.2 280136 17940 ? SLsl 04:00 0:00 /sbin/multipathd -d -s root 517 0.0 0.0 0 0 ? S< 04:00 0:00 [loop0] root 521 0.0 0.0 0 0 ? S< 04:00 0:00 [loop1] root 522 0.0 0.0 0 0 ? S< 04:00 0:00 [loop2] systemd+ 546 0.0 0.0 90228 6080 ? Ssl 04:00 0:00 /lib/systemd/systemd-timesyncd systemd+ 583 0.0 0.0 18532 7864 ? Ss 04:00 0:00 /lib/systemd/systemd-networkd systemd+ 602 0.0 0.1 23920 12168 ? Ss 04:00 0:00 /lib/systemd/systemd-resolved root 626 0.0 0.0 18948 5344 ? Ss 04:00 0:00 /lib/systemd/systemd-udevd root 713 0.0 0.1 241036 9360 ? Ssl 04:00 0:00 /usr/lib/accountsservice/accounts-daemon root 718 0.0 0.0 8536 2792 ? Ss 04:00 0:00 /usr/sbin/cron -f message+ 720 0.0 0.0 7424 4568 ? Ss 04:00 0:00 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only root 725 0.0 0.0 1301048 6620 ? Ssl 04:00 0:00 /opt/digitalocean/bin/droplet-agent -syslog root 728 0.0 0.0 81896 3808 ? Ssl 04:00 0:00 /usr/sbin/irqbalance --foreground root 731 0.0 0.2 29196 18132 ? Ss 04:00 0:00 /usr/bin/python3 /usr/bin/networkd-dispatcher --run-startup-triggers syslog 732 0.0 0.0 224348 4736 ? Ssl 04:00 0:00 /usr/sbin/rsyslogd -n -iNONE root 736 0.4 0.4 1151116 36272 ? Ssl 04:00 0:03 /usr/lib/snapd/snapd root 739 0.0 0.0 16644 7748 ? Ss 04:00 0:00 /lib/systemd/systemd-logind root 752 0.0 0.1 394820 13708 ? Ssl 04:00 0:00 /usr/lib/udisks2/udisksd daemon 754 0.0 0.0 3792 2260 ? Ss 04:00 0:00 /usr/sbin/atd -f root 755 0.1 0.5 1418808 41360 ? Ssl 04:00 0:01 /usr/bin/containerd root 778 0.0 0.0 7352 2288 ttyS0 Ss+ 04:00 0:00 /sbin/agetty -o -p -- \u --keep-baud 115200,38400,9600 ttyS0 vt220 root 780 0.0 0.0 5828 1852 tty1 Ss+ 04:00 0:00 /sbin/agetty -o -p -- \u --noclear tty1 linux root 781 0.0 0.0 12176 6868 ? Ss 04:00 0:00 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups root 783 0.0 0.0 0 0 ? I 04:00 0:00 [kworker/3:4-events] root 792 0.0 0.1 236424 9084 ? Ssl 04:00 0:00 /usr/lib/policykit-1/polkitd --no-debug root 796 0.0 0.2 108108 20592 ? Ssl 04:00 0:00 /usr/bin/python3 /usr/share/unattended-upgrades/unattended-upgrade-shutdown --wait-for-signal root 822 0.0 0.9 1531108 79752 ? Ssl 04:00 0:00 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock root 849 0.0 0.0 2488 512 ? S 04:00 0:00 bpfilter_umh root 1126 0.0 0.1 13808 9072 ? Ss 04:02 0:00 sshd: root@pts/0 root 1142 0.1 0.1 18784 9932 ? Ss 04:03 0:00 /lib/systemd/systemd --user root 1147 0.0 0.0 168864 3340 ? S 04:03 0:00 (sd-pam) root 1246 0.0 0.0 10296 5472 pts/0 Ss 04:03 0:00 -bash root 1409 16.6 1.7 933996 138988 ? Ssl 04:04 1:24 /usr/local/bin/rke2 server root 1420 10.7 0.6 812956 56300 ? Sl 04:04 0:54 containerd -c /var/lib/rancher/rke2/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /r root 1440 3.1 1.4 823516 118860 ? Sl 04:05 0:14 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0. root 1750 0.0 0.1 716472 11292 ? Sl 04:05 0:00 /var/lib/rancher/rke2/data/v1.21.6-rke2r1-fd8a733b61b5/bin/containerd-shim-runc-v2 -namespace k8s.io -id 0feb78eb81726 65535 1770 0.0 0.0 964 4 ? 
Ss 04:05 0:00 /pause root 1819 1.8 0.8 11215960 65772 ? Ssl 04:05 0:08 etcd --config-file=/var/lib/rancher/rke2/server/db/etcd/config root 1962 0.0 0.1 716472 10284 ? Sl 04:05 0:00 /var/lib/rancher/rke2/data/v1.21.6-rke2r1-fd8a733b61b5/bin/containerd-shim-runc-v2 -namespace k8s.io -id b6b5d45d4cb02 65535 1984 0.0 0.0 964 4 ? Ss 04:05 0:00 /pause root 2032 8.9 4.9 1308984 399696 ? Ssl 04:05 0:39 kube-apiserver --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --allow-privileged=true --anonymous-au root 2190 0.0 0.1 716332 10724 ? Sl 04:06 0:00 /var/lib/rancher/rke2/data/v1.21.6-rke2r1-fd8a733b61b5/bin/containerd-shim-runc-v2 -namespace k8s.io -id ef97cf52f7e28 root 2218 0.0 0.1 716332 10812 ? Sl 04:06 0:00 /var/lib/rancher/rke2/data/v1.21.6-rke2r1-fd8a733b61b5/bin/containerd-shim-runc-v2 -namespace k8s.io -id 56d689c93b5e1 65535 2235 0.0 0.0 964 4 ? Ss 04:06 0:00 /pause 65535 2257 0.0 0.0 964 4 ? Ss 04:06 0:00 /pause root 2323 1.6 1.3 820404 114092 ? Ssl 04:06 0:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --p root 2332 0.2 0.6 753752 53044 ? Ssl 04:06 0:00 kube-scheduler --permit-port-sharing=true --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/r root 2410 0.0 0.1 716332 10404 ? Sl 04:06 0:00 /var/lib/rancher/rke2/data/v1.21.6-rke2r1-fd8a733b61b5/bin/containerd-shim-runc-v2 -namespace k8s.io -id beb2c58e5ff8e 65535 2433 0.0 0.0 964 4 ? Ss 04:06 0:00 /pause root 2498 0.3 0.5 750560 46400 ? Ssl 04:06 0:01 cloud-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cloud-provider=rke2 --cluster-cidr=10.4 root 2621 0.0 0.1 716332 10676 ? Sl 04:06 0:00 /var/lib/rancher/rke2/data/v1.21.6-rke2r1-fd8a733b61b5/bin/containerd-shim-runc-v2 -namespace k8s.io -id 6cc8a2549356b 65535 2641 0.0 0.0 964 4 ? Ss 04:06 0:00 /pause root 2689 0.0 0.5 749860 42656 ? Ssl 04:06 0:00 kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tc root 3274 0.0 0.0 0 0 ? I 04:06 0:00 [kworker/1:5-events] root 3294 0.0 0.0 0 0 ? I 04:06 0:00 [kworker/2:4-mm_percpu_wq] root 3295 0.0 0.0 0 0 ? I 04:06 0:00 [kworker/2:5-events] root 3308 0.1 0.1 716472 11892 ? Sl 04:06 0:00 /var/lib/rancher/rke2/data/v1.21.6-rke2r1-fd8a733b61b5/bin/containerd-shim-runc-v2 -namespace k8s.io -id 336bb40dd5f7c 65535 3330 0.0 0.0 964 4 ? Ss 04:06 0:00 /pause root 3428 0.0 0.0 0 0 ? I 04:06 0:00 [kworker/0:3-cgroup_pidlist_destroy] root 3451 0.0 0.0 0 0 ? I 04:06 0:00 [kworker/0:11-cgroup_pidlist_destroy] root 3971 0.0 0.0 4392 1180 ? Ss 04:07 0:00 /usr/sbin/runsvdir -P /etc/service/enabled root 4021 0.0 0.0 4240 644 ? Ss 04:07 0:00 runsv felix root 4022 0.0 0.0 4240 720 ? Ss 04:07 0:00 runsv cni root 4023 0.0 0.0 4240 728 ? Ss 04:07 0:00 runsv monitor-addresses root 4024 0.0 0.0 4240 636 ? Ss 04:07 0:00 runsv allocate-tunnel-addrs root 4025 1.4 0.7 770340 60748 ? Sl 04:07 0:05 calico-node -felix root 4026 0.0 0.5 768976 45128 ? Sl 04:07 0:00 calico-node -monitor-token root 4027 0.0 0.5 768720 46864 ? Sl 04:07 0:00 calico-node -allocate-tunnel-addrs root 4028 0.0 0.6 769120 50624 ? Sl 04:07 0:00 calico-node -monitor-addresses root 4246 0.0 0.4 748076 36416 ? Ssl 04:07 0:00 /opt/bin/flanneld --ip-masq --kube-subnet-mgr root 4457 0.0 0.1 716332 10196 ? Sl 04:07 0:00 /var/lib/rancher/rke2/data/v1.21.6-rke2r1-fd8a733b61b5/bin/containerd-shim-runc-v2 -namespace k8s.io -id e00f4383e57d2 root 4470 0.0 0.1 716472 10012 ? 
Sl 04:07 0:00 /var/lib/rancher/rke2/data/v1.21.6-rke2r1-fd8a733b61b5/bin/containerd-shim-runc-v2 -namespace k8s.io -id dd90f3ba2b19d 65535 4510 0.0 0.0 964 4 ? Ss 04:07 0:00 /pause 65535 4524 0.0 0.0 964 4 ? Ss 04:07 0:00 /pause root 5045 0.0 0.0 0 0 ? I 04:07 0:00 [kworker/3:1-mm_percpu_wq] root 5114 0.0 0.1 716588 10792 ? Sl 04:07 0:00 /var/lib/rancher/rke2/data/v1.21.6-rke2r1-fd8a733b61b5/bin/containerd-shim-runc-v2 -namespace k8s.io -id 1e7f465813c97 65535 5137 0.0 0.0 964 4 ? Ss 04:07 0:00 /pause root 5319 0.1 0.5 755880 45848 ? Ssl 04:07 0:00 /coredns -conf /etc/coredns/Corefile 10001 5416 0.2 0.5 740952 41628 ? Ssl 04:07 0:00 /metrics-server --cert-dir=/tmp --logtostderr --secure-port=8443 --kubelet-preferred-address-types=InternalIP root 5457 0.0 0.2 729376 22024 ? Ssl 04:07 0:00 /cluster-proportional-autoscaler --namespace=kube-system --configmap=rke2-coredns-rke2-coredns-autoscaler --target=Dep root 5678 0.0 0.1 716588 10416 ? Sl 04:07 0:00 /var/lib/rancher/rke2/data/v1.21.6-rke2r1-fd8a733b61b5/bin/containerd-shim-runc-v2 -namespace k8s.io -id c06c45d8ef413 65535 5702 0.0 0.0 964 4 ? Ss 04:07 0:00 /pause systemd+ 6389 0.0 0.0 204 4 ? Ss 04:07 0:00 /usr/bin/dumb-init -- /nginx-ingress-controller --election-id=ingress-controller-leader --controller-class=k8s.io/ingr systemd+ 6410 0.2 0.5 748672 43296 ? Ssl 04:07 0:00 /nginx-ingress-controller --election-id=ingress-controller-leader --controller-class=k8s.io/ingress-nginx --configmap= systemd+ 6458 0.0 0.4 180192 39780 ? S 04:07 0:00 nginx: master process /usr/local/nginx/sbin/nginx -c /etc/nginx/nginx.conf systemd+ 6467 0.0 0.5 448108 43604 ? Sl 04:07 0:00 nginx: worker process systemd+ 6468 0.0 0.5 448108 43852 ? Sl 04:07 0:00 nginx: worker process systemd+ 6469 0.0 0.5 448108 43740 ? Sl 04:07 0:00 nginx: worker process systemd+ 6470 0.0 0.5 448108 43604 ? Sl 04:07 0:00 nginx: worker process systemd+ 6471 0.0 0.3 173760 31596 ? S 04:07 0:00 nginx: cache manager process root 6821 0.0 0.0 0 0 ? S< 04:07 0:00 [loop3] root 7014 0.0 0.0 0 0 ? S< 04:07 0:00 [loop4] root 7136 0.0 0.0 7108 3852 ? Ss 04:08 0:00 /usr/bin/dbus-daemon --session --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only root 10131 0.0 0.0 0 0 ? I 04:12 0:00 [kworker/2:0-events] root 10414 0.0 0.0 10856 3628 pts/0 R+ 04:13 0:00 ps aux
3. There is no kubelet running in my test; it is just a master.
NAME        STATUS   ROLES                       AGE     VERSION
rke2repro   Ready    control-plane,etcd,master   9m38s   v1.21.6+rke2r1
root@rke2repro:~#
root@rke2repro:~# rke2-
rke2-killall.sh rke2-uninstall.sh
root@rke2repro:~# rke2-killall.sh
4. netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 781/sshd: /usr/sbin
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 602/systemd-resolve
tcp6 0 0 :::22 :::* LISTEN 781/sshd: /usr/sbin
udp 0 0 127.0.0.53:53 0.0.0.0:* 602/systemd-resolve
Yes I do.
cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.3 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.3 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
8 GB, 2 vCPU, 80 GB DigitalOcean droplet
FWIW, all nodes run the kubelet; you can see it in the ps output.
The error message from the current failure is different from the one reported previously. @briandowns I saw this once before at https://github.com/rancher/rke2/issues/1572#issuecomment-894936650 and wasn't able to root-cause it there, but I suspect it stems from the fact that we don't wait for containerd to finish starting up before starting the temporary kubelet, as we usually do when starting the full kubelet; we just spin them both up simultaneously in goroutines: https://github.com/rancher/rke2/blob/ba061424ae4b9772387c4ab74ae5c487bd04c2f3/pkg/rke2/rke2.go#L209
I think this is a separate issue, but should be fixed regardless since we've seen it reproduced.
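To make the suspected race concrete, here is a minimal sketch in Go (not the actual rke2 code; `startTemporaryContainerd` and `startTemporaryKubelet` are hypothetical stand-ins for the goroutines spawned in rke2.go) of the ordering change suggested above: poll the temporary containerd socket until it accepts connections, and only then launch the temporary kubelet, rather than starting both at once.

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// waitForSocket dials the unix socket until it accepts a connection or the
// context is cancelled. The path matches the socket logged by the temporary
// containerd in the output above (/run/k3s/containerd/containerd.sock).
func waitForSocket(ctx context.Context, path string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		conn, err := net.DialTimeout("unix", path, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("containerd socket %s never became ready: %w", path, ctx.Err())
		case <-ticker.C:
			// not ready yet, retry
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	// Hypothetical stand-ins for the processes rke2 currently spins up
	// simultaneously; only the ordering changes in this sketch.
	go startTemporaryContainerd(ctx)

	// Block until containerd is actually serving before starting the kubelet.
	if err := waitForSocket(ctx, "/run/k3s/containerd/containerd.sock"); err != nil {
		panic(err)
	}
	go startTemporaryKubelet(ctx)

	<-ctx.Done()
}

func startTemporaryContainerd(ctx context.Context) { /* exec the temporary containerd */ }
func startTemporaryKubelet(ctx context.Context)    { /* exec the temporary kubelet */ }
```

With that ordering, the temporary kubelet would not dial /run/k3s/containerd/containerd.sock until containerd is actually serving on it, which is the condition the failing runs below never reach in time.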
Recreated the error again; this time there are additional messages, as I waited a little longer and ran the cluster reset twice.
root@rkerepro:~# curl -sfL https://get.rke2.io | sh -
[INFO] finding release for channel stable
[INFO] using v1.21.6+rke2r1 as release
[INFO] downloading checksums at https://github.com/rancher/rke2/releases/download/v1.21.6+rke2r1/sha256sum-amd64.txt
[INFO] downloading tarball at https://github.com/rancher/rke2/releases/download/v1.21.6+rke2r1/rke2.linux-amd64.tar.gz
[INFO] verifying tarball
[INFO] unpacking tarball file to /usr/local
root@rkerepro:~# systemctl enable rke2-server.service
Created symlink /etc/systemd/system/multi-user.target.wants/rke2-server.service → /usr/local/lib/systemd/system/rke2-server.service.
root@rkerepro:~# systemctl start rke2-server.service
root@rkerepro:~# systemctl stop rke2-server.service
root@rkerepro:~# rke2-killall.sh
+ systemctl stop rke2-server.service
+ systemctl stop rke2-agent.service
+ killtree 1707 1882 2077 2092 2294 2420 3022 4063 4080 4863 5544
+ kill -9 1707 1729 1778 1882 1903 1954 2077 2128 2194 2092 2129 2201 2294 2316 2379 2420 2439 2483 3022 3042 3713 3770 3775 3771 3774 3772 3776 3773 3777 4191 4063 4125 5109 4080 4127 4704 4863 4915 5227 5544 5565 6080 6103 6150 6154 6155 6156 6157 6158
+ do_unmount /run/k3s
+ umount /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/d5f7305a262de480b9e9155b1c9369f6cc8a1efc7a70fca62354aff1029d5222/rootfs /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/baecbe835e1ee84c3b8d1e182822d44e0a9dc176fddb6f0297b41394ac199c5d/rootfs /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/b9405adc533fff06dea89874978f8ec5e14aba820c2ab5c105b4a2a7b5a6265c/rootfs /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/b379b1f54e47e2faa90c2034fa3ebf3c1d2052bf2173e94ec04f1f497f51dc19/rootfs /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/b03287c822c7248e8e2d199c49ab7aaf47d080852ed5f2bd015d4f481c9d4b04/rootfs /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/9bc0d95bfe6719589687e000e477356823fd2aaddfb7f3a05dafcf2200a88f49/rootfs /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/94db30cc27f5637634f50322f03e932ac0e0c75d4a5030c1a1e7d546eef133e6/rootfs /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/8dd3373a115cc288077b66374b780c2045d073be8e508411271c6fd3b0ea7fcc/rootfs /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/880cd05d9321f6ece6a762a0c5ee3f85bad621c20f43975d448bba019f2fb3df/rootfs /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/702b2de2f87a5b44d23ca261a2989e144f53d7970aab7b72d770d93c308ac5b8/rootfs /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/6ebc09569af366ee498b1b8de3e2fa17b0d9722d6fe014935b79c7fd5833ff86/rootfs /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/6133fcdf0d46c87d0af025c94eab373a7f52bf2e7d0148d87f807e5c23ba7d15/rootfs /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/5a2c9fdcbc9d404e13133dc8f28aa9231c9066b5933578e95052b33dd215e47a/rootfs /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/52961604420c8beb32f95f332ba8cfc63644d0acea19a1a0f9d4d5bade90de12/rootfs /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/4b79da5046045c6cd9406f762e8908f30ff3a2f5a4dddf764e4bad6cf204c562/rootfs /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/47c8674c84f7ced35651f47ba810e7e931f418664d72a687680a89ab92088d88/rootfs /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/43f0e0e1c4b9f19d52db59666dc8b323f4b9dc6ac9f5115bcc5f0a510158b552/rootfs /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/40bf1a575c99a8078bb54c5f250f5ff6cb14d607f7a3d1f7f3e8d99d15f45190/rootfs /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/393e50960dd3d38472f5c1fa74056edc3468d157dfbe8aa7fe2c6414c0671e0a/rootfs /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/3746a43f686b437e04c55253b58d73f175dd6e16eddecec49e23a073eaafea3f/rootfs /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/314fd37876dc847355c16f1e288acaacd644b5bd7987595d90cb3634d9fd7848/rootfs /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/20aac186e15c5c0d06e88a7ff43cbc00be77c7cb68f264b9f83eb00ffe2b3094/rootfs /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/04641ac18bbce2c220c9ddd1b64431d418b81dc1ab324b5f21ba83b39c8a25d2/rootfs /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/b379b1f54e47e2faa90c2034fa3ebf3c1d2052bf2173e94ec04f1f497f51dc19/shm /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/9bc0d95bfe6719589687e000e477356823fd2aaddfb7f3a05dafcf2200a88f49/shm /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/94db30cc27f5637634f50322f03e932ac0e0c75d4a5030c1a1e7d546eef133e6/shm /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/880cd05d9321f6ece6a762a0c5ee3f85bad621c20f43975d448bba019f2fb3df/shm 
/run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/702b2de2f87a5b44d23ca261a2989e144f53d7970aab7b72d770d93c308ac5b8/shm /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/52961604420c8beb32f95f332ba8cfc63644d0acea19a1a0f9d4d5bade90de12/shm /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/43f0e0e1c4b9f19d52db59666dc8b323f4b9dc6ac9f5115bcc5f0a510158b552/shm /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/40bf1a575c99a8078bb54c5f250f5ff6cb14d607f7a3d1f7f3e8d99d15f45190/shm /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/393e50960dd3d38472f5c1fa74056edc3468d157dfbe8aa7fe2c6414c0671e0a/shm /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/314fd37876dc847355c16f1e288acaacd644b5bd7987595d90cb3634d9fd7848/shm /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/04641ac18bbce2c220c9ddd1b64431d418b81dc1ab324b5f21ba83b39c8a25d2/shm
+ do_unmount /var/lib/rancher/rke2
+ do_unmount /var/lib/kubelet/pods
+ umount /var/lib/kubelet/pods/f650aa93-28e4-49a4-82a2-1202acfe998a/volumes/kubernetes.io~projected/kube-api-access-xsdcr /var/lib/kubelet/pods/eb3e1761-57af-4915-93ca-d3b41d2c79a8/volumes/kubernetes.io~secret/webhook-cert /var/lib/kubelet/pods/eb3e1761-57af-4915-93ca-d3b41d2c79a8/volumes/kubernetes.io~projected/kube-api-access-fzf6w /var/lib/kubelet/pods/d3b0b85e-b9fa-48ae-88ad-983bcf3cccff/volumes/kubernetes.io~projected/kube-api-access-chzhf /var/lib/kubelet/pods/67c0c232-e26c-4acc-9e94-badc34311adc/volumes/kubernetes.io~projected/kube-api-access-6gjgq /var/lib/kubelet/pods/39cfa982-dd90-403c-993e-e0223db5c110/volumes/kubernetes.io~projected/kube-api-access-knlct
+ do_unmount /run/netns/cni-
+ umount /run/netns/cni-c8879227-9ee3-4b41-9a90-70d769c1d772 /run/netns/cni-b5c83953-39f2-d2bd-013f-dbe78c8d89ee /run/netns/cni-ac65089a-0738-35a3-d6ef-43db5804e518
+ ip link show
+ read ignore iface ignore
+ grep master cni0
+ ip link delete cni0
Cannot find device "cni0"
+ ip link delete flannel.1
+ ip link delete vxlan.calico
Cannot find device "vxlan.calico"
+ ip link delete cilium_vxlan
Cannot find device "cilium_vxlan"
+ ip link delete cilium_net
Cannot find device "cilium_net"
+ [ -d /sys/class/net/nodelocaldns ]
+ rm -rf /var/lib/cni/
+ iptables-save
+ grep -v KUBE-
+ grep -v CNI-
+ grep -v cali:
+ grep -v CILIUM_
+ iptables-restore
+ grep -v cali-
root@rkerepro:~# rke2 server --cluster-reset
WARN[0000] not running in CIS mode
INFO[0000] Running temporary containerd /var/lib/rancher/rke2/bin/containerd -c /var/lib/rancher/rke2/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/rke2/agent/containerd
INFO[0000] Running temporary kubelet /var/lib/rancher/rke2/bin/kubelet --fail-swap-on=false --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests
INFO[2021-11-30T00:00:26.464706282Z] starting containerd revision=f672363350c1d5eb06a11543efeaaaf0c64af989 version=v1.4.11-k3s1
INFO[2021-11-30T00:00:26.489168575Z] loading plugin "io.containerd.content.v1.content"... type=io.containerd.content.v1
INFO[2021-11-30T00:00:26.489255679Z] loading plugin "io.containerd.snapshotter.v1.btrfs"... type=io.containerd.snapshotter.v1
INFO[2021-11-30T00:00:26.489601776Z] skip loading plugin "io.containerd.snapshotter.v1.btrfs"... error="path /var/lib/rancher/rke2/agent/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
INFO[2021-11-30T00:00:26.489633542Z] loading plugin "io.containerd.snapshotter.v1.devmapper"... type=io.containerd.snapshotter.v1
WARN[2021-11-30T00:00:26.489669337Z] failed to load plugin io.containerd.snapshotter.v1.devmapper error="devmapper not configured"
INFO[2021-11-30T00:00:26.489681110Z] loading plugin "io.containerd.snapshotter.v1.native"... type=io.containerd.snapshotter.v1
INFO[2021-11-30T00:00:26.489705151Z] loading plugin "io.containerd.snapshotter.v1.overlayfs"... type=io.containerd.snapshotter.v1
INFO[2021-11-30T00:00:26.489826607Z] loading plugin "io.containerd.metadata.v1.bolt"... type=io.containerd.metadata.v1
WARN[2021-11-30T00:00:26.489849942Z] could not use snapshotter devmapper in metadata plugin error="devmapper not configured"
INFO[2021-11-30T00:00:26.489861084Z] metadata content store policy set policy=shared
INFO[2021-11-30T00:00:26.489959713Z] loading plugin "io.containerd.differ.v1.walking"... type=io.containerd.differ.v1
INFO[2021-11-30T00:00:26.489977967Z] loading plugin "io.containerd.gc.v1.scheduler"... type=io.containerd.gc.v1
INFO[2021-11-30T00:00:26.490011056Z] loading plugin "io.containerd.service.v1.introspection-service"... type=io.containerd.service.v1
INFO[2021-11-30T00:00:26.490060863Z] loading plugin "io.containerd.service.v1.containers-service"... type=io.containerd.service.v1
INFO[2021-11-30T00:00:26.490076194Z] loading plugin "io.containerd.service.v1.content-service"... type=io.containerd.service.v1
INFO[2021-11-30T00:00:26.490089014Z] loading plugin "io.containerd.service.v1.diff-service"... type=io.containerd.service.v1
INFO[2021-11-30T00:00:26.490102502Z] loading plugin "io.containerd.service.v1.images-service"... type=io.containerd.service.v1
INFO[2021-11-30T00:00:26.490116772Z] loading plugin "io.containerd.service.v1.leases-service"... type=io.containerd.service.v1
INFO[2021-11-30T00:00:26.490136085Z] loading plugin "io.containerd.service.v1.namespaces-service"... type=io.containerd.service.v1
INFO[2021-11-30T00:00:26.490151324Z] loading plugin "io.containerd.service.v1.snapshots-service"... type=io.containerd.service.v1
INFO[2021-11-30T00:00:26.490169838Z] loading plugin "io.containerd.runtime.v1.linux"... type=io.containerd.runtime.v1
INFO[2021-11-30T00:00:26.490211775Z] loading plugin "io.containerd.runtime.v2.task"... type=io.containerd.runtime.v2
WARN[2021-11-30T00:00:26.490562238Z] cleaning up after shim disconnected id=04641ac18bbce2c220c9ddd1b64431d418b81dc1ab324b5f21ba83b39c8a25d2 namespace=k8s.io
INFO[2021-11-30T00:00:26.490577579Z] cleaning up dead shim
WARN[2021-11-30T00:00:26.518668316Z] cleanup warnings time="2021-11-30T00:00:26Z" level=info msg="starting signal loop" namespace=k8s.io pid=11024
WARN[2021-11-30T00:00:26.519221193Z] cleaning up after shim disconnected id=20aac186e15c5c0d06e88a7ff43cbc00be77c7cb68f264b9f83eb00ffe2b3094 namespace=k8s.io
INFO[2021-11-30T00:00:26.519247208Z] cleaning up dead shim
Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
WARN[2021-11-30T00:00:26.555559410Z] cleanup warnings time="2021-11-30T00:00:26Z" level=info msg="starting signal loop" namespace=k8s.io pid=11039
WARN[2021-11-30T00:00:26.556052043Z] cleaning up after shim disconnected id=314fd37876dc847355c16f1e288acaacd644b5bd7987595d90cb3634d9fd7848 namespace=k8s.io
INFO[2021-11-30T00:00:26.556075224Z] cleaning up dead shim
WARN[2021-11-30T00:00:26.609764028Z] cleanup warnings time="2021-11-30T00:00:26Z" level=info msg="starting signal loop" namespace=k8s.io pid=11054
WARN[2021-11-30T00:00:26.610365309Z] cleaning up after shim disconnected id=3746a43f686b437e04c55253b58d73f175dd6e16eddecec49e23a073eaafea3f namespace=k8s.io
INFO[2021-11-30T00:00:26.610389544Z] cleaning up dead shim
WARN[2021-11-30T00:00:26.660206216Z] cleanup warnings time="2021-11-30T00:00:26Z" level=info msg="starting signal loop" namespace=k8s.io pid=11091
WARN[2021-11-30T00:00:26.660775924Z] cleaning up after shim disconnected id=393e50960dd3d38472f5c1fa74056edc3468d157dfbe8aa7fe2c6414c0671e0a namespace=k8s.io
INFO[2021-11-30T00:00:26.660799394Z] cleaning up dead shim
WARN[2021-11-30T00:00:26.680481695Z] cleanup warnings time="2021-11-30T00:00:26Z" level=info msg="starting signal loop" namespace=k8s.io pid=11107
WARN[2021-11-30T00:00:26.681112321Z] cleaning up after shim disconnected id=40bf1a575c99a8078bb54c5f250f5ff6cb14d607f7a3d1f7f3e8d99d15f45190 namespace=k8s.io
INFO[2021-11-30T00:00:26.681136820Z] cleaning up dead shim
WARN[2021-11-30T00:00:26.711896122Z] cleanup warnings time="2021-11-30T00:00:26Z" level=info msg="starting signal loop" namespace=k8s.io pid=11121
WARN[2021-11-30T00:00:26.712464906Z] cleaning up after shim disconnected id=43f0e0e1c4b9f19d52db59666dc8b323f4b9dc6ac9f5115bcc5f0a510158b552 namespace=k8s.io
INFO[2021-11-30T00:00:26.712494158Z] cleaning up dead shim
WARN[2021-11-30T00:00:26.746088369Z] cleanup warnings time="2021-11-30T00:00:26Z" level=info msg="starting signal loop" namespace=k8s.io pid=11145
WARN[2021-11-30T00:00:26.746681397Z] cleaning up after shim disconnected id=47c8674c84f7ced35651f47ba810e7e931f418664d72a687680a89ab92088d88 namespace=k8s.io
INFO[2021-11-30T00:00:26.746707581Z] cleaning up dead shim
WARN[2021-11-30T00:00:26.783810696Z] cleanup warnings time="2021-11-30T00:00:26Z" level=info msg="starting signal loop" namespace=k8s.io pid=11163
WARN[2021-11-30T00:00:26.784266398Z] cleaning up after shim disconnected id=4b79da5046045c6cd9406f762e8908f30ff3a2f5a4dddf764e4bad6cf204c562 namespace=k8s.io
INFO[2021-11-30T00:00:26.784294231Z] cleaning up dead shim
WARN[2021-11-30T00:00:26.825565752Z] cleanup warnings time="2021-11-30T00:00:26Z" level=info msg="starting signal loop" namespace=k8s.io pid=11177
WARN[2021-11-30T00:00:26.826118361Z] cleaning up after shim disconnected id=52961604420c8beb32f95f332ba8cfc63644d0acea19a1a0f9d4d5bade90de12 namespace=k8s.io
INFO[2021-11-30T00:00:26.826215734Z] cleaning up dead shim
WARN[2021-11-30T00:00:26.853455859Z] cleanup warnings time="2021-11-30T00:00:26Z" level=info msg="starting signal loop" namespace=k8s.io pid=11191
WARN[2021-11-30T00:00:26.853973331Z] cleaning up after shim disconnected id=5a2c9fdcbc9d404e13133dc8f28aa9231c9066b5933578e95052b33dd215e47a namespace=k8s.io
INFO[2021-11-30T00:00:26.854075174Z] cleaning up dead shim
I1130 00:00:26.882172 11008 server.go:440] "Kubelet version" kubeletVersion="v1.21.6+rke2r1"
I1130 00:00:26.882555 11008 server.go:573] "Standalone mode, no API client"
I1130 00:00:26.882663 11008 server.go:629] "Failed to get the kubelet's cgroup. Kubelet system container metrics may be missing." err="cpu and memory cgroup hierarchy not unified. cpu: /user.slice, memory: /user.slice/user-0.slice/session-1.scope"
WARN[2021-11-30T00:00:26.889710857Z] cleanup warnings time="2021-11-30T00:00:26Z" level=info msg="starting signal loop" namespace=k8s.io pid=11207
WARN[2021-11-30T00:00:26.890185321Z] cleaning up after shim disconnected id=6133fcdf0d46c87d0af025c94eab373a7f52bf2e7d0148d87f807e5c23ba7d15 namespace=k8s.io
INFO[2021-11-30T00:00:26.890224184Z] cleaning up dead shim
WARN[2021-11-30T00:00:26.922098092Z] cleanup warnings time="2021-11-30T00:00:26Z" level=info msg="starting signal loop" namespace=k8s.io pid=11229
WARN[2021-11-30T00:00:26.922519447Z] cleaning up after shim disconnected id=6ebc09569af366ee498b1b8de3e2fa17b0d9722d6fe014935b79c7fd5833ff86 namespace=k8s.io
INFO[2021-11-30T00:00:26.922538022Z] cleaning up dead shim
WARN[2021-11-30T00:00:26.958237097Z] cleanup warnings time="2021-11-30T00:00:26Z" level=info msg="starting signal loop" namespace=k8s.io pid=11252
WARN[2021-11-30T00:00:26.958763215Z] cleaning up after shim disconnected id=702b2de2f87a5b44d23ca261a2989e144f53d7970aab7b72d770d93c308ac5b8 namespace=k8s.io
INFO[2021-11-30T00:00:26.958784509Z] cleaning up dead shim
WARN[2021-11-30T00:00:26.987478222Z] cleanup warnings time="2021-11-30T00:00:26Z" level=info msg="starting signal loop" namespace=k8s.io pid=11283
WARN[2021-11-30T00:00:26.988064625Z] cleaning up after shim disconnected id=880cd05d9321f6ece6a762a0c5ee3f85bad621c20f43975d448bba019f2fb3df namespace=k8s.io
INFO[2021-11-30T00:00:26.988148931Z] cleaning up dead shim
I1130 00:00:27.001721 11008 server.go:488] "No api server defined - no events will be sent to API server"
I1130 00:00:27.001750 11008 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
I1130 00:00:27.026264 11008 container_manager_linux.go:278] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
I1130 00:00:27.026369 11008 container_manager_linux.go:283] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
I1130 00:00:27.026403 11008 topology_manager.go:120] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
I1130 00:00:27.026420 11008 container_manager_linux.go:314] "Initializing Topology Manager" policy="none" scope="container"
I1130 00:00:27.026444 11008 container_manager_linux.go:319] "Creating device plugin manager" devicePluginEnabled=true
I1130 00:00:27.026556 11008 remote_runtime.go:62] parsed scheme: ""
I1130 00:00:27.026568 11008 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
I1130 00:00:27.026597 11008 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/k3s/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}
I1130 00:00:27.026609 11008 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1130 00:00:27.026679 11008 remote_image.go:50] parsed scheme: ""
I1130 00:00:27.026692 11008 remote_image.go:50] scheme "" not registered, fallback to default scheme
I1130 00:00:27.026704 11008 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/k3s/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}
I1130 00:00:27.026713 11008 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1130 00:00:27.026785 11008 kubelet.go:410] "Kubelet is running in standalone mode, will skip API server sync"
I1130 00:00:27.026824 11008 kubelet.go:272] "Adding static pod path" path="/var/lib/rancher/rke2/agent/pod-manifests"
W1130 00:00:27.027058 11008 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {/run/k3s/containerd/containerd.sock <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused". Reconnecting...
E1130 00:00:27.027212 11008 remote_runtime.go:86] "Version from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused\""
E1130 00:00:27.027254 11008 kuberuntime_manager.go:208] "Get runtime version failed" err="get remote runtime typed version failed: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused\""
E1130 00:00:27.027289 11008 server.go:292] "Failed to run kubelet" err="failed to run Kubelet: failed to create kubelet: get remote runtime typed version failed: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused\""
WARN[2021-11-30T00:00:27.027884925Z] cleanup warnings time="2021-11-30T00:00:27Z" level=info msg="starting signal loop" namespace=k8s.io pid=11316
WARN[2021-11-30T00:00:27.028425803Z] cleaning up after shim disconnected id=8dd3373a115cc288077b66374b780c2045d073be8e508411271c6fd3b0ea7fcc namespace=k8s.io
INFO[2021-11-30T00:00:27.028551269Z] cleaning up dead shim
FATA[0000] temporary kubelet process exited unexpectedly: exit status 1
root@rkerepro:~# rke2 server --cluster-reset
WARN[0000] not running in CIS mode
INFO[0000] Running temporary containerd /var/lib/rancher/rke2/bin/containerd -c /var/lib/rancher/rke2/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/rke2/agent/containerd
INFO[0000] Running temporary kubelet /var/lib/rancher/rke2/bin/kubelet --fail-swap-on=false --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests
INFO[2021-11-30T00:02:42.775779690Z] starting containerd revision=f672363350c1d5eb06a11543efeaaaf0c64af989 version=v1.4.11-k3s1
INFO[2021-11-30T00:02:42.798538310Z] loading plugin "io.containerd.content.v1.content"... type=io.containerd.content.v1
INFO[2021-11-30T00:02:42.798628222Z] loading plugin "io.containerd.snapshotter.v1.btrfs"... type=io.containerd.snapshotter.v1
INFO[2021-11-30T00:02:42.798949776Z] skip loading plugin "io.containerd.snapshotter.v1.btrfs"... error="path /var/lib/rancher/rke2/agent/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
INFO[2021-11-30T00:02:42.798980961Z] loading plugin "io.containerd.snapshotter.v1.devmapper"... type=io.containerd.snapshotter.v1
WARN[2021-11-30T00:02:42.799006805Z] failed to load plugin io.containerd.snapshotter.v1.devmapper error="devmapper not configured"
INFO[2021-11-30T00:02:42.799016971Z] loading plugin "io.containerd.snapshotter.v1.native"... type=io.containerd.snapshotter.v1
INFO[2021-11-30T00:02:42.799039093Z] loading plugin "io.containerd.snapshotter.v1.overlayfs"... type=io.containerd.snapshotter.v1
INFO[2021-11-30T00:02:42.799136674Z] loading plugin "io.containerd.metadata.v1.bolt"... type=io.containerd.metadata.v1
WARN[2021-11-30T00:02:42.799159110Z] could not use snapshotter devmapper in metadata plugin error="devmapper not configured"
INFO[2021-11-30T00:02:42.799172633Z] metadata content store policy set policy=shared
INFO[2021-11-30T00:02:42.799316791Z] loading plugin "io.containerd.differ.v1.walking"... type=io.containerd.differ.v1
INFO[2021-11-30T00:02:42.799337277Z] loading plugin "io.containerd.gc.v1.scheduler"... type=io.containerd.gc.v1
INFO[2021-11-30T00:02:42.799367219Z] loading plugin "io.containerd.service.v1.introspection-service"... type=io.containerd.service.v1
INFO[2021-11-30T00:02:42.799394774Z] loading plugin "io.containerd.service.v1.containers-service"... type=io.containerd.service.v1
INFO[2021-11-30T00:02:42.799407512Z] loading plugin "io.containerd.service.v1.content-service"... type=io.containerd.service.v1
INFO[2021-11-30T00:02:42.799425146Z] loading plugin "io.containerd.service.v1.diff-service"... type=io.containerd.service.v1
INFO[2021-11-30T00:02:42.799440938Z] loading plugin "io.containerd.service.v1.images-service"... type=io.containerd.service.v1
INFO[2021-11-30T00:02:42.799454488Z] loading plugin "io.containerd.service.v1.leases-service"... type=io.containerd.service.v1
INFO[2021-11-30T00:02:42.799465056Z] loading plugin "io.containerd.service.v1.namespaces-service"... type=io.containerd.service.v1
INFO[2021-11-30T00:02:42.799477746Z] loading plugin "io.containerd.service.v1.snapshots-service"... type=io.containerd.service.v1
INFO[2021-11-30T00:02:42.799490858Z] loading plugin "io.containerd.runtime.v1.linux"... type=io.containerd.runtime.v1
INFO[2021-11-30T00:02:42.799544505Z] loading plugin "io.containerd.runtime.v2.task"... type=io.containerd.runtime.v2
WARN[2021-11-30T00:02:42.799837124Z] cleaning up after shim disconnected id=8dd3373a115cc288077b66374b780c2045d073be8e508411271c6fd3b0ea7fcc namespace=k8s.io
INFO[2021-11-30T00:02:42.799852026Z] cleaning up dead shim
WARN[2021-11-30T00:02:42.826497347Z] cleanup warnings time="2021-11-30T00:02:42Z" level=info msg="starting signal loop" namespace=k8s.io pid=11388
WARN[2021-11-30T00:02:42.827021235Z] cleaning up after shim disconnected id=94db30cc27f5637634f50322f03e932ac0e0c75d4a5030c1a1e7d546eef133e6 namespace=k8s.io
INFO[2021-11-30T00:02:42.827039490Z] cleaning up dead shim
Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
I1130 00:02:42.847063 11370 server.go:440] "Kubelet version" kubeletVersion="v1.21.6+rke2r1"
I1130 00:02:42.847339 11370 server.go:573] "Standalone mode, no API client"
I1130 00:02:42.847425 11370 server.go:629] "Failed to get the kubelet's cgroup. Kubelet system container metrics may be missing." err="cpu and memory cgroup hierarchy not unified. cpu: /user.slice, memory: /user.slice/user-0.slice/session-1.scope"
WARN[2021-11-30T00:02:42.857624243Z] cleanup warnings time="2021-11-30T00:02:42Z" level=info msg="starting signal loop" namespace=k8s.io pid=11402
WARN[2021-11-30T00:02:42.858069498Z] cleaning up after shim disconnected id=9bc0d95bfe6719589687e000e477356823fd2aaddfb7f3a05dafcf2200a88f49 namespace=k8s.io
INFO[2021-11-30T00:02:42.858083827Z] cleaning up dead shim
WARN[2021-11-30T00:02:42.891354682Z] cleanup warnings time="2021-11-30T00:02:42Z" level=info msg="starting signal loop" namespace=k8s.io pid=11432
WARN[2021-11-30T00:02:42.891839559Z] cleaning up after shim disconnected id=b03287c822c7248e8e2d199c49ab7aaf47d080852ed5f2bd015d4f481c9d4b04 namespace=k8s.io
INFO[2021-11-30T00:02:42.891860570Z] cleaning up dead shim
WARN[2021-11-30T00:02:42.914474002Z] cleanup warnings time="2021-11-30T00:02:42Z" level=info msg="starting signal loop" namespace=k8s.io pid=11463
WARN[2021-11-30T00:02:42.914928993Z] cleaning up after shim disconnected id=b379b1f54e47e2faa90c2034fa3ebf3c1d2052bf2173e94ec04f1f497f51dc19 namespace=k8s.io
INFO[2021-11-30T00:02:42.914948267Z] cleaning up dead shim
WARN[2021-11-30T00:02:42.939844688Z] cleanup warnings time="2021-11-30T00:02:42Z" level=info msg="starting signal loop" namespace=k8s.io pid=11499
WARN[2021-11-30T00:02:42.940261683Z] cleaning up after shim disconnected id=b9405adc533fff06dea89874978f8ec5e14aba820c2ab5c105b4a2a7b5a6265c namespace=k8s.io
INFO[2021-11-30T00:02:42.940282780Z] cleaning up dead shim
I1130 00:02:42.947516 11370 server.go:488] "No api server defined - no events will be sent to API server"
I1130 00:02:42.947549 11370 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
I1130 00:02:42.947906 11370 container_manager_linux.go:278] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
I1130 00:02:42.948073 11370 container_manager_linux.go:283] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
I1130 00:02:42.948158 11370 topology_manager.go:120] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
I1130 00:02:42.948215 11370 container_manager_linux.go:314] "Initializing Topology Manager" policy="none" scope="container"
I1130 00:02:42.948261 11370 container_manager_linux.go:319] "Creating device plugin manager" devicePluginEnabled=true
I1130 00:02:42.948374 11370 remote_runtime.go:62] parsed scheme: ""
I1130 00:02:42.948409 11370 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
I1130 00:02:42.948463 11370 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/k3s/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}
I1130 00:02:42.948497 11370 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1130 00:02:42.948584 11370 remote_image.go:50] parsed scheme: ""
I1130 00:02:42.948617 11370 remote_image.go:50] scheme "" not registered, fallback to default scheme
I1130 00:02:42.948655 11370 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/k3s/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}
I1130 00:02:42.948686 11370 clientconn.go:948] ClientConn switching balancer to "pick_first"
W1130 00:02:42.948783 11370 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {/run/k3s/containerd/containerd.sock <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused". Reconnecting...
I1130 00:02:42.948840 11370 kubelet.go:410] "Kubelet is running in standalone mode, will skip API server sync"
I1130 00:02:42.948884 11370 kubelet.go:272] "Adding static pod path" path="/var/lib/rancher/rke2/agent/pod-manifests"
E1130 00:02:42.949147 11370 remote_runtime.go:86] "Version from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused\""
E1130 00:02:42.949330 11370 kuberuntime_manager.go:208] "Get runtime version failed" err="get remote runtime typed version failed: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused\""
E1130 00:02:42.949391 11370 server.go:292] "Failed to run kubelet" err="failed to run Kubelet: failed to create kubelet: get remote runtime typed version failed: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused\""
FATA[0000] temporary kubelet process exited unexpectedly: exit status 1
I tried the cluster reset one more time after waiting 10 minutes, and it seemed to work.
root@rkerepro:~# rke2 server --cluster-reset
WARN[0000] not running in CIS mode
INFO[0000] Running temporary containerd /var/lib/rancher/rke2/bin/containerd -c /var/lib/rancher/rke2/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/rke2/agent/containerd
INFO[0000] Running temporary kubelet /var/lib/rancher/rke2/bin/kubelet --fail-swap-on=false --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests
INFO[2021-11-30T00:07:07.109447581Z] starting containerd revision=f672363350c1d5eb06a11543efeaaaf0c64af989 version=v1.4.11-k3s1
INFO[2021-11-30T00:07:07.132824657Z] loading plugin "io.containerd.content.v1.content"... type=io.containerd.content.v1
INFO[2021-11-30T00:07:07.132910302Z] loading plugin "io.containerd.snapshotter.v1.btrfs"... type=io.containerd.snapshotter.v1
INFO[2021-11-30T00:07:07.133209739Z] skip loading plugin "io.containerd.snapshotter.v1.btrfs"... error="path /var/lib/rancher/rke2/agent/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
INFO[2021-11-30T00:07:07.133242365Z] loading plugin "io.containerd.snapshotter.v1.devmapper"... type=io.containerd.snapshotter.v1
WARN[2021-11-30T00:07:07.133270771Z] failed to load plugin io.containerd.snapshotter.v1.devmapper error="devmapper not configured"
INFO[2021-11-30T00:07:07.133287346Z] loading plugin "io.containerd.snapshotter.v1.native"... type=io.containerd.snapshotter.v1
INFO[2021-11-30T00:07:07.133311765Z] loading plugin "io.containerd.snapshotter.v1.overlayfs"... type=io.containerd.snapshotter.v1
INFO[2021-11-30T00:07:07.133417125Z] loading plugin "io.containerd.metadata.v1.bolt"... type=io.containerd.metadata.v1
WARN[2021-11-30T00:07:07.133446472Z] could not use snapshotter devmapper in metadata plugin error="devmapper not configured"
INFO[2021-11-30T00:07:07.133461004Z] metadata content store policy set policy=shared
INFO[2021-11-30T00:07:07.133582786Z] loading plugin "io.containerd.differ.v1.walking"... type=io.containerd.differ.v1
INFO[2021-11-30T00:07:07.133612705Z] loading plugin "io.containerd.gc.v1.scheduler"... type=io.containerd.gc.v1
INFO[2021-11-30T00:07:07.133673259Z] loading plugin "io.containerd.service.v1.introspection-service"... type=io.containerd.service.v1
INFO[2021-11-30T00:07:07.133731519Z] loading plugin "io.containerd.service.v1.containers-service"... type=io.containerd.service.v1
INFO[2021-11-30T00:07:07.133751726Z] loading plugin "io.containerd.service.v1.content-service"... type=io.containerd.service.v1
INFO[2021-11-30T00:07:07.133767745Z] loading plugin "io.containerd.service.v1.diff-service"... type=io.containerd.service.v1
INFO[2021-11-30T00:07:07.133785424Z] loading plugin "io.containerd.service.v1.images-service"... type=io.containerd.service.v1
INFO[2021-11-30T00:07:07.133806239Z] loading plugin "io.containerd.service.v1.leases-service"... type=io.containerd.service.v1
INFO[2021-11-30T00:07:07.133823777Z] loading plugin "io.containerd.service.v1.namespaces-service"... type=io.containerd.service.v1
INFO[2021-11-30T00:07:07.133842714Z] loading plugin "io.containerd.service.v1.snapshots-service"... type=io.containerd.service.v1
INFO[2021-11-30T00:07:07.133859545Z] loading plugin "io.containerd.runtime.v1.linux"... type=io.containerd.runtime.v1
INFO[2021-11-30T00:07:07.133910685Z] loading plugin "io.containerd.runtime.v2.task"... type=io.containerd.runtime.v2
INFO[2021-11-30T00:07:07.134014357Z] loading plugin "io.containerd.monitor.v1.cgroups"... type=io.containerd.monitor.v1
INFO[2021-11-30T00:07:07.134470686Z] loading plugin "io.containerd.service.v1.tasks-service"... type=io.containerd.service.v1
INFO[2021-11-30T00:07:07.134520361Z] loading plugin "io.containerd.internal.v1.restart"... type=io.containerd.internal.v1
INFO[2021-11-30T00:07:07.134583449Z] loading plugin "io.containerd.grpc.v1.containers"... type=io.containerd.grpc.v1
INFO[2021-11-30T00:07:07.134603315Z] loading plugin "io.containerd.grpc.v1.content"... type=io.containerd.grpc.v1
INFO[2021-11-30T00:07:07.134620682Z] loading plugin "io.containerd.grpc.v1.diff"... type=io.containerd.grpc.v1
INFO[2021-11-30T00:07:07.134659825Z] loading plugin "io.containerd.grpc.v1.events"... type=io.containerd.grpc.v1
INFO[2021-11-30T00:07:07.134676679Z] loading plugin "io.containerd.grpc.v1.healthcheck"... type=io.containerd.grpc.v1
INFO[2021-11-30T00:07:07.134696029Z] loading plugin "io.containerd.grpc.v1.images"... type=io.containerd.grpc.v1
INFO[2021-11-30T00:07:07.134713699Z] loading plugin "io.containerd.grpc.v1.leases"... type=io.containerd.grpc.v1
INFO[2021-11-30T00:07:07.134730412Z] loading plugin "io.containerd.grpc.v1.namespaces"... type=io.containerd.grpc.v1
INFO[2021-11-30T00:07:07.134747761Z] loading plugin "io.containerd.internal.v1.opt"... type=io.containerd.internal.v1
INFO[2021-11-30T00:07:07.134802021Z] loading plugin "io.containerd.grpc.v1.snapshots"... type=io.containerd.grpc.v1
INFO[2021-11-30T00:07:07.134821958Z] loading plugin "io.containerd.grpc.v1.tasks"... type=io.containerd.grpc.v1
INFO[2021-11-30T00:07:07.134841731Z] loading plugin "io.containerd.grpc.v1.version"... type=io.containerd.grpc.v1
INFO[2021-11-30T00:07:07.134866459Z] loading plugin "io.containerd.grpc.v1.cri"... type=io.containerd.grpc.v1
INFO[2021-11-30T00:07:07.135001313Z] Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} UntrustedWorkloadRuntime:{Type: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false BaseRuntimeSpec:}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate:} Registry:{Mirrors:map[docker.io:{Endpoints:[https://registry-1.docker.io] Rewrites:map[]}] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:10010 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:index.docker.io/rancher/pause:3.5 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true IgnoreImageDefinedVolumes:false} ContainerdRootDir:/var/lib/rancher/rke2/agent/containerd ContainerdEndpoint:/run/k3s/containerd/containerd.sock RootDir:/var/lib/rancher/rke2/agent/containerd/io.containerd.grpc.v1.cri StateDir:/run/k3s/containerd/io.containerd.grpc.v1.cri}
INFO[2021-11-30T00:07:07.135051890Z] Connect containerd service
INFO[2021-11-30T00:07:07.135119121Z] Get image filesystem path "/var/lib/rancher/rke2/agent/containerd/io.containerd.snapshotter.v1.overlayfs"
INFO[2021-11-30T00:07:07.135715217Z] loading plugin "io.containerd.grpc.v1.introspection"... type=io.containerd.grpc.v1
INFO[2021-11-30T00:07:07.135778967Z] Start subscribing containerd event
INFO[2021-11-30T00:07:07.135849712Z] Start recovering state
INFO[2021-11-30T00:07:07.136797419Z] serving... address=/run/k3s/containerd/containerd.sock.ttrpc
INFO[2021-11-30T00:07:07.136849436Z] serving... address=/run/k3s/containerd/containerd.sock
INFO[2021-11-30T00:07:07.136871262Z] containerd successfully booted in 0.028078s
Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
I1130 00:07:07.189431 11754 server.go:440] "Kubelet version" kubeletVersion="v1.21.6+rke2r1"
I1130 00:07:07.190004 11754 server.go:573] "Standalone mode, no API client"
I1130 00:07:07.190160 11754 server.go:629] "Failed to get the kubelet's cgroup. Kubelet system container metrics may be missing." err="cpu and memory cgroup hierarchy not unified. cpu: /user.slice, memory: /user.slice/user-0.slice/session-1.scope"
INFO[2021-11-30T00:07:07.223094682Z] Start event monitor
INFO[2021-11-30T00:07:07.223143321Z] Start snapshots syncer
INFO[2021-11-30T00:07:07.223154816Z] Start cni network conf syncer
INFO[2021-11-30T00:07:07.223162299Z] Start streaming server
I1130 00:07:07.282226 11754 server.go:488] "No api server defined - no events will be sent to API server"
I1130 00:07:07.282257 11754 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
I1130 00:07:07.282502 11754 container_manager_linux.go:278] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
I1130 00:07:07.282605 11754 container_manager_linux.go:283] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
I1130 00:07:07.282638 11754 topology_manager.go:120] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
I1130 00:07:07.282655 11754 container_manager_linux.go:314] "Initializing Topology Manager" policy="none" scope="container"
I1130 00:07:07.282664 11754 container_manager_linux.go:319] "Creating device plugin manager" devicePluginEnabled=true
I1130 00:07:07.282756 11754 remote_runtime.go:62] parsed scheme: ""
I1130 00:07:07.282771 11754 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
I1130 00:07:07.282865 11754 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/k3s/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}
I1130 00:07:07.282878 11754 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1130 00:07:07.282945 11754 remote_image.go:50] parsed scheme: ""
I1130 00:07:07.282954 11754 remote_image.go:50] scheme "" not registered, fallback to default scheme
I1130 00:07:07.282961 11754 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/k3s/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}
I1130 00:07:07.282965 11754 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1130 00:07:07.283018 11754 kubelet.go:410] "Kubelet is running in standalone mode, will skip API server sync"
I1130 00:07:07.283032 11754 kubelet.go:272] "Adding static pod path" path="/var/lib/rancher/rke2/agent/pod-manifests"
I1130 00:07:07.284489 11754 kuberuntime_manager.go:222] "Container runtime initialized" containerRuntime="containerd" version="v1.4.11-k3s1" apiVersion="v1alpha2"
E1130 00:07:07.614766 11754 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
I1130 00:07:07.615062 11754 volume_host.go:75] "KubeClient is nil. Skip initialization of CSIDriverLister"
W1130 00:07:07.615404 11754 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
W1130 00:07:07.615832 11754 csi_plugin.go:189] kubernetes.io/csi: kubeclient not set, assuming standalone kubelet
W1130 00:07:07.615858 11754 csi_plugin.go:262] Skipping CSINode initialization, kubelet running in standalone mode
I1130 00:07:07.616220 11754 server.go:1190] "Started kubelet"
I1130 00:07:07.616269 11754 kubelet.go:1414] "No API server defined - no node status update will be sent"
I1130 00:07:07.616284 11754 server.go:149] "Starting to listen" address="0.0.0.0" port=10250
I1130 00:07:07.616880 11754 server.go:176] "Starting to listen read-only" address="0.0.0.0" port=10255
I1130 00:07:07.617329 11754 server.go:409] "Adding debug handlers to kubelet server"
I1130 00:07:07.617393 11754 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
I1130 00:07:07.617976 11754 volume_manager.go:271] "Starting Kubelet Volume Manager"
I1130 00:07:07.618034 11754 desired_state_of_world_populator.go:141] "Desired state populator starts to run"
I1130 00:07:07.618801 11754 client.go:86] parsed scheme: "unix"
I1130 00:07:07.618822 11754 client.go:86] scheme "unix" not registered, fallback to default scheme
I1130 00:07:07.618897 11754 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/k3s/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}
I1130 00:07:07.618911 11754 clientconn.go:948] ClientConn switching balancer to "pick_first"
E1130 00:07:07.623021 11754 cri_stats_provider.go:369] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/rancher/rke2/agent/containerd/io.containerd.snapshotter.v1.overlayfs"
E1130 00:07:07.623071 11754 kubelet.go:1306] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
I1130 00:07:07.680172 11754 kubelet_network_linux.go:56] "Initialized protocol iptables rules." protocol=IPv4
I1130 00:07:07.697289 11754 kubelet_network_linux.go:56] "Initialized protocol iptables rules." protocol=IPv6
I1130 00:07:07.697340 11754 status_manager.go:153] "Kubernetes client is nil, not starting status manager"
I1130 00:07:07.697359 11754 kubelet.go:1846] "Starting kubelet main sync loop"
E1130 00:07:07.697429 11754 kubelet.go:1870] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
I1130 00:07:07.766383 11754 cpu_manager.go:199] "Starting CPU manager" policy="none"
I1130 00:07:07.766408 11754 cpu_manager.go:200] "Reconciling" reconcilePeriod="10s"
I1130 00:07:07.766441 11754 state_mem.go:36] "Initialized new in-memory state store"
I1130 00:07:07.766607 11754 state_mem.go:88] "Updated default CPUSet" cpuSet=""
I1130 00:07:07.766625 11754 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
I1130 00:07:07.766633 11754 policy_none.go:44] "None policy: Start"
I1130 00:07:07.767429 11754 manager.go:600] "Failed to retrieve checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
I1130 00:07:07.767727 11754 plugin_manager.go:114] "Starting Kubelet Plugin Manager"
E1130 00:07:07.767977 11754 container_manager_linux.go:549] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified. cpu: /user.slice, memory: /user.slice/user-0.slice/session-1.scope"
I1130 00:07:07.798148 11754 topology_manager.go:187] "Topology Admit Handler"
I1130 00:07:07.815886 11754 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="b379b1f54e47e2faa90c2034fa3ebf3c1d2052bf2173e94ec04f1f497f51dc19"
I1130 00:07:07.815918 11754 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="f9ba9f64a47964ab0d5ca6f531c152350c6996a4c0cd2a6d4beb4a08d69344c0"
I1130 00:07:07.816040 11754 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="880cd05d9321f6ece6a762a0c5ee3f85bad621c20f43975d448bba019f2fb3df"
I1130 00:07:07.816066 11754 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="702b2de2f87a5b44d23ca261a2989e144f53d7970aab7b72d770d93c308ac5b8"
I1130 00:07:07.816102 11754 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="9bc0d95bfe6719589687e000e477356823fd2aaddfb7f3a05dafcf2200a88f49"
I1130 00:07:07.816141 11754 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="94db30cc27f5637634f50322f03e932ac0e0c75d4a5030c1a1e7d546eef133e6"
I1130 00:07:07.816180 11754 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="43f0e0e1c4b9f19d52db59666dc8b323f4b9dc6ac9f5115bcc5f0a510158b552"
I1130 00:07:07.816214 11754 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="314fd37876dc847355c16f1e288acaacd644b5bd7987595d90cb3634d9fd7848"
I1130 00:07:07.816250 11754 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="7aec53429cc276bfb15dbfe2e037f20777096e1c94f6bbf3046df0ef745bc4b3"
I1130 00:07:07.816283 11754 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="2a03c0f49a63243be12d2fe3ec25b5149f4b82a9586d2fd5832a060f020c92c7"
I1130 00:07:07.816315 11754 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="52961604420c8beb32f95f332ba8cfc63644d0acea19a1a0f9d4d5bade90de12"
I1130 00:07:07.816394 11754 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="393e50960dd3d38472f5c1fa74056edc3468d157dfbe8aa7fe2c6414c0671e0a"
I1130 00:07:07.816432 11754 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="40bf1a575c99a8078bb54c5f250f5ff6cb14d607f7a3d1f7f3e8d99d15f45190"
I1130 00:07:07.816471 11754 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="04641ac18bbce2c220c9ddd1b64431d418b81dc1ab324b5f21ba83b39c8a25d2"
I1130 00:07:07.816504 11754 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="930395507ac9385be8131bff50e36bbe24c389b819aeaef1c041f004ff55636e"
I1130 00:07:07.921516 11754 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file0\" (UniqueName: \"kubernetes.io/host-path/d9421707e6935a631cb645aa00cd0c22-file0\") pod \"kube-proxy-rkerepro\" (UID: \"d9421707e6935a631cb645aa00cd0c22\") "
I1130 00:07:07.921597 11754 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file1\" (UniqueName: \"kubernetes.io/host-path/d9421707e6935a631cb645aa00cd0c22-file1\") pod \"kube-proxy-rkerepro\" (UID: \"d9421707e6935a631cb645aa00cd0c22\") "
I1130 00:07:07.921791 11754 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file2\" (UniqueName: \"kubernetes.io/host-path/d9421707e6935a631cb645aa00cd0c22-file2\") pod \"kube-proxy-rkerepro\" (UID: \"d9421707e6935a631cb645aa00cd0c22\") "
I1130 00:07:07.921825 11754 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file3\" (UniqueName: \"kubernetes.io/host-path/d9421707e6935a631cb645aa00cd0c22-file3\") pod \"kube-proxy-rkerepro\" (UID: \"d9421707e6935a631cb645aa00cd0c22\") "
I1130 00:07:07.921839 11754 reconciler.go:157] "Reconciler: start to sync state"
E1130 00:07:07.922651 11754 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 249 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x437fce0, 0x7281460)
/go/src/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/go/src/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
panic(0x437fce0, 0x7281460)
/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler.(*reconciler).updateDevicePath(0xc00082c0c0, 0xc001155c88)
/go/src/kubernetes/pkg/kubelet/volumemanager/reconciler/reconciler.go:598 +0x41
k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler.(*reconciler).updateStates(0xc00082c0c0, 0xc001155c88, 0xc0018c0780, 0x52)
/go/src/kubernetes/pkg/kubelet/volumemanager/reconciler/reconciler.go:629 +0x50
k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler.(*reconciler).syncStates(0xc00082c0c0)
/go/src/kubernetes/pkg/kubelet/volumemanager/reconciler/reconciler.go:434 +0x3d7
k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler.(*reconciler).sync(0xc00082c0c0)
/go/src/kubernetes/pkg/kubelet/volumemanager/reconciler/reconciler.go:348 +0x4e
k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler.(*reconciler).reconciliationLoopFunc.func1()
/go/src/kubernetes/pkg/kubelet/volumemanager/reconciler/reconciler.go:158 +0x118
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000775580)
/go/src/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000775580, 0x534c8c0, 0xc000736300, 0x451b801, 0xc0000a60c0)
/go/src/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0x9b
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000775580, 0x5f5e100, 0x0, 0x1, 0xc0000a60c0)
/go/src/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(...)
/go/src/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler.(*reconciler).Run(0xc00082c0c0, 0xc0000a60c0)
/go/src/kubernetes/pkg/kubelet/volumemanager/reconciler/reconciler.go:146 +0x5f
created by k8s.io/kubernetes/pkg/kubelet/volumemanager.(*volumeManager).Run
/go/src/kubernetes/pkg/kubelet/volumemanager/volume_manager.go:272 +0x1df
INFO[2021-11-30T00:07:08.134990272Z] StopPodSandbox for "393e50960dd3d38472f5c1fa74056edc3468d157dfbe8aa7fe2c6414c0671e0a"
INFO[2021-11-30T00:07:08.135097642Z] Container to stop "baecbe835e1ee84c3b8d1e182822d44e0a9dc176fddb6f0297b41394ac199c5d" must be in running or unknown state, current state "CONTAINER_EXITED"
INFO[2021-11-30T00:07:08.135201144Z] TearDown network for sandbox "393e50960dd3d38472f5c1fa74056edc3468d157dfbe8aa7fe2c6414c0671e0a" successfully
INFO[2021-11-30T00:07:08.135219456Z] StopPodSandbox for "393e50960dd3d38472f5c1fa74056edc3468d157dfbe8aa7fe2c6414c0671e0a" returns successfully
INFO[2021-11-30T00:07:08.135817669Z] RunPodsandbox for &PodSandboxMetadata{Name:kube-proxy-rkerepro,Uid:d9421707e6935a631cb645aa00cd0c22,Namespace:kube-system,Attempt:1,}
time="2021-11-30T00:07:08.165787057Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/62a8ed8f6c4482ea640ddc7a707100e59fdc2e9860eebc52d0591dcf8bcd5a42 pid=11972
INFO[2021-11-30T00:07:08.247075166Z] RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rkerepro,Uid:d9421707e6935a631cb645aa00cd0c22,Namespace:kube-system,Attempt:1,} returns sandbox id "62a8ed8f6c4482ea640ddc7a707100e59fdc2e9860eebc52d0591dcf8bcd5a42"
INFO[2021-11-30T00:07:08.250092300Z] CreateContainer within sandbox "62a8ed8f6c4482ea640ddc7a707100e59fdc2e9860eebc52d0591dcf8bcd5a42" for container &ContainerMetadata{Name:kube-proxy,Attempt:1,}
INFO[2021-11-30T00:07:08.283204822Z] CreateContainer within sandbox "62a8ed8f6c4482ea640ddc7a707100e59fdc2e9860eebc52d0591dcf8bcd5a42" for &ContainerMetadata{Name:kube-proxy,Attempt:1,} returns container id "b6e973e0b383f6be4b3a4cfe773210eb5f55751e3f9fa5f45696fe332566e92d"
INFO[2021-11-30T00:07:08.283882543Z] StartContainer for "b6e973e0b383f6be4b3a4cfe773210eb5f55751e3f9fa5f45696fe332566e92d"
INFO[2021-11-30T00:07:08.408457071Z] StartContainer for "b6e973e0b383f6be4b3a4cfe773210eb5f55751e3f9fa5f45696fe332566e92d" returns successfully
I1130 00:07:09.701469 11754 kubelet_volumes.go:140] "Cleaned up orphaned pod volumes dir" podUID=0bf7bd73f1fc6642cc3bbaa3d4669206 path="/var/lib/kubelet/pods/0bf7bd73f1fc6642cc3bbaa3d4669206/volumes"
I1130 00:07:09.701833 11754 kubelet_volumes.go:140] "Cleaned up orphaned pod volumes dir" podUID=29f8866f-d16f-4d87-97c5-7725e37c1ded path="/var/lib/kubelet/pods/29f8866f-d16f-4d87-97c5-7725e37c1ded/volumes"
I1130 00:07:09.702092 11754 kubelet_volumes.go:140] "Cleaned up orphaned pod volumes dir" podUID=2a71fe40132090246b0a438b5154cb5c path="/var/lib/kubelet/pods/2a71fe40132090246b0a438b5154cb5c/volumes"
I1130 00:07:09.702391 11754 kubelet_volumes.go:140] "Cleaned up orphaned pod volumes dir" podUID=2c0fbdfa-38a1-478c-a104-30d55a26d61d path="/var/lib/kubelet/pods/2c0fbdfa-38a1-478c-a104-30d55a26d61d/volumes"
I1130 00:07:09.702701 11754 kubelet_volumes.go:113] "Cleaned up orphaned volume from pod" podUID=39cfa982-dd90-403c-993e-e0223db5c110 path="/var/lib/kubelet/pods/39cfa982-dd90-403c-993e-e0223db5c110/volumes/kubernetes.io~projected/kube-api-access-knlct"
I1130 00:07:09.702861 11754 kubelet_volumes.go:140] "Cleaned up orphaned pod volumes dir" podUID=666f76e0a2fef8ee3bf1a360b0032269 path="/var/lib/kubelet/pods/666f76e0a2fef8ee3bf1a360b0032269/volumes"
I1130 00:07:09.703102 11754 kubelet_volumes.go:113] "Cleaned up orphaned volume from pod" podUID=67c0c232-e26c-4acc-9e94-badc34311adc path="/var/lib/kubelet/pods/67c0c232-e26c-4acc-9e94-badc34311adc/volumes/kubernetes.io~projected/kube-api-access-6gjgq"
I1130 00:07:09.703397 11754 kubelet_volumes.go:140] "Cleaned up orphaned pod volumes dir" podUID=a01e344d-5fce-4c5d-b6ef-0acc6a3952e1 path="/var/lib/kubelet/pods/a01e344d-5fce-4c5d-b6ef-0acc6a3952e1/volumes"
I1130 00:07:09.703668 11754 kubelet_volumes.go:140] "Cleaned up orphaned pod volumes dir" podUID=bd1071897270512a5a3288d373e9f568 path="/var/lib/kubelet/pods/bd1071897270512a5a3288d373e9f568/volumes"
I1130 00:07:09.703922 11754 kubelet_volumes.go:140] "Cleaned up orphaned pod volumes dir" podUID=c09b1d53-f653-4fd1-9460-e242428a289e path="/var/lib/kubelet/pods/c09b1d53-f653-4fd1-9460-e242428a289e/volumes"
I1130 00:07:09.704187 11754 kubelet_volumes.go:113] "Cleaned up orphaned volume from pod" podUID=d3b0b85e-b9fa-48ae-88ad-983bcf3cccff path="/var/lib/kubelet/pods/d3b0b85e-b9fa-48ae-88ad-983bcf3cccff/volumes/kubernetes.io~projected/kube-api-access-chzhf"
I1130 00:07:09.704336 11754 kubelet_volumes.go:140] "Cleaned up orphaned pod volumes dir" podUID=d5d54b87157d42af3a6ce636074f476e path="/var/lib/kubelet/pods/d5d54b87157d42af3a6ce636074f476e/volumes"
I1130 00:07:09.704580 11754 kubelet_volumes.go:113] "Cleaned up orphaned volume from pod" podUID=eb3e1761-57af-4915-93ca-d3b41d2c79a8 path="/var/lib/kubelet/pods/eb3e1761-57af-4915-93ca-d3b41d2c79a8/volumes/kubernetes.io~projected/kube-api-access-fzf6w"
I1130 00:07:09.704610 11754 kubelet_volumes.go:113] "Cleaned up orphaned volume from pod" podUID=eb3e1761-57af-4915-93ca-d3b41d2c79a8 path="/var/lib/kubelet/pods/eb3e1761-57af-4915-93ca-d3b41d2c79a8/volumes/kubernetes.io~secret/webhook-cert"
I1130 00:07:09.704684 11754 kubelet_volumes.go:140] "Cleaned up orphaned pod volumes dir" podUID=eb3e1761-57af-4915-93ca-d3b41d2c79a8 path="/var/lib/kubelet/pods/eb3e1761-57af-4915-93ca-d3b41d2c79a8/volumes"
I1130 00:07:09.705009 11754 kubelet_volumes.go:113] "Cleaned up orphaned volume from pod" podUID=f650aa93-28e4-49a4-82a2-1202acfe998a path="/var/lib/kubelet/pods/f650aa93-28e4-49a4-82a2-1202acfe998a/volumes/kubernetes.io~projected/kube-api-access-xsdcr"
I1130 00:07:09.705069 11754 kubelet_volumes.go:140] "Cleaned up orphaned pod volumes dir" podUID=f650aa93-28e4-49a4-82a2-1202acfe998a path="/var/lib/kubelet/pods/f650aa93-28e4-49a4-82a2-1202acfe998a/volumes"
E1130 00:07:09.705287 11754 kubelet_volumes.go:225] "There were many similar errors. Turn up verbosity to see them." err="orphaned pod \"39cfa982-dd90-403c-993e-e0223db5c110\" found, but failed to rmdir() volume at path /var/lib/kubelet/pods/39cfa982-dd90-403c-993e-e0223db5c110/volumes/kubernetes.io~empty-dir/tmp: directory not empty" numErrs=6
I1130 00:07:09.705316 11754 kubelet_getters.go:300] "Path does not exist" path="/var/lib/kubelet/pods/eb3e1761-57af-4915-93ca-d3b41d2c79a8/volumes"
I1130 00:07:09.705367 11754 kubelet_getters.go:300] "Path does not exist" path="/var/lib/kubelet/pods/f650aa93-28e4-49a4-82a2-1202acfe998a/volumes"
I1130 00:07:09.705385 11754 kubelet_getters.go:300] "Path does not exist" path="/var/lib/kubelet/pods/bd1071897270512a5a3288d373e9f568/volumes"
I1130 00:07:09.705398 11754 kubelet_getters.go:300] "Path does not exist" path="/var/lib/kubelet/pods/0bf7bd73f1fc6642cc3bbaa3d4669206/volumes"
I1130 00:07:09.705415 11754 kubelet_getters.go:300] "Path does not exist" path="/var/lib/kubelet/pods/2a71fe40132090246b0a438b5154cb5c/volumes"
I1130 00:07:09.705474 11754 kubelet_getters.go:300] "Path does not exist" path="/var/lib/kubelet/pods/666f76e0a2fef8ee3bf1a360b0032269/volumes"
I1130 00:07:09.705537 11754 kubelet_getters.go:300] "Path does not exist" path="/var/lib/kubelet/pods/d5d54b87157d42af3a6ce636074f476e/volumes"
E1130 00:07:11.700845 11754 kubelet_volumes.go:225] "There were many similar errors. Turn up verbosity to see them." err="orphaned pod \"39cfa982-dd90-403c-993e-e0223db5c110\" found, but failed to rmdir() volume at path /var/lib/kubelet/pods/39cfa982-dd90-403c-993e-e0223db5c110/volumes/kubernetes.io~empty-dir/tmp: directory not empty" numErrs=6
E1130 00:07:13.701447 11754 kubelet_volumes.go:225] "There were many similar errors. Turn up verbosity to see them." err="orphaned pod \"39cfa982-dd90-403c-993e-e0223db5c110\" found, but failed to rmdir() volume at path /var/lib/kubelet/pods/39cfa982-dd90-403c-993e-e0223db5c110/volumes/kubernetes.io~empty-dir/tmp: directory not empty" numErrs=6
E1130 00:07:15.701624 11754 kubelet_volumes.go:225] "There were many similar errors. Turn up verbosity to see them." err="orphaned pod \"39cfa982-dd90-403c-993e-e0223db5c110\" found, but failed to rmdir() volume at path /var/lib/kubelet/pods/39cfa982-dd90-403c-993e-e0223db5c110/volumes/kubernetes.io~empty-dir/tmp: directory not empty" numErrs=6
INFO[0010] Waiting for deletion of kube-scheduler static pod
E1130 00:07:17.700716 11754 kubelet_volumes.go:225] "There were many similar errors. Turn up verbosity to see them." err="orphaned pod \"39cfa982-dd90-403c-993e-e0223db5c110\" found, but failed to rmdir() volume at path /var/lib/kubelet/pods/39cfa982-dd90-403c-993e-e0223db5c110/volumes/kubernetes.io~empty-dir/tmp: directory not empty" numErrs=6
E1130 00:07:19.704453 11754 kubelet_volumes.go:225] "There were many similar errors. Turn up verbosity to see them." err="orphaned pod \"39cfa982-dd90-403c-993e-e0223db5c110\" found, but failed to rmdir() volume at path /var/lib/kubelet/pods/39cfa982-dd90-403c-993e-e0223db5c110/volumes/kubernetes.io~empty-dir/tmp: directory not empty" numErrs=6
E1130 00:07:21.701240 11754 kubelet_volumes.go:225] "There were many similar errors. Turn up verbosity to see them." err="orphaned pod \"39cfa982-dd90-403c-993e-e0223db5c110\" found, but failed to rmdir() volume at path /var/lib/kubelet/pods/39cfa982-dd90-403c-993e-e0223db5c110/volumes/kubernetes.io~empty-dir/tmp: directory not empty" numErrs=6
E1130 00:07:23.701180 11754 kubelet_volumes.go:225] "There were many similar errors. Turn up verbosity to see them." err="orphaned pod \"39cfa982-dd90-403c-993e-e0223db5c110\" found, but failed to rmdir() volume at path /var/lib/kubelet/pods/39cfa982-dd90-403c-993e-e0223db5c110/volumes/kubernetes.io~empty-dir/tmp: directory not empty" numErrs=6
E1130 00:07:25.701835 11754 kubelet_volumes.go:225] "There were many similar errors. Turn up verbosity to see them." err="orphaned pod \"39cfa982-dd90-403c-993e-e0223db5c110\" found, but failed to rmdir() volume at path /var/lib/kubelet/pods/39cfa982-dd90-403c-993e-e0223db5c110/volumes/kubernetes.io~empty-dir/tmp: directory not empty" numErrs=6
INFO[0020] Waiting for deletion of cloud-controller-manager static pod
E1130 00:07:27.702560 11754 kubelet_volumes.go:225] "There were many similar errors. Turn up verbosity to see them." err="orphaned pod \"39cfa982-dd90-403c-993e-e0223db5c110\" found, but failed to rmdir() volume at path /var/lib/kubelet/pods/39cfa982-dd90-403c-993e-e0223db5c110/volumes/kubernetes.io~empty-dir/tmp: directory not empty" numErrs=6
E1130 00:07:29.701299 11754 kubelet_volumes.go:225] "There were many similar errors. Turn up verbosity to see them." err="orphaned pod \"39cfa982-dd90-403c-993e-e0223db5c110\" found, but failed to rmdir() volume at path /var/lib/kubelet/pods/39cfa982-dd90-403c-993e-e0223db5c110/volumes/kubernetes.io~empty-dir/tmp: directory not empty" numErrs=6
E1130 00:07:31.701394 11754 kubelet_volumes.go:225] "There were many similar errors. Turn up verbosity to see them." err="orphaned pod \"39cfa982-dd90-403c-993e-e0223db5c110\" found, but failed to rmdir() volume at path /var/lib/kubelet/pods/39cfa982-dd90-403c-993e-e0223db5c110/volumes/kubernetes.io~empty-dir/tmp: directory not empty" numErrs=6
E1130 00:07:33.701620 11754 kubelet_volumes.go:225] "There were many similar errors. Turn up verbosity to see them." err="orphaned pod \"39cfa982-dd90-403c-993e-e0223db5c110\" found, but failed to rmdir() volume at path /var/lib/kubelet/pods/39cfa982-dd90-403c-993e-e0223db5c110/volumes/kubernetes.io~empty-dir/tmp: directory not empty" numErrs=6
E1130 00:07:35.700970 11754 kubelet_volumes.go:225] "There were many similar errors. Turn up verbosity to see them." err="orphaned pod \"39cfa982-dd90-403c-993e-e0223db5c110\" found, but failed to rmdir() volume at path /var/lib/kubelet/pods/39cfa982-dd90-403c-993e-e0223db5c110/volumes/kubernetes.io~empty-dir/tmp: directory not empty" numErrs=6
INFO[0030] Waiting for deletion of kube-apiserver static pod
E1130 00:07:37.701455 11754 kubelet_volumes.go:225] "There were many similar errors. Turn up verbosity to see them." err="orphaned pod \"39cfa982-dd90-403c-993e-e0223db5c110\" found, but failed to rmdir() volume at path /var/lib/kubelet/pods/39cfa982-dd90-403c-993e-e0223db5c110/volumes/kubernetes.io~empty-dir/tmp: directory not empty" numErrs=6
E1130 00:07:39.701613 11754 kubelet_volumes.go:225] "There were many similar errors. Turn up verbosity to see them." err="orphaned pod \"39cfa982-dd90-403c-993e-e0223db5c110\" found, but failed to rmdir() volume at path /var/lib/kubelet/pods/39cfa982-dd90-403c-993e-e0223db5c110/volumes/kubernetes.io~empty-dir/tmp: directory not empty" numErrs=6
E1130 00:07:41.700916 11754 kubelet_volumes.go:225] "There were many similar errors. Turn up verbosity to see them." err="orphaned pod \"39cfa982-dd90-403c-993e-e0223db5c110\" found, but failed to rmdir() volume at path /var/lib/kubelet/pods/39cfa982-dd90-403c-993e-e0223db5c110/volumes/kubernetes.io~empty-dir/tmp: directory not empty" numErrs=6
E1130 00:07:43.701336 11754 kubelet_volumes.go:225] "There were many similar errors. Turn up verbosity to see them." err="orphaned pod \"39cfa982-dd90-403c-993e-e0223db5c110\" found, but failed to rmdir() volume at path /var/lib/kubelet/pods/39cfa982-dd90-403c-993e-e0223db5c110/volumes/kubernetes.io~empty-dir/tmp: directory not empty" numErrs=6
E1130 00:07:45.702195 11754 kubelet_volumes.go:225] "There were many similar errors. Turn up verbosity to see them." err="orphaned pod \"39cfa982-dd90-403c-993e-e0223db5c110\" found, but failed to rmdir() volume at path /var/lib/kubelet/pods/39cfa982-dd90-403c-993e-e0223db5c110/volumes/kubernetes.io~empty-dir/tmp: directory not empty" numErrs=6
INFO[0040] Waiting for deletion of cloud-controller-manager static pod
E1130 00:07:47.700689 11754 kubelet_volumes.go:225] "There were many similar errors. Turn up verbosity to see them." err="orphaned pod \"39cfa982-dd90-403c-993e-e0223db5c110\" found, but failed to rmdir() volume at path /var/lib/kubelet/pods/39cfa982-dd90-403c-993e-e0223db5c110/volumes/kubernetes.io~empty-dir/tmp: directory not empty" numErrs=6
E1130 00:07:49.701918 11754 kubelet_volumes.go:225] "There were many similar errors. Turn up verbosity to see them." err="orphaned pod \"39cfa982-dd90-403c-993e-e0223db5c110\" found, but failed to rmdir() volume at path /var/lib/kubelet/pods/39cfa982-dd90-403c-993e-e0223db5c110/volumes/kubernetes.io~empty-dir/tmp: directory not empty" numErrs=6
E1130 00:07:51.700503 11754 kubelet_volumes.go:225] "There were many similar errors. Turn up verbosity to see them." err="orphaned pod \"39cfa982-dd90-403c-993e-e0223db5c110\" found, but failed to rmdir() volume at path /var/lib/kubelet/pods/39cfa982-dd90-403c-993e-e0223db5c110/volumes/kubernetes.io~empty-dir/tmp: directory not empty" numErrs=6
E1130 00:07:53.702573 11754 kubelet_volumes.go:225] "There were many similar errors. Turn up verbosity to see them." err="orphaned pod \"39cfa982-dd90-403c-993e-e0223db5c110\" found, but failed to rmdir() volume at path /var/lib/kubelet/pods/39cfa982-dd90-403c-993e-e0223db5c110/volumes/kubernetes.io~empty-dir/tmp: directory not empty" numErrs=6
E1130 00:07:55.700780 11754 kubelet_volumes.go:225] "There were many similar errors. Turn up verbosity to see them." err="orphaned pod \"39cfa982-dd90-403c-993e-e0223db5c110\" found, but failed to rmdir() volume at path /var/lib/kubelet/pods/39cfa982-dd90-403c-993e-e0223db5c110/volumes/kubernetes.io~empty-dir/tmp: directory not empty" numErrs=6
INFO[0050] Waiting for deletion of etcd static pod
E1130 00:07:57.701183 11754 kubelet_volumes.go:225] "There were many similar errors. Turn up verbosity to see them." err="orphaned pod \"39cfa982-dd90-403c-993e-e0223db5c110\" found, but failed to rmdir() volume at path /var/lib/kubelet/pods/39cfa982-dd90-403c-993e-e0223db5c110/volumes/kubernetes.io~empty-dir/tmp: directory not empty" numErrs=6
E1130 00:07:59.701881 11754 kubelet_volumes.go:225] "There were many similar errors. Turn up verbosity to see them." err="orphaned pod \"39cfa982-dd90-403c-993e-e0223db5c110\" found, but failed to rmdir() volume at path /var/lib/kubelet/pods/39cfa982-dd90-403c-993e-e0223db5c110/volumes/kubernetes.io~empty-dir/tmp: directory not empty" numErrs=6
E1130 00:08:01.702011 11754 kubelet_volumes.go:225] "There were many similar errors. Turn up verbosity to see them." err="orphaned pod \"39cfa982-dd90-403c-993e-e0223db5c110\" found, but failed to rmdir() volume at path /var/lib/kubelet/pods/39cfa982-dd90-403c-993e-e0223db5c110/volumes/kubernetes.io~empty-dir/tmp: directory not empty" numErrs=6
E1130 00:08:03.700918 11754 kubelet_volumes.go:225] "There were many similar errors. Turn up verbosity to see them." err="orphaned pod \"39cfa982-dd90-403c-993e-e0223db5c110\" found, but failed to rmdir() volume at path /var/lib/kubelet/pods/39cfa982-dd90-403c-993e-e0223db5c110/volumes/kubernetes.io~empty-dir/tmp: directory not empty" numErrs=6
E1130 00:08:05.702477 11754 kubelet_volumes.go:225] "There were many similar errors. Turn up verbosity to see them." err="orphaned pod \"39cfa982-dd90-403c-993e-e0223db5c110\" found, but failed to rmdir() volume at path /var/lib/kubelet/pods/39cfa982-dd90-403c-993e-e0223db5c110/volumes/kubernetes.io~empty-dir/tmp: directory not empty" numErrs=6
INFO[0060] Waiting for deletion of kube-apiserver static pod
I1130 00:08:07.626308 11754 scope.go:111] "RemoveContainer" containerID="81e54c77b4001e389505d201aa5fb645ebed7761205905e0bb2c13bfc248103e"
INFO[2021-11-30T00:08:07.627175710Z] RemoveContainer for "81e54c77b4001e389505d201aa5fb645ebed7761205905e0bb2c13bfc248103e"
INFO[2021-11-30T00:08:07.630786723Z] RemoveContainer for "81e54c77b4001e389505d201aa5fb645ebed7761205905e0bb2c13bfc248103e" returns successfully
I1130 00:08:07.631086 11754 scope.go:111] "RemoveContainer" containerID="d5f7305a262de480b9e9155b1c9369f6cc8a1efc7a70fca62354aff1029d5222"
INFO[2021-11-30T00:08:07.631912945Z] RemoveContainer for "d5f7305a262de480b9e9155b1c9369f6cc8a1efc7a70fca62354aff1029d5222"
INFO[2021-11-30T00:08:07.634390611Z] RemoveContainer for "d5f7305a262de480b9e9155b1c9369f6cc8a1efc7a70fca62354aff1029d5222" returns successfully
I1130 00:08:07.634618 11754 scope.go:111] "RemoveContainer" containerID="8dd3373a115cc288077b66374b780c2045d073be8e508411271c6fd3b0ea7fcc"
INFO[2021-11-30T00:08:07.635259688Z] RemoveContainer for "8dd3373a115cc288077b66374b780c2045d073be8e508411271c6fd3b0ea7fcc"
INFO[2021-11-30T00:08:07.637239906Z] RemoveContainer for "8dd3373a115cc288077b66374b780c2045d073be8e508411271c6fd3b0ea7fcc" returns successfully
I1130 00:08:07.637380 11754 scope.go:111] "RemoveContainer" containerID="5a2c9fdcbc9d404e13133dc8f28aa9231c9066b5933578e95052b33dd215e47a"
INFO[2021-11-30T00:08:07.638100214Z] RemoveContainer for "5a2c9fdcbc9d404e13133dc8f28aa9231c9066b5933578e95052b33dd215e47a"
INFO[2021-11-30T00:08:07.639854644Z] RemoveContainer for "5a2c9fdcbc9d404e13133dc8f28aa9231c9066b5933578e95052b33dd215e47a" returns successfully
I1130 00:08:07.639971 11754 scope.go:111] "RemoveContainer" containerID="5ba8e9bbec48c1cb6e107ddaf61987a8f706f9df00dfea429bfc11decd15797c"
INFO[2021-11-30T00:08:07.640662614Z] RemoveContainer for "5ba8e9bbec48c1cb6e107ddaf61987a8f706f9df00dfea429bfc11decd15797c"
INFO[2021-11-30T00:08:07.642524439Z] RemoveContainer for "5ba8e9bbec48c1cb6e107ddaf61987a8f706f9df00dfea429bfc11decd15797c" returns successfully
I1130 00:08:07.642660 11754 scope.go:111] "RemoveContainer" containerID="b9405adc533fff06dea89874978f8ec5e14aba820c2ab5c105b4a2a7b5a6265c"
INFO[2021-11-30T00:08:07.643248268Z] RemoveContainer for "b9405adc533fff06dea89874978f8ec5e14aba820c2ab5c105b4a2a7b5a6265c"
INFO[2021-11-30T00:08:07.645113476Z] RemoveContainer for "b9405adc533fff06dea89874978f8ec5e14aba820c2ab5c105b4a2a7b5a6265c" returns successfully
I1130 00:08:07.645234 11754 scope.go:111] "RemoveContainer" containerID="1e76637d953b0a1651cad0de0ed86f42134c97c912e810684909059141940a40"
INFO[2021-11-30T00:08:07.650613991Z] RemoveContainer for "1e76637d953b0a1651cad0de0ed86f42134c97c912e810684909059141940a40"
INFO[2021-11-30T00:08:07.653389152Z] RemoveContainer for "1e76637d953b0a1651cad0de0ed86f42134c97c912e810684909059141940a40" returns successfully
I1130 00:08:07.653749 11754 scope.go:111] "RemoveContainer" containerID="b03287c822c7248e8e2d199c49ab7aaf47d080852ed5f2bd015d4f481c9d4b04"
INFO[2021-11-30T00:08:07.654837796Z] RemoveContainer for "b03287c822c7248e8e2d199c49ab7aaf47d080852ed5f2bd015d4f481c9d4b04"
INFO[2021-11-30T00:08:07.657354006Z] RemoveContainer for "b03287c822c7248e8e2d199c49ab7aaf47d080852ed5f2bd015d4f481c9d4b04" returns successfully
I1130 00:08:07.657541 11754 scope.go:111] "RemoveContainer" containerID="6133fcdf0d46c87d0af025c94eab373a7f52bf2e7d0148d87f807e5c23ba7d15"
INFO[2021-11-30T00:08:07.658403628Z] RemoveContainer for "6133fcdf0d46c87d0af025c94eab373a7f52bf2e7d0148d87f807e5c23ba7d15"
INFO[2021-11-30T00:08:07.660639073Z] RemoveContainer for "6133fcdf0d46c87d0af025c94eab373a7f52bf2e7d0148d87f807e5c23ba7d15" returns successfully
I1130 00:08:07.660826 11754 scope.go:111] "RemoveContainer" containerID="4b79da5046045c6cd9406f762e8908f30ff3a2f5a4dddf764e4bad6cf204c562"
INFO[2021-11-30T00:08:07.661764569Z] RemoveContainer for "4b79da5046045c6cd9406f762e8908f30ff3a2f5a4dddf764e4bad6cf204c562"
INFO[2021-11-30T00:08:07.664269520Z] RemoveContainer for "4b79da5046045c6cd9406f762e8908f30ff3a2f5a4dddf764e4bad6cf204c562" returns successfully
I1130 00:08:07.664535 11754 scope.go:111] "RemoveContainer" containerID="47c8674c84f7ced35651f47ba810e7e931f418664d72a687680a89ab92088d88"
INFO[2021-11-30T00:08:07.665476663Z] RemoveContainer for "47c8674c84f7ced35651f47ba810e7e931f418664d72a687680a89ab92088d88"
INFO[2021-11-30T00:08:07.667589130Z] RemoveContainer for "47c8674c84f7ced35651f47ba810e7e931f418664d72a687680a89ab92088d88" returns successfully
I1130 00:08:07.667811 11754 scope.go:111] "RemoveContainer" containerID="6ebc09569af366ee498b1b8de3e2fa17b0d9722d6fe014935b79c7fd5833ff86"
INFO[2021-11-30T00:08:07.668608762Z] RemoveContainer for "6ebc09569af366ee498b1b8de3e2fa17b0d9722d6fe014935b79c7fd5833ff86"
INFO[2021-11-30T00:08:07.670528580Z] RemoveContainer for "6ebc09569af366ee498b1b8de3e2fa17b0d9722d6fe014935b79c7fd5833ff86" returns successfully
I1130 00:08:07.670694 11754 scope.go:111] "RemoveContainer" containerID="3d4a60d5917bb573c78599559a91e79848e82c32509fb58fdf7891d1bba2edb2"
INFO[2021-11-30T00:08:07.671489905Z] RemoveContainer for "3d4a60d5917bb573c78599559a91e79848e82c32509fb58fdf7891d1bba2edb2"
INFO[2021-11-30T00:08:07.673583654Z] RemoveContainer for "3d4a60d5917bb573c78599559a91e79848e82c32509fb58fdf7891d1bba2edb2" returns successfully
I1130 00:08:07.673780 11754 scope.go:111] "RemoveContainer" containerID="20aac186e15c5c0d06e88a7ff43cbc00be77c7cb68f264b9f83eb00ffe2b3094"
INFO[2021-11-30T00:08:07.674487124Z] RemoveContainer for "20aac186e15c5c0d06e88a7ff43cbc00be77c7cb68f264b9f83eb00ffe2b3094"
INFO[2021-11-30T00:08:07.676717613Z] RemoveContainer for "20aac186e15c5c0d06e88a7ff43cbc00be77c7cb68f264b9f83eb00ffe2b3094" returns successfully
I1130 00:08:07.676890 11754 scope.go:111] "RemoveContainer" containerID="c17ef3b0017b28592f4601984f14bb43dba43fcdad0566ab51c3e6d29cf71e84"
INFO[2021-11-30T00:08:07.677678199Z] RemoveContainer for "c17ef3b0017b28592f4601984f14bb43dba43fcdad0566ab51c3e6d29cf71e84"
INFO[2021-11-30T00:08:07.679743994Z] RemoveContainer for "c17ef3b0017b28592f4601984f14bb43dba43fcdad0566ab51c3e6d29cf71e84" returns successfully
I1130 00:08:07.679890 11754 scope.go:111] "RemoveContainer" containerID="61751c5fff72e6d9f0a122be58ed1b53c257faa279da844ef806cbbae80ad2ad"
INFO[2021-11-30T00:08:07.680588475Z] RemoveContainer for "61751c5fff72e6d9f0a122be58ed1b53c257faa279da844ef806cbbae80ad2ad"
INFO[2021-11-30T00:08:07.682360327Z] RemoveContainer for "61751c5fff72e6d9f0a122be58ed1b53c257faa279da844ef806cbbae80ad2ad" returns successfully
I1130 00:08:07.682513 11754 scope.go:111] "RemoveContainer" containerID="3746a43f686b437e04c55253b58d73f175dd6e16eddecec49e23a073eaafea3f"
INFO[2021-11-30T00:08:07.683330450Z] RemoveContainer for "3746a43f686b437e04c55253b58d73f175dd6e16eddecec49e23a073eaafea3f"
INFO[2021-11-30T00:08:07.685475413Z] RemoveContainer for "3746a43f686b437e04c55253b58d73f175dd6e16eddecec49e23a073eaafea3f" returns successfully
INFO[2021-11-30T00:08:07.686358702Z] StopPodSandbox for "9bc0d95bfe6719589687e000e477356823fd2aaddfb7f3a05dafcf2200a88f49"
INFO[2021-11-30T00:08:07.686432475Z] TearDown network for sandbox "9bc0d95bfe6719589687e000e477356823fd2aaddfb7f3a05dafcf2200a88f49" successfully
INFO[2021-11-30T00:08:07.686444005Z] StopPodSandbox for "9bc0d95bfe6719589687e000e477356823fd2aaddfb7f3a05dafcf2200a88f49" returns successfully
INFO[2021-11-30T00:08:07.686643603Z] RemovePodSandbox for "9bc0d95bfe6719589687e000e477356823fd2aaddfb7f3a05dafcf2200a88f49"
INFO[2021-11-30T00:08:07.688697708Z] RemovePodSandbox "9bc0d95bfe6719589687e000e477356823fd2aaddfb7f3a05dafcf2200a88f49" returns successfully
INFO[2021-11-30T00:08:07.689014810Z] StopPodSandbox for "b379b1f54e47e2faa90c2034fa3ebf3c1d2052bf2173e94ec04f1f497f51dc19"
INFO[2021-11-30T00:08:07.689116908Z] TearDown network for sandbox "b379b1f54e47e2faa90c2034fa3ebf3c1d2052bf2173e94ec04f1f497f51dc19" successfully
INFO[2021-11-30T00:08:07.689131689Z] StopPodSandbox for "b379b1f54e47e2faa90c2034fa3ebf3c1d2052bf2173e94ec04f1f497f51dc19" returns successfully
INFO[2021-11-30T00:08:07.689338551Z] RemovePodSandbox for "b379b1f54e47e2faa90c2034fa3ebf3c1d2052bf2173e94ec04f1f497f51dc19"
INFO[2021-11-30T00:08:07.691379621Z] RemovePodSandbox "b379b1f54e47e2faa90c2034fa3ebf3c1d2052bf2173e94ec04f1f497f51dc19" returns successfully
INFO[2021-11-30T00:08:07.691646072Z] StopPodSandbox for "52961604420c8beb32f95f332ba8cfc63644d0acea19a1a0f9d4d5bade90de12"
INFO[2021-11-30T00:08:07.691710325Z] TearDown network for sandbox "52961604420c8beb32f95f332ba8cfc63644d0acea19a1a0f9d4d5bade90de12" successfully
INFO[2021-11-30T00:08:07.691719741Z] StopPodSandbox for "52961604420c8beb32f95f332ba8cfc63644d0acea19a1a0f9d4d5bade90de12" returns successfully
INFO[2021-11-30T00:08:07.691888822Z] RemovePodSandbox for "52961604420c8beb32f95f332ba8cfc63644d0acea19a1a0f9d4d5bade90de12"
INFO[2021-11-30T00:08:07.693847437Z] RemovePodSandbox "52961604420c8beb32f95f332ba8cfc63644d0acea19a1a0f9d4d5bade90de12" returns successfully
INFO[2021-11-30T00:08:07.694109606Z] StopPodSandbox for "702b2de2f87a5b44d23ca261a2989e144f53d7970aab7b72d770d93c308ac5b8"
INFO[2021-11-30T00:08:07.694242179Z] TearDown network for sandbox "702b2de2f87a5b44d23ca261a2989e144f53d7970aab7b72d770d93c308ac5b8" successfully
INFO[2021-11-30T00:08:07.694256627Z] StopPodSandbox for "702b2de2f87a5b44d23ca261a2989e144f53d7970aab7b72d770d93c308ac5b8" returns successfully
INFO[2021-11-30T00:08:07.694415905Z] RemovePodSandbox for "702b2de2f87a5b44d23ca261a2989e144f53d7970aab7b72d770d93c308ac5b8"
INFO[2021-11-30T00:08:07.696301789Z] RemovePodSandbox "702b2de2f87a5b44d23ca261a2989e144f53d7970aab7b72d770d93c308ac5b8" returns successfully
INFO[2021-11-30T00:08:07.696570217Z] StopPodSandbox for "40bf1a575c99a8078bb54c5f250f5ff6cb14d607f7a3d1f7f3e8d99d15f45190"
INFO[2021-11-30T00:08:07.696647191Z] TearDown network for sandbox "40bf1a575c99a8078bb54c5f250f5ff6cb14d607f7a3d1f7f3e8d99d15f45190" successfully
INFO[2021-11-30T00:08:07.696659041Z] StopPodSandbox for "40bf1a575c99a8078bb54c5f250f5ff6cb14d607f7a3d1f7f3e8d99d15f45190" returns successfully
INFO[2021-11-30T00:08:07.696831458Z] RemovePodSandbox for "40bf1a575c99a8078bb54c5f250f5ff6cb14d607f7a3d1f7f3e8d99d15f45190"
INFO[2021-11-30T00:08:07.701096330Z] RemovePodSandbox "40bf1a575c99a8078bb54c5f250f5ff6cb14d607f7a3d1f7f3e8d99d15f45190" returns successfully
INFO[2021-11-30T00:08:07.701537366Z] StopPodSandbox for "7aec53429cc276bfb15dbfe2e037f20777096e1c94f6bbf3046df0ef745bc4b3"
INFO[2021-11-30T00:08:07.701628878Z] TearDown network for sandbox "7aec53429cc276bfb15dbfe2e037f20777096e1c94f6bbf3046df0ef745bc4b3" successfully
INFO[2021-11-30T00:08:07.701661477Z] StopPodSandbox for "7aec53429cc276bfb15dbfe2e037f20777096e1c94f6bbf3046df0ef745bc4b3" returns successfully
INFO[2021-11-30T00:08:07.708486395Z] RemovePodSandbox for "7aec53429cc276bfb15dbfe2e037f20777096e1c94f6bbf3046df0ef745bc4b3"
INFO[2021-11-30T00:08:07.711133324Z] RemovePodSandbox "7aec53429cc276bfb15dbfe2e037f20777096e1c94f6bbf3046df0ef745bc4b3" returns successfully
INFO[2021-11-30T00:08:07.711522384Z] StopPodSandbox for "880cd05d9321f6ece6a762a0c5ee3f85bad621c20f43975d448bba019f2fb3df"
E1130 00:08:07.714296 11754 kubelet_volumes.go:225] "There were many similar errors. Turn up verbosity to see them." err="orphaned pod \"39cfa982-dd90-403c-993e-e0223db5c110\" found, but failed to rmdir() volume at path /var/lib/kubelet/pods/39cfa982-dd90-403c-993e-e0223db5c110/volumes/kubernetes.io~empty-dir/tmp: directory not empty" numErrs=6
E1130 00:08:09.702342 11754 kubelet_volumes.go:225] "There were many similar errors. Turn up verbosity to see them." err="orphaned pod \"39cfa982-dd90-403c-993e-e0223db5c110\" found, but failed to rmdir() volume at path /var/lib/kubelet/pods/39cfa982-dd90-403c-993e-e0223db5c110/volumes/kubernetes.io~empty-dir/tmp: directory not empty" numErrs=6
E1130 00:08:11.701161 11754 kubelet_volumes.go:225] "There were many similar errors. Turn up verbosity to see them." err="orphaned pod \"39cfa982-dd90-403c-993e-e0223db5c110\" found, but failed to rmdir() volume at path /var/lib/kubelet/pods/39cfa982-dd90-403c-993e-e0223db5c110/volumes/kubernetes.io~empty-dir/tmp: directory not empty" numErrs=6
E1130 00:08:13.701616 11754 kubelet_volumes.go:225] "There were many similar errors. Turn up verbosity to see them." err="orphaned pod \"39cfa982-dd90-403c-993e-e0223db5c110\" found, but failed to rmdir() volume at path /var/lib/kubelet/pods/39cfa982-dd90-403c-993e-e0223db5c110/volumes/kubernetes.io~empty-dir/tmp: directory not empty" numErrs=6
E1130 00:08:15.700784 11754 kubelet_volumes.go:225] "There were many similar errors. Turn up verbosity to see them." err="orphaned pod \"39cfa982-dd90-403c-993e-e0223db5c110\" found, but failed to rmdir() volume at path /var/lib/kubelet/pods/39cfa982-dd90-403c-993e-e0223db5c110/volumes/kubernetes.io~empty-dir/tmp: directory not empty" numErrs=6
INFO[0070] Static pod cleanup completed successfully
INFO[0070] Starting rke2 v1.21.6+rke2r1 (b915fc986e84582458af7131fe7f4e686f2af493)
INFO[0070] Managed etcd cluster bootstrap already complete and initialized
INFO[0070] Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
INFO[0070] Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --port=10251 --profiling=false --secure-port=0
INFO[0070] Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --use-service-account-credentials=true
INFO[0070] Running cloud-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cloud-provider=rke2 --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --kubeconfig=/var/lib/rancher/rke2/server/cred/cloud-controller.kubeconfig --node-status-update-frequency=1m0s --port=0 --profiling=false
INFO[0070] Node token is available at /var/lib/rancher/rke2/server/token
INFO[0070] To join node to cluster: rke2 agent -s https://174.138.18.211:9345 -t ${NODE_TOKEN}
INFO[0070] Wrote kubeconfig /etc/rancher/rke2/rke2.yaml
INFO[0070] Run: rke2 kubectl
INFO[0070] certificate CN=rkerepro signed by CN=rke2-server-ca@1638229830: notBefore=2021-11-29 23:50:30 +0000 UTC notAfter=2022-11-30 00:08:17 +0000 UTC
INFO[0070] certificate CN=system:node:rkerepro,O=system:nodes signed by CN=rke2-client-ca@1638229830: notBefore=2021-11-29 23:50:30 +0000 UTC notAfter=2022-11-30 00:08:17 +0000 UTC
INFO[0070] Module overlay was already loaded
INFO[0070] Module nf_conntrack was already loaded
INFO[0070] Module br_netfilter was already loaded
INFO[0070] Module iptable_nat was already loaded
INFO[0070] Runtime image index.docker.io/rancher/rke2-runtime:v1.21.6-rke2r1 bin and charts directories already exist; skipping extract
INFO[0070] Updated HelmChart /var/lib/rancher/rke2/server/manifests/rke2-multus.yaml to set cluster configuration values
INFO[0070] Updated HelmChart /var/lib/rancher/rke2/server/manifests/harvester-cloud-provider.yaml to set cluster configuration values
INFO[0070] Updated HelmChart /var/lib/rancher/rke2/server/manifests/harvester-csi-driver.yaml to set cluster configuration values
INFO[0070] Updated HelmChart /var/lib/rancher/rke2/server/manifests/rke2-kube-proxy.yaml to set cluster configuration values
INFO[0070] Updated HelmChart /var/lib/rancher/rke2/server/manifests/rke2-metrics-server.yaml to set cluster configuration values
INFO[0070] Updated HelmChart /var/lib/rancher/rke2/server/manifests/rke2-canal.yaml to set cluster configuration values
INFO[0070] Updated HelmChart /var/lib/rancher/rke2/server/manifests/rke2-cilium.yaml to set cluster configuration values
INFO[0070] Updated HelmChart /var/lib/rancher/rke2/server/manifests/rke2-coredns.yaml to set cluster configuration values
INFO[0070] Updated HelmChart /var/lib/rancher/rke2/server/manifests/rke2-ingress-nginx.yaml to set cluster configuration values
INFO[0070] Updated HelmChart /var/lib/rancher/rke2/server/manifests/rancher-vsphere-cpi.yaml to set cluster configuration values
INFO[0070] Updated HelmChart /var/lib/rancher/rke2/server/manifests/rancher-vsphere-csi.yaml to set cluster configuration values
INFO[0070] Updated HelmChart /var/lib/rancher/rke2/server/manifests/rke2-calico-crd.yaml to set cluster configuration values
INFO[0070] Updated HelmChart /var/lib/rancher/rke2/server/manifests/rke2-calico.yaml to set cluster configuration values
INFO[0070] Logging containerd to /var/lib/rancher/rke2/agent/containerd/containerd.log
INFO[0070] Running containerd -c /var/lib/rancher/rke2/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/rke2/agent/containerd
INFO[0071] Containerd is now running
INFO[0071] Connecting to proxy url="wss://127.0.0.1:9345/v1-rke2/connect"
INFO[0071] Handling backend connection request [rkerepro]
INFO[0071] Running kubelet --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=rkerepro --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --kubelet-cgroups=/rke2 --log-file-max-size=50 --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --logtostderr=false --node-labels= --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
INFO[0071] Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:9345/v1-rke2/readyz: 500 Internal Server Error
Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --file-check-frequency has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --sync-frequency has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --anonymous-auth has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --authentication-token-webhook has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.
Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
Flag --eviction-hard has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --eviction-minimum-reclaim has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --healthz-bind-address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --kubelet-cgroups has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --read-only-port has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --resolv-conf has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --serialize-image-pulls has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --tls-cert-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --tls-private-key-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
INFO[0076] Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:9345/v1-rke2/readyz: 500 Internal Server Error
INFO[0078] etcd data store connection OK
INFO[0078] Waiting for API server to become available
WARN[0078] bootstrap key already exists
INFO[0078] Saving current etcd snapshot set to rke2-etcd-snapshots ConfigMap
INFO[0078] Etcd is running, restart without --cluster-reset flag now. Backup and delete ${datadir}/server/db on each peer etcd server and rejoin the nodes
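The last log line above describes the manual follow-up for clusters that have peer etcd servers. A minimal sketch of those steps, assuming the default data directory /var/lib/rancher/rke2 and a systemd-managed rke2-server service (paths are assumptions, not shown in this issue):

# On the node where --cluster-reset was run: restart without the flag.
systemctl restart rke2-server
# On each peer etcd server: back up and remove the old etcd state, then rejoin.
systemctl stop rke2-server
cp -a /var/lib/rancher/rke2/server/db /var/lib/rancher/rke2/server/db.bak
rm -rf /var/lib/rancher/rke2/server/db
systemctl start rke2-server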
For QA - we do not believe this is related to Docker being present or running on the host, as originally reported. It seems to be related to containerd not starting up fast enough for the kubelet. See https://github.com/rancher/rke2/issues/2083#issuecomment-977563685
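If the kubelet really is racing a slow containerd start, one way to check on an affected node before retrying the reset is to poll the RKE2-managed containerd via the bundled crictl. The binary and config paths below assume RKE2 defaults and are not confirmed anywhere in this issue:

# Assumed default locations for the bundled crictl and its config.
CRICTL=/var/lib/rancher/rke2/bin/crictl
CRICTL_CFG=/var/lib/rancher/rke2/agent/etc/crictl.yaml
# Wait until the embedded containerd answers before retrying the reset.
until "$CRICTL" --config "$CRICTL_CFG" info >/dev/null 2>&1; do
  echo "waiting for containerd..."
  sleep 2
done
# Check recent containerd-related messages from the rke2-server unit.
journalctl -u rke2-server --since "10 min ago" | grep -i containerd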
I was able to reproduce the issue on one setup (a DigitalOcean node running Ubuntu with Docker installed) using rke2 version v1.21.6+rke2r1. The install log and cluster-reset output are attached. @brandond, as you mentioned, the issue may not be related to Docker running on the host: it was not reproducible in other setups, with and without Docker, on AWS/DO/vSphere nodes. rke2.log
root@shyubu:~# docker version
Client:
Version: 20.10.7
API version: 1.41
Go version: go1.13.8
Git commit: 20.10.7-0ubuntu5~20.04.2
Built: Mon Nov 1 00:34:17 2021
OS/Arch: linux/amd64
Context: default
Experimental: true
Server:
Engine:
Version: 20.10.7
API version: 1.41 (minimum version 1.12)
Go version: go1.13.8
Git commit: 20.10.7-0ubuntu5~20.04.2
Built: Fri Oct 22 00:45:53 2021
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.5.5-0ubuntu3~20.04.1
GitCommit:
runc:
Version: 1.0.1-0ubuntu2~20.04.1
GitCommit:
docker-init:
Version: 0.19.0
GitCommit:
curl -sfL https://get.rke2.io | sudo INSTALL_RKE2_COMMIT=d6f3c1f5176fad6e8cecca2865e2d540aeed77da sh -s - server
...
INFO[0154] Waiting for API server to become available
INFO[0154] Reconciling bootstrap data between datastore and disk
WARN[0154] bootstrap key already exists
INFO[0154] Reconciling etcd snapshot data in rke2-etcd-snapshots ConfigMap
INFO[0154] Cluster reset: backing up certificates directory to /var/lib/rancher/rke2/server/tls-1644906144
INFO[0154] Etcd is running, restart without --cluster-reset flag now. Backup and delete ${datadir}/server/db on each peer etcd server and rejoin the nodes
root@shyubu:~#
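A plausible way to confirm the reset actually took, after the "restart without --cluster-reset flag now" message, is to start the service again and check that the node comes back Ready. This is only a sketch assuming the default RKE2 kubeconfig and bundled kubectl locations, not a step taken from the issue:

# Default RKE2 kubeconfig and bundled kubectl paths assumed.
systemctl start rke2-server
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
/var/lib/rancher/rke2/bin/kubectl get nodes
/var/lib/rancher/rke2/bin/kubectl get pods -A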
Validated cluster-reset functionality on release-1.22 branch using commit id b5c144c33a2ee013aab94262cc94a34a0313a1c1
Validated cluster-reset functionality on master branch using commit id 578156a9e267b5af45f7e2b886a217c27e32aef2
Issue description:
RKE2 won't run the cluster-reset command; it fails with the message shown in the attached image ("Kubelet failed to initialise"). We have run rke2-killall on all nodes and tried the cluster-reset command without the snapshot restore, with the same result. The error messages differ between versions:
v1.20.10
v1.21.6
Business impact:
Customers have developers who cannot work and are down. They need to recover the cluster registration tokens for their fleet, as Rancher Backup did not restore them. We are trying to extract the tokens from the restore to minimize the rebuild effort.
Troubleshooting steps:
Tried rke2-killall
Rebooted the node
Tried running on all nodes; same error
Performed a deep inspection of the host
RKE2 starts up and is OK
The kubelet just won't initialise
Found a workaround: stopping the docker service
Repro steps:
1. Install Docker on the instance and reboot.
2. Set up a single-node RKE2 cluster and ensure all is well.
3. Stop rke2-server and run rke2-killall.
4. Run rke2 server --cluster-reset; this will trigger the error.
5. Stop the docker service and try again; it will work (a scripted version of this sequence is sketched below).
Workaround: Is a workaround available and implemented? Yes.
What is the workaround: stop Docker and things work.
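A scripted version of the repro/workaround sequence above, on a single node with systemd-managed services (the docker.socket unit and the rke2-killall.sh script name follow common defaults and may differ on a given install):

systemctl stop rke2-server
rke2-killall.sh
rke2 server --cluster-reset          # fails while the docker service is running
systemctl stop docker docker.socket  # the workaround
rke2 server --cluster-reset          # now completes
systemctl start rke2-server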
Actual behavior:
kubelet will not initialise
Expected behavior: the cluster-reset command should complete successfully and leave a working single-node cluster.