rancher / rancher

Complete container management platform
http://rancher.com
Apache License 2.0

Investigate impact of AggregationController API lag / metrics server Rate Limited Requeue #17248

Closed - superseb closed this issue 2 years ago

superseb commented 5 years ago

What kind of request is this (question/bug/enhancement/feature request): bug

Steps to reproduce (least amount of steps as possible):

Other symptoms seen are high CPU usage of kube-apiserver.

Result: Applying the workaround from https://github.com/kubernetes/kubernetes/issues/56430 (kubectl --namespace kube-system delete apiservice v1beta1.metrics.k8s.io) speeds things up. We need to investigate what the actual root cause is and whether we can apply the fix by default, apply some other mitigation, or wait for upstream.
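As a diagnostic sketch (standard kubectl/docker commands, not part of the original report; nothing here is RKE-specific except reading the kube-apiserver container logs), the state of the aggregated metrics API can be checked before deleting anything:

# Show whether the apiserver currently considers the aggregated metrics API available
kubectl get apiservice v1beta1.metrics.k8s.io
# Print the Available condition with its reason/message
kubectl get apiservice v1beta1.metrics.k8s.io -o jsonpath='{.status.conditions[?(@.type=="Available")]}'
# On an RKE control plane node, the AggregationController messages are in the kube-apiserver container logs
docker logs kube-apiserver 2>&1 | grep 'OpenAPI AggregationController'

If the APIService reports Available=False (for example FailedDiscoveryCheck or MissingEndpoints), the aggregator keeps requeueing it, which matches the Rate Limited Requeue messages discussed below.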

proegssilb commented 5 years ago

Something involving metrics is definitely slowing things down far more than it should. The STR doesn't quite apply in my case, because my cluster is low-powered enough that it is always seeing the Rate Limited Requeue message, and the workaround stops the Rancher UI from constantly issuing timeout warnings/errors.

stefanlasiewski commented 5 years ago

@superseb in your experience is this still an issue with Kubernetes 1.12 & 1.13? I'm asking since kubernetes/kubernetes#56430 was opened for v1.11 and was closed due to staleness.

proegssilb commented 5 years ago

I'm now running Kubernetes 1.12 (RKE v0.2.0-rc3), and still seeing high CPU usage until I follow the workaround.

mboudet commented 5 years ago

We have the same issue following the quickstart manual install.

We are using Kubernetes 1.13.4.

We use 4 nodes, 3 of them as etcd/control plane/worker and the last one as a worker only. The kube-apiserver containers seem to have random CPU spikes on all three nodes where they run.

On the node where the metrics-server is up, we see this:

[screenshot: kube-apiserver CPU usage on the metrics-server node]

On the other two nodes, something like this:

[screenshot: kube-apiserver CPU usage on the other two nodes]

The metrics-server container has not shown any logs since we added the nodes.

We do not have any applications deployed.

daanemanz commented 5 years ago

We're seeing this on v1.12.4 as well. 3 nodes on GCP recently updated with RKE v0.2.2. We're seeing that some clusters are intermittently unavailable in the Rancher UI - it just times out. Nodes are running RancherOS 1.5.1.

rajha-korithrien commented 5 years ago

I believe we are also seeing a related aspect of this problem. I'm putting information here in the hope that it helps, and to possibly get a question clarified. This is mostly a development/experimental cluster, so we are willing to use it to help debug the problem if that would be useful.

We are using:

When the cluster exhibiting this behavior is first brought up, Rancher UI interaction with it works correctly for perhaps a day. Then the UI becomes unresponsive and at times displays the following error.

[screenshot: Rancher UI error message]

We found that we can bring the cluster back into a "working" state by restarting the kube-apiserver on each node (which is a very bad solution):

# docker restart kube-apiserver
kube-apiserver

After reading through this issue and the seemingly related kubernetes/kubernetes#56430 we also did the suggested workaround.

# Run kubectl commands inside here
# e.g. kubectl get all
> kubectl --namespace kube-system delete apiservice v1beta1.metrics.k8s.io
apiservice.apiregistration.k8s.io "v1beta1.metrics.k8s.io" deleted
>

Our logs are no longer being spammed, and they end with an interesting final entry having to do with v1beta1.metrics.k8s.io:

I0428 14:41:22.797971       1 controller.go:105] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
I0428 14:41:28.509677       1 controller.go:116] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Requeue.
I0428 14:42:28.510041       1 controller.go:105] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
I0428 14:42:32.319257       1 controller.go:122] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).

After deleting the apiservice, the Rancher UI has worked correctly for 48 hours now, and we have not had to restart the kube-apiservers.

Question: From the Kubernetes issue we did not get a good feel for the operational impact of deleting the v1beta1.metrics.k8s.io apiservice. Does this impact our ability to use Rancher to collect metrics about our cluster?

Thanks!
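For context (a hedged note on the mechanics, not an answer from this thread): deleting the v1beta1.metrics.k8s.io APIService removes the resource metrics API from the cluster, so kubectl top and any HPA scaling on CPU/memory stop working until the object is recreated; whether Rancher's own dashboards are affected depends on whether they read from metrics-server or from a monitoring (Prometheus) stack. A quick way to see the effect:

# Fails while the APIService is absent
kubectl top nodes
# Returns NotFound because the metrics.k8s.io group is no longer served
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes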

muyufan commented 5 years ago

@superseb do you know how to add it back after running "kubectl --namespace kube-system delete apiservice v1beta1.metrics.k8s.io"? My error does not seem to be caused by this.

Is it this one? "kubectl edit deploy -n kube-system apiservice v1beta1.metrics.k8s.io"?
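Since the APIService is a cluster-scoped apiregistration object rather than a deployment, it has to be recreated rather than edited back through the metrics-server deployment. A minimal sketch for recreating it by hand, with field values taken from the upstream metrics-server manifest - the exact addon manifest shipped with a given RKE/Kubernetes version may differ, and on RKE clusters re-running rke up should also re-apply the metrics-server addon:

# Recreate the aggregated metrics APIService (values follow the upstream metrics-server manifest)
kubectl apply -f - <<'EOF'
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  version: v1beta1
  service:
    name: metrics-server
    namespace: kube-system
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
EOF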

elacy commented 3 years ago

It seems weird that this is happening on vanilla RKE in the most basic configuration, given that this was originally raised 3 years ago.

superseb commented 3 years ago

Not sure what the intent of the comment is; this issue did not get a comment for almost 2 years. If you are experiencing this issue and are not running into any of the other issues linked, please supply information about your setup, logs and steps to reproduce so we can investigate.

elacy commented 3 years ago

I am running Ubuntu 20.04, Docker 20.10, and RKE v1.3.1 (the RKE binary on darwin/amd64). I've tried 3 nodes with all roles, and 3 nodes with etcd + control plane plus 2 worker nodes. In both cases kubectl is slow to respond, namespaces get stuck in Terminating when I try to delete them, and kubectl top nodes complains that the metrics-server service is unavailable.

Running kubectl --namespace kube-system delete apiservice v1beta1.metrics.k8s.io fixes the issue, but it seems weird that I have to do that given it's a super basic install. Is there any way to deploy RKE so that the metrics service works?
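One note on the stuck namespaces (a generic Kubernetes behaviour, not something confirmed in this thread): when an aggregated API such as v1beta1.metrics.k8s.io is registered but unreachable, API discovery keeps failing and the namespace controller cannot finish deleting namespaces, which would explain both the slow kubectl and the Terminating hang. A way to spot it, where <stuck-namespace> is a placeholder:

# List aggregated APIs the apiserver cannot currently reach (header plus any non-True entries)
kubectl get apiservices | grep -v True
# The namespace status conditions usually name the API group that is blocking deletion
kubectl get namespace <stuck-namespace> -o jsonpath='{.status.conditions}'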

superseb commented 3 years ago

Please share exact info (the docker info output and the logs from the metrics-server pod; if possible also docker logs kubelet). Does it make a difference if you use just one node with all roles?
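A small sketch of how that information could be collected on an RKE node (assuming the default metrics-server deployment in kube-system):

docker info
docker logs kubelet --tail 200
kubectl -n kube-system logs deploy/metrics-server --tail 200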

elacy commented 3 years ago

docker info

Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Build with BuildKit (Docker Inc., v0.6.1-docker)
  scan: Docker Scan (Docker Inc., v0.8.0)

Server:
 Containers: 36
  Running: 20
  Paused: 0
  Stopped: 16
 Images: 14
 Server Version: 20.10.8
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc io.containerd.runc.v2 io.containerd.runtime.v1.linux
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: e25210fe30a0a703442421b0f60afac609f950a3
 runc version: v1.0.1-0-g4144b63
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.4.0-84-generic
 Operating System: Ubuntu 20.04.3 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 7.59GiB
 Name: rke-1
 ID: PZQK:VIJO:GLRG:WOOD:4N26:32OM:MIYX:YNQF:QDVR:ENT2:57QS:TRAR
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No swap limit support

elacy commented 3 years ago

OK, the problem seems to have gone away, which is weird because I was using Terraform to build everything and have gone through multiple cycles of recreating everything. The settings above are from the working version, I believe.

elacy commented 3 years ago

OK it happened again, not sure what's causing this.

root@rke-1:~# docker info

Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Build with BuildKit (Docker Inc., v0.6.1-docker)
  scan: Docker Scan (Docker Inc., v0.8.0)

Server:
 Containers: 32
  Running: 18
  Paused: 0
  Stopped: 14
 Images: 13
 Server Version: 20.10.8
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: e25210fe30a0a703442421b0f60afac609f950a3
 runc version: v1.0.1-0-g4144b63
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.4.0-84-generic
 Operating System: Ubuntu 20.04.3 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 7.59GiB
 Name: rke-1
 ID: YJXH:VYTJ:B3BK:GUJT:CDWL:PM63:FQUB:S7XN:NZIG:TMS4:66K2:TV5W
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No swap limit support

elacy commented 3 years ago

docker logs kubelet

+ echo kubelet --cloud-provider= --pod-infra-container-image=rancher/mirrored-pause:3.4.1 --event-qps=0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --v=2 --authentication-token-webhook=true --network-plugin=cni --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --cluster-dns=10.43.0.10 --hostname-override=192.168.70.2 --root-dir=/var/lib/kubelet --anonymous-auth=false --cgroups-per-qos=True --address=0.0.0.0 --resolv-conf=/etc/resolv.conf --streaming-connection-idle-timeout=30m --cluster-domain=cluster.local --make-iptables-util-chains=true --authorization-mode=Webhook --cni-conf-dir=/etc/cni/net.d --fail-swap-on=false --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cni-bin-dir=/opt/cni/bin
+ grep -q cloud-provider=azure
+ '[' kubelet = kubelet ']'
++ grep -i 'docker root dir'
++ cut -f2 -d:
++ DOCKER_API_VERSION=1.24
++ /opt/rke-tools/bin/docker info
+ DOCKER_ROOT=' /var/lib/docker'
++ find -O1 /var/lib/docker -maxdepth 1
+ DOCKER_DIRS='/var/lib/docker
/var/lib/docker/image
/var/lib/docker/volumes
/var/lib/docker/plugins
/var/lib/docker/containers
/var/lib/docker/swarm
/var/lib/docker/trust
/var/lib/docker/runtimes
/var/lib/docker/network
/var/lib/docker/buildkit
/var/lib/docker/tmp
/var/lib/docker/overlay2'
+ for i in $DOCKER_ROOT /var/lib/docker /run /var/run
++ tac /proc/mounts
++ awk '{print $2}'
++ grep '^/var/lib/docker/'
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/var/log/pods '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/var/log/pods '!=' /run/nscd ']'
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/var/log/pods
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/var/log/pods
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/var/lib/rancher '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/var/lib/rancher '!=' /run/nscd ']'
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/var/lib/rancher
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/var/lib/rancher
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/var/log/containers '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/var/log/containers '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/var/log/containers
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/var/log/containers
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/var/lib/calico '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/var/lib/calico '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/var/lib/calico
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/var/lib/calico
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/var/lib/kubelet '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/var/lib/kubelet '!=' /run/nscd ']'
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/var/lib/kubelet
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/var/lib/kubelet
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/var/lib/cni '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/var/lib/cni '!=' /run/nscd ']'
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/var/lib/cni
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/var/lib/cni
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/etc/hosts '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/etc/hosts '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/etc/hosts
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/etc/hosts
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/etc/cni '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/etc/cni '!=' /run/nscd ']'
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/etc/cni
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/etc/cni
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/etc/hostname '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/etc/hostname '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/etc/hostname
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/etc/hostname
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/host/dev/mqueue '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/host/dev/mqueue '!=' /run/nscd ']'
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/host/dev/mqueue
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/host/dev/mqueue
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/host/dev/hugepages '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/host/dev/hugepages '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/host/dev/hugepages
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/host/dev/hugepages
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/host/dev/shm '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/host/dev/shm '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/host/dev/shm
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/host/dev/shm
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/host/dev/pts '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/host/dev/pts '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/host/dev/pts
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/host/dev/pts
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/host/dev '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/host/dev '!=' /run/nscd ']'
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/host/dev
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/host/dev
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/host/usr '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/host/usr '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/host/usr
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/host/usr
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/opt/cni '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/opt/cni '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/opt/cni
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/opt/cni
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/etc/ceph '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/etc/ceph '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/etc/ceph
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/etc/ceph
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/host/etc '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/host/etc '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/host/etc
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/host/etc
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/etc/kubernetes '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/etc/kubernetes '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/etc/kubernetes
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/etc/kubernetes
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/opt/rke-tools '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/opt/rke-tools '!=' /run/nscd ']'
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/opt/rke-tools
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/opt/rke-tools
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/etc/resolv.conf '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/etc/resolv.conf '!=' /run/nscd ']'
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/etc/resolv.conf
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/etc/resolv.conf
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/kernel/config '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/kernel/config '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/kernel/config
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/kernel/config
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/fuse/connections '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/fuse/connections '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/fuse/connections
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/fuse/connections
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/kernel/tracing '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/kernel/tracing '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/kernel/tracing
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/kernel/tracing
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/kernel/debug '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/kernel/debug '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/kernel/debug
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/kernel/debug
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/bpf '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/bpf '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/bpf
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/bpf
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/pstore '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/pstore '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/pstore
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/pstore
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/devices '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/devices '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/devices
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/devices
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/hugetlb '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/hugetlb '!=' /run/nscd ']'
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/hugetlb
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/hugetlb
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/blkio '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/blkio '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/blkio
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/blkio
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/perf_event '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/perf_event '!=' /run/nscd ']'
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/perf_event
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/perf_event
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/rdma '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/rdma '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/rdma
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/rdma
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/pids '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/pids '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/pids
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/pids
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/cpuset '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/cpuset '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/cpuset
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/cpuset
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/freezer '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/freezer '!=' /run/nscd ']'
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/freezer
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/freezer
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/net_cls,net_prio '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/net_cls,net_prio '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/net_cls,net_prio
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/net_cls,net_prio
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/memory '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/memory '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/memory
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/memory
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/cpu,cpuacct '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/cpu,cpuacct '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/cpu,cpuacct
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/cpu,cpuacct
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/systemd '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/systemd '!=' /run/nscd ']'
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/systemd
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/systemd
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/unified '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/unified '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/unified
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/unified
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup '!=' /run/nscd ']'
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/kernel/security '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/kernel/security '!=' /run/nscd ']'
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/kernel/security
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/kernel/security
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys '!=' /run/nscd ']'
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/run/docker/netns/default '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/run/docker/netns/default '!=' /run/nscd ']'
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/run/docker/netns/default
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/run/docker/netns/default
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/run/user/0 '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/run/user/0 '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/run/user/0
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/run/user/0
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/run/lock '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/run/lock '!=' /run/nscd ']'
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/run/lock
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/run/lock
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/run '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/run '!=' /run/nscd ']'
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/run
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/run
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/run/docker/netns/default '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/run/docker/netns/default '!=' /run/nscd ']'
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/run/docker/netns/default
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/run/docker/netns/default
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/run/user/0 '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/run/user/0 '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/run/user/0
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/run/user/0
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/run/lock '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/run/lock '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/run/lock
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/run/lock
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/run '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/run '!=' /run/nscd ']'
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/run
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/run
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/devices '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/devices '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/devices
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/devices
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/hugetlb '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/hugetlb '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/hugetlb
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/hugetlb
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/blkio '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/blkio '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/blkio
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/blkio
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/perf_event '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/perf_event '!=' /run/nscd ']'
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/perf_event
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/perf_event
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/rdma '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/rdma '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/rdma
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/rdma
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/pids '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/pids '!=' /run/nscd ']'
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/pids
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/pids
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/cpuset '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/cpuset '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/cpuset
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/cpuset
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/freezer '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/freezer '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/freezer
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/freezer
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/net_cls,net_prio '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/net_cls,net_prio '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/net_cls,net_prio
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/net_cls,net_prio
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/memory '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/memory '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/memory
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/memory
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/cpu,cpuacct '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/cpu,cpuacct '!=' /run/nscd ']'
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/cpu,cpuacct
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/cpu,cpuacct
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/systemd '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/systemd '!=' /run/nscd ']'
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/systemd
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup/systemd
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/sys/fs/cgroup
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/dev/shm '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/dev/shm '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/dev/shm
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/dev/shm
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/dev/mqueue '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/dev/mqueue '!=' /run/nscd ']'
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/dev/mqueue
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/dev/mqueue
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/dev/pts '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/dev/pts '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/dev/pts
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/dev/pts
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/dev '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/dev '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/dev
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/dev
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/proc '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/proc '!=' /run/nscd ']'
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/proc
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged/proc
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged '!=' /run/nscd ']'
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/6ad508da98c3472a807c9f9c8e15bcebf3818d49d2758cc126e7fb6284da30e4/merged
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/a43b1d787917b0e05f128d48187491947c7cb63977bfad476d3b6dcfb46cf2f7/merged '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/a43b1d787917b0e05f128d48187491947c7cb63977bfad476d3b6dcfb46cf2f7/merged '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/a43b1d787917b0e05f128d48187491947c7cb63977bfad476d3b6dcfb46cf2f7/merged
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/a43b1d787917b0e05f128d48187491947c7cb63977bfad476d3b6dcfb46cf2f7/merged
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/9c88992c7dabc3f1bbde6938cabeb7fa40d20d218fbbf796181d4e8f16450ca3/merged '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/9c88992c7dabc3f1bbde6938cabeb7fa40d20d218fbbf796181d4e8f16450ca3/merged '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/9c88992c7dabc3f1bbde6938cabeb7fa40d20d218fbbf796181d4e8f16450ca3/merged
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/9c88992c7dabc3f1bbde6938cabeb7fa40d20d218fbbf796181d4e8f16450ca3/merged
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/f158f234b45ba4f5c3b1aef1d884c28005f8619559b0953a578049c7ced62b2b/merged '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/f158f234b45ba4f5c3b1aef1d884c28005f8619559b0953a578049c7ced62b2b/merged '!=' /run/nscd ']'
+ grep -qF /var/lib/docker/overlay2/f158f234b45ba4f5c3b1aef1d884c28005f8619559b0953a578049c7ced62b2b/merged
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /var/lib/docker/overlay2/f158f234b45ba4f5c3b1aef1d884c28005f8619559b0953a578049c7ced62b2b/merged
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/7d9057734ab9987ab53b565c4511ab71cc4253a5eb9a2446b31e04427b474ee5/merged '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/7d9057734ab9987ab53b565c4511ab71cc4253a5eb9a2446b31e04427b474ee5/merged '!=' /run/nscd ']'
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ grep -qF /var/lib/docker/overlay2/7d9057734ab9987ab53b565c4511ab71cc4253a5eb9a2446b31e04427b474ee5/merged
+ umount /var/lib/docker/overlay2/7d9057734ab9987ab53b565c4511ab71cc4253a5eb9a2446b31e04427b474ee5/merged
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /var/lib/docker/overlay2/9ca36b70f20d7cf0bf31eda0c8e9fcea2e16daf8464aca1b294aa6955760db2b/merged '!=' /var/run/nscd ']'
+ '[' /var/lib/docker/overlay2/9ca36b70f20d7cf0bf31eda0c8e9fcea2e16daf8464aca1b294aa6955760db2b/merged '!=' /run/nscd ']'
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ grep -qF /var/lib/docker/overlay2/9ca36b70f20d7cf0bf31eda0c8e9fcea2e16daf8464aca1b294aa6955760db2b/merged
+ umount /var/lib/docker/overlay2/9ca36b70f20d7cf0bf31eda0c8e9fcea2e16daf8464aca1b294aa6955760db2b/merged
+ for i in $DOCKER_ROOT /var/lib/docker /run /var/run
++ tac /proc/mounts
++ awk '{print $2}'
++ grep '^/var/lib/docker/'
+ for i in $DOCKER_ROOT /var/lib/docker /run /var/run
++ tac /proc/mounts
++ awk '{print $2}'
++ grep '^/run/'
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /run/docker/netns/default '!=' /var/run/nscd ']'
+ '[' /run/docker/netns/default '!=' /run/nscd ']'
+ grep -qF /run/docker/netns/default
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /run/docker/netns/default
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /run/user/0 '!=' /var/run/nscd ']'
+ '[' /run/user/0 '!=' /run/nscd ']'
+ grep -qF /run/user/0
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /run/user/0
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /run/lock '!=' /var/run/nscd ']'
+ '[' /run/lock '!=' /run/nscd ']'
+ grep -qF /run/lock
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /run/lock
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /run/docker/netns/default '!=' /var/run/nscd ']'
+ '[' /run/docker/netns/default '!=' /run/nscd ']'
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ grep -qF /run/docker/netns/default
+ umount /run/docker/netns/default
umount: /run/docker/netns/default: not mounted.
+ true
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /run/user/0 '!=' /var/run/nscd ']'
+ '[' /run/user/0 '!=' /run/nscd ']'
+ grep -qF /run/user/0
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /run/user/0
umount: /run/user/0: not mounted.
+ true
+ for m in $(tac /proc/mounts | awk '{print $2}' | grep ^${i}/)
+ '[' /run/lock '!=' /var/run/nscd ']'
+ '[' /run/lock '!=' /run/nscd ']'
+ grep -qF /run/lock
+ echo /var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2
+ umount /run/lock
umount: /run/lock: not mounted.
+ true
+ for i in $DOCKER_ROOT /var/lib/docker /run /var/run
++ tac /proc/mounts
++ awk '{print $2}'
++ grep '^/var/run/'
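
# The umount iterations above are the entrypoint's cleanup of leftover mounts before
# kubelet starts. A minimal sketch of that loop, reconstructed from this trace only
# (the protected-directory list and variable names are copied from the output;
# $DOCKER_ROOT is assumed to be set by the container environment, and the real
# rke-tools entry script may differ in detail):
DOCKER_DIRS="/var/lib/docker /var/lib/docker/image /var/lib/docker/volumes /var/lib/docker/plugins /var/lib/docker/containers /var/lib/docker/swarm /var/lib/docker/trust /var/lib/docker/runtimes /var/lib/docker/network /var/lib/docker/buildkit /var/lib/docker/tmp /var/lib/docker/overlay2"
for i in $DOCKER_ROOT /var/lib/docker /run /var/run; do
  for m in $(tac /proc/mounts | awk '{print $2}' | grep "^${i}/"); do
    # skip nscd mounts and Docker's own state directories; unmount everything else
    if [ "$m" != /var/run/nscd ] && [ "$m" != /run/nscd ] && ! echo "$DOCKER_DIRS" | grep -qF "$m"; then
      umount "$m" || true
    fi
  done
done
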
+ mount --rbind /host/dev /dev
+ mount -o rw,remount /sys/fs/cgroup
+ for i in /sys/fs/cgroup/*
+ '[' -d /sys/fs/cgroup/blkio ']'
+ mkdir -p /sys/fs/cgroup/blkio/kubepods
+ for i in /sys/fs/cgroup/*
+ '[' -d /sys/fs/cgroup/cpu ']'
+ mkdir -p /sys/fs/cgroup/cpu/kubepods
+ for i in /sys/fs/cgroup/*
+ '[' -d /sys/fs/cgroup/cpu,cpuacct ']'
+ mkdir -p /sys/fs/cgroup/cpu,cpuacct/kubepods
+ for i in /sys/fs/cgroup/*
+ '[' -d /sys/fs/cgroup/cpuacct ']'
+ mkdir -p /sys/fs/cgroup/cpuacct/kubepods
+ for i in /sys/fs/cgroup/*
+ '[' -d /sys/fs/cgroup/cpuset ']'
+ mkdir -p /sys/fs/cgroup/cpuset/kubepods
+ for i in /sys/fs/cgroup/*
+ '[' -d /sys/fs/cgroup/devices ']'
+ mkdir -p /sys/fs/cgroup/devices/kubepods
+ for i in /sys/fs/cgroup/*
+ '[' -d /sys/fs/cgroup/freezer ']'
+ mkdir -p /sys/fs/cgroup/freezer/kubepods
+ for i in /sys/fs/cgroup/*
+ '[' -d /sys/fs/cgroup/hugetlb ']'
+ mkdir -p /sys/fs/cgroup/hugetlb/kubepods
+ for i in /sys/fs/cgroup/*
+ '[' -d /sys/fs/cgroup/memory ']'
+ mkdir -p /sys/fs/cgroup/memory/kubepods
+ for i in /sys/fs/cgroup/*
+ '[' -d /sys/fs/cgroup/net_cls ']'
+ mkdir -p /sys/fs/cgroup/net_cls/kubepods
+ for i in /sys/fs/cgroup/*
+ '[' -d /sys/fs/cgroup/net_cls,net_prio ']'
+ mkdir -p /sys/fs/cgroup/net_cls,net_prio/kubepods
+ for i in /sys/fs/cgroup/*
+ '[' -d /sys/fs/cgroup/net_prio ']'
+ mkdir -p /sys/fs/cgroup/net_prio/kubepods
+ for i in /sys/fs/cgroup/*
+ '[' -d /sys/fs/cgroup/perf_event ']'
+ mkdir -p /sys/fs/cgroup/perf_event/kubepods
+ for i in /sys/fs/cgroup/*
+ '[' -d /sys/fs/cgroup/pids ']'
+ mkdir -p /sys/fs/cgroup/pids/kubepods
+ for i in /sys/fs/cgroup/*
+ '[' -d /sys/fs/cgroup/rdma ']'
+ mkdir -p /sys/fs/cgroup/rdma/kubepods
+ for i in /sys/fs/cgroup/*
+ '[' -d /sys/fs/cgroup/systemd ']'
+ mkdir -p /sys/fs/cgroup/systemd/kubepods
+ for i in /sys/fs/cgroup/*
+ '[' -d /sys/fs/cgroup/unified ']'
+ mkdir -p /sys/fs/cgroup/unified/kubepods
+ mkdir -p /sys/fs/cgroup/cpuacct,cpu/
+ mount --bind /sys/fs/cgroup/cpu,cpuacct/ /sys/fs/cgroup/cpuacct,cpu/
+ mkdir -p /sys/fs/cgroup/net_prio,net_cls/
+ mount --bind /sys/fs/cgroup/net_cls,net_prio/ /sys/fs/cgroup/net_prio,net_cls/
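
# Cgroup preparation for kubelet, condensed from the mkdir/mount lines above
# (a sketch inferred from this trace rather than the verbatim entrypoint):
mount -o rw,remount /sys/fs/cgroup
for i in /sys/fs/cgroup/*; do
  # create a kubepods hierarchy under every mounted v1 controller
  if [ -d "$i" ]; then mkdir -p "$i/kubepods"; fi
done
# bind-mount aliases so both orderings of the comma-joined controller names resolve
mkdir -p /sys/fs/cgroup/cpuacct,cpu/ /sys/fs/cgroup/net_prio,net_cls/
mount --bind /sys/fs/cgroup/cpu,cpuacct/ /sys/fs/cgroup/cpuacct,cpu/
mount --bind /sys/fs/cgroup/net_cls,net_prio/ /sys/fs/cgroup/net_prio,net_cls/
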
+ mkdir -p /opt/cni /etc/cni
+ chcon -Rt svirt_sandbox_file_t /etc/cni
+ true
+ chcon -Rt svirt_sandbox_file_t /opt/cni
+ true
+ sysctl -w net.bridge.bridge-nf-call-iptables=1
+ '[' -f /host/usr/lib/os-release ']'
+ ln -sf /host/usr/lib/os-release /usr/lib/os-release
+ grep -q -- --resolv-conf=/etc/resolv.conf
+ echo kubelet --cloud-provider= --pod-infra-container-image=rancher/mirrored-pause:3.4.1 --event-qps=0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --v=2 --authentication-token-webhook=true --network-plugin=cni --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --cluster-dns=10.43.0.10 --hostname-override=192.168.70.2 --root-dir=/var/lib/kubelet --anonymous-auth=false --cgroups-per-qos=True --address=0.0.0.0 --resolv-conf=/etc/resolv.conf --streaming-connection-idle-timeout=30m --cluster-domain=cluster.local --make-iptables-util-chains=true --authorization-mode=Webhook --cni-conf-dir=/etc/cni/net.d --fail-swap-on=false --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cni-bin-dir=/opt/cni/bin
+ pgrep -f systemd-resolved
+ '[' -f /run/systemd/resolve/resolv.conf ']'
+ RESOLVCONF=--resolv-conf=/run/systemd/resolve/resolv.conf
+ '[' '!' -z '' ']'
++ /opt/rke-tools/bin/docker info
++ grep -i 'cgroup driver'
++ awk '{print $3}'
WARNING: No swap limit support
+ CGROUPDRIVER=cgroupfs
+ '[' '' == true ']'
+ exec kubelet --cloud-provider= --pod-infra-container-image=rancher/mirrored-pause:3.4.1 --event-qps=0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --v=2 --authentication-token-webhook=true --network-plugin=cni --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --cluster-dns=10.43.0.10 --hostname-override=192.168.70.2 --root-dir=/var/lib/kubelet --anonymous-auth=false --cgroups-per-qos=True --address=0.0.0.0 --resolv-conf=/etc/resolv.conf --streaming-connection-idle-timeout=30m --cluster-domain=cluster.local --make-iptables-util-chains=true --authorization-mode=Webhook --cni-conf-dir=/etc/cni/net.d --fail-swap-on=false --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cni-bin-dir=/opt/cni/bin --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
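
# How the last two flags get appended, inferred from the pgrep / docker info calls
# traced above (a sketch: "$@" stands for the flag list echoed earlier, and the
# script's actual control flow may differ):
RESOLVCONF=""
if pgrep -f systemd-resolved >/dev/null 2>&1 && [ -f /run/systemd/resolve/resolv.conf ]; then
  RESOLVCONF="--resolv-conf=/run/systemd/resolve/resolv.conf"
fi
CGROUPDRIVER=$(/opt/rke-tools/bin/docker info 2>/dev/null | grep -i 'cgroup driver' | awk '{print $3}')
exec kubelet "$@" --cgroup-driver="$CGROUPDRIVER" $RESOLVCONF
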
Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.
Flag --event-qps has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --tls-cipher-suites has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --authentication-token-webhook has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --network-plugin has been deprecated, will be removed along with dockershim.
Flag --read-only-port has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --anonymous-auth has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --resolv-conf has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --streaming-connection-idle-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --make-iptables-util-chains has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --cni-conf-dir has been deprecated, will be removed along with dockershim.
Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --cni-bin-dir has been deprecated, will be removed along with dockershim.
Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --resolv-conf has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
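
# Note: the deprecation notices above are informational; kubelet still honors these
# flags. Upstream's intended replacement is a KubeletConfiguration file passed with
# --config, which is unused in this run (see FLAG: --config="" in the dump below).
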
I1025 15:05:33.044259    4045 flags.go:59] FLAG: --add-dir-header="false"
I1025 15:05:33.044344    4045 flags.go:59] FLAG: --address="0.0.0.0"
I1025 15:05:33.044348    4045 flags.go:59] FLAG: --allowed-unsafe-sysctls="[]"
I1025 15:05:33.044353    4045 flags.go:59] FLAG: --alsologtostderr="false"
I1025 15:05:33.044356    4045 flags.go:59] FLAG: --anonymous-auth="false"
I1025 15:05:33.044359    4045 flags.go:59] FLAG: --application-metrics-count-limit="100"
I1025 15:05:33.044362    4045 flags.go:59] FLAG: --authentication-token-webhook="true"
I1025 15:05:33.044364    4045 flags.go:59] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
I1025 15:05:33.044368    4045 flags.go:59] FLAG: --authorization-mode="Webhook"
I1025 15:05:33.044372    4045 flags.go:59] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
I1025 15:05:33.044375    4045 flags.go:59] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
I1025 15:05:33.044377    4045 flags.go:59] FLAG: --azure-container-registry-config=""
I1025 15:05:33.044380    4045 flags.go:59] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
I1025 15:05:33.044383    4045 flags.go:59] FLAG: --bootstrap-kubeconfig=""
I1025 15:05:33.044385    4045 flags.go:59] FLAG: --cert-dir="/var/lib/kubelet/pki"
I1025 15:05:33.044388    4045 flags.go:59] FLAG: --cgroup-driver="cgroupfs"
I1025 15:05:33.044391    4045 flags.go:59] FLAG: --cgroup-root=""
I1025 15:05:33.044393    4045 flags.go:59] FLAG: --cgroups-per-qos="true"
I1025 15:05:33.044396    4045 flags.go:59] FLAG: --chaos-chance="0"
I1025 15:05:33.044399    4045 flags.go:59] FLAG: --client-ca-file="/etc/kubernetes/ssl/kube-ca.pem"
I1025 15:05:33.044403    4045 flags.go:59] FLAG: --cloud-config=""
I1025 15:05:33.044405    4045 flags.go:59] FLAG: --cloud-provider=""
I1025 15:05:33.044408    4045 flags.go:59] FLAG: --cluster-dns="[10.43.0.10]"
I1025 15:05:33.044411    4045 flags.go:59] FLAG: --cluster-domain="cluster.local"
I1025 15:05:33.044414    4045 flags.go:59] FLAG: --cni-bin-dir="/opt/cni/bin"
I1025 15:05:33.044416    4045 flags.go:59] FLAG: --cni-cache-dir="/var/lib/cni/cache"
I1025 15:05:33.044418    4045 flags.go:59] FLAG: --cni-conf-dir="/etc/cni/net.d"
I1025 15:05:33.044421    4045 flags.go:59] FLAG: --config=""
I1025 15:05:33.044423    4045 flags.go:59] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
I1025 15:05:33.044426    4045 flags.go:59] FLAG: --container-log-max-files="5"
I1025 15:05:33.044429    4045 flags.go:59] FLAG: --container-log-max-size="10Mi"
I1025 15:05:33.044431    4045 flags.go:59] FLAG: --container-runtime="docker"
I1025 15:05:33.044433    4045 flags.go:59] FLAG: --container-runtime-endpoint="unix:///var/run/dockershim.sock"
I1025 15:05:33.044436    4045 flags.go:59] FLAG: --containerd="/run/containerd/containerd.sock"
I1025 15:05:33.044439    4045 flags.go:59] FLAG: --containerd-namespace="k8s.io"
I1025 15:05:33.044441    4045 flags.go:59] FLAG: --contention-profiling="false"
I1025 15:05:33.044443    4045 flags.go:59] FLAG: --cpu-cfs-quota="true"
I1025 15:05:33.044445    4045 flags.go:59] FLAG: --cpu-cfs-quota-period="100ms"
I1025 15:05:33.044448    4045 flags.go:59] FLAG: --cpu-manager-policy="none"
I1025 15:05:33.044450    4045 flags.go:59] FLAG: --cpu-manager-reconcile-period="10s"
I1025 15:05:33.044452    4045 flags.go:59] FLAG: --docker="unix:///var/run/docker.sock"
I1025 15:05:33.044455    4045 flags.go:59] FLAG: --docker-endpoint="unix:///var/run/docker.sock"
I1025 15:05:33.044457    4045 flags.go:59] FLAG: --docker-env-metadata-whitelist=""
I1025 15:05:33.044459    4045 flags.go:59] FLAG: --docker-only="false"
I1025 15:05:33.044462    4045 flags.go:59] FLAG: --docker-root="/var/lib/docker"
I1025 15:05:33.044464    4045 flags.go:59] FLAG: --docker-tls="false"
I1025 15:05:33.044466    4045 flags.go:59] FLAG: --docker-tls-ca="ca.pem"
I1025 15:05:33.044469    4045 flags.go:59] FLAG: --docker-tls-cert="cert.pem"
I1025 15:05:33.044471    4045 flags.go:59] FLAG: --docker-tls-key="key.pem"
I1025 15:05:33.044473    4045 flags.go:59] FLAG: --dynamic-config-dir=""
I1025 15:05:33.044477    4045 flags.go:59] FLAG: --enable-controller-attach-detach="true"
I1025 15:05:33.044479    4045 flags.go:59] FLAG: --enable-debugging-handlers="true"
I1025 15:05:33.044481    4045 flags.go:59] FLAG: --enable-load-reader="false"
I1025 15:05:33.044483    4045 flags.go:59] FLAG: --enable-server="true"
I1025 15:05:33.044485    4045 flags.go:59] FLAG: --enforce-node-allocatable="[pods]"
I1025 15:05:33.044489    4045 flags.go:59] FLAG: --event-burst="10"
I1025 15:05:33.044492    4045 flags.go:59] FLAG: --event-qps="0"
I1025 15:05:33.044494    4045 flags.go:59] FLAG: --event-storage-age-limit="default=0"
I1025 15:05:33.044496    4045 flags.go:59] FLAG: --event-storage-event-limit="default=0"
I1025 15:05:33.044500    4045 flags.go:59] FLAG: --eviction-hard="imagefs.available<15%,memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%"
I1025 15:05:33.044513    4045 flags.go:59] FLAG: --eviction-max-pod-grace-period="0"
I1025 15:05:33.044516    4045 flags.go:59] FLAG: --eviction-minimum-reclaim=""
I1025 15:05:33.044519    4045 flags.go:59] FLAG: --eviction-pressure-transition-period="5m0s"
I1025 15:05:33.044522    4045 flags.go:59] FLAG: --eviction-soft=""
I1025 15:05:33.044524    4045 flags.go:59] FLAG: --eviction-soft-grace-period=""
I1025 15:05:33.044527    4045 flags.go:59] FLAG: --exit-on-lock-contention="false"
I1025 15:05:33.044529    4045 flags.go:59] FLAG: --experimental-allocatable-ignore-eviction="false"
I1025 15:05:33.044531    4045 flags.go:59] FLAG: --experimental-bootstrap-kubeconfig=""
I1025 15:05:33.044533    4045 flags.go:59] FLAG: --experimental-check-node-capabilities-before-mount="false"
I1025 15:05:33.044536    4045 flags.go:59] FLAG: --experimental-dockershim-root-directory="/var/lib/dockershim"
I1025 15:05:33.044539    4045 flags.go:59] FLAG: --experimental-kernel-memcg-notification="false"
I1025 15:05:33.044541    4045 flags.go:59] FLAG: --experimental-logging-sanitization="false"
I1025 15:05:33.044543    4045 flags.go:59] FLAG: --experimental-mounter-path=""
I1025 15:05:33.044545    4045 flags.go:59] FLAG: --fail-swap-on="false"
I1025 15:05:33.044547    4045 flags.go:59] FLAG: --feature-gates=""
I1025 15:05:33.044551    4045 flags.go:59] FLAG: --file-check-frequency="20s"
I1025 15:05:33.044554    4045 flags.go:59] FLAG: --global-housekeeping-interval="1m0s"
I1025 15:05:33.044556    4045 flags.go:59] FLAG: --hairpin-mode="promiscuous-bridge"
I1025 15:05:33.044559    4045 flags.go:59] FLAG: --healthz-bind-address="127.0.0.1"
I1025 15:05:33.044561    4045 flags.go:59] FLAG: --healthz-port="10248"
I1025 15:05:33.044564    4045 flags.go:59] FLAG: --help="false"
I1025 15:05:33.044566    4045 flags.go:59] FLAG: --hostname-override="192.168.70.2"
I1025 15:05:33.044568    4045 flags.go:59] FLAG: --housekeeping-interval="10s"
I1025 15:05:33.044570    4045 flags.go:59] FLAG: --http-check-frequency="20s"
I1025 15:05:33.044573    4045 flags.go:59] FLAG: --image-credential-provider-bin-dir=""
I1025 15:05:33.044575    4045 flags.go:59] FLAG: --image-credential-provider-config=""
I1025 15:05:33.044577    4045 flags.go:59] FLAG: --image-gc-high-threshold="85"
I1025 15:05:33.044579    4045 flags.go:59] FLAG: --image-gc-low-threshold="80"
I1025 15:05:33.044581    4045 flags.go:59] FLAG: --image-pull-progress-deadline="1m0s"
I1025 15:05:33.044584    4045 flags.go:59] FLAG: --image-service-endpoint=""
I1025 15:05:33.044586    4045 flags.go:59] FLAG: --iptables-drop-bit="15"
I1025 15:05:33.044588    4045 flags.go:59] FLAG: --iptables-masquerade-bit="14"
I1025 15:05:33.044590    4045 flags.go:59] FLAG: --keep-terminated-pod-volumes="false"
I1025 15:05:33.044593    4045 flags.go:59] FLAG: --kernel-memcg-notification="false"
I1025 15:05:33.044595    4045 flags.go:59] FLAG: --kube-api-burst="10"
I1025 15:05:33.044598    4045 flags.go:59] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
I1025 15:05:33.044600    4045 flags.go:59] FLAG: --kube-api-qps="5"
I1025 15:05:33.044604    4045 flags.go:59] FLAG: --kube-reserved=""
I1025 15:05:33.044606    4045 flags.go:59] FLAG: --kube-reserved-cgroup=""
I1025 15:05:33.044608    4045 flags.go:59] FLAG: --kubeconfig="/etc/kubernetes/ssl/kubecfg-kube-node.yaml"
I1025 15:05:33.044611    4045 flags.go:59] FLAG: --kubelet-cgroups=""
I1025 15:05:33.044613    4045 flags.go:59] FLAG: --lock-file=""
I1025 15:05:33.044615    4045 flags.go:59] FLAG: --log-backtrace-at=":0"
I1025 15:05:33.044618    4045 flags.go:59] FLAG: --log-cadvisor-usage="false"
I1025 15:05:33.044620    4045 flags.go:59] FLAG: --log-dir=""
I1025 15:05:33.044623    4045 flags.go:59] FLAG: --log-file=""
I1025 15:05:33.044625    4045 flags.go:59] FLAG: --log-file-max-size="1800"
I1025 15:05:33.044627    4045 flags.go:59] FLAG: --log-flush-frequency="5s"
I1025 15:05:33.044629    4045 flags.go:59] FLAG: --logging-format="text"
I1025 15:05:33.044631    4045 flags.go:59] FLAG: --logtostderr="true"
I1025 15:05:33.044634    4045 flags.go:59] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
I1025 15:05:33.044637    4045 flags.go:59] FLAG: --make-iptables-util-chains="true"
I1025 15:05:33.044639    4045 flags.go:59] FLAG: --manifest-url=""
I1025 15:05:33.044641    4045 flags.go:59] FLAG: --manifest-url-header=""
I1025 15:05:33.044645    4045 flags.go:59] FLAG: --master-service-namespace="default"
I1025 15:05:33.044647    4045 flags.go:59] FLAG: --max-open-files="1000000"
I1025 15:05:33.044651    4045 flags.go:59] FLAG: --max-pods="110"
I1025 15:05:33.044653    4045 flags.go:59] FLAG: --maximum-dead-containers="-1"
I1025 15:05:33.044655    4045 flags.go:59] FLAG: --maximum-dead-containers-per-container="1"
I1025 15:05:33.044658    4045 flags.go:59] FLAG: --memory-manager-policy="None"
I1025 15:05:33.044660    4045 flags.go:59] FLAG: --minimum-container-ttl-duration="0s"
I1025 15:05:33.044662    4045 flags.go:59] FLAG: --minimum-image-ttl-duration="2m0s"
I1025 15:05:33.044664    4045 flags.go:59] FLAG: --network-plugin="cni"
I1025 15:05:33.044667    4045 flags.go:59] FLAG: --network-plugin-mtu="0"
I1025 15:05:33.044669    4045 flags.go:59] FLAG: --node-ip=""
I1025 15:05:33.044671    4045 flags.go:59] FLAG: --node-labels=""
I1025 15:05:33.044675    4045 flags.go:59] FLAG: --node-status-max-images="50"
I1025 15:05:33.044677    4045 flags.go:59] FLAG: --node-status-update-frequency="10s"
I1025 15:05:33.044679    4045 flags.go:59] FLAG: --non-masquerade-cidr="10.0.0.0/8"
I1025 15:05:33.044682    4045 flags.go:59] FLAG: --one-output="false"
I1025 15:05:33.044684    4045 flags.go:59] FLAG: --oom-score-adj="-999"
I1025 15:05:33.044686    4045 flags.go:59] FLAG: --pod-cidr=""
I1025 15:05:33.044688    4045 flags.go:59] FLAG: --pod-infra-container-image="rancher/mirrored-pause:3.4.1"
I1025 15:05:33.044691    4045 flags.go:59] FLAG: --pod-manifest-path=""
I1025 15:05:33.044693    4045 flags.go:59] FLAG: --pod-max-pids="-1"
I1025 15:05:33.044696    4045 flags.go:59] FLAG: --pods-per-core="0"
I1025 15:05:33.044699    4045 flags.go:59] FLAG: --port="10250"
I1025 15:05:33.044701    4045 flags.go:59] FLAG: --protect-kernel-defaults="false"
I1025 15:05:33.044703    4045 flags.go:59] FLAG: --provider-id=""
I1025 15:05:33.044706    4045 flags.go:59] FLAG: --qos-reserved=""
I1025 15:05:33.044708    4045 flags.go:59] FLAG: --read-only-port="0"
I1025 15:05:33.044710    4045 flags.go:59] FLAG: --really-crash-for-testing="false"
I1025 15:05:33.044712    4045 flags.go:59] FLAG: --redirect-container-streaming="false"
I1025 15:05:33.044715    4045 flags.go:59] FLAG: --register-node="true"
I1025 15:05:33.044717    4045 flags.go:59] FLAG: --register-schedulable="true"
I1025 15:05:33.044719    4045 flags.go:59] FLAG: --register-with-taints=""
I1025 15:05:33.044722    4045 flags.go:59] FLAG: --registry-burst="10"
I1025 15:05:33.044725    4045 flags.go:59] FLAG: --registry-qps="5"
I1025 15:05:33.044727    4045 flags.go:59] FLAG: --reserved-cpus=""
I1025 15:05:33.044729    4045 flags.go:59] FLAG: --reserved-memory=""
I1025 15:05:33.044732    4045 flags.go:59] FLAG: --resolv-conf="/run/systemd/resolve/resolv.conf"
I1025 15:05:33.044735    4045 flags.go:59] FLAG: --root-dir="/var/lib/kubelet"
I1025 15:05:33.044737    4045 flags.go:59] FLAG: --rotate-certificates="false"
I1025 15:05:33.044739    4045 flags.go:59] FLAG: --rotate-server-certificates="false"
I1025 15:05:33.044742    4045 flags.go:59] FLAG: --runonce="false"
I1025 15:05:33.044744    4045 flags.go:59] FLAG: --runtime-cgroups=""
I1025 15:05:33.044746    4045 flags.go:59] FLAG: --runtime-request-timeout="2m0s"
I1025 15:05:33.044748    4045 flags.go:59] FLAG: --seccomp-profile-root="/var/lib/kubelet/seccomp"
I1025 15:05:33.044751    4045 flags.go:59] FLAG: --serialize-image-pulls="true"
I1025 15:05:33.044753    4045 flags.go:59] FLAG: --skip-headers="false"
I1025 15:05:33.044755    4045 flags.go:59] FLAG: --skip-log-headers="false"
I1025 15:05:33.044758    4045 flags.go:59] FLAG: --stderrthreshold="2"
I1025 15:05:33.044760    4045 flags.go:59] FLAG: --storage-driver-buffer-duration="1m0s"
I1025 15:05:33.044762    4045 flags.go:59] FLAG: --storage-driver-db="cadvisor"
I1025 15:05:33.044765    4045 flags.go:59] FLAG: --storage-driver-host="localhost:8086"
I1025 15:05:33.044768    4045 flags.go:59] FLAG: --storage-driver-password="root"
I1025 15:05:33.044770    4045 flags.go:59] FLAG: --storage-driver-secure="false"
I1025 15:05:33.044772    4045 flags.go:59] FLAG: --storage-driver-table="stats"
I1025 15:05:33.044774    4045 flags.go:59] FLAG: --storage-driver-user="root"
I1025 15:05:33.044777    4045 flags.go:59] FLAG: --streaming-connection-idle-timeout="30m0s"
I1025 15:05:33.044780    4045 flags.go:59] FLAG: --sync-frequency="1m0s"
I1025 15:05:33.044782    4045 flags.go:59] FLAG: --system-cgroups=""
I1025 15:05:33.044784    4045 flags.go:59] FLAG: --system-reserved=""
I1025 15:05:33.044787    4045 flags.go:59] FLAG: --system-reserved-cgroup=""
I1025 15:05:33.044789    4045 flags.go:59] FLAG: --tls-cert-file=""
I1025 15:05:33.044792    4045 flags.go:59] FLAG: --tls-cipher-suites="[TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305]"
I1025 15:05:33.044803    4045 flags.go:59] FLAG: --tls-min-version=""
I1025 15:05:33.044806    4045 flags.go:59] FLAG: --tls-private-key-file=""
I1025 15:05:33.044808    4045 flags.go:59] FLAG: --topology-manager-policy="none"
I1025 15:05:33.044810    4045 flags.go:59] FLAG: --topology-manager-scope="container"
I1025 15:05:33.044813    4045 flags.go:59] FLAG: --v="2"
I1025 15:05:33.044815    4045 flags.go:59] FLAG: --version="false"
I1025 15:05:33.044819    4045 flags.go:59] FLAG: --vmodule=""
I1025 15:05:33.044821    4045 flags.go:59] FLAG: --volume-plugin-dir="/var/lib/kubelet/volumeplugins"
I1025 15:05:33.044824    4045 flags.go:59] FLAG: --volume-stats-agg-period="1m0s"
I1025 15:05:33.044863    4045 feature_gate.go:243] feature gates: &{map[]}
I1025 15:05:33.044912    4045 feature_gate.go:243] feature gates: &{map[]}
I1025 15:05:33.260971    4045 mount_linux.go:197] Detected OS without systemd
I1025 15:05:33.261178    4045 server.go:440] "Kubelet version" kubeletVersion="v1.21.5"
I1025 15:05:33.261275    4045 feature_gate.go:243] feature gates: &{map[]}
I1025 15:05:33.261497    4045 feature_gate.go:243] feature gates: &{map[]}
I1025 15:05:33.274796    4045 dynamic_cafile_content.go:129] Loaded a new CA Bundle and Verifier for "client-ca-bundle::/etc/kubernetes/ssl/kube-ca.pem"
I1025 15:05:33.274944    4045 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/ssl/kube-ca.pem
I1025 15:05:33.275313    4045 manager.go:165] cAdvisor running in container: "/sys/fs/cgroup/cpu,cpuacct"
I1025 15:05:33.329438    4045 fs.go:131] Filesystem UUIDs: map[0bf56f61-7d4e-4900-8328-dfb8aaf686f0:/dev/sda1 FED3-875D:/dev/sda15]
I1025 15:05:33.329467    4045 fs.go:132] Filesystem partitions: map[/dev:{mountpoint:/dev major:0 minor:103 fsType:tmpfs blockSize:0} /dev/sda1:{mountpoint:/var/lib/docker major:8 minor:1 fsType:ext4 blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:27 fsType:tmpfs blockSize:0} /etc/resolv.conf:{mountpoint:/etc/resolv.conf major:0 minor:26 fsType:tmpfs blockSize:0} /host/dev/shm:{mountpoint:/host/dev/shm major:0 minor:27 fsType:tmpfs blockSize:0} /run:{mountpoint:/run major:0 minor:26 fsType:tmpfs blockSize:0} /run/lock:{mountpoint:/run/lock major:0 minor:28 fsType:tmpfs blockSize:0} /run/user/0:{mountpoint:/run/user/0 major:0 minor:50 fsType:tmpfs blockSize:0} /sys/fs/cgroup:{mountpoint:/sys/fs/cgroup major:0 minor:29 fsType:tmpfs blockSize:0} overlay_0-101:{mountpoint:/ major:0 minor:101 fsType:overlay blockSize:0}]
I1025 15:05:33.329936    4045 nvidia.go:61] NVIDIA setup failed: no NVIDIA devices found
I1025 15:05:33.332273    4045 manager.go:213] Machine: {Timestamp:2021-10-25 15:05:33.332030031 +0000 UTC m=+0.345600247 NumCores:4 NumPhysicalCores:4 NumSockets:1 CpuFrequency:2495314 MemoryCapacity:8149168128 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID: SystemUUID:cd44749b-7bc1-4ca6-8211-c5d06cd6c2f7 BootID:41103ae2-ab38-4f53-b017-2b7b698a5d64 Filesystems:[{Device:/run/user/0 DeviceMajor:0 DeviceMinor:50 Capacity:814919680 Type:vfs Inodes:994771 HasInodes:true} {Device:overlay_0-101 DeviceMajor:0 DeviceMinor:101 Capacity:161001639936 Type:vfs Inodes:9732496 HasInodes:true} {Device:/dev DeviceMajor:0 DeviceMinor:103 Capacity:4054175744 Type:vfs Inodes:989789 HasInodes:true} {Device:/sys/fs/cgroup DeviceMajor:0 DeviceMinor:29 Capacity:4074582016 Type:vfs Inodes:994771 HasInodes:true} {Device:/run/lock DeviceMajor:0 DeviceMinor:28 Capacity:814919680 Type:vfs Inodes:994771 HasInodes:true} {Device:/host/dev/shm DeviceMajor:0 DeviceMinor:27 Capacity:4074582016 Type:vfs Inodes:994771 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:27 Capacity:4074582016 Type:vfs Inodes:994771 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:26 Capacity:814919680 Type:vfs Inodes:994771 HasInodes:true} {Device:/etc/resolv.conf DeviceMajor:0 DeviceMinor:26 Capacity:814919680 Type:vfs Inodes:994771 HasInodes:true} {Device:/dev/sda1 DeviceMajor:8 DeviceMinor:1 Capacity:161001639936 Type:vfs Inodes:9732496 HasInodes:true}] DiskMap:map[8:0:{Name:sda Major:8 Minor:0 Size:163842097152 Scheduler:mq-deadline}] NetworkDevices:[{Name:enp7s0 MacAddress:86:00:00:ea:af:7b Speed:-1 Mtu:1450} {Name:eth0 MacAddress:96:00:00:ea:af:6e Speed:-1 Mtu:1500}] Topology:[{Id:0 Memory:8149168128 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:524288 Type:Unified Level:2}] SocketID:0} {Id:1 Threads:[1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:524288 Type:Unified Level:2}] SocketID:0} {Id:2 Threads:[2] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:524288 Type:Unified Level:2}] SocketID:0} {Id:3 Threads:[3] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:524288 Type:Unified Level:2}] SocketID:0}] Caches:[{Size:16777216 Type:Unified Level:3}]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
I1025 15:05:33.332373    4045 manager_no_libpfm.go:28] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
I1025 15:05:33.350543    4045 manager.go:229] Version: {KernelVersion:5.4.0-84-generic ContainerOsVersion:Ubuntu 20.04.3 LTS DockerVersion:20.10.8 DockerAPIVersion:1.41 CadvisorVersion: CadvisorRevision:}
I1025 15:05:33.350710    4045 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
I1025 15:05:33.351233    4045 container_manager_linux.go:278] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
I1025 15:05:33.351459    4045 container_manager_linux.go:283] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
I1025 15:05:33.351588    4045 topology_manager.go:120] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
I1025 15:05:33.351614    4045 container_manager_linux.go:314] "Initializing Topology Manager" policy="none" scope="container"
I1025 15:05:33.351629    4045 container_manager_linux.go:319] "Creating device plugin manager" devicePluginEnabled=true
I1025 15:05:33.351668    4045 manager.go:136] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
I1025 15:05:33.351945    4045 kubelet.go:307] "Using dockershim is deprecated, please consider using a full-fledged CRI implementation"
I1025 15:05:33.352008    4045 client.go:78] "Connecting to docker on the dockerEndpoint" endpoint="unix:///var/run/docker.sock"
I1025 15:05:33.352047    4045 client.go:97] "Start docker client with request timeout" timeout="2m0s"
I1025 15:05:33.362885    4045 docker_service.go:566] "Hairpin mode is set but kubenet is not enabled, falling back to HairpinVeth" hairpinMode=promiscuous-bridge
I1025 15:05:33.362917    4045 docker_service.go:242] "Hairpin mode is set" hairpinMode=hairpin-veth
I1025 15:05:33.363062    4045 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
I1025 15:05:33.392423    4045 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
I1025 15:05:33.392476    4045 plugins.go:168] "Loaded network plugin" networkPluginName="cni"
I1025 15:05:33.392568    4045 docker_service.go:257] "Docker cri networking managed by the network plugin" networkPluginName="cni"
I1025 15:05:33.392593    4045 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
I1025 15:05:33.402848    4045 docker_service.go:264] "Docker Info" dockerInfo=&{ID:YJXH:VYTJ:B3BK:GUJT:CDWL:PM63:FQUB:S7XN:NZIG:TMS4:66K2:TV5W Containers:8 ContainersRunning:6 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:57 OomKillDisable:true NGoroutines:60 SystemTime:2021-10-25T17:05:33.393847078+02:00 LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:1 NEventsListener:0 KernelVersion:5.4.0-84-generic OperatingSystem:Ubuntu 20.04.3 LTS OSVersion:20.04 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc00079a460 NCPU:4 MemTotal:8149168128 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:rke-1 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} io.containerd.runtime.v1.linux:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: DefaultAddressPools:[] Warnings:[WARNING: No swap limit support]}
I1025 15:05:33.402885    4045 docker_service.go:277] "Setting cgroupDriver" cgroupDriver="cgroupfs"
I1025 15:05:33.402977    4045 kubelet_dockershim.go:62] "Starting the GRPC server for the docker CRI shim."
I1025 15:05:33.403023    4045 docker_server.go:62] "Start dockershim grpc server"
I1025 15:05:33.416988    4045 remote_runtime.go:62] parsed scheme: ""
I1025 15:05:33.417015    4045 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
I1025 15:05:33.417051    4045 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock  <nil> 0 <nil>}] <nil> <nil>}
I1025 15:05:33.417062    4045 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1025 15:05:33.417112    4045 remote_image.go:50] parsed scheme: ""
I1025 15:05:33.417120    4045 remote_image.go:50] scheme "" not registered, fallback to default scheme
I1025 15:05:33.417131    4045 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock  <nil> 0 <nil>}] <nil> <nil>}
I1025 15:05:33.417136    4045 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1025 15:05:33.417177    4045 server.go:1131] "Using root directory" path="/var/lib/kubelet"
I1025 15:05:33.417223    4045 kubelet.go:404] "Attempting to sync node with API server"
I1025 15:05:33.417224    4045 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc000d08f00, {CONNECTING <nil>}
I1025 15:05:33.417240    4045 kubelet.go:283] "Adding apiserver pod source"
I1025 15:05:33.417250    4045 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc000d09090, {CONNECTING <nil>}
I1025 15:05:33.417275    4045 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
I1025 15:05:33.417901    4045 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc000d08f00, {READY <nil>}
I1025 15:05:33.417934    4045 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc000d09090, {READY <nil>}
I1025 15:05:33.436510    4045 kuberuntime_manager.go:222] "Container runtime initialized" containerRuntime="docker" version="20.10.8" apiVersion="1.41.0"
E1025 15:05:33.737633    4045 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
    For verbose messaging see aws.Config.CredentialsChainVerboseErrors
W1025 15:05:33.738679    4045 probe.go:268] Flexvolume plugin directory at /var/lib/kubelet/volumeplugins does not exist. Recreating.
I1025 15:05:33.739742    4045 plugins.go:639] Loaded volume plugin "kubernetes.io/cinder"
I1025 15:05:33.739777    4045 plugins.go:639] Loaded volume plugin "kubernetes.io/azure-disk"
I1025 15:05:33.739790    4045 plugins.go:639] Loaded volume plugin "kubernetes.io/azure-file"
I1025 15:05:33.739802    4045 plugins.go:639] Loaded volume plugin "kubernetes.io/vsphere-volume"
I1025 15:05:33.739812    4045 plugins.go:639] Loaded volume plugin "kubernetes.io/aws-ebs"
I1025 15:05:33.739823    4045 plugins.go:639] Loaded volume plugin "kubernetes.io/gce-pd"
I1025 15:05:33.739873    4045 plugins.go:639] Loaded volume plugin "kubernetes.io/empty-dir"
I1025 15:05:33.739888    4045 plugins.go:639] Loaded volume plugin "kubernetes.io/git-repo"
I1025 15:05:33.739903    4045 plugins.go:639] Loaded volume plugin "kubernetes.io/host-path"
I1025 15:05:33.739915    4045 plugins.go:639] Loaded volume plugin "kubernetes.io/nfs"
I1025 15:05:33.739927    4045 plugins.go:639] Loaded volume plugin "kubernetes.io/secret"
I1025 15:05:33.739938    4045 plugins.go:639] Loaded volume plugin "kubernetes.io/iscsi"
I1025 15:05:33.739948    4045 plugins.go:639] Loaded volume plugin "kubernetes.io/glusterfs"
I1025 15:05:33.739980    4045 plugins.go:639] Loaded volume plugin "kubernetes.io/rbd"
I1025 15:05:33.740003    4045 plugins.go:639] Loaded volume plugin "kubernetes.io/quobyte"
I1025 15:05:33.740013    4045 plugins.go:639] Loaded volume plugin "kubernetes.io/cephfs"
I1025 15:05:33.740024    4045 plugins.go:639] Loaded volume plugin "kubernetes.io/downward-api"
I1025 15:05:33.740039    4045 plugins.go:639] Loaded volume plugin "kubernetes.io/fc"
I1025 15:05:33.740050    4045 plugins.go:639] Loaded volume plugin "kubernetes.io/flocker"
I1025 15:05:33.740061    4045 plugins.go:639] Loaded volume plugin "kubernetes.io/configmap"
I1025 15:05:33.740073    4045 plugins.go:639] Loaded volume plugin "kubernetes.io/projected"
I1025 15:05:33.740122    4045 plugins.go:639] Loaded volume plugin "kubernetes.io/portworx-volume"
I1025 15:05:33.740142    4045 plugins.go:639] Loaded volume plugin "kubernetes.io/scaleio"
I1025 15:05:33.740170    4045 plugins.go:639] Loaded volume plugin "kubernetes.io/local-volume"
I1025 15:05:33.740182    4045 plugins.go:639] Loaded volume plugin "kubernetes.io/storageos"
I1025 15:05:33.740233    4045 plugins.go:639] Loaded volume plugin "kubernetes.io/csi"
I1025 15:05:33.740478    4045 server.go:1190] "Started kubelet"
I1025 15:05:33.740645    4045 server.go:149] "Starting to listen" address="0.0.0.0" port=10250
E1025 15:05:33.741138    4045 kubelet.go:1306] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
I1025 15:05:33.742403    4045 server.go:409] "Adding debug handlers to kubelet server"
I1025 15:05:33.743381    4045 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
I1025 15:05:33.743691    4045 volume_manager.go:269] "The desired_state_of_world populator starts"
I1025 15:05:33.743732    4045 volume_manager.go:271] "Starting Kubelet Volume Manager"
I1025 15:05:33.743954    4045 desired_state_of_world_populator.go:141] "Desired state populator starts to run"
E1025 15:05:33.756366    4045 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
I1025 15:05:33.757771    4045 factory.go:55] Registering systemd factory
I1025 15:05:33.777217    4045 nodeinfomanager.go:401] Failed to publish CSINode: nodes "192.168.70.2" not found
E1025 15:05:33.780772    4045 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"192.168.70.2\" not found" node="192.168.70.2"
I1025 15:05:33.782396    4045 factory.go:372] Registering Docker factory
I1025 15:05:33.782487    4045 client.go:86] parsed scheme: "unix"
I1025 15:05:33.782494    4045 client.go:86] scheme "unix" not registered, fallback to default scheme
I1025 15:05:33.782513    4045 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
I1025 15:05:33.782520    4045 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1025 15:05:33.782660    4045 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc0010fdbf0, {CONNECTING <nil>}
I1025 15:05:33.783001    4045 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc0010fdbf0, {READY <nil>}
I1025 15:05:33.783623    4045 factory.go:137] Registering containerd factory
I1025 15:05:33.783772    4045 factory.go:101] Registering Raw factory
I1025 15:05:33.783826    4045 manager.go:1203] Started watching for new ooms in manager
I1025 15:05:33.784197    4045 manager.go:301] Starting recovery of all containers
I1025 15:05:33.800944    4045 nodeinfomanager.go:401] Failed to publish CSINode: nodes "192.168.70.2" not found
W1025 15:05:33.806187    4045 container.go:586] Failed to update stats for container "/kubepods": /sys/fs/cgroup/cpuset/kubepods/cpuset.cpus found to be empty, continuing to push stats
I1025 15:05:33.812363    4045 manager.go:306] Recovery completed
I1025 15:05:33.835892    4045 kubelet_node_status.go:362] "Setting node annotation to enable volume controller attach/detach"
E1025 15:05:33.845475    4045 kubelet.go:2291] "Error getting node" err="node \"192.168.70.2\" not found"
I1025 15:05:33.845506    4045 kubelet_node_status.go:362] "Setting node annotation to enable volume controller attach/detach"
I1025 15:05:33.857916    4045 kubelet_network_linux.go:56] "Initialized protocol iptables rules." protocol=IPv4
I1025 15:05:33.864902    4045 nodeinfomanager.go:401] Failed to publish CSINode: nodes "192.168.70.2" not found
E1025 15:05:33.864916    4045 kubelet_network_linux.go:79] "Failed to ensure that nat chain exists KUBE-MARK-DROP chain" err="error creating chain \"KUBE-MARK-DROP\": exit status 3: modprobe: FATAL: Module ip6_tables not found in directory /lib/modules/5.4.0-84-generic\nip6tables v1.8.4 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n"
I1025 15:05:33.864937    4045 kubelet_network_linux.go:64] "Failed to initialize protocol iptables rules; some functionality may be missing." protocol=IPv6
I1025 15:05:33.864950    4045 status_manager.go:157] "Starting to sync pod status with apiserver"
I1025 15:05:33.864971    4045 kubelet.go:1846] "Starting kubelet main sync loop"
E1025 15:05:33.865020    4045 kubelet.go:1870] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
I1025 15:05:33.866495    4045 kubelet_node_status.go:554] "Recording event message for node" node="192.168.70.2" event="NodeHasSufficientMemory"
I1025 15:05:33.866520    4045 kubelet_node_status.go:554] "Recording event message for node" node="192.168.70.2" event="NodeHasNoDiskPressure"
I1025 15:05:33.866530    4045 kubelet_node_status.go:554] "Recording event message for node" node="192.168.70.2" event="NodeHasSufficientPID"
I1025 15:05:33.867497    4045 cpu_manager.go:199] "Starting CPU manager" policy="none"
I1025 15:05:33.867515    4045 cpu_manager.go:200] "Reconciling" reconcilePeriod="10s"
I1025 15:05:33.867529    4045 state_mem.go:36] "Initialized new in-memory state store"
I1025 15:05:33.869122    4045 policy_none.go:44] "None policy: Start"
I1025 15:05:33.869838    4045 container_manager_linux.go:455] "Updating kernel flag" flag="kernel/panic" expectedValue=10 actualValue=0
I1025 15:05:33.869986    4045 container_manager_linux.go:455] "Updating kernel flag" flag="kernel/panic_on_oops" expectedValue=1 actualValue=0
I1025 15:05:33.870216    4045 container_manager_linux.go:455] "Updating kernel flag" flag="vm/overcommit_memory" expectedValue=1 actualValue=0
I1025 15:05:33.893313    4045 manager.go:242] "Starting Device Plugin manager"
I1025 15:05:33.893421    4045 manager.go:600] "Failed to retrieve checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
I1025 15:05:33.893573    4045 manager.go:284] "Serving device plugin registration server on socket" path="/var/lib/kubelet/device-plugins/kubelet.sock"
I1025 15:05:33.893701    4045 plugin_watcher.go:52] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
I1025 15:05:33.893852    4045 plugin_manager.go:112] "The desired_state_of_world populator (plugin watcher) starts"
I1025 15:05:33.893880    4045 plugin_manager.go:114] "Starting Kubelet Plugin Manager"
E1025 15:05:33.894109    4045 eviction_manager.go:255] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"192.168.70.2\" not found"
I1025 15:05:33.894409    4045 container_manager_linux.go:510] "Discovered runtime cgroup name" cgroupName="/system.slice/docker.service"
I1025 15:05:33.896395    4045 kubelet_node_status.go:554] "Recording event message for node" node="192.168.70.2" event="NodeHasSufficientMemory"
I1025 15:05:33.896426    4045 kubelet_node_status.go:554] "Recording event message for node" node="192.168.70.2" event="NodeHasNoDiskPressure"
I1025 15:05:33.896437    4045 kubelet_node_status.go:554] "Recording event message for node" node="192.168.70.2" event="NodeHasSufficientPID"
I1025 15:05:33.896468    4045 kubelet_node_status.go:71] "Attempting to register node" node="192.168.70.2"
I1025 15:05:33.912383    4045 kubelet_node_status.go:74] "Successfully registered node" node="192.168.70.2"
I1025 15:05:34.417655    4045 apiserver.go:52] "Watching apiserver"
I1025 15:05:34.622200    4045 kubelet.go:1932] "SyncLoop ADD" source="api" pods=[]
I1025 15:05:34.649361    4045 reconciler.go:157] "Reconciler: start to sync state"
I1025 15:05:38.392835    4045 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
E1025 15:05:38.907423    4045 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
I1025 15:05:40.990966    4045 kuberuntime_manager.go:1044] "Updating runtime config through cri with podcidr" CIDR="10.42.2.0/24"
I1025 15:05:40.991560    4045 docker_service.go:359] "Docker cri received runtime config" runtimeConfig="&RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.42.2.0/24,},}"
I1025 15:05:40.991812    4045 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.2.0/24"
E1025 15:05:41.001098    4045 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
I1025 15:05:43.393490    4045 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
I1025 15:05:43.864647    4045 kubelet.go:1932] "SyncLoop ADD" source="api" pods=[kube-system/rke-network-plugin-deploy-job-ftnpq]
I1025 15:05:43.864739    4045 topology_manager.go:187] "Topology Admit Handler"
E1025 15:05:43.901191    4045 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods\": RecentStats: unable to find data in memory cache]"
I1025 15:05:43.904101    4045 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f3fc6b3-446b-4962-a8dc-f0cdde91c403-config-volume\") pod \"rke-network-plugin-deploy-job-ftnpq\" (UID: \"3f3fc6b3-446b-4962-a8dc-f0cdde91c403\") "
I1025 15:05:43.904164    4045 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t4fr\" (UniqueName: \"kubernetes.io/projected/3f3fc6b3-446b-4962-a8dc-f0cdde91c403-kube-api-access-6t4fr\") pod \"rke-network-plugin-deploy-job-ftnpq\" (UID: \"3f3fc6b3-446b-4962-a8dc-f0cdde91c403\") "
E1025 15:05:43.920943    4045 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
I1025 15:05:44.004812    4045 reconciler.go:269] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f3fc6b3-446b-4962-a8dc-f0cdde91c403-config-volume\") pod \"rke-network-plugin-deploy-job-ftnpq\" (UID: \"3f3fc6b3-446b-4962-a8dc-f0cdde91c403\") "
I1025 15:05:44.004908    4045 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-6t4fr\" (UniqueName: \"kubernetes.io/projected/3f3fc6b3-446b-4962-a8dc-f0cdde91c403-kube-api-access-6t4fr\") pod \"rke-network-plugin-deploy-job-ftnpq\" (UID: \"3f3fc6b3-446b-4962-a8dc-f0cdde91c403\") "
I1025 15:05:44.005923    4045 operation_generator.go:698] MountVolume.SetUp succeeded for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3f3fc6b3-446b-4962-a8dc-f0cdde91c403-config-volume") pod "rke-network-plugin-deploy-job-ftnpq" (UID: "3f3fc6b3-446b-4962-a8dc-f0cdde91c403") 
I1025 15:05:44.019415    4045 operation_generator.go:698] MountVolume.SetUp succeeded for volume "kube-api-access-6t4fr" (UniqueName: "kubernetes.io/projected/3f3fc6b3-446b-4962-a8dc-f0cdde91c403-kube-api-access-6t4fr") pod "rke-network-plugin-deploy-job-ftnpq" (UID: "3f3fc6b3-446b-4962-a8dc-f0cdde91c403") 
I1025 15:05:44.189483    4045 kuberuntime_manager.go:460] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/rke-network-plugin-deploy-job-ftnpq"
E1025 15:05:44.448147    4045 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
    For verbose messaging see aws.Config.CredentialsChainVerboseErrors
I1025 15:05:44.448341    4045 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
I1025 15:05:44.448537    4045 provider.go:82] Docker config file not found: couldn't find valid .dockercfg after checking in [/var/lib/kubelet  /root /]
I1025 15:05:47.165189    4045 kube_docker_client.go:347] "Stop pulling image" image="rancher/mirrored-pause:3.4.1" progress="Status: Downloaded newer image for rancher/mirrored-pause:3.4.1"
I1025 15:05:47.910579    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="kube-system/rke-network-plugin-deploy-job-ftnpq" event=&{ID:3f3fc6b3-446b-4962-a8dc-f0cdde91c403 Type:ContainerStarted Data:fd4d24028fdb3cf99f87921d2db2f1dd2fde1f81909f470e4eb3525c048f0278}
I1025 15:05:47.910662    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="kube-system/rke-network-plugin-deploy-job-ftnpq" event=&{ID:3f3fc6b3-446b-4962-a8dc-f0cdde91c403 Type:ContainerStarted Data:42596d94b280f6489451b1550c760e3df9fea4c914b648a8cd2dea5577fc7c09}
I1025 15:05:48.303051    4045 kubelet.go:1932] "SyncLoop ADD" source="api" pods=[kube-system/canal-dt6sf]
I1025 15:05:48.303210    4045 topology_manager.go:187] "Topology Admit Handler"
I1025 15:05:48.328267    4045 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/18413886-6b2a-4b22-9fd6-2bb45108ff1e-policysync\") pod \"canal-dt6sf\" (UID: \"18413886-6b2a-4b22-9fd6-2bb45108ff1e\") "
I1025 15:05:48.328304    4045 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/18413886-6b2a-4b22-9fd6-2bb45108ff1e-flexvol-driver-host\") pod \"canal-dt6sf\" (UID: \"18413886-6b2a-4b22-9fd6-2bb45108ff1e\") "
I1025 15:05:48.328324    4045 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sysfs\" (UniqueName: \"kubernetes.io/host-path/18413886-6b2a-4b22-9fd6-2bb45108ff1e-sysfs\") pod \"canal-dt6sf\" (UID: \"18413886-6b2a-4b22-9fd6-2bb45108ff1e\") "
I1025 15:05:48.328340    4045 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/18413886-6b2a-4b22-9fd6-2bb45108ff1e-cni-net-dir\") pod \"canal-dt6sf\" (UID: \"18413886-6b2a-4b22-9fd6-2bb45108ff1e\") "
I1025 15:05:48.328358    4045 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/18413886-6b2a-4b22-9fd6-2bb45108ff1e-cni-log-dir\") pod \"canal-dt6sf\" (UID: \"18413886-6b2a-4b22-9fd6-2bb45108ff1e\") "
I1025 15:05:48.328415    4045 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bct2l\" (UniqueName: \"kubernetes.io/projected/18413886-6b2a-4b22-9fd6-2bb45108ff1e-kube-api-access-bct2l\") pod \"canal-dt6sf\" (UID: \"18413886-6b2a-4b22-9fd6-2bb45108ff1e\") "
I1025 15:05:48.328456    4045 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/18413886-6b2a-4b22-9fd6-2bb45108ff1e-lib-modules\") pod \"canal-dt6sf\" (UID: \"18413886-6b2a-4b22-9fd6-2bb45108ff1e\") "
I1025 15:05:48.328480    4045 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/18413886-6b2a-4b22-9fd6-2bb45108ff1e-flannel-cfg\") pod \"canal-dt6sf\" (UID: \"18413886-6b2a-4b22-9fd6-2bb45108ff1e\") "
I1025 15:05:48.328501    4045 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/18413886-6b2a-4b22-9fd6-2bb45108ff1e-cni-bin-dir\") pod \"canal-dt6sf\" (UID: \"18413886-6b2a-4b22-9fd6-2bb45108ff1e\") "
I1025 15:05:48.328531    4045 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/18413886-6b2a-4b22-9fd6-2bb45108ff1e-var-lib-calico\") pod \"canal-dt6sf\" (UID: \"18413886-6b2a-4b22-9fd6-2bb45108ff1e\") "
I1025 15:05:48.328554    4045 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/18413886-6b2a-4b22-9fd6-2bb45108ff1e-xtables-lock\") pod \"canal-dt6sf\" (UID: \"18413886-6b2a-4b22-9fd6-2bb45108ff1e\") "
I1025 15:05:48.328645    4045 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/18413886-6b2a-4b22-9fd6-2bb45108ff1e-var-run-calico\") pod \"canal-dt6sf\" (UID: \"18413886-6b2a-4b22-9fd6-2bb45108ff1e\") "
I1025 15:05:48.394050    4045 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
I1025 15:05:48.429505    4045 reconciler.go:269] "operationExecutor.MountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/18413886-6b2a-4b22-9fd6-2bb45108ff1e-var-lib-calico\") pod \"canal-dt6sf\" (UID: \"18413886-6b2a-4b22-9fd6-2bb45108ff1e\") "
I1025 15:05:48.429582    4045 reconciler.go:269] "operationExecutor.MountVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/18413886-6b2a-4b22-9fd6-2bb45108ff1e-flannel-cfg\") pod \"canal-dt6sf\" (UID: \"18413886-6b2a-4b22-9fd6-2bb45108ff1e\") "
I1025 15:05:48.429628    4045 reconciler.go:269] "operationExecutor.MountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/18413886-6b2a-4b22-9fd6-2bb45108ff1e-cni-bin-dir\") pod \"canal-dt6sf\" (UID: \"18413886-6b2a-4b22-9fd6-2bb45108ff1e\") "
I1025 15:05:48.429671    4045 reconciler.go:269] "operationExecutor.MountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/18413886-6b2a-4b22-9fd6-2bb45108ff1e-var-run-calico\") pod \"canal-dt6sf\" (UID: \"18413886-6b2a-4b22-9fd6-2bb45108ff1e\") "
I1025 15:05:48.429708    4045 operation_generator.go:698] MountVolume.SetUp succeeded for volume "var-lib-calico" (UniqueName: "kubernetes.io/host-path/18413886-6b2a-4b22-9fd6-2bb45108ff1e-var-lib-calico") pod "canal-dt6sf" (UID: "18413886-6b2a-4b22-9fd6-2bb45108ff1e") 
I1025 15:05:48.429753    4045 reconciler.go:269] "operationExecutor.MountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/18413886-6b2a-4b22-9fd6-2bb45108ff1e-xtables-lock\") pod \"canal-dt6sf\" (UID: \"18413886-6b2a-4b22-9fd6-2bb45108ff1e\") "
I1025 15:05:48.429906    4045 operation_generator.go:698] MountVolume.SetUp succeeded for volume "var-run-calico" (UniqueName: "kubernetes.io/host-path/18413886-6b2a-4b22-9fd6-2bb45108ff1e-var-run-calico") pod "canal-dt6sf" (UID: "18413886-6b2a-4b22-9fd6-2bb45108ff1e") 
I1025 15:05:48.429941    4045 reconciler.go:269] "operationExecutor.MountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/18413886-6b2a-4b22-9fd6-2bb45108ff1e-policysync\") pod \"canal-dt6sf\" (UID: \"18413886-6b2a-4b22-9fd6-2bb45108ff1e\") "
I1025 15:05:48.430018    4045 reconciler.go:269] "operationExecutor.MountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/18413886-6b2a-4b22-9fd6-2bb45108ff1e-flexvol-driver-host\") pod \"canal-dt6sf\" (UID: \"18413886-6b2a-4b22-9fd6-2bb45108ff1e\") "
I1025 15:05:48.430069    4045 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-bct2l\" (UniqueName: \"kubernetes.io/projected/18413886-6b2a-4b22-9fd6-2bb45108ff1e-kube-api-access-bct2l\") pod \"canal-dt6sf\" (UID: \"18413886-6b2a-4b22-9fd6-2bb45108ff1e\") "
I1025 15:05:48.430116    4045 reconciler.go:269] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/18413886-6b2a-4b22-9fd6-2bb45108ff1e-lib-modules\") pod \"canal-dt6sf\" (UID: \"18413886-6b2a-4b22-9fd6-2bb45108ff1e\") "
I1025 15:05:48.430181    4045 reconciler.go:269] "operationExecutor.MountVolume started for volume \"sysfs\" (UniqueName: \"kubernetes.io/host-path/18413886-6b2a-4b22-9fd6-2bb45108ff1e-sysfs\") pod \"canal-dt6sf\" (UID: \"18413886-6b2a-4b22-9fd6-2bb45108ff1e\") "
I1025 15:05:48.429914    4045 operation_generator.go:698] MountVolume.SetUp succeeded for volume "cni-bin-dir" (UniqueName: "kubernetes.io/host-path/18413886-6b2a-4b22-9fd6-2bb45108ff1e-cni-bin-dir") pod "canal-dt6sf" (UID: "18413886-6b2a-4b22-9fd6-2bb45108ff1e") 
I1025 15:05:48.430278    4045 reconciler.go:269] "operationExecutor.MountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/18413886-6b2a-4b22-9fd6-2bb45108ff1e-cni-net-dir\") pod \"canal-dt6sf\" (UID: \"18413886-6b2a-4b22-9fd6-2bb45108ff1e\") "
I1025 15:05:48.430395    4045 operation_generator.go:698] MountVolume.SetUp succeeded for volume "cni-net-dir" (UniqueName: "kubernetes.io/host-path/18413886-6b2a-4b22-9fd6-2bb45108ff1e-cni-net-dir") pod "canal-dt6sf" (UID: "18413886-6b2a-4b22-9fd6-2bb45108ff1e") 
I1025 15:05:48.430455    4045 reconciler.go:269] "operationExecutor.MountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/18413886-6b2a-4b22-9fd6-2bb45108ff1e-cni-log-dir\") pod \"canal-dt6sf\" (UID: \"18413886-6b2a-4b22-9fd6-2bb45108ff1e\") "
I1025 15:05:48.430559    4045 operation_generator.go:698] MountVolume.SetUp succeeded for volume "cni-log-dir" (UniqueName: "kubernetes.io/host-path/18413886-6b2a-4b22-9fd6-2bb45108ff1e-cni-log-dir") pod "canal-dt6sf" (UID: "18413886-6b2a-4b22-9fd6-2bb45108ff1e") 
I1025 15:05:48.430608    4045 operation_generator.go:698] MountVolume.SetUp succeeded for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/18413886-6b2a-4b22-9fd6-2bb45108ff1e-xtables-lock") pod "canal-dt6sf" (UID: "18413886-6b2a-4b22-9fd6-2bb45108ff1e") 
I1025 15:05:48.430721    4045 operation_generator.go:698] MountVolume.SetUp succeeded for volume "sysfs" (UniqueName: "kubernetes.io/host-path/18413886-6b2a-4b22-9fd6-2bb45108ff1e-sysfs") pod "canal-dt6sf" (UID: "18413886-6b2a-4b22-9fd6-2bb45108ff1e") 
I1025 15:05:48.430927    4045 operation_generator.go:698] MountVolume.SetUp succeeded for volume "policysync" (UniqueName: "kubernetes.io/host-path/18413886-6b2a-4b22-9fd6-2bb45108ff1e-policysync") pod "canal-dt6sf" (UID: "18413886-6b2a-4b22-9fd6-2bb45108ff1e") 
I1025 15:05:48.431226    4045 operation_generator.go:698] MountVolume.SetUp succeeded for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/18413886-6b2a-4b22-9fd6-2bb45108ff1e-lib-modules") pod "canal-dt6sf" (UID: "18413886-6b2a-4b22-9fd6-2bb45108ff1e") 
I1025 15:05:48.431372    4045 operation_generator.go:698] MountVolume.SetUp succeeded for volume "flexvol-driver-host" (UniqueName: "kubernetes.io/host-path/18413886-6b2a-4b22-9fd6-2bb45108ff1e-flexvol-driver-host") pod "canal-dt6sf" (UID: "18413886-6b2a-4b22-9fd6-2bb45108ff1e") 
I1025 15:05:48.431508    4045 operation_generator.go:698] MountVolume.SetUp succeeded for volume "flannel-cfg" (UniqueName: "kubernetes.io/configmap/18413886-6b2a-4b22-9fd6-2bb45108ff1e-flannel-cfg") pod "canal-dt6sf" (UID: "18413886-6b2a-4b22-9fd6-2bb45108ff1e") 
I1025 15:05:48.447570    4045 operation_generator.go:698] MountVolume.SetUp succeeded for volume "kube-api-access-bct2l" (UniqueName: "kubernetes.io/projected/18413886-6b2a-4b22-9fd6-2bb45108ff1e-kube-api-access-bct2l") pod "canal-dt6sf" (UID: "18413886-6b2a-4b22-9fd6-2bb45108ff1e") 
I1025 15:05:48.625318    4045 kuberuntime_manager.go:460] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/canal-dt6sf"
I1025 15:05:48.924591    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="kube-system/rke-network-plugin-deploy-job-ftnpq" event=&{ID:3f3fc6b3-446b-4962-a8dc-f0cdde91c403 Type:ContainerDied Data:fd4d24028fdb3cf99f87921d2db2f1dd2fde1f81909f470e4eb3525c048f0278}
I1025 15:05:48.924910    4045 scope.go:111] "RemoveContainer" containerID="fd4d24028fdb3cf99f87921d2db2f1dd2fde1f81909f470e4eb3525c048f0278"
I1025 15:05:48.933629    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="kube-system/canal-dt6sf" event=&{ID:18413886-6b2a-4b22-9fd6-2bb45108ff1e Type:ContainerStarted Data:6d87db5b8a74ef7e5dde660996a63c35f5eda0e4a28d8bd44e5cdf6ae12f74ac}
E1025 15:05:48.948884    4045 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
I1025 15:05:49.952445    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="kube-system/rke-network-plugin-deploy-job-ftnpq" event=&{ID:3f3fc6b3-446b-4962-a8dc-f0cdde91c403 Type:ContainerDied Data:42596d94b280f6489451b1550c760e3df9fea4c914b648a8cd2dea5577fc7c09}
I1025 15:05:49.952607    4045 kuberuntime_manager.go:479] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/rke-network-plugin-deploy-job-ftnpq"
I1025 15:05:49.952744    4045 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="42596d94b280f6489451b1550c760e3df9fea4c914b648a8cd2dea5577fc7c09"
I1025 15:05:51.045613    4045 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6t4fr\" (UniqueName: \"kubernetes.io/projected/3f3fc6b3-446b-4962-a8dc-f0cdde91c403-kube-api-access-6t4fr\") pod \"3f3fc6b3-446b-4962-a8dc-f0cdde91c403\" (UID: \"3f3fc6b3-446b-4962-a8dc-f0cdde91c403\") "
I1025 15:05:51.045729    4045 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f3fc6b3-446b-4962-a8dc-f0cdde91c403-config-volume\") pod \"3f3fc6b3-446b-4962-a8dc-f0cdde91c403\" (UID: \"3f3fc6b3-446b-4962-a8dc-f0cdde91c403\") "
W1025 15:05:51.046280    4045 empty_dir.go:520] Warning: Failed to clear quota on /var/lib/kubelet/pods/3f3fc6b3-446b-4962-a8dc-f0cdde91c403/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
I1025 15:05:51.046587    4045 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f3fc6b3-446b-4962-a8dc-f0cdde91c403-config-volume" (OuterVolumeSpecName: "config-volume") pod "3f3fc6b3-446b-4962-a8dc-f0cdde91c403" (UID: "3f3fc6b3-446b-4962-a8dc-f0cdde91c403"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
I1025 15:05:51.053075    4045 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f3fc6b3-446b-4962-a8dc-f0cdde91c403-kube-api-access-6t4fr" (OuterVolumeSpecName: "kube-api-access-6t4fr") pod "3f3fc6b3-446b-4962-a8dc-f0cdde91c403" (UID: "3f3fc6b3-446b-4962-a8dc-f0cdde91c403"). InnerVolumeSpecName "kube-api-access-6t4fr". PluginName "kubernetes.io/projected", VolumeGidValue ""
I1025 15:05:51.146642    4045 reconciler.go:319] "Volume detached for volume \"kube-api-access-6t4fr\" (UniqueName: \"kubernetes.io/projected/3f3fc6b3-446b-4962-a8dc-f0cdde91c403-kube-api-access-6t4fr\") on node \"192.168.70.2\" DevicePath \"\""
I1025 15:05:51.146715    4045 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f3fc6b3-446b-4962-a8dc-f0cdde91c403-config-volume\") on node \"192.168.70.2\" DevicePath \"\""
I1025 15:05:52.703863    4045 kube_docker_client.go:347] "Stop pulling image" image="rancher/mirrored-calico-cni:v3.19.2" progress="Status: Downloaded newer image for rancher/mirrored-calico-cni:v3.19.2"
I1025 15:05:52.972038    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="kube-system/canal-dt6sf" event=&{ID:18413886-6b2a-4b22-9fd6-2bb45108ff1e Type:ContainerStarted Data:00080d2baa92c0725ab5e90db2828bad2c1ad1201d785630edbe7aac18d41ab1}
I1025 15:05:53.984521    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="kube-system/canal-dt6sf" event=&{ID:18413886-6b2a-4b22-9fd6-2bb45108ff1e Type:ContainerDied Data:00080d2baa92c0725ab5e90db2828bad2c1ad1201d785630edbe7aac18d41ab1}
I1025 15:05:54.027861    4045 kubelet_node_status.go:554] "Recording event message for node" node="192.168.70.2" event="NodeReady"
I1025 15:05:54.404818    4045 kubelet.go:1932] "SyncLoop ADD" source="api" pods=[kube-system/rke-coredns-addon-deploy-job-cpzbk]
I1025 15:05:54.404928    4045 topology_manager.go:187] "Topology Admit Handler"
I1025 15:05:54.571161    4045 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkmzc\" (UniqueName: \"kubernetes.io/projected/5e7c7e2c-ab6e-4353-8447-db98bc56d47f-kube-api-access-qkmzc\") pod \"rke-coredns-addon-deploy-job-cpzbk\" (UID: \"5e7c7e2c-ab6e-4353-8447-db98bc56d47f\") "
I1025 15:05:54.571275    4045 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e7c7e2c-ab6e-4353-8447-db98bc56d47f-config-volume\") pod \"rke-coredns-addon-deploy-job-cpzbk\" (UID: \"5e7c7e2c-ab6e-4353-8447-db98bc56d47f\") "
I1025 15:05:54.671832    4045 reconciler.go:269] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e7c7e2c-ab6e-4353-8447-db98bc56d47f-config-volume\") pod \"rke-coredns-addon-deploy-job-cpzbk\" (UID: \"5e7c7e2c-ab6e-4353-8447-db98bc56d47f\") "
I1025 15:05:54.671927    4045 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-qkmzc\" (UniqueName: \"kubernetes.io/projected/5e7c7e2c-ab6e-4353-8447-db98bc56d47f-kube-api-access-qkmzc\") pod \"rke-coredns-addon-deploy-job-cpzbk\" (UID: \"5e7c7e2c-ab6e-4353-8447-db98bc56d47f\") "
I1025 15:05:54.673458    4045 operation_generator.go:698] MountVolume.SetUp succeeded for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5e7c7e2c-ab6e-4353-8447-db98bc56d47f-config-volume") pod "rke-coredns-addon-deploy-job-cpzbk" (UID: "5e7c7e2c-ab6e-4353-8447-db98bc56d47f") 
I1025 15:05:54.696226    4045 operation_generator.go:698] MountVolume.SetUp succeeded for volume "kube-api-access-qkmzc" (UniqueName: "kubernetes.io/projected/5e7c7e2c-ab6e-4353-8447-db98bc56d47f-kube-api-access-qkmzc") pod "rke-coredns-addon-deploy-job-cpzbk" (UID: "5e7c7e2c-ab6e-4353-8447-db98bc56d47f") 
I1025 15:05:54.721277    4045 kuberuntime_manager.go:460] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/rke-coredns-addon-deploy-job-cpzbk"
I1025 15:05:55.001863    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="kube-system/rke-coredns-addon-deploy-job-cpzbk" event=&{ID:5e7c7e2c-ab6e-4353-8447-db98bc56d47f Type:ContainerStarted Data:32b7ba894e12a2c068a0b695b997968077a8dc5eb75a350e450fa4641fc2b192}
I1025 15:05:55.001926    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="kube-system/rke-coredns-addon-deploy-job-cpzbk" event=&{ID:5e7c7e2c-ab6e-4353-8447-db98bc56d47f Type:ContainerStarted Data:61ce3a57f03433d7bcd085a53f4f0d49eb01a9406ffe20de489a6b0b41d2b9bc}
I1025 15:05:55.580891    4045 kubelet.go:1932] "SyncLoop ADD" source="api" pods=[kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr]
I1025 15:05:55.580944    4045 topology_manager.go:187] "Topology Admit Handler"
I1025 15:05:55.681135    4045 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stsbs\" (UniqueName: \"kubernetes.io/projected/1a21dd63-86c2-4b0a-bba0-21634e9f2618-kube-api-access-stsbs\") pod \"coredns-autoscaler-57fd5c9bd5-bw2xr\" (UID: \"1a21dd63-86c2-4b0a-bba0-21634e9f2618\") "
I1025 15:05:55.781996    4045 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-stsbs\" (UniqueName: \"kubernetes.io/projected/1a21dd63-86c2-4b0a-bba0-21634e9f2618-kube-api-access-stsbs\") pod \"coredns-autoscaler-57fd5c9bd5-bw2xr\" (UID: \"1a21dd63-86c2-4b0a-bba0-21634e9f2618\") "
I1025 15:05:55.800070    4045 operation_generator.go:698] MountVolume.SetUp succeeded for volume "kube-api-access-stsbs" (UniqueName: "kubernetes.io/projected/1a21dd63-86c2-4b0a-bba0-21634e9f2618-kube-api-access-stsbs") pod "coredns-autoscaler-57fd5c9bd5-bw2xr" (UID: "1a21dd63-86c2-4b0a-bba0-21634e9f2618") 
I1025 15:05:55.900843    4045 kuberuntime_manager.go:460] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr"
I1025 15:05:56.009854    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="kube-system/rke-coredns-addon-deploy-job-cpzbk" event=&{ID:5e7c7e2c-ab6e-4353-8447-db98bc56d47f Type:ContainerDied Data:32b7ba894e12a2c068a0b695b997968077a8dc5eb75a350e450fa4641fc2b192}
I1025 15:05:56.010044    4045 scope.go:111] "RemoveContainer" containerID="32b7ba894e12a2c068a0b695b997968077a8dc5eb75a350e450fa4641fc2b192"
I1025 15:05:56.141280    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr" event=&{ID:1a21dd63-86c2-4b0a-bba0-21634e9f2618 Type:ContainerDied Data:ccfdd2aed5bb7e856de2e18d305d5412c182bafddd98d9bfd41b1f701cc17bdf}
I1025 15:05:56.141377    4045 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="ccfdd2aed5bb7e856de2e18d305d5412c182bafddd98d9bfd41b1f701cc17bdf"
I1025 15:05:56.184172    4045 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e7c7e2c-ab6e-4353-8447-db98bc56d47f-config-volume\") pod \"5e7c7e2c-ab6e-4353-8447-db98bc56d47f\" (UID: \"5e7c7e2c-ab6e-4353-8447-db98bc56d47f\") "
I1025 15:05:56.184230    4045 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkmzc\" (UniqueName: \"kubernetes.io/projected/5e7c7e2c-ab6e-4353-8447-db98bc56d47f-kube-api-access-qkmzc\") pod \"5e7c7e2c-ab6e-4353-8447-db98bc56d47f\" (UID: \"5e7c7e2c-ab6e-4353-8447-db98bc56d47f\") "
W1025 15:05:56.184968    4045 empty_dir.go:520] Warning: Failed to clear quota on /var/lib/kubelet/pods/5e7c7e2c-ab6e-4353-8447-db98bc56d47f/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
I1025 15:05:56.185362    4045 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e7c7e2c-ab6e-4353-8447-db98bc56d47f-config-volume" (OuterVolumeSpecName: "config-volume") pod "5e7c7e2c-ab6e-4353-8447-db98bc56d47f" (UID: "5e7c7e2c-ab6e-4353-8447-db98bc56d47f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
E1025 15:05:56.190526    4045 cni.go:361] "Error adding pod to network" err="stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr" podSandboxID={Type:docker ID:ccfdd2aed5bb7e856de2e18d305d5412c182bafddd98d9bfd41b1f701cc17bdf} podNetnsPath="/proc/5892/ns/net" networkType="calico" networkName="k8s-pod-network"
I1025 15:05:56.191260    4045 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e7c7e2c-ab6e-4353-8447-db98bc56d47f-kube-api-access-qkmzc" (OuterVolumeSpecName: "kube-api-access-qkmzc") pod "5e7c7e2c-ab6e-4353-8447-db98bc56d47f" (UID: "5e7c7e2c-ab6e-4353-8447-db98bc56d47f"). InnerVolumeSpecName "kube-api-access-qkmzc". PluginName "kubernetes.io/projected", VolumeGidValue ""
I1025 15:05:56.285184    4045 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e7c7e2c-ab6e-4353-8447-db98bc56d47f-config-volume\") on node \"192.168.70.2\" DevicePath \"\""
I1025 15:05:56.285335    4045 reconciler.go:319] "Volume detached for volume \"kube-api-access-qkmzc\" (UniqueName: \"kubernetes.io/projected/5e7c7e2c-ab6e-4353-8447-db98bc56d47f-kube-api-access-qkmzc\") on node \"192.168.70.2\" DevicePath \"\""
E1025 15:05:56.331589    4045 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"ccfdd2aed5bb7e856de2e18d305d5412c182bafddd98d9bfd41b1f701cc17bdf\" network for pod \"coredns-autoscaler-57fd5c9bd5-bw2xr\": networkPlugin cni failed to set up pod \"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
E1025 15:05:56.331686    4045 kuberuntime_sandbox.go:68] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to set up sandbox container \"ccfdd2aed5bb7e856de2e18d305d5412c182bafddd98d9bfd41b1f701cc17bdf\" network for pod \"coredns-autoscaler-57fd5c9bd5-bw2xr\": networkPlugin cni failed to set up pod \"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr"
E1025 15:05:56.331742    4045 kuberuntime_manager.go:790] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"ccfdd2aed5bb7e856de2e18d305d5412c182bafddd98d9bfd41b1f701cc17bdf\" network for pod \"coredns-autoscaler-57fd5c9bd5-bw2xr\": networkPlugin cni failed to set up pod \"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr"
E1025 15:05:56.331848    4045 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system(1a21dd63-86c2-4b0a-bba0-21634e9f2618)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system(1a21dd63-86c2-4b0a-bba0-21634e9f2618)\\\": rpc error: code = Unknown desc = failed to set up sandbox container \\\"ccfdd2aed5bb7e856de2e18d305d5412c182bafddd98d9bfd41b1f701cc17bdf\\\" network for pod \\\"coredns-autoscaler-57fd5c9bd5-bw2xr\\\": networkPlugin cni failed to set up pod \\\"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system\\\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr" podUID=1a21dd63-86c2-4b0a-bba0-21634e9f2618
I1025 15:05:57.150860    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="kube-system/rke-coredns-addon-deploy-job-cpzbk" event=&{ID:5e7c7e2c-ab6e-4353-8447-db98bc56d47f Type:ContainerDied Data:61ce3a57f03433d7bcd085a53f4f0d49eb01a9406ffe20de489a6b0b41d2b9bc}
I1025 15:05:57.150889    4045 kuberuntime_manager.go:491] "Sandbox for pod has no IP address. Need to start a new one" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr"
I1025 15:05:57.150920    4045 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="61ce3a57f03433d7bcd085a53f4f0d49eb01a9406ffe20de489a6b0b41d2b9bc"
I1025 15:05:57.151043    4045 kuberuntime_manager.go:479] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/rke-coredns-addon-deploy-job-cpzbk"
I1025 15:05:57.153964    4045 cni.go:333] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"ccfdd2aed5bb7e856de2e18d305d5412c182bafddd98d9bfd41b1f701cc17bdf\""
E1025 15:05:57.582480    4045 cni.go:361] "Error adding pod to network" err="stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr" podSandboxID={Type:docker ID:9336bf9ada540449e0418301894a67fb21b2f0caeeea85c287b641e725d9e8bb} podNetnsPath="/proc/6147/ns/net" networkType="calico" networkName="k8s-pod-network"
I1025 15:05:57.711381    4045 kube_docker_client.go:347] "Stop pulling image" image="rancher/mirrored-calico-pod2daemon-flexvol:v3.19.2" progress="Status: Downloaded newer image for rancher/mirrored-calico-pod2daemon-flexvol:v3.19.2"
E1025 15:05:57.759095    4045 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"9336bf9ada540449e0418301894a67fb21b2f0caeeea85c287b641e725d9e8bb\" network for pod \"coredns-autoscaler-57fd5c9bd5-bw2xr\": networkPlugin cni failed to set up pod \"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
E1025 15:05:57.759166    4045 kuberuntime_sandbox.go:68] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to set up sandbox container \"9336bf9ada540449e0418301894a67fb21b2f0caeeea85c287b641e725d9e8bb\" network for pod \"coredns-autoscaler-57fd5c9bd5-bw2xr\": networkPlugin cni failed to set up pod \"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr"
E1025 15:05:57.759195    4045 kuberuntime_manager.go:790] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"9336bf9ada540449e0418301894a67fb21b2f0caeeea85c287b641e725d9e8bb\" network for pod \"coredns-autoscaler-57fd5c9bd5-bw2xr\": networkPlugin cni failed to set up pod \"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr"
E1025 15:05:57.759284    4045 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system(1a21dd63-86c2-4b0a-bba0-21634e9f2618)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system(1a21dd63-86c2-4b0a-bba0-21634e9f2618)\\\": rpc error: code = Unknown desc = failed to set up sandbox container \\\"9336bf9ada540449e0418301894a67fb21b2f0caeeea85c287b641e725d9e8bb\\\" network for pod \\\"coredns-autoscaler-57fd5c9bd5-bw2xr\\\": networkPlugin cni failed to set up pod \\\"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system\\\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr" podUID=1a21dd63-86c2-4b0a-bba0-21634e9f2618
I1025 15:05:57.867281    4045 kubelet_pods.go:1285] "Killing unwanted pod" podName="rke-coredns-addon-deploy-job-cpzbk"
I1025 15:05:58.160682    4045 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"9336bf9ada540449e0418301894a67fb21b2f0caeeea85c287b641e725d9e8bb\""
I1025 15:05:58.163454    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr" event=&{ID:1a21dd63-86c2-4b0a-bba0-21634e9f2618 Type:ContainerDied Data:9336bf9ada540449e0418301894a67fb21b2f0caeeea85c287b641e725d9e8bb}
I1025 15:05:58.163547    4045 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="9336bf9ada540449e0418301894a67fb21b2f0caeeea85c287b641e725d9e8bb"
I1025 15:05:58.163843    4045 kuberuntime_manager.go:479] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr"
I1025 15:05:58.165620    4045 cni.go:333] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"9336bf9ada540449e0418301894a67fb21b2f0caeeea85c287b641e725d9e8bb\""
I1025 15:05:58.173315    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="kube-system/canal-dt6sf" event=&{ID:18413886-6b2a-4b22-9fd6-2bb45108ff1e Type:ContainerDied Data:bc2e203037639463f9880cda16b2cf8b8d1382991ed96446bfa2f712572ef1f8}
E1025 15:05:58.524692    4045 cni.go:361] "Error adding pod to network" err="stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr" podSandboxID={Type:docker ID:a7f1902d55fecc3fb466e8397c6aa9dc2a76bdad0d6ce2fb722f286319001252} podNetnsPath="/proc/6387/ns/net" networkType="calico" networkName="k8s-pod-network"
E1025 15:05:58.651203    4045 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"a7f1902d55fecc3fb466e8397c6aa9dc2a76bdad0d6ce2fb722f286319001252\" network for pod \"coredns-autoscaler-57fd5c9bd5-bw2xr\": networkPlugin cni failed to set up pod \"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
E1025 15:05:58.651289    4045 kuberuntime_sandbox.go:68] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to set up sandbox container \"a7f1902d55fecc3fb466e8397c6aa9dc2a76bdad0d6ce2fb722f286319001252\" network for pod \"coredns-autoscaler-57fd5c9bd5-bw2xr\": networkPlugin cni failed to set up pod \"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr"
E1025 15:05:58.651338    4045 kuberuntime_manager.go:790] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"a7f1902d55fecc3fb466e8397c6aa9dc2a76bdad0d6ce2fb722f286319001252\" network for pod \"coredns-autoscaler-57fd5c9bd5-bw2xr\": networkPlugin cni failed to set up pod \"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr"
E1025 15:05:58.651438    4045 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system(1a21dd63-86c2-4b0a-bba0-21634e9f2618)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system(1a21dd63-86c2-4b0a-bba0-21634e9f2618)\\\": rpc error: code = Unknown desc = failed to set up sandbox container \\\"a7f1902d55fecc3fb466e8397c6aa9dc2a76bdad0d6ce2fb722f286319001252\\\" network for pod \\\"coredns-autoscaler-57fd5c9bd5-bw2xr\\\": networkPlugin cni failed to set up pod \\\"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system\\\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr" podUID=1a21dd63-86c2-4b0a-bba0-21634e9f2618
I1025 15:05:59.183504    4045 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a7f1902d55fecc3fb466e8397c6aa9dc2a76bdad0d6ce2fb722f286319001252\""
I1025 15:05:59.187401    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr" event=&{ID:1a21dd63-86c2-4b0a-bba0-21634e9f2618 Type:ContainerDied Data:a7f1902d55fecc3fb466e8397c6aa9dc2a76bdad0d6ce2fb722f286319001252}
I1025 15:05:59.187552    4045 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="a7f1902d55fecc3fb466e8397c6aa9dc2a76bdad0d6ce2fb722f286319001252"
I1025 15:05:59.187829    4045 kuberuntime_manager.go:479] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr"
I1025 15:05:59.192479    4045 cni.go:333] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"a7f1902d55fecc3fb466e8397c6aa9dc2a76bdad0d6ce2fb722f286319001252\""
E1025 15:05:59.555663    4045 cni.go:361] "Error adding pod to network" err="stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr" podSandboxID={Type:docker ID:8118b6a351a7d1026fa043e4175eff97216904a08322559b43466b5c4a82c998} podNetnsPath="/proc/6574/ns/net" networkType="calico" networkName="k8s-pod-network"
E1025 15:05:59.693080    4045 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"8118b6a351a7d1026fa043e4175eff97216904a08322559b43466b5c4a82c998\" network for pod \"coredns-autoscaler-57fd5c9bd5-bw2xr\": networkPlugin cni failed to set up pod \"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
E1025 15:05:59.693172    4045 kuberuntime_sandbox.go:68] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to set up sandbox container \"8118b6a351a7d1026fa043e4175eff97216904a08322559b43466b5c4a82c998\" network for pod \"coredns-autoscaler-57fd5c9bd5-bw2xr\": networkPlugin cni failed to set up pod \"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr"
E1025 15:05:59.693221    4045 kuberuntime_manager.go:790] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"8118b6a351a7d1026fa043e4175eff97216904a08322559b43466b5c4a82c998\" network for pod \"coredns-autoscaler-57fd5c9bd5-bw2xr\": networkPlugin cni failed to set up pod \"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr"
E1025 15:05:59.693314    4045 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system(1a21dd63-86c2-4b0a-bba0-21634e9f2618)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system(1a21dd63-86c2-4b0a-bba0-21634e9f2618)\\\": rpc error: code = Unknown desc = failed to set up sandbox container \\\"8118b6a351a7d1026fa043e4175eff97216904a08322559b43466b5c4a82c998\\\" network for pod \\\"coredns-autoscaler-57fd5c9bd5-bw2xr\\\": networkPlugin cni failed to set up pod \\\"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system\\\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr" podUID=1a21dd63-86c2-4b0a-bba0-21634e9f2618
I1025 15:06:00.197965    4045 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"8118b6a351a7d1026fa043e4175eff97216904a08322559b43466b5c4a82c998\""
I1025 15:06:00.202583    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr" event=&{ID:1a21dd63-86c2-4b0a-bba0-21634e9f2618 Type:ContainerDied Data:8118b6a351a7d1026fa043e4175eff97216904a08322559b43466b5c4a82c998}
I1025 15:06:00.202666    4045 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="8118b6a351a7d1026fa043e4175eff97216904a08322559b43466b5c4a82c998"
I1025 15:06:00.202961    4045 kuberuntime_manager.go:479] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr"
I1025 15:06:00.204536    4045 cni.go:333] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"8118b6a351a7d1026fa043e4175eff97216904a08322559b43466b5c4a82c998\""
I1025 15:06:00.466106    4045 kubelet.go:1932] "SyncLoop ADD" source="api" pods=[kube-system/rke-metrics-addon-deploy-job-x2f95]
I1025 15:06:00.466170    4045 topology_manager.go:187] "Topology Admit Handler"
E1025 15:06:00.575537    4045 cni.go:361] "Error adding pod to network" err="stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr" podSandboxID={Type:docker ID:fd5d698480b4cc7a860cd9468382dfbdd20377c665db7770be55ff67f3aa8c69} podNetnsPath="/proc/6742/ns/net" networkType="calico" networkName="k8s-pod-network"
I1025 15:06:00.614800    4045 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84cxx\" (UniqueName: \"kubernetes.io/projected/62ee4c74-20c7-43be-b150-959a371c4d16-kube-api-access-84cxx\") pod \"rke-metrics-addon-deploy-job-x2f95\" (UID: \"62ee4c74-20c7-43be-b150-959a371c4d16\") "
I1025 15:06:00.614868    4045 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62ee4c74-20c7-43be-b150-959a371c4d16-config-volume\") pod \"rke-metrics-addon-deploy-job-x2f95\" (UID: \"62ee4c74-20c7-43be-b150-959a371c4d16\") "
I1025 15:06:00.716062    4045 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-84cxx\" (UniqueName: \"kubernetes.io/projected/62ee4c74-20c7-43be-b150-959a371c4d16-kube-api-access-84cxx\") pod \"rke-metrics-addon-deploy-job-x2f95\" (UID: \"62ee4c74-20c7-43be-b150-959a371c4d16\") "
I1025 15:06:00.716107    4045 reconciler.go:269] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62ee4c74-20c7-43be-b150-959a371c4d16-config-volume\") pod \"rke-metrics-addon-deploy-job-x2f95\" (UID: \"62ee4c74-20c7-43be-b150-959a371c4d16\") "
I1025 15:06:00.718258    4045 operation_generator.go:698] MountVolume.SetUp succeeded for volume "config-volume" (UniqueName: "kubernetes.io/configmap/62ee4c74-20c7-43be-b150-959a371c4d16-config-volume") pod "rke-metrics-addon-deploy-job-x2f95" (UID: "62ee4c74-20c7-43be-b150-959a371c4d16") 
E1025 15:06:00.723702    4045 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"fd5d698480b4cc7a860cd9468382dfbdd20377c665db7770be55ff67f3aa8c69\" network for pod \"coredns-autoscaler-57fd5c9bd5-bw2xr\": networkPlugin cni failed to set up pod \"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
E1025 15:06:00.723758    4045 kuberuntime_sandbox.go:68] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to set up sandbox container \"fd5d698480b4cc7a860cd9468382dfbdd20377c665db7770be55ff67f3aa8c69\" network for pod \"coredns-autoscaler-57fd5c9bd5-bw2xr\": networkPlugin cni failed to set up pod \"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr"
E1025 15:06:00.723784    4045 kuberuntime_manager.go:790] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"fd5d698480b4cc7a860cd9468382dfbdd20377c665db7770be55ff67f3aa8c69\" network for pod \"coredns-autoscaler-57fd5c9bd5-bw2xr\": networkPlugin cni failed to set up pod \"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr"
E1025 15:06:00.723849    4045 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system(1a21dd63-86c2-4b0a-bba0-21634e9f2618)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system(1a21dd63-86c2-4b0a-bba0-21634e9f2618)\\\": rpc error: code = Unknown desc = failed to set up sandbox container \\\"fd5d698480b4cc7a860cd9468382dfbdd20377c665db7770be55ff67f3aa8c69\\\" network for pod \\\"coredns-autoscaler-57fd5c9bd5-bw2xr\\\": networkPlugin cni failed to set up pod \\\"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system\\\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr" podUID=1a21dd63-86c2-4b0a-bba0-21634e9f2618
I1025 15:06:00.784943    4045 operation_generator.go:698] MountVolume.SetUp succeeded for volume "kube-api-access-84cxx" (UniqueName: "kubernetes.io/projected/62ee4c74-20c7-43be-b150-959a371c4d16-kube-api-access-84cxx") pod "rke-metrics-addon-deploy-job-x2f95" (UID: "62ee4c74-20c7-43be-b150-959a371c4d16") 
I1025 15:06:00.789079    4045 kuberuntime_manager.go:460] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/rke-metrics-addon-deploy-job-x2f95"
I1025 15:06:01.211456    4045 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"fd5d698480b4cc7a860cd9468382dfbdd20377c665db7770be55ff67f3aa8c69\""
I1025 15:06:01.214624    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr" event=&{ID:1a21dd63-86c2-4b0a-bba0-21634e9f2618 Type:ContainerDied Data:fd5d698480b4cc7a860cd9468382dfbdd20377c665db7770be55ff67f3aa8c69}
I1025 15:06:01.214678    4045 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="fd5d698480b4cc7a860cd9468382dfbdd20377c665db7770be55ff67f3aa8c69"
I1025 15:06:01.214936    4045 kuberuntime_manager.go:479] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr"
I1025 15:06:01.219592    4045 cni.go:333] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"fd5d698480b4cc7a860cd9468382dfbdd20377c665db7770be55ff67f3aa8c69\""
I1025 15:06:01.226705    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="kube-system/rke-metrics-addon-deploy-job-x2f95" event=&{ID:62ee4c74-20c7-43be-b150-959a371c4d16 Type:ContainerStarted Data:ce2bb660d799af89c12604a72ac97937a1d05acbbb70d84eaa00b07f29422eb3}
I1025 15:06:01.226771    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="kube-system/rke-metrics-addon-deploy-job-x2f95" event=&{ID:62ee4c74-20c7-43be-b150-959a371c4d16 Type:ContainerStarted Data:89c2392f76d6a0118bff7aec4f3f707ed9f42f0f184bc2b337324d398d51559a}
E1025 15:06:01.673607    4045 cni.go:361] "Error adding pod to network" err="stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr" podSandboxID={Type:docker ID:583e0a583bcb6a4b027e6d65138dce9fd40eaa8e0172d35463089b439fea1353} podNetnsPath="/proc/7025/ns/net" networkType="calico" networkName="k8s-pod-network"
E1025 15:06:01.833291    4045 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"583e0a583bcb6a4b027e6d65138dce9fd40eaa8e0172d35463089b439fea1353\" network for pod \"coredns-autoscaler-57fd5c9bd5-bw2xr\": networkPlugin cni failed to set up pod \"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
E1025 15:06:01.833351    4045 kuberuntime_sandbox.go:68] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to set up sandbox container \"583e0a583bcb6a4b027e6d65138dce9fd40eaa8e0172d35463089b439fea1353\" network for pod \"coredns-autoscaler-57fd5c9bd5-bw2xr\": networkPlugin cni failed to set up pod \"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr"
E1025 15:06:01.833375    4045 kuberuntime_manager.go:790] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"583e0a583bcb6a4b027e6d65138dce9fd40eaa8e0172d35463089b439fea1353\" network for pod \"coredns-autoscaler-57fd5c9bd5-bw2xr\": networkPlugin cni failed to set up pod \"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr"
E1025 15:06:01.833440    4045 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system(1a21dd63-86c2-4b0a-bba0-21634e9f2618)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system(1a21dd63-86c2-4b0a-bba0-21634e9f2618)\\\": rpc error: code = Unknown desc = failed to set up sandbox container \\\"583e0a583bcb6a4b027e6d65138dce9fd40eaa8e0172d35463089b439fea1353\\\" network for pod \\\"coredns-autoscaler-57fd5c9bd5-bw2xr\\\": networkPlugin cni failed to set up pod \\\"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system\\\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr" podUID=1a21dd63-86c2-4b0a-bba0-21634e9f2618
I1025 15:06:02.237175    4045 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"583e0a583bcb6a4b027e6d65138dce9fd40eaa8e0172d35463089b439fea1353\""
I1025 15:06:02.241521    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr" event=&{ID:1a21dd63-86c2-4b0a-bba0-21634e9f2618 Type:ContainerDied Data:583e0a583bcb6a4b027e6d65138dce9fd40eaa8e0172d35463089b439fea1353}
I1025 15:06:02.241583    4045 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="583e0a583bcb6a4b027e6d65138dce9fd40eaa8e0172d35463089b439fea1353"
I1025 15:06:02.241804    4045 kuberuntime_manager.go:479] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr"
I1025 15:06:02.242891    4045 cni.go:333] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"583e0a583bcb6a4b027e6d65138dce9fd40eaa8e0172d35463089b439fea1353\""
I1025 15:06:02.247118    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="kube-system/rke-metrics-addon-deploy-job-x2f95" event=&{ID:62ee4c74-20c7-43be-b150-959a371c4d16 Type:ContainerDied Data:ce2bb660d799af89c12604a72ac97937a1d05acbbb70d84eaa00b07f29422eb3}
I1025 15:06:02.247216    4045 scope.go:111] "RemoveContainer" containerID="ce2bb660d799af89c12604a72ac97937a1d05acbbb70d84eaa00b07f29422eb3"
E1025 15:06:02.616222    4045 cni.go:361] "Error adding pod to network" err="stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr" podSandboxID={Type:docker ID:6888fae926c85d8cd602b9907aba87c56cf3053911511570f6780c27f2c38e99} podNetnsPath="/proc/7273/ns/net" networkType="calico" networkName="k8s-pod-network"
E1025 15:06:02.784671    4045 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"6888fae926c85d8cd602b9907aba87c56cf3053911511570f6780c27f2c38e99\" network for pod \"coredns-autoscaler-57fd5c9bd5-bw2xr\": networkPlugin cni failed to set up pod \"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
E1025 15:06:02.784770    4045 kuberuntime_sandbox.go:68] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to set up sandbox container \"6888fae926c85d8cd602b9907aba87c56cf3053911511570f6780c27f2c38e99\" network for pod \"coredns-autoscaler-57fd5c9bd5-bw2xr\": networkPlugin cni failed to set up pod \"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr"
E1025 15:06:02.784822    4045 kuberuntime_manager.go:790] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"6888fae926c85d8cd602b9907aba87c56cf3053911511570f6780c27f2c38e99\" network for pod \"coredns-autoscaler-57fd5c9bd5-bw2xr\": networkPlugin cni failed to set up pod \"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr"
E1025 15:06:02.784922    4045 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system(1a21dd63-86c2-4b0a-bba0-21634e9f2618)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system(1a21dd63-86c2-4b0a-bba0-21634e9f2618)\\\": rpc error: code = Unknown desc = failed to set up sandbox container \\\"6888fae926c85d8cd602b9907aba87c56cf3053911511570f6780c27f2c38e99\\\" network for pod \\\"coredns-autoscaler-57fd5c9bd5-bw2xr\\\": networkPlugin cni failed to set up pod \\\"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system\\\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr" podUID=1a21dd63-86c2-4b0a-bba0-21634e9f2618
I1025 15:06:03.183463    4045 kube_docker_client.go:347] "Stop pulling image" image="rancher/mirrored-calico-node:v3.19.2" progress="Status: Downloaded newer image for rancher/mirrored-calico-node:v3.19.2"
I1025 15:06:03.344260    4045 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"coredns-autoscaler-57fd5c9bd5-bw2xr_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"6888fae926c85d8cd602b9907aba87c56cf3053911511570f6780c27f2c38e99\""
I1025 15:06:03.348633    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr" event=&{ID:1a21dd63-86c2-4b0a-bba0-21634e9f2618 Type:ContainerDied Data:6888fae926c85d8cd602b9907aba87c56cf3053911511570f6780c27f2c38e99}
I1025 15:06:03.348692    4045 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="6888fae926c85d8cd602b9907aba87c56cf3053911511570f6780c27f2c38e99"
I1025 15:06:03.348908    4045 kuberuntime_manager.go:479] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr"
I1025 15:06:03.350887    4045 cni.go:333] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"6888fae926c85d8cd602b9907aba87c56cf3053911511570f6780c27f2c38e99\""
I1025 15:06:03.360933    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="kube-system/rke-metrics-addon-deploy-job-x2f95" event=&{ID:62ee4c74-20c7-43be-b150-959a371c4d16 Type:ContainerDied Data:89c2392f76d6a0118bff7aec4f3f707ed9f42f0f184bc2b337324d398d51559a}
I1025 15:06:03.360997    4045 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="89c2392f76d6a0118bff7aec4f3f707ed9f42f0f184bc2b337324d398d51559a"
I1025 15:06:03.361190    4045 kuberuntime_manager.go:479] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/rke-metrics-addon-deploy-job-x2f95"
I1025 15:06:03.766377    4045 kubelet.go:1939] "SyncLoop UPDATE" source="api" pods=[kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr]
2021-10-25 15:06:03.697 [INFO][7579] utils.go 108: File /var/lib/calico/mtu does not exist
2021-10-25 15:06:03.722 [INFO][7579] plugin.go 260: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {192.168.70.2-k8s-coredns--autoscaler--57fd5c9bd5--bw2xr-eth0 coredns-autoscaler-57fd5c9bd5- kube-system  1a21dd63-86c2-4b0a-bba0-21634e9f2618 608 0 2021-10-25 15:05:55 +0000 UTC <nil> <nil> map[k8s-app:coredns-autoscaler pod-template-hash:57fd5c9bd5 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns-autoscaler] map[] [] []  []} {k8s  192.168.70.2  coredns-autoscaler-57fd5c9bd5-bw2xr eth0 coredns-autoscaler [] []   [kns.kube-system ksa.kube-system.coredns-autoscaler] cali8fcfe179851  []}} ContainerID="6a72c8eb56ca8bc28c74f9b77d26905dd11271493225caf215c4ac55ab5af656" Namespace="kube-system" Pod="coredns-autoscaler-57fd5c9bd5-bw2xr" WorkloadEndpoint="192.168.70.2-k8s-coredns--autoscaler--57fd5c9bd5--bw2xr-"
2021-10-25 15:06:03.722 [INFO][7579] k8s.go 71: Extracted identifiers for CmdAddK8s ContainerID="6a72c8eb56ca8bc28c74f9b77d26905dd11271493225caf215c4ac55ab5af656" Namespace="kube-system" Pod="coredns-autoscaler-57fd5c9bd5-bw2xr" WorkloadEndpoint="192.168.70.2-k8s-coredns--autoscaler--57fd5c9bd5--bw2xr-eth0"
2021-10-25 15:06:03.723 [INFO][7579] utils.go 331: Calico CNI fetching podCidr from Kubernetes ContainerID="6a72c8eb56ca8bc28c74f9b77d26905dd11271493225caf215c4ac55ab5af656" Namespace="kube-system" Pod="coredns-autoscaler-57fd5c9bd5-bw2xr" WorkloadEndpoint="192.168.70.2-k8s-coredns--autoscaler--57fd5c9bd5--bw2xr-eth0"
2021-10-25 15:06:03.729 [INFO][7579] utils.go 337: Fetched podCidr ContainerID="6a72c8eb56ca8bc28c74f9b77d26905dd11271493225caf215c4ac55ab5af656" Namespace="kube-system" Pod="coredns-autoscaler-57fd5c9bd5-bw2xr" WorkloadEndpoint="192.168.70.2-k8s-coredns--autoscaler--57fd5c9bd5--bw2xr-eth0" podCidr="10.42.2.0/24"
2021-10-25 15:06:03.729 [INFO][7579] utils.go 340: Calico CNI passing podCidr to host-local IPAM: 10.42.2.0/24 ContainerID="6a72c8eb56ca8bc28c74f9b77d26905dd11271493225caf215c4ac55ab5af656" Namespace="kube-system" Pod="coredns-autoscaler-57fd5c9bd5-bw2xr" WorkloadEndpoint="192.168.70.2-k8s-coredns--autoscaler--57fd5c9bd5--bw2xr-eth0"
2021-10-25 15:06:03.739 [INFO][7579] k8s.go 374: Populated endpoint ContainerID="6a72c8eb56ca8bc28c74f9b77d26905dd11271493225caf215c4ac55ab5af656" Namespace="kube-system" Pod="coredns-autoscaler-57fd5c9bd5-bw2xr" WorkloadEndpoint="192.168.70.2-k8s-coredns--autoscaler--57fd5c9bd5--bw2xr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.70.2-k8s-coredns--autoscaler--57fd5c9bd5--bw2xr-eth0", GenerateName:"coredns-autoscaler-57fd5c9bd5-", Namespace:"kube-system", SelfLink:"", UID:"1a21dd63-86c2-4b0a-bba0-21634e9f2618", ResourceVersion:"608", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63770771155, loc:(*time.Location)(0x2b9b600)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"coredns-autoscaler", "pod-template-hash":"57fd5c9bd5", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns-autoscaler"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.70.2", ContainerID:"", Pod:"coredns-autoscaler-57fd5c9bd5-bw2xr", Endpoint:"eth0", ServiceAccountName:"coredns-autoscaler", IPNetworks:[]string{"10.42.2.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns-autoscaler"}, InterfaceName:"cali8fcfe179851", MAC:"", Ports:[]v3.EndpointPort(nil)}}
2021-10-25 15:06:03.739 [INFO][7579] k8s.go 375: Calico CNI using IPs: [10.42.2.2/32] ContainerID="6a72c8eb56ca8bc28c74f9b77d26905dd11271493225caf215c4ac55ab5af656" Namespace="kube-system" Pod="coredns-autoscaler-57fd5c9bd5-bw2xr" WorkloadEndpoint="192.168.70.2-k8s-coredns--autoscaler--57fd5c9bd5--bw2xr-eth0"
2021-10-25 15:06:03.739 [INFO][7579] dataplane_linux.go 66: Setting the host side veth name to cali8fcfe179851 ContainerID="6a72c8eb56ca8bc28c74f9b77d26905dd11271493225caf215c4ac55ab5af656" Namespace="kube-system" Pod="coredns-autoscaler-57fd5c9bd5-bw2xr" WorkloadEndpoint="192.168.70.2-k8s-coredns--autoscaler--57fd5c9bd5--bw2xr-eth0"
2021-10-25 15:06:03.741 [INFO][7579] dataplane_linux.go 420: Disabling IPv4 forwarding ContainerID="6a72c8eb56ca8bc28c74f9b77d26905dd11271493225caf215c4ac55ab5af656" Namespace="kube-system" Pod="coredns-autoscaler-57fd5c9bd5-bw2xr" WorkloadEndpoint="192.168.70.2-k8s-coredns--autoscaler--57fd5c9bd5--bw2xr-eth0"
2021-10-25 15:06:03.756 [INFO][7579] k8s.go 402: Added Mac, interface name, and active container ID to endpoint ContainerID="6a72c8eb56ca8bc28c74f9b77d26905dd11271493225caf215c4ac55ab5af656" Namespace="kube-system" Pod="coredns-autoscaler-57fd5c9bd5-bw2xr" WorkloadEndpoint="192.168.70.2-k8s-coredns--autoscaler--57fd5c9bd5--bw2xr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.70.2-k8s-coredns--autoscaler--57fd5c9bd5--bw2xr-eth0", GenerateName:"coredns-autoscaler-57fd5c9bd5-", Namespace:"kube-system", SelfLink:"", UID:"1a21dd63-86c2-4b0a-bba0-21634e9f2618", ResourceVersion:"608", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63770771155, loc:(*time.Location)(0x2b9b600)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"coredns-autoscaler", "pod-template-hash":"57fd5c9bd5", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns-autoscaler"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.70.2", ContainerID:"6a72c8eb56ca8bc28c74f9b77d26905dd11271493225caf215c4ac55ab5af656", Pod:"coredns-autoscaler-57fd5c9bd5-bw2xr", Endpoint:"eth0", ServiceAccountName:"coredns-autoscaler", IPNetworks:[]string{"10.42.2.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns-autoscaler"}, InterfaceName:"cali8fcfe179851", MAC:"ea:70:d5:24:5f:eb", Ports:[]v3.EndpointPort(nil)}}
2021-10-25 15:06:03.766 [INFO][7579] k8s.go 476: Wrote updated endpoint to datastore ContainerID="6a72c8eb56ca8bc28c74f9b77d26905dd11271493225caf215c4ac55ab5af656" Namespace="kube-system" Pod="coredns-autoscaler-57fd5c9bd5-bw2xr" WorkloadEndpoint="192.168.70.2-k8s-coredns--autoscaler--57fd5c9bd5--bw2xr-eth0"
I1025 15:06:03.869652    4045 kubelet_pods.go:1285] "Killing unwanted pod" podName="rke-metrics-addon-deploy-job-x2f95"
E1025 15:06:03.933706    4045 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/1a21dd63-86c2-4b0a-bba0-21634e9f2618/etc-hosts with error exit status 1" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr"
I1025 15:06:04.386233    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr" event=&{ID:1a21dd63-86c2-4b0a-bba0-21634e9f2618 Type:ContainerStarted Data:6a72c8eb56ca8bc28c74f9b77d26905dd11271493225caf215c4ac55ab5af656}
I1025 15:06:04.396716    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="kube-system/canal-dt6sf" event=&{ID:18413886-6b2a-4b22-9fd6-2bb45108ff1e Type:ContainerStarted Data:bfdb83331a0f3531b9c0d833ce525ba44a80720f68aa59ef313b50f90e26ffac}
I1025 15:06:04.440759    4045 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62ee4c74-20c7-43be-b150-959a371c4d16-config-volume\") pod \"62ee4c74-20c7-43be-b150-959a371c4d16\" (UID: \"62ee4c74-20c7-43be-b150-959a371c4d16\") "
I1025 15:06:04.440820    4045 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84cxx\" (UniqueName: \"kubernetes.io/projected/62ee4c74-20c7-43be-b150-959a371c4d16-kube-api-access-84cxx\") pod \"62ee4c74-20c7-43be-b150-959a371c4d16\" (UID: \"62ee4c74-20c7-43be-b150-959a371c4d16\") "
W1025 15:06:04.441413    4045 empty_dir.go:520] Warning: Failed to clear quota on /var/lib/kubelet/pods/62ee4c74-20c7-43be-b150-959a371c4d16/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
I1025 15:06:04.441647    4045 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62ee4c74-20c7-43be-b150-959a371c4d16-config-volume" (OuterVolumeSpecName: "config-volume") pod "62ee4c74-20c7-43be-b150-959a371c4d16" (UID: "62ee4c74-20c7-43be-b150-959a371c4d16"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
I1025 15:06:04.445291    4045 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62ee4c74-20c7-43be-b150-959a371c4d16-kube-api-access-84cxx" (OuterVolumeSpecName: "kube-api-access-84cxx") pod "62ee4c74-20c7-43be-b150-959a371c4d16" (UID: "62ee4c74-20c7-43be-b150-959a371c4d16"). InnerVolumeSpecName "kube-api-access-84cxx". PluginName "kubernetes.io/projected", VolumeGidValue ""
I1025 15:06:04.541007    4045 reconciler.go:319] "Volume detached for volume \"kube-api-access-84cxx\" (UniqueName: \"kubernetes.io/projected/62ee4c74-20c7-43be-b150-959a371c4d16-kube-api-access-84cxx\") on node \"192.168.70.2\" DevicePath \"\""
I1025 15:06:04.541054    4045 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62ee4c74-20c7-43be-b150-959a371c4d16-config-volume\") on node \"192.168.70.2\" DevicePath \"\""
I1025 15:06:06.037714    4045 kubelet.go:1932] "SyncLoop ADD" source="api" pods=[kube-system/rke-ingress-controller-deploy-job-t2mjs]
I1025 15:06:06.037777    4045 topology_manager.go:187] "Topology Admit Handler"
I1025 15:06:06.152660    4045 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg24p\" (UniqueName: \"kubernetes.io/projected/710c2bda-08c4-4c37-ab92-c8637ab10d78-kube-api-access-hg24p\") pod \"rke-ingress-controller-deploy-job-t2mjs\" (UID: \"710c2bda-08c4-4c37-ab92-c8637ab10d78\") "
I1025 15:06:06.152724    4045 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/710c2bda-08c4-4c37-ab92-c8637ab10d78-config-volume\") pod \"rke-ingress-controller-deploy-job-t2mjs\" (UID: \"710c2bda-08c4-4c37-ab92-c8637ab10d78\") "
I1025 15:06:06.253665    4045 reconciler.go:269] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/710c2bda-08c4-4c37-ab92-c8637ab10d78-config-volume\") pod \"rke-ingress-controller-deploy-job-t2mjs\" (UID: \"710c2bda-08c4-4c37-ab92-c8637ab10d78\") "
I1025 15:06:06.253714    4045 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-hg24p\" (UniqueName: \"kubernetes.io/projected/710c2bda-08c4-4c37-ab92-c8637ab10d78-kube-api-access-hg24p\") pod \"rke-ingress-controller-deploy-job-t2mjs\" (UID: \"710c2bda-08c4-4c37-ab92-c8637ab10d78\") "
I1025 15:06:06.254372    4045 operation_generator.go:698] MountVolume.SetUp succeeded for volume "config-volume" (UniqueName: "kubernetes.io/configmap/710c2bda-08c4-4c37-ab92-c8637ab10d78-config-volume") pod "rke-ingress-controller-deploy-job-t2mjs" (UID: "710c2bda-08c4-4c37-ab92-c8637ab10d78") 
I1025 15:06:06.263631    4045 operation_generator.go:698] MountVolume.SetUp succeeded for volume "kube-api-access-hg24p" (UniqueName: "kubernetes.io/projected/710c2bda-08c4-4c37-ab92-c8637ab10d78-kube-api-access-hg24p") pod "rke-ingress-controller-deploy-job-t2mjs" (UID: "710c2bda-08c4-4c37-ab92-c8637ab10d78") 
I1025 15:06:06.360758    4045 kuberuntime_manager.go:460] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/rke-ingress-controller-deploy-job-t2mjs"
I1025 15:06:06.499900    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="kube-system/rke-ingress-controller-deploy-job-t2mjs" event=&{ID:710c2bda-08c4-4c37-ab92-c8637ab10d78 Type:ContainerDied Data:0a75a7331de136a636da706d4efd97981f440f3bcd7ada777826a2c100a74fcc}
I1025 15:06:06.499966    4045 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="0a75a7331de136a636da706d4efd97981f440f3bcd7ada777826a2c100a74fcc"
I1025 15:06:07.036078    4045 kubelet.go:1932] "SyncLoop ADD" source="api" pods=[ingress-nginx/nginx-ingress-controller-855pv]
I1025 15:06:07.036145    4045 topology_manager.go:187] "Topology Admit Handler"
W1025 15:06:07.059363    4045 container.go:586] Failed to update stats for container "/kubepods/besteffort/pod169e628d-46b0-478d-a133-39df0dbcd4ad": /sys/fs/cgroup/cpuset/kubepods/besteffort/pod169e628d-46b0-478d-a133-39df0dbcd4ad/cpuset.cpus found to be empty, continuing to push stats
I1025 15:06:07.117862    4045 kubelet.go:1932] "SyncLoop ADD" source="api" pods=[ingress-nginx/ingress-nginx-admission-create-cwm6l]
I1025 15:06:07.117917    4045 topology_manager.go:187] "Topology Admit Handler"
I1025 15:06:07.160827    4045 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/169e628d-46b0-478d-a133-39df0dbcd4ad-webhook-cert\") pod \"nginx-ingress-controller-855pv\" (UID: \"169e628d-46b0-478d-a133-39df0dbcd4ad\") "
I1025 15:06:07.160879    4045 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kklmd\" (UniqueName: \"kubernetes.io/projected/169e628d-46b0-478d-a133-39df0dbcd4ad-kube-api-access-kklmd\") pod \"nginx-ingress-controller-855pv\" (UID: \"169e628d-46b0-478d-a133-39df0dbcd4ad\") "
I1025 15:06:07.261192    4045 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-kklmd\" (UniqueName: \"kubernetes.io/projected/169e628d-46b0-478d-a133-39df0dbcd4ad-kube-api-access-kklmd\") pod \"nginx-ingress-controller-855pv\" (UID: \"169e628d-46b0-478d-a133-39df0dbcd4ad\") "
I1025 15:06:07.261201    4045 kube_docker_client.go:347] "Stop pulling image" image="rancher/mirrored-coreos-flannel:v0.14.0" progress="Status: Downloaded newer image for rancher/mirrored-coreos-flannel:v0.14.0"
I1025 15:06:07.261235    4045 reconciler.go:269] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/169e628d-46b0-478d-a133-39df0dbcd4ad-webhook-cert\") pod \"nginx-ingress-controller-855pv\" (UID: \"169e628d-46b0-478d-a133-39df0dbcd4ad\") "
I1025 15:06:07.261259    4045 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glz2s\" (UniqueName: \"kubernetes.io/projected/b61ee2eb-f71f-43f6-b643-4fa34cee4c46-kube-api-access-glz2s\") pod \"ingress-nginx-admission-create-cwm6l\" (UID: \"b61ee2eb-f71f-43f6-b643-4fa34cee4c46\") "
E1025 15:06:07.261378    4045 secret.go:195] Couldn't get secret ingress-nginx/ingress-nginx-admission: secret "ingress-nginx-admission" not found
E1025 15:06:07.261474    4045 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/169e628d-46b0-478d-a133-39df0dbcd4ad-webhook-cert podName:169e628d-46b0-478d-a133-39df0dbcd4ad nodeName:}" failed. No retries permitted until 2021-10-25 15:06:07.761441871 +0000 UTC m=+34.775012077 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/169e628d-46b0-478d-a133-39df0dbcd4ad-webhook-cert\") pod \"nginx-ingress-controller-855pv\" (UID: \"169e628d-46b0-478d-a133-39df0dbcd4ad\") : secret \"ingress-nginx-admission\" not found"
I1025 15:06:07.270464    4045 operation_generator.go:698] MountVolume.SetUp succeeded for volume "kube-api-access-kklmd" (UniqueName: "kubernetes.io/projected/169e628d-46b0-478d-a133-39df0dbcd4ad-kube-api-access-kklmd") pod "nginx-ingress-controller-855pv" (UID: "169e628d-46b0-478d-a133-39df0dbcd4ad") 
I1025 15:06:07.361806    4045 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-glz2s\" (UniqueName: \"kubernetes.io/projected/b61ee2eb-f71f-43f6-b643-4fa34cee4c46-kube-api-access-glz2s\") pod \"ingress-nginx-admission-create-cwm6l\" (UID: \"b61ee2eb-f71f-43f6-b643-4fa34cee4c46\") "
I1025 15:06:07.375438    4045 operation_generator.go:698] MountVolume.SetUp succeeded for volume "kube-api-access-glz2s" (UniqueName: "kubernetes.io/projected/b61ee2eb-f71f-43f6-b643-4fa34cee4c46-kube-api-access-glz2s") pod "ingress-nginx-admission-create-cwm6l" (UID: "b61ee2eb-f71f-43f6-b643-4fa34cee4c46") 
I1025 15:06:07.437189    4045 kuberuntime_manager.go:460] "No sandbox for pod can be found. Need to start a new one" pod="ingress-nginx/ingress-nginx-admission-create-cwm6l"
I1025 15:06:07.512544    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="kube-system/rke-ingress-controller-deploy-job-t2mjs" event=&{ID:710c2bda-08c4-4c37-ab92-c8637ab10d78 Type:ContainerStarted Data:0a75a7331de136a636da706d4efd97981f440f3bcd7ada777826a2c100a74fcc}
I1025 15:06:07.512665    4045 scope.go:111] "RemoveContainer" containerID="435d1bb9c2788ad5f8d8e83fc5e8ddd6789e9bc5109f171d11357da64be81190"
I1025 15:06:07.512747    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="kube-system/rke-ingress-controller-deploy-job-t2mjs" event=&{ID:710c2bda-08c4-4c37-ab92-c8637ab10d78 Type:ContainerDied Data:435d1bb9c2788ad5f8d8e83fc5e8ddd6789e9bc5109f171d11357da64be81190}
I1025 15:06:07.644257    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="ingress-nginx/ingress-nginx-admission-create-cwm6l" event=&{ID:b61ee2eb-f71f-43f6-b643-4fa34cee4c46 Type:ContainerDied Data:a8b7b561e1f9dda51e1d24d2175a602fa41d0c6901423f32cbb36056d9c1edde}
I1025 15:06:07.644321    4045 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="a8b7b561e1f9dda51e1d24d2175a602fa41d0c6901423f32cbb36056d9c1edde"
I1025 15:06:07.659851    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="kube-system/canal-dt6sf" event=&{ID:18413886-6b2a-4b22-9fd6-2bb45108ff1e Type:ContainerStarted Data:ffa7a37b993e6bbca0565dbc7013f197f6aab3bb29fc6a5ea52c28c6b6e85e6b}
I1025 15:06:07.660705    4045 kubelet.go:2026] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/canal-dt6sf"
I1025 15:06:07.660992    4045 kubelet.go:2026] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/canal-dt6sf"
I1025 15:06:07.663338    4045 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hg24p\" (UniqueName: \"kubernetes.io/projected/710c2bda-08c4-4c37-ab92-c8637ab10d78-kube-api-access-hg24p\") pod \"710c2bda-08c4-4c37-ab92-c8637ab10d78\" (UID: \"710c2bda-08c4-4c37-ab92-c8637ab10d78\") "
I1025 15:06:07.663374    4045 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/710c2bda-08c4-4c37-ab92-c8637ab10d78-config-volume\") pod \"710c2bda-08c4-4c37-ab92-c8637ab10d78\" (UID: \"710c2bda-08c4-4c37-ab92-c8637ab10d78\") "
W1025 15:06:07.664710    4045 empty_dir.go:520] Warning: Failed to clear quota on /var/lib/kubelet/pods/710c2bda-08c4-4c37-ab92-c8637ab10d78/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
I1025 15:06:07.664806    4045 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/710c2bda-08c4-4c37-ab92-c8637ab10d78-config-volume" (OuterVolumeSpecName: "config-volume") pod "710c2bda-08c4-4c37-ab92-c8637ab10d78" (UID: "710c2bda-08c4-4c37-ab92-c8637ab10d78"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
I1025 15:06:07.669435    4045 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/710c2bda-08c4-4c37-ab92-c8637ab10d78-kube-api-access-hg24p" (OuterVolumeSpecName: "kube-api-access-hg24p") pod "710c2bda-08c4-4c37-ab92-c8637ab10d78" (UID: "710c2bda-08c4-4c37-ab92-c8637ab10d78"). InnerVolumeSpecName "kube-api-access-hg24p". PluginName "kubernetes.io/projected", VolumeGidValue ""
I1025 15:06:07.764775    4045 reconciler.go:269] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/169e628d-46b0-478d-a133-39df0dbcd4ad-webhook-cert\") pod \"nginx-ingress-controller-855pv\" (UID: \"169e628d-46b0-478d-a133-39df0dbcd4ad\") "
I1025 15:06:07.764828    4045 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/710c2bda-08c4-4c37-ab92-c8637ab10d78-config-volume\") on node \"192.168.70.2\" DevicePath \"\""
I1025 15:06:07.764841    4045 reconciler.go:319] "Volume detached for volume \"kube-api-access-hg24p\" (UniqueName: \"kubernetes.io/projected/710c2bda-08c4-4c37-ab92-c8637ab10d78-kube-api-access-hg24p\") on node \"192.168.70.2\" DevicePath \"\""
E1025 15:06:07.765114    4045 secret.go:195] Couldn't get secret ingress-nginx/ingress-nginx-admission: secret "ingress-nginx-admission" not found
E1025 15:06:07.765195    4045 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/169e628d-46b0-478d-a133-39df0dbcd4ad-webhook-cert podName:169e628d-46b0-478d-a133-39df0dbcd4ad nodeName:}" failed. No retries permitted until 2021-10-25 15:06:08.765174499 +0000 UTC m=+35.778744675 (durationBeforeRetry 1s). Error: "MountVolume.SetUp failed for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/169e628d-46b0-478d-a133-39df0dbcd4ad-webhook-cert\") pod \"nginx-ingress-controller-855pv\" (UID: \"169e628d-46b0-478d-a133-39df0dbcd4ad\") : secret \"ingress-nginx-admission\" not found"
I1025 15:06:07.782133    4045 kubelet.go:1939] "SyncLoop UPDATE" source="api" pods=[ingress-nginx/ingress-nginx-admission-create-cwm6l]
2021-10-25 15:06:07.722 [INFO][8126] plugin.go 260: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {192.168.70.2-k8s-ingress--nginx--admission--create--cwm6l-eth0 ingress-nginx-admission-create- ingress-nginx  b61ee2eb-f71f-43f6-b643-4fa34cee4c46 842 0 2021-10-25 15:06:07 +0000 UTC <nil> <nil> map[app.kubernetes.io/component:admission-webhook app.kubernetes.io/instance:ingress-nginx app.kubernetes.io/name:ingress-nginx app.kubernetes.io/version:0.48.1 controller-uid:c4a103ab-1006-4247-83b2-5141be23c1c4 job-name:ingress-nginx-admission-create projectcalico.org/namespace:ingress-nginx projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:ingress-nginx-admission] map[] [] []  []} {k8s  192.168.70.2  ingress-nginx-admission-create-cwm6l eth0 ingress-nginx-admission [] []   [kns.ingress-nginx ksa.ingress-nginx.ingress-nginx-admission] cali421ba77593d  []}} ContainerID="a8b7b561e1f9dda51e1d24d2175a602fa41d0c6901423f32cbb36056d9c1edde" Namespace="ingress-nginx" Pod="ingress-nginx-admission-create-cwm6l" WorkloadEndpoint="192.168.70.2-k8s-ingress--nginx--admission--create--cwm6l-"
2021-10-25 15:06:07.722 [INFO][8126] k8s.go 71: Extracted identifiers for CmdAddK8s ContainerID="a8b7b561e1f9dda51e1d24d2175a602fa41d0c6901423f32cbb36056d9c1edde" Namespace="ingress-nginx" Pod="ingress-nginx-admission-create-cwm6l" WorkloadEndpoint="192.168.70.2-k8s-ingress--nginx--admission--create--cwm6l-eth0"
2021-10-25 15:06:07.723 [INFO][8126] utils.go 331: Calico CNI fetching podCidr from Kubernetes ContainerID="a8b7b561e1f9dda51e1d24d2175a602fa41d0c6901423f32cbb36056d9c1edde" Namespace="ingress-nginx" Pod="ingress-nginx-admission-create-cwm6l" WorkloadEndpoint="192.168.70.2-k8s-ingress--nginx--admission--create--cwm6l-eth0"
2021-10-25 15:06:07.727 [INFO][8126] utils.go 337: Fetched podCidr ContainerID="a8b7b561e1f9dda51e1d24d2175a602fa41d0c6901423f32cbb36056d9c1edde" Namespace="ingress-nginx" Pod="ingress-nginx-admission-create-cwm6l" WorkloadEndpoint="192.168.70.2-k8s-ingress--nginx--admission--create--cwm6l-eth0" podCidr="10.42.2.0/24"
2021-10-25 15:06:07.727 [INFO][8126] utils.go 340: Calico CNI passing podCidr to host-local IPAM: 10.42.2.0/24 ContainerID="a8b7b561e1f9dda51e1d24d2175a602fa41d0c6901423f32cbb36056d9c1edde" Namespace="ingress-nginx" Pod="ingress-nginx-admission-create-cwm6l" WorkloadEndpoint="192.168.70.2-k8s-ingress--nginx--admission--create--cwm6l-eth0"
2021-10-25 15:06:07.734 [INFO][8126] k8s.go 374: Populated endpoint ContainerID="a8b7b561e1f9dda51e1d24d2175a602fa41d0c6901423f32cbb36056d9c1edde" Namespace="ingress-nginx" Pod="ingress-nginx-admission-create-cwm6l" WorkloadEndpoint="192.168.70.2-k8s-ingress--nginx--admission--create--cwm6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.70.2-k8s-ingress--nginx--admission--create--cwm6l-eth0", GenerateName:"ingress-nginx-admission-create-", Namespace:"ingress-nginx", SelfLink:"", UID:"b61ee2eb-f71f-43f6-b643-4fa34cee4c46", ResourceVersion:"842", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63770771167, loc:(*time.Location)(0x2b9b600)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/component":"admission-webhook", "app.kubernetes.io/instance":"ingress-nginx", "app.kubernetes.io/name":"ingress-nginx", "app.kubernetes.io/version":"0.48.1", "controller-uid":"c4a103ab-1006-4247-83b2-5141be23c1c4", "job-name":"ingress-nginx-admission-create", "projectcalico.org/namespace":"ingress-nginx", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"ingress-nginx-admission"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.70.2", ContainerID:"", Pod:"ingress-nginx-admission-create-cwm6l", Endpoint:"eth0", ServiceAccountName:"ingress-nginx-admission", IPNetworks:[]string{"10.42.2.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.ingress-nginx", "ksa.ingress-nginx.ingress-nginx-admission"}, InterfaceName:"cali421ba77593d", MAC:"", Ports:[]v3.EndpointPort(nil)}}
2021-10-25 15:06:07.735 [INFO][8126] k8s.go 375: Calico CNI using IPs: [10.42.2.3/32] ContainerID="a8b7b561e1f9dda51e1d24d2175a602fa41d0c6901423f32cbb36056d9c1edde" Namespace="ingress-nginx" Pod="ingress-nginx-admission-create-cwm6l" WorkloadEndpoint="192.168.70.2-k8s-ingress--nginx--admission--create--cwm6l-eth0"
2021-10-25 15:06:07.735 [INFO][8126] dataplane_linux.go 66: Setting the host side veth name to cali421ba77593d ContainerID="a8b7b561e1f9dda51e1d24d2175a602fa41d0c6901423f32cbb36056d9c1edde" Namespace="ingress-nginx" Pod="ingress-nginx-admission-create-cwm6l" WorkloadEndpoint="192.168.70.2-k8s-ingress--nginx--admission--create--cwm6l-eth0"
2021-10-25 15:06:07.736 [INFO][8126] dataplane_linux.go 420: Disabling IPv4 forwarding ContainerID="a8b7b561e1f9dda51e1d24d2175a602fa41d0c6901423f32cbb36056d9c1edde" Namespace="ingress-nginx" Pod="ingress-nginx-admission-create-cwm6l" WorkloadEndpoint="192.168.70.2-k8s-ingress--nginx--admission--create--cwm6l-eth0"
2021-10-25 15:06:07.771 [INFO][8126] k8s.go 402: Added Mac, interface name, and active container ID to endpoint ContainerID="a8b7b561e1f9dda51e1d24d2175a602fa41d0c6901423f32cbb36056d9c1edde" Namespace="ingress-nginx" Pod="ingress-nginx-admission-create-cwm6l" WorkloadEndpoint="192.168.70.2-k8s-ingress--nginx--admission--create--cwm6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.70.2-k8s-ingress--nginx--admission--create--cwm6l-eth0", GenerateName:"ingress-nginx-admission-create-", Namespace:"ingress-nginx", SelfLink:"", UID:"b61ee2eb-f71f-43f6-b643-4fa34cee4c46", ResourceVersion:"842", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63770771167, loc:(*time.Location)(0x2b9b600)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/component":"admission-webhook", "app.kubernetes.io/instance":"ingress-nginx", "app.kubernetes.io/name":"ingress-nginx", "app.kubernetes.io/version":"0.48.1", "controller-uid":"c4a103ab-1006-4247-83b2-5141be23c1c4", "job-name":"ingress-nginx-admission-create", "projectcalico.org/namespace":"ingress-nginx", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"ingress-nginx-admission"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.70.2", ContainerID:"a8b7b561e1f9dda51e1d24d2175a602fa41d0c6901423f32cbb36056d9c1edde", Pod:"ingress-nginx-admission-create-cwm6l", Endpoint:"eth0", ServiceAccountName:"ingress-nginx-admission", IPNetworks:[]string{"10.42.2.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.ingress-nginx", "ksa.ingress-nginx.ingress-nginx-admission"}, InterfaceName:"cali421ba77593d", MAC:"6e:ea:5c:a8:5e:fb", Ports:[]v3.EndpointPort(nil)}}
2021-10-25 15:06:07.781 [INFO][8126] k8s.go 476: Wrote updated endpoint to datastore ContainerID="a8b7b561e1f9dda51e1d24d2175a602fa41d0c6901423f32cbb36056d9c1edde" Namespace="ingress-nginx" Pod="ingress-nginx-admission-create-cwm6l" WorkloadEndpoint="192.168.70.2-k8s-ingress--nginx--admission--create--cwm6l-eth0"
I1025 15:06:08.677922    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="ingress-nginx/ingress-nginx-admission-create-cwm6l" event=&{ID:b61ee2eb-f71f-43f6-b643-4fa34cee4c46 Type:ContainerStarted Data:a8b7b561e1f9dda51e1d24d2175a602fa41d0c6901423f32cbb36056d9c1edde}
I1025 15:06:08.682916    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="kube-system/rke-ingress-controller-deploy-job-t2mjs" event=&{ID:710c2bda-08c4-4c37-ab92-c8637ab10d78 Type:ContainerDied Data:0a75a7331de136a636da706d4efd97981f440f3bcd7ada777826a2c100a74fcc}
I1025 15:06:08.684031    4045 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="0a75a7331de136a636da706d4efd97981f440f3bcd7ada777826a2c100a74fcc"
I1025 15:06:08.776088    4045 reconciler.go:269] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/169e628d-46b0-478d-a133-39df0dbcd4ad-webhook-cert\") pod \"nginx-ingress-controller-855pv\" (UID: \"169e628d-46b0-478d-a133-39df0dbcd4ad\") "
E1025 15:06:08.776248    4045 secret.go:195] Couldn't get secret ingress-nginx/ingress-nginx-admission: secret "ingress-nginx-admission" not found
E1025 15:06:08.776351    4045 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/169e628d-46b0-478d-a133-39df0dbcd4ad-webhook-cert podName:169e628d-46b0-478d-a133-39df0dbcd4ad nodeName:}" failed. No retries permitted until 2021-10-25 15:06:10.776316801 +0000 UTC m=+37.789887017 (durationBeforeRetry 2s). Error: "MountVolume.SetUp failed for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/169e628d-46b0-478d-a133-39df0dbcd4ad-webhook-cert\") pod \"nginx-ingress-controller-855pv\" (UID: \"169e628d-46b0-478d-a133-39df0dbcd4ad\") : secret \"ingress-nginx-admission\" not found"
E1025 15:06:10.791371    4045 secret.go:195] Couldn't get secret ingress-nginx/ingress-nginx-admission: secret "ingress-nginx-admission" not found
E1025 15:06:10.791496    4045 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/169e628d-46b0-478d-a133-39df0dbcd4ad-webhook-cert podName:169e628d-46b0-478d-a133-39df0dbcd4ad nodeName:}" failed. No retries permitted until 2021-10-25 15:06:14.791458256 +0000 UTC m=+41.805028482 (durationBeforeRetry 4s). Error: "MountVolume.SetUp failed for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/169e628d-46b0-478d-a133-39df0dbcd4ad-webhook-cert\") pod \"nginx-ingress-controller-855pv\" (UID: \"169e628d-46b0-478d-a133-39df0dbcd4ad\") : secret \"ingress-nginx-admission\" not found"
I1025 15:06:10.791232    4045 reconciler.go:269] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/169e628d-46b0-478d-a133-39df0dbcd4ad-webhook-cert\") pod \"nginx-ingress-controller-855pv\" (UID: \"169e628d-46b0-478d-a133-39df0dbcd4ad\") "
I1025 15:06:10.815317    4045 kube_docker_client.go:347] "Stop pulling image" image="rancher/mirrored-cluster-proportional-autoscaler:1.8.3" progress="Status: Downloaded newer image for rancher/mirrored-cluster-proportional-autoscaler:1.8.3"
I1025 15:06:11.247719    4045 kubelet.go:1932] "SyncLoop ADD" source="api" pods=[kube-system/coredns-685d6d555d-9j7xt]
I1025 15:06:11.247814    4045 topology_manager.go:187] "Topology Admit Handler"
I1025 15:06:11.394922    4045 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bdb1ee7-2200-45a6-87f0-64b0ec64c87e-config-volume\") pod \"coredns-685d6d555d-9j7xt\" (UID: \"2bdb1ee7-2200-45a6-87f0-64b0ec64c87e\") "
I1025 15:06:11.394961    4045 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnrzz\" (UniqueName: \"kubernetes.io/projected/2bdb1ee7-2200-45a6-87f0-64b0ec64c87e-kube-api-access-hnrzz\") pod \"coredns-685d6d555d-9j7xt\" (UID: \"2bdb1ee7-2200-45a6-87f0-64b0ec64c87e\") "
I1025 15:06:11.495745    4045 reconciler.go:269] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bdb1ee7-2200-45a6-87f0-64b0ec64c87e-config-volume\") pod \"coredns-685d6d555d-9j7xt\" (UID: \"2bdb1ee7-2200-45a6-87f0-64b0ec64c87e\") "
I1025 15:06:11.495842    4045 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-hnrzz\" (UniqueName: \"kubernetes.io/projected/2bdb1ee7-2200-45a6-87f0-64b0ec64c87e-kube-api-access-hnrzz\") pod \"coredns-685d6d555d-9j7xt\" (UID: \"2bdb1ee7-2200-45a6-87f0-64b0ec64c87e\") "
I1025 15:06:11.496420    4045 operation_generator.go:698] MountVolume.SetUp succeeded for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2bdb1ee7-2200-45a6-87f0-64b0ec64c87e-config-volume") pod "coredns-685d6d555d-9j7xt" (UID: "2bdb1ee7-2200-45a6-87f0-64b0ec64c87e") 
I1025 15:06:11.511234    4045 operation_generator.go:698] MountVolume.SetUp succeeded for volume "kube-api-access-hnrzz" (UniqueName: "kubernetes.io/projected/2bdb1ee7-2200-45a6-87f0-64b0ec64c87e-kube-api-access-hnrzz") pod "coredns-685d6d555d-9j7xt" (UID: "2bdb1ee7-2200-45a6-87f0-64b0ec64c87e") 
I1025 15:06:11.576807    4045 kuberuntime_manager.go:460] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/coredns-685d6d555d-9j7xt"
I1025 15:06:11.806081    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="kube-system/coredns-685d6d555d-9j7xt" event=&{ID:2bdb1ee7-2200-45a6-87f0-64b0ec64c87e Type:ContainerDied Data:f34c3374ebf2db2811509769a878f55e9a767f7726e1a73eb2fb5af81552a50d}
I1025 15:06:11.806145    4045 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="f34c3374ebf2db2811509769a878f55e9a767f7726e1a73eb2fb5af81552a50d"
I1025 15:06:11.821989    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="kube-system/coredns-autoscaler-57fd5c9bd5-bw2xr" event=&{ID:1a21dd63-86c2-4b0a-bba0-21634e9f2618 Type:ContainerStarted Data:0cc608818923c1152e994dccf97253105f60e09ad09935fb8ee1693f724d573c}
I1025 15:06:11.932502    4045 kubelet.go:1939] "SyncLoop UPDATE" source="api" pods=[kube-system/coredns-685d6d555d-9j7xt]
2021-10-25 15:06:11.873 [INFO][8414] plugin.go 260: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {192.168.70.2-k8s-coredns--685d6d555d--9j7xt-eth0 coredns-685d6d555d- kube-system  2bdb1ee7-2200-45a6-87f0-64b0ec64c87e 909 0 2021-10-25 15:06:11 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:685d6d555d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] []  []} {k8s  192.168.70.2  coredns-685d6d555d-9j7xt eth0 coredns [] []   [kns.kube-system ksa.kube-system.coredns] cali010c2a6679e  [{dns UDP 53} {dns-tcp TCP 53} {metrics TCP 9153}]}} ContainerID="f34c3374ebf2db2811509769a878f55e9a767f7726e1a73eb2fb5af81552a50d" Namespace="kube-system" Pod="coredns-685d6d555d-9j7xt" WorkloadEndpoint="192.168.70.2-k8s-coredns--685d6d555d--9j7xt-"
2021-10-25 15:06:11.873 [INFO][8414] k8s.go 71: Extracted identifiers for CmdAddK8s ContainerID="f34c3374ebf2db2811509769a878f55e9a767f7726e1a73eb2fb5af81552a50d" Namespace="kube-system" Pod="coredns-685d6d555d-9j7xt" WorkloadEndpoint="192.168.70.2-k8s-coredns--685d6d555d--9j7xt-eth0"
2021-10-25 15:06:11.874 [INFO][8414] utils.go 331: Calico CNI fetching podCidr from Kubernetes ContainerID="f34c3374ebf2db2811509769a878f55e9a767f7726e1a73eb2fb5af81552a50d" Namespace="kube-system" Pod="coredns-685d6d555d-9j7xt" WorkloadEndpoint="192.168.70.2-k8s-coredns--685d6d555d--9j7xt-eth0"
2021-10-25 15:06:11.881 [INFO][8414] utils.go 337: Fetched podCidr ContainerID="f34c3374ebf2db2811509769a878f55e9a767f7726e1a73eb2fb5af81552a50d" Namespace="kube-system" Pod="coredns-685d6d555d-9j7xt" WorkloadEndpoint="192.168.70.2-k8s-coredns--685d6d555d--9j7xt-eth0" podCidr="10.42.2.0/24"
2021-10-25 15:06:11.881 [INFO][8414] utils.go 340: Calico CNI passing podCidr to host-local IPAM: 10.42.2.0/24 ContainerID="f34c3374ebf2db2811509769a878f55e9a767f7726e1a73eb2fb5af81552a50d" Namespace="kube-system" Pod="coredns-685d6d555d-9j7xt" WorkloadEndpoint="192.168.70.2-k8s-coredns--685d6d555d--9j7xt-eth0"
2021-10-25 15:06:11.893 [INFO][8414] k8s.go 374: Populated endpoint ContainerID="f34c3374ebf2db2811509769a878f55e9a767f7726e1a73eb2fb5af81552a50d" Namespace="kube-system" Pod="coredns-685d6d555d-9j7xt" WorkloadEndpoint="192.168.70.2-k8s-coredns--685d6d555d--9j7xt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.70.2-k8s-coredns--685d6d555d--9j7xt-eth0", GenerateName:"coredns-685d6d555d-", Namespace:"kube-system", SelfLink:"", UID:"2bdb1ee7-2200-45a6-87f0-64b0ec64c87e", ResourceVersion:"909", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63770771171, loc:(*time.Location)(0x2b9b600)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"685d6d555d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.70.2", ContainerID:"", Pod:"coredns-685d6d555d-9j7xt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"10.42.2.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali010c2a6679e", MAC:"", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35}, v3.EndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35}, v3.EndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1}}}}
2021-10-25 15:06:11.893 [INFO][8414] k8s.go 375: Calico CNI using IPs: [10.42.2.4/32] ContainerID="f34c3374ebf2db2811509769a878f55e9a767f7726e1a73eb2fb5af81552a50d" Namespace="kube-system" Pod="coredns-685d6d555d-9j7xt" WorkloadEndpoint="192.168.70.2-k8s-coredns--685d6d555d--9j7xt-eth0"
2021-10-25 15:06:11.893 [INFO][8414] dataplane_linux.go 66: Setting the host side veth name to cali010c2a6679e ContainerID="f34c3374ebf2db2811509769a878f55e9a767f7726e1a73eb2fb5af81552a50d" Namespace="kube-system" Pod="coredns-685d6d555d-9j7xt" WorkloadEndpoint="192.168.70.2-k8s-coredns--685d6d555d--9j7xt-eth0"
2021-10-25 15:06:11.894 [INFO][8414] dataplane_linux.go 420: Disabling IPv4 forwarding ContainerID="f34c3374ebf2db2811509769a878f55e9a767f7726e1a73eb2fb5af81552a50d" Namespace="kube-system" Pod="coredns-685d6d555d-9j7xt" WorkloadEndpoint="192.168.70.2-k8s-coredns--685d6d555d--9j7xt-eth0"
2021-10-25 15:06:11.917 [INFO][8414] k8s.go 402: Added Mac, interface name, and active container ID to endpoint ContainerID="f34c3374ebf2db2811509769a878f55e9a767f7726e1a73eb2fb5af81552a50d" Namespace="kube-system" Pod="coredns-685d6d555d-9j7xt" WorkloadEndpoint="192.168.70.2-k8s-coredns--685d6d555d--9j7xt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.70.2-k8s-coredns--685d6d555d--9j7xt-eth0", GenerateName:"coredns-685d6d555d-", Namespace:"kube-system", SelfLink:"", UID:"2bdb1ee7-2200-45a6-87f0-64b0ec64c87e", ResourceVersion:"909", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63770771171, loc:(*time.Location)(0x2b9b600)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"685d6d555d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.70.2", ContainerID:"f34c3374ebf2db2811509769a878f55e9a767f7726e1a73eb2fb5af81552a50d", Pod:"coredns-685d6d555d-9j7xt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"10.42.2.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali010c2a6679e", MAC:"96:55:b7:c1:de:72", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35}, v3.EndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35}, v3.EndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1}}}}
2021-10-25 15:06:11.930 [INFO][8414] k8s.go 476: Wrote updated endpoint to datastore ContainerID="f34c3374ebf2db2811509769a878f55e9a767f7726e1a73eb2fb5af81552a50d" Namespace="kube-system" Pod="coredns-685d6d555d-9j7xt" WorkloadEndpoint="192.168.70.2-k8s-coredns--685d6d555d--9j7xt-eth0"
I1025 15:06:12.841964    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="kube-system/coredns-685d6d555d-9j7xt" event=&{ID:2bdb1ee7-2200-45a6-87f0-64b0ec64c87e Type:ContainerStarted Data:f34c3374ebf2db2811509769a878f55e9a767f7726e1a73eb2fb5af81552a50d}
E1025 15:06:13.950783    4045 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/besteffort/pod169e628d-46b0-478d-a133-39df0dbcd4ad\": RecentStats: unable to find data in memory cache]"
E1025 15:06:13.951853    4045 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/b61ee2eb-f71f-43f6-b643-4fa34cee4c46/etc-hosts with error exit status 1" pod="ingress-nginx/ingress-nginx-admission-create-cwm6l"
E1025 15:06:13.956598    4045 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/2bdb1ee7-2200-45a6-87f0-64b0ec64c87e/etc-hosts with error exit status 1" pod="kube-system/coredns-685d6d555d-9j7xt"
I1025 15:06:14.130526    4045 kube_docker_client.go:347] "Stop pulling image" image="rancher/mirrored-jettech-kube-webhook-certgen:v1.5.1" progress="Status: Downloaded newer image for rancher/mirrored-jettech-kube-webhook-certgen:v1.5.1"
E1025 15:06:14.223361    4045 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/besteffort/pod169e628d-46b0-478d-a133-39df0dbcd4ad\": RecentStats: unable to find data in memory cache]"
I1025 15:06:14.819566    4045 reconciler.go:269] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/169e628d-46b0-478d-a133-39df0dbcd4ad-webhook-cert\") pod \"nginx-ingress-controller-855pv\" (UID: \"169e628d-46b0-478d-a133-39df0dbcd4ad\") "
I1025 15:06:14.822461    4045 operation_generator.go:698] MountVolume.SetUp succeeded for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/169e628d-46b0-478d-a133-39df0dbcd4ad-webhook-cert") pod "nginx-ingress-controller-855pv" (UID: "169e628d-46b0-478d-a133-39df0dbcd4ad") 
I1025 15:06:14.860470    4045 kuberuntime_manager.go:460] "No sandbox for pod can be found. Need to start a new one" pod="ingress-nginx/nginx-ingress-controller-855pv"
I1025 15:06:14.871810    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="ingress-nginx/ingress-nginx-admission-create-cwm6l" event=&{ID:b61ee2eb-f71f-43f6-b643-4fa34cee4c46 Type:ContainerDied Data:9ca21f166fc6bd524566fa20733b5408e96c0d9273b2f414b2eb70c55e435350}
I1025 15:06:14.871934    4045 scope.go:111] "RemoveContainer" containerID="9ca21f166fc6bd524566fa20733b5408e96c0d9273b2f414b2eb70c55e435350"
I1025 15:06:15.032893    4045 kubelet.go:1939] "SyncLoop UPDATE" source="api" pods=[ingress-nginx/ingress-nginx-admission-create-cwm6l]
2021-10-25 15:06:15.031 [INFO][8750] k8s.go 563: Cleaning up netns ContainerID="a8b7b561e1f9dda51e1d24d2175a602fa41d0c6901423f32cbb36056d9c1edde"
2021-10-25 15:06:15.032 [INFO][8750] dataplane_linux.go 473: Calico CNI deleting device in netns /proc/8082/ns/net ContainerID="a8b7b561e1f9dda51e1d24d2175a602fa41d0c6901423f32cbb36056d9c1edde"
2021-10-25 15:06:15.057 [INFO][8750] dataplane_linux.go 490: Calico CNI deleted device in netns /proc/8082/ns/net ContainerID="a8b7b561e1f9dda51e1d24d2175a602fa41d0c6901423f32cbb36056d9c1edde"
2021-10-25 15:06:15.057 [INFO][8750] k8s.go 570: Releasing IP address(es) ContainerID="a8b7b561e1f9dda51e1d24d2175a602fa41d0c6901423f32cbb36056d9c1edde"
2021-10-25 15:06:15.057 [INFO][8750] utils.go 196: Calico CNI releasing IP address ContainerID="a8b7b561e1f9dda51e1d24d2175a602fa41d0c6901423f32cbb36056d9c1edde"
2021-10-25 15:06:15.058 [INFO][8750] utils.go 212: Using a dummy podCidr to release the IP ContainerID="a8b7b561e1f9dda51e1d24d2175a602fa41d0c6901423f32cbb36056d9c1edde" podCidr="0.0.0.0/0"
2021-10-25 15:06:15.058 [INFO][8750] utils.go 331: Calico CNI fetching podCidr from Kubernetes ContainerID="a8b7b561e1f9dda51e1d24d2175a602fa41d0c6901423f32cbb36056d9c1edde"
2021-10-25 15:06:15.058 [INFO][8750] utils.go 337: Fetched podCidr ContainerID="a8b7b561e1f9dda51e1d24d2175a602fa41d0c6901423f32cbb36056d9c1edde" podCidr="0.0.0.0/0"
2021-10-25 15:06:15.058 [INFO][8750] utils.go 340: Calico CNI passing podCidr to host-local IPAM: 0.0.0.0/0 ContainerID="a8b7b561e1f9dda51e1d24d2175a602fa41d0c6901423f32cbb36056d9c1edde"
2021-10-25 15:06:15.060 [INFO][8750] k8s.go 576: Teardown processing complete. ContainerID="a8b7b561e1f9dda51e1d24d2175a602fa41d0c6901423f32cbb36056d9c1edde"
I1025 15:06:15.235458    4045 kubelet.go:1939] "SyncLoop UPDATE" source="api" pods=[ingress-nginx/nginx-ingress-controller-855pv]
2021-10-25 15:06:15.176 [INFO][8822] plugin.go 260: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {192.168.70.2-k8s-nginx--ingress--controller--855pv-eth0 nginx-ingress-controller- ingress-nginx  169e628d-46b0-478d-a133-39df0dbcd4ad 827 0 2021-10-25 15:06:07 +0000 UTC <nil> <nil> map[app:ingress-nginx app.kubernetes.io/component:controller app.kubernetes.io/instance:ingress-nginx app.kubernetes.io/name:ingress-nginx app.kubernetes.io/version:0.48.1 controller-revision-hash:c5575d6d5 pod-template-generation:1 projectcalico.org/namespace:ingress-nginx projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nginx-ingress-serviceaccount] map[] [] []  []} {k8s  192.168.70.2  nginx-ingress-controller-855pv eth0 nginx-ingress-serviceaccount [] []   [kns.ingress-nginx ksa.ingress-nginx.nginx-ingress-serviceaccount] cali84b377352cd  [{webhook TCP 8443} {http TCP 80} {https TCP 443}]}} ContainerID="302b5130a2223afcaf1d6ca05c552e953fa966dfd565d53ecd94f334891f6f1b" Namespace="ingress-nginx" Pod="nginx-ingress-controller-855pv" WorkloadEndpoint="192.168.70.2-k8s-nginx--ingress--controller--855pv-"
2021-10-25 15:06:15.176 [INFO][8822] k8s.go 71: Extracted identifiers for CmdAddK8s ContainerID="302b5130a2223afcaf1d6ca05c552e953fa966dfd565d53ecd94f334891f6f1b" Namespace="ingress-nginx" Pod="nginx-ingress-controller-855pv" WorkloadEndpoint="192.168.70.2-k8s-nginx--ingress--controller--855pv-eth0"
2021-10-25 15:06:15.177 [INFO][8822] utils.go 331: Calico CNI fetching podCidr from Kubernetes ContainerID="302b5130a2223afcaf1d6ca05c552e953fa966dfd565d53ecd94f334891f6f1b" Namespace="ingress-nginx" Pod="nginx-ingress-controller-855pv" WorkloadEndpoint="192.168.70.2-k8s-nginx--ingress--controller--855pv-eth0"
2021-10-25 15:06:15.180 [INFO][8822] utils.go 337: Fetched podCidr ContainerID="302b5130a2223afcaf1d6ca05c552e953fa966dfd565d53ecd94f334891f6f1b" Namespace="ingress-nginx" Pod="nginx-ingress-controller-855pv" WorkloadEndpoint="192.168.70.2-k8s-nginx--ingress--controller--855pv-eth0" podCidr="10.42.2.0/24"
2021-10-25 15:06:15.181 [INFO][8822] utils.go 340: Calico CNI passing podCidr to host-local IPAM: 10.42.2.0/24 ContainerID="302b5130a2223afcaf1d6ca05c552e953fa966dfd565d53ecd94f334891f6f1b" Namespace="ingress-nginx" Pod="nginx-ingress-controller-855pv" WorkloadEndpoint="192.168.70.2-k8s-nginx--ingress--controller--855pv-eth0"
2021-10-25 15:06:15.191 [INFO][8822] k8s.go 374: Populated endpoint ContainerID="302b5130a2223afcaf1d6ca05c552e953fa966dfd565d53ecd94f334891f6f1b" Namespace="ingress-nginx" Pod="nginx-ingress-controller-855pv" WorkloadEndpoint="192.168.70.2-k8s-nginx--ingress--controller--855pv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.70.2-k8s-nginx--ingress--controller--855pv-eth0", GenerateName:"nginx-ingress-controller-", Namespace:"ingress-nginx", SelfLink:"", UID:"169e628d-46b0-478d-a133-39df0dbcd4ad", ResourceVersion:"827", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63770771167, loc:(*time.Location)(0x2b9b600)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"ingress-nginx", "app.kubernetes.io/component":"controller", "app.kubernetes.io/instance":"ingress-nginx", "app.kubernetes.io/name":"ingress-nginx", "app.kubernetes.io/version":"0.48.1", "controller-revision-hash":"c5575d6d5", "pod-template-generation":"1", "projectcalico.org/namespace":"ingress-nginx", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nginx-ingress-serviceaccount"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.70.2", ContainerID:"", Pod:"nginx-ingress-controller-855pv", Endpoint:"eth0", ServiceAccountName:"nginx-ingress-serviceaccount", IPNetworks:[]string{"10.42.2.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.ingress-nginx", "ksa.ingress-nginx.nginx-ingress-serviceaccount"}, InterfaceName:"cali84b377352cd", MAC:"", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"webhook", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x20fb}, v3.EndpointPort{Name:"http", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x50}, v3.EndpointPort{Name:"https", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1bb}}}}
2021-10-25 15:06:15.192 [INFO][8822] k8s.go 375: Calico CNI using IPs: [10.42.2.5/32] ContainerID="302b5130a2223afcaf1d6ca05c552e953fa966dfd565d53ecd94f334891f6f1b" Namespace="ingress-nginx" Pod="nginx-ingress-controller-855pv" WorkloadEndpoint="192.168.70.2-k8s-nginx--ingress--controller--855pv-eth0"
2021-10-25 15:06:15.192 [INFO][8822] dataplane_linux.go 66: Setting the host side veth name to cali84b377352cd ContainerID="302b5130a2223afcaf1d6ca05c552e953fa966dfd565d53ecd94f334891f6f1b" Namespace="ingress-nginx" Pod="nginx-ingress-controller-855pv" WorkloadEndpoint="192.168.70.2-k8s-nginx--ingress--controller--855pv-eth0"
2021-10-25 15:06:15.194 [INFO][8822] dataplane_linux.go 420: Disabling IPv4 forwarding ContainerID="302b5130a2223afcaf1d6ca05c552e953fa966dfd565d53ecd94f334891f6f1b" Namespace="ingress-nginx" Pod="nginx-ingress-controller-855pv" WorkloadEndpoint="192.168.70.2-k8s-nginx--ingress--controller--855pv-eth0"
2021-10-25 15:06:15.222 [INFO][8822] k8s.go 402: Added Mac, interface name, and active container ID to endpoint ContainerID="302b5130a2223afcaf1d6ca05c552e953fa966dfd565d53ecd94f334891f6f1b" Namespace="ingress-nginx" Pod="nginx-ingress-controller-855pv" WorkloadEndpoint="192.168.70.2-k8s-nginx--ingress--controller--855pv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.70.2-k8s-nginx--ingress--controller--855pv-eth0", GenerateName:"nginx-ingress-controller-", Namespace:"ingress-nginx", SelfLink:"", UID:"169e628d-46b0-478d-a133-39df0dbcd4ad", ResourceVersion:"827", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63770771167, loc:(*time.Location)(0x2b9b600)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"ingress-nginx", "app.kubernetes.io/component":"controller", "app.kubernetes.io/instance":"ingress-nginx", "app.kubernetes.io/name":"ingress-nginx", "app.kubernetes.io/version":"0.48.1", "controller-revision-hash":"c5575d6d5", "pod-template-generation":"1", "projectcalico.org/namespace":"ingress-nginx", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nginx-ingress-serviceaccount"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.70.2", ContainerID:"302b5130a2223afcaf1d6ca05c552e953fa966dfd565d53ecd94f334891f6f1b", Pod:"nginx-ingress-controller-855pv", Endpoint:"eth0", ServiceAccountName:"nginx-ingress-serviceaccount", IPNetworks:[]string{"10.42.2.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.ingress-nginx", "ksa.ingress-nginx.nginx-ingress-serviceaccount"}, InterfaceName:"cali84b377352cd", MAC:"72:92:4b:3a:11:28", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"webhook", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x20fb}, v3.EndpointPort{Name:"http", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x50}, v3.EndpointPort{Name:"https", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1bb}}}}
2021-10-25 15:06:15.236 [INFO][8822] k8s.go 476: Wrote updated endpoint to datastore ContainerID="302b5130a2223afcaf1d6ca05c552e953fa966dfd565d53ecd94f334891f6f1b" Namespace="ingress-nginx" Pod="nginx-ingress-controller-855pv" WorkloadEndpoint="192.168.70.2-k8s-nginx--ingress--controller--855pv-eth0"
I1025 15:06:15.888204    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="ingress-nginx/nginx-ingress-controller-855pv" event=&{ID:169e628d-46b0-478d-a133-39df0dbcd4ad Type:ContainerStarted Data:302b5130a2223afcaf1d6ca05c552e953fa966dfd565d53ecd94f334891f6f1b}
I1025 15:06:15.893818    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="ingress-nginx/ingress-nginx-admission-create-cwm6l" event=&{ID:b61ee2eb-f71f-43f6-b643-4fa34cee4c46 Type:ContainerDied Data:a8b7b561e1f9dda51e1d24d2175a602fa41d0c6901423f32cbb36056d9c1edde}
I1025 15:06:15.893856    4045 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="a8b7b561e1f9dda51e1d24d2175a602fa41d0c6901423f32cbb36056d9c1edde"
I1025 15:06:15.893936    4045 kuberuntime_manager.go:479] "No ready sandbox for pod can be found. Need to start a new one" pod="ingress-nginx/ingress-nginx-admission-create-cwm6l"
I1025 15:06:17.032394    4045 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glz2s\" (UniqueName: \"kubernetes.io/projected/b61ee2eb-f71f-43f6-b643-4fa34cee4c46-kube-api-access-glz2s\") pod \"b61ee2eb-f71f-43f6-b643-4fa34cee4c46\" (UID: \"b61ee2eb-f71f-43f6-b643-4fa34cee4c46\") "
I1025 15:06:17.033912    4045 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b61ee2eb-f71f-43f6-b643-4fa34cee4c46-kube-api-access-glz2s" (OuterVolumeSpecName: "kube-api-access-glz2s") pod "b61ee2eb-f71f-43f6-b643-4fa34cee4c46" (UID: "b61ee2eb-f71f-43f6-b643-4fa34cee4c46"). InnerVolumeSpecName "kube-api-access-glz2s". PluginName "kubernetes.io/projected", VolumeGidValue ""
I1025 15:06:17.132788    4045 reconciler.go:319] "Volume detached for volume \"kube-api-access-glz2s\" (UniqueName: \"kubernetes.io/projected/b61ee2eb-f71f-43f6-b643-4fa34cee4c46-kube-api-access-glz2s\") on node \"192.168.70.2\" DevicePath \"\""
I1025 15:06:17.226887    4045 kube_docker_client.go:347] "Stop pulling image" image="rancher/mirrored-coredns-coredns:1.8.4" progress="Status: Downloaded newer image for rancher/mirrored-coredns-coredns:1.8.4"
I1025 15:06:17.924928    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="kube-system/coredns-685d6d555d-9j7xt" event=&{ID:2bdb1ee7-2200-45a6-87f0-64b0ec64c87e Type:ContainerStarted Data:67833382b935eedcce29e956f3788a215668a5e39c1f71ce4130198e6e218416}
I1025 15:06:17.925248    4045 kubelet.go:2026] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/coredns-685d6d555d-9j7xt"
I1025 15:06:17.926011    4045 kubelet.go:2026] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/coredns-685d6d555d-9j7xt"
I1025 15:06:22.947014    4045 kube_docker_client.go:347] "Stop pulling image" image="rancher/nginx-ingress-controller:nginx-0.48.1-rancher1" progress="Status: Downloaded newer image for rancher/nginx-ingress-controller:nginx-0.48.1-rancher1"
I1025 15:06:23.983501    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="ingress-nginx/nginx-ingress-controller-855pv" event=&{ID:169e628d-46b0-478d-a133-39df0dbcd4ad Type:ContainerStarted Data:72109c8e85f4047df9c88ae269bb6bb7fbd566ae398baf08fa429c4d949eb1b8}
I1025 15:06:23.983656    4045 kubelet.go:2026] "SyncLoop (probe)" probe="readiness" status="" pod="ingress-nginx/nginx-ingress-controller-855pv"
I1025 15:06:37.039616    4045 kubelet.go:2026] "SyncLoop (probe)" probe="readiness" status="ready" pod="ingress-nginx/nginx-ingress-controller-855pv"
I1025 15:07:48.615809    4045 kubelet.go:1932] "SyncLoop ADD" source="api" pods=[metallb/metallb-speaker-wf85k]
I1025 15:07:48.615884    4045 topology_manager.go:187] "Topology Admit Handler"
W1025 15:07:48.647561    4045 container.go:586] Failed to update stats for container "/kubepods/besteffort/podb529cef5-8bb9-4386-bb63-aceb1a61984c": /sys/fs/cgroup/cpuset/kubepods/besteffort/podb529cef5-8bb9-4386-bb63-aceb1a61984c/cpuset.mems found to be empty, continuing to push stats
I1025 15:07:48.715534    4045 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn9fb\" (UniqueName: \"kubernetes.io/projected/b529cef5-8bb9-4386-bb63-aceb1a61984c-kube-api-access-zn9fb\") pod \"metallb-speaker-wf85k\" (UID: \"b529cef5-8bb9-4386-bb63-aceb1a61984c\") "
I1025 15:07:48.816860    4045 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-zn9fb\" (UniqueName: \"kubernetes.io/projected/b529cef5-8bb9-4386-bb63-aceb1a61984c-kube-api-access-zn9fb\") pod \"metallb-speaker-wf85k\" (UID: \"b529cef5-8bb9-4386-bb63-aceb1a61984c\") "
I1025 15:07:48.832621    4045 operation_generator.go:698] MountVolume.SetUp succeeded for volume "kube-api-access-zn9fb" (UniqueName: "kubernetes.io/projected/b529cef5-8bb9-4386-bb63-aceb1a61984c-kube-api-access-zn9fb") pod "metallb-speaker-wf85k" (UID: "b529cef5-8bb9-4386-bb63-aceb1a61984c") 
I1025 15:07:48.949454    4045 kuberuntime_manager.go:460] "No sandbox for pod can be found. Need to start a new one" pod="metallb/metallb-speaker-wf85k"
I1025 15:07:49.913105    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="metallb/metallb-speaker-wf85k" event=&{ID:b529cef5-8bb9-4386-bb63-aceb1a61984c Type:ContainerStarted Data:c16a75317502b44ab7e04813a5f197cd1b6465d73edf7ba82edf615dad8100d0}
I1025 15:07:53.037775    4045 kube_docker_client.go:347] "Stop pulling image" image="docker.io/bitnami/metallb-speaker:0.10.2-debian-10-r110" progress="Status: Downloaded newer image for bitnami/metallb-speaker:0.10.2-debian-10-r110"
I1025 15:07:53.958195    4045 kubelet.go:1970] "SyncLoop (PLEG): event for pod" pod="metallb/metallb-speaker-wf85k" event=&{ID:b529cef5-8bb9-4386-bb63-aceb1a61984c Type:ContainerStarted Data:60572a28b692a553bb3ac52c40a3346e2bca4e84a28174d2464ca2d2458f5636}
I1025 15:07:53.958456    4045 kubelet.go:2026] "SyncLoop (probe)" probe="readiness" status="" pod="metallb/metallb-speaker-wf85k"
E1025 15:07:54.225011    4045 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/besteffort/podb529cef5-8bb9-4386-bb63-aceb1a61984c\": RecentStats: unable to find data in memory cache]"
E1025 15:07:59.196203    4045 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/besteffort/podb529cef5-8bb9-4386-bb63-aceb1a61984c\": RecentStats: unable to find data in memory cache]"
I1025 15:08:08.625529    4045 kubelet.go:2026] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb/metallb-speaker-wf85k"
I1025 15:10:33.754020    4045 kubelet.go:1316] "Image garbage collection succeeded"
I1025 15:10:33.895347    4045 container_manager_linux.go:510] "Discovered runtime cgroup name" cgroupName="/system.slice/docker.service"
elacy commented 3 years ago

OK, I may have cracked it.

So I figured that if this is happening, it must be a network issue. It turns out Canal can be told to use a particular interface, as described here. The interface I use for communication between nodes is not the default interface; once I pointed Canal at the interface I actually use for inter-node traffic, it now works great!
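For anyone else hitting this on an RKE-provisioned cluster, the Canal interface can usually be pinned in `cluster.yml`. A minimal sketch, assuming the RKE `canal_iface` network option; `eth1` is only a placeholder for whatever NIC carries your inter-node traffic:

```yaml
# cluster.yml (RKE) -- sketch, adjust to your environment
network:
  plugin: canal
  options:
    # Placeholder interface name; replace with the NIC used for
    # inter-node traffic (not necessarily the default-route interface).
    canal_iface: eth1
```

After editing `cluster.yml`, re-running `rke up` should reconcile the change; Rancher-managed RKE clusters expose the same setting in the cluster's YAML options.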

stale[bot] commented 2 years ago

This repository uses a bot to automatically label issues which have not had any activity (commit/comment/label) for 60 days. This helps us manage the community issues better. If the issue is still relevant, please add a comment to the issue so the bot can remove the label and we know it is still valid. If it is no longer relevant (or possibly fixed in the latest release), the bot will automatically close the issue in 14 days. Thank you for your contributions.