k0sproject / k0s


invalid capacity 0 on image filesystem after successful deployment on Debian 9 or Debian 10 #897

Closed · ghost closed this issue 3 years ago

ghost commented 3 years ago

Version

v1.20.6+k0s.0

Platform Which platform did you run k0s on?

No LSB modules are available.
Distributor ID: Debian
Description:    Debian GNU/Linux 10 (buster)
Release:        10
Codename:       buster

What happened? After a fresh install with , the worker node reports "invalid capacity 0 on image filesystem" both in the Lens app and in the output of kubectl describe node k0s-multinode-slave1.
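
To spot the warning quickly without reading the full node description, one can grep the events section directly (an illustrative command, not from the original report):

$ kubectl describe node k0s-multinode-slave1 | grep InvalidDiskCapacity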

How To Reproduce Deploy the cluster with the following Ansible inventory:

all:
  children:
    initial_controller:
      hosts:
        k0s-1:
    worker:
      hosts:
        k0s-2:

  hosts:
    k0s-1:
      ansible_host: HOST1_IP
    k0s-2:
      ansible_host: HOST2_IP
  vars:
    ansible_user: USERNAME
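
For reference, applying such an inventory with the k0s-ansible playbooks might look like the following; the playbook name site.yml is an assumption and may differ in your checkout:

# Hypothetical invocation; adjust the playbook and inventory paths to your setup
$ ansible-playbook -i inventory.yml site.yml
# Then confirm both nodes registered with the cluster
$ kubectl get nodes -o wide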

Expected behavior I would expect everything to just work, with no warnings reported on the worker node.

Screenshots & Logs

$ export KUBECONFIG=/var/lib/k0s/pki/admin.conf
$ kubectl describe node k0s-multinode-slave1
Name:               k0s-multinode-slave1
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k0s-multinode-slave1
                    kubernetes.io/os=linux
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 07 May 2021 18:48:15 +0300
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  k0s-multinode-slave1
  AcquireTime:     <unset>
  RenewTime:       Fri, 07 May 2021 19:14:58 +0300
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Fri, 07 May 2021 19:14:58 +0300   Fri, 07 May 2021 18:59:39 +0300   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Fri, 07 May 2021 19:14:58 +0300   Fri, 07 May 2021 18:59:39 +0300   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Fri, 07 May 2021 19:14:58 +0300   Fri, 07 May 2021 18:59:39 +0300   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Fri, 07 May 2021 19:14:58 +0300   Fri, 07 May 2021 18:59:50 +0300   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  65.21.110.11
  Hostname:    k0s-multinode-slave1
Capacity:
  cpu:                3
  ephemeral-storage:  78585088Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3942980Ki
  pods:               110
Allocatable:
  cpu:                3
  ephemeral-storage:  72424016981
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3840580Ki
  pods:               110
System Info:
  Machine ID:                 4a6a1a2e0d60485bb50a882b5e8ec26f
  System UUID:                4a6a1a2e-0d60-485b-b50a-882b5e8ec26f
  Boot ID:                    dd9b436f-e891-4f50-bccf-61e9bb722519
  Kernel Version:             4.19.0-16-amd64
  OS Image:                   Debian GNU/Linux 10 (buster)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.4.4
  Kubelet Version:            v1.20.6-k0s1
  Kube-Proxy Version:         v1.20.6-k0s1
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (5 in total)
  Namespace                   Name                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                               ------------  ----------  ---------------  -------------  ---
  kube-system                 coredns-5c98d7d4d8-fftng           100m (3%)     0 (0%)      70Mi (1%)        170Mi (4%)     43m
  kube-system                 konnectivity-agent-96htm           0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
  kube-system                 kube-proxy-xlnld                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
  kube-system                 kube-router-8bv27                  250m (8%)     0 (0%)      250Mi (6%)       0 (0%)         26m
  kube-system                 metrics-server-6fbcd86f7b-7pss8    10m (0%)      0 (0%)      30M (0%)         0 (0%)         43m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests        Limits
  --------           --------        ------
  cpu                360m (12%)      0 (0%)
  memory             365544320 (9%)  170Mi (4%)
  ephemeral-storage  0 (0%)          0 (0%)
  hugepages-1Gi      0 (0%)          0 (0%)
  hugepages-2Mi      0 (0%)          0 (0%)
Events:
  Type     Reason                   Age                From        Message
  ----     ------                   ----               ----        -------
  Normal   Starting                 26m                kubelet     Starting kubelet.
  Warning  InvalidDiskCapacity      26m                kubelet     invalid capacity 0 on image filesystem
  Normal   NodeHasSufficientMemory  26m (x2 over 26m)  kubelet     Node k0s-multinode-slave1 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    26m (x2 over 26m)  kubelet     Node k0s-multinode-slave1 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     26m (x2 over 26m)  kubelet     Node k0s-multinode-slave1 status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  26m                kubelet     Updated Node Allocatable limit across pods
  Warning  InvalidDiskCapacity      24m                kubelet     invalid capacity 0 on image filesystem
  Normal   Starting                 24m                kubelet     Starting kubelet.
  Normal   NodeAllocatableEnforced  24m                kubelet     Updated Node Allocatable limit across pods
  Normal   NodeHasSufficientMemory  24m (x2 over 24m)  kubelet     Node k0s-multinode-slave1 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    24m (x2 over 24m)  kubelet     Node k0s-multinode-slave1 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     24m (x2 over 24m)  kubelet     Node k0s-multinode-slave1 status is now: NodeHasSufficientPID
  Warning  Rebooted                 24m                kubelet     Node k0s-multinode-slave1 has been rebooted, boot id: defa89de-32a2-43fa-8793-0cc9a068a668
  Normal   NodeNotReady             24m                kubelet     Node k0s-multinode-slave1 status is now: NodeNotReady
  Normal   Starting                 24m                kube-proxy  Starting kube-proxy.
  Normal   NodeReady                24m                kubelet     Node k0s-multinode-slave1 status is now: NodeReady
  Normal   Starting                 15m                kubelet     Starting kubelet.
  Warning  InvalidDiskCapacity      15m                kubelet     invalid capacity 0 on image filesystem
  Normal   NodeAllocatableEnforced  15m                kubelet     Updated Node Allocatable limit across pods
  Normal   NodeHasSufficientMemory  15m (x2 over 15m)  kubelet     Node k0s-multinode-slave1 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    15m (x2 over 15m)  kubelet     Node k0s-multinode-slave1 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     15m (x2 over 15m)  kubelet     Node k0s-multinode-slave1 status is now: NodeHasSufficientPID
  Warning  Rebooted                 15m                kubelet     Node k0s-multinode-slave1 has been rebooted, boot id: dd9b436f-e891-4f50-bccf-61e9bb722519
  Normal   NodeNotReady             15m                kubelet     Node k0s-multinode-slave1 status is now: NodeNotReady
  Normal   Starting                 15m                kube-proxy  Starting kube-proxy.
  Normal   NodeReady                15m                kubelet     Node k0s-multinode-slave1 status is now: NodeReady

Additional context I enabled cgroup v2 in the kernel by following https://rootlesscontaine.rs/getting-started/common/cgroup2/, so /sys/fs/cgroup/cgroup.controllers is now present, but I still see the warning.
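
A quick way to double-check that the unified cgroup v2 hierarchy is actually active (a generic sketch, not taken from this thread):

# cgroup.controllers exists only at the root of a unified (v2) hierarchy
$ cat /sys/fs/cgroup/cgroup.controllers
# The mount's filesystem type reports cgroup2fs when cgroup v2 is in use
$ stat -fc %T /sys/fs/cgroup/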

jnummelin commented 3 years ago

@achrjulien did you check the worker logs to see if kubelet reports anything about why it fails to get the image filesystem status? You can filter the kubelet logs, e.g. with journalctl -u k0sworker.service | grep "component=kubelet". In the node status I can see all the expected system pods up and running, so I suspect this is an issue with kubelet reading the filesystem details rather than a real problem pulling and using images.
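
For example, to narrow that output down to the lines that matter for this warning (the second grep pattern is an illustration, not part of the original suggestion):

$ journalctl -u k0sworker.service | grep "component=kubelet" | grep -iE "image filesystem|cri_stats_provider"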

This is what I see on a healthy cluster in logs when kubelet starts:

May 11 16:18:10 k0s-wrkr-1 k0s[880]: time="2021-05-11 16:18:10" level=info msg="E0511 16:18:10.057877     904 cri_stats_provider.go:369] \"Failed to get the info of the filesystem with mountpoint\" err=\"unable to find data in memory cache\" mountpoint=\"/var/lib/k0s/containerd/io.containerd.snapshotter.v1.overlayfs\"" component=kubelet
May 11 16:18:10 k0s-wrkr-1 k0s[880]: time="2021-05-11 16:18:10" level=info msg="E0511 16:18:10.057920     904 kubelet.go:1313] \"Image garbage collection failed once. Stats initialization may not have completed yet\" err=\"invalid capacity 0 on image filesystem\"" component=kubelet

This is expected behaviour: the caches for node stats are not yet populated at startup, but after a while the warning condition should be cleared.
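
One way to confirm the warning stops recurring once the stats caches are populated (a sketch using a standard kubectl field selector, not something from this thread):

# On a healthy node, InvalidDiskCapacity events should appear only around kubelet startup
$ kubectl get events --field-selector reason=InvalidDiskCapacity -w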

ghost commented 3 years ago

@jnummelin Thank you for checking this issue. Here is the relevant part of the logs from my side:

May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="E0512 17:56:22.647072     764 cri_stats_provider.go:376] Failed to get the info of the filesystem with mountpoint \"/var/lib/k0s/containerd/io.containerd.snapshotter.v1.overlayfs\": unable to find data in memory cache." component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="E0512 17:56:22.647098     764 kubelet.go:1296] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.648202     764 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.649761     764 volume_manager.go:271] Starting Kubelet Volume Manager" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.650343     764 desired_state_of_world_populator.go:142] Desired state populator starts to run" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.651945     764 scope.go:111] [topologymanager] RemoveContainer - Container ID: 15e319ba9b0f50d6f5ebed71c428f4d93605f62d3135c70bcd7acfa6e7ad323e" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.652672     764 client.go:86] parsed scheme: \"unix\"" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.652993     764 client.go:86] scheme \"unix\" not registered, fallback to default scheme" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.653116     764 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/k0s/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" component=kubelet

The full log is rather long, but I am posting it in case it helps pinpoint what is happening:

# journalctl -u k0sworker.service | grep "component=kubelet"
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="Starting to supervise" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="Started successfully, go nuts" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed." component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="Flag --kube-reserved-cgroup has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information." component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="Flag --kubelet-cgroups has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information." component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information." component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="Flag --resolv-conf has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information." component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.300941     764 flags.go:59] FLAG: --add-dir-header=\"false\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.300999     764 flags.go:59] FLAG: --address=\"0.0.0.0\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301004     764 flags.go:59] FLAG: --allowed-unsafe-sysctls=\"[]\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301012     764 flags.go:59] FLAG: --alsologtostderr=\"false\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301015     764 flags.go:59] FLAG: --anonymous-auth=\"true\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301020     764 flags.go:59] FLAG: --application-metrics-count-limit=\"100\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301024     764 flags.go:59] FLAG: --authentication-token-webhook=\"false\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301028     764 flags.go:59] FLAG: --authentication-token-webhook-cache-ttl=\"2m0s\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301033     764 flags.go:59] FLAG: --authorization-mode=\"AlwaysAllow\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301037     764 flags.go:59] FLAG: --authorization-webhook-cache-authorized-ttl=\"5m0s\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301040     764 flags.go:59] FLAG: --authorization-webhook-cache-unauthorized-ttl=\"30s\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301044     764 flags.go:59] FLAG: --boot-id-file=\"/proc/sys/kernel/random/boot_id\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301048     764 flags.go:59] FLAG: --bootstrap-kubeconfig=\"/var/lib/k0s/kubelet-bootstrap.conf\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301052     764 flags.go:59] FLAG: --cert-dir=\"/var/lib/k0s/kubelet/pki\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301055     764 flags.go:59] FLAG: --cgroup-driver=\"cgroupfs\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301059     764 flags.go:59] FLAG: --cgroup-root=\"\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301062     764 flags.go:59] FLAG: --cgroups-per-qos=\"true\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301065     764 flags.go:59] FLAG: --chaos-chance=\"0\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301167     764 flags.go:59] FLAG: --client-ca-file=\"\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301178     764 flags.go:59] FLAG: --cloud-config=\"\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301181     764 flags.go:59] FLAG: --cloud-provider=\"\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301184     764 flags.go:59] FLAG: --cluster-dns=\"[]\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301189     764 flags.go:59] FLAG: --cluster-domain=\"\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301192     764 flags.go:59] FLAG: --cni-bin-dir=\"/opt/cni/bin\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301195     764 flags.go:59] FLAG: --cni-cache-dir=\"/var/lib/cni/cache\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301198     764 flags.go:59] FLAG: --cni-conf-dir=\"/etc/cni/net.d\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301202     764 flags.go:59] FLAG: --config=\"/var/lib/k0s/kubelet-config.yaml\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301206     764 flags.go:59] FLAG: --container-hints=\"/etc/cadvisor/container_hints.json\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301210     764 flags.go:59] FLAG: --container-log-max-files=\"5\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301216     764 flags.go:59] FLAG: --container-log-max-size=\"10Mi\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301251     764 flags.go:59] FLAG: --container-runtime=\"remote\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301257     764 flags.go:59] FLAG: --container-runtime-endpoint=\"unix:///run/k0s/containerd.sock\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301261     764 flags.go:59] FLAG: --containerd=\"/run/k0s/containerd.sock\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301265     764 flags.go:59] FLAG: --containerd-namespace=\"k8s.io\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301268     764 flags.go:59] FLAG: --contention-profiling=\"false\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301272     764 flags.go:59] FLAG: --cpu-cfs-quota=\"true\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301275     764 flags.go:59] FLAG: --cpu-cfs-quota-period=\"100ms\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301279     764 flags.go:59] FLAG: --cpu-manager-policy=\"none\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301283     764 flags.go:59] FLAG: --cpu-manager-reconcile-period=\"10s\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301287     764 flags.go:59] FLAG: --docker=\"unix:///var/run/docker.sock\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301291     764 flags.go:59] FLAG: --docker-endpoint=\"unix:///var/run/docker.sock\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301295     764 flags.go:59] FLAG: --docker-env-metadata-whitelist=\"\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301298     764 flags.go:59] FLAG: --docker-only=\"false\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301337     764 flags.go:59] FLAG: --docker-root=\"/var/lib/docker\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301343     764 flags.go:59] FLAG: --docker-tls=\"false\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301347     764 flags.go:59] FLAG: --docker-tls-ca=\"ca.pem\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301350     764 flags.go:59] FLAG: --docker-tls-cert=\"cert.pem\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301354     764 flags.go:59] FLAG: --docker-tls-key=\"key.pem\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301357     764 flags.go:59] FLAG: --dynamic-config-dir=\"\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301363     764 flags.go:59] FLAG: --enable-cadvisor-json-endpoints=\"false\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301367     764 flags.go:59] FLAG: --enable-controller-attach-detach=\"true\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301370     764 flags.go:59] FLAG: --enable-debugging-handlers=\"true\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301373     764 flags.go:59] FLAG: --enable-load-reader=\"false\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301377     764 flags.go:59] FLAG: --enable-server=\"true\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301380     764 flags.go:59] FLAG: --enforce-node-allocatable=\"[pods]\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301418     764 flags.go:59] FLAG: --event-burst=\"10\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301423     764 flags.go:59] FLAG: --event-qps=\"5\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301428     764 flags.go:59] FLAG: --event-storage-age-limit=\"default=0\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301432     764 flags.go:59] FLAG: --event-storage-event-limit=\"default=0\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301435     764 flags.go:59] FLAG: --eviction-hard=\"imagefs.available<15%,memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301449     764 flags.go:59] FLAG: --eviction-max-pod-grace-period=\"0\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301452     764 flags.go:59] FLAG: --eviction-minimum-reclaim=\"\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301457     764 flags.go:59] FLAG: --eviction-pressure-transition-period=\"5m0s\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301461     764 flags.go:59] FLAG: --eviction-soft=\"\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301464     764 flags.go:59] FLAG: --eviction-soft-grace-period=\"\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301467     764 flags.go:59] FLAG: --exit-on-lock-contention=\"false\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301470     764 flags.go:59] FLAG: --experimental-allocatable-ignore-eviction=\"false\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301473     764 flags.go:59] FLAG: --experimental-bootstrap-kubeconfig=\"/var/lib/k0s/kubelet-bootstrap.conf\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301509     764 flags.go:59] FLAG: --experimental-check-node-capabilities-before-mount=\"false\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301515     764 flags.go:59] FLAG: --experimental-dockershim-root-directory=\"/var/lib/dockershim\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301518     764 flags.go:59] FLAG: --experimental-kernel-memcg-notification=\"false\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301521     764 flags.go:59] FLAG: --experimental-logging-sanitization=\"false\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301525     764 flags.go:59] FLAG: --experimental-mounter-path=\"\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301528     764 flags.go:59] FLAG: --fail-swap-on=\"true\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301531     764 flags.go:59] FLAG: --feature-gates=\"\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301537     764 flags.go:59] FLAG: --file-check-frequency=\"20s\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301541     764 flags.go:59] FLAG: --global-housekeeping-interval=\"1m0s\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301544     764 flags.go:59] FLAG: --hairpin-mode=\"promiscuous-bridge\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301548     764 flags.go:59] FLAG: --healthz-bind-address=\"127.0.0.1\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301551     764 flags.go:59] FLAG: --healthz-port=\"10248\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301555     764 flags.go:59] FLAG: --help=\"false\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301558     764 flags.go:59] FLAG: --hostname-override=\"\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301561     764 flags.go:59] FLAG: --housekeeping-interval=\"10s\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301565     764 flags.go:59] FLAG: --http-check-frequency=\"20s\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301568     764 flags.go:59] FLAG: --image-credential-provider-bin-dir=\"\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301571     764 flags.go:59] FLAG: --image-credential-provider-config=\"\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301574     764 flags.go:59] FLAG: --image-gc-high-threshold=\"85\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301577     764 flags.go:59] FLAG: --image-gc-low-threshold=\"80\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301580     764 flags.go:59] FLAG: --image-pull-progress-deadline=\"1m0s\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301584     764 flags.go:59] FLAG: --image-service-endpoint=\"\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301586     764 flags.go:59] FLAG: --iptables-drop-bit=\"15\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301590     764 flags.go:59] FLAG: --iptables-masquerade-bit=\"14\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301593     764 flags.go:59] FLAG: --keep-terminated-pod-volumes=\"false\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301596     764 flags.go:59] FLAG: --kernel-memcg-notification=\"false\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301599     764 flags.go:59] FLAG: --kube-api-burst=\"10\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301604     764 flags.go:59] FLAG: --kube-api-content-type=\"application/vnd.kubernetes.protobuf\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301608     764 flags.go:59] FLAG: --kube-api-qps=\"5\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301611     764 flags.go:59] FLAG: --kube-reserved=\"\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301615     764 flags.go:59] FLAG: --kube-reserved-cgroup=\"system.slice\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301618     764 flags.go:59] FLAG: --kubeconfig=\"/var/lib/k0s/kubelet.conf\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301622     764 flags.go:59] FLAG: --kubelet-cgroups=\"/system.slice/containerd.service\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301626     764 flags.go:59] FLAG: --lock-file=\"\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301629     764 flags.go:59] FLAG: --log-backtrace-at=\":0\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301633     764 flags.go:59] FLAG: --log-cadvisor-usage=\"false\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301637     764 flags.go:59] FLAG: --log-dir=\"\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301640     764 flags.go:59] FLAG: --log-file=\"\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301643     764 flags.go:59] FLAG: --log-file-max-size=\"1800\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301685     764 flags.go:59] FLAG: --log-flush-frequency=\"5s\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301690     764 flags.go:59] FLAG: --logging-format=\"text\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301693     764 flags.go:59] FLAG: --logtostderr=\"true\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301696     764 flags.go:59] FLAG: --machine-id-file=\"/etc/machine-id,/var/lib/dbus/machine-id\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301700     764 flags.go:59] FLAG: --make-iptables-util-chains=\"true\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301703     764 flags.go:59] FLAG: --manifest-url=\"\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301707     764 flags.go:59] FLAG: --manifest-url-header=\"\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301713     764 flags.go:59] FLAG: --master-service-namespace=\"default\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.301717     764 flags.go:59] FLAG: --max-open-files=\"1000000\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302847     764 flags.go:59] FLAG: --max-pods=\"110\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302853     764 flags.go:59] FLAG: --maximum-dead-containers=\"-1\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302858     764 flags.go:59] FLAG: --maximum-dead-containers-per-container=\"1\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302862     764 flags.go:59] FLAG: --minimum-container-ttl-duration=\"0s\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302865     764 flags.go:59] FLAG: --minimum-image-ttl-duration=\"2m0s\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302869     764 flags.go:59] FLAG: --network-plugin=\"\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302872     764 flags.go:59] FLAG: --network-plugin-mtu=\"0\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302876     764 flags.go:59] FLAG: --node-ip=\"\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302878     764 flags.go:59] FLAG: --node-labels=\"\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302884     764 flags.go:59] FLAG: --node-status-max-images=\"50\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302887     764 flags.go:59] FLAG: --node-status-update-frequency=\"10s\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302890     764 flags.go:59] FLAG: --non-masquerade-cidr=\"10.0.0.0/8\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302893     764 flags.go:59] FLAG: --one-output=\"false\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302898     764 flags.go:59] FLAG: --oom-score-adj=\"-999\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302901     764 flags.go:59] FLAG: --pod-cidr=\"\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302903     764 flags.go:59] FLAG: --pod-infra-container-image=\"k8s.gcr.io/pause:3.2\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302906     764 flags.go:59] FLAG: --pod-manifest-path=\"\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302909     764 flags.go:59] FLAG: --pod-max-pids=\"-1\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302914     764 flags.go:59] FLAG: --pods-per-core=\"0\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302917     764 flags.go:59] FLAG: --port=\"10250\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302920     764 flags.go:59] FLAG: --protect-kernel-defaults=\"false\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302923     764 flags.go:59] FLAG: --provider-id=\"\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302926     764 flags.go:59] FLAG: --qos-reserved=\"\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302931     764 flags.go:59] FLAG: --read-only-port=\"10255\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302934     764 flags.go:59] FLAG: --really-crash-for-testing=\"false\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302937     764 flags.go:59] FLAG: --redirect-container-streaming=\"false\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302941     764 flags.go:59] FLAG: --register-node=\"true\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302944     764 flags.go:59] FLAG: --register-schedulable=\"true\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302947     764 flags.go:59] FLAG: --register-with-taints=\"\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302955     764 flags.go:59] FLAG: --registry-burst=\"10\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302958     764 flags.go:59] FLAG: --registry-qps=\"5\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302961     764 flags.go:59] FLAG: --reserved-cpus=\"\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302963     764 flags.go:59] FLAG: --resolv-conf=\"/etc/resolv.conf\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302967     764 flags.go:59] FLAG: --root-dir=\"/var/lib/k0s/kubelet\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302970     764 flags.go:59] FLAG: --rotate-certificates=\"false\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302973     764 flags.go:59] FLAG: --rotate-server-certificates=\"false\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302976     764 flags.go:59] FLAG: --runonce=\"false\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302979     764 flags.go:59] FLAG: --runtime-cgroups=\"/system.slice/containerd.service\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302982     764 flags.go:59] FLAG: --runtime-request-timeout=\"2m0s\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302985     764 flags.go:59] FLAG: --seccomp-profile-root=\"/var/lib/kubelet/seccomp\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302989     764 flags.go:59] FLAG: --serialize-image-pulls=\"true\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302992     764 flags.go:59] FLAG: --skip-headers=\"false\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302995     764 flags.go:59] FLAG: --skip-log-headers=\"false\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.302997     764 flags.go:59] FLAG: --stderrthreshold=\"2\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.303000     764 flags.go:59] FLAG: --storage-driver-buffer-duration=\"1m0s\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.303004     764 flags.go:59] FLAG: --storage-driver-db=\"cadvisor\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.303007     764 flags.go:59] FLAG: --storage-driver-host=\"localhost:8086\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.303010     764 flags.go:59] FLAG: --storage-driver-password=\"root\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.303013     764 flags.go:59] FLAG: --storage-driver-secure=\"false\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.303016     764 flags.go:59] FLAG: --storage-driver-table=\"stats\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.303019     764 flags.go:59] FLAG: --storage-driver-user=\"root\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.303023     764 flags.go:59] FLAG: --streaming-connection-idle-timeout=\"4h0m0s\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.303026     764 flags.go:59] FLAG: --sync-frequency=\"1m0s\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.303029     764 flags.go:59] FLAG: --system-cgroups=\"\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.303032     764 flags.go:59] FLAG: --system-reserved=\"\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.303035     764 flags.go:59] FLAG: --system-reserved-cgroup=\"\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.303038     764 flags.go:59] FLAG: --tls-cert-file=\"\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.303042     764 flags.go:59] FLAG: --tls-cipher-suites=\"[]\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.303047     764 flags.go:59] FLAG: --tls-min-version=\"\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.303050     764 flags.go:59] FLAG: --tls-private-key-file=\"\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.303053     764 flags.go:59] FLAG: --topology-manager-policy=\"none\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.303056     764 flags.go:59] FLAG: --topology-manager-scope=\"container\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.303059     764 flags.go:59] FLAG: --v=\"1\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.303061     764 flags.go:59] FLAG: --version=\"false\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.303066     764 flags.go:59] FLAG: --vmodule=\"\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.303069     764 flags.go:59] FLAG: --volume-plugin-dir=\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec/\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.303074     764 flags.go:59] FLAG: --volume-stats-agg-period=\"1m0s\"" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.303124     764 feature_gate.go:243] feature gates: &{map[]}" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="Flag --kube-reserved-cgroup has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information." component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="Flag --kubelet-cgroups has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information." component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information." component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="Flag --resolv-conf has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information." component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.306011     764 feature_gate.go:243] feature gates: &{map[]}" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.306086     764 feature_gate.go:243] feature gates: &{map[]}" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="W0512 17:56:17.306121     764 server.go:219] unsupported configuration:KubeletCgroups is not within KubeReservedCgroup" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.317914     764 server.go:416] Version: v1.20.6-k0s1" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.318004     764 feature_gate.go:243] feature gates: &{map[]}" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.318094     764 feature_gate.go:243] feature gates: &{map[]}" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.318227     764 server.go:837] Client rotation is on, will bootstrap in background" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.327823     764 certificate_store.go:130] Loading cert/key pair from \"/var/lib/k0s/kubelet/pki/kubelet-client-current.pem\"." component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="I0512 17:56:17.331057     764 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/k0s/pki/ca.crt" component=kubelet
May 12 17:56:17 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:17" level=info msg="W0512 17:56:17.331526     764 manager.go:159] Cannot detect current cgroup on cgroup v2" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.335091     764 fs.go:127] Filesystem UUIDs: map[175E-2886:/dev/sda15 95869d0b-2f7c-4c81-95d0-526224a44fa6:/dev/sda1]" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.335137     764 fs.go:128] Filesystem partitions: map[/dev/sda1:{mountpoint:/ major:8 minor:1 fsType:ext4 blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /run:{mountpoint:/run major:0 minor:21 fsType:tmpfs blockSize:0} /run/lock:{mountpoint:/run/lock major:0 minor:23 fsType:tmpfs blockSize:0}]" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.339384     764 manager.go:213] Machine: {Timestamp:2021-05-12 17:56:22.338920249 +0300 EEST m=+5.210945194 NumCores:3 NumPhysicalCores:3 NumSockets:1 CpuFrequency:2495312 MemoryCapacity:4037627904 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:4a6a1a2e0d60485bb50a882b5e8ec26f SystemUUID:4a6a1a2e-0d60-485b-b50a-882b5e8ec26f BootID:5b9238d0-5f84-46a1-88de-91841e778183 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:21 Capacity:403763200 Type:vfs Inodes:492874 HasInodes:true} {Device:/dev/sda1 DeviceMajor:8 DeviceMinor:1 Capacity:80471130112 Type:vfs Inodes:4880000 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:2018811904 Type:vfs Inodes:492874 HasInodes:true} {Device:/run/lock DeviceMajor:0 DeviceMinor:23 Capacity:5242880 Type:vfs Inodes:492874 HasInodes:true}] DiskMap:map[8:0:{Name:sda Major:8 Minor:0 Size:81923145728 Scheduler:mq-deadline}] NetworkDevices:[{Name:enp7s0 MacAddress:86:00:00:b1:4a:26 Speed:-1 Mtu:1450} {Name:eth0 MacAddress:96:00:00:b1:4a:24 Speed:-1 Mtu:1500}] Topology:[{Id:0 Memory:4037627904 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:65536 Type:Instruction Level:1} {Size:524288 Type:Unified Level:2}] SocketID:0} {Id:1 Threads:[1] Caches:[{Size:32768 Type:Data Level:1} {Size:65536 Type:Instruction Level:1} {Size:524288 Type:Unified Level:2}] SocketID:0} {Id:2 Threads:[2] Caches:[{Size:32768 Type:Data Level:1} {Size:65536 Type:Instruction Level:1} {Size:524288 Type:Unified Level:2}] SocketID:0}] Caches:[{Size:8388608 Type:Unified Level:3}]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.339559     764 manager_no_libpfm.go:28] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available." component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.339829     764 manager.go:229] Version: {KernelVersion:4.19.0-16-amd64 ContainerOsVersion:Debian GNU/Linux 10 (buster) DockerVersion:Unknown DockerAPIVersion:Unknown CadvisorVersion: CadvisorRevision:}" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.340548     764 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.341153     764 container_manager_linux.go:274] container manager verified user specified cgroup-root exists: []" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.341183     764 container_manager_linux.go:279] Creating Container Manager object based on Node Config: {RuntimeCgroupsName:/system.slice/containerd.service SystemCgroupsName: KubeletCgroupsName:/system.slice/containerd.service ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/k0s/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName:system.slice SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.341577     764 topology_manager.go:120] [topologymanager] Creating topology manager with none policy per container scope" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.341648     764 container_manager_linux.go:310] [topologymanager] Initializing Topology Manager with none policy and container-level scope" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.341656     764 container_manager_linux.go:315] Creating device plugin manager: true" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.342734     764 remote_runtime.go:62] parsed scheme: \"\"" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.342760     764 remote_runtime.go:62] scheme \"\" not registered, fallback to default scheme" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.342797     764 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/k0s/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.342806     764 clientconn.go:948] ClientConn switching balancer to \"pick_first\"" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.342868     764 remote_image.go:50] parsed scheme: \"\"" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.342882     764 remote_image.go:50] scheme \"\" not registered, fallback to default scheme" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.342893     764 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/k0s/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.343116     764 clientconn.go:948] ClientConn switching balancer to \"pick_first\"" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.344386     764 kubelet.go:276] Watching apiserver" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.346441     764 kubelet.go:453] Kubelet client is not nil" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.352438     764 kuberuntime_manager.go:216] Container runtime containerd initialized, version: v1.4.4, apiVersion: v1alpha2" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="E0512 17:56:22.638063     764 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated." component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.639481     764 certificate_store.go:130] Loading cert/key pair from \"/var/lib/k0s/kubelet/pki/kubelet-server-current.pem\"." component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.642157     764 plugins.go:638] Loaded volume plugin \"kubernetes.io/empty-dir\"" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.642197     764 plugins.go:638] Loaded volume plugin \"kubernetes.io/git-repo\"" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.642211     764 plugins.go:638] Loaded volume plugin \"kubernetes.io/host-path\"" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.642229     764 plugins.go:638] Loaded volume plugin \"kubernetes.io/nfs\"" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.642250     764 plugins.go:638] Loaded volume plugin \"kubernetes.io/secret\"" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.642267     764 plugins.go:638] Loaded volume plugin \"kubernetes.io/iscsi\"" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.642280     764 plugins.go:638] Loaded volume plugin \"kubernetes.io/glusterfs\"" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.642297     764 plugins.go:638] Loaded volume plugin \"kubernetes.io/rbd\"" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.642314     764 plugins.go:638] Loaded volume plugin \"kubernetes.io/quobyte\"" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.642327     764 plugins.go:638] Loaded volume plugin \"kubernetes.io/cephfs\"" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.642339     764 plugins.go:638] Loaded volume plugin \"kubernetes.io/downward-api\"" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.642392     764 plugins.go:638] Loaded volume plugin \"kubernetes.io/fc\"" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.642402     764 plugins.go:638] Loaded volume plugin \"kubernetes.io/flocker\"" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.642415     764 plugins.go:638] Loaded volume plugin \"kubernetes.io/configmap\"" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.642425     764 plugins.go:638] Loaded volume plugin \"kubernetes.io/projected\"" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.642469     764 plugins.go:638] Loaded volume plugin \"kubernetes.io/portworx-volume\"" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.642486     764 plugins.go:638] Loaded volume plugin \"kubernetes.io/scaleio\"" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.642497     764 plugins.go:638] Loaded volume plugin \"kubernetes.io/local-volume\"" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.642521     764 plugins.go:638] Loaded volume plugin \"kubernetes.io/storageos\"" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.642930     764 plugins.go:638] Loaded volume plugin \"kubernetes.io/csi\"" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.643381     764 server.go:1176] Started kubelet" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.644159     764 server.go:148] Starting to listen on 0.0.0.0:10250" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.645682     764 server.go:410] Adding debug handlers to kubelet server." component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="E0512 17:56:22.647072     764 cri_stats_provider.go:376] Failed to get the info of the filesystem with mountpoint \"/var/lib/k0s/containerd/io.containerd.snapshotter.v1.overlayfs\": unable to find data in memory cache." component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="E0512 17:56:22.647098     764 kubelet.go:1296] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.648202     764 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.649761     764 volume_manager.go:271] Starting Kubelet Volume Manager" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.650343     764 desired_state_of_world_populator.go:142] Desired state populator starts to run" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.651945     764 scope.go:111] [topologymanager] RemoveContainer - Container ID: 15e319ba9b0f50d6f5ebed71c428f4d93605f62d3135c70bcd7acfa6e7ad323e" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.652672     764 client.go:86] parsed scheme: \"unix\"" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.652993     764 client.go:86] scheme \"unix\" not registered, fallback to default scheme" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.653116     764 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/k0s/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.653248     764 clientconn.go:948] ClientConn switching balancer to \"pick_first\"" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.654741     764 factory.go:137] Registering containerd factory" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.654850     764 factory.go:55] Registering systemd factory" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.654985     764 factory.go:101] Registering Raw factory" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.667457     764 scope.go:111] [topologymanager] RemoveContainer - Container ID: 88a689880e9390c195823319eb2526c81e15ad3eb97d9251112f392f0e28e1f3" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.673344     764 scope.go:111] [topologymanager] RemoveContainer - Container ID: 6a3d83b6fe6a3b10a2e76c921b55ee82d88f4bc06f9a49c52e6794a7070241bf" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.678506     764 scope.go:111] [topologymanager] RemoveContainer - Container ID: 39ea15d9e1a7c2d675da057f23337c7d2b40b315f5b2db01ae07627e5c18cd51" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.685572     764 scope.go:111] [topologymanager] RemoveContainer - Container ID: c23253cdb8a2f2b82927235726c88d52fa9f51234efbc59da19a19cceff37d39" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.703853     764 cpu_manager.go:193] [cpumanager] starting with none policy" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.703910     764 cpu_manager.go:194] [cpumanager] reconciling every 10s" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.703996     764 state_mem.go:36] [cpumanager] initializing new in-memory state store" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.707350     764 state_mem.go:88] [cpumanager] updated default cpuset: \"\"" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.707471     764 state_mem.go:96] [cpumanager] updated cpuset assignments: \"map[]\"" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.707492     764 policy_none.go:43] [cpumanager] none policy: Start" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="W0512 17:56:22.715687     764 manager.go:594] Failed to retrieve checkpoint for \"kubelet_internal_checkpoint\": checkpoint is not found" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.716774     764 plugin_manager.go:114] Starting Kubelet Plugin Manager" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.750074     764 kuberuntime_manager.go:1006] updating runtime config through cri with podcidr 10.244.0.0/24" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.752013     764 kubelet_network.go:77] Setting Pod CIDR:  -> 10.244.0.0/24" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.752330     764 kubelet_node_status.go:71] Attempting to register node k0s-multinode-slave1" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.765837     764 kubelet_node_status.go:109] Node k0s-multinode-slave1 was previously registered" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.765955     764 kubelet_node_status.go:74] Successfully registered node k0s-multinode-slave1" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.770892     764 setters.go:577] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2021-05-12 17:56:22.770853951 +0300 EEST m=+5.642878756 LastTransitionTime:2021-05-12 17:56:22.770853951 +0300 EEST m=+5.642878756 Reason:KubeletNotReady Message:PLEG is not healthy: pleg has yet to be successful}" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.814450     764 kubelet_network_linux.go:56] Initialized IPv4 iptables rules." component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.814521     764 status_manager.go:158] Starting to sync pod status with apiserver" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.814547     764 kubelet.go:1833] Starting kubelet main sync loop." component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="E0512 17:56:22.814601     764 kubelet.go:1857] skipping pod synchronization - PLEG is not healthy: pleg has yet to be successful" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.914990     764 topology_manager.go:187] [topologymanager] Topology Admit Handler" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.915745     764 topology_manager.go:187] [topologymanager] Topology Admit Handler" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.917006     764 topology_manager.go:187] [topologymanager] Topology Admit Handler" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.917168     764 topology_manager.go:187] [topologymanager] Topology Admit Handler" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="I0512 17:56:22.917291     764 topology_manager.go:187] [topologymanager] Topology Admit Handler" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="W0512 17:56:22.917470     764 pod_container_deletor.go:79] Container \"5d17a93f645685225f82320b8983a6098053a31600e04a634dc246480acd814c\" not found in pod's containers" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="W0512 17:56:22.917628     764 pod_container_deletor.go:79] Container \"b60ec0f44bdf1100d940c99b4c5134743965097322991d4b23883a8662a7544f\" not found in pod's containers" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="W0512 17:56:22.917696     764 pod_container_deletor.go:79] Container \"8963828f237b4a2199cf5d27707f294b9705e33a9cb1a26ca256f871eb99d859\" not found in pod's containers" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="W0512 17:56:22.917769     764 pod_container_deletor.go:79] Container \"9f97fc5f9ba91379b2cac03dcdfafc3b5758deef4b3fdd6609825feaf6aa0cf4\" not found in pod's containers" component=kubelet
May 12 17:56:22 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:22" level=info msg="W0512 17:56:22.917991     764 pod_container_deletor.go:79] Container \"4e343648845904cb8bc6a1b8c483be02a6f58a34250653b7c0ca9da03972a938\" not found in pod's containers" component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.051829     764 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume \"konnectivity-agent-token-r9p7p\" (UniqueName: \"kubernetes.io/secret/b4cf811e-a4ef-4648-990e-82a5b74d21e0-konnectivity-agent-token-r9p7p\") pod \"konnectivity-agent-96htm\" (UID: \"b4cf811e-a4ef-4648-990e-82a5b74d21e0\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.051922     764 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b4c09114-56c8-4cb5-a2e1-beda6c80f88f-tmp-dir\") pod \"metrics-server-6fbcd86f7b-7pss8\" (UID: \"b4c09114-56c8-4cb5-a2e1-beda6c80f88f\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.051958     764 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-token-qsmwr\" (UniqueName: \"kubernetes.io/secret/b4c09114-56c8-4cb5-a2e1-beda6c80f88f-metrics-server-token-qsmwr\") pod \"metrics-server-6fbcd86f7b-7pss8\" (UID: \"b4c09114-56c8-4cb5-a2e1-beda6c80f88f\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.051992     764 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume \"konnectivity-agent-token\" (UniqueName: \"kubernetes.io/projected/b4cf811e-a4ef-4648-990e-82a5b74d21e0-konnectivity-agent-token\") pod \"konnectivity-agent-96htm\" (UID: \"b4cf811e-a4ef-4648-990e-82a5b74d21e0\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.052024     764 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin\" (UniqueName: \"kubernetes.io/host-path/a3ed1fdc-c5e2-4402-b6dd-f2b724f648c7-cni-bin\") pod \"kube-router-8bv27\" (UID: \"a3ed1fdc-c5e2-4402-b6dd-f2b724f648c7\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.052055     764 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fdf531d5-d559-431d-ab36-cd31d61d7664-lib-modules\") pod \"kube-proxy-xlnld\" (UID: \"fdf531d5-d559-431d-ab36-cd31d61d7664\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.052082     764 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy-token-r5wss\" (UniqueName: \"kubernetes.io/secret/fdf531d5-d559-431d-ab36-cd31d61d7664-kube-proxy-token-r5wss\") pod \"kube-proxy-xlnld\" (UID: \"fdf531d5-d559-431d-ab36-cd31d61d7664\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.052109     764 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ac7716e-eb05-4342-befd-fa141c20d376-config-volume\") pod \"coredns-5c98d7d4d8-fftng\" (UID: \"6ac7716e-eb05-4342-befd-fa141c20d376\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.052137     764 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume \"coredns-token-v6bfh\" (UniqueName: \"kubernetes.io/secret/6ac7716e-eb05-4342-befd-fa141c20d376-coredns-token-v6bfh\") pod \"coredns-5c98d7d4d8-fftng\" (UID: \"6ac7716e-eb05-4342-befd-fa141c20d376\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.052164     764 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a3ed1fdc-c5e2-4402-b6dd-f2b724f648c7-cni-conf-dir\") pod \"kube-router-8bv27\" (UID: \"a3ed1fdc-c5e2-4402-b6dd-f2b724f648c7\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.052192     764 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-router-token-k5g7k\" (UniqueName: \"kubernetes.io/secret/a3ed1fdc-c5e2-4402-b6dd-f2b724f648c7-kube-router-token-k5g7k\") pod \"kube-router-8bv27\" (UID: \"a3ed1fdc-c5e2-4402-b6dd-f2b724f648c7\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.052241     764 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fdf531d5-d559-431d-ab36-cd31d61d7664-kube-proxy\") pod \"kube-proxy-xlnld\" (UID: \"fdf531d5-d559-431d-ab36-cd31d61d7664\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.052268     764 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3ed1fdc-c5e2-4402-b6dd-f2b724f648c7-lib-modules\") pod \"kube-router-8bv27\" (UID: \"a3ed1fdc-c5e2-4402-b6dd-f2b724f648c7\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.052294     764 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-router-cfg\" (UniqueName: \"kubernetes.io/configmap/a3ed1fdc-c5e2-4402-b6dd-f2b724f648c7-kube-router-cfg\") pod \"kube-router-8bv27\" (UID: \"a3ed1fdc-c5e2-4402-b6dd-f2b724f648c7\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.052319     764 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3ed1fdc-c5e2-4402-b6dd-f2b724f648c7-xtables-lock\") pod \"kube-router-8bv27\" (UID: \"a3ed1fdc-c5e2-4402-b6dd-f2b724f648c7\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.052345     764 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fdf531d5-d559-431d-ab36-cd31d61d7664-xtables-lock\") pod \"kube-proxy-xlnld\" (UID: \"fdf531d5-d559-431d-ab36-cd31d61d7664\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.052358     764 reconciler.go:157] Reconciler: start to sync state" component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.157896     764 reconciler.go:269] operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fdf531d5-d559-431d-ab36-cd31d61d7664-lib-modules\") pod \"kube-proxy-xlnld\" (UID: \"fdf531d5-d559-431d-ab36-cd31d61d7664\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.157945     764 reconciler.go:269] operationExecutor.MountVolume started for volume \"kube-proxy-token-r5wss\" (UniqueName: \"kubernetes.io/secret/fdf531d5-d559-431d-ab36-cd31d61d7664-kube-proxy-token-r5wss\") pod \"kube-proxy-xlnld\" (UID: \"fdf531d5-d559-431d-ab36-cd31d61d7664\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.158014     764 reconciler.go:269] operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ac7716e-eb05-4342-befd-fa141c20d376-config-volume\") pod \"coredns-5c98d7d4d8-fftng\" (UID: \"6ac7716e-eb05-4342-befd-fa141c20d376\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.158092     764 reconciler.go:269] operationExecutor.MountVolume started for volume \"coredns-token-v6bfh\" (UniqueName: \"kubernetes.io/secret/6ac7716e-eb05-4342-befd-fa141c20d376-coredns-token-v6bfh\") pod \"coredns-5c98d7d4d8-fftng\" (UID: \"6ac7716e-eb05-4342-befd-fa141c20d376\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.158291     764 operation_generator.go:672] MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fdf531d5-d559-431d-ab36-cd31d61d7664-lib-modules\") pod \"kube-proxy-xlnld\" (UID: \"fdf531d5-d559-431d-ab36-cd31d61d7664\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.158301     764 reconciler.go:269] operationExecutor.MountVolume started for volume \"cni-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a3ed1fdc-c5e2-4402-b6dd-f2b724f648c7-cni-conf-dir\") pod \"kube-router-8bv27\" (UID: \"a3ed1fdc-c5e2-4402-b6dd-f2b724f648c7\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.158520     764 reconciler.go:269] operationExecutor.MountVolume started for volume \"kube-router-token-k5g7k\" (UniqueName: \"kubernetes.io/secret/a3ed1fdc-c5e2-4402-b6dd-f2b724f648c7-kube-router-token-k5g7k\") pod \"kube-router-8bv27\" (UID: \"a3ed1fdc-c5e2-4402-b6dd-f2b724f648c7\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.158656     764 reconciler.go:269] operationExecutor.MountVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fdf531d5-d559-431d-ab36-cd31d61d7664-kube-proxy\") pod \"kube-proxy-xlnld\" (UID: \"fdf531d5-d559-431d-ab36-cd31d61d7664\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.158724     764 reconciler.go:269] operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3ed1fdc-c5e2-4402-b6dd-f2b724f648c7-lib-modules\") pod \"kube-router-8bv27\" (UID: \"a3ed1fdc-c5e2-4402-b6dd-f2b724f648c7\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.158812     764 reconciler.go:269] operationExecutor.MountVolume started for volume \"kube-router-cfg\" (UniqueName: \"kubernetes.io/configmap/a3ed1fdc-c5e2-4402-b6dd-f2b724f648c7-kube-router-cfg\") pod \"kube-router-8bv27\" (UID: \"a3ed1fdc-c5e2-4402-b6dd-f2b724f648c7\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.158941     764 reconciler.go:269] operationExecutor.MountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3ed1fdc-c5e2-4402-b6dd-f2b724f648c7-xtables-lock\") pod \"kube-router-8bv27\" (UID: \"a3ed1fdc-c5e2-4402-b6dd-f2b724f648c7\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.159050     764 reconciler.go:269] operationExecutor.MountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fdf531d5-d559-431d-ab36-cd31d61d7664-xtables-lock\") pod \"kube-proxy-xlnld\" (UID: \"fdf531d5-d559-431d-ab36-cd31d61d7664\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.159178     764 reconciler.go:269] operationExecutor.MountVolume started for volume \"konnectivity-agent-token-r9p7p\" (UniqueName: \"kubernetes.io/secret/b4cf811e-a4ef-4648-990e-82a5b74d21e0-konnectivity-agent-token-r9p7p\") pod \"konnectivity-agent-96htm\" (UID: \"b4cf811e-a4ef-4648-990e-82a5b74d21e0\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.159299     764 reconciler.go:269] operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b4c09114-56c8-4cb5-a2e1-beda6c80f88f-tmp-dir\") pod \"metrics-server-6fbcd86f7b-7pss8\" (UID: \"b4c09114-56c8-4cb5-a2e1-beda6c80f88f\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.159368     764 reconciler.go:269] operationExecutor.MountVolume started for volume \"metrics-server-token-qsmwr\" (UniqueName: \"kubernetes.io/secret/b4c09114-56c8-4cb5-a2e1-beda6c80f88f-metrics-server-token-qsmwr\") pod \"metrics-server-6fbcd86f7b-7pss8\" (UID: \"b4c09114-56c8-4cb5-a2e1-beda6c80f88f\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.159398     764 reconciler.go:269] operationExecutor.MountVolume started for volume \"konnectivity-agent-token\" (UniqueName: \"kubernetes.io/projected/b4cf811e-a4ef-4648-990e-82a5b74d21e0-konnectivity-agent-token\") pod \"konnectivity-agent-96htm\" (UID: \"b4cf811e-a4ef-4648-990e-82a5b74d21e0\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.159432     764 reconciler.go:269] operationExecutor.MountVolume started for volume \"cni-bin\" (UniqueName: \"kubernetes.io/host-path/a3ed1fdc-c5e2-4402-b6dd-f2b724f648c7-cni-bin\") pod \"kube-router-8bv27\" (UID: \"a3ed1fdc-c5e2-4402-b6dd-f2b724f648c7\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.159664     764 operation_generator.go:672] MountVolume.SetUp succeeded for volume \"cni-bin\" (UniqueName: \"kubernetes.io/host-path/a3ed1fdc-c5e2-4402-b6dd-f2b724f648c7-cni-bin\") pod \"kube-router-8bv27\" (UID: \"a3ed1fdc-c5e2-4402-b6dd-f2b724f648c7\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.159837     764 operation_generator.go:672] MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3ed1fdc-c5e2-4402-b6dd-f2b724f648c7-lib-modules\") pod \"kube-router-8bv27\" (UID: \"a3ed1fdc-c5e2-4402-b6dd-f2b724f648c7\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.160495     764 operation_generator.go:672] MountVolume.SetUp succeeded for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3ed1fdc-c5e2-4402-b6dd-f2b724f648c7-xtables-lock\") pod \"kube-router-8bv27\" (UID: \"a3ed1fdc-c5e2-4402-b6dd-f2b724f648c7\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.160930     764 operation_generator.go:672] MountVolume.SetUp succeeded for volume \"cni-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a3ed1fdc-c5e2-4402-b6dd-f2b724f648c7-cni-conf-dir\") pod \"kube-router-8bv27\" (UID: \"a3ed1fdc-c5e2-4402-b6dd-f2b724f648c7\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.162281     764 operation_generator.go:672] MountVolume.SetUp succeeded for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fdf531d5-d559-431d-ab36-cd31d61d7664-xtables-lock\") pod \"kube-proxy-xlnld\" (UID: \"fdf531d5-d559-431d-ab36-cd31d61d7664\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.163019     764 operation_generator.go:672] MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b4c09114-56c8-4cb5-a2e1-beda6c80f88f-tmp-dir\") pod \"metrics-server-6fbcd86f7b-7pss8\" (UID: \"b4c09114-56c8-4cb5-a2e1-beda6c80f88f\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.163375     764 operation_generator.go:672] MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ac7716e-eb05-4342-befd-fa141c20d376-config-volume\") pod \"coredns-5c98d7d4d8-fftng\" (UID: \"6ac7716e-eb05-4342-befd-fa141c20d376\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.163965     764 operation_generator.go:672] MountVolume.SetUp succeeded for volume \"kube-router-token-k5g7k\" (UniqueName: \"kubernetes.io/secret/a3ed1fdc-c5e2-4402-b6dd-f2b724f648c7-kube-router-token-k5g7k\") pod \"kube-router-8bv27\" (UID: \"a3ed1fdc-c5e2-4402-b6dd-f2b724f648c7\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.165047     764 operation_generator.go:672] MountVolume.SetUp succeeded for volume \"metrics-server-token-qsmwr\" (UniqueName: \"kubernetes.io/secret/b4c09114-56c8-4cb5-a2e1-beda6c80f88f-metrics-server-token-qsmwr\") pod \"metrics-server-6fbcd86f7b-7pss8\" (UID: \"b4c09114-56c8-4cb5-a2e1-beda6c80f88f\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.173822     764 operation_generator.go:672] MountVolume.SetUp succeeded for volume \"kube-router-cfg\" (UniqueName: \"kubernetes.io/configmap/a3ed1fdc-c5e2-4402-b6dd-f2b724f648c7-kube-router-cfg\") pod \"kube-router-8bv27\" (UID: \"a3ed1fdc-c5e2-4402-b6dd-f2b724f648c7\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.176397     764 operation_generator.go:672] MountVolume.SetUp succeeded for volume \"coredns-token-v6bfh\" (UniqueName: \"kubernetes.io/secret/6ac7716e-eb05-4342-befd-fa141c20d376-coredns-token-v6bfh\") pod \"coredns-5c98d7d4d8-fftng\" (UID: \"6ac7716e-eb05-4342-befd-fa141c20d376\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.222047     764 scope.go:111] [topologymanager] RemoveContainer - Container ID: c5390b890fd501edf355f036ceff79cd0dc36cfef3db156e765b1933673c635b" component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.231534     764 scope.go:111] [topologymanager] RemoveContainer - Container ID: f3f9fbaad40f8e449bddeb94d08140dbaf02f4a02b39d4c8fae46dd4ed3eb6f5" component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.559614     764 operation_generator.go:672] MountVolume.SetUp succeeded for volume \"kube-proxy-token-r5wss\" (UniqueName: \"kubernetes.io/secret/fdf531d5-d559-431d-ab36-cd31d61d7664-kube-proxy-token-r5wss\") pod \"kube-proxy-xlnld\" (UID: \"fdf531d5-d559-431d-ab36-cd31d61d7664\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.761585     764 operation_generator.go:672] MountVolume.SetUp succeeded for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fdf531d5-d559-431d-ab36-cd31d61d7664-kube-proxy\") pod \"kube-proxy-xlnld\" (UID: \"fdf531d5-d559-431d-ab36-cd31d61d7664\") " component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.945912     764 request.go:655] Throttling request took 1.026886727s, request: GET:https://65.21.106.166:6443/api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%3Dkonnectivity-agent-token-r9p7p&limit=500&resourceVersion=0" component=kubelet
May 12 17:56:23 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:23" level=info msg="I0512 17:56:23.960718     764 operation_generator.go:672] MountVolume.SetUp succeeded for volume \"konnectivity-agent-token-r9p7p\" (UniqueName: \"kubernetes.io/secret/b4cf811e-a4ef-4648-990e-82a5b74d21e0-konnectivity-agent-token-r9p7p\") pod \"konnectivity-agent-96htm\" (UID: \"b4cf811e-a4ef-4648-990e-82a5b74d21e0\") " component=kubelet
May 12 17:56:24 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:24" level=info msg="I0512 17:56:24.168435     764 operation_generator.go:672] MountVolume.SetUp succeeded for volume \"konnectivity-agent-token\" (UniqueName: \"kubernetes.io/projected/b4cf811e-a4ef-4648-990e-82a5b74d21e0-konnectivity-agent-token\") pod \"konnectivity-agent-96htm\" (UID: \"b4cf811e-a4ef-4648-990e-82a5b74d21e0\") " component=kubelet
May 12 17:56:24 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:24" level=info msg="I0512 17:56:24.804419     764 prober.go:117] Readiness probe for \"coredns-5c98d7d4d8-fftng_kube-system(6ac7716e-eb05-4342-befd-fa141c20d376):coredns\" failed (failure): HTTP probe failed with statuscode: 503" component=kubelet
May 12 17:56:25 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:25" level=info msg="I0512 17:56:25.582140     764 kubelet.go:1990] SyncLoop (container unhealthy): \"kube-router-8bv27_kube-system(a3ed1fdc-c5e2-4402-b6dd-f2b724f648c7)\"" component=kubelet
May 12 17:56:25 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:25" level=info msg="I0512 17:56:25.871112     764 scope.go:111] [topologymanager] RemoveContainer - Container ID: 3a8a0d6be0dd6a6fc576c2a004746bb9f73c89d6333bf9072d1710dd97eb1fd3" component=kubelet
May 12 17:56:26 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:26" level=info msg="I0512 17:56:26.805109     764 prober.go:117] Readiness probe for \"coredns-5c98d7d4d8-fftng_kube-system(6ac7716e-eb05-4342-befd-fa141c20d376):coredns\" failed (failure): HTTP probe failed with statuscode: 503" component=kubelet
May 12 17:56:28 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:28" level=info msg="I0512 17:56:28.804692     764 prober.go:117] Readiness probe for \"coredns-5c98d7d4d8-fftng_kube-system(6ac7716e-eb05-4342-befd-fa141c20d376):coredns\" failed (failure): HTTP probe failed with statuscode: 503" component=kubelet
May 12 17:56:30 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:30" level=info msg="I0512 17:56:30.804663     764 prober.go:117] Readiness probe for \"coredns-5c98d7d4d8-fftng_kube-system(6ac7716e-eb05-4342-befd-fa141c20d376):coredns\" failed (failure): HTTP probe failed with statuscode: 503" component=kubelet
May 12 17:56:32 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:32" level=info msg="I0512 17:56:32.804843     764 prober.go:117] Readiness probe for \"coredns-5c98d7d4d8-fftng_kube-system(6ac7716e-eb05-4342-befd-fa141c20d376):coredns\" failed (failure): HTTP probe failed with statuscode: 503" component=kubelet
May 12 17:56:33 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:33" level=info msg="I0512 17:56:33.745755     764 prober.go:117] Readiness probe for \"metrics-server-6fbcd86f7b-7pss8_kube-system(b4c09114-56c8-4cb5-a2e1-beda6c80f88f):metrics-server\" failed (failure): Get \"https://10.244.0.8:4443/healthz\": context deadline exceeded" component=kubelet
May 12 17:56:34 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:34" level=info msg="I0512 17:56:34.804661     764 prober.go:117] Readiness probe for \"coredns-5c98d7d4d8-fftng_kube-system(6ac7716e-eb05-4342-befd-fa141c20d376):coredns\" failed (failure): HTTP probe failed with statuscode: 503" component=kubelet
May 12 17:56:36 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:36" level=info msg="I0512 17:56:36.804751     764 prober.go:117] Readiness probe for \"coredns-5c98d7d4d8-fftng_kube-system(6ac7716e-eb05-4342-befd-fa141c20d376):coredns\" failed (failure): HTTP probe failed with statuscode: 503" component=kubelet
May 12 17:56:39 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:39" level=info msg="I0512 17:56:39.317953     764 prober.go:117] Readiness probe for \"coredns-5c98d7d4d8-fftng_kube-system(6ac7716e-eb05-4342-befd-fa141c20d376):coredns\" failed (failure): HTTP probe failed with statuscode: 503" component=kubelet
May 12 17:56:41 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:41" level=info msg="I0512 17:56:41.317716     764 prober.go:117] Readiness probe for \"coredns-5c98d7d4d8-fftng_kube-system(6ac7716e-eb05-4342-befd-fa141c20d376):coredns\" failed (failure): HTTP probe failed with statuscode: 503" component=kubelet
May 12 17:56:43 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:43" level=info msg="I0512 17:56:43.317605     764 prober.go:117] Readiness probe for \"coredns-5c98d7d4d8-fftng_kube-system(6ac7716e-eb05-4342-befd-fa141c20d376):coredns\" failed (failure): HTTP probe failed with statuscode: 503" component=kubelet
May 12 17:56:44 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:44" level=info msg="I0512 17:56:44.258925     764 prober.go:117] Readiness probe for \"metrics-server-6fbcd86f7b-7pss8_kube-system(b4c09114-56c8-4cb5-a2e1-beda6c80f88f):metrics-server\" failed (failure): Get \"https://10.244.0.8:4443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" component=kubelet
May 12 17:56:45 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:45" level=info msg="I0512 17:56:45.317862     764 prober.go:117] Readiness probe for \"coredns-5c98d7d4d8-fftng_kube-system(6ac7716e-eb05-4342-befd-fa141c20d376):coredns\" failed (failure): HTTP probe failed with statuscode: 503" component=kubelet
May 12 17:56:47 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:47" level=info msg="I0512 17:56:47.317859     764 prober.go:117] Readiness probe for \"coredns-5c98d7d4d8-fftng_kube-system(6ac7716e-eb05-4342-befd-fa141c20d376):coredns\" failed (failure): HTTP probe failed with statuscode: 503" component=kubelet
May 12 17:56:49 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:49" level=info msg="I0512 17:56:49.317843     764 prober.go:117] Readiness probe for \"coredns-5c98d7d4d8-fftng_kube-system(6ac7716e-eb05-4342-befd-fa141c20d376):coredns\" failed (failure): HTTP probe failed with statuscode: 503" component=kubelet
May 12 17:56:51 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:51" level=info msg="I0512 17:56:51.317730     764 prober.go:117] Readiness probe for \"coredns-5c98d7d4d8-fftng_kube-system(6ac7716e-eb05-4342-befd-fa141c20d376):coredns\" failed (failure): HTTP probe failed with statuscode: 503" component=kubelet
May 12 17:56:53 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:53" level=info msg="I0512 17:56:53.317983     764 prober.go:117] Readiness probe for \"coredns-5c98d7d4d8-fftng_kube-system(6ac7716e-eb05-4342-befd-fa141c20d376):coredns\" failed (failure): HTTP probe failed with statuscode: 503" component=kubelet
May 12 17:56:54 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:54" level=info msg="I0512 17:56:54.258945     764 prober.go:117] Readiness probe for \"metrics-server-6fbcd86f7b-7pss8_kube-system(b4c09114-56c8-4cb5-a2e1-beda6c80f88f):metrics-server\" failed (failure): Get \"https://10.244.0.8:4443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" component=kubelet
May 12 17:56:55 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:55" level=info msg="I0512 17:56:55.317646     764 prober.go:117] Readiness probe for \"coredns-5c98d7d4d8-fftng_kube-system(6ac7716e-eb05-4342-befd-fa141c20d376):coredns\" failed (failure): HTTP probe failed with statuscode: 503" component=kubelet
May 12 17:56:55 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:55" level=info msg="I0512 17:56:55.438007     764 scope.go:111] [topologymanager] RemoveContainer - Container ID: 7456dd052f8d44406f825a6587c58560efae676a5cdb5bf796e113aa0ca57dea" component=kubelet
May 12 17:56:55 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:55" level=info msg="I0512 17:56:55.438498     764 scope.go:111] [topologymanager] RemoveContainer - Container ID: 734507cd887b081c0a9f95c50771ec6c870333841b7dde3fe4d47a5998f2e1a2" component=kubelet
May 12 17:56:55 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:56:55" level=info msg="E0512 17:56:55.439064     764 pod_workers.go:191] Error syncing pod b4c09114-56c8-4cb5-a2e1-beda6c80f88f (\"metrics-server-6fbcd86f7b-7pss8_kube-system(b4c09114-56c8-4cb5-a2e1-beda6c80f88f)\"), skipping: failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 10s restarting failed container=metrics-server pod=metrics-server-6fbcd86f7b-7pss8_kube-system(b4c09114-56c8-4cb5-a2e1-beda6c80f88f)\"" component=kubelet
May 12 17:57:06 k0s-multinode-slave1 k0s[697]: time="2021-05-12 17:57:06" level=info msg="I0512 17:57:06.328260     764 scope.go:111] [topologymanager] RemoveContainer - Container ID: 734507cd887b081c0a9f95c50771ec6c870333841b7dde3fe4d47a5998f2e1a2" component=kubelet
ghost commented 3 years ago

@jnummelin Could this all be linked to bugs in containerd 1.4.4, or even to the fact that I use ext4 (the Hetzner default)? I'm including this in case it helps:

root@k0s-multinode-slave1:/home/julien# uname -a
Linux k0s-multinode-slave1 4.19.0-16-amd64 #1 SMP Debian 4.19.181-1 (2021-03-19) x86_64 GNU/Linux
root@k0s-multinode-slave1:/home/julien# cat /etc/issue
Debian GNU/Linux 10 \n \l
root@k0s-multinode-slave1:/home/julien# df -Bm
Filesystem     1M-blocks   Used Available Use% Mounted on
udev               1909M     0M     1909M   0% /dev
tmpfs               386M    40M      346M  11% /run
/dev/sda1         76744M 10046M    63532M  14% /
tmpfs              1926M     0M     1926M   0% /dev/shm
tmpfs                 5M     0M        5M   0% /run/lock
/dev/sda15          121M     1M      120M   1% /boot/efi
tmpfs              1926M     1M     1926M   1% /var/lib/k0s/kubelet/pods/a3ed1fdc-c5e2-4402-b6dd-f2b724f648c7/volumes/kubernetes.io~secret/kube-router-token-k5g7k
tmpfs              1926M     1M     1926M   1% /var/lib/k0s/kubelet/pods/b4c09114-56c8-4cb5-a2e1-beda6c80f88f/volumes/kubernetes.io~secret/metrics-server-token-qsmwr
tmpfs              1926M     1M     1926M   1% /var/lib/k0s/kubelet/pods/6ac7716e-eb05-4342-befd-fa141c20d376/volumes/kubernetes.io~secret/coredns-token-v6bfh
shm                  64M     0M       64M   0% /run/k0s/containerd/io.containerd.grpc.v1.cri/sandboxes/e4789d6fa11b74e678eabee613a765542e0037dad76ac7059183ff2af6bd74cc/shm
overlay           76744M 10046M    63532M  14% /run/k0s/containerd/io.containerd.runtime.v2.task/k8s.io/e4789d6fa11b74e678eabee613a765542e0037dad76ac7059183ff2af6bd74cc/rootfs
shm                  64M     0M       64M   0% /run/k0s/containerd/io.containerd.grpc.v1.cri/sandboxes/59784032e8276261f7a0a3164a9348c50ab643bd2aac9006841936c6aa9d0cfb/shm
shm                  64M     0M       64M   0% /run/k0s/containerd/io.containerd.grpc.v1.cri/sandboxes/60a7ebac72193c0bbcd362f785321702305d1f6d04b12a546415a40a23655a2b/shm
overlay           76744M 10046M    63532M  14% /run/k0s/containerd/io.containerd.runtime.v2.task/k8s.io/59784032e8276261f7a0a3164a9348c50ab643bd2aac9006841936c6aa9d0cfb/rootfs
overlay           76744M 10046M    63532M  14% /run/k0s/containerd/io.containerd.runtime.v2.task/k8s.io/60a7ebac72193c0bbcd362f785321702305d1f6d04b12a546415a40a23655a2b/rootfs
tmpfs              1926M     1M     1926M   1% /var/lib/k0s/kubelet/pods/fdf531d5-d559-431d-ab36-cd31d61d7664/volumes/kubernetes.io~secret/kube-proxy-token-r5wss
overlay           76744M 10046M    63532M  14% /run/k0s/containerd/io.containerd.runtime.v2.task/k8s.io/5ecf0dbc71b5ccf3578271a0c98c42682a74504a1a7bcf8cc8ace23fc5141fda/rootfs
shm                  64M     0M       64M   0% /run/k0s/containerd/io.containerd.grpc.v1.cri/sandboxes/e9a89ab0496fbdfea6e63a50e8416266d292144d3db8177428752105355bb427/shm
overlay           76744M 10046M    63532M  14% /run/k0s/containerd/io.containerd.runtime.v2.task/k8s.io/e9a89ab0496fbdfea6e63a50e8416266d292144d3db8177428752105355bb427/rootfs
tmpfs              1926M     1M     1926M   1% /var/lib/k0s/kubelet/pods/b4cf811e-a4ef-4648-990e-82a5b74d21e0/volumes/kubernetes.io~secret/konnectivity-agent-token-r9p7p
overlay           76744M 10046M    63532M  14% /run/k0s/containerd/io.containerd.runtime.v2.task/k8s.io/273446e4a7055705a5f18711b19b136dddad24e8c741c6e7d9028387b0a32a30/rootfs
tmpfs              1926M     1M     1926M   1% /var/lib/k0s/kubelet/pods/b4cf811e-a4ef-4648-990e-82a5b74d21e0/volumes/kubernetes.io~projected/konnectivity-agent-token
shm                  64M     0M       64M   0% /run/k0s/containerd/io.containerd.grpc.v1.cri/sandboxes/113158eb1869ab6609f639a6c1d30f4ce3cd14edb82b7bd0f825c2bb382b679d/shm
overlay           76744M 10046M    63532M  14% /run/k0s/containerd/io.containerd.runtime.v2.task/k8s.io/113158eb1869ab6609f639a6c1d30f4ce3cd14edb82b7bd0f825c2bb382b679d/rootfs
overlay           76744M 10046M    63532M  14% /run/k0s/containerd/io.containerd.runtime.v2.task/k8s.io/ef4939ddb3a17f0e0b68cb3f3845f8539d2727692ee380b8642d346ab15e024c/rootfs
overlay           76744M 10046M    63532M  14% /run/k0s/containerd/io.containerd.runtime.v2.task/k8s.io/d673a0be1f71bccf0b67d96b2504f6d9e5900f05df48cddcdf6a7006cac01df2/rootfs
overlay           76744M 10046M    63532M  14% /run/k0s/containerd/io.containerd.runtime.v2.task/k8s.io/399edc3b39b7a6f74c15a4304b594f435d08523063b3765b27b7af2c88e4e22b/rootfs
tmpfs               386M     0M      386M   0% /run/user/1000
root@k0s-multinode-slave1:/home/julien# lsblk -f
NAME    FSTYPE LABEL UUID                                 FSAVAIL FSUSE% MOUNTPOINT
sda
├─sda1  ext4         95869d0b-2f7c-4c81-95d0-526224a44fa6     62G    13% /
├─sda14
└─sda15 vfat         175E-2886                               120M     0% /boot/efi
sr0
root@k0s-multinode-slave1:/home/julien# cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sda3 during installation
UUID=95869d0b-2f7c-4c81-95d0-526224a44fa6 /               ext4    errors=remount-ro 0       1
# /boot/efi was on /dev/sda2 during installation
UUID=175E-2886  /boot/efi       vfat    umask=0077      0       1
/dev/sr0        /media/cdrom0   udf,iso9660 user,noauto     0       0
/swapfile none swap sw 0 0
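
The mountpoint named in the kubelet error can also be checked directly. A minimal sketch, assuming crictl is available on the node and pointed at the k0s containerd socket that appears in the logs above:

# Filesystem backing containerd's overlayfs snapshotter (path taken from the kubelet error)
df -h /var/lib/k0s/containerd/io.containerd.snapshotter.v1.overlayfs
# Ask the CRI directly what capacity it reports for the image filesystem
crictl --runtime-endpoint unix:///run/k0s/containerd.sock imagefsinfo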
jnummelin commented 3 years ago

I just tested this on Debian 9 on Hetzner. In my test, kubelet triggers this "invalid capacity 0 on image filesystem" on first boot, but it does go away once the event "expires" (by default, events expire in 1h).

Looking at your logs, I don't see kubelet complaining about anything related to this after the initial status sync (which is expected). So I really think the event should expire once kubelet has been up for more than 1h.
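
If you want to confirm that, you can watch the event itself age out. A minimal sketch, assuming the event is recorded under kubelet's InvalidDiskCapacity reason and the apiserver's default --event-ttl of 1h:

# List the warning event kubelet records for "invalid capacity 0 on image filesystem";
# once the apiserver's event TTL (1h by default) has passed, this should return nothing.
kubectl get events --all-namespaces --field-selector reason=InvalidDiskCapacity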

ghost commented 3 years ago

That's true. Sorry, I'm new to Kubernetes and I haven't left this running for more than an hour (due to retrying from snapshots). This can be closed then :) At least it's documented now. Thank you for your help!