kubernetes / kubelet


Kubelet E1228 RunPodSandbox from runtime service failed #22

Closed · enricovittorini closed this issue 3 years ago

enricovittorini commented 3 years ago

OS: CentOS 7.4, Kubernetes version: v1.20.1, CRI-O: 1.20.0

Cluster initialized with kubeadm init --config=config.yml.

config.yml

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: "10.224.0.0/24"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd

NAME     STATUS   ROLES                  AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
node01   Ready    control-plane,master   9h    v1.20.1   172.16.0.1    <none>        CentOS Linux 7 (Core)   3.10.0-1160.11.1.el7.x86_64   cri-o://1.20.0
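
The KubeletConfiguration above sets cgroupDriver: systemd, which assumes CRI-O is running with the same cgroup manager. A minimal way to confirm that on the node, assuming the default /etc/crio/crio.conf location (the actual CRI-O configuration was not included in this report, so the expected values appear only as comments):

# Verify CRI-O uses the same cgroup manager as the kubelet.
# A matching setup would contain (illustrative, not captured from this node):
#   cgroup_manager = "systemd"
#   conmon_cgroup  = "system.slice"
grep -E 'cgroup_manager|conmon_cgroup' /etc/crio/crio.conf
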
[root@node01 ~]# k get po --all-namespaces
NAMESPACE     NAME                                       READY   STATUS              RESTARTS   AGE
kube-system   calico-kube-controllers-744cfdf676-qdcrd   0/1     ContainerCreating   0          18m
kube-system   calico-node-2mnxp                          1/1     Running             0          18m
kube-system   coredns-74ff55c5b-bd2ts                    0/1     ContainerCreating   0          9h
kube-system   coredns-74ff55c5b-f2wlh                    0/1     ContainerCreating   0          9h
kube-system   etcd-node01                                1/1     Running             0          9h
kube-system   kube-apiserver-node01                      1/1     Running             0          9h
kube-system   kube-controller-manager-node01             1/1     Running             0          9h
kube-system   kube-proxy-jq7cx                           1/1     Running             0          9h
kube-system   kube-scheduler-node01                      1/1     Running             0          9h
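
The two coredns pods and calico-kube-controllers are stuck in ContainerCreating; the sandbox-creation failure behind that is also surfaced as pod events, so the same error can be read from the API as well as from the kubelet journal below. For example (standard kubectl commands; their output was not captured here):

# The RunPodSandbox error is repeated in the events of the affected pods.
kubectl -n kube-system describe pod calico-kube-controllers-744cfdf676-qdcrd
kubectl -n kube-system get events --sort-by=.lastTimestamp | grep -i sandbox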

Kubelet logs

Dec 28 07:24:34 node01 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Dec 28 07:24:34 node01 kubelet[6272]: I1228 07:24:34.314434    6272 server.go:416] Version: v1.20.1
Dec 28 07:24:34 node01 kubelet[6272]: I1228 07:24:34.314927    6272 server.go:837] Client rotation is on, will bootstrap in background
Dec 28 07:24:34 node01 kubelet[6272]: I1228 07:24:34.316990    6272 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 28 07:24:34 node01 kubelet[6272]: I1228 07:24:34.318071    6272 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt
Dec 28 07:24:39 node01 kubelet[6272]: I1228 07:24:39.328077    6272 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Dec 28 07:24:39 node01 kubelet[6272]: I1228 07:24:39.328294    6272 container_manager_linux.go:274] container manager verified user specified cgroup-root exists: []
Dec 28 07:24:39 node01 kubelet[6272]: I1228 07:24:39.328303    6272 container_manager_linux.go:279] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
Dec 28 07:24:39 node01 kubelet[6272]: I1228 07:24:39.328365    6272 topology_manager.go:120] [topologymanager] Creating topology manager with none policy per container scope
Dec 28 07:24:39 node01 kubelet[6272]: I1228 07:24:39.328371    6272 container_manager_linux.go:310] [topologymanager] Initializing Topology Manager with none policy and container-level scope
Dec 28 07:24:39 node01 kubelet[6272]: I1228 07:24:39.328374    6272 container_manager_linux.go:315] Creating device plugin manager: true
Dec 28 07:24:39 node01 kubelet[6272]: W1228 07:24:39.328640    6272 util_unix.go:103] Using "/var/run/crio/crio.sock" as endpoint is deprecated, please consider using full url format "unix:///var/run/crio/crio.sock".
Dec 28 07:24:39 node01 kubelet[6272]: I1228 07:24:39.328668    6272 remote_runtime.go:62] parsed scheme: ""
Dec 28 07:24:39 node01 kubelet[6272]: I1228 07:24:39.328673    6272 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
Dec 28 07:24:39 node01 kubelet[6272]: I1228 07:24:39.328689    6272 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/crio/crio.sock  <nil> 0 <nil>}] <nil> <nil>}
Dec 28 07:24:39 node01 kubelet[6272]: I1228 07:24:39.328698    6272 clientconn.go:948] ClientConn switching balancer to "pick_first"
Dec 28 07:24:39 node01 kubelet[6272]: W1228 07:24:39.328725    6272 util_unix.go:103] Using "/var/run/crio/crio.sock" as endpoint is deprecated, please consider using full url format "unix:///var/run/crio/crio.sock".
Dec 28 07:24:39 node01 kubelet[6272]: I1228 07:24:39.328732    6272 remote_image.go:50] parsed scheme: ""
Dec 28 07:24:39 node01 kubelet[6272]: I1228 07:24:39.328735    6272 remote_image.go:50] scheme "" not registered, fallback to default scheme
Dec 28 07:24:39 node01 kubelet[6272]: I1228 07:24:39.328740    6272 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/crio/crio.sock  <nil> 0 <nil>}] <nil> <nil>}
Dec 28 07:24:39 node01 kubelet[6272]: I1228 07:24:39.328743    6272 clientconn.go:948] ClientConn switching balancer to "pick_first"
Dec 28 07:24:39 node01 kubelet[6272]: I1228 07:24:39.328761    6272 kubelet.go:262] Adding pod path: /etc/kubernetes/manifests
Dec 28 07:24:39 node01 kubelet[6272]: I1228 07:24:39.328777    6272 kubelet.go:273] Watching apiserver
Dec 28 07:24:39 node01 kubelet[6272]: I1228 07:24:39.329353    6272 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
Dec 28 07:24:39 node01 kubelet[6272]: I1228 07:24:39.345198    6272 kuberuntime_manager.go:216] Container runtime cri-o initialized, version: 1.20.0, apiVersion: v1alpha1
Dec 28 07:24:45 node01 kubelet[6272]: E1228 07:24:45.638795    6272 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
Dec 28 07:24:45 node01 kubelet[6272]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.640141    6272 server.go:1176] Started kubelet
Dec 28 07:24:45 node01 kubelet[6272]: E1228 07:24:45.640254    6272 kubelet.go:1271] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.642684    6272 server.go:148] Starting to listen on 0.0.0.0:10250
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.643375    6272 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.643927    6272 server.go:409] Adding debug handlers to kubelet server.
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.645578    6272 volume_manager.go:271] Starting Kubelet Volume Manager
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.647212    6272 desired_state_of_world_populator.go:142] Desired state populator starts to run
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.657581    6272 kubelet_network_linux.go:56] Initialized IPv4 iptables rules.
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.657629    6272 status_manager.go:158] Starting to sync pod status with apiserver
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.657643    6272 kubelet.go:1799] Starting kubelet main sync loop.
Dec 28 07:24:45 node01 kubelet[6272]: E1228 07:24:45.657663    6272 kubelet.go:1823] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.746645    6272 kuberuntime_manager.go:1006] updating runtime config through cri with podcidr 10.224.0.0/24
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.755141    6272 kubelet_node_status.go:71] Attempting to register node node01
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.755355    6272 kubelet_network.go:77] Setting Pod CIDR:  -> 10.224.0.0/24
Dec 28 07:24:45 node01 kubelet[6272]: E1228 07:24:45.762098    6272 kubelet.go:1823] skipping pod synchronization - container runtime status check may not have completed yet
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.779842    6272 kubelet_node_status.go:109] Node node01 was previously registered
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.780613    6272 kubelet_node_status.go:74] Successfully registered node node01
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.787743    6272 cpu_manager.go:193] [cpumanager] starting with none policy
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.787753    6272 cpu_manager.go:194] [cpumanager] reconciling every 10s
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.787770    6272 state_mem.go:36] [cpumanager] initializing new in-memory state store
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.787871    6272 state_mem.go:88] [cpumanager] updated default cpuset: ""
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.787944    6272 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.787959    6272 policy_none.go:43] [cpumanager] none policy: Start
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.789531    6272 setters.go:577] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2020-12-28 07:24:45.7895105 +0100 CET m=+11.578152801 LastTransitionTime:2020-12-28 07:24:45.7895105 +0100 CET m=+11.578152801 Reason:KubeletNotReady Message:container runtime status check may not have completed yet}
Dec 28 07:24:45 node01 kubelet[6272]: W1228 07:24:45.794173    6272 manager.go:594] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.795919    6272 plugin_manager.go:114] Starting Kubelet Plugin Manager
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.962322    6272 topology_manager.go:187] [topologymanager] Topology Admit Handler
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.962426    6272 topology_manager.go:187] [topologymanager] Topology Admit Handler
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.962457    6272 topology_manager.go:187] [topologymanager] Topology Admit Handler
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.962477    6272 topology_manager.go:187] [topologymanager] Topology Admit Handler
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.962535    6272 topology_manager.go:187] [topologymanager] Topology Admit Handler
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.962574    6272 topology_manager.go:187] [topologymanager] Topology Admit Handler
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.962630    6272 topology_manager.go:187] [topologymanager] Topology Admit Handler
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.962672    6272 topology_manager.go:187] [topologymanager] Topology Admit Handler
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.962721    6272 topology_manager.go:187] [topologymanager] Topology Admit Handler
Dec 28 07:24:45 node01 kubelet[6272]: E1228 07:24:45.973287    6272 kubelet.go:1635] Failed creating a mirror pod for "kube-scheduler-node01_kube-system(9be8cb4627e7e5ad4c3f8acabd4b49b3)": pods "kube-scheduler-node01" already exists
Dec 28 07:24:45 node01 kubelet[6272]: E1228 07:24:45.976972    6272 kubelet.go:1635] Failed creating a mirror pod for "kube-apiserver-node01_kube-system(62167925d1ac26070e568a81a11be1b5)": pods "kube-apiserver-node01" already exists
Dec 28 07:24:45 node01 kubelet[6272]: E1228 07:24:45.977042    6272 kubelet.go:1635] Failed creating a mirror pod for "etcd-node01_kube-system(e25ea21632f580335cac4f07009e0473)": pods "etcd-node01" already exists
Dec 28 07:24:45 node01 kubelet[6272]: E1228 07:24:45.977154    6272 kubelet.go:1635] Failed creating a mirror pod for "kube-controller-manager-node01_kube-system(6a237e4472e8c04619dd54b3dc80f073)": pods "kube-controller-manager-node01" already exists
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.049900    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/62167925d1ac26070e568a81a11be1b5-ca-certs") pod "kube-apiserver-node01" (UID: "62167925d1ac26070e568a81a11be1b5")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150114    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/6a237e4472e8c04619dd54b3dc80f073-kubeconfig") pod "kube-controller-manager-node01" (UID: "6a237e4472e8c04619dd54b3dc80f073")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150156    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/332c0792-ca1a-4a27-bfd5-ed17b6b1e7bb-config-volume") pod "coredns-74ff55c5b-f2wlh" (UID: "332c0792-ca1a-4a27-bfd5-ed17b6b1e7bb")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150171    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "calico-node-token-78l9h" (UniqueName: "kubernetes.io/secret/15ae4814-32a6-4b85-82f4-6d8b18940736-calico-node-token-78l9h") pod "calico-node-2mnxp" (UID: "15ae4814-32a6-4b85-82f4-6d8b18940736")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150184    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/e25ea21632f580335cac4f07009e0473-etcd-certs") pod "etcd-node01" (UID: "e25ea21632f580335cac4f07009e0473")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150194    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/62167925d1ac26070e568a81a11be1b5-k8s-certs") pod "kube-apiserver-node01" (UID: "62167925d1ac26070e568a81a11be1b5")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150202    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/6a237e4472e8c04619dd54b3dc80f073-ca-certs") pod "kube-controller-manager-node01" (UID: "6a237e4472e8c04619dd54b3dc80f073")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150212    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-net-dir" (UniqueName: "kubernetes.io/host-path/15ae4814-32a6-4b85-82f4-6d8b18940736-cni-net-dir") pod "calico-node-2mnxp" (UID: "15ae4814-32a6-4b85-82f4-6d8b18940736")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150251    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "calico-kube-controllers-token-slf5w" (UniqueName: "kubernetes.io/secret/b504d6d5-9171-4cca-a6e7-cd8501842d7c-calico-kube-controllers-token-slf5w") pod "calico-kube-controllers-744cfdf676-qdcrd" (UID: "b504d6d5-9171-4cca-a6e7-cd8501842d7c")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150263    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-qs5ct" (UniqueName: "kubernetes.io/secret/b85e2da8-6c7e-41f1-918c-89f2f4954e72-kube-proxy-token-qs5ct") pod "kube-proxy-jq7cx" (UID: "b85e2da8-6c7e-41f1-918c-89f2f4954e72")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150271    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/e25ea21632f580335cac4f07009e0473-etcd-data") pod "etcd-node01" (UID: "e25ea21632f580335cac4f07009e0473")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150280    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "sysfs" (UniqueName: "kubernetes.io/host-path/15ae4814-32a6-4b85-82f4-6d8b18940736-sysfs") pod "calico-node-2mnxp" (UID: "15ae4814-32a6-4b85-82f4-6d8b18940736")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150288    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/6a237e4472e8c04619dd54b3dc80f073-etc-pki") pod "kube-controller-manager-node01" (UID: "6a237e4472e8c04619dd54b3dc80f073")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150296    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/6a237e4472e8c04619dd54b3dc80f073-flexvolume-dir") pod "kube-controller-manager-node01" (UID: "6a237e4472e8c04619dd54b3dc80f073")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150304    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/15ae4814-32a6-4b85-82f4-6d8b18940736-lib-modules") pod "calico-node-2mnxp" (UID: "15ae4814-32a6-4b85-82f4-6d8b18940736")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150313    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-bin-dir" (UniqueName: "kubernetes.io/host-path/15ae4814-32a6-4b85-82f4-6d8b18940736-cni-bin-dir") pod "calico-node-2mnxp" (UID: "15ae4814-32a6-4b85-82f4-6d8b18940736")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150322    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/9be8cb4627e7e5ad4c3f8acabd4b49b3-kubeconfig") pod "kube-scheduler-node01" (UID: "9be8cb4627e7e5ad4c3f8acabd4b49b3")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150332    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/b85e2da8-6c7e-41f1-918c-89f2f4954e72-xtables-lock") pod "kube-proxy-jq7cx" (UID: "b85e2da8-6c7e-41f1-918c-89f2f4954e72")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150341    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/b85e2da8-6c7e-41f1-918c-89f2f4954e72-lib-modules") pod "kube-proxy-jq7cx" (UID: "b85e2da8-6c7e-41f1-918c-89f2f4954e72")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150350    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/62167925d1ac26070e568a81a11be1b5-etc-pki") pod "kube-apiserver-node01" (UID: "62167925d1ac26070e568a81a11be1b5")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150359    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-8l5tz" (UniqueName: "kubernetes.io/secret/332c0792-ca1a-4a27-bfd5-ed17b6b1e7bb-coredns-token-8l5tz") pod "coredns-74ff55c5b-f2wlh" (UID: "332c0792-ca1a-4a27-bfd5-ed17b6b1e7bb")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150368    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/15ae4814-32a6-4b85-82f4-6d8b18940736-xtables-lock") pod "calico-node-2mnxp" (UID: "15ae4814-32a6-4b85-82f4-6d8b18940736")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150377    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "policysync" (UniqueName: "kubernetes.io/host-path/15ae4814-32a6-4b85-82f4-6d8b18940736-policysync") pod "calico-node-2mnxp" (UID: "15ae4814-32a6-4b85-82f4-6d8b18940736")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150387    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvol-driver-host" (UniqueName: "kubernetes.io/host-path/15ae4814-32a6-4b85-82f4-6d8b18940736-flexvol-driver-host") pod "calico-node-2mnxp" (UID: "15ae4814-32a6-4b85-82f4-6d8b18940736")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150397    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/b85e2da8-6c7e-41f1-918c-89f2f4954e72-kube-proxy") pod "kube-proxy-jq7cx" (UID: "b85e2da8-6c7e-41f1-918c-89f2f4954e72")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150406    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c59e0124-1fa5-4cc3-87af-93544cd6ec69-config-volume") pod "coredns-74ff55c5b-bd2ts" (UID: "c59e0124-1fa5-4cc3-87af-93544cd6ec69")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150414    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/6a237e4472e8c04619dd54b3dc80f073-k8s-certs") pod "kube-controller-manager-node01" (UID: "6a237e4472e8c04619dd54b3dc80f073")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150428    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-log-dir" (UniqueName: "kubernetes.io/host-path/15ae4814-32a6-4b85-82f4-6d8b18940736-cni-log-dir") pod "calico-node-2mnxp" (UID: "15ae4814-32a6-4b85-82f4-6d8b18940736")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150436    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-8l5tz" (UniqueName: "kubernetes.io/secret/c59e0124-1fa5-4cc3-87af-93544cd6ec69-coredns-token-8l5tz") pod "coredns-74ff55c5b-bd2ts" (UID: "c59e0124-1fa5-4cc3-87af-93544cd6ec69")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150445    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "var-lib-calico" (UniqueName: "kubernetes.io/host-path/15ae4814-32a6-4b85-82f4-6d8b18940736-var-lib-calico") pod "calico-node-2mnxp" (UID: "15ae4814-32a6-4b85-82f4-6d8b18940736")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150454    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "host-local-net-dir" (UniqueName: "kubernetes.io/host-path/15ae4814-32a6-4b85-82f4-6d8b18940736-host-local-net-dir") pod "calico-node-2mnxp" (UID: "15ae4814-32a6-4b85-82f4-6d8b18940736")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150466    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "var-run-calico" (UniqueName: "kubernetes.io/host-path/15ae4814-32a6-4b85-82f4-6d8b18940736-var-run-calico") pod "calico-node-2mnxp" (UID: "15ae4814-32a6-4b85-82f4-6d8b18940736")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150471    6272 reconciler.go:157] Reconciler: start to sync state
Dec 28 07:24:47 node01 kubelet[6272]: I1228 07:24:47.041746    6272 request.go:655] Throttling request took 1.0778311s, request: GET:https://172.16.0.1:6443/api/v1/namespaces/kube-system/pods/etcd-node01
Dec 28 07:24:47 node01 kubelet[6272]: E1228 07:24:47.206272    6272 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = container create failed: time="2020-12-28T07:24:47+01:00" level=error msg="container_linux.go:349: starting container process caused \"error adding seccomp rule for syscall socket: requested action matches default action of filter\""
Dec 28 07:24:47 node01 kubelet[6272]: container_linux.go:349: starting container process caused "error adding seccomp rule for syscall socket: requested action matches default action of filter"
Dec 28 07:24:47 node01 kubelet[6272]: E1228 07:24:47.206327    6272 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "calico-kube-controllers-744cfdf676-qdcrd_kube-system(b504d6d5-9171-4cca-a6e7-cd8501842d7c)" failed: rpc error: code = Unknown desc = container create failed: time="2020-12-28T07:24:47+01:00" level=error msg="container_linux.go:349: starting container process caused \"error adding seccomp rule for syscall socket: requested action matches default action of filter\""
Dec 28 07:24:47 node01 kubelet[6272]: container_linux.go:349: starting container process caused "error adding seccomp rule for syscall socket: requested action matches default action of filter"
Dec 28 07:24:47 node01 kubelet[6272]: E1228 07:24:47.206338    6272 kuberuntime_manager.go:755] createPodSandbox for pod "calico-kube-controllers-744cfdf676-qdcrd_kube-system(b504d6d5-9171-4cca-a6e7-cd8501842d7c)" failed: rpc error: code = Unknown desc = container create failed: time="2020-12-28T07:24:47+01:00" level=error msg="container_linux.go:349: starting container process caused \"error adding seccomp rule for syscall socket: requested action matches default action of filter\""
Dec 28 07:24:47 node01 kubelet[6272]: container_linux.go:349: starting container process caused "error adding seccomp rule for syscall socket: requested action matches default action of filter"
Dec 28 07:24:47 node01 kubelet[6272]: E1228 07:24:47.206366    6272 pod_workers.go:191] Error syncing pod b504d6d5-9171-4cca-a6e7-cd8501842d7c ("calico-kube-controllers-744cfdf676-qdcrd_kube-system(b504d6d5-9171-4cca-a6e7-cd8501842d7c)"), skipping: failed to "CreatePodSandbox" for "calico-kube-controllers-744cfdf676-qdcrd_kube-system(b504d6d5-9171-4cca-a6e7-cd8501842d7c)" with CreatePodSandboxError: "CreatePodSandbox for pod \"calico-kube-controllers-744cfdf676-qdcrd_kube-system(b504d6d5-9171-4cca-a6e7-cd8501842d7c)\" failed: rpc error: code = Unknown desc = container create failed: time=\"2020-12-28T07:24:47+01:00\" level=error msg=\"container_linux.go:349: starting container process caused \\\"error adding seccomp rule for syscall socket: requested action matches default action of filter\\\"\"\ncontainer_linux.go:349: starting container process caused \"error adding seccomp rule for syscall socket: requested action matches default action of filter\"\n"
Dec 28 07:24:47 node01 kubelet[6272]: E1228 07:24:47.251629    6272 configmap.go:200] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Dec 28 07:24:47 node01 kubelet[6272]: E1228 07:24:47.251687    6272 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/c59e0124-1fa5-4cc3-87af-93544cd6ec69-config-volume podName:c59e0124-1fa5-4cc3-87af-93544cd6ec69 nodeName:}" failed. No retries permitted until 2020-12-28 07:24:47.7516687 +0100 CET m=+13.540311001 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c59e0124-1fa5-4cc3-87af-93544cd6ec69-config-volume\") pod \"coredns-74ff55c5b-bd2ts\" (UID: \"c59e0124-1fa5-4cc3-87af-93544cd6ec69\") : failed to sync configmap cache: timed out waiting for the condition"
Dec 28 07:24:47 node01 kubelet[6272]: E1228 07:24:47.252821    6272 configmap.go:200] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Dec 28 07:24:47 node01 kubelet[6272]: E1228 07:24:47.252868    6272 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/332c0792-ca1a-4a27-bfd5-ed17b6b1e7bb-config-volume podName:332c0792-ca1a-4a27-bfd5-ed17b6b1e7bb nodeName:}" failed. No retries permitted until 2020-12-28 07:24:47.7528523 +0100 CET m=+13.541494601 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/332c0792-ca1a-4a27-bfd5-ed17b6b1e7bb-config-volume\") pod \"coredns-74ff55c5b-f2wlh\" (UID: \"332c0792-ca1a-4a27-bfd5-ed17b6b1e7bb\") : failed to sync configmap cache: timed out waiting for the condition"
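
The failing call is RunPodSandbox: runc aborts while installing the pod's seccomp filter because libseccomp rejects a rule whose action matches the filter's default action. On CentOS 7 this usually points at the host seccomp stack (the distro ships an old libseccomp) or at a custom seccomp profile handed to CRI-O, rather than at the kubelet itself. A first diagnostic pass, assuming the stock RPM packages and the default CRI-O config path, could be:

# Versions of the pieces that build and apply the seccomp profile.
rpm -q libseccomp runc cri-o
runc --version
crio --version

# Is CRI-O configured with a custom seccomp profile? An empty or commented-out
# value means the runtime's built-in default profile is used.
grep -n seccomp_profile /etc/crio/crio.conf

If libseccomp turns out to be the 2.3.x release that CentOS 7 ships, updating it (and/or the runc build linked against it) is the direction usually reported for this particular error, but that should be checked against what the commands above actually show.
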
fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

fejta-bot commented 3 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten

fejta-bot commented 3 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community. /close

k8s-ci-robot commented 3 years ago

@fejta-bot: Closing this issue.

In response to [this](https://github.com/kubernetes/kubelet/issues/22#issuecomment-849427710):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> Send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.