windydayc · issue closed 2 years ago
kubelet's log:
[root@host130 openyurt]# journalctl -u kubelet
-- Logs begin at Mon 2022-06-06 23:10:17 CST, end at Tue 2022-06-07 03:37:43 CST. --
Jun 06 23:10:24 host130 systemd[1]: Starting kubelet: The Kubernetes Node Agent...
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: * Applying /usr/lib/sysctl.d/00-system.conf ...
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: net.bridge.bridge-nf-call-ip6tables = 0
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: net.bridge.bridge-nf-call-iptables = 0
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: net.bridge.bridge-nf-call-arptables = 0
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: * Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: kernel.yama.ptrace_scope = 0
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: * Applying /usr/lib/sysctl.d/50-default.conf ...
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: kernel.sysrq = 16
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: kernel.core_uses_pid = 1
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: net.ipv4.conf.default.rp_filter = 1
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: net.ipv4.conf.all.rp_filter = 1
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: net.ipv4.conf.default.accept_source_route = 0
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: net.ipv4.conf.all.accept_source_route = 0
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: net.ipv4.conf.default.promote_secondaries = 1
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: net.ipv4.conf.all.promote_secondaries = 1
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: fs.protected_hardlinks = 1
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: fs.protected_symlinks = 1
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: * Applying /etc/sysctl.d/99-sysctl.conf ...
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: * Applying /etc/sysctl.d/k8s.conf ...
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: net.bridge.bridge-nf-call-ip6tables = 1
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: net.bridge.bridge-nf-call-iptables = 1
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: net.ipv4.conf.all.rp_filter = 0
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: * Applying /etc/sysctl.conf ...
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: net.ipv4.ip_forward = 1
Jun 06 23:10:25 host130 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jun 06 23:10:37 host130 kubelet[745]: I0606 23:10:37.228550 745 server.go:411] Version: v1.19.8
Jun 06 23:10:37 host130 kubelet[745]: I0606 23:10:37.228996 745 server.go:831] Client rotation is on, will bootstrap in background
Jun 06 23:10:37 host130 kubelet[745]: I0606 23:10:37.278352 745 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jun 06 23:10:37 host130 kubelet[745]: I0606 23:10:37.332606 745 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.014843 745 server.go:640] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.026084 745 container_manager_linux.go:276] container manager verified user specified cgroup-root exists: []
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.026241 745 container_manager_linux.go:281] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.026417 745 topology_manager.go:126] [topologymanager] Creating topology manager with none policy
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.026429 745 container_manager_linux.go:311] [topologymanager] Initializing Topology Manager with none policy
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.026434 745 container_manager_linux.go:316] Creating device plugin manager: true
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.040668 745 client.go:77] Connecting to docker on unix:///var/run/docker.sock
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.040725 745 client.go:94] Start docker client with request timeout=2m0s
Jun 06 23:10:44 host130 kubelet[745]: W0606 23:10:44.053504 745 docker_service.go:570] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.053546 745 docker_service.go:242] Hairpin mode set to "hairpin-veth"
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.212073 745 docker_service.go:257] Docker cri networking managed by cni
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.247914 745 docker_service.go:264] Docker Info: &{ID:3JHM:UDEK:Q3UF:H3I3:QMHD:5YPP:3CRH:MNAT:5EHE:U75L:WEPR:2IKB Containers:191 ContainersRunning:1 ContainersPaused:0 ContainersStopped:190 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-06T23:10:44.213327077+08:00 LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:3.10.0-1127.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc000a680e0 NCPU:4 MemTotal:3953971200 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:host130 Labels:[] ExperimentalBuild:false ServerVersion:19.03.14-sealer ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ea765aba0d05254012b0b9e595e995c09186427f Expected:ea765aba0d05254012b0b9e595e995c09186427f} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[]}
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.248018 745 docker_service.go:277] Setting cgroupDriver to cgroupfs
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.259984 745 remote_runtime.go:59] parsed scheme: ""
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.260040 745 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.276586 745 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock <nil> 0 <nil>}] <nil> <nil>}
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.276640 745 clientconn.go:948] ClientConn switching balancer to "pick_first"
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.276745 745 remote_image.go:50] parsed scheme: ""
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.276755 745 remote_image.go:50] scheme "" not registered, fallback to default scheme
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.276771 745 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock <nil> 0 <nil>}] <nil> <nil>}
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.276776 745 clientconn.go:948] ClientConn switching balancer to "pick_first"
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.276816 745 kubelet.go:264] Adding pod path: /etc/kubernetes/manifests
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.276847 745 kubelet.go:276] Watching apiserver
Jun 06 23:10:44 host130 kubelet[745]: E0606 23:10:44.280516 745 reflector.go:127] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://apiserver.cluster.local:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dhost130&limit=500&resourceVersion=0": dial tcp 192.168.152.130:6443: connect: connection refused
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.313857 745 kubelet.go:453] Kubelet client is not nil
Jun 06 23:10:44 host130 kubelet[745]: E0606 23:10:44.314735 745 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://apiserver.cluster.local:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.152.130:6443: connect: connection refused
Jun 06 23:10:44 host130 kubelet[745]: E0606 23:10:44.315320 745 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://apiserver.cluster.local:6443/api/v1/nodes?fieldSelector=metadata.name%3Dhost130&limit=500&resourceVersion=0": dial tcp 192.168.152.130:6443: connect: connection refused
Jun 06 23:10:44 host130 kubelet[745]: E0606 23:10:44.634247 745 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
Jun 06 23:10:44 host130 kubelet[745]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.683170 745 kuberuntime_manager.go:214] Container runtime docker initialized, version: 19.03.14-sealer, apiVersion: 1.40.0
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.684882 745 server.go:1147] Started kubelet
Jun 06 23:10:44 host130 kubelet[745]: E0606 23:10:44.685131 745 kubelet.go:1243] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.686073 745 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.687148 745 volume_manager.go:265] Starting Kubelet Volume Manager
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.702007 745 server.go:152] Starting to listen on 0.0.0.0:10250
Jun 06 23:10:44 host130 kubelet[745]: E0606 23:10:44.703759 745 controller.go:136] failed to ensure node lease exists, will retry in 200ms, error: Get "https://apiserver.cluster.local:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/host130?timeout=10s": dial tcp 192.168.152.130:6443: connect: connection refused
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.704164 745 server.go:425] Adding debug handlers to kubelet server.
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.706096 745 desired_state_of_world_populator.go:139] Desired state populator starts to run
Jun 06 23:10:44 host130 kubelet[745]: E0606 23:10:44.706863 745 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://apiserver.cluster.local:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.152.130:6443: connect: connection refused
Jun 06 23:10:44 host130 kubelet[745]: E0606 23:10:44.713591 745 event.go:273] Unable to write event: 'Post "https://apiserver.cluster.local:6443/api/v1/namespaces/default/events": dial tcp 192.168.152.130:6443: connect: connection refused' (may retry after sleeping)
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.737007 745 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: a4f2dcfaf6c68ce395981475fe104e3e8d848f7c0b0ace3d12b26431cfbebfb5
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.814019 745 kubelet.go:449] kubelet nodes not sync
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.814045 745 kubelet.go:449] kubelet nodes not sync
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.880683 745 kubelet.go:449] kubelet nodes not sync
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.880741 745 kubelet.go:449] kubelet nodes not sync
Jun 06 23:10:44 host130 kubelet[745]: E0606 23:10:44.904607 745 controller.go:136] failed to ensure node lease exists, will retry in 400ms, error: Get "https://apiserver.cluster.local:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/host130?timeout=10s": dial tcp 192.168.152.130:6443: connect: connection refused
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.914966 745 status_manager.go:158] Starting to sync pod status with apiserver
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.915014 745 kubelet.go:1775] Starting kubelet main sync loop.
Jun 06 23:10:44 host130 kubelet[745]: E0606 23:10:44.915071 745 kubelet.go:1799] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
Jun 06 23:10:44 host130 kubelet[745]: E0606 23:10:44.921446 745 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.RuntimeClass: failed to list *v1beta1.RuntimeClass: Get "https://apiserver.cluster.local:6443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.152.130:6443: connect: connection refused
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.922398 745 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 08cf8b98c8c02ee35a4d1abeda667025266176d5d56cc886cf62ed60e2b72c3d
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.937568 745 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 929f677a755f4edb2132c7f9edb6e3e13f542c5d456dd9c3811abc3be5639ef2
Jun 06 23:10:45 host130 kubelet[745]: I0606 23:10:45.014175 745 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: dbe13e4a0689e61a634ee57f6ec90bb672345b837e0fa0c9011d7477abe44207
Jun 06 23:10:45 host130 kubelet[745]: E0606 23:10:45.015200 745 kubelet.go:1799] skipping pod synchronization - container runtime status check may not have completed yet
Jun 06 23:10:45 host130 kubelet[745]: I0606 23:10:45.053080 745 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 741bc5c08bfef3778bfadbec79ed135592118e8b773fc7558457aa97cab74f3c
Jun 06 23:10:45 host130 kubelet[745]: I0606 23:10:45.147762 745 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: b1cd5958f0e4e104b5ec17421e8d7fcb6dbcf821b8c7532d4b6eb34091c53cd0
Jun 06 23:10:45 host130 kubelet[745]: I0606 23:10:45.159402 745 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 5c86edf7ed8bbacf1f31927e74b6efb86faf2b7d0fbc56d60b353c96f6684022
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.167552 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "de71eb2eab425009323bfecc7ef8c0e8040fad8e9256fc4087770d7936b07458"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.169511 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "d52305935f2a1e43b604048490af3e606648c83589c91eca27eac398211058a3"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.171561 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "ea4a28f4ed6569551421896e02ca844f8dd3b7e77386e6098a80819cba464b6c"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.174069 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "9b6c4c1bcef08e819c5da4698376714146910d16f68a138afaf7fbb0764ffe15"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.176156 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "bf1d080d7b2ae13f8a420349c489401ed1e3fc75ace448a0696792679e059f91"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.178330 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "e6cbb6ccb8565bff17a9a178a16534398854006cac0f0a7b11c30474d9bbd292"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.180363 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "9e0d241d8c4a98c826892cd7c66d5f14311e8445782011f011209c4bb771b162"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.182308 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "0886b1dfeea4e8b03f3ea10560056f2a7626e95e707ef46218f3abc0dd35baab"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.184622 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "9d86371cd1ac7219608afa232ca2b23c99f9dd629a4bcef73f5181a614729a5b"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.186742 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "ff8721f658476e53c19815836c76ab3aeae798617d3cd98b1981c6a0016c891b"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.190103 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "7012b50d3400fa2a591528e0d8902821a2fc2b29c39b6a864603b774ef340805"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.192065 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "b63e8fc5c2b04e4c78dc62256c4f997afd1890944b4c130beb4710d510640677"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.193892 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "0eb746957cf4d046026ab7a761ab1344768c2add29fbed11b3d29bb9f9c9643b"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.195737 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "74d22a4d939b706d039eb73f74fc1f7e742b069d1965255b082ca0afcfd74e37"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.197415 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "bb00cfe5f951ad6af376cce7b3dd478564740b06d31868ad7eef57c2c7366d71"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.199065 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "2ca2f035d12ca2dc16f90652302d23a97afe424888e614e2cae75847d6af89a3"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.201087 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "5bbd852a97fb17701e22aed690a52b6e555b80de561f177ede53b43d4617c3f0"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.203245 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "bc45adfae2224ca18f1727436ac8991706594b58ab82cab2d11336461aaeaa50"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.205600 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "b3ef7b5e0b432818335ad3554a3524c5d55ce35ebc14fe539828386ad151b74a"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.209734 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "65816650c6598eec9a41b25d8d22642bb78ae3f8c3cda40e217d256f39f96508"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.212200 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "52f0bb76b9b9203f73ef7ba9622de8c29ccb962c6bc07060893eba27302c46e6"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.215003 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "f5df20b8cb08ac77fdd1e2859cd99176b5a8d8d12c3773120e2fccbcf5a4327e"
Jun 06 23:10:45 host130 kubelet[745]: E0606 23:10:45.215719 745 kubelet.go:1799] skipping pod synchronization - container runtime status check may not have completed yet
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.217029 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "32b1a69ed224e1d54933f3973ceded4edd2c4e3a8646d9948fbd9aaaee406e7d"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.219808 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "4da7b0c5f766179a4b2078eb746ccfebf562dea851eddb83e142f69e55c5ebd5"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.226337 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "f32487377aa4868e328abc5f7074a0eea527604e0f5660872545e7dcb8cde12e"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.227019 745 cni.go:333] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "fa489830fa15effd93111ca14c6784d3b9e655747e907b5b488e16aa7baefa07"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.228886 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "d1d90e727ca7e49c317c7a4982d9ab3a17ee6170697fca52c9c21de40d585ff3"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.231156 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "496b069b37cb34d4d7f656a7abd0a61187fce885d5a39be11465e0728560d50e"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.233238 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "569ffbc5e2a60a2c66dfda4aefa0f0ba9b1766c58d9c966165e00560d0e7fd5b"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.235511 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "a2cea9239bba954376df32a12e704932db30aab6aed62277e22b668a8cb23154"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.238485 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "a570a5b54d665471dcfc668a0893d5973c51258b4c70a8bc3d33e5838c22b9c9"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.240809 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "68c709693271e133c2be685d258e8bd61bb6b366003eb7eace9e3afeba8ad9ba"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.243069 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "9fac7d9682ea687ad8e7ed6cf8baf30c5515da52bd15314a6b2c8d201aa99f94"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.248752 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "d348e0c505dcbdba915a442a443878f2c66cb8f45ab8db323203374853ef9ec8"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.276150 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "0037e11013a6f01cc511b52a969ea7e339d6c9866c28a664ac9162f8a8c5ca19"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.278760 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "2610c6554b8059623ed08da15f217349ceca3c613546b3389af50c0df339725a"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.281391 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "74accf1e930b4e72eea90d2a33bbdbce673ad71d347b8bf74ac6a3aaa429f154"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.283693 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "4e5102f9ce9b5c05e47f9d903735dfeb25d0941e43c80d6e613bef234e294dbc"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.286174 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "cdae51a3886ac8889eccb231a1c186cefb2abd97430a11d738509298d4ec47ea"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.288881 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "e22547a1c210ea41522d4b03b5ed93ca9b9198fc724b77348732249503547017"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.291139 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "c153a1d0ee29bfab1622c99c923ef94f04fc453edb7fdadf535b055e03b4a0b2"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.293170 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "ad792b9facc4baa0f27fdf529953bc869f8861dd2831960219340f2122697b2c"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.295188 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "209e874dfa79c8e1af467ee10fee003d4a51757848bef93c9baa048216b7c92a"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.297039 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "7909eb9c2e814e2decb756c13e9d84d3fb1d41be50ddd8a18c973ca3f1db7646"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.298982 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "a37fbcdcf3bc23238a732e62010ddd8e8d5eab322469ebcba29959d856d2149e"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.300851 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "30a067a689eae85b6b1e731c576853a5e8d56a58ea95dc46d7c373598474a740"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.302555 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "bb5bd46766d47352a1c66005667595c8330104362363ded3a9cdd84df019b360"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.304742 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "9432f8e349f3ced4f7b86322132efff8c19dd7420c953bfa17bf4aa45983a4f7"
Jun 06 23:10:45 host130 kubelet[745]: E0606 23:10:45.305408 745 controller.go:136] failed to ensure node lease exists, will retry in 800ms, error: Get "https://apiserver.cluster.local:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/host130?timeout=10s": dial tcp 192.168.152.130:6443: connect: connection refused
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.307087 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "8c7e31a22f9285e76ac086a8eab831f51f8a8707a06b7bf567e1b6186630ecd0"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.309306 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "f1e66c84569ff5e33e9b2f6e7c70aaca73c02be4df0febbc0118877de4cb9b11"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.311114 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "8a155187c686661bf2989369c6604f478f21ce367f4757a31fb027bc33b09efb"
Jun 06 23:10:45 host130 kubelet[745]: E0606 23:10:45.376840 745 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://apiserver.cluster.local:6443/api/v1/nodes?fieldSelector=metadata.name%3Dhost130&limit=500&resourceVersion=0": dial tcp 192.168.152.130:6443: connect: connection refused
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.408476 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "yurt-app-manager-67f95668df-mwclt_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "37a662dbceffaa9cf9a37eca0af734849551355325b414a3424bbd0beef03128"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.411636 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "yurt-app-manager-67f95668df-mwclt_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "4b7229ef08fb91f009ad635a35722d6d922dfcb352d6b271091abfe432852423"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.414832 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "yurt-app-manager-67f95668df-mwclt_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "93ed62ef986e49f93d2f818dd31816e241a6ac7fe433a0c6bd3df06775436cbf"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.418147 745 cni.go:333] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "3ff72ceceaef6e46b4bf97423d9d4e0ea09b14a4ca3aa58914ae14d73b9eba78"
Jun 06 23:10:45 host130 kubelet[745]: E0606 23:10:45.432811 745 reflector.go:127] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://apiserver.cluster.local:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dhost130&limit=500&resourceVersion=0": dial tcp 192.168.152.130:6443: connect: connection refused
Jun 06 23:10:45 host130 kubelet[745]: E0606 23:10:45.444596 745 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://apiserver.cluster.local:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.152.130:6443: connect: connection refused
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.462544 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "yurt-app-manager-67f95668df-mwclt_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "dde6ee14ec6acd0f207e7cdcbbe9d33471d2cdb9af7168906fef1f7b1973037e"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.465039 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "yurt-app-manager-67f95668df-mwclt_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "25cfdbd900a45bfaa8521e71e78bf0b3c138c33b313c7bc95be29c5233e0ecfb"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.467732 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "yurt-app-manager-67f95668df-mwclt_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "19ff5db7a12840a1ff1be998b2b318db5f1461af779b33351d9dc5651ae03017"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.470049 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "yurt-app-manager-67f95668df-mwclt_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "6d58dd3851f358625cc355e06f93bdd2fae2132214c41ba10347eceb4f06685f"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.471055 745 cni.go:333] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "5328b49065c19f2e7bd9f7798ff5f576406ba38e311bd34a670d096df8140ca8"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.503820 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "yurt-app-manager-67f95668df-mwclt_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "e17074792f8477bc3d98fb5c636fe56b9fa831b3462171263c0142b63a642dce"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.505796 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "yurt-app-manager-67f95668df-mwclt_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "b5f5fa7679042cb38177e8c1fa948c82ce64eb9b247189c2a6d3d81a983f4909"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.507955 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "yurt-app-manager-67f95668df-mwclt_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "437fe711744a873e843db29c051128e1492f002bb08063a6048aa1203445f892"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.509724 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "yurt-app-manager-67f95668df-mwclt_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "5f9ba53ae1b22bec420293adc2e51ca898b7b3b0d8e2c6d678154e7756b75f1b"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.511792 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "yurt-app-manager-67f95668df-mwclt_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "f5ea7e64761df3ea3b561907a45bfc5077f441ecddb47ca0b2a5213341d89fdc"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.512433 745 cni.go:333] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "5401b78308c9fc69abd67e6850f97e95c6ab437a2307f58b52caf0d1852a0118"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.545526 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "yurt-app-manager-67f95668df-mwclt_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "7379750a00eccdecc7102f23107e224ff29c17f97e2fc0e64963afb10ff06e85"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.549530 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "yurt-app-manager-67f95668df-mwclt_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "784f877b23d118636beb38474f7ab6944f7d00725003a667d1707226db0d5546"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.552792 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "yurt-app-manager-67f95668df-mwclt_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "c1f1643b49fe184d520d1ab550095d0b91586c7fca52a47a2f3c1b61c388be82"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.554049 745 cni.go:333] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "50cf69cfb0826c2298d6c29685ea6744f77f6875d6a1fa479a663d14c14a812c"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.587282 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "yurt-app-manager-67f95668df-mwclt_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "2bf19f1b7a5f9f21630cef2f57d6f5dc529f5503aa53a62b81614e0bc4f6b054"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.591036 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "yurt-app-manager-67f95668df-mwclt_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "26ac565f0e7779846cddea438bc433f1ef8e087a84cd3e28d46aebeda5d57867"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.593282 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "yurt-app-manager-67f95668df-mwclt_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "52f7ff3ad97c4c83c956bb5f34b8a30597875b0442bca375cdb7b211cc82307f"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.594955 745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "yurt-app-manager-67f95668df-mwclt_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "9ecab1c259aa750fbd236bb8cfdf4c0c68066a2815566f430902d706a95a7e7a"
// ...
@windydayc It looks like the yurthub component is waiting for its client certificate, so please check why client certificate generation failed.
[root@host130 openyurt]# kubectl get csr
NAME        AGE     SIGNERNAME                            REQUESTOR                                              CONDITION
csr-95xfz   38m     kubernetes.io/kubelet-serving         system:serviceaccount:kube-system:yurt-tunnel-server   Pending
csr-bvtr8   37m     kubernetes.io/kube-apiserver-client   system:bootstrap:40d5lb                                Pending
csr-clq8l   38m     kubernetes.io/kube-apiserver-client   system:serviceaccount:kube-system:yurt-tunnel-server   Pending
csr-drn7z   19m     kubernetes.io/kube-apiserver-client   system:bootstrap:40d5lb                                Pending
csr-f7hpc   8m15s   kubernetes.io/kubelet-serving         system:serviceaccount:kube-system:yurt-tunnel-server   Pending
csr-g6zls   23m     kubernetes.io/kube-apiserver-client   system:serviceaccount:kube-system:yurt-tunnel-server   Pending
csr-kqx8k   28m     kubernetes.io/kube-apiserver-client   system:bootstrap:40d5lb                                Pending
csr-n44sr   33m     kubernetes.io/kube-apiserver-client   system:bootstrap:40d5lb                                Pending
csr-qrclx   7m27s   kubernetes.io/kube-apiserver-client   system:bootstrap:40d5lb                                Pending
csr-r9fp8   23m     kubernetes.io/kubelet-serving         system:serviceaccount:kube-system:yurt-tunnel-server   Pending
csr-rv52j   8m14s   kubernetes.io/kube-apiserver-client   system:serviceaccount:kube-system:yurt-tunnel-server   Pending
csr-rz522   24m     kubernetes.io/kube-apiserver-client   system:bootstrap:40d5lb                                Pending
csr-svrb5   14m     kubernetes.io/kube-apiserver-client   system:bootstrap:40d5lb                                Pending
csr-wlgr2   39m     kubernetes.io/kube-apiserver-client   system:bootstrap:40d5lb                                Pending
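As a temporary workaround while the RBAC problem is being investigated, the stuck CSRs can be approved by hand. This is only a sketch: it assumes a kubeconfig whose user is allowed to approve CSRs, and the `awk`/`xargs` pipeline is just one way of picking out the Pending rows from `kubectl get csr`:

```shell
# Workaround sketch: approve every CSR whose CONDITION column is still "Pending".
# Requires a user with permission on certificatesigningrequests/approval and
# the relevant signers.
kubectl get csr --no-headers \
  | awk '$NF == "Pending" {print $1}' \
  | xargs -r kubectl certificate approve
```

Once the RBAC rules are fixed, yurt-controller-manager should approve new CSRs automatically and this step becomes unnecessary.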
yurt-controller-manager's log:
[root@host130 openyurt]# kubectl logs yurt-controller-manager-7c7bf76c77-44lbq -n kube-system
yurtcontroller-manager version: projectinfo.Info{GitVersion:"-8204290", GitCommit:"8204290", BuildDate:"2022-06-06T02:18:00Z", GoVersion:"go1.17.1", Compiler:"gc", Platform:"linux/amd64"}
I0607 02:19:00.349609 1 controllermanager.go:370] FLAG: --add_dir_header="false"
I0607 02:19:00.349679 1 controllermanager.go:370] FLAG: --alsologtostderr="false"
I0607 02:19:00.349683 1 controllermanager.go:370] FLAG: --contention-profiling="false"
I0607 02:19:00.349688 1 controllermanager.go:370] FLAG: --controller-start-interval="0s"
I0607 02:19:00.349692 1 controllermanager.go:370] FLAG: --controllers="[*]"
I0607 02:19:00.349700 1 controllermanager.go:370] FLAG: --enable-leader-migration="false"
I0607 02:19:00.349703 1 controllermanager.go:370] FLAG: --enable-taint-manager="true"
I0607 02:19:00.349706 1 controllermanager.go:370] FLAG: --feature-gates=""
I0607 02:19:00.349725 1 controllermanager.go:370] FLAG: --help="false"
I0607 02:19:00.349728 1 controllermanager.go:370] FLAG: --kube-api-burst="100"
I0607 02:19:00.349960 1 controllermanager.go:370] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
I0607 02:19:00.349965 1 controllermanager.go:370] FLAG: --kube-api-qps="50"
I0607 02:19:00.349970 1 controllermanager.go:370] FLAG: --kubeconfig=""
I0607 02:19:00.349972 1 controllermanager.go:370] FLAG: --large-cluster-size-threshold="50"
I0607 02:19:00.349975 1 controllermanager.go:370] FLAG: --leader-elect="true"
I0607 02:19:00.349978 1 controllermanager.go:370] FLAG: --leader-elect-lease-duration="15s"
I0607 02:19:00.349981 1 controllermanager.go:370] FLAG: --leader-elect-renew-deadline="10s"
I0607 02:19:00.349984 1 controllermanager.go:370] FLAG: --leader-elect-resource-lock="leases"
I0607 02:19:00.349986 1 controllermanager.go:370] FLAG: --leader-elect-resource-name=""
I0607 02:19:00.349989 1 controllermanager.go:370] FLAG: --leader-elect-resource-namespace=""
I0607 02:19:00.349991 1 controllermanager.go:370] FLAG: --leader-elect-retry-period="2s"
I0607 02:19:00.349994 1 controllermanager.go:370] FLAG: --leader-migration-config=""
I0607 02:19:00.350010 1 controllermanager.go:370] FLAG: --log-flush-frequency="5s"
I0607 02:19:00.350013 1 controllermanager.go:370] FLAG: --log_backtrace_at=":0"
I0607 02:19:00.350019 1 controllermanager.go:370] FLAG: --log_dir=""
I0607 02:19:00.350022 1 controllermanager.go:370] FLAG: --log_file=""
I0607 02:19:00.350025 1 controllermanager.go:370] FLAG: --log_file_max_size="1800"
I0607 02:19:00.350045 1 controllermanager.go:370] FLAG: --logtostderr="true"
I0607 02:19:00.350064 1 controllermanager.go:370] FLAG: --master=""
I0607 02:19:00.350068 1 controllermanager.go:370] FLAG: --min-resync-period="12h0m0s"
I0607 02:19:00.350085 1 controllermanager.go:370] FLAG: --node-eviction-rate="0.1"
I0607 02:19:00.350089 1 controllermanager.go:370] FLAG: --node-monitor-grace-period="40s"
I0607 02:19:00.350091 1 controllermanager.go:370] FLAG: --node-startup-grace-period="1m0s"
I0607 02:19:00.350095 1 controllermanager.go:370] FLAG: --one_output="false"
I0607 02:19:00.350098 1 controllermanager.go:370] FLAG: --pod-eviction-timeout="5m0s"
I0607 02:19:00.350101 1 controllermanager.go:370] FLAG: --profiling="true"
I0607 02:19:00.350104 1 controllermanager.go:370] FLAG: --secondary-node-eviction-rate="0.01"
I0607 02:19:00.350107 1 controllermanager.go:370] FLAG: --skip_headers="false"
I0607 02:19:00.350109 1 controllermanager.go:370] FLAG: --skip_log_headers="false"
I0607 02:19:00.350112 1 controllermanager.go:370] FLAG: --stderrthreshold="2"
I0607 02:19:00.350115 1 controllermanager.go:370] FLAG: --unhealthy-zone-threshold="0.55"
I0607 02:19:00.350117 1 controllermanager.go:370] FLAG: --v="2"
I0607 02:19:00.350120 1 controllermanager.go:370] FLAG: --version="false"
I0607 02:19:00.350123 1 controllermanager.go:370] FLAG: --vmodule=""
W0607 02:19:00.350149 1 client_config.go:615] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0607 02:19:00.353931 1 leaderelection.go:248] attempting to acquire leader lease kube-system/yurt-controller-manager...
I0607 02:19:00.374042 1 leaderelection.go:258] successfully acquired lease kube-system/yurt-controller-manager
I0607 02:19:00.380266 1 event.go:282] Event(v1.ObjectReference{Kind:"Lease", Namespace:"kube-system", Name:"yurt-controller-manager", UID:"e4da33fc-a4f8-427b-865c-52d10bafa21a", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"536", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' host130_d4198914-facc-4dd0-9c4f-546363b9d9f6 became leader
I0607 02:19:00.380704 1 controllermanager.go:346] Starting "nodelifecycle"
I0607 02:19:00.382397 1 node_lifecycle_controller.go:390] Sending events to api server.
I0607 02:19:00.475104 1 taint_manager.go:167] Sending events to api server.
I0607 02:19:00.475180 1 node_lifecycle_controller.go:518] Controller will reconcile labels.
I0607 02:19:00.475209 1 controllermanager.go:361] Started "nodelifecycle"
I0607 02:19:00.475231 1 controllermanager.go:346] Starting "yurtcsrapprover"
I0607 02:19:00.475409 1 node_lifecycle_controller.go:552] Starting node controller
I0607 02:19:00.475431 1 shared_informer.go:240] Waiting for caches to sync for taint
I0607 02:19:00.478656 1 csrapprover.go:120] v1.CertificateSigningRequest is supported.
I0607 02:19:00.478836 1 controllermanager.go:361] Started "yurtcsrapprover"
I0607 02:19:00.479524 1 csrapprover.go:180] starting the crsapprover
I0607 02:19:00.576273 1 shared_informer.go:247] Caches are synced for taint
I0607 02:19:00.576450 1 node_lifecycle_controller.go:783] Controller observed a new Node: "host130"
I0607 02:19:00.576471 1 controller_utils.go:178] Recording Registered Node host130 in Controller event message for node host130
I0607 02:19:00.576478 1 taint_manager.go:191] Starting NoExecuteTaintManager
I0607 02:19:00.576491 1 node_lifecycle_controller.go:1411] Initializing eviction metric for zone:
W0607 02:19:00.576617 1 node_lifecycle_controller.go:1026] Missing timestamp for Node host130. Assuming now as a timestamp.
I0607 02:19:00.576638 1 node_lifecycle_controller.go:882] Node host130 is NotReady as of 2022-06-07 02:19:00.576632689 +0000 UTC m=+0.311668979. Adding it to the Taint queue.
I0607 02:19:00.576728 1 node_lifecycle_controller.go:1177] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0607 02:19:00.576960 1 controller_utils.go:127] Update ready status of pods on node [host130]
I0607 02:19:00.577053 1 controller_utils.go:127] Update ready status of pods on node [host130]
I0607 02:19:00.577057 1 controller_utils.go:127] Update ready status of pods on node [host130]
I0607 02:19:00.577102 1 controller_utils.go:149] Updating ready status of pod yurt-controller-manager-7c7bf76c77-44lbq to false
I0607 02:19:00.577182 1 event.go:282] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"host130", UID:"c5837478-65db-4de3-80e5-b7b1a6de53cc", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node host130 event: Registered Node host130 in Controller
I0607 02:19:00.577196 1 controller_utils.go:149] Updating ready status of pod kube-apiserver-host130 to false
I0607 02:19:00.577215 1 controller_utils.go:127] Update ready status of pods on node [host130]
I0607 02:19:00.577272 1 controller_utils.go:149] Updating ready status of pod yurt-hub-host130 to false
I0607 02:19:00.577569 1 controller_utils.go:127] Update ready status of pods on node [host130]
I0607 02:19:00.577700 1 controller_utils.go:127] Update ready status of pods on node [host130]
I0607 02:19:00.577757 1 controller_utils.go:149] Updating ready status of pod etcd-host130 to false
I0607 02:19:00.609828 1 controller_utils.go:127] Update ready status of pods on node [host130]
I0607 02:19:00.610206 1 controller_utils.go:127] Update ready status of pods on node [host130]
I0607 02:19:05.578105 1 node_lifecycle_controller.go:882] Node host130 is NotReady as of 2022-06-07 02:19:05.578094749 +0000 UTC m=+5.313131038. Adding it to the Taint queue.
I0607 02:19:09.132964 1 controller_utils.go:127] Update ready status of pods on node [host130]
I0607 02:19:10.578479 1 node_lifecycle_controller.go:906] Node host130 is healthy again, removing all taints
I0607 02:19:10.578506 1 node_lifecycle_controller.go:1204] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0607 02:19:11.095300 1 csrapprover.go:163] non-approved and non-denied csr, enqueue: csr-wlgr2
E0607 02:19:11.122635 1 csrapprover.go:274] failed to approve yurt-csr(csr-wlgr2), certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:11.122947 1 csrapprover.go:206] sync csr csr-wlgr2 failed with : certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:11.137668 1 csrapprover.go:274] failed to approve yurt-csr(csr-wlgr2), certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:11.137703 1 csrapprover.go:206] sync csr csr-wlgr2 failed with : certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:11.161940 1 csrapprover.go:274] failed to approve yurt-csr(csr-wlgr2), certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:11.162002 1 csrapprover.go:206] sync csr csr-wlgr2 failed with : certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:11.207218 1 csrapprover.go:274] failed to approve yurt-csr(csr-wlgr2), certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:11.207244 1 csrapprover.go:206] sync csr csr-wlgr2 failed with : certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:11.291975 1 csrapprover.go:274] failed to approve yurt-csr(csr-wlgr2), certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:11.292019 1 csrapprover.go:206] sync csr csr-wlgr2 failed with : certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:11.456906 1 csrapprover.go:274] failed to approve yurt-csr(csr-wlgr2), certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:11.456931 1 csrapprover.go:206] sync csr csr-wlgr2 failed with : certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:11.782191 1 csrapprover.go:274] failed to approve yurt-csr(csr-wlgr2), certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:11.782239 1 csrapprover.go:206] sync csr csr-wlgr2 failed with : certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:12.427328 1 csrapprover.go:274] failed to approve yurt-csr(csr-wlgr2), certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:12.427386 1 csrapprover.go:206] sync csr csr-wlgr2 failed with : certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:13.711437 1 csrapprover.go:274] failed to approve yurt-csr(csr-wlgr2), certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:13.711477 1 csrapprover.go:206] sync csr csr-wlgr2 failed with : certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:16.275968 1 csrapprover.go:274] failed to approve yurt-csr(csr-wlgr2), certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:16.276021 1 csrapprover.go:206] sync csr csr-wlgr2 failed with : certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:21.400319 1 csrapprover.go:274] failed to approve yurt-csr(csr-wlgr2), certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:21.400391 1 csrapprover.go:206] sync csr csr-wlgr2 failed with : certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:31.645151 1 csrapprover.go:274] failed to approve yurt-csr(csr-wlgr2), certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:31.645175 1 csrapprover.go:206] sync csr csr-wlgr2 failed with : certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:52.132350 1 csrapprover.go:274] failed to approve yurt-csr(csr-wlgr2), certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:52.132390 1 csrapprover.go:206] sync csr csr-wlgr2 failed with : certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:20:33.096231 1 csrapprover.go:274] failed to approve yurt-csr(csr-wlgr2), certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:20:33.096271 1 csrapprover.go:206] sync csr csr-wlgr2 failed with : certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
I0607 02:20:46.578195 1 csrapprover.go:163] non-approved and non-denied csr, enqueue: csr-95xfz
I0607 02:20:46.580881 1 csrapprover.go:163] non-approved and non-denied csr, enqueue: csr-clq8l
E0607 02:20:46.592516 1 csrapprover.go:274] failed to approve yurt-csr(csr-clq8l), certificatesigningrequests.certificates.k8s.io "csr-clq8l" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:20:46.592577 1 csrapprover.go:206] sync csr csr-clq8l failed with : certificatesigningrequests.certificates.k8s.io "csr-clq8l" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:20:46.592640 1 csrapprover.go:274] failed to approve yurt-csr(csr-95xfz), certificatesigningrequests.certificates.k8s.io "csr-95xfz" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kubelet-serving"
E0607 02:20:46.592650 1 csrapprover.go:206] sync csr csr-95xfz failed with : certificatesigningrequests.certificates.k8s.io "csr-95xfz" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kubelet-serving"
E0607 02:20:46.618502 1 csrapprover.go:274] failed to approve yurt-csr(csr-clq8l), certificatesigningrequests.certificates.k8s.io "csr-clq8l" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:20:46.618531 1 csrapprover.go:206] sync csr csr-clq8l failed with : certificatesigningrequests.certificates.k8s.io "csr-clq8l" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:20:46.620667 1 csrapprover.go:274] failed to approve yurt-csr(csr-95xfz), certificatesigningrequests.certificates.k8s.io "csr-95xfz" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kubelet-serving"
E0607 02:20:46.620693 1 csrapprover.go:206] sync csr csr-95xfz failed with : certificatesigningrequests.certificates.k8s.io "csr-95xfz" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kubelet-serving"
//...
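The distinct signer names that the controller is being refused can be pulled out of the log with a quick filter. A small sketch over two sample lines from the output above (in practice you would pipe the yurt-controller-manager pod logs instead of a heredoc):

```shell
#!/bin/sh
# Extract the unique signerNames that yurt-controller-manager was
# forbidden to approve. The heredoc holds two sample lines copied
# from the log above.
cat <<'EOF' > /tmp/ycm.log
E0607 02:20:46.592640 1 csrapprover.go:274] failed to approve yurt-csr(csr-95xfz), certificatesigningrequests.certificates.k8s.io "csr-95xfz" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kubelet-serving"
E0607 02:20:33.096231 1 csrapprover.go:274] failed to approve yurt-csr(csr-wlgr2), certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
EOF
# Pull out the quoted signerName values and deduplicate them.
grep -o 'signerName "[^"]*"' /tmp/ycm.log \
  | sed 's/signerName "\(.*\)"/\1/' \
  | sort -u
# prints:
# kubernetes.io/kube-apiserver-client
# kubernetes.io/kubelet-serving
```

Both signers show up, which points at the RBAC rule discussed below rather than at one particular CSR.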
@windydayc As you can see, yurt-controller-manager has no permission to approve the CSR. The detailed logs are as follows:
failed to approve yurt-csr(csr-wlgr2), certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
I think the RBAC setting for yurt-controller-manager is not correct.
[root@host130 openyurt]# kubectl get clusterrole.rbac.authorization.k8s.io/yurt-controller-manager -oyaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{"rbac.authorization.kubernetes.io/autoupdate":"true"},"name":"yurt-controller-manager"},"rules":[{"apiGroups":[""],"resources":["nodes"],"verbs":["delete","get","list","patch","update","watch"]},{"apiGroups":[""],"resources":["nodes/status"],"verbs":["patch","update"]},{"apiGroups":[""],"resources":["pods/status"],"verbs":["update"]},{"apiGroups":[""],"resources":["pods"],"verbs":["delete","list","watch"]},{"apiGroups":["","events.k8s.io"],"resources":["events"],"verbs":["create","patch","update"]},{"apiGroups":["coordination.k8s.io"],"resources":["leases"],"verbs":["create","delete","get","patch","update","list","watch"]},{"apiGroups":["","apps"],"resources":["daemonsets"],"verbs":["list","watch"]},{"apiGroups":["certificates.k8s.io"],"resources":["certificatesigningrequests"],"verbs":["get","list","watch"]},{"apiGroups":["certificates.k8s.io"],"resources":["certificatesigningrequests/approval"],"verbs":["update"]},{"apiGroups":["certificates.k8s.io"],"resourceNames":["kubernetes.io/legacy-unknown"],"resources":["signers"],"verbs":["approve"]}]}
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: "2022-06-07T02:18:37Z"
  managedFields:
  - apiVersion: rbac.authorization.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
          f:rbac.authorization.kubernetes.io/autoupdate: {}
      f:rules: {}
    manager: kubectl-client-side-apply
    operation: Update
    time: "2022-06-07T02:18:37Z"
  name: yurt-controller-manager
  resourceVersion: "241"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/yurt-controller-manager
  uid: 779b42e5-ebe0-490e-b5ae-1ee7081de4f1
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
  - update
- apiGroups:
  - ""
  resources:
  - pods/status
  verbs:
  - update
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - delete
  - list
  - watch
- apiGroups:
  - ""
  - events.k8s.io
  resources:
  - events
  verbs:
  - create
  - patch
  - update
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - create
  - delete
  - get
  - patch
  - update
  - list
  - watch
- apiGroups:
  - ""
  - apps
  resources:
  - daemonsets
  verbs:
  - list
  - watch
- apiGroups:
  - certificates.k8s.io
  resources:
  - certificatesigningrequests
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - certificates.k8s.io
  resources:
  - certificatesigningrequests/approval
  verbs:
  - update
- apiGroups:
  - certificates.k8s.io
  resourceNames:
  - kubernetes.io/legacy-unknown
  resources:
  - signers
  verbs:
  - approve
[root@host130 openyurt]# kubectl get clusterrolebinding.rbac.authorization.k8s.io/yurt-controller-manager -oyaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding","metadata":{"annotations":{},"name":"yurt-controller-manager"},"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"ClusterRole","name":"yurt-controller-manager"},"subjects":[{"kind":"ServiceAccount","name":"yurt-controller-manager","namespace":"kube-system"}]}
  creationTimestamp: "2022-06-07T02:18:37Z"
  managedFields:
  - apiVersion: rbac.authorization.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
      f:roleRef:
        f:apiGroup: {}
        f:kind: {}
        f:name: {}
      f:subjects: {}
    manager: kubectl-client-side-apply
    operation: Update
    time: "2022-06-07T02:18:37Z"
  name: yurt-controller-manager
  resourceVersion: "242"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/yurt-controller-manager
  uid: b25d9b61-6249-4707-92f3-49ded09dcbab
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: yurt-controller-manager
subjects:
- kind: ServiceAccount
  name: yurt-controller-manager
  namespace: kube-system
- apiGroups:
  - certificates.k8s.io
  resourceNames:
  - kubernetes.io/legacy-unknown
  resources:
  - signers
  verbs:
  - approve
It should be
- apiGroups:
  - certificates.k8s.io
  resources:
  - signers
  resourceNames:
  - kubernetes.io/kube-apiserver-client
  - kubernetes.io/kubelet-serving
  verbs:
  - approve
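For context, approving a CSR in Kubernetes requires two separate permissions on the approver: `update` on the `certificatesigningrequests/approval` subresource, and `approve` on a `signers` resource whose `resourceNames` entry matches the CSR's `signerName`. A minimal sketch combining both rules (the full ClusterRole above should carry both, in addition to its other rules):

```yaml
# Both rules are needed together: the first allows updating the
# approval subresource at all, the second restricts which signers
# this subject may approve CSRs for.
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/approval"]
  verbs: ["update"]
- apiGroups: ["certificates.k8s.io"]
  resources: ["signers"]
  resourceNames:
  - kubernetes.io/kube-apiserver-client
  - kubernetes.io/kubelet-serving
  verbs: ["approve"]
```

With only `kubernetes.io/legacy-unknown` listed, the approval update itself is rejected for the two signers seen in the log.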
You can recreate the RBAC with what is in config/setup/yurt-controller-manager.yaml.
BTW, is it a bug when using yurtadm init?
I think so too, because I didn't do anything other than use yurtadm init.
@windydayc The reason may be that the yurthub version does not match the yurt-controller-manager version.
@rambohe-ch How can I judge whether the two versions match? I find that these two images are:
According to @Congrool, I solved this problem by re-applying the yaml:
[root@host130 openyurt]# kubectl apply -f config/setup/yurt-controller-manager.yaml
And I restarted the kubelet, then created a pod, but the KUBERNETES env in it is not the yurthub address 169.254.2.1:10268.
[root@host130 openyurt]# kubectl get pod -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 0 21m 100.64.0.8 host130 <none> <none>
[root@host130 openyurt]# kubectl exec -it nginx bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@nginx:/# env | grep KUBERNETES
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PORT=443
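A quick way to tell whether the pod was pointed at yurthub is to compare KUBERNETES_SERVICE_HOST with the yurthub address (169.254.2.1, taken from the yurthub address mentioned above). A sketch over the captured env output; in practice you would pipe `kubectl exec nginx -- env` instead of the heredoc:

```shell
#!/bin/sh
# Decide whether the injected service env points at yurthub or at the
# default kube-apiserver service IP. The heredoc reproduces a few of
# the env lines captured above.
cat <<'EOF' > /tmp/pod.env
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_PORT=tcp://10.96.0.1:443
EOF
host=$(sed -n 's/^KUBERNETES_SERVICE_HOST=//p' /tmp/pod.env)
if [ "$host" = "169.254.2.1" ]; then
    echo "pod is using yurthub"
else
    echo "pod is NOT using yurthub (host=$host)"
fi
# prints: pod is NOT using yurthub (host=10.96.0.1)
```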
@rambohe-ch Seems yurthub still has some problems?
@windydayc Please check the /etc/kubernetes/cache/kubelet/service/default/kubernetes file to see whether the default/kubernetes service is mutated or not.
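To inspect that cache entry, one can read the clusterIP out of the cached object. A sketch assuming the cached file holds the Service JSON (the sample below is a trimmed, assumed shape; on a real node read /etc/kubernetes/cache/kubelet/service/default/kubernetes instead of the heredoc):

```shell
#!/bin/sh
# Report the clusterIP stored in yurthub's cached default/kubernetes
# Service. The sample JSON is an assumed, trimmed shape of the cached
# object, used here only for illustration.
cat <<'EOF' > /tmp/cached-kubernetes
{"apiVersion":"v1","kind":"Service","metadata":{"name":"kubernetes","namespace":"default"},"spec":{"clusterIP":"10.96.0.1"}}
EOF
ip=$(grep -o '"clusterIP":"[^"]*"' /tmp/cached-kubernetes | cut -d'"' -f4)
echo "cached clusterIP: $ip"
```

If the cached clusterIP is still the apiserver service IP rather than a mutated value, pods will keep receiving the default 10.96.0.1 endpoint, matching the env output above.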
In yurthub's log: "Because it is a cloud node, cache manager is disabled."
@windydayc Would you be able to update the last status about this issue?
yurtadm init did not reset the kubelet, thus causing the above problems. I will improve the yurtadm command later.
What happened: I used yurtadm init to install an OpenYurt cluster as this document described, and it ran successfully.
node info:
But the yurthub kept restarting (this machine's hostname is "host130"):
yurt-hub-host130's log:
yurt-hub.yaml:
kubeconfig:
What you expected to happen: yurthub runs successfully instead of being restarted all the time.
Environment:
- OS (e.g: cat /etc/os-release): CentOS7
- Kernel (e.g. uname -a): Linux host130 3.10.0-1127.el7.x86_64 #1 SMP Tue Mar 31 23:36:51 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
- others: Before I used yurtadm init to install, I cleaned up the environment following [this article](FAQ | sealer) and deleted the /var/lib/kubelet, /var/lib/yurthub, /var/lib/yurttunnel-server directories.

/kind bug