openyurtio / openyurt

OpenYurt - Extending your native Kubernetes to edge (project under CNCF)
https://openyurt.io
Apache License 2.0

[BUG] Cluster installed successfully using `yurtadm init`, but yurthub kept restarting #872

Closed · windydayc closed this 2 years ago

windydayc commented 2 years ago

What happened: I used `yurtadm init` to install an OpenYurt cluster as described in this document, and it ran successfully:

[root@host130 openyurt]# _output/local/bin/linux/amd64/yurtadm init --apiserver-advertise-address 192.168.152.130 --openyurt-version latest --passwd 123
I0607 02:43:04.861578    8656 init.go:188] Check and install sealer
I0607 02:43:05.015962    8656 init.go:198] Sealer v0.6.1 already exist, skip install.
I0607 02:43:05.015997    8656 init.go:236] generate Clusterfile for openyurt
I0607 02:43:05.016417    8656 init.go:228] init an openyurt cluster
2022-06-07 02:43:05 [INFO] [local.go:238] Start to create a new cluster
2022-06-07 02:49:35 [INFO] [kube_certs.go:234] APIserver altNames :  {map[apiserver.cluster.local:apiserver.cluster.local host130:host130 kubernetes:kubernetes kubernetes.default:kubernetes.default kubernetes.default.svc:kubernetes.default.svc kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local localhost:localhost] map[10.103.97.2:10.103.97.2 10.96.0.1:10.96.0.1 127.0.0.1:127.0.0.1 192.168.152.130:192.168.152.130]}
2022-06-07 02:49:35 [INFO] [kube_certs.go:254] Etcd altnames : {map[host130:host130 localhost:localhost] map[127.0.0.1:127.0.0.1 192.168.152.130:192.168.152.130 ::1:::1]}, commonName : host130
2022-06-07 02:51:03 [INFO] [kubeconfig.go:267] [kubeconfig] Writing "admin.conf" kubeconfig file

2022-06-07 02:51:03 [INFO] [kubeconfig.go:267] [kubeconfig] Writing "controller-manager.conf" kubeconfig file

2022-06-07 02:51:03 [INFO] [kubeconfig.go:267] [kubeconfig] Writing "scheduler.conf" kubeconfig file

2022-06-07 02:51:03 [INFO] [kubeconfig.go:267] [kubeconfig] Writing "kubelet.conf" kubeconfig file

2022-06-07 02:58:11 [INFO] [init.go:228] start to init master0...
2022-06-07 02:59:44 [INFO] [init.go:233] W0607 02:58:12.182775    9521 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubelet.config.k8s.io", Version:"v1beta1", Kind:"KubeletConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "shutdownGracePeriod"
W0607 02:58:12.364950    9521 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.8
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING FileExisting-socat]: socat not found in system path
        [WARNING Hostname]: hostname "host130" could not be reached
        [WARNING Hostname]: hostname "host130": lookup host130 on 192.168.152.2:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
W0607 02:58:23.970999    9521 kubeconfig.go:242] a kubeconfig file "/etc/kubernetes/controller-manager.conf" exists already but has an unexpected API Server URL: expected: https://192.168.152.130:6443, got: https://apiserver.cluster.local:6443
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
W0607 02:58:24.038237    9521 kubeconfig.go:242] a kubeconfig file "/etc/kubernetes/scheduler.conf" exists already but has an unexpected API Server URL: expected: https://192.168.152.130:6443, got: https://apiserver.cluster.local:6443
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 77.504407 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
72bea8ed6ab6e7c166a1a45520f5109937ffc056a6c4b7c8da959c45215ba9cd
[mark-control-plane] Marking the node host130 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node host130 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 4zcwso.b7df7slikommdbxp
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join apiserver.cluster.local:6443 --token 4zcwso.b7df7slikommdbxp \
    --discovery-token-ca-cert-hash sha256:b4ebf15be698c9b275fe066929bcbf45204de47137ae19550b30abdb214597ed \
    --control-plane --certificate-key 72bea8ed6ab6e7c166a1a45520f5109937ffc056a6c4b7c8da959c45215ba9cd

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join apiserver.cluster.local:6443 --token 4zcwso.b7df7slikommdbxp \
    --discovery-token-ca-cert-hash sha256:b4ebf15be698c9b275fe066929bcbf45204de47137ae19550b30abdb214597ed

2022-06-07 02:59:44 [INFO] [init.go:183] join command is: kubeadm join  apiserver.cluster.local:6443 --token 4zcwso.b7df7slikommdbxp \
    --discovery-token-ca-cert-hash sha256:b4ebf15be698c9b275fe066929bcbf45204de47137ae19550b30abdb214597ed \
    --control-plane --certificate-key 72bea8ed6ab6e7c166a1a45520f5109937ffc056a6c4b7c8da959c45215ba9cd

2022-06-07 03:03:52 [INFO] [local.go:248] Succeeded in creating a new cluster, enjoy it!

node info:

[root@host130 ~]# kubectl get node -owide
NAME      STATUS   ROLES    AGE   VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME
host130   Ready    master   20m   v1.19.8   192.168.152.130   <none>        CentOS Linux 7 (Core)   3.10.0-1127.el7.x86_64   docker://19.3.14

But yurthub kept restarting (this machine's hostname is "host130"):

[root@host130 openyurt]# kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS             RESTARTS   AGE
kube-system   coredns-b4bf78944-clwm6                    1/1     Running            0          21m
kube-system   coredns-b4bf78944-ct6tr                    1/1     Running            0          21m
kube-system   etcd-host130                               1/1     Running            0          25m
kube-system   kube-apiserver-host130                     1/1     Running            0          25m
kube-system   kube-controller-manager-host130            1/1     Running            0          25m
kube-system   kube-flannel-ds-8r4wr                      1/1     Running            0          24m
kube-system   kube-proxy-r2wtd                           1/1     Running            0          21m
kube-system   kube-scheduler-host130                     1/1     Running            0          25m
kube-system   yurt-app-manager-67f95668df-lggns          1/1     Running            0          24m
kube-system   yurt-controller-manager-7c7bf76c77-4pkqh   1/1     Running            0          24m
kube-system   yurt-hub-host130                           0/1     CrashLoopBackOff   4          22m
kube-system   yurt-tunnel-server-65bbc86566-7jdc5        1/1     Running            0          24m
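
For a pod stuck in CrashLoopBackOff like this one, the pod events and the previous (crashed) container's log can be pulled as well; e.g. (pod name taken from the list above):

kubectl describe pod yurt-hub-host130 -n kube-system
kubectl logs yurt-hub-host130 -n kube-system --previous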

yurt-hub-host130's log:

[root@host130 openyurt]# kubectl logs yurt-hub-host130 -n kube-system
yurthub version: projectinfo.Info{GitVersion:"-9873e10", GitCommit:"9873e10", BuildDate:"2022-06-03T02:07:48Z", GoVersion:"go1.17.1", Compiler:"gc", Platform:"linux/amd64"}
I0606 19:32:22.438123       1 start.go:60] FLAG: --access-server-through-hub="true"
I0606 19:32:22.438171       1 start.go:60] FLAG: --add_dir_header="false"
I0606 19:32:22.438178       1 start.go:60] FLAG: --alsologtostderr="false"
I0606 19:32:22.438186       1 start.go:60] FLAG: --bind-address="127.0.0.1"
I0606 19:32:22.438191       1 start.go:60] FLAG: --cert-mgr-mode="hubself"
I0606 19:32:22.438194       1 start.go:60] FLAG: --disabled-resource-filters="[]"
I0606 19:32:22.438201       1 start.go:60] FLAG: --disk-cache-path="/etc/kubernetes/cache/"
I0606 19:32:22.438205       1 start.go:60] FLAG: --dummy-if-ip=""
I0606 19:32:22.438208       1 start.go:60] FLAG: --dummy-if-name="yurthub-dummy0"
I0606 19:32:22.438210       1 start.go:60] FLAG: --enable-dummy-if="true"
I0606 19:32:22.438216       1 start.go:60] FLAG: --enable-iptables="true"
I0606 19:32:22.438219       1 start.go:60] FLAG: --enable-node-pool="true"
I0606 19:32:22.438221       1 start.go:60] FLAG: --enable-resource-filter="true"
I0606 19:32:22.438225       1 start.go:60] FLAG: --gc-frequency="120"
I0606 19:32:22.438230       1 start.go:60] FLAG: --heartbeat-failed-retry="3"
I0606 19:32:22.438233       1 start.go:60] FLAG: --heartbeat-healthy-threshold="2"
I0606 19:32:22.438236       1 start.go:60] FLAG: --heartbeat-timeout-seconds="2"
I0606 19:32:22.438240       1 start.go:60] FLAG: --help="false"
I0606 19:32:22.438243       1 start.go:60] FLAG: --hub-cert-organizations=""
I0606 19:32:22.438246       1 start.go:60] FLAG: --join-token="2d96hl.l2hkfrihj88pguup"
I0606 19:32:22.438250       1 start.go:60] FLAG: --kubelet-ca-file="/etc/kubernetes/pki/ca.crt"
I0606 19:32:22.438253       1 start.go:60] FLAG: --kubelet-client-certificate="/var/lib/kubelet/pki/kubelet-client-current.pem"
I0606 19:32:22.438256       1 start.go:60] FLAG: --kubelet-health-grace-period="40s"
I0606 19:32:22.438261       1 start.go:60] FLAG: --lb-mode="rr"
I0606 19:32:22.438265       1 start.go:60] FLAG: --log-flush-frequency="5s"
I0606 19:32:22.438269       1 start.go:60] FLAG: --log_backtrace_at=":0"
I0606 19:32:22.438275       1 start.go:60] FLAG: --log_dir=""
I0606 19:32:22.438279       1 start.go:60] FLAG: --log_file=""
I0606 19:32:22.438281       1 start.go:60] FLAG: --log_file_max_size="1800"
I0606 19:32:22.438284       1 start.go:60] FLAG: --logtostderr="true"
I0606 19:32:22.438287       1 start.go:60] FLAG: --max-requests-in-flight="250"
I0606 19:32:22.438293       1 start.go:60] FLAG: --node-name="host130"
I0606 19:32:22.438296       1 start.go:60] FLAG: --nodepool-name=""
I0606 19:32:22.438298       1 start.go:60] FLAG: --one_output="false"
I0606 19:32:22.438301       1 start.go:60] FLAG: --profiling="true"
I0606 19:32:22.438306       1 start.go:60] FLAG: --proxy-port="10261"
I0606 19:32:22.438309       1 start.go:60] FLAG: --proxy-secure-port="10268"
I0606 19:32:22.438312       1 start.go:60] FLAG: --root-dir="/var/lib/yurthub"
I0606 19:32:22.438316       1 start.go:60] FLAG: --serve-port="10267"
I0606 19:32:22.438319       1 start.go:60] FLAG: --server-addr="https://apiserver.cluster.local:6443"
I0606 19:32:22.438329       1 start.go:60] FLAG: --skip_headers="false"
I0606 19:32:22.438332       1 start.go:60] FLAG: --skip_log_headers="false"
I0606 19:32:22.438335       1 start.go:60] FLAG: --stderrthreshold="2"
I0606 19:32:22.438337       1 start.go:60] FLAG: --v="2"
I0606 19:32:22.438340       1 start.go:60] FLAG: --version="false"
I0606 19:32:22.438345       1 start.go:60] FLAG: --vmodule=""
I0606 19:32:22.438348       1 start.go:60] FLAG: --working-mode="cloud"
I0606 19:32:22.438417       1 options.go:182] dummy ip not set, will use 169.254.2.1 as default
I0606 19:32:22.438449       1 config.go:208] yurthub would connect remote servers: https://apiserver.cluster.local:6443
I0606 19:32:22.438636       1 restmapper.go:86] initialize an empty DynamicRESTMapper
I0606 19:32:22.440720       1 filter.go:94] Filter servicetopology registered successfully
I0606 19:32:22.440743       1 filter.go:94] Filter masterservice registered successfully
I0606 19:32:22.440750       1 filter.go:94] Filter discardcloudservice registered successfully
I0606 19:32:22.440754       1 filter.go:94] Filter endpoints registered successfully
I0606 19:32:22.440979       1 filter.go:74] prepare list/watch to sync node(host130) for cloud working mode
I0606 19:32:22.441097       1 filter.go:72] Filter servicetopology initialize successfully
I0606 19:32:22.442686       1 filter.go:72] Filter masterservice initialize successfully
I0606 19:32:22.442764       1 filter.go:74] prepare list/watch to sync node(host130) for cloud working mode
I0606 19:32:22.442781       1 filter.go:72] Filter endpoints initialize successfully
I0606 19:32:22.442978       1 approver.go:198] current filter setting: map[kube-proxy/endpointslices/list:servicetopology kube-proxy/endpointslices/watch:servicetopology kube-proxy/services/list:discardcloudservice kube-proxy/services/watch:discardcloudservice kubelet/services/list:masterservice kubelet/services/watch:masterservice nginx-ingress-controller/endpoints/list:endpoints nginx-ingress-controller/endpoints/watch:endpoints] after init
I0606 19:32:22.443049       1 start.go:70] yurthub cfg: &config.YurtHubConfiguration{LBMode:"rr", RemoteServers:[]*url.URL{(*url.URL)(0xc0001b5a70)}, YurtHubServerAddr:"127.0.0.1:10267", YurtHubCertOrganizations:[]string{}, YurtHubProxyServerAddr:"127.0.0.1:10261", YurtHubProxyServerSecureAddr:"127.0.0.1:10268", YurtHubProxyServerDummyAddr:"169.254.2.1:10261", YurtHubProxyServerSecureDummyAddr:"169.254.2.1:10268", GCFrequency:120, CertMgrMode:"hubself", KubeletRootCAFilePath:"/etc/kubernetes/pki/ca.crt", KubeletPairFilePath:"/var/lib/kubelet/pki/kubelet-client-current.pem", NodeName:"host130", HeartbeatFailedRetry:3, HeartbeatHealthyThreshold:2, HeartbeatTimeoutSeconds:2, MaxRequestInFlight:250, JoinToken:"2d96hl.l2hkfrihj88pguup", RootDir:"/var/lib/yurthub", EnableProfiling:true, EnableDummyIf:true, EnableIptables:true, HubAgentDummyIfName:"yurthub-dummy0", StorageWrapper:(*cachemanager.storageWrapper)(0xc0003aee80), SerializerManager:(*serializer.SerializerManager)(0xc0003aeec0), RESTMapperManager:(*meta.RESTMapperManager)(0xc0003aef40), TLSConfig:(*tls.Config)(nil), SharedFactory:(*informers.sharedInformerFactory)(0xc000147360), YurtSharedFactory:(*externalversions.sharedInformerFactory)(0xc000147400), WorkingMode:"cloud", KubeletHealthGracePeriod:40000000000, FilterManager:(*filter.Manager)(0xc00012c7e0), CertIPs:[]net.IP{net.IP{0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xff, 0xff, 0xa9, 0xfe, 0x2, 0x1}, net.IP{0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xff, 0xff, 0x7f, 0x0, 0x0, 0x1}}}
I0606 19:32:22.443108       1 start.go:85] 1. register cert managers
I0606 19:32:22.443125       1 certificate.go:60] Registered certificate manager hubself
I0606 19:32:22.443129       1 start.go:90] 2. create cert manager with hubself mode
I0606 19:32:22.444516       1 cert_mgr.go:148] apiServer name https://apiserver.cluster.local:6443 not changed
I0606 19:32:22.444576       1 cert_mgr.go:260] /var/lib/yurthub/pki/ca.crt file already exists, check with server
I0606 19:32:22.455678       1 cert_mgr.go:317] /var/lib/yurthub/pki/ca.crt file matched with server's, reuse it
I0606 19:32:22.455737       1 cert_mgr.go:171] use /var/lib/yurthub/pki/ca.crt ca file to bootstrap yurthub
I0606 19:32:22.455867       1 cert_mgr.go:353] yurthub bootstrap conf file already exists, skip init bootstrap
W0606 19:32:22.456074       1 filestore_wrapper.go:49] unexpected error occurred when loading the certificate: no cert/key files read at "/var/lib/yurthub/pki/yurthub-current.pem", ("", "") or ("/var/lib/yurthub/pki", "/var/lib/yurthub/pki"), will regenerate it
I0606 19:32:22.456101       1 certificate_manager.go:270] kubernetes.io/kube-apiserver-client: Certificate rotation is enabled
I0606 19:32:22.456149       1 cert_mgr.go:481] yurthub config file already exists, skip init config file
I0606 19:32:22.456164       1 certificate.go:83] waiting for preparing client certificate
I0606 19:32:22.456205       1 certificate_manager.go:270] kubernetes.io/kube-apiserver-client: Rotating certificates
I0606 19:32:27.456768       1 certificate.go:83] waiting for preparing client certificate
I0606 19:32:32.457813       1 certificate.go:83] waiting for preparing client certificate
I0606 19:32:37.457794       1 certificate.go:83] waiting for preparing client certificate
I0606 19:32:42.456567       1 certificate.go:83] waiting for preparing client certificate
I0606 19:32:47.456661       1 certificate.go:83] waiting for preparing client certificate
I0606 19:32:52.456599       1 certificate.go:83] waiting for preparing client certificate
I0606 19:32:52.458017       1 cert_mgr.go:471] avoid tcp conn leak, close old tcp conn that used to rotate certificate
I0606 19:32:52.458112       1 connrotation.go:110] forcibly close 0 connections on apiserver.cluster.local:6443 for hub certificate manager dialer
I0606 19:32:52.460037       1 connrotation.go:151] create a connection from 192.168.152.130:38962 to apiserver.cluster.local:6443, total 1 connections in hub certificate manager dialer
I0606 19:32:57.456410       1 certificate.go:83] waiting for preparing client certificate
I0606 19:33:02.457056       1 certificate.go:83] waiting for preparing client certificate
I0606 19:33:07.456956       1 certificate.go:83] waiting for preparing client certificate
I0606 19:33:12.457429       1 certificate.go:83] waiting for preparing client certificate
I0606 19:33:17.456387       1 certificate.go:83] waiting for preparing client certificate
I0606 19:33:22.457187       1 certificate.go:83] waiting for preparing client certificate
I0606 19:33:27.456754       1 certificate.go:83] waiting for preparing client certificate
I0606 19:33:32.457666       1 certificate.go:83] waiting for preparing client certificate
I0606 19:33:37.457018       1 certificate.go:83] waiting for preparing client certificate
I0606 19:33:42.456498       1 certificate.go:83] waiting for preparing client certificate
I0606 19:33:47.456568       1 certificate.go:83] waiting for preparing client certificate
I0606 19:33:52.457048       1 certificate.go:83] waiting for preparing client certificate
I0606 19:33:57.456429       1 certificate.go:83] waiting for preparing client certificate
I0606 19:34:02.458012       1 certificate.go:83] waiting for preparing client certificate
I0606 19:34:07.457251       1 certificate.go:83] waiting for preparing client certificate
I0606 19:34:12.456692       1 certificate.go:83] waiting for preparing client certificate
I0606 19:34:17.456393       1 certificate.go:83] waiting for preparing client certificate
I0606 19:34:22.457189       1 certificate.go:83] waiting for preparing client certificate
I0606 19:34:27.456752       1 certificate.go:83] waiting for preparing client certificate
I0606 19:34:32.457642       1 certificate.go:83] waiting for preparing client certificate
I0606 19:34:37.456297       1 certificate.go:83] waiting for preparing client certificate
I0606 19:34:42.456465       1 certificate.go:83] waiting for preparing client certificate
I0606 19:34:47.456978       1 certificate.go:83] waiting for preparing client certificate
I0606 19:34:52.456956       1 certificate.go:83] waiting for preparing client certificate
I0606 19:34:57.456770       1 certificate.go:83] waiting for preparing client certificate
I0606 19:35:02.456274       1 certificate.go:83] waiting for preparing client certificate
I0606 19:35:07.456317       1 certificate.go:83] waiting for preparing client certificate
I0606 19:35:12.457742       1 certificate.go:83] waiting for preparing client certificate
I0606 19:35:17.456341       1 certificate.go:83] waiting for preparing client certificate
I0606 19:35:22.456346       1 certificate.go:83] waiting for preparing client certificate
I0606 19:35:27.458130       1 certificate.go:83] waiting for preparing client certificate
I0606 19:35:32.456513       1 certificate.go:83] waiting for preparing client certificate
I0606 19:35:37.456810       1 certificate.go:83] waiting for preparing client certificate
I0606 19:35:42.456690       1 certificate.go:83] waiting for preparing client certificate
I0606 19:35:47.456764       1 certificate.go:83] waiting for preparing client certificate
I0606 19:35:52.458030       1 certificate.go:83] waiting for preparing client certificate
I0606 19:35:57.456544       1 certificate.go:83] waiting for preparing client certificate
I0606 19:36:02.456434       1 certificate.go:83] waiting for preparing client certificate
I0606 19:36:07.457276       1 certificate.go:83] waiting for preparing client certificate
I0606 19:36:12.456343       1 certificate.go:83] waiting for preparing client certificate
I0606 19:36:17.457643       1 certificate.go:83] waiting for preparing client certificate
I0606 19:36:22.457454       1 certificate.go:83] waiting for preparing client certificate
I0606 19:36:22.457539       1 certificate.go:83] waiting for preparing client certificate
E0606 19:36:22.457547       1 certificate.go:87] client certificate preparation failed, timed out waiting for the condition
F0606 19:36:22.457561       1 start.go:73] run yurthub failed, could not create certificate manager, timed out waiting for the condition
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0x1)
        /go/pkg/mod/k8s.io/klog/v2@v2.9.0/klog.go:1026 +0x8a
k8s.io/klog/v2.(*loggingT).output(0x2dd08e0, 0x3, {0x0, 0x0}, 0xc000407b90, 0x0, {0x235aa6d, 0xc000456060}, 0x0, 0x0)
        /go/pkg/mod/k8s.io/klog/v2@v2.9.0/klog.go:975 +0x63d
k8s.io/klog/v2.(*loggingT).printf(0x1c07404, 0x65fd18, {0x0, 0x0}, {0x0, 0x0}, {0x1c19b36, 0x11}, {0xc000456060, 0x2, ...})
        /go/pkg/mod/k8s.io/klog/v2@v2.9.0/klog.go:753 +0x1e5
k8s.io/klog/v2.Fatalf(...)
        /go/pkg/mod/k8s.io/klog/v2@v2.9.0/klog.go:1514
github.com/openyurtio/openyurt/cmd/yurthub/app.NewCmdStartYurtHub.func1(0xc00013e500, {0x1c07d28, 0x5, 0x5})
        /build/cmd/yurthub/app/start.go:73 +0x5a5
github.com/spf13/cobra.(*Command).execute(0xc00013e500, {0xc000134130, 0x5, 0x5})
        /go/pkg/mod/github.com/spf13/cobra@v1.2.1/command.go:860 +0x5f8
github.com/spf13/cobra.(*Command).ExecuteC(0xc00013e500)
        /go/pkg/mod/github.com/spf13/cobra@v1.2.1/command.go:974 +0x3bc
github.com/spf13/cobra.(*Command).Execute(...)
        /go/pkg/mod/github.com/spf13/cobra@v1.2.1/command.go:902
main.main()
        /build/cmd/yurthub/yurthub.go:33 +0xaf

goroutine 18 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x0)
        /go/pkg/mod/k8s.io/klog/v2@v2.9.0/klog.go:1169 +0x6a
created by k8s.io/klog/v2.init.0
        /go/pkg/mod/k8s.io/klog/v2@v2.9.0/klog.go:420 +0xfb

goroutine 48 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc0007b14e0, 0x0)
        /usr/local/go/src/runtime/sema.go:513 +0x13d
sync.(*Cond).Wait(0x10)
        /usr/local/go/src/sync/cond.go:56 +0x8c
golang.org/x/net/http2.(*pipe).Read(0xc0007b14c8, {0xc00046e000, 0x200, 0x200})
        /go/pkg/mod/golang.org/x/net@v0.0.0-20210520170846-37e1c6afe023/http2/pipe.go:65 +0xeb
golang.org/x/net/http2.transportResponseBody.Read({0xc000287bb8}, {0xc00046e000, 0x534ace, 0xc000287c20})
        /go/pkg/mod/golang.org/x/net@v0.0.0-20210520170846-37e1c6afe023/http2/transport.go:2110 +0x77
encoding/json.(*Decoder).refill(0xc00046a000)
        /usr/local/go/src/encoding/json/stream.go:165 +0x17f
encoding/json.(*Decoder).readValue(0xc00046a000)
        /usr/local/go/src/encoding/json/stream.go:140 +0xbb
encoding/json.(*Decoder).Decode(0xc00046a000, {0x1998b40, 0xc000450090})
        /usr/local/go/src/encoding/json/stream.go:63 +0x78
k8s.io/apimachinery/pkg/util/framer.(*jsonFrameReader).Read(0xc00043e270, {0xc00046c000, 0x400, 0x400})
        /go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/util/framer/framer.go:152 +0x19c
k8s.io/apimachinery/pkg/runtime/serializer/streaming.(*decoder).Decode(0xc0004420f0, 0x0, {0x1e90720, 0xc000448240})
        /go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/runtime/serializer/streaming/streaming.go:77 +0xa7
k8s.io/client-go/rest/watch.(*Decoder).Decode(0xc000456040)
        /go/pkg/mod/k8s.io/client-go@v0.22.3/rest/watch/decoder.go:49 +0x4f
k8s.io/apimachinery/pkg/watch.(*StreamWatcher).receive(0xc000448200)
        /go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/watch/streamwatcher.go:105 +0x11c
created by k8s.io/apimachinery/pkg/watch.NewStreamWatcher
        /go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/watch/streamwatcher.go:76 +0x135

goroutine 86 [syscall, 4 minutes]:
os/signal.signal_recv()
        /usr/local/go/src/runtime/sigqueue.go:169 +0x98
os/signal.loop()
        /usr/local/go/src/os/signal/signal_unix.go:24 +0x19
created by os/signal.Notify.func1.1
        /usr/local/go/src/os/signal/signal.go:151 +0x2c

goroutine 87 [chan receive, 4 minutes]:
k8s.io/apiserver/pkg/server.SetupSignalContext.func1()
        /go/pkg/mod/k8s.io/apiserver@v0.22.3/pkg/server/signal.go:48 +0x2b
created by k8s.io/apiserver/pkg/server.SetupSignalContext
        /go/pkg/mod/k8s.io/apiserver@v0.22.3/pkg/server/signal.go:47 +0xe7

goroutine 47 [select, 4 minutes]:
golang.org/x/net/http2.awaitRequestCancel(0xc0001adb00, 0xc000115620)
        /go/pkg/mod/golang.org/x/net@v0.0.0-20210520170846-37e1c6afe023/http2/transport.go:318 +0xfa
golang.org/x/net/http2.(*clientStream).awaitRequestCancel(0xc0007b14a0, 0x0)
        /go/pkg/mod/golang.org/x/net@v0.0.0-20210520170846-37e1c6afe023/http2/transport.go:344 +0x2b
created by golang.org/x/net/http2.(*clientConnReadLoop).handleResponse
        /go/pkg/mod/golang.org/x/net@v0.0.0-20210520170846-37e1c6afe023/http2/transport.go:2056 +0x638

goroutine 80 [select, 4 minutes]:
k8s.io/client-go/tools/watch.UntilWithoutRetry({0x1eab020, 0xc0007a6000}, {0x1e909c8, 0xc000740c60}, {0xc00055f9d0, 0x1, 0xc8})
        /go/pkg/mod/k8s.io/client-go@v0.22.3/tools/watch/until.go:73 +0x2f0
k8s.io/client-go/tools/watch.UntilWithSync({0x1eab020, 0xc0007a6000}, {0x1e931a0, 0xc000659ae8}, {0x1e8eb00, 0xc00000ab40}, 0x0, {0xc0005c19d0, 0x1, 0x1})
        /go/pkg/mod/k8s.io/client-go@v0.22.3/tools/watch/until.go:151 +0x268
k8s.io/client-go/util/certificate/csr.WaitForCertificate({0x1eab020, 0xc0007a6000}, {0x1efd8f0, 0xc0004202c0}, {0xc00079e070, 0x9}, {0xc0007891a0, 0x24})
        /go/pkg/mod/k8s.io/client-go@v0.22.3/util/certificate/csr/csr.go:225 +0x96c
k8s.io/client-go/util/certificate.(*manager).rotateCerts(0xc000504000)
        /go/pkg/mod/k8s.io/client-go@v0.22.3/util/certificate/certificate_manager.go:486 +0x5b9
k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x40ce54, 0xc000286cc8})
        /go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/util/wait/wait.go:217 +0x1b
k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x1eaafe8, 0xc00013c008}, 0x46af53)
        /go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/util/wait/wait.go:230 +0x7c
k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0x1a0ca40)
        /go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/util/wait/wait.go:223 +0x39
k8s.io/apimachinery/pkg/util/wait.ExponentialBackoff({0x77359400, 0x4000000000000000, 0x3fb999999999999a, 0x5, 0x0}, 0x2dd0440)
        /go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/util/wait/wait.go:418 +0x5f
k8s.io/client-go/util/certificate.(*manager).Start.func1()
        /go/pkg/mod/k8s.io/client-go@v0.22.3/util/certificate/certificate_manager.go:353 +0x3f8
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x7fdc6d3836a0)
        /go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0, {0x1e72580, 0xc0004cbc80}, 0x1, 0xc00039a240)
        /go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0, 0x3b9aca00, 0x0, 0x0, 0x0)
        /go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(0x0, 0x0, 0x0)
        /go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/util/wait/wait.go:90 +0x25
created by k8s.io/client-go/util/certificate.(*manager).Start
        /go/pkg/mod/k8s.io/client-go@v0.22.3/util/certificate/certificate_manager.go:321 +0x18f

goroutine 106 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc00073a450, 0x1)
        /usr/local/go/src/runtime/sema.go:513 +0x13d
sync.(*Cond).Wait(0xc0005b6f60)
        /usr/local/go/src/sync/cond.go:56 +0x8c
k8s.io/client-go/tools/watch.(*eventProcessor).takeBatch(0xc00073f320)
        /go/pkg/mod/k8s.io/client-go@v0.22.3/tools/watch/informerwatcher.go:64 +0xa5
k8s.io/client-go/tools/watch.(*eventProcessor).run(0x0)
        /go/pkg/mod/k8s.io/client-go@v0.22.3/tools/watch/informerwatcher.go:51 +0x25
created by k8s.io/client-go/tools/watch.NewIndexerInformerWatcher
        /go/pkg/mod/k8s.io/client-go@v0.22.3/tools/watch/informerwatcher.go:140 +0x31f

goroutine 103 [IO wait]:
internal/poll.runtime_pollWait(0x7fdc6d42a5c8, 0x72)
        /usr/local/go/src/runtime/netpoll.go:229 +0x89
internal/poll.(*pollDesc).wait(0xc0004cd080, 0xc000710000, 0x0)
        /usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x32
internal/poll.(*pollDesc).waitRead(...)
        /usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0004cd080, {0xc000710000, 0x902, 0x902})
        /usr/local/go/src/internal/poll/fd_unix.go:167 +0x25a
net.(*netFD).Read(0xc0004cd080, {0xc000710000, 0x8fd, 0xc000655a40})
        /usr/local/go/src/net/fd_posix.go:56 +0x29
net.(*conn).Read(0xc00040c018, {0xc000710000, 0xc000710000, 0x5})
        /usr/local/go/src/net/net.go:183 +0x45
crypto/tls.(*atLeastReader).Read(0xc000792048, {0xc000710000, 0x0, 0x409b8d})
        /usr/local/go/src/crypto/tls/conn.go:777 +0x3d
bytes.(*Buffer).ReadFrom(0xc0001a0278, {0x1e6f060, 0xc000792048})
        /usr/local/go/src/bytes/buffer.go:204 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0001a0000, {0x7fdc6d3c31d8, 0xc00043e060}, 0x902)
        /usr/local/go/src/crypto/tls/conn.go:799 +0xe5
crypto/tls.(*Conn).readRecordOrCCS(0xc0001a0000, 0x0)
        /usr/local/go/src/crypto/tls/conn.go:606 +0x112
crypto/tls.(*Conn).readRecord(...)
        /usr/local/go/src/crypto/tls/conn.go:574
crypto/tls.(*Conn).Read(0xc0001a0000, {0xc00071f000, 0x1000, 0xc000286cc0})
        /usr/local/go/src/crypto/tls/conn.go:1277 +0x16f
bufio.(*Reader).Read(0xc0005a96e0, {0xc0006eaab8, 0x9, 0x18})
        /usr/local/go/src/bufio/bufio.go:227 +0x1b4
io.ReadAtLeast({0x1e6ee80, 0xc0005a96e0}, {0xc0006eaab8, 0x9, 0x9}, 0x9)
        /usr/local/go/src/io/io.go:328 +0x9a
io.ReadFull(...)
        /usr/local/go/src/io/io.go:347
golang.org/x/net/http2.readFrameHeader({0xc0006eaab8, 0x9, 0x8c21ce}, {0x1e6ee80, 0xc0005a96e0})
        /go/pkg/mod/golang.org/x/net@v0.0.0-20210520170846-37e1c6afe023/http2/frame.go:237 +0x6e
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0006eaa80)
        /go/pkg/mod/golang.org/x/net@v0.0.0-20210520170846-37e1c6afe023/http2/frame.go:492 +0x95
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc000286fa0)
        /go/pkg/mod/golang.org/x/net@v0.0.0-20210520170846-37e1c6afe023/http2/transport.go:1821 +0x165
golang.org/x/net/http2.(*ClientConn).readLoop(0xc0006f8780)
        /go/pkg/mod/golang.org/x/net@v0.0.0-20210520170846-37e1c6afe023/http2/transport.go:1743 +0x79
created by golang.org/x/net/http2.(*Transport).newClientConn
        /go/pkg/mod/golang.org/x/net@v0.0.0-20210520170846-37e1c6afe023/http2/transport.go:695 +0xb45

goroutine 107 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc00064d248, 0x1)
        /usr/local/go/src/runtime/sema.go:513 +0x13d
sync.(*Cond).Wait(0xc00078f580)
        /usr/local/go/src/sync/cond.go:56 +0x8c
k8s.io/client-go/tools/cache.(*DeltaFIFO).Pop(0xc00064d220, 0xc00073f440)
        /go/pkg/mod/k8s.io/client-go@v0.22.3/tools/cache/delta_fifo.go:525 +0x1f6
k8s.io/client-go/tools/cache.(*controller).processLoop(0xc0006fefc0)
        /go/pkg/mod/k8s.io/client-go@v0.22.3/tools/cache/controller.go:183 +0x36
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x7fdc6d399a00)
        /go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xae71c8, {0x1e72580, 0xc00073f4a0}, 0x1, 0xc00039afc0)
        /go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0006ff028, 0x3b9aca00, 0x0, 0x40, 0x7fdc6d23f1b0)
        /go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(...)
        /go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/cache.(*controller).Run(0xc0006fefc0, 0xc00039afc0)
        /go/pkg/mod/k8s.io/client-go@v0.22.3/tools/cache/controller.go:154 +0x2fb
k8s.io/client-go/tools/watch.NewIndexerInformerWatcher.func4()
        /go/pkg/mod/k8s.io/client-go@v0.22.3/tools/watch/informerwatcher.go:146 +0x8d
created by k8s.io/client-go/tools/watch.NewIndexerInformerWatcher
        /go/pkg/mod/k8s.io/client-go@v0.22.3/tools/watch/informerwatcher.go:143 +0x3d1

goroutine 108 [chan receive, 4 minutes]:
k8s.io/client-go/tools/cache.(*controller).Run.func1()
        /go/pkg/mod/k8s.io/client-go@v0.22.3/tools/cache/controller.go:130 +0x28
created by k8s.io/client-go/tools/cache.(*controller).Run
        /go/pkg/mod/k8s.io/client-go@v0.22.3/tools/cache/controller.go:129 +0x105

goroutine 109 [select, 4 minutes]:
k8s.io/client-go/tools/cache.(*Reflector).watchHandler(0xc0006eab60, {0x0, 0x0, 0x2dd0440}, {0x1e909f0, 0xc000448200}, 0xc000563ba0, 0xc0007a6180, 0xc00039afc0)
        /go/pkg/mod/k8s.io/client-go@v0.22.3/tools/cache/reflector.go:468 +0x1b6
k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch(0xc0006eab60, 0xc00039afc0)
        /go/pkg/mod/k8s.io/client-go@v0.22.3/tools/cache/reflector.go:428 +0x6b6
k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
        /go/pkg/mod/k8s.io/client-go@v0.22.3/tools/cache/reflector.go:221 +0x26
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x7fdc6d399a00)
        /go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000715740, {0x1e72560, 0xc000657220}, 0x1, 0xc00039afc0)
        /go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/util/wait/wait.go:156 +0xb6
k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0006eab60, 0xc00039afc0)
        /go/pkg/mod/k8s.io/client-go@v0.22.3/tools/cache/reflector.go:220 +0x237
k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
        /go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/util/wait/wait.go:56 +0x22
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
        /go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/util/wait/wait.go:73 +0x5a
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
        /go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/util/wait/wait.go:71 +0x88

goroutine 29 [select, 4 minutes]:
k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch.func2()
        /go/pkg/mod/k8s.io/client-go@v0.22.3/tools/cache/reflector.go:373 +0x139
created by k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch
        /go/pkg/mod/k8s.io/client-go@v0.22.3/tools/cache/reflector.go:367 +0x3a5
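
The log above shows yurthub looping on "waiting for preparing client certificate" until the four-minute timeout, i.e. the client certificate it requests is never issued. Assuming the hubself certificate flow goes through an ordinary Kubernetes CSR that yurt-controller-manager is supposed to approve, the pending request and the approver's log can be checked with something like (pod name from the list above):

# assumes yurt-controller-manager is the component approving yurthub CSRs in hubself mode
kubectl get csr
kubectl logs yurt-controller-manager-7c7bf76c77-4pkqh -n kube-system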

yurt-hub.yaml:

[root@host130 manifests]# cat /etc/kubernetes/manifests/yurt-hub.yaml

apiVersion: v1
kind: Pod
metadata:
  labels:
    k8s-app: yurt-hub
  name: yurt-hub
  namespace: kube-system
spec:
  volumes:
  - name: hub-dir
    hostPath:
      path: /var/lib/yurthub
      type: DirectoryOrCreate
  - name: kubernetes
    hostPath:
      path: /etc/kubernetes
      type: Directory
  - name: pem-dir
    hostPath:
      path: /var/lib/kubelet/pki
      type: Directory
  containers:
  - name: yurt-hub
    image: openyurt/yurthub:latest
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: hub-dir
      mountPath: /var/lib/yurthub
    - name: kubernetes
      mountPath: /etc/kubernetes
    - name: pem-dir
      mountPath: /var/lib/kubelet/pki
    command:
    - yurthub
    - --v=2
    - --server-addr=https://apiserver.cluster.local:6443
    - --node-name=$(NODE_NAME)
    - --join-token=2d96hl.l2hkfrihj88pguup
    - --working-mode=cloud

    livenessProbe:
      httpGet:
        host: 127.0.0.1
        path: /v1/healthz
        port: 10267
      initialDelaySeconds: 300
      periodSeconds: 5
      failureThreshold: 3
    resources:
      requests:
        cpu: 150m
        memory: 150Mi
      limits:
        memory: 300Mi
    securityContext:
      capabilities:
        add: ["NET_ADMIN", "NET_RAW"]
    env:
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
  hostNetwork: true
  priorityClassName: system-node-critical
  priority: 2000001000
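
Since the manifest points yurthub at https://apiserver.cluster.local:6443, a quick sanity check is that this name resolves on the node (sealer typically maps it via /etc/hosts) and that the endpoint is reachable; a minimal check, assuming curl is installed:

getent hosts apiserver.cluster.local
curl -k https://apiserver.cluster.local:6443/healthz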

kubeconfig:

[root@host130 openyurt]# cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM2VENDQWRHZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQ0FYRFRJeU1EWXdOakU0TkRrek5Wb1lEekl4TWpJd05URXpNVGcwT1RNMVdqQVZNUk13RVFZRApWUVFERXdwcmRXSmxjbTVsZEdWek1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBCnVXSlIvZ1h4NnhsbE5jQ2NTVVBYYzl3NFZxSExLU0t0aGYxblZxTkk1N2lmalNMNW9UQTBvTTJzWUZIakJQTXoKcGlMSlJ6THdULzlRWmFDeHExOWphMUIzbk5OL2d5a014bXRObHNLY3BkRXk3T1pGSWhxZGg0aTVIQWxIR0RsUApUT2RjRktST091aEQyRjdybnQ5RzBUZWp0V2lFenlTYTh2QVE2dDhncUMvQ1hTWWc2eXNJRy9xekppT0xDaXJQCjJxbVV1VDRvZnpndHJFdE8wcy80NDZIdEdTbWI1VWF1NEU2bUdObXRyenhXekJlVWVWWEZEWXRYREtiQnVUdG8KQjcxamM0YXdqV2pyWGRmOVRRUmlzd09jMmp4QmZwV2ptelAwamJDcXFjSkE2aWt2cmc2ZWZrQ202ZGJYUHlUQgpVc0tib3FJZHVOMWtHVFlyNXFJQXZ3SURBUUFCbzBJd1FEQU9CZ05WSFE4QkFmOEVCQU1DQXFRd0R3WURWUjBUCkFRSC9CQVV3QXdFQi96QWRCZ05WSFE0RUZnUVVSajZYaGNEdk9IVFJBVlRFUmp4Z3BFUnVPUU13RFFZSktvWkkKaHZjTkFRRUxCUUFEZ2dFQkFKejN0WGRNS29FMW1oNlRGUkR1KzNkRis3QU1jeUtpWENiQ3c2WGlKTDNDTkt6awpRNmVIVFYwRnk2dHgxVXFCUXRSK1dFZ0xkR2NTb2NWL2lobUFBRlpIaitXRXFUQXhLS1ZnbW5jZE4xZ29oN0ZKCmVsNGg4WU5kUkxFUzAyN1NrR21DMWRsRU85Zmdxbm1Db21DNlhzeHY0aEQ0ZnN3MHltVXdIVnFuYkdpRXA5Q3EKcVhMMDVzLzZ2cU5nNHIzSktRclROMG5pQUdFVHdRbEl2R3FxV3Qyak5VT3FicXRVbEpxL0o4Zkcra2Z4TFF2cQowVzdJMUhaM1N0SWh4TmNyY2ZOMHdrejJmTDg0eDlWS2JRUzQzWC9QNEdoRjRBV01vVW9uNCs5cnFSSDF0S1JGCkpEUDZGT2kvaVFWYmJRSDhmcFhReTU1VDU4NzYrZTlEczdtclJOaz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://apiserver.cluster.local:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURGVENDQWYyZ0F3SUJBZ0lJVWZ6RlRHdlMvbGd3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWdGdzB5TWpBMk1EWXhPRFE1TXpWYUdBOHlNVEl5TURVeE16RTROVEV3TTFvdwpOREVYTUJVR0ExVUVDaE1PYzNsemRHVnRPbTFoYzNSbGNuTXhHVEFYQmdOVkJBTVRFR3QxWW1WeWJtVjBaWE10CllXUnRhVzR3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRREtyTzhrT2pGYlI2bjUKb01rNU9vUHFqNnlUVS9KT2pmRjlJb0hMaitpTlFQT2JOWURHN2l4RHdpYVdCTXVjRXNxVCtMVXNWcnZtOTZodwp3S0hXbkIrNWJKYUh6QU5xMnhiYm5QTUlnYk9veEh5YU9iZnNVdUVJY3dWQlB5RXVvN0t3bWdYYnhMZ3dsblhqClYzMGxqVlNxRGxMUkkyVWEraitWQ0JWLzlHMmJSUEdyRE5aU1M5Q2dLaXppUFE5WG1Xb0E5b3J4WHE0a2kzNVkKM0IweS9mTDRjQW8xaWVBTlh4MFY3SzdoV3hEeldyRzVPMEhjV1VXQVpPYWZBcDVuNEIvd3QzYUZIOWNESXlNdQovcFMxSnhFQU5aTlJpQnY5Wlhyc05yUktPdTJnNWdkNnFjaFV3WTVnUkJ6MlIvemZrMGZUaDArSWFjYWtRbkh5Ck5ja25pdjZYQWdNQkFBR2pTREJHTUE0R0ExVWREd0VCL3dRRUF3SUZvREFUQmdOVkhTVUVEREFLQmdnckJnRUYKQlFjREFqQWZCZ05WSFNNRUdEQVdnQlJHUHBlRndPODRkTkVCVk1SR1BHQ2tSRzQ1QXpBTkJna3Foa2lHOXcwQgpBUXNGQUFPQ0FRRUFHb1Z0V0djZEZ1M1ZyZVpGVyswYWFwaWZzSFlOaFpTWm1EeUhKZkhZaEpnRzY0M3J2TEVyCklHRUozSm1XenNOajhQRTF6aXQ3Q0ZBd29FcXBYWnEydXVmVlJHS3MrTEo5YnlKR3VpQjFmZ1liTTg2QVJRM3MKZ2V0TTNXSlFpVDdENGJoZkM4M0VMNkRJUEZJdHp3UEpxSTFFcFZ4a04ycmY4cG9RdTNNVEd2eHhhRldrU01SUwpzUWZMYXc5UENhOWRBU21iMmkyaTBCVmZVOEdqQWVsZDltdDFTVFB4eEJ2aVJ5aVB0elIvcTZRS3ViTmFKVWoyCk5PMG9uemtJbTBWR2xyVlBURkNROENPaHVnZGs1c0s4YnUrNW9hZEJsMklMMzZSMVQ5UnpQTDNjOGxKdmpkbGsKV0czS1dpZmhVLzZxU3hJNGFEd2lBTFJQb3RHcmE3ajlXUT09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBeXF6dkpEb3hXMGVwK2FESk9UcUQ2bytzazFQeVRvM3hmU0tCeTQvb2pVRHpteldBCnh1NHNROEltbGdUTG5CTEtrL2kxTEZhNzV2ZW9jTUNoMXB3ZnVXeVdoOHdEYXRzVzI1enpDSUd6cU1SOG1qbTMKN0ZMaENITUZRVDhoTHFPeXNKb0YyOFM0TUpaMTQxZDlKWTFVcWc1UzBTTmxHdm8vbFFnVmYvUnRtMFR4cXd6VwpVa3ZRb0NvczRqMFBWNWxxQVBhSzhWNnVKSXQrV053ZE12M3krSEFLTlluZ0RWOGRGZXl1NFZzUTgxcXh1VHRCCjNGbEZnR1RtbndLZVorQWY4TGQyaFIvWEF5TWpMdjZVdFNjUkFEV1RVWWdiL1dWNjdEYTBTanJ0b09ZSGVxbkkKVk1HT1lFUWM5a2Y4MzVOSDA0ZFBpR25HcEVKeDhqWEpKNHIrbHdJREFRQUJBb0lCQVFDZkdUM28zRjJlWUJWSQpSalZ2M1VWc3ZqZ2t0d05CTXgvY3NWZmVhaXVOcHUwVWE5MlpTNkluMXFMZnBRZ0lqcC9EcExya0FYb2poMG9NCnFNcmlZMUJzQ0pmcUpmYVF6VWVXUWhCdUh4TGZhczY5YW8yODBCcWl2VmZrcmgvb01zeTA0Vk96L3lydnlVemwKbCtvL3JrQkY5bFNBcEI1Y0hSSUlkWDRiSWM5ZzBFcFRqcFpib2tGQ0xRakZHR1o1RW1iMkpYVmxzUG5PaGxPSAp2aCtBWFNWdmZIdWpZRjJVZEVLTWtMK1NEQVhsejRsektDelVUcHlUaEJ1bVlmUERyUEJFNmJ2OXNjMWJ3eXpqCm9EQjBBK0RHL2d2d3RySUtyZnA1ZmVuVnFEdE1VOEtEcWxFOHFNcGtLM1ZOUUFodXJuOFQzSXZXMHVlM2x6MkEKNUJmbXIyd2hBb0dCQU8wei9FNFBjamVpcUdxTjlXRjdxV3VCTlRnVlJLVWFOaGN4aGhQOFVUK0VIL01rM1ZZWQpUNVVENExpUEJTcHZYRW12RUdPcEN2UGFDY3grSGhMTXJrOFp6c2lXWVJsQ0hFWHFiNlc0UnlBQTZnb2cxaVhNCnAyNHpkUUhkRm1Ieml1WmNZUVRsWnpTbzUwbmFEUHNrTTlwa1Y2OUZMbXJrV1JrdlAxVkVIR2xwQW9HQkFOcTgKZ2FFWnhqZGQvZWhxWTA3ODB6aml3ZVNwMmUwRWV5YTdtdjBNc1czbE9JU0tNemVQSHdxL25jVmM1WmNndThlOAppRWQ3YnFPbExaaEtOTXlMeTRtWHNVK1pSRTNpT1lIN2ZuWllycGd6RlFYdFZldU4rL24wVzdyeWtrT0FUQlBWCndpOTlySXBSWkszM0Y1NU9XaGdrYitPRnlkSTZXVFF0Tys5VTF5Zi9Bb0dCQU16WmRtMmJuVkk2NFNPVWtYT3MKcmpXdmtseHEwYXVjSlZhR2FIcGVEM1RCRUM2Vmlhak91ZnZCSzVOM3dFaFRmK29LakNibFdCWWNHUlpIWElWegp5cDE1ZGtGNHpVWlk5NzNScHJZQm5Uc2dUdjZNT1NUUHgxQytrN0FXVlR3bWJiQmYyMUcxSkJvd08vNWxsNHhVClNZdXoySjMvS3dVWlMzRWFncUdLZnRieEFvR0JBTGJDcG5UYXVqbHN1VWZHREcvazR2ODJ4OWFzN0Q4VGJXcHgKZWhCUTJMYi92UGRSR1hZa2lVVkwwU0VrZTFpSXF4MDZNNHUyWUQwdk9DZDBhU1UyOEx0b0dXaHVvUm1LR1k2MwplWFNjcUZUVzZZdm9QOC91OUVobW1YWmNVMFUvSDFHN1d1S2ZXTmpCSlNRTnZwZ3cweW8wMTUvOUd5SWlTb0pFCkFUMzVYMFExQW9HQVdRSWtUKzc5ZlR3UDVLNWZqZkVYY2ZtRjRNb21sckVSMEtTdmVMbWgyUThQclJHMWZMZW0KNHJJQWdBSFluL2t3Vk5IM2dOUnBPWURYR05LQjB3Rk56S0RybVpWbTJRODNnOHppWkR4bys2Tk1sNTEyOUxscQpLM0tJQmowWjJsemcxbFVjbGlIY3h3UmhWNDBpeE5GZksxYUp6NUNpc1g4dXVwK1JCalF4K01FPQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=

What you expected to happen: yurthub runs successfully instead of restarting all the time.

Environment:

others: Before using `yurtadm init` to install, I cleaned up the environment following [this article](FAQ | sealer) and deleted the /var/lib/kubelet, /var/lib/yurthub, and /var/lib/yurttunnel-server directories.
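
Roughly, that cleanup amounted to the following (paths as listed above; the sealer-specific reset steps are in the linked FAQ):

rm -rf /var/lib/kubelet /var/lib/yurthub /var/lib/yurttunnel-server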

/kind bug

windydayc commented 2 years ago

kubelet's log:

[root@host130 openyurt]# journalctl -u kubelet
-- Logs begin at Mon 2022-06-06 23:10:17 CST, end at Tue 2022-06-07 03:37:43 CST. --
Jun 06 23:10:24 host130 systemd[1]: Starting kubelet: The Kubernetes Node Agent...
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: * Applying /usr/lib/sysctl.d/00-system.conf ...
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: net.bridge.bridge-nf-call-ip6tables = 0
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: net.bridge.bridge-nf-call-iptables = 0
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: net.bridge.bridge-nf-call-arptables = 0
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: * Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: kernel.yama.ptrace_scope = 0
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: * Applying /usr/lib/sysctl.d/50-default.conf ...
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: kernel.sysrq = 16
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: kernel.core_uses_pid = 1
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: net.ipv4.conf.default.rp_filter = 1
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: net.ipv4.conf.all.rp_filter = 1
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: net.ipv4.conf.default.accept_source_route = 0
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: net.ipv4.conf.all.accept_source_route = 0
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: net.ipv4.conf.default.promote_secondaries = 1
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: net.ipv4.conf.all.promote_secondaries = 1
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: fs.protected_hardlinks = 1
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: fs.protected_symlinks = 1
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: * Applying /etc/sysctl.d/99-sysctl.conf ...
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: * Applying /etc/sysctl.d/k8s.conf ...
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: net.bridge.bridge-nf-call-ip6tables = 1
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: net.bridge.bridge-nf-call-iptables = 1
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: net.ipv4.conf.all.rp_filter = 0
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: * Applying /etc/sysctl.conf ...
Jun 06 23:10:24 host130 kubelet-pre-start.sh[703]: net.ipv4.ip_forward = 1
Jun 06 23:10:25 host130 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jun 06 23:10:37 host130 kubelet[745]: I0606 23:10:37.228550     745 server.go:411] Version: v1.19.8
Jun 06 23:10:37 host130 kubelet[745]: I0606 23:10:37.228996     745 server.go:831] Client rotation is on, will bootstrap in background
Jun 06 23:10:37 host130 kubelet[745]: I0606 23:10:37.278352     745 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jun 06 23:10:37 host130 kubelet[745]: I0606 23:10:37.332606     745 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.014843     745 server.go:640] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.026084     745 container_manager_linux.go:276] container manager verified user specified cgroup-root exists: []
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.026241     745 container_manager_linux.go:281] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.026417     745 topology_manager.go:126] [topologymanager] Creating topology manager with none policy
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.026429     745 container_manager_linux.go:311] [topologymanager] Initializing Topology Manager with none policy
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.026434     745 container_manager_linux.go:316] Creating device plugin manager: true
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.040668     745 client.go:77] Connecting to docker on unix:///var/run/docker.sock
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.040725     745 client.go:94] Start docker client with request timeout=2m0s
Jun 06 23:10:44 host130 kubelet[745]: W0606 23:10:44.053504     745 docker_service.go:570] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.053546     745 docker_service.go:242] Hairpin mode set to "hairpin-veth"
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.212073     745 docker_service.go:257] Docker cri networking managed by cni
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.247914     745 docker_service.go:264] Docker Info: &{ID:3JHM:UDEK:Q3UF:H3I3:QMHD:5YPP:3CRH:MNAT:5EHE:U75L:WEPR:2IKB Containers:191 ContainersRunning:1 ContainersPaused:0 ContainersStopped:190 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-06T23:10:44.213327077+08:00 LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:3.10.0-1127.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc000a680e0 NCPU:4 MemTotal:3953971200 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:host130 Labels:[] ExperimentalBuild:false ServerVersion:19.03.14-sealer ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ea765aba0d05254012b0b9e595e995c09186427f Expected:ea765aba0d05254012b0b9e595e995c09186427f} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[]}
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.248018     745 docker_service.go:277] Setting cgroupDriver to cgroupfs
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.259984     745 remote_runtime.go:59] parsed scheme: ""
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.260040     745 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.276586     745 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock  <nil> 0 <nil>}] <nil> <nil>}
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.276640     745 clientconn.go:948] ClientConn switching balancer to "pick_first"
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.276745     745 remote_image.go:50] parsed scheme: ""
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.276755     745 remote_image.go:50] scheme "" not registered, fallback to default scheme
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.276771     745 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock  <nil> 0 <nil>}] <nil> <nil>}
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.276776     745 clientconn.go:948] ClientConn switching balancer to "pick_first"
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.276816     745 kubelet.go:264] Adding pod path: /etc/kubernetes/manifests
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.276847     745 kubelet.go:276] Watching apiserver
Jun 06 23:10:44 host130 kubelet[745]: E0606 23:10:44.280516     745 reflector.go:127] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://apiserver.cluster.local:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dhost130&limit=500&resourceVersion=0": dial tcp 192.168.152.130:6443: connect: connection refused
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.313857     745 kubelet.go:453] Kubelet client is not nil
Jun 06 23:10:44 host130 kubelet[745]: E0606 23:10:44.314735     745 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://apiserver.cluster.local:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.152.130:6443: connect: connection refused
Jun 06 23:10:44 host130 kubelet[745]: E0606 23:10:44.315320     745 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://apiserver.cluster.local:6443/api/v1/nodes?fieldSelector=metadata.name%3Dhost130&limit=500&resourceVersion=0": dial tcp 192.168.152.130:6443: connect: connection refused
Jun 06 23:10:44 host130 kubelet[745]: E0606 23:10:44.634247     745 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
Jun 06 23:10:44 host130 kubelet[745]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.683170     745 kuberuntime_manager.go:214] Container runtime docker initialized, version: 19.03.14-sealer, apiVersion: 1.40.0
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.684882     745 server.go:1147] Started kubelet
Jun 06 23:10:44 host130 kubelet[745]: E0606 23:10:44.685131     745 kubelet.go:1243] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.686073     745 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.687148     745 volume_manager.go:265] Starting Kubelet Volume Manager
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.702007     745 server.go:152] Starting to listen on 0.0.0.0:10250
Jun 06 23:10:44 host130 kubelet[745]: E0606 23:10:44.703759     745 controller.go:136] failed to ensure node lease exists, will retry in 200ms, error: Get "https://apiserver.cluster.local:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/host130?timeout=10s": dial tcp 192.168.152.130:6443: connect: connection refused
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.704164     745 server.go:425] Adding debug handlers to kubelet server.
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.706096     745 desired_state_of_world_populator.go:139] Desired state populator starts to run
Jun 06 23:10:44 host130 kubelet[745]: E0606 23:10:44.706863     745 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://apiserver.cluster.local:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.152.130:6443: connect: connection refused
Jun 06 23:10:44 host130 kubelet[745]: E0606 23:10:44.713591     745 event.go:273] Unable to write event: 'Post "https://apiserver.cluster.local:6443/api/v1/namespaces/default/events": dial tcp 192.168.152.130:6443: connect: connection refused' (may retry after sleeping)
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.737007     745 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: a4f2dcfaf6c68ce395981475fe104e3e8d848f7c0b0ace3d12b26431cfbebfb5
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.814019     745 kubelet.go:449] kubelet nodes not sync
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.814045     745 kubelet.go:449] kubelet nodes not sync
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.880683     745 kubelet.go:449] kubelet nodes not sync
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.880741     745 kubelet.go:449] kubelet nodes not sync
Jun 06 23:10:44 host130 kubelet[745]: E0606 23:10:44.904607     745 controller.go:136] failed to ensure node lease exists, will retry in 400ms, error: Get "https://apiserver.cluster.local:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/host130?timeout=10s": dial tcp 192.168.152.130:6443: connect: connection refused
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.914966     745 status_manager.go:158] Starting to sync pod status with apiserver
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.915014     745 kubelet.go:1775] Starting kubelet main sync loop.
Jun 06 23:10:44 host130 kubelet[745]: E0606 23:10:44.915071     745 kubelet.go:1799] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
Jun 06 23:10:44 host130 kubelet[745]: E0606 23:10:44.921446     745 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.RuntimeClass: failed to list *v1beta1.RuntimeClass: Get "https://apiserver.cluster.local:6443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.152.130:6443: connect: connection refused
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.922398     745 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 08cf8b98c8c02ee35a4d1abeda667025266176d5d56cc886cf62ed60e2b72c3d
Jun 06 23:10:44 host130 kubelet[745]: I0606 23:10:44.937568     745 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 929f677a755f4edb2132c7f9edb6e3e13f542c5d456dd9c3811abc3be5639ef2
Jun 06 23:10:45 host130 kubelet[745]: I0606 23:10:45.014175     745 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: dbe13e4a0689e61a634ee57f6ec90bb672345b837e0fa0c9011d7477abe44207
Jun 06 23:10:45 host130 kubelet[745]: E0606 23:10:45.015200     745 kubelet.go:1799] skipping pod synchronization - container runtime status check may not have completed yet
Jun 06 23:10:45 host130 kubelet[745]: I0606 23:10:45.053080     745 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 741bc5c08bfef3778bfadbec79ed135592118e8b773fc7558457aa97cab74f3c
Jun 06 23:10:45 host130 kubelet[745]: I0606 23:10:45.147762     745 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: b1cd5958f0e4e104b5ec17421e8d7fcb6dbcf821b8c7532d4b6eb34091c53cd0
Jun 06 23:10:45 host130 kubelet[745]: I0606 23:10:45.159402     745 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 5c86edf7ed8bbacf1f31927e74b6efb86faf2b7d0fbc56d60b353c96f6684022
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.167552     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "de71eb2eab425009323bfecc7ef8c0e8040fad8e9256fc4087770d7936b07458"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.169511     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "d52305935f2a1e43b604048490af3e606648c83589c91eca27eac398211058a3"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.171561     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "ea4a28f4ed6569551421896e02ca844f8dd3b7e77386e6098a80819cba464b6c"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.174069     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "9b6c4c1bcef08e819c5da4698376714146910d16f68a138afaf7fbb0764ffe15"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.176156     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "bf1d080d7b2ae13f8a420349c489401ed1e3fc75ace448a0696792679e059f91"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.178330     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "e6cbb6ccb8565bff17a9a178a16534398854006cac0f0a7b11c30474d9bbd292"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.180363     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "9e0d241d8c4a98c826892cd7c66d5f14311e8445782011f011209c4bb771b162"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.182308     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "0886b1dfeea4e8b03f3ea10560056f2a7626e95e707ef46218f3abc0dd35baab"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.184622     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "9d86371cd1ac7219608afa232ca2b23c99f9dd629a4bcef73f5181a614729a5b"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.186742     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "ff8721f658476e53c19815836c76ab3aeae798617d3cd98b1981c6a0016c891b"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.190103     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "7012b50d3400fa2a591528e0d8902821a2fc2b29c39b6a864603b774ef340805"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.192065     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "b63e8fc5c2b04e4c78dc62256c4f997afd1890944b4c130beb4710d510640677"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.193892     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "0eb746957cf4d046026ab7a761ab1344768c2add29fbed11b3d29bb9f9c9643b"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.195737     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "74d22a4d939b706d039eb73f74fc1f7e742b069d1965255b082ca0afcfd74e37"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.197415     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "bb00cfe5f951ad6af376cce7b3dd478564740b06d31868ad7eef57c2c7366d71"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.199065     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "2ca2f035d12ca2dc16f90652302d23a97afe424888e614e2cae75847d6af89a3"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.201087     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "5bbd852a97fb17701e22aed690a52b6e555b80de561f177ede53b43d4617c3f0"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.203245     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "bc45adfae2224ca18f1727436ac8991706594b58ab82cab2d11336461aaeaa50"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.205600     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "b3ef7b5e0b432818335ad3554a3524c5d55ce35ebc14fe539828386ad151b74a"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.209734     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "65816650c6598eec9a41b25d8d22642bb78ae3f8c3cda40e217d256f39f96508"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.212200     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "52f0bb76b9b9203f73ef7ba9622de8c29ccb962c6bc07060893eba27302c46e6"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.215003     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "f5df20b8cb08ac77fdd1e2859cd99176b5a8d8d12c3773120e2fccbcf5a4327e"
Jun 06 23:10:45 host130 kubelet[745]: E0606 23:10:45.215719     745 kubelet.go:1799] skipping pod synchronization - container runtime status check may not have completed yet
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.217029     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "32b1a69ed224e1d54933f3973ceded4edd2c4e3a8646d9948fbd9aaaee406e7d"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.219808     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "4da7b0c5f766179a4b2078eb746ccfebf562dea851eddb83e142f69e55c5ebd5"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.226337     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "f32487377aa4868e328abc5f7074a0eea527604e0f5660872545e7dcb8cde12e"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.227019     745 cni.go:333] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "fa489830fa15effd93111ca14c6784d3b9e655747e907b5b488e16aa7baefa07"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.228886     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "d1d90e727ca7e49c317c7a4982d9ab3a17ee6170697fca52c9c21de40d585ff3"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.231156     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "496b069b37cb34d4d7f656a7abd0a61187fce885d5a39be11465e0728560d50e"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.233238     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "569ffbc5e2a60a2c66dfda4aefa0f0ba9b1766c58d9c966165e00560d0e7fd5b"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.235511     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "a2cea9239bba954376df32a12e704932db30aab6aed62277e22b668a8cb23154"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.238485     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "a570a5b54d665471dcfc668a0893d5973c51258b4c70a8bc3d33e5838c22b9c9"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.240809     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "68c709693271e133c2be685d258e8bd61bb6b366003eb7eace9e3afeba8ad9ba"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.243069     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "9fac7d9682ea687ad8e7ed6cf8baf30c5515da52bd15314a6b2c8d201aa99f94"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.248752     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "d348e0c505dcbdba915a442a443878f2c66cb8f45ab8db323203374853ef9ec8"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.276150     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "0037e11013a6f01cc511b52a969ea7e339d6c9866c28a664ac9162f8a8c5ca19"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.278760     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "2610c6554b8059623ed08da15f217349ceca3c613546b3389af50c0df339725a"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.281391     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "74accf1e930b4e72eea90d2a33bbdbce673ad71d347b8bf74ac6a3aaa429f154"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.283693     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "4e5102f9ce9b5c05e47f9d903735dfeb25d0941e43c80d6e613bef234e294dbc"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.286174     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "cdae51a3886ac8889eccb231a1c186cefb2abd97430a11d738509298d4ec47ea"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.288881     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "e22547a1c210ea41522d4b03b5ed93ca9b9198fc724b77348732249503547017"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.291139     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "c153a1d0ee29bfab1622c99c923ef94f04fc453edb7fdadf535b055e03b4a0b2"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.293170     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "ad792b9facc4baa0f27fdf529953bc869f8861dd2831960219340f2122697b2c"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.295188     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "209e874dfa79c8e1af467ee10fee003d4a51757848bef93c9baa048216b7c92a"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.297039     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "7909eb9c2e814e2decb756c13e9d84d3fb1d41be50ddd8a18c973ca3f1db7646"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.298982     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "a37fbcdcf3bc23238a732e62010ddd8e8d5eab322469ebcba29959d856d2149e"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.300851     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "30a067a689eae85b6b1e731c576853a5e8d56a58ea95dc46d7c373598474a740"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.302555     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "bb5bd46766d47352a1c66005667595c8330104362363ded3a9cdd84df019b360"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.304742     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "9432f8e349f3ced4f7b86322132efff8c19dd7420c953bfa17bf4aa45983a4f7"
Jun 06 23:10:45 host130 kubelet[745]: E0606 23:10:45.305408     745 controller.go:136] failed to ensure node lease exists, will retry in 800ms, error: Get "https://apiserver.cluster.local:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/host130?timeout=10s": dial tcp 192.168.152.130:6443: connect: connection refused
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.307087     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "8c7e31a22f9285e76ac086a8eab831f51f8a8707a06b7bf567e1b6186630ecd0"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.309306     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "f1e66c84569ff5e33e9b2f6e7c70aaca73c02be4df0febbc0118877de4cb9b11"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.311114     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-b4bf78944-zhmcx_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "8a155187c686661bf2989369c6604f478f21ce367f4757a31fb027bc33b09efb"
Jun 06 23:10:45 host130 kubelet[745]: E0606 23:10:45.376840     745 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://apiserver.cluster.local:6443/api/v1/nodes?fieldSelector=metadata.name%3Dhost130&limit=500&resourceVersion=0": dial tcp 192.168.152.130:6443: connect: connection refused
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.408476     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "yurt-app-manager-67f95668df-mwclt_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "37a662dbceffaa9cf9a37eca0af734849551355325b414a3424bbd0beef03128"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.411636     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "yurt-app-manager-67f95668df-mwclt_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "4b7229ef08fb91f009ad635a35722d6d922dfcb352d6b271091abfe432852423"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.414832     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "yurt-app-manager-67f95668df-mwclt_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "93ed62ef986e49f93d2f818dd31816e241a6ac7fe433a0c6bd3df06775436cbf"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.418147     745 cni.go:333] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "3ff72ceceaef6e46b4bf97423d9d4e0ea09b14a4ca3aa58914ae14d73b9eba78"
Jun 06 23:10:45 host130 kubelet[745]: E0606 23:10:45.432811     745 reflector.go:127] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://apiserver.cluster.local:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dhost130&limit=500&resourceVersion=0": dial tcp 192.168.152.130:6443: connect: connection refused
Jun 06 23:10:45 host130 kubelet[745]: E0606 23:10:45.444596     745 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://apiserver.cluster.local:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.152.130:6443: connect: connection refused
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.462544     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "yurt-app-manager-67f95668df-mwclt_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "dde6ee14ec6acd0f207e7cdcbbe9d33471d2cdb9af7168906fef1f7b1973037e"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.465039     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "yurt-app-manager-67f95668df-mwclt_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "25cfdbd900a45bfaa8521e71e78bf0b3c138c33b313c7bc95be29c5233e0ecfb"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.467732     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "yurt-app-manager-67f95668df-mwclt_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "19ff5db7a12840a1ff1be998b2b318db5f1461af779b33351d9dc5651ae03017"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.470049     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "yurt-app-manager-67f95668df-mwclt_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "6d58dd3851f358625cc355e06f93bdd2fae2132214c41ba10347eceb4f06685f"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.471055     745 cni.go:333] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "5328b49065c19f2e7bd9f7798ff5f576406ba38e311bd34a670d096df8140ca8"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.503820     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "yurt-app-manager-67f95668df-mwclt_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "e17074792f8477bc3d98fb5c636fe56b9fa831b3462171263c0142b63a642dce"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.505796     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "yurt-app-manager-67f95668df-mwclt_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "b5f5fa7679042cb38177e8c1fa948c82ce64eb9b247189c2a6d3d81a983f4909"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.507955     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "yurt-app-manager-67f95668df-mwclt_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "437fe711744a873e843db29c051128e1492f002bb08063a6048aa1203445f892"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.509724     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "yurt-app-manager-67f95668df-mwclt_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "5f9ba53ae1b22bec420293adc2e51ca898b7b3b0d8e2c6d678154e7756b75f1b"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.511792     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "yurt-app-manager-67f95668df-mwclt_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "f5ea7e64761df3ea3b561907a45bfc5077f441ecddb47ca0b2a5213341d89fdc"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.512433     745 cni.go:333] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "5401b78308c9fc69abd67e6850f97e95c6ab437a2307f58b52caf0d1852a0118"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.545526     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "yurt-app-manager-67f95668df-mwclt_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "7379750a00eccdecc7102f23107e224ff29c17f97e2fc0e64963afb10ff06e85"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.549530     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "yurt-app-manager-67f95668df-mwclt_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "784f877b23d118636beb38474f7ab6944f7d00725003a667d1707226db0d5546"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.552792     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "yurt-app-manager-67f95668df-mwclt_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "c1f1643b49fe184d520d1ab550095d0b91586c7fca52a47a2f3c1b61c388be82"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.554049     745 cni.go:333] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "50cf69cfb0826c2298d6c29685ea6744f77f6875d6a1fa479a663d14c14a812c"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.587282     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "yurt-app-manager-67f95668df-mwclt_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "2bf19f1b7a5f9f21630cef2f57d6f5dc529f5503aa53a62b81614e0bc4f6b054"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.591036     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "yurt-app-manager-67f95668df-mwclt_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "26ac565f0e7779846cddea438bc433f1ef8e087a84cd3e28d46aebeda5d57867"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.593282     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "yurt-app-manager-67f95668df-mwclt_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "52f7ff3ad97c4c83c956bb5f34b8a30597875b0442bca375cdb7b211cc82307f"
Jun 06 23:10:45 host130 kubelet[745]: W0606 23:10:45.594955     745 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "yurt-app-manager-67f95668df-mwclt_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "9ecab1c259aa750fbd236bb8cfdf4c0c68066a2815566f430902d706a95a7e7a"
// ...
rambohe-ch commented 2 years ago

@windydayc It looks like the yurthub component is waiting for its client certificate, so please check why the client certificate generation failed.
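
For example, a few commands that may help narrow it down (a sketch only; it assumes the yurthub static pod on this node is named yurt-hub-host130, and <pending-csr-name> is a placeholder):

kubectl get csr                                  # list CSRs and their conditions
kubectl describe csr <pending-csr-name>          # check the requestor and signerName
kubectl -n kube-system logs yurt-hub-host130     # yurthub's own view of the certificate bootstrap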

windydayc commented 2 years ago
[root@host130 openyurt]# kubectl get csr
NAME        AGE     SIGNERNAME                            REQUESTOR                                              CONDITION
csr-95xfz   38m     kubernetes.io/kubelet-serving         system:serviceaccount:kube-system:yurt-tunnel-server   Pending
csr-bvtr8   37m     kubernetes.io/kube-apiserver-client   system:bootstrap:40d5lb                                Pending
csr-clq8l   38m     kubernetes.io/kube-apiserver-client   system:serviceaccount:kube-system:yurt-tunnel-server   Pending
csr-drn7z   19m     kubernetes.io/kube-apiserver-client   system:bootstrap:40d5lb                                Pending
csr-f7hpc   8m15s   kubernetes.io/kubelet-serving         system:serviceaccount:kube-system:yurt-tunnel-server   Pending
csr-g6zls   23m     kubernetes.io/kube-apiserver-client   system:serviceaccount:kube-system:yurt-tunnel-server   Pending
csr-kqx8k   28m     kubernetes.io/kube-apiserver-client   system:bootstrap:40d5lb                                Pending
csr-n44sr   33m     kubernetes.io/kube-apiserver-client   system:bootstrap:40d5lb                                Pending
csr-qrclx   7m27s   kubernetes.io/kube-apiserver-client   system:bootstrap:40d5lb                                Pending
csr-r9fp8   23m     kubernetes.io/kubelet-serving         system:serviceaccount:kube-system:yurt-tunnel-server   Pending
csr-rv52j   8m14s   kubernetes.io/kube-apiserver-client   system:serviceaccount:kube-system:yurt-tunnel-server   Pending
csr-rz522   24m     kubernetes.io/kube-apiserver-client   system:bootstrap:40d5lb                                Pending
csr-svrb5   14m     kubernetes.io/kube-apiserver-client   system:bootstrap:40d5lb                                Pending
csr-wlgr2   39m     kubernetes.io/kube-apiserver-client   system:bootstrap:40d5lb                                Pending

yurt-controller-manager's log:

[root@host130 openyurt]# kubectl logs yurt-controller-manager-7c7bf76c77-44lbq -n kube-system
yurtcontroller-manager version: projectinfo.Info{GitVersion:"-8204290", GitCommit:"8204290", BuildDate:"2022-06-06T02:18:00Z", GoVersion:"go1.17.1", Compiler:"gc", Platform:"linux/amd64"}
I0607 02:19:00.349609       1 controllermanager.go:370] FLAG: --add_dir_header="false"
I0607 02:19:00.349679       1 controllermanager.go:370] FLAG: --alsologtostderr="false"
I0607 02:19:00.349683       1 controllermanager.go:370] FLAG: --contention-profiling="false"
I0607 02:19:00.349688       1 controllermanager.go:370] FLAG: --controller-start-interval="0s"
I0607 02:19:00.349692       1 controllermanager.go:370] FLAG: --controllers="[*]"
I0607 02:19:00.349700       1 controllermanager.go:370] FLAG: --enable-leader-migration="false"
I0607 02:19:00.349703       1 controllermanager.go:370] FLAG: --enable-taint-manager="true"
I0607 02:19:00.349706       1 controllermanager.go:370] FLAG: --feature-gates=""
I0607 02:19:00.349725       1 controllermanager.go:370] FLAG: --help="false"
I0607 02:19:00.349728       1 controllermanager.go:370] FLAG: --kube-api-burst="100"
I0607 02:19:00.349960       1 controllermanager.go:370] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
I0607 02:19:00.349965       1 controllermanager.go:370] FLAG: --kube-api-qps="50"
I0607 02:19:00.349970       1 controllermanager.go:370] FLAG: --kubeconfig=""
I0607 02:19:00.349972       1 controllermanager.go:370] FLAG: --large-cluster-size-threshold="50"
I0607 02:19:00.349975       1 controllermanager.go:370] FLAG: --leader-elect="true"
I0607 02:19:00.349978       1 controllermanager.go:370] FLAG: --leader-elect-lease-duration="15s"
I0607 02:19:00.349981       1 controllermanager.go:370] FLAG: --leader-elect-renew-deadline="10s"
I0607 02:19:00.349984       1 controllermanager.go:370] FLAG: --leader-elect-resource-lock="leases"
I0607 02:19:00.349986       1 controllermanager.go:370] FLAG: --leader-elect-resource-name=""
I0607 02:19:00.349989       1 controllermanager.go:370] FLAG: --leader-elect-resource-namespace=""
I0607 02:19:00.349991       1 controllermanager.go:370] FLAG: --leader-elect-retry-period="2s"
I0607 02:19:00.349994       1 controllermanager.go:370] FLAG: --leader-migration-config=""
I0607 02:19:00.350010       1 controllermanager.go:370] FLAG: --log-flush-frequency="5s"
I0607 02:19:00.350013       1 controllermanager.go:370] FLAG: --log_backtrace_at=":0"
I0607 02:19:00.350019       1 controllermanager.go:370] FLAG: --log_dir=""
I0607 02:19:00.350022       1 controllermanager.go:370] FLAG: --log_file=""
I0607 02:19:00.350025       1 controllermanager.go:370] FLAG: --log_file_max_size="1800"
I0607 02:19:00.350045       1 controllermanager.go:370] FLAG: --logtostderr="true"
I0607 02:19:00.350064       1 controllermanager.go:370] FLAG: --master=""
I0607 02:19:00.350068       1 controllermanager.go:370] FLAG: --min-resync-period="12h0m0s"
I0607 02:19:00.350085       1 controllermanager.go:370] FLAG: --node-eviction-rate="0.1"
I0607 02:19:00.350089       1 controllermanager.go:370] FLAG: --node-monitor-grace-period="40s"
I0607 02:19:00.350091       1 controllermanager.go:370] FLAG: --node-startup-grace-period="1m0s"
I0607 02:19:00.350095       1 controllermanager.go:370] FLAG: --one_output="false"
I0607 02:19:00.350098       1 controllermanager.go:370] FLAG: --pod-eviction-timeout="5m0s"
I0607 02:19:00.350101       1 controllermanager.go:370] FLAG: --profiling="true"
I0607 02:19:00.350104       1 controllermanager.go:370] FLAG: --secondary-node-eviction-rate="0.01"
I0607 02:19:00.350107       1 controllermanager.go:370] FLAG: --skip_headers="false"
I0607 02:19:00.350109       1 controllermanager.go:370] FLAG: --skip_log_headers="false"
I0607 02:19:00.350112       1 controllermanager.go:370] FLAG: --stderrthreshold="2"
I0607 02:19:00.350115       1 controllermanager.go:370] FLAG: --unhealthy-zone-threshold="0.55"
I0607 02:19:00.350117       1 controllermanager.go:370] FLAG: --v="2"
I0607 02:19:00.350120       1 controllermanager.go:370] FLAG: --version="false"
I0607 02:19:00.350123       1 controllermanager.go:370] FLAG: --vmodule=""
W0607 02:19:00.350149       1 client_config.go:615] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0607 02:19:00.353931       1 leaderelection.go:248] attempting to acquire leader lease kube-system/yurt-controller-manager...
I0607 02:19:00.374042       1 leaderelection.go:258] successfully acquired lease kube-system/yurt-controller-manager
I0607 02:19:00.380266       1 event.go:282] Event(v1.ObjectReference{Kind:"Lease", Namespace:"kube-system", Name:"yurt-controller-manager", UID:"e4da33fc-a4f8-427b-865c-52d10bafa21a", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"536", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' host130_d4198914-facc-4dd0-9c4f-546363b9d9f6 became leader
I0607 02:19:00.380704       1 controllermanager.go:346] Starting "nodelifecycle"
I0607 02:19:00.382397       1 node_lifecycle_controller.go:390] Sending events to api server.
I0607 02:19:00.475104       1 taint_manager.go:167] Sending events to api server.
I0607 02:19:00.475180       1 node_lifecycle_controller.go:518] Controller will reconcile labels.
I0607 02:19:00.475209       1 controllermanager.go:361] Started "nodelifecycle"
I0607 02:19:00.475231       1 controllermanager.go:346] Starting "yurtcsrapprover"
I0607 02:19:00.475409       1 node_lifecycle_controller.go:552] Starting node controller
I0607 02:19:00.475431       1 shared_informer.go:240] Waiting for caches to sync for taint
I0607 02:19:00.478656       1 csrapprover.go:120] v1.CertificateSigningRequest is supported.
I0607 02:19:00.478836       1 controllermanager.go:361] Started "yurtcsrapprover"
I0607 02:19:00.479524       1 csrapprover.go:180] starting the crsapprover
I0607 02:19:00.576273       1 shared_informer.go:247] Caches are synced for taint
I0607 02:19:00.576450       1 node_lifecycle_controller.go:783] Controller observed a new Node: "host130"
I0607 02:19:00.576471       1 controller_utils.go:178] Recording Registered Node host130 in Controller event message for node host130
I0607 02:19:00.576478       1 taint_manager.go:191] Starting NoExecuteTaintManager
I0607 02:19:00.576491       1 node_lifecycle_controller.go:1411] Initializing eviction metric for zone:
W0607 02:19:00.576617       1 node_lifecycle_controller.go:1026] Missing timestamp for Node host130. Assuming now as a timestamp.
I0607 02:19:00.576638       1 node_lifecycle_controller.go:882] Node host130 is NotReady as of 2022-06-07 02:19:00.576632689 +0000 UTC m=+0.311668979. Adding it to the Taint queue.
I0607 02:19:00.576728       1 node_lifecycle_controller.go:1177] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0607 02:19:00.576960       1 controller_utils.go:127] Update ready status of pods on node [host130]
I0607 02:19:00.577053       1 controller_utils.go:127] Update ready status of pods on node [host130]
I0607 02:19:00.577057       1 controller_utils.go:127] Update ready status of pods on node [host130]
I0607 02:19:00.577102       1 controller_utils.go:149] Updating ready status of pod yurt-controller-manager-7c7bf76c77-44lbq to false
I0607 02:19:00.577182       1 event.go:282] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"host130", UID:"c5837478-65db-4de3-80e5-b7b1a6de53cc", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node host130 event: Registered Node host130 in Controller
I0607 02:19:00.577196       1 controller_utils.go:149] Updating ready status of pod kube-apiserver-host130 to false
I0607 02:19:00.577215       1 controller_utils.go:127] Update ready status of pods on node [host130]
I0607 02:19:00.577272       1 controller_utils.go:149] Updating ready status of pod yurt-hub-host130 to false
I0607 02:19:00.577569       1 controller_utils.go:127] Update ready status of pods on node [host130]
I0607 02:19:00.577700       1 controller_utils.go:127] Update ready status of pods on node [host130]
I0607 02:19:00.577757       1 controller_utils.go:149] Updating ready status of pod etcd-host130 to false
I0607 02:19:00.609828       1 controller_utils.go:127] Update ready status of pods on node [host130]
I0607 02:19:00.610206       1 controller_utils.go:127] Update ready status of pods on node [host130]
I0607 02:19:05.578105       1 node_lifecycle_controller.go:882] Node host130 is NotReady as of 2022-06-07 02:19:05.578094749 +0000 UTC m=+5.313131038. Adding it to the Taint queue.
I0607 02:19:09.132964       1 controller_utils.go:127] Update ready status of pods on node [host130]
I0607 02:19:10.578479       1 node_lifecycle_controller.go:906] Node host130 is healthy again, removing all taints
I0607 02:19:10.578506       1 node_lifecycle_controller.go:1204] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0607 02:19:11.095300       1 csrapprover.go:163] non-approved and non-denied csr, enqueue: csr-wlgr2
E0607 02:19:11.122635       1 csrapprover.go:274] failed to approve yurt-csr(csr-wlgr2), certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:11.122947       1 csrapprover.go:206] sync csr csr-wlgr2 failed with : certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:11.137668       1 csrapprover.go:274] failed to approve yurt-csr(csr-wlgr2), certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:11.137703       1 csrapprover.go:206] sync csr csr-wlgr2 failed with : certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:11.161940       1 csrapprover.go:274] failed to approve yurt-csr(csr-wlgr2), certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:11.162002       1 csrapprover.go:206] sync csr csr-wlgr2 failed with : certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:11.207218       1 csrapprover.go:274] failed to approve yurt-csr(csr-wlgr2), certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:11.207244       1 csrapprover.go:206] sync csr csr-wlgr2 failed with : certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:11.291975       1 csrapprover.go:274] failed to approve yurt-csr(csr-wlgr2), certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:11.292019       1 csrapprover.go:206] sync csr csr-wlgr2 failed with : certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:11.456906       1 csrapprover.go:274] failed to approve yurt-csr(csr-wlgr2), certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:11.456931       1 csrapprover.go:206] sync csr csr-wlgr2 failed with : certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:11.782191       1 csrapprover.go:274] failed to approve yurt-csr(csr-wlgr2), certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:11.782239       1 csrapprover.go:206] sync csr csr-wlgr2 failed with : certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:12.427328       1 csrapprover.go:274] failed to approve yurt-csr(csr-wlgr2), certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:12.427386       1 csrapprover.go:206] sync csr csr-wlgr2 failed with : certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:13.711437       1 csrapprover.go:274] failed to approve yurt-csr(csr-wlgr2), certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:13.711477       1 csrapprover.go:206] sync csr csr-wlgr2 failed with : certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:16.275968       1 csrapprover.go:274] failed to approve yurt-csr(csr-wlgr2), certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:16.276021       1 csrapprover.go:206] sync csr csr-wlgr2 failed with : certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:21.400319       1 csrapprover.go:274] failed to approve yurt-csr(csr-wlgr2), certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:21.400391       1 csrapprover.go:206] sync csr csr-wlgr2 failed with : certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:31.645151       1 csrapprover.go:274] failed to approve yurt-csr(csr-wlgr2), certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:31.645175       1 csrapprover.go:206] sync csr csr-wlgr2 failed with : certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:52.132350       1 csrapprover.go:274] failed to approve yurt-csr(csr-wlgr2), certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:19:52.132390       1 csrapprover.go:206] sync csr csr-wlgr2 failed with : certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:20:33.096231       1 csrapprover.go:274] failed to approve yurt-csr(csr-wlgr2), certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:20:33.096271       1 csrapprover.go:206] sync csr csr-wlgr2 failed with : certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
I0607 02:20:46.578195       1 csrapprover.go:163] non-approved and non-denied csr, enqueue: csr-95xfz
I0607 02:20:46.580881       1 csrapprover.go:163] non-approved and non-denied csr, enqueue: csr-clq8l
E0607 02:20:46.592516       1 csrapprover.go:274] failed to approve yurt-csr(csr-clq8l), certificatesigningrequests.certificates.k8s.io "csr-clq8l" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:20:46.592577       1 csrapprover.go:206] sync csr csr-clq8l failed with : certificatesigningrequests.certificates.k8s.io "csr-clq8l" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:20:46.592640       1 csrapprover.go:274] failed to approve yurt-csr(csr-95xfz), certificatesigningrequests.certificates.k8s.io "csr-95xfz" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kubelet-serving"
E0607 02:20:46.592650       1 csrapprover.go:206] sync csr csr-95xfz failed with : certificatesigningrequests.certificates.k8s.io "csr-95xfz" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kubelet-serving"
E0607 02:20:46.618502       1 csrapprover.go:274] failed to approve yurt-csr(csr-clq8l), certificatesigningrequests.certificates.k8s.io "csr-clq8l" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:20:46.618531       1 csrapprover.go:206] sync csr csr-clq8l failed with : certificatesigningrequests.certificates.k8s.io "csr-clq8l" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"
E0607 02:20:46.620667       1 csrapprover.go:274] failed to approve yurt-csr(csr-95xfz), certificatesigningrequests.certificates.k8s.io "csr-95xfz" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kubelet-serving"
E0607 02:20:46.620693       1 csrapprover.go:206] sync csr csr-95xfz failed with : certificatesigningrequests.certificates.k8s.io "csr-95xfz" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kubelet-serving"
//...
rambohe-ch commented 2 years ago

@windydayc As you can see, yurt-controller-manager has no permission to approve the CSR. The detailed logs are as follows:

failed to approve yurt-csr(csr-wlgr2), certificatesigningrequests.certificates.k8s.io "csr-wlgr2" is forbidden: user not permitted to approve requests with signerName "kubernetes.io/kube-apiserver-client"

I think the RBAC settings for yurt-controller-manager are not correct.
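
As a temporary workaround (not a fix for the RBAC itself), the pending CSRs could be approved manually with an admin kubeconfig, for example:

kubectl certificate approve csr-wlgr2    # repeat for the other pending CSRs listed above

This is only a sketch to unblock yurthub while the ClusterRole is being corrected.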

windydayc commented 2 years ago
[root@host130 openyurt]# kubectl get clusterrole.rbac.authorization.k8s.io/yurt-controller-manager -oyaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{"rbac.authorization.kubernetes.io/autoupdate":"true"},"name":"yurt-controller-manager"},"rules":[{"apiGroups":[""],"resources":["nodes"],"verbs":["delete","get","list","patch","update","watch"]},{"apiGroups":[""],"resources":["nodes/status"],"verbs":["patch","update"]},{"apiGroups":[""],"resources":["pods/status"],"verbs":["update"]},{"apiGroups":[""],"resources":["pods"],"verbs":["delete","list","watch"]},{"apiGroups":["","events.k8s.io"],"resources":["events"],"verbs":["create","patch","update"]},{"apiGroups":["coordination.k8s.io"],"resources":["leases"],"verbs":["create","delete","get","patch","update","list","watch"]},{"apiGroups":["","apps"],"resources":["daemonsets"],"verbs":["list","watch"]},{"apiGroups":["certificates.k8s.io"],"resources":["certificatesigningrequests"],"verbs":["get","list","watch"]},{"apiGroups":["certificates.k8s.io"],"resources":["certificatesigningrequests/approval"],"verbs":["update"]},{"apiGroups":["certificates.k8s.io"],"resourceNames":["kubernetes.io/legacy-unknown"],"resources":["signers"],"verbs":["approve"]}]}
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: "2022-06-07T02:18:37Z"
  managedFields:
  - apiVersion: rbac.authorization.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
          f:rbac.authorization.kubernetes.io/autoupdate: {}
      f:rules: {}
    manager: kubectl-client-side-apply
    operation: Update
    time: "2022-06-07T02:18:37Z"
  name: yurt-controller-manager
  resourceVersion: "241"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/yurt-controller-manager
  uid: 779b42e5-ebe0-490e-b5ae-1ee7081de4f1
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
  - update
- apiGroups:
  - ""
  resources:
  - pods/status
  verbs:
  - update
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - delete
  - list
  - watch
- apiGroups:
  - ""
  - events.k8s.io
  resources:
  - events
  verbs:
  - create
  - patch
  - update
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - create
  - delete
  - get
  - patch
  - update
  - list
  - watch
- apiGroups:
  - ""
  - apps
  resources:
  - daemonsets
  verbs:
  - list
  - watch
- apiGroups:
  - certificates.k8s.io
  resources:
  - certificatesigningrequests
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - certificates.k8s.io
  resources:
  - certificatesigningrequests/approval
  verbs:
  - update
- apiGroups:
  - certificates.k8s.io
  resourceNames:
  - kubernetes.io/legacy-unknown
  resources:
  - signers
  verbs:
  - approve
[root@host130 openyurt]# kubectl get clusterrolebinding.rbac.authorization.k8s.io/yurt-controller-manager -oyaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding","metadata":{"annotations":{},"name":"yurt-controller-manager"},"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"ClusterRole","name":"yurt-controller-manager"},"subjects":[{"kind":"ServiceAccount","name":"yurt-controller-manager","namespace":"kube-system"}]}
  creationTimestamp: "2022-06-07T02:18:37Z"
  managedFields:
  - apiVersion: rbac.authorization.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
      f:roleRef:
        f:apiGroup: {}
        f:kind: {}
        f:name: {}
      f:subjects: {}
    manager: kubectl-client-side-apply
    operation: Update
    time: "2022-06-07T02:18:37Z"
  name: yurt-controller-manager
  resourceVersion: "242"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/yurt-controller-manager
  uid: b25d9b61-6249-4707-92f3-49ded09dcbab
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: yurt-controller-manager
subjects:
- kind: ServiceAccount
  name: yurt-controller-manager
  namespace: kube-system
Congrool commented 2 years ago
- apiGroups:
  - certificates.k8s.io
  resourceNames:
  - kubernetes.io/legacy-unknown
  resources:
  - signers
  verbs:
  - approve

It should be

  - apiGroups:
      - certificates.k8s.io
    resources:
      - signers
    resourceNames:
      - kubernetes.io/kube-apiserver-client
      - kubernetes.io/kubelet-serving
    verbs:
      - approve

You can recreate the RBAC with what is in config/setup/yurt-controller-manager.yaml.
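Once the corrected rule is applied, the approver in yurt-controller-manager should pick up the pending CSRs on its next retry. As a hedged stop-gap (not part of the suggested fix above), a cluster admin can also approve the stuck CSRs by hand:

# Manually approve a pending CSR (name taken from the earlier log); this requires
# an identity that is itself allowed to approve for that signer, e.g. cluster-admin.
kubectl certificate approve csr-95xfz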

Congrool commented 2 years ago

BTW, is it a bug when using yurtadm init?

windydayc commented 2 years ago

BTW, is it a bug when using yurtadm init?

I think so too, because I didn't do anything other than run yurtadm init.

rambohe-ch commented 2 years ago

@windydayc The reason may be that the yurthub version does not match the yurt-controller-manager version.

windydayc commented 2 years ago

@rambohe-ch How can I judge whether the yurthub and yurt-controller-manager versions match? I found that the two images are: (screenshot omitted)
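A hedged way to compare the two versions without a screenshot is to print the images of the yurt components directly, for example:

# Show pod names and container images of the OpenYurt components in kube-system;
# the yurthub and yurt-controller-manager image tags should normally match.
kubectl -n kube-system get pods -o custom-columns='NAME:.metadata.name,IMAGES:.spec.containers[*].image' | grep yurt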

windydayc commented 2 years ago

Following @Congrool's suggestion, I solved this problem by re-applying the YAML:

[root@host130 openyurt]# kubectl apply -f config/setup/yurt-controller-manager.yaml

Then I restarted the kubelet and created a pod, but the KUBERNETES env vars in it do not point to the yurthub address 169.254.2.1:10268.

[root@host130 openyurt]# kubectl get pod -owide
NAME    READY   STATUS    RESTARTS   AGE   IP           NODE      NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          21m   100.64.0.8   host130   <none>           <none>

[root@host130 openyurt]# kubectl exec -it nginx bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@nginx:/# env | grep KUBERNETES
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PORT=443

@rambohe-ch It seems yurthub still has some problems?

rambohe-ch commented 2 years ago

@windydayc Please check the /etc/kubernetes/cache/kubelet/service/default/kubernetes file and see whether the default/kubernetes service in it has been mutated or not.
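For anyone following along, that check can be done roughly like this (the cache path is the one given above; on a node where yurthub serves kubelet from its cache and mutates the master service, the cached default/kubernetes service would carry the yurthub address, 169.254.2.1:10268 in this cluster, instead of the apiserver's clusterIP):

# Cached default/kubernetes service as kubelet sees it through yurthub (path as above).
cat /etc/kubernetes/cache/kubelet/service/default/kubernetes
# Compare with the service object stored in the apiserver.
kubectl -n default get service kubernetes -o yaml | grep -E 'clusterIP|port:'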

windydayc commented 2 years ago

In yurthub's log (screenshot omitted): because it is a cloud node, the cache manager is disabled.

rambohe-ch commented 2 years ago

@windydayc Would you be able to update the latest status of this issue?

windydayc commented 2 years ago

@windydayc Would you be able to update the latest status of this issue?

yurtadm init did not reset the kubelet, which caused the problems above. I will improve the yurtadm command later.
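For context on what resetting the kubelet involves here: in OpenYurt's manual yurthub setup, the kubelet on a converted node is pointed at the local yurthub endpoint instead of the apiserver, roughly as sketched below. The paths, the 10261 port, and the drop-in name are assumptions taken from that manual setup flow, not necessarily what yurtadm will end up doing:

# Kubeconfig that routes kubelet traffic through the local yurthub proxy endpoint.
mkdir -p /var/lib/openyurt
cat << 'EOF' > /var/lib/openyurt/kubelet.conf
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://127.0.0.1:10261
  name: default-cluster
contexts:
- context:
    cluster: default-cluster
    namespace: default
    user: default-auth
  name: default-context
current-context: default-context
EOF

# Point the kubelet service at that kubeconfig via a systemd drop-in
# (alternatively, edit the existing kubeadm drop-in), then restart the kubelet.
cat << 'EOF' > /etc/systemd/system/kubelet.service.d/openyurt.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig= --kubeconfig=/var/lib/openyurt/kubelet.conf"
EOF
systemctl daemon-reload && systemctl restart kubelet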