kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Error: X Exiting due to MK_USAGE: Due to networking limitations of driver none, ingress addon is not supported. Try using a different driver. #9301

Closed: zzguang520 closed this issue 3 years ago

zzguang520 commented 4 years ago

Steps to reproduce the issue:

1. minikube start --vm=true --driver=none --kubernetes-version=v1.19.2

* minikube v1.13.1 on Redhat 7.7
* Using the none driver based on user configuration
* Starting control plane node minikube in cluster minikube
* Running on localhost (CPUs=8, Memory=15969MB, Disk=456241MB) ...
* OS release is Red Hat Enterprise Linux Workstation 7.7 (Maipo)
* Preparing Kubernetes v1.19.2 on Docker 19.03.13 ...
* Configuring local host environment ...
*
! The 'none' driver is designed for experts who need to integrate with an existing VM
* Most users should use the newer 'docker' driver instead, which does not require root!
* For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
*
! kubectl and minikube configuration will be stored in /root
! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
*
  - sudo mv /root/.kube /root/.minikube $HOME
  - sudo chown -R $USER $HOME/.kube $HOME/.minikube
*
* This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
* Verifying Kubernetes components...
* Enabled addons: default-storageclass, storage-provisioner
* Done! kubectl is now configured to use "minikube" by default
2. minikube status
    minikube
    type: Control Plane
    host: Running
    kubelet: Running
    apiserver: Running
    kubeconfig: Configured

3. minikube version

    minikube version: v1.13.1
    commit: 1fd1f67f338cbab4b3e5a6e4c71c551f522ca138-dirty

4. minikube addons enable ingress

    X Exiting due to MK_USAGE: Due to networking limitations of driver none, ingress addon is not supported. Try using a different driver.
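
Based on the error text ("Try using a different driver"), a possible workaround is to recreate the cluster with the `docker` driver, which supports the ingress addon. This is only a sketch, assuming Docker is installed and the current user can run it without root; the flags may need adjusting for this environment:

```shell
# Remove the existing none-driver cluster (assumption: nothing in it needs preserving)
minikube delete

# Recreate the cluster with the docker driver, keeping the same Kubernetes version
minikube start --driver=docker --kubernetes-version=v1.19.2

# Enable the ingress addon on the new cluster
minikube addons enable ingress
```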


Optional: Full output of minikube logs command:

``` * ==> Docker <== * -- Logs begin at Tue 2020-08-11 02:19:54 CST, end at Tue 2020-09-22 16:41:32 CST. -- * Sep 22 13:42:27 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T13:42:27.088585642+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 13:43:18 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T13:43:18.792852575+08:00" level=error msg="Handler for GET /images/json returned error: write unix /var/run/docker.sock->@: write: broken pipe" * Sep 22 13:43:18 oc2542575527.ibm.com dockerd[10259]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11) * Sep 22 13:43:22 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T13:43:21.492127324+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 13:43:26 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T13:43:26.203284357+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 13:43:26 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T13:43:26.204147165+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 13:43:26 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T13:43:26.204210790+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 13:43:26 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T13:43:26.205586449+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 13:43:26 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T13:43:26.207436150+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 13:43:26 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T13:43:26.207514272+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 13:43:26 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T13:43:26.211988401+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 13:43:26 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T13:43:26.212058608+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 13:43:26 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T13:43:26.212127008+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 13:43:26 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T13:43:26.212183289+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 13:43:26 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T13:43:26.212245744+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 13:43:26 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T13:43:26.212291038+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 13:43:30 oc2542575527.ibm.com dockerd[10259]: 
time="2020-09-22T13:43:30.876225422+08:00" level=info msg="Container 1534776299a8cae48ef64ddc050ca63444f5c3abc99fcb6cfa997d3f17a1f898 failed to exit within 10 seconds of signal 15 - using the force" * Sep 22 13:43:31 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T13:43:31.393640058+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 16:06:00 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:05:59.348491981+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 16:06:00 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:05:59.772506992+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 16:06:00 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:05:59.802469264+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 16:06:00 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:05:59.802535705+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 16:06:00 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:05:59.802590776+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 16:06:00 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:05:59.804574811+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 16:06:00 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:06:00.201531639+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 16:06:00 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:06:00.201602818+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 16:06:00 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:06:00.232486206+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 16:06:00 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:06:00.232530768+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 16:06:00 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:06:00.232562096+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 16:06:00 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:06:00.259554717+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 16:06:03 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:06:03.884829645+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 16:06:08 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:06:08.641051210+08:00" level=info msg="Container 77813fc209fa7c85b6cd560996f0b3da0c4d6d41c0e53641ea6349757ce9b232 failed to exit within 10 seconds of signal 15 - using the force" * Sep 22 16:06:09 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:06:09.070821177+08:00" level=info msg="ignoring event" 
module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 16:09:32 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:09:31.132464286+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 16:09:32 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:09:31.689946454+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 16:09:32 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:09:31.689996539+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 16:09:32 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:09:31.962907102+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 16:09:32 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:09:32.107252063+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 16:09:32 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:09:32.107296093+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 16:09:32 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:09:32.295545096+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 16:09:32 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:09:32.295614556+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 16:09:32 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:09:32.458057980+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 16:09:32 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:09:32.614004648+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 16:09:32 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:09:32.618701698+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 16:09:32 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:09:32.785817100+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 16:09:36 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:09:36.190925981+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 16:09:36 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:09:36.363091801+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 16:30:38 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:30:37.268114093+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 16:30:38 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:30:38.013471451+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 16:30:38 oc2542575527.ibm.com dockerd[10259]: 
time="2020-09-22T16:30:38.316107987+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 16:30:38 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:30:38.440500929+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 16:30:38 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:30:38.441532580+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 16:30:38 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:30:38.443106610+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 16:30:38 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:30:38.443393990+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 16:30:38 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:30:38.444012888+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 16:30:38 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:30:38.450938101+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 16:30:38 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:30:38.452047580+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 16:30:42 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:30:42.315463877+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 22 16:30:47 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:30:47.080403193+08:00" level=info msg="Container b314f64a37167c77e7fac2842d05f05f525553081d829c498d200ceacb1027d7 failed to exit within 10 seconds of signal 15 - using the force" * Sep 22 16:30:47 oc2542575527.ibm.com dockerd[10259]: time="2020-09-22T16:30:47.390897642+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * * ==> container status <== * which: no crictl in (/root/.minikube/bin:/usr/lib64/qt-3.3/bin:/opt/ibm/java-x86_64-80/bin:/opt/apache-maven-3.6.1/bin:/opt/apache-ant-1.10.5/bin:/usr/local/tomcat/apache-tomcat-9.0.34/webapps/jenkins:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/ibm/c4eb/bin:/usr/kerberos/bin:/usr/local/python3/bin:/usr/local/git/bin:/usr/local/nodejs/bin:/root/bin) * sudo: crictl: command not found * CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES * 2341996643e5 bfe3a36ebd25 "/coredns -conf /etc…" 6 minutes ago Up 6 minutes k8s_coredns_coredns-f9fd979d6-j7bdn_kube-system_72a623d8-cd59-486b-816b-9f4c377cfc12_0 * b4c38d2bad28 bad58561c4be "/storage-provisioner" 6 minutes ago Up 6 minutes k8s_storage-provisioner_storage-provisioner_kube-system_d923a82e-8aa8-4e68-baee-6fc1905f5b8f_0 * ac2ad529c80a k8s.gcr.io/pause:3.2 "/pause" 6 minutes ago Up 6 minutes k8s_POD_coredns-f9fd979d6-j7bdn_kube-system_72a623d8-cd59-486b-816b-9f4c377cfc12_0 * b0ec727aa8ee k8s.gcr.io/pause:3.2 "/pause" 6 minutes ago Up 6 minutes k8s_POD_storage-provisioner_kube-system_d923a82e-8aa8-4e68-baee-6fc1905f5b8f_0 * 8f52419b9c64 d373dd5a8593 "/usr/local/bin/kube…" 6 minutes ago Up 6 minutes 
k8s_kube-proxy_kube-proxy-jg4m9_kube-system_61890f23-85b6-404d-be85-4a8d50f2beb5_0 * 33955b08cc50 k8s.gcr.io/pause:3.2 "/pause" 6 minutes ago Up 6 minutes k8s_POD_kube-proxy-jg4m9_kube-system_61890f23-85b6-404d-be85-4a8d50f2beb5_0 * 53a7d57e73ec 607331163122 "kube-apiserver --ad…" 7 minutes ago Up 7 minutes k8s_kube-apiserver_kube-apiserver-oc2542575527.ibm.com_kube-system_4d03a166b53c03150d184d584142348b_0 * 07809560c5db 0369cf4303ff "etcd --advertise-cl…" 7 minutes ago Up 7 minutes k8s_etcd_etcd-oc2542575527.ibm.com_kube-system_5414e0f6e1fd22aab1e6f90c20d3ca64_0 * eb0b1c0f5ce8 2f32d66b884f "kube-scheduler --au…" 7 minutes ago Up 7 minutes k8s_kube-scheduler_kube-scheduler-oc2542575527.ibm.com_kube-system_ff7d12f9e4f14e202a85a7c5534a3129_0 * c540fc1c7b70 8603821e1a7a "kube-controller-man…" 7 minutes ago Up 7 minutes k8s_kube-controller-manager_kube-controller-manager-oc2542575527.ibm.com_kube-system_e325ceb4265ecd0f1a16967a783fe6be_0 * 904e776002d4 k8s.gcr.io/pause:3.2 "/pause" 7 minutes ago Up 7 minutes k8s_POD_kube-scheduler-oc2542575527.ibm.com_kube-system_ff7d12f9e4f14e202a85a7c5534a3129_0 * e1cd70288feb k8s.gcr.io/pause:3.2 "/pause" 7 minutes ago Up 7 minutes k8s_POD_kube-controller-manager-oc2542575527.ibm.com_kube-system_e325ceb4265ecd0f1a16967a783fe6be_0 * 7f95c98b5e31 k8s.gcr.io/pause:3.2 "/pause" 7 minutes ago Up 7 minutes k8s_POD_kube-apiserver-oc2542575527.ibm.com_kube-system_4d03a166b53c03150d184d584142348b_0 * a8ea9b6316c4 k8s.gcr.io/pause:3.2 "/pause" 7 minutes ago Up 7 minutes k8s_POD_etcd-oc2542575527.ibm.com_kube-system_5414e0f6e1fd22aab1e6f90c20d3ca64_0 * 3817048ab211 e9045f04a53f "/bin/sh -c 'mvn -f …" 5 weeks ago Exited (1) 5 weeks ago zealous_ritchie * 4a82559b209f 4d83886dfcfd "/bin/sh -c 'mvn -f …" 5 weeks ago Exited (1) 5 weeks ago naughty_bartik * 05f6e0577453 48aa23d4c1c4 "/bin/sh -c 'mvn cle…" 5 weeks ago Exited (1) 5 weeks ago cranky_banach * 29ffb01e23c3 48aa23d4c1c4 "/bin/sh -c 'mvn -Pn…" 5 weeks ago Exited (1) 5 weeks ago silly_agnesi * 74d687046b77 48aa23d4c1c4 "/bin/sh -c 'mvn com…" 5 weeks ago Exited (1) 5 weeks ago angry_jennings * 7807d36e239f 48aa23d4c1c4 "/bin/sh -c 'mvn com…" 5 weeks ago Exited (1) 5 weeks ago pedantic_montalcini * 6957d3f40f96 4d83886dfcfd "/bin/sh -c 'mvn com…" 5 weeks ago Exited (1) 5 weeks ago trusting_swanson * e40a58e41442 4d83886dfcfd "/bin/sh -c 'mvn -f …" 5 weeks ago Exited (1) 5 weeks ago epic_bartik * c46ca0913a33 4d83886dfcfd "/bin/sh -c 'mvn -f …" 5 weeks ago Exited (1) 5 weeks ago stupefied_chatelet * 15edb351e692 4d83886dfcfd "/bin/sh -c 'mvn -f …" 5 weeks ago Exited (1) 5 weeks ago pedantic_grothendieck * ef37c2ddcf32 4d83886dfcfd "/bin/sh -c 'mvn -f …" 5 weeks ago Exited (1) 5 weeks ago busy_herschel * 490b625e3ad4 9288e6e275f7 "/bin/sh -c 'cp /roo…" 5 weeks ago Exited (1) 5 weeks ago sad_gagarin * a24a404d2d42 9288e6e275f7 "/bin/sh -c 'cp /roo…" 5 weeks ago Exited (1) 5 weeks ago recursing_visvesvaraya * 51c7857a9cd8 9288e6e275f7 "/bin/sh -c 'cp /roo…" 5 weeks ago Exited (1) 5 weeks ago infallible_jennings * 0e6d4f19db53 9288e6e275f7 "/bin/sh -c 'cp /roo…" 5 weeks ago Exited (1) 5 weeks ago serene_hertz * 282d500c6112 9288e6e275f7 "/bin/sh -c 'cp /roo…" 5 weeks ago Exited (1) 5 weeks ago confident_ritchie * f0f0399ddd47 9288e6e275f7 "/bin/sh -c 'cp /roo…" 5 weeks ago Exited (1) 5 weeks ago frosty_tu * c870eeb4a441 9288e6e275f7 "/bin/sh -c 'cp pom.…" 5 weeks ago Exited (1) 5 weeks ago goofy_snyder * 8bb3fc9f5079 24cb0887af6b "/docker-entrypoint.…" 7 weeks ago Exited (255) 6 weeks ago 0.0.0.0:8081->80/tcp 
distracted_dewdney * 3052f776cb93 24cb0887af6b "/docker-entrypoint.…" 7 weeks ago Created frosty_albattani * c6c4d611a360 24cb0887af6b "/docker-entrypoint.…" 7 weeks ago Created nervous_jennings * 892e7ad50409 24cb0887af6b "/docker-entrypoint.…" 7 weeks ago Exited (0) 7 weeks ago ecstatic_easley * 39cadedca6b3 harbor.iocc.ibm.com:7443/imsf/baseservice-apigateway:v2.0 "/bin/sh" 7 weeks ago Exited (0) 7 weeks ago laughing_gould * a3a782820941 665b23dc1c2c "/bin/sh" 7 weeks ago Exited (0) 7 weeks ago confident_blackwell * f8b70afba4c0 9455697288f7 "/bin/sh -c 'npm ins…" 8 weeks ago Exited (1) 8 weeks ago naughty_wright * c0778bcc4334 a003ddb37605 "/bin/sh -c 'chown +…" 2 months ago Exited (1) 2 months ago interesting_cannon * 0e296db6507b 84ea55cade78 "ls -la /deployments" 2 months ago Exited (0) 2 months ago lucid_feistel * a9e4382645a1 1fd7569ba965 "ls -la /deployments" 2 months ago Exited (0) 2 months ago confident_mendeleev * 67ba59375b36 ac043f394fce "ls -la /deployments" 2 months ago Exited (0) 2 months ago modest_solomon * c21fe1a772db c4fd5f0dbe02 "ls -la /deployments" 2 months ago Exited (0) 2 months ago admiring_meninsky * feb6516208ce c4fd5f0dbe02 "whoami" 2 months ago Exited (0) 2 months ago romantic_khayyam * 2afdbd4432a5 c4fd5f0dbe02 "stty size" 2 months ago Exited (0) 2 months ago wonderful_liskov * de2f92dcdff6 c4fd5f0dbe02 "whoami" 2 months ago Exited (0) 2 months ago interesting_ptolemy * 98d2923753b8 eb6baff64ce8 "/bin/sh -c 'npm ins…" 2 months ago Exited (1) 2 months ago quizzical_neumann * 55f78176a520 eb6baff64ce8 "/bin/sh -c 'npm ins…" 2 months ago Exited (1) 2 months ago happy_elion * 1992adb2a3ae eb6baff64ce8 "/bin/sh -c 'npm ins…" 3 months ago Exited (1) 3 months ago cool_roentgen * b90e5ca61ded eb6baff64ce8 "/bin/sh -c 'npm ins…" 3 months ago Exited (1) 3 months ago lucid_tu * c3d0082a0314 lachlanevenson/k8s-kubectl:latest "kubectl --help" 3 months ago Exited (0) 3 months ago friendly_wu * 4e17287d05e7 lachlanevenson/k8s-kubectl:latest "kubectl kubectl" 3 months ago Exited (1) 3 months ago eager_cartwright * cdc1685c6351 lachlanevenson/k8s-kubectl:latest "kubectl kubectl ver…" 3 months ago Exited (1) 3 months ago brave_volhard * 1016e6a5f8f8 lachlanevenson/k8s-kubectl:latest "kubectl /bin/sh" 3 months ago Exited (1) 3 months ago reverent_merkle * * ==> coredns [2341996643e5] <== * .:53 * [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7 * CoreDNS-1.7.0 * linux/amd64, go1.14.4, f59c03d * * ==> describe nodes <== * Name: oc2542575527.ibm.com * Roles: master * Labels: beta.kubernetes.io/arch=amd64 * beta.kubernetes.io/os=linux * kubernetes.io/arch=amd64 * kubernetes.io/hostname=oc2542575527.ibm.com * kubernetes.io/os=linux * minikube.k8s.io/commit=1fd1f67f338cbab4b3e5a6e4c71c551f522ca138-dirty * minikube.k8s.io/name=minikube * minikube.k8s.io/updated_at=2020_09_22T16_34_31_0700 * minikube.k8s.io/version=v1.13.1 * node-role.kubernetes.io/master= * Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock * node.alpha.kubernetes.io/ttl: 0 * volumes.kubernetes.io/controller-managed-attach-detach: true * CreationTimestamp: Tue, 22 Sep 2020 16:34:28 +0800 * Taints: * Unschedulable: false * Lease: * HolderIdentity: oc2542575527.ibm.com * AcquireTime: * RenewTime: Tue, 22 Sep 2020 16:41:28 +0800 * Conditions: * Type Status LastHeartbeatTime LastTransitionTime Reason Message * ---- ------ ----------------- ------------------ ------ ------- * MemoryPressure False Tue, 22 Sep 2020 16:39:49 +0800 Tue, 22 Sep 2020 16:34:28 +0800 
KubeletHasSufficientMemory kubelet has sufficient memory available * DiskPressure False Tue, 22 Sep 2020 16:39:49 +0800 Tue, 22 Sep 2020 16:34:28 +0800 KubeletHasNoDiskPressure kubelet has no disk pressure * PIDPressure False Tue, 22 Sep 2020 16:39:49 +0800 Tue, 22 Sep 2020 16:34:28 +0800 KubeletHasSufficientPID kubelet has sufficient PID available * Ready True Tue, 22 Sep 2020 16:39:49 +0800 Tue, 22 Sep 2020 16:34:48 +0800 KubeletReady kubelet is posting ready status * Addresses: * InternalIP: 9.110.168.156 * Hostname: oc2542575527.ibm.com * Capacity: * cpu: 8 * ephemeral-storage: 467191416Ki * hugepages-2Mi: 0 * memory: 16352424Ki * pods: 110 * Allocatable: * cpu: 8 * ephemeral-storage: 467191416Ki * hugepages-2Mi: 0 * memory: 16352424Ki * pods: 110 * System Info: * Machine ID: 4478e22538ec4e70bd54c72f05eb5aa7 * System UUID: 8FF33050-4A9B-11E2-B8A8-469BD5942000 * Boot ID: fe03ba59-6af1-4364-95e5-d6ad9dbde660 * Kernel Version: 3.10.0-1062.18.1.el7.x86_64 * OS Image: Red Hat Enterprise Linux Workstation 7.7 (Maipo) * Operating System: linux * Architecture: amd64 * Container Runtime Version: docker://19.3.13 * Kubelet Version: v1.19.2 * Kube-Proxy Version: v1.19.2 * Non-terminated Pods: (7 in total) * Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE * --------- ---- ------------ ---------- --------------- ------------- --- * kube-system coredns-f9fd979d6-j7bdn 100m (1%) 0 (0%) 70Mi (0%) 170Mi (1%) 6m58s * kube-system etcd-oc2542575527.ibm.com 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m57s * kube-system kube-apiserver-oc2542575527.ibm.com 250m (3%) 0 (0%) 0 (0%) 0 (0%) 6m57s * kube-system kube-controller-manager-oc2542575527.ibm.com 200m (2%) 0 (0%) 0 (0%) 0 (0%) 6m57s * kube-system kube-proxy-jg4m9 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m58s * kube-system kube-scheduler-oc2542575527.ibm.com 100m (1%) 0 (0%) 0 (0%) 0 (0%) 6m57s * kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m2s * Allocated resources: * (Total limits may be over 100 percent, i.e., overcommitted.) * Resource Requests Limits * -------- -------- ------ * cpu 650m (8%) 0 (0%) * memory 70Mi (0%) 170Mi (1%) * ephemeral-storage 0 (0%) 0 (0%) * hugepages-2Mi 0 (0%) 0 (0%) * Events: * Type Reason Age From Message * ---- ------ ---- ---- ------- * Normal NodeHasSufficientMemory 7m20s (x7 over 7m20s) kubelet Node oc2542575527.ibm.com status is now: NodeHasSufficientMemory * Normal NodeHasNoDiskPressure 7m20s (x7 over 7m20s) kubelet Node oc2542575527.ibm.com status is now: NodeHasNoDiskPressure * Normal NodeHasSufficientPID 7m20s (x7 over 7m20s) kubelet Node oc2542575527.ibm.com status is now: NodeHasSufficientPID * Normal Starting 6m58s kubelet Starting kubelet. * Normal NodeHasSufficientMemory 6m58s kubelet Node oc2542575527.ibm.com status is now: NodeHasSufficientMemory * Normal NodeHasNoDiskPressure 6m58s kubelet Node oc2542575527.ibm.com status is now: NodeHasNoDiskPressure * Normal NodeHasSufficientPID 6m58s kubelet Node oc2542575527.ibm.com status is now: NodeHasSufficientPID * Normal NodeAllocatableEnforced 6m57s kubelet Updated Node Allocatable limit across pods * Normal Starting 6m55s kube-proxy Starting kube-proxy. 
* Normal NodeReady 6m47s kubelet Node oc2542575527.ibm.com status is now: NodeReady * * ==> dmesg <== * dmesg: invalid option -- '=' * * Usage: * dmesg [options] * * Options: * -C, --clear clear the kernel ring buffer * -c, --read-clear read and clear all messages * -D, --console-off disable printing messages to console * -d, --show-delta show time delta between printed messages * -e, --reltime show local time and time delta in readable format * -E, --console-on enable printing messages to console * -F, --file use the file instead of the kernel log buffer * -f, --facility restrict output to defined facilities * -H, --human human readable output * -k, --kernel display kernel messages * -L, --color colorize messages * -l, --level restrict output to defined levels * -n, --console-level set level of messages printed to console * -P, --nopager do not pipe output into a pager * -r, --raw print the raw message buffer * -S, --syslog force to use syslog(2) rather than /dev/kmsg * -s, --buffer-size buffer size to query the kernel ring buffer * -T, --ctime show human readable timestamp (could be * inaccurate if you have used SUSPEND/RESUME) * -t, --notime don't print messages timestamp * -u, --userspace display userspace messages * -w, --follow wait for new messages * -x, --decode decode facility and level to readable string * * -h, --help display this help and exit * -V, --version output version information and exit * * Supported log facilities: * kern - kernel messages * user - random user-level messages * mail - mail system * daemon - system daemons * auth - security/authorization messages * syslog - messages generated internally by syslogd * lpr - line printer subsystem * news - network news subsystem * * Supported log levels (priorities): * emerg - system is unusable * alert - action must be taken immediately * crit - critical conditions * err - error conditions * warn - warning conditions * notice - normal but significant condition * info - informational * debug - debug-level messages * * * For more details see dmesg(q). 
* * ==> etcd [07809560c5db] <== * 2020-09-22 08:34:28.435935 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/system:controller:generic-garbage-collector\" " with result "range_response_count:0 size:4" took too long (101.169491ms) to execute * 2020-09-22 08:34:29.067917 W | etcdserver: read-only range request "key:\"/registry/rolebindings/kube-system/system::extension-apiserver-authentication-reader\" " with result "range_response_count:0 size:5" took too long (139.718623ms) to execute * 2020-09-22 08:34:34.000774 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/bootstrap-signer\" " with result "range_response_count:1 size:199" took too long (108.025665ms) to execute * 2020-09-22 08:34:40.756038 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2020-09-22 08:34:47.791511 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2020-09-22 08:34:49.442019 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (123.232482ms) to execute * 2020-09-22 08:34:57.791567 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2020-09-22 08:34:59.130812 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:612" took too long (158.965364ms) to execute * 2020-09-22 08:35:07.791536 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2020-09-22 08:35:09.464982 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:257" took too long (167.972379ms) to execute * 2020-09-22 08:35:09.465073 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (145.668414ms) to execute * 2020-09-22 08:35:17.428061 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (108.914788ms) to execute * 2020-09-22 08:35:17.791703 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2020-09-22 08:35:27.791627 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2020-09-22 08:35:31.422122 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (103.495748ms) to execute * 2020-09-22 08:35:37.791798 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2020-09-22 08:35:47.791680 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2020-09-22 08:35:57.791600 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2020-09-22 08:36:07.791720 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2020-09-22 08:36:09.495033 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (176.537814ms) to execute * 2020-09-22 08:36:17.791716 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2020-09-22 08:36:19.468230 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (150.030664ms) to execute * 2020-09-22 08:36:27.791572 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2020-09-22 08:36:37.791730 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2020-09-22 08:36:41.914548 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result 
"range_response_count:0 size:5" took too long (145.138013ms) to execute * 2020-09-22 08:36:47.791580 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2020-09-22 08:36:57.858633 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2020-09-22 08:37:07.882508 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2020-09-22 08:37:17.791666 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2020-09-22 08:37:26.203791 W | etcdserver: read-only range request "key:\"/registry/ranges/serviceips\" " with result "range_response_count:1 size:118" took too long (166.643241ms) to execute * 2020-09-22 08:37:26.203846 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:263" took too long (137.642762ms) to execute * 2020-09-22 08:37:26.204059 W | etcdserver: read-only range request "key:\"/registry/ranges/servicenodeports\" " with result "range_response_count:1 size:120" took too long (167.879667ms) to execute * 2020-09-22 08:37:27.791593 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2020-09-22 08:37:37.795351 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2020-09-22 08:37:47.791686 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2020-09-22 08:37:57.791714 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2020-09-22 08:38:07.791682 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2020-09-22 08:38:17.791700 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2020-09-22 08:38:27.791849 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2020-09-22 08:38:37.791523 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2020-09-22 08:38:47.791449 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2020-09-22 08:38:57.791793 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2020-09-22 08:39:07.791666 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2020-09-22 08:39:17.791522 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2020-09-22 08:39:19.446749 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (128.079564ms) to execute * 2020-09-22 08:39:27.791595 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2020-09-22 08:39:37.791634 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2020-09-22 08:39:47.791692 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2020-09-22 08:39:49.457466 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:257" took too long (158.194756ms) to execute * 2020-09-22 08:39:49.457576 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (139.433529ms) to execute * 2020-09-22 08:39:57.791706 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2020-09-22 08:40:07.791679 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2020-09-22 08:40:17.791695 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2020-09-22 08:40:27.791686 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2020-09-22 08:40:37.791600 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2020-09-22 08:40:47.791632 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2020-09-22 08:40:57.791609 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2020-09-22 08:41:07.791492 I | etcdserver/api/etcdhttp: /health OK 
(status code 200) * 2020-09-22 08:41:17.791513 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2020-09-22 08:41:27.791656 I | etcdserver/api/etcdhttp: /health OK (status code 200) * * ==> kernel <== * 16:41:36 up 42 days, 22:05, 3 users, load average: 0.42, 0.52, 0.49 * Linux oc2542575527.ibm.com 3.10.0-1062.18.1.el7.x86_64 #1 SMP Wed Feb 12 14:08:31 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux * PRETTY_NAME="Red Hat Enterprise Linux Workstation 7.7 (Maipo)" * * ==> kube-apiserver [53a7d57e73ec] <== * I0922 08:34:25.703852 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller * I0922 08:34:25.703865 1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller * I0922 08:34:25.703897 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt * I0922 08:34:25.703943 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt * I0922 08:34:25.704135 1 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key * E0922 08:34:25.704725 1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/9.110.168.156, ResourceVersion: 0, AdditionalErrorMsg: * I0922 08:34:25.803426 1 cache.go:39] Caches are synced for AvailableConditionController controller * I0922 08:34:25.803462 1 shared_informer.go:247] Caches are synced for crd-autoregister * I0922 08:34:25.803471 1 cache.go:39] Caches are synced for autoregister controller * I0922 08:34:25.803508 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller * I0922 08:34:25.804026 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller * I0922 08:34:26.702448 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue). * I0922 08:34:26.702480 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). * I0922 08:34:26.707737 1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000 * I0922 08:34:26.770443 1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000 * I0922 08:34:26.770467 1 storage_scheduling.go:143] all system priority classes are created successfully or already exist. 
* I0922 08:34:28.800773 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io * I0922 08:34:29.071641 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io * W0922 08:34:29.431598 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [9.110.168.156] * I0922 08:34:29.432805 1 controller.go:606] quota admission added evaluator for: endpoints * I0922 08:34:29.470521 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io * I0922 08:34:30.128458 1 controller.go:606] quota admission added evaluator for: serviceaccounts * I0922 08:34:31.198769 1 controller.go:606] quota admission added evaluator for: deployments.apps * I0922 08:34:31.324624 1 controller.go:606] quota admission added evaluator for: daemonsets.apps * I0922 08:34:37.243069 1 controller.go:606] quota admission added evaluator for: replicasets.apps * I0922 08:34:37.296105 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps * I0922 08:34:37.834531 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io * I0922 08:34:54.799516 1 client.go:360] parsed scheme: "passthrough" * I0922 08:34:54.799576 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } * I0922 08:34:54.799593 1 clientconn.go:948] ClientConn switching balancer to "pick_first" * I0922 08:35:26.794609 1 client.go:360] parsed scheme: "passthrough" * I0922 08:35:26.795214 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } * I0922 08:35:26.795242 1 clientconn.go:948] ClientConn switching balancer to "pick_first" * I0922 08:36:11.351713 1 client.go:360] parsed scheme: "passthrough" * I0922 08:36:11.351772 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } * I0922 08:36:11.351787 1 clientconn.go:948] ClientConn switching balancer to "pick_first" * I0922 08:36:42.227429 1 client.go:360] parsed scheme: "passthrough" * I0922 08:36:42.227479 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } * I0922 08:36:42.227495 1 clientconn.go:948] ClientConn switching balancer to "pick_first" * I0922 08:37:23.688179 1 client.go:360] parsed scheme: "passthrough" * I0922 08:37:23.688248 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } * I0922 08:37:23.688266 1 clientconn.go:948] ClientConn switching balancer to "pick_first" * I0922 08:37:55.290241 1 client.go:360] parsed scheme: "passthrough" * I0922 08:37:55.290299 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } * I0922 08:37:55.290316 1 clientconn.go:948] ClientConn switching balancer to "pick_first" * I0922 08:38:36.460220 1 client.go:360] parsed scheme: "passthrough" * I0922 08:38:36.461301 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } * I0922 08:38:36.461322 1 clientconn.go:948] ClientConn switching balancer to "pick_first" * I0922 08:39:06.983204 1 client.go:360] parsed scheme: "passthrough" * I0922 08:39:06.983262 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } * I0922 08:39:06.983277 1 clientconn.go:948] ClientConn switching balancer to "pick_first" * I0922 08:39:42.859157 1 client.go:360] parsed scheme: "passthrough" * I0922 08:39:42.859219 1 passthrough.go:48] ccResolverWrapper: sending update to 
cc: {[{https://127.0.0.1:2379 0 }] } * I0922 08:39:42.859242 1 clientconn.go:948] ClientConn switching balancer to "pick_first" * I0922 08:40:25.594981 1 client.go:360] parsed scheme: "passthrough" * I0922 08:40:25.595037 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } * I0922 08:40:25.595054 1 clientconn.go:948] ClientConn switching balancer to "pick_first" * I0922 08:41:01.466385 1 client.go:360] parsed scheme: "passthrough" * I0922 08:41:01.466942 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } * I0922 08:41:01.466960 1 clientconn.go:948] ClientConn switching balancer to "pick_first" * * ==> kube-controller-manager [c540fc1c7b70] <== * I0922 08:34:36.491894 1 clusterroleaggregation_controller.go:149] Starting ClusterRoleAggregator * I0922 08:34:36.491918 1 shared_informer.go:240] Waiting for caches to sync for ClusterRoleAggregator * I0922 08:34:36.741645 1 controllermanager.go:549] Started "pv-protection" * I0922 08:34:36.741717 1 pv_protection_controller.go:83] Starting PV protection controller * I0922 08:34:36.741734 1 shared_informer.go:240] Waiting for caches to sync for PV protection * I0922 08:34:36.891515 1 controllermanager.go:549] Started "csrcleaner" * I0922 08:34:36.891571 1 core.go:240] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true. * W0922 08:34:36.891584 1 controllermanager.go:541] Skipping "route" * I0922 08:34:36.891618 1 cleaner.go:83] Starting CSR cleaner controller * I0922 08:34:37.041943 1 node_lifecycle_controller.go:77] Sending events to api server * E0922 08:34:37.042007 1 core.go:230] failed to start cloud node lifecycle controller: no cloud provider provided * W0922 08:34:37.042032 1 controllermanager.go:541] Skipping "cloud-node-lifecycle" * I0922 08:34:37.042581 1 shared_informer.go:240] Waiting for caches to sync for resource quota * W0922 08:34:37.054106 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="oc2542575527.ibm.com" does not exist * I0922 08:34:37.091829 1 shared_informer.go:247] Caches are synced for service account * I0922 08:34:37.091993 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator * I0922 08:34:37.111316 1 shared_informer.go:247] Caches are synced for bootstrap_signer * I0922 08:34:37.112737 1 shared_informer.go:247] Caches are synced for namespace * I0922 08:34:37.119764 1 shared_informer.go:247] Caches are synced for expand * I0922 08:34:37.141833 1 shared_informer.go:247] Caches are synced for PV protection * I0922 08:34:37.141896 1 shared_informer.go:247] Caches are synced for TTL * I0922 08:34:37.158344 1 shared_informer.go:247] Caches are synced for certificate-csrapproving * I0922 08:34:37.177304 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving * I0922 08:34:37.177622 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client * I0922 08:34:37.177824 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client * I0922 08:34:37.178210 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown * E0922 08:34:37.213151 1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest 
version and try again * I0922 08:34:37.239702 1 shared_informer.go:247] Caches are synced for disruption * I0922 08:34:37.239719 1 disruption.go:339] Sending events to api server. * I0922 08:34:37.240436 1 shared_informer.go:247] Caches are synced for HPA * I0922 08:34:37.240992 1 shared_informer.go:247] Caches are synced for deployment * I0922 08:34:37.241615 1 shared_informer.go:247] Caches are synced for ReplicationController * I0922 08:34:37.241858 1 shared_informer.go:247] Caches are synced for GC * I0922 08:34:37.242039 1 shared_informer.go:247] Caches are synced for job * I0922 08:34:37.242265 1 shared_informer.go:247] Caches are synced for persistent volume * I0922 08:34:37.249823 1 shared_informer.go:247] Caches are synced for ReplicaSet * I0922 08:34:37.254851 1 shared_informer.go:247] Caches are synced for PVC protection * I0922 08:34:37.262383 1 shared_informer.go:247] Caches are synced for attach detach * I0922 08:34:37.264271 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-f9fd979d6 to 1" * I0922 08:34:37.287166 1 event.go:291] "Event occurred" object="kube-system/coredns-f9fd979d6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-f9fd979d6-j7bdn" * I0922 08:34:37.290924 1 shared_informer.go:247] Caches are synced for stateful set * I0922 08:34:37.292168 1 shared_informer.go:247] Caches are synced for daemon sets * I0922 08:34:37.302947 1 shared_informer.go:247] Caches are synced for resource quota * I0922 08:34:37.314934 1 shared_informer.go:247] Caches are synced for taint * I0922 08:34:37.315001 1 taint_manager.go:187] Starting NoExecuteTaintManager * I0922 08:34:37.315013 1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: * W0922 08:34:37.315090 1 node_lifecycle_controller.go:1044] Missing timestamp for Node oc2542575527.ibm.com. Assuming now as a timestamp. * I0922 08:34:37.315133 1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode. 
* I0922 08:34:37.315192 1 event.go:291] "Event occurred" object="oc2542575527.ibm.com" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node oc2542575527.ibm.com event: Registered Node oc2542575527.ibm.com in Controller" * I0922 08:34:37.342022 1 shared_informer.go:247] Caches are synced for endpoint_slice * I0922 08:34:37.343215 1 shared_informer.go:247] Caches are synced for resource quota * I0922 08:34:37.368509 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-jg4m9" * I0922 08:34:37.391874 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring * I0922 08:34:37.392258 1 shared_informer.go:247] Caches are synced for endpoint * I0922 08:34:37.397071 1 shared_informer.go:240] Waiting for caches to sync for garbage collector * E0922 08:34:37.500308 1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"4ba37db0-c119-4f35-8fd8-02f2d50ef215", ResourceVersion:"223", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63736360471, loc:(*time.Location)(0x6a59c80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00091e000), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00091e020)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00091e040), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc000d7e940), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), 
AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00091e060), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00091e080), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.19.2", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00091e0c0)}}, 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0001c0d20), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0006eb9a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00029e620), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000e02338)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0006eb9f8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again * I0922 08:34:37.697290 1 shared_informer.go:247] Caches are synced for garbage collector * I0922 08:34:37.740598 1 shared_informer.go:247] Caches are synced for garbage collector * I0922 08:34:37.740626 1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage * I0922 08:34:52.315935 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode. 
* * ==> kube-proxy [8f52419b9c64] <== * I0922 08:34:40.681888 1 node.go:136] Successfully retrieved node IP: 9.110.168.156 * I0922 08:34:40.681970 1 server_others.go:111] kube-proxy node IP is an IPv4 address (9.110.168.156), assume IPv4 operation * W0922 08:34:40.799379 1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy * I0922 08:34:40.799653 1 server_others.go:186] Using iptables Proxier. * W0922 08:34:40.799671 1 server_others.go:456] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined * I0922 08:34:40.799678 1 server_others.go:467] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local * I0922 08:34:40.800110 1 server.go:650] Version: v1.19.2 * I0922 08:34:40.801202 1 conntrack.go:52] Setting nf_conntrack_max to 262144 * I0922 08:34:40.801591 1 config.go:315] Starting service config controller * I0922 08:34:40.801619 1 config.go:224] Starting endpoint slice config controller * I0922 08:34:40.801660 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config * I0922 08:34:40.801624 1 shared_informer.go:240] Waiting for caches to sync for service config * I0922 08:34:40.901815 1 shared_informer.go:247] Caches are synced for service config * I0922 08:34:40.901850 1 shared_informer.go:247] Caches are synced for endpoint slice config * * ==> kube-scheduler [eb0b1c0f5ce8] <== * I0922 08:34:21.934845 1 registry.go:173] Registering SelectorSpread plugin * I0922 08:34:21.934891 1 registry.go:173] Registering SelectorSpread plugin * I0922 08:34:22.170991 1 serving.go:331] Generated self-signed cert in-memory * W0922 08:34:25.726414 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' * W0922 08:34:25.726501 1 authentication.go:294] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system" * W0922 08:34:25.726540 1 authentication.go:295] Continuing without authentication configuration. This may treat all requests as anonymous. 
* W0922 08:34:25.726584 1 authentication.go:296] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false * I0922 08:34:25.737750 1 registry.go:173] Registering SelectorSpread plugin * I0922 08:34:25.737764 1 registry.go:173] Registering SelectorSpread plugin * I0922 08:34:25.740147 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file * I0922 08:34:25.740195 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file * I0922 08:34:25.740861 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259 * I0922 08:34:25.741157 1 tlsconfig.go:240] Starting DynamicServingCertificateController * E0922 08:34:25.741814 1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" * E0922 08:34:25.742632 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope * E0922 08:34:25.742642 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope * E0922 08:34:25.742642 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope * E0922 08:34:25.742679 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope * E0922 08:34:25.742757 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope * E0922 08:34:25.742806 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope * E0922 08:34:25.742840 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope * E0922 08:34:25.742869 1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope * E0922 08:34:25.742879 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to 
list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope * E0922 08:34:25.742870 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope * E0922 08:34:25.742908 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope * E0922 08:34:25.742987 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope * E0922 08:34:26.567514 1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" * E0922 08:34:26.606419 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope * E0922 08:34:26.679672 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope * E0922 08:34:26.863864 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope * E0922 08:34:26.956130 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope * E0922 08:34:26.988423 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope * E0922 08:34:27.027919 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope * E0922 08:34:27.080650 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope * E0922 08:34:27.196696 1 
reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope * E0922 08:34:27.217053 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope * E0922 08:34:27.224971 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope * E0922 08:34:27.286717 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope * E0922 08:34:27.337947 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope * E0922 08:34:29.000053 1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" * I0922 08:34:33.440329 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file * * ==> kubelet <== * -- Logs begin at Tue 2020-08-11 02:19:54 CST, end at Tue 2020-09-22 16:41:36 CST. -- * Sep 22 16:40:57 oc2542575527.ibm.com kubelet[32008]: W0922 16:40:57.827384 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist * Sep 22 16:40:57 oc2542575527.ibm.com kubelet[32008]: W0922 16:40:57.827446 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist * Sep 22 16:40:57 oc2542575527.ibm.com kubelet[32008]: E0922 16:40:57.827477 32008 kubelet_volumes.go:154] orphaned pod "b52849dd-220f-455d-aec9-85463ff71db8" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them. * Sep 22 16:40:59 oc2542575527.ibm.com kubelet[32008]: W0922 16:40:59.830444 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist * Sep 22 16:40:59 oc2542575527.ibm.com kubelet[32008]: W0922 16:40:59.830483 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist * Sep 22 16:40:59 oc2542575527.ibm.com kubelet[32008]: E0922 16:40:59.830510 32008 kubelet_volumes.go:154] orphaned pod "b52849dd-220f-455d-aec9-85463ff71db8" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them. 
* Sep 22 16:41:01 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:01.833544 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist * Sep 22 16:41:01 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:01.833584 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist * Sep 22 16:41:01 oc2542575527.ibm.com kubelet[32008]: E0922 16:41:01.833612 32008 kubelet_volumes.go:154] orphaned pod "b52849dd-220f-455d-aec9-85463ff71db8" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them. * Sep 22 16:41:03 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:03.831930 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist * Sep 22 16:41:03 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:03.831982 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist * Sep 22 16:41:03 oc2542575527.ibm.com kubelet[32008]: E0922 16:41:03.832068 32008 kubelet_volumes.go:154] orphaned pod "b52849dd-220f-455d-aec9-85463ff71db8" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them. * Sep 22 16:41:05 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:05.831233 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist * Sep 22 16:41:05 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:05.831278 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist * Sep 22 16:41:05 oc2542575527.ibm.com kubelet[32008]: E0922 16:41:05.831308 32008 kubelet_volumes.go:154] orphaned pod "b52849dd-220f-455d-aec9-85463ff71db8" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them. * Sep 22 16:41:07 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:07.831378 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist * Sep 22 16:41:07 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:07.831432 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist * Sep 22 16:41:07 oc2542575527.ibm.com kubelet[32008]: E0922 16:41:07.831460 32008 kubelet_volumes.go:154] orphaned pod "b52849dd-220f-455d-aec9-85463ff71db8" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them. * Sep 22 16:41:09 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:09.828086 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist * Sep 22 16:41:09 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:09.828124 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist * Sep 22 16:41:09 oc2542575527.ibm.com kubelet[32008]: E0922 16:41:09.828148 32008 kubelet_volumes.go:154] orphaned pod "b52849dd-220f-455d-aec9-85463ff71db8" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them. 
* Sep 22 16:41:11 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:11.834723 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist * Sep 22 16:41:11 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:11.834772 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist * Sep 22 16:41:11 oc2542575527.ibm.com kubelet[32008]: E0922 16:41:11.834802 32008 kubelet_volumes.go:154] orphaned pod "b52849dd-220f-455d-aec9-85463ff71db8" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them. * Sep 22 16:41:13 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:13.825937 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist * Sep 22 16:41:13 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:13.825977 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist * Sep 22 16:41:13 oc2542575527.ibm.com kubelet[32008]: E0922 16:41:13.826007 32008 kubelet_volumes.go:154] orphaned pod "b52849dd-220f-455d-aec9-85463ff71db8" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them. * Sep 22 16:41:15 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:15.838347 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist * Sep 22 16:41:15 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:15.838388 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist * Sep 22 16:41:15 oc2542575527.ibm.com kubelet[32008]: E0922 16:41:15.838426 32008 kubelet_volumes.go:154] orphaned pod "b52849dd-220f-455d-aec9-85463ff71db8" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them. * Sep 22 16:41:17 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:17.825661 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist * Sep 22 16:41:17 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:17.825700 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist * Sep 22 16:41:17 oc2542575527.ibm.com kubelet[32008]: E0922 16:41:17.825727 32008 kubelet_volumes.go:154] orphaned pod "b52849dd-220f-455d-aec9-85463ff71db8" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them. * Sep 22 16:41:19 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:19.832538 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist * Sep 22 16:41:19 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:19.832580 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist * Sep 22 16:41:19 oc2542575527.ibm.com kubelet[32008]: E0922 16:41:19.832608 32008 kubelet_volumes.go:154] orphaned pod "b52849dd-220f-455d-aec9-85463ff71db8" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them. 
* Sep 22 16:41:21 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:21.827497 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist * Sep 22 16:41:21 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:21.827551 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist * Sep 22 16:41:21 oc2542575527.ibm.com kubelet[32008]: E0922 16:41:21.827582 32008 kubelet_volumes.go:154] orphaned pod "b52849dd-220f-455d-aec9-85463ff71db8" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them. * Sep 22 16:41:23 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:23.835765 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist * Sep 22 16:41:23 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:23.835818 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist * Sep 22 16:41:23 oc2542575527.ibm.com kubelet[32008]: E0922 16:41:23.835858 32008 kubelet_volumes.go:154] orphaned pod "b52849dd-220f-455d-aec9-85463ff71db8" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them. * Sep 22 16:41:25 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:25.826185 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist * Sep 22 16:41:25 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:25.826227 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist * Sep 22 16:41:25 oc2542575527.ibm.com kubelet[32008]: E0922 16:41:25.826255 32008 kubelet_volumes.go:154] orphaned pod "b52849dd-220f-455d-aec9-85463ff71db8" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them. * Sep 22 16:41:27 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:27.832780 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist * Sep 22 16:41:27 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:27.832827 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist * Sep 22 16:41:27 oc2542575527.ibm.com kubelet[32008]: E0922 16:41:27.832858 32008 kubelet_volumes.go:154] orphaned pod "b52849dd-220f-455d-aec9-85463ff71db8" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them. * Sep 22 16:41:29 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:29.827069 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist * Sep 22 16:41:29 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:29.827129 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist * Sep 22 16:41:29 oc2542575527.ibm.com kubelet[32008]: E0922 16:41:29.827189 32008 kubelet_volumes.go:154] orphaned pod "b52849dd-220f-455d-aec9-85463ff71db8" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them. 
* Sep 22 16:41:31 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:31.831257 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist * Sep 22 16:41:31 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:31.831284 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist * Sep 22 16:41:31 oc2542575527.ibm.com kubelet[32008]: E0922 16:41:31.831313 32008 kubelet_volumes.go:154] orphaned pod "b52849dd-220f-455d-aec9-85463ff71db8" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them. * Sep 22 16:41:33 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:33.822068 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist * Sep 22 16:41:33 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:33.822100 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist * Sep 22 16:41:33 oc2542575527.ibm.com kubelet[32008]: E0922 16:41:33.822120 32008 kubelet_volumes.go:154] orphaned pod "b52849dd-220f-455d-aec9-85463ff71db8" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them. * Sep 22 16:41:35 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:35.831895 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist * Sep 22 16:41:35 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:35.831933 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist * Sep 22 16:41:35 oc2542575527.ibm.com kubelet[32008]: E0922 16:41:35.831961 32008 kubelet_volumes.go:154] orphaned pod "b52849dd-220f-455d-aec9-85463ff71db8" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them. * * ==> storage-provisioner [b4c38d2bad28] <== * I0922 08:34:58.964710 1 leaderelection.go:242] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath... * I0922 08:34:58.971041 1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath * I0922 08:34:58.971109 1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"be0e2d96-5fda-44c6-8c2c-56da1934baf9", APIVersion:"v1", ResourceVersion:"393", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' oc2542575527.ibm.com_dc5554f4-9b91-49c1-9643-d8ed71255bee became leader * I0922 08:34:58.971180 1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_oc2542575527.ibm.com_dc5554f4-9b91-49c1-9643-d8ed71255bee! * I0922 08:34:59.071487 1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_oc2542575527.ibm.com_dc5554f4-9b91-49c1-9643-d8ed71255bee! ```
RA489 commented 4 years ago

/triage support

priyawadhwa commented 3 years ago

Hey @zzguang520, thank you for opening this issue! I'm going to close this as a duplicate of https://github.com/kubernetes/minikube/issues/9322, where the fix is being tracked. There is also an open PR that should resolve it: https://github.com/kubernetes/minikube/pull/9577
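
In the meantime, a possible workaround (a sketch only, assuming Docker is installed on the host and that recreating the cluster is acceptable) is to switch from the `none` driver to the `docker` driver, which does support the ingress addon:

```shell
# Remove the existing none-driver cluster (this deletes its state).
minikube delete

# Start a fresh cluster with the docker driver, pinning the same Kubernetes version used above.
minikube start --driver=docker --kubernetes-version=v1.19.2

# The ingress addon is supported with the docker driver.
minikube addons enable ingress
```

Note that the docker driver does not require running minikube as root, which also avoids the kubeconfig ownership warnings shown in the `minikube start` output above.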