kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Cannot use local storage persistent volumes after v1.8.0 upgrade (from older v1.7.3) #7056

Closed: alex-gab closed this issue 4 years ago

alex-gab commented 4 years ago

The exact command to reproduce the issue:

kubectl describe pod mongo-0

The full output of the command that failed:

Warning FailedScheduling 11s (x2 over 11s) default-scheduler 0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.
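For background on this scheduler message: a local (node-pinned) PersistentVolume can only satisfy a claim if its nodeAffinity matches a schedulable node. A minimal sketch of such a PV follows; the name, path, capacity, and StorageClass name are illustrative assumptions, not taken from this report:

```yaml
# Illustrative local PersistentVolume; names and paths are hypothetical.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv                    # hypothetical name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage   # hypothetical StorageClass name
  local:
    path: /data/mongo               # hypothetical host path
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - minikube          # must equal the node's current name
```

If the value under kubernetes.io/hostname matches no current node name (check with kubectl get nodes), the scheduler reports exactly the "didn't find available persistent volumes to bind" message above.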

The output of the minikube logs command:

==> Docker <==
-- Logs begin at Sun 2020-03-15 13:43:59 UTC, end at Sun 2020-03-15 13:47:06 UTC. --
Mar 15 13:44:14 minikube dockerd[2202]: time="2020-03-15T13:44:14.238211380Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
Mar 15 13:44:14 minikube dockerd[2202]: time="2020-03-15T13:44:14.238285854Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
Mar 15 13:44:14 minikube dockerd[2202]: time="2020-03-15T13:44:14.238343856Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
Mar 15 13:44:14 minikube dockerd[2202]: time="2020-03-15T13:44:14.238373750Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
Mar 15 13:44:14 minikube dockerd[2202]: time="2020-03-15T13:44:14.238636065Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
Mar 15 13:44:14 minikube dockerd[2202]: time="2020-03-15T13:44:14.238653137Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
Mar 15 13:44:14 minikube dockerd[2202]: time="2020-03-15T13:44:14.238689841Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
Mar 15 13:44:14 minikube dockerd[2202]: time="2020-03-15T13:44:14.238698470Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
Mar 15 13:44:14 minikube dockerd[2202]: time="2020-03-15T13:44:14.238706455Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
Mar 15 13:44:14 minikube dockerd[2202]: time="2020-03-15T13:44:14.238713734Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
Mar 15 13:44:14 minikube dockerd[2202]: time="2020-03-15T13:44:14.238720499Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
Mar 15 13:44:14 minikube dockerd[2202]: time="2020-03-15T13:44:14.238743192Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
Mar 15 13:44:14 minikube dockerd[2202]: time="2020-03-15T13:44:14.238752760Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
Mar 15 13:44:14 minikube dockerd[2202]: time="2020-03-15T13:44:14.238759855Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
Mar 15 13:44:14 minikube dockerd[2202]: time="2020-03-15T13:44:14.238768237Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
Mar 15 13:44:14 minikube dockerd[2202]: time="2020-03-15T13:44:14.238791235Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
Mar 15 13:44:14 minikube dockerd[2202]: time="2020-03-15T13:44:14.238803704Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
Mar 15 13:44:14 minikube dockerd[2202]: time="2020-03-15T13:44:14.238811349Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
Mar 15 13:44:14 minikube dockerd[2202]: time="2020-03-15T13:44:14.238818306Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
Mar 15 13:44:14 minikube dockerd[2202]: time="2020-03-15T13:44:14.238924904Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
Mar 15 13:44:14 minikube dockerd[2202]: time="2020-03-15T13:44:14.238956357Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
Mar 15 13:44:14 minikube dockerd[2202]: time="2020-03-15T13:44:14.238963928Z" level=info msg="containerd successfully booted in 0.003258s"
Mar 15 13:44:14 minikube dockerd[2202]: time="2020-03-15T13:44:14.248828026Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 15 13:44:14 minikube dockerd[2202]: time="2020-03-15T13:44:14.248854210Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 15 13:44:14 minikube dockerd[2202]: time="2020-03-15T13:44:14.248868484Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc
Mar 15 13:44:14 minikube dockerd[2202]: time="2020-03-15T13:44:14.248875820Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 15 13:44:14 minikube dockerd[2202]: time="2020-03-15T13:44:14.249442995Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 15 13:44:14 minikube dockerd[2202]: time="2020-03-15T13:44:14.249496793Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 15 13:44:14 minikube dockerd[2202]: time="2020-03-15T13:44:14.249509818Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc
Mar 15 13:44:14 minikube dockerd[2202]: time="2020-03-15T13:44:14.249520626Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 15 13:44:14 minikube dockerd[2202]: time="2020-03-15T13:44:14.367811022Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Mar 15 13:44:14 minikube dockerd[2202]: time="2020-03-15T13:44:14.367845143Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Mar 15 13:44:14 minikube dockerd[2202]: time="2020-03-15T13:44:14.367851997Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
Mar 15 13:44:14 minikube dockerd[2202]: time="2020-03-15T13:44:14.367856929Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
Mar 15 13:44:14 minikube dockerd[2202]: time="2020-03-15T13:44:14.367862285Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
Mar 15 13:44:14 minikube dockerd[2202]: time="2020-03-15T13:44:14.367867537Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
Mar 15 13:44:14 minikube dockerd[2202]: time="2020-03-15T13:44:14.367990630Z" level=info msg="Loading containers: start."
Mar 15 13:44:14 minikube dockerd[2202]: time="2020-03-15T13:44:14.552644247Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Mar 15 13:44:14 minikube dockerd[2202]: time="2020-03-15T13:44:14.634213270Z" level=info msg="Loading containers: done."
Mar 15 13:44:15 minikube dockerd[2202]: time="2020-03-15T13:44:15.160364358Z" level=info msg="Docker daemon" commit=369ce74a3c graphdriver(s)=overlay2 version=19.03.6
Mar 15 13:44:15 minikube dockerd[2202]: time="2020-03-15T13:44:15.160439109Z" level=info msg="Daemon has completed initialization"
Mar 15 13:44:15 minikube dockerd[2202]: time="2020-03-15T13:44:15.220945102Z" level=info msg="API listen on /var/run/docker.sock"
Mar 15 13:44:15 minikube systemd[1]: Started Docker Application Container Engine.
Mar 15 13:44:15 minikube dockerd[2202]: time="2020-03-15T13:44:15.221035824Z" level=info msg="API listen on [::]:2376"
Mar 15 13:44:24 minikube dockerd[2202]: time="2020-03-15T13:44:24.143760086Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/db8f8c5177ce044cb177d84b61e84316ea33cfac846be81a1ccadd7f25f630ca/shim.sock" debug=false pid=3102
Mar 15 13:44:24 minikube dockerd[2202]: time="2020-03-15T13:44:24.333083508Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3ee03dbf78d43bdc0c3110358832f31bb0e14d16219a78f679503b41fb81e3d1/shim.sock" debug=false pid=3139
Mar 15 13:44:24 minikube dockerd[2202]: time="2020-03-15T13:44:24.465555150Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/372f4141d9137e86cb5bee69d1ad414f383776b8143e987d353753f2fab7e338/shim.sock" debug=false pid=3197
Mar 15 13:44:24 minikube dockerd[2202]: time="2020-03-15T13:44:24.768511071Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d75760de8521841d609ea52ae4dc5bfac74e2a7e81327fb71d3ea9282787450c/shim.sock" debug=false pid=3234
Mar 15 13:44:25 minikube dockerd[2202]: time="2020-03-15T13:44:25.138298036Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a1ed6a6dedbb3a7b0d74f18a58fb66736ab5fd79e85f238a7c18226dc2d78db9/shim.sock" debug=false pid=3267
Mar 15 13:44:25 minikube dockerd[2202]: time="2020-03-15T13:44:25.615955597Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fba3779726a06d009b789f0fba18e3e1e4ad5c137e8aeb1fbad3461639c0f97f/shim.sock" debug=false pid=3335
Mar 15 13:44:25 minikube dockerd[2202]: time="2020-03-15T13:44:25.646200912Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ab542694c3d896f307a86085fa2e07731a85d826ba040e2f1588c98b1b3fc3a3/shim.sock" debug=false pid=3357
Mar 15 13:44:25 minikube dockerd[2202]: time="2020-03-15T13:44:25.704277715Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/048665b628e2cacbacf5c209d96f2d8c63e700e04e97aa254acb09d39ba8d0b1/shim.sock" debug=false pid=3389
Mar 15 13:44:41 minikube dockerd[2202]: time="2020-03-15T13:44:41.565079339Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b43e2c3c314652f28fe9563b49877e9cbf402670282e4b87ca2a6eb0cfd64963/shim.sock" debug=false pid=3969
Mar 15 13:44:42 minikube dockerd[2202]: time="2020-03-15T13:44:42.531502073Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b56fad7f9d6c8e0c03884e718e9c99a65c0d3acea7e2f3782859884d873dc7ad/shim.sock" debug=false pid=4030
Mar 15 13:44:42 minikube dockerd[2202]: time="2020-03-15T13:44:42.743997163Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d0c2d3ba6416c55200e55ddff183c283503c7e61c48adfbaaf7babd59ce97851/shim.sock" debug=false pid=4079
Mar 15 13:44:43 minikube dockerd[2202]: time="2020-03-15T13:44:43.285750626Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1ce00149d20bc7a404106b4620ce6fcd863ef1d8112b0b3cad8a50b959f8c02c/shim.sock" debug=false pid=4161
Mar 15 13:44:43 minikube dockerd[2202]: time="2020-03-15T13:44:43.445720880Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/106015078cc63c6f7b66226b8376f7956ce46c8b9562bcf411e4d74214370321/shim.sock" debug=false pid=4209
Mar 15 13:44:43 minikube dockerd[2202]: time="2020-03-15T13:44:43.696495256Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/dcc32e267491398ec697da292f67aa4e7c6f43e7d968d7413d2b8968249f6c68/shim.sock" debug=false pid=4271
Mar 15 13:44:44 minikube dockerd[2202]: time="2020-03-15T13:44:44.547446121Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1ec0b67d5cca3eeec33dc1ffca3c68dd92395b8fddb9b8b7b688da129e65a1e8/shim.sock" debug=false pid=4314
Mar 15 13:44:44 minikube dockerd[2202]: time="2020-03-15T13:44:44.776359568Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4e715245cd887727df9dd87e8113ba9c7561ee131800fe8ad884edb4485b3171/shim.sock" debug=false pid=4357

==> container status <==
CONTAINER       IMAGE           CREATED         STATE    NAME                      ATTEMPT  POD ID
4e715245cd887   70f311871ae12   2 minutes ago   Running  coredns                   0        106015078cc63
1ec0b67d5cca3   70f311871ae12   2 minutes ago   Running  coredns                   0        1ce00149d20bc
dcc32e2674913   4689081edb103   2 minutes ago   Running  storage-provisioner       0        d0c2d3ba6416c
b56fad7f9d6c8   ae853e93800dc   2 minutes ago   Running  kube-proxy                0        b43e2c3c31465
048665b628e2c   b0f1517c1f4bb   2 minutes ago   Running  kube-controller-manager   0        d75760de85218
ab542694c3d89   90d27391b7808   2 minutes ago   Running  kube-apiserver            0        372f4141d9137
fba3779726a06   303ce5db0e90d   2 minutes ago   Running  etcd                      0        3ee03dbf78d43
a1ed6a6dedbb3   d109c0821a2b9   2 minutes ago   Running  kube-scheduler            0        db8f8c5177ce0

==> coredns [1ec0b67d5cca] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.5
linux/amd64, go1.13.4, c2fd1b2

==> coredns [4e715245cd88] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.5
linux/amd64, go1.13.4, c2fd1b2

==> dmesg <==
[Mar15 13:43] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.034465] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +1.984577] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +0.537998] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
[ +0.005314] systemd-fstab-generator[1140]: Ignoring "noauto" for root device
[ +0.002502] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
[ +0.000002] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[ +0.934882] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[Mar15 13:44] vboxguest: loading out-of-tree module taints kernel.
[ +0.002996] vboxguest: PCI device not found, probably running on physical hardware.
[ +5.991799] systemd-fstab-generator[1991]: Ignoring "noauto" for root device
[ +0.181419] systemd-fstab-generator[2007]: Ignoring "noauto" for root device
[ +7.834789] kauditd_printk_skb: 65 callbacks suppressed
[ +1.203863] systemd-fstab-generator[2402]: Ignoring "noauto" for root device
[ +0.586929] systemd-fstab-generator[2598]: Ignoring "noauto" for root device
[ +6.148026] kauditd_printk_skb: 107 callbacks suppressed
[ +12.369794] systemd-fstab-generator[3700]: Ignoring "noauto" for root device
[ +7.123198] kauditd_printk_skb: 32 callbacks suppressed
[ +5.215001] kauditd_printk_skb: 44 callbacks suppressed
[Mar15 13:46] NFSD: Unable to end grace period: -110

==> kernel <==
13:47:06 up 3 min, 0 users, load average: 0.41, 0.34, 0.14
Linux minikube 4.19.94 #1 SMP Fri Mar 6 11:41:28 PST 2020 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2019.02.9"

==> kube-apiserver [ab542694c3d8] <==
W0315 13:44:27.491138 1 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0315 13:44:27.504895 1 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0315 13:44:27.507482 1 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0315 13:44:27.517111 1 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0315 13:44:27.531417 1 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0315 13:44:27.531499 1 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0315 13:44:27.538639 1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I0315 13:44:27.538776 1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I0315 13:44:27.540049 1 client.go:361] parsed scheme: "endpoint"
I0315 13:44:27.540102 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0315 13:44:27.546416 1 client.go:361] parsed scheme: "endpoint"
I0315 13:44:27.546439 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0315 13:44:28.964705 1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0315 13:44:28.964810 1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0315 13:44:28.964938 1 dynamic_serving_content.go:129] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
I0315 13:44:28.965231 1 secure_serving.go:178] Serving securely on [::]:8443
I0315 13:44:28.965309 1 crd_finalizer.go:263] Starting CRDFinalizer
I0315 13:44:28.965232 1 tlsconfig.go:219] Starting DynamicServingCertificateController
I0315 13:44:28.966013 1 available_controller.go:386] Starting AvailableConditionController
I0315 13:44:28.966028 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0315 13:44:28.966042 1 controller.go:81] Starting OpenAPI AggregationController
I0315 13:44:28.966337 1 autoregister_controller.go:140] Starting autoregister controller
I0315 13:44:28.966353 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0315 13:44:28.970331 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0315 13:44:28.970348 1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
I0315 13:44:28.971501 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0315 13:44:28.971541 1 shared_informer.go:197] Waiting for caches to sync for cluster_authentication_trust_controller
I0315 13:44:28.971598 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0315 13:44:28.971646 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0315 13:44:28.971777 1 controller.go:85] Starting OpenAPI controller
I0315 13:44:28.971835 1 customresource_discovery_controller.go:208] Starting DiscoveryController
I0315 13:44:28.971911 1 naming_controller.go:288] Starting NamingConditionController
I0315 13:44:28.971969 1 establishing_controller.go:73] Starting EstablishingController
I0315 13:44:28.972022 1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
I0315 13:44:28.972071 1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
E0315 13:44:29.041869 1 controller.go:151] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.39.178, ResourceVersion: 0, AdditionalErrorMsg:
I0315 13:44:29.049929 1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0315 13:44:29.049935 1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0315 13:44:29.066613 1 cache.go:39] Caches are synced for autoregister controller
I0315 13:44:29.067005 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0315 13:44:29.070649 1 shared_informer.go:204] Caches are synced for crd-autoregister
I0315 13:44:29.072928 1 shared_informer.go:204] Caches are synced for cluster_authentication_trust_controller
I0315 13:44:29.073519 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0315 13:44:29.964878 1 controller.go:107] OpenAPI AggregationController: Processing item
I0315 13:44:29.965081 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0315 13:44:29.965159 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0315 13:44:29.977837 1 storage_scheduling.go:133] created PriorityClass system-node-critical with value 2000001000
I0315 13:44:29.991649 1 storage_scheduling.go:133] created PriorityClass system-cluster-critical with value 2000000000
I0315 13:44:29.991681 1 storage_scheduling.go:142] all system priority classes are created successfully or already exist.
I0315 13:44:31.810948 1 controller.go:606] quota admission added evaluator for: endpoints
I0315 13:44:31.824755 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0315 13:44:32.378394 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0315 13:44:32.610069 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0315 13:44:33.039256 1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.39.178]
I0315 13:44:33.180128 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0315 13:44:34.862061 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0315 13:44:35.091327 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0315 13:44:40.068275 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0315 13:44:40.115245 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0315 13:46:01.028779 1 controller.go:606] quota admission added evaluator for: statefulsets.apps

==> kube-controller-manager [048665b628e2] <==
I0315 13:44:38.710863 1 shared_informer.go:197] Waiting for caches to sync for disruption
I0315 13:44:38.860360 1 controllermanager.go:533] Started "csrcleaner"
I0315 13:44:38.860530 1 cleaner.go:81] Starting CSR cleaner controller
I0315 13:44:39.111999 1 controllermanager.go:533] Started "attachdetach"
I0315 13:44:39.112061 1 attach_detach_controller.go:342] Starting attach detach controller
I0315 13:44:39.112330 1 shared_informer.go:197] Waiting for caches to sync for attach detach
I0315 13:44:39.362923 1 controllermanager.go:533] Started "pv-protection"
I0315 13:44:39.363017 1 pv_protection_controller.go:81] Starting PV protection controller
I0315 13:44:39.363028 1 shared_informer.go:197] Waiting for caches to sync for PV protection
I0315 13:44:39.613308 1 controllermanager.go:533] Started "job"
I0315 13:44:39.614276 1 shared_informer.go:197] Waiting for caches to sync for resource quota
I0315 13:44:39.613700 1 job_controller.go:143] Starting job controller
I0315 13:44:39.615340 1 shared_informer.go:197] Waiting for caches to sync for job
I0315 13:44:39.624243 1 shared_informer.go:197] Waiting for caches to sync for garbage collector
W0315 13:44:39.647447 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="m01" does not exist
I0315 13:44:39.657475 1 shared_informer.go:204] Caches are synced for service account
I0315 13:44:39.661471 1 shared_informer.go:204] Caches are synced for certificate-csrsigning
I0315 13:44:39.661852 1 shared_informer.go:204] Caches are synced for stateful set
I0315 13:44:39.663625 1 shared_informer.go:204] Caches are synced for PV protection
I0315 13:44:39.703279 1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator
I0315 13:44:39.704888 1 shared_informer.go:204] Caches are synced for certificate-csrapproving
I0315 13:44:39.708102 1 shared_informer.go:204] Caches are synced for namespace
I0315 13:44:39.708888 1 shared_informer.go:204] Caches are synced for persistent volume
I0315 13:44:39.711587 1 shared_informer.go:204] Caches are synced for TTL
I0315 13:44:39.712568 1 shared_informer.go:204] Caches are synced for HPA
I0315 13:44:39.712580 1 shared_informer.go:204] Caches are synced for GC
I0315 13:44:39.712634 1 shared_informer.go:204] Caches are synced for attach detach
I0315 13:44:39.712698 1 shared_informer.go:204] Caches are synced for endpoint
I0315 13:44:39.715718 1 shared_informer.go:204] Caches are synced for job
I0315 13:44:39.721271 1 shared_informer.go:204] Caches are synced for ReplicationController
I0315 13:44:39.732314 1 shared_informer.go:204] Caches are synced for PVC protection
I0315 13:44:39.735151 1 shared_informer.go:204] Caches are synced for expand
E0315 13:44:39.846482 1 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
I0315 13:44:40.014276 1 shared_informer.go:204] Caches are synced for bootstrap_signer
I0315 13:44:40.062478 1 shared_informer.go:204] Caches are synced for daemon sets
I0315 13:44:40.070927 1 shared_informer.go:204] Caches are synced for taint
I0315 13:44:40.071021 1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone:
W0315 13:44:40.071144 1 node_lifecycle_controller.go:1058] Missing timestamp for Node m01. Assuming now as a timestamp.
I0315 13:44:40.071205 1 node_lifecycle_controller.go:1259] Controller detected that zone is now in state Normal.
I0315 13:44:40.071538 1 taint_manager.go:186] Starting NoExecuteTaintManager
I0315 13:44:40.071884 1 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"m01", UID:"784f4a1a-d93d-489d-8ea9-cc382a1f6d7a", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node m01 event: Registered Node m01 in Controller
I0315 13:44:40.111025 1 shared_informer.go:204] Caches are synced for disruption
I0315 13:44:40.111256 1 disruption.go:338] Sending events to api server.
I0315 13:44:40.113268 1 shared_informer.go:204] Caches are synced for deployment
I0315 13:44:40.122752 1 shared_informer.go:204] Caches are synced for ReplicaSet
I0315 13:44:40.161761 1 shared_informer.go:204] Caches are synced for resource quota
I0315 13:44:40.168397 1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"caa75629-fc4a-4206-8ca4-fd77d47391cc", APIVersion:"apps/v1", ResourceVersion:"228", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-gvw2x
I0315 13:44:40.170604 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"2312320e-49b2-4c78-8a6f-d669942c9740", APIVersion:"apps/v1", ResourceVersion:"215", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-6955765f44 to 2
I0315 13:44:40.201046 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-6955765f44", UID:"a624d76d-b56b-4a9c-b972-03e5823d42ca", APIVersion:"apps/v1", ResourceVersion:"344", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-6955765f44-v8876
I0315 13:44:40.214581 1 shared_informer.go:204] Caches are synced for resource quota
I0315 13:44:40.224632 1 shared_informer.go:204] Caches are synced for garbage collector
I0315 13:44:40.240459 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-6955765f44", UID:"a624d76d-b56b-4a9c-b972-03e5823d42ca", APIVersion:"apps/v1", ResourceVersion:"344", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-6955765f44-294ps
I0315 13:44:40.255423 1 shared_informer.go:204] Caches are synced for garbage collector
I0315 13:44:40.255487 1 garbagecollector.go:138] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0315 13:46:01.013579 1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"mongo-pvc", UID:"d2057a9f-3abb-483d-b8f7-20c73eb01fd3", APIVersion:"v1", ResourceVersion:"576", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0315 13:46:01.091202 1 event.go:281] Event(v1.ObjectReference{Kind:"StatefulSet", Namespace:"default", Name:"mongo", UID:"56c9f1c2-29ee-4907-9b22-7819d2883180", APIVersion:"apps/v1", ResourceVersion:"578", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' create Pod mongo-0 in StatefulSet mongo successful
I0315 13:46:09.711843 1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"mongo-pvc", UID:"d2057a9f-3abb-483d-b8f7-20c73eb01fd3", APIVersion:"v1", ResourceVersion:"576", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0315 13:46:24.711497 1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"mongo-pvc", UID:"d2057a9f-3abb-483d-b8f7-20c73eb01fd3", APIVersion:"v1", ResourceVersion:"576", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0315 13:46:39.711928 1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"mongo-pvc", UID:"d2057a9f-3abb-483d-b8f7-20c73eb01fd3", APIVersion:"v1", ResourceVersion:"576", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0315 13:46:54.711857 1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"mongo-pvc", UID:"d2057a9f-3abb-483d-b8f7-20c73eb01fd3", APIVersion:"v1", ResourceVersion:"576", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
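The repeated WaitForFirstConsumer events above mean the claim's StorageClass defers binding until a consuming pod is scheduled; the scheduler then has to find a PV whose nodeAffinity matches the node, which in these logs registers as m01 rather than minikube. A sketch of the StorageClass side, assuming the commonly used no-provisioner setup for statically created local volumes (the class name is illustrative, not taken from this report):

```yaml
# Illustrative StorageClass for statically provisioned local volumes.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage                       # hypothetical name
provisioner: kubernetes.io/no-provisioner   # no dynamic provisioning; PVs are pre-created
volumeBindingMode: WaitForFirstConsumer     # bind only once a pod using the PVC schedules
```

With this mode the PVC stays Pending and keeps emitting WaitForFirstConsumer events until a pod is scheduled; if every candidate PV's nodeAffinity names a node that no longer exists (for example a node name from before an upgrade), scheduling fails with the FailedScheduling message quoted at the top of this report.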

==> kube-proxy [b56fad7f9d6c] <==
W0315 13:44:42.884394 1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
I0315 13:44:42.961109 1 node.go:135] Successfully retrieved node IP: 192.168.39.178
I0315 13:44:42.961367 1 server_others.go:145] Using iptables Proxier.
W0315 13:44:42.962695 1 proxier.go:286] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0315 13:44:42.965383 1 server.go:571] Version: v1.17.3
I0315 13:44:42.972410 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0315 13:44:42.972561 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0315 13:44:42.973596 1 conntrack.go:83] Setting conntrack hashsize to 32768
I0315 13:44:42.979451 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0315 13:44:42.979523 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0315 13:44:42.979658 1 config.go:131] Starting endpoints config controller
I0315 13:44:42.979678 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I0315 13:44:42.979698 1 config.go:313] Starting service config controller
I0315 13:44:42.979702 1 shared_informer.go:197] Waiting for caches to sync for service config
I0315 13:44:43.079849 1 shared_informer.go:204] Caches are synced for service config
I0315 13:44:43.079980 1 shared_informer.go:204] Caches are synced for endpoints config

==> kube-scheduler [a1ed6a6dedbb] <==
W0315 13:44:25.821233 1 authorization.go:47] Authorization is disabled
W0315 13:44:25.821250 1 authentication.go:92] Authentication is disabled
I0315 13:44:25.821257 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0315 13:44:25.822479 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0315 13:44:25.822614 1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0315 13:44:25.822628 1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0315 13:44:25.822652 1 tlsconfig.go:219] Starting DynamicServingCertificateController
E0315 13:44:25.825737 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0315 13:44:25.825733 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0315 13:44:25.825868 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0315 13:44:25.825943 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0315 13:44:25.826005 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.CSINode: Get https://localhost:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0315 13:44:25.826054 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0315 13:44:25.826116 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0315 13:44:25.826177 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0315 13:44:25.826255 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0315 13:44:25.826313 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0315 13:44:25.826323 1 reflector.go:153] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0315 13:44:25.826398 1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list v1.ConfigMap: Get https://localhost:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0315 13:44:29.060752 1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0315 13:44:29.061937 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0315 13:44:29.062057 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0315 13:44:29.062268 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0315 13:44:29.062591 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0315 13:44:29.062946 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0315 13:44:29.063069 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0315 13:44:29.063433 1 reflector.go:153] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list v1.Pod: pods is forbidden: User "system:kube-scheduler"
cannot list resource "pods" in API group "" at the cluster scope E0315 13:44:29.063564 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0315 13:44:29.063930 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0315 13:44:29.064053 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0315 13:44:29.066837 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0315 13:44:30.061745 1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0315 13:44:30.062674 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0315 13:44:30.067306 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0315 13:44:30.069761 1 
reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0315 13:44:30.070708 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E0315 13:44:30.071991 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0315 13:44:30.073275 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0315 13:44:30.074268 1 reflector.go:153] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0315 13:44:30.075439 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0315 13:44:30.076576 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0315 13:44:30.077597 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource 
"persistentvolumeclaims" in API group "" at the cluster scope E0315 13:44:30.078702 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0315 13:44:31.063067 1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0315 13:44:31.064248 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0315 13:44:31.068517 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0315 13:44:31.070648 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0315 13:44:31.071697 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E0315 13:44:31.072855 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0315 13:44:31.074065 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list 
v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0315 13:44:31.075525 1 reflector.go:153] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0315 13:44:31.076256 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0315 13:44:31.077526 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0315 13:44:31.078279 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0315 13:44:31.079290 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0315 13:44:32.064684 1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" I0315 13:44:32.122932 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler... 
I0315 13:44:32.174793 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler I0315 13:44:33.122861 1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file E0315 13:46:01.109128 1 factory.go:494] pod is already present in the activeQ

==> kubelet <== -- Logs begin at Sun 2020-03-15 13:43:59 UTC, end at Sun 2020-03-15 13:47:07 UTC. -- Mar 15 13:44:34 minikube kubelet[3709]: I0315 13:44:34.965760 3709 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0 }] } Mar 15 13:44:34 minikube kubelet[3709]: I0315 13:44:34.965822 3709 clientconn.go:577] ClientConn switching balancer to "pick_first" Mar 15 13:44:34 minikube kubelet[3709]: I0315 13:44:34.965910 3709 remote_image.go:50] parsed scheme: "" Mar 15 13:44:34 minikube kubelet[3709]: I0315 13:44:34.965973 3709 remote_image.go:50] scheme "" not registered, fallback to default scheme Mar 15 13:44:34 minikube kubelet[3709]: I0315 13:44:34.965984 3709 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0 }] } Mar 15 13:44:34 minikube kubelet[3709]: I0315 13:44:34.965988 3709 clientconn.go:577] ClientConn switching balancer to "pick_first" Mar 15 13:44:38 minikube kubelet[3709]: E0315 13:44:38.032146 3709 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated. Mar 15 13:44:38 minikube kubelet[3709]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors Mar 15 13:44:38 minikube kubelet[3709]: I0315 13:44:38.038849 3709 kuberuntime_manager.go:211] Container runtime docker initialized, version: 19.03.6, apiVersion: 1.40.0 Mar 15 13:44:38 minikube kubelet[3709]: I0315 13:44:38.052391 3709 server.go:1113] Started kubelet Mar 15 13:44:38 minikube kubelet[3709]: E0315 13:44:38.052797 3709 kubelet.go:1302] Image garbage collection failed once. 
Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache Mar 15 13:44:38 minikube kubelet[3709]: I0315 13:44:38.053715 3709 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer Mar 15 13:44:38 minikube kubelet[3709]: I0315 13:44:38.057919 3709 server.go:144] Starting to listen on 0.0.0.0:10250 Mar 15 13:44:38 minikube kubelet[3709]: I0315 13:44:38.059603 3709 server.go:384] Adding debug handlers to kubelet server. Mar 15 13:44:38 minikube kubelet[3709]: I0315 13:44:38.062631 3709 volume_manager.go:265] Starting Kubelet Volume Manager Mar 15 13:44:38 minikube kubelet[3709]: I0315 13:44:38.063900 3709 desired_state_of_world_populator.go:138] Desired state populator starts to run Mar 15 13:44:38 minikube kubelet[3709]: I0315 13:44:38.077402 3709 status_manager.go:157] Starting to sync pod status with apiserver Mar 15 13:44:38 minikube kubelet[3709]: I0315 13:44:38.077441 3709 kubelet.go:1820] Starting kubelet main sync loop. Mar 15 13:44:38 minikube kubelet[3709]: E0315 13:44:38.077486 3709 kubelet.go:1844] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful] Mar 15 13:44:38 minikube kubelet[3709]: I0315 13:44:38.164154 3709 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach Mar 15 13:44:38 minikube kubelet[3709]: E0315 13:44:38.177816 3709 kubelet.go:1844] skipping pod synchronization - container runtime status check may not have completed yet Mar 15 13:44:38 minikube kubelet[3709]: I0315 13:44:38.186284 3709 kubelet_node_status.go:70] Attempting to register node m01 Mar 15 13:44:38 minikube kubelet[3709]: I0315 13:44:38.186693 3709 cpu_manager.go:173] [cpumanager] starting with none policy Mar 15 13:44:38 minikube kubelet[3709]: I0315 13:44:38.186789 3709 cpu_manager.go:174] [cpumanager] reconciling every 10s Mar 15 13:44:38 minikube kubelet[3709]: I0315 13:44:38.186858 3709 
policy_none.go:43] [cpumanager] none policy: Start Mar 15 13:44:38 minikube kubelet[3709]: I0315 13:44:38.188293 3709 plugin_manager.go:114] Starting Kubelet Plugin Manager Mar 15 13:44:38 minikube kubelet[3709]: I0315 13:44:38.258403 3709 kubelet_node_status.go:112] Node m01 was previously registered Mar 15 13:44:38 minikube kubelet[3709]: I0315 13:44:38.258645 3709 kubelet_node_status.go:73] Successfully registered node m01 Mar 15 13:44:38 minikube kubelet[3709]: I0315 13:44:38.466248 3709 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/23ac451767325451408aa747a33e3099-ca-certs") pod "kube-apiserver-m01" (UID: "23ac451767325451408aa747a33e3099") Mar 15 13:44:38 minikube kubelet[3709]: I0315 13:44:38.466470 3709 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/23ac451767325451408aa747a33e3099-k8s-certs") pod "kube-apiserver-m01" (UID: "23ac451767325451408aa747a33e3099") Mar 15 13:44:38 minikube kubelet[3709]: I0315 13:44:38.466612 3709 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/23ac451767325451408aa747a33e3099-usr-share-ca-certificates") pod "kube-apiserver-m01" (UID: "23ac451767325451408aa747a33e3099") Mar 15 13:44:38 minikube kubelet[3709]: I0315 13:44:38.466745 3709 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/80ba035806c251fc494f94b0d3d4047d-etcd-certs") pod "etcd-m01" (UID: "80ba035806c251fc494f94b0d3d4047d") Mar 15 13:44:38 minikube kubelet[3709]: I0315 13:44:38.466873 3709 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/80ba035806c251fc494f94b0d3d4047d-etcd-data") pod "etcd-m01" (UID: 
"80ba035806c251fc494f94b0d3d4047d") Mar 15 13:44:38 minikube kubelet[3709]: I0315 13:44:38.567300 3709 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/67b7e5352c5d7693f9bfac40cd9df88f-ca-certs") pod "kube-controller-manager-m01" (UID: "67b7e5352c5d7693f9bfac40cd9df88f") Mar 15 13:44:38 minikube kubelet[3709]: I0315 13:44:38.567644 3709 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/67b7e5352c5d7693f9bfac40cd9df88f-flexvolume-dir") pod "kube-controller-manager-m01" (UID: "67b7e5352c5d7693f9bfac40cd9df88f") Mar 15 13:44:38 minikube kubelet[3709]: I0315 13:44:38.568082 3709 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/67b7e5352c5d7693f9bfac40cd9df88f-usr-share-ca-certificates") pod "kube-controller-manager-m01" (UID: "67b7e5352c5d7693f9bfac40cd9df88f") Mar 15 13:44:38 minikube kubelet[3709]: I0315 13:44:38.568271 3709 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/67b7e5352c5d7693f9bfac40cd9df88f-k8s-certs") pod "kube-controller-manager-m01" (UID: "67b7e5352c5d7693f9bfac40cd9df88f") Mar 15 13:44:38 minikube kubelet[3709]: I0315 13:44:38.568397 3709 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/67b7e5352c5d7693f9bfac40cd9df88f-kubeconfig") pod "kube-controller-manager-m01" (UID: "67b7e5352c5d7693f9bfac40cd9df88f") Mar 15 13:44:38 minikube kubelet[3709]: I0315 13:44:38.668776 3709 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/e3025acd90e7465e66fa19c71b916366-kubeconfig") pod "kube-scheduler-m01" (UID: 
"e3025acd90e7465e66fa19c71b916366") Mar 15 13:44:38 minikube kubelet[3709]: I0315 13:44:38.668820 3709 reconciler.go:156] Reconciler: start to sync state Mar 15 13:44:40 minikube kubelet[3709]: I0315 13:44:40.273117 3709 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/9c295e45-79bc-41ba-930d-0840087d25ff-xtables-lock") pod "kube-proxy-gvw2x" (UID: "9c295e45-79bc-41ba-930d-0840087d25ff") Mar 15 13:44:40 minikube kubelet[3709]: I0315 13:44:40.273541 3709 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/9c295e45-79bc-41ba-930d-0840087d25ff-lib-modules") pod "kube-proxy-gvw2x" (UID: "9c295e45-79bc-41ba-930d-0840087d25ff") Mar 15 13:44:40 minikube kubelet[3709]: I0315 13:44:40.273601 3709 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a2054814-6716-4604-8c47-659c901e588d-config-volume") pod "coredns-6955765f44-v8876" (UID: "a2054814-6716-4604-8c47-659c901e588d") Mar 15 13:44:40 minikube kubelet[3709]: I0315 13:44:40.273650 3709 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-xxx76" (UniqueName: "kubernetes.io/secret/9c295e45-79bc-41ba-930d-0840087d25ff-kube-proxy-token-xxx76") pod "kube-proxy-gvw2x" (UID: "9c295e45-79bc-41ba-930d-0840087d25ff") Mar 15 13:44:40 minikube kubelet[3709]: I0315 13:44:40.273696 3709 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-ck4r4" (UniqueName: "kubernetes.io/secret/a2054814-6716-4604-8c47-659c901e588d-coredns-token-ck4r4") pod "coredns-6955765f44-v8876" (UID: "a2054814-6716-4604-8c47-659c901e588d") Mar 15 13:44:40 minikube kubelet[3709]: I0315 13:44:40.273741 3709 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume 
"kube-proxy" (UniqueName: "kubernetes.io/configmap/9c295e45-79bc-41ba-930d-0840087d25ff-kube-proxy") pod "kube-proxy-gvw2x" (UID: "9c295e45-79bc-41ba-930d-0840087d25ff") Mar 15 13:44:40 minikube kubelet[3709]: I0315 13:44:40.374032 3709 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-ck4r4" (UniqueName: "kubernetes.io/secret/e0400d3e-0822-484b-8a59-dc522f1bf74e-coredns-token-ck4r4") pod "coredns-6955765f44-294ps" (UID: "e0400d3e-0822-484b-8a59-dc522f1bf74e") Mar 15 13:44:40 minikube kubelet[3709]: I0315 13:44:40.374256 3709 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e0400d3e-0822-484b-8a59-dc522f1bf74e-config-volume") pod "coredns-6955765f44-294ps" (UID: "e0400d3e-0822-484b-8a59-dc522f1bf74e") Mar 15 13:44:40 minikube kubelet[3709]: I0315 13:44:40.875804 3709 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/4f27891c-2215-4d83-b972-9b2516bbb042-tmp") pod "storage-provisioner" (UID: "4f27891c-2215-4d83-b972-9b2516bbb042") Mar 15 13:44:40 minikube kubelet[3709]: I0315 13:44:40.875935 3709 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-n8mkj" (UniqueName: "kubernetes.io/secret/4f27891c-2215-4d83-b972-9b2516bbb042-storage-provisioner-token-n8mkj") pod "storage-provisioner" (UID: "4f27891c-2215-4d83-b972-9b2516bbb042") Mar 15 13:44:41 minikube kubelet[3709]: W0315 13:44:41.166707 3709 pod_container_deletor.go:75] Container "b43e2c3c314652f28fe9563b49877e9cbf402670282e4b87ca2a6eb0cfd64963" not found in pod's containers Mar 15 13:44:42 minikube kubelet[3709]: W0315 13:44:42.969447 3709 pod_container_deletor.go:75] Container "d0c2d3ba6416c55200e55ddff183c283503c7e61c48adfbaaf7babd59ce97851" not found in pod's containers Mar 15 13:44:43 minikube kubelet[3709]: W0315 
13:44:43.638442 3709 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-v8876 through plugin: invalid network status for Mar 15 13:44:43 minikube kubelet[3709]: W0315 13:44:43.898885 3709 pod_container_deletor.go:75] Container "106015078cc63c6f7b66226b8376f7956ce46c8b9562bcf411e4d74214370321" not found in pod's containers Mar 15 13:44:43 minikube kubelet[3709]: W0315 13:44:43.900265 3709 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-v8876 through plugin: invalid network status for Mar 15 13:44:43 minikube kubelet[3709]: W0315 13:44:43.900907 3709 pod_container_deletor.go:75] Container "1ce00149d20bc7a404106b4620ce6fcd863ef1d8112b0b3cad8a50b959f8c02c" not found in pod's containers Mar 15 13:44:43 minikube kubelet[3709]: W0315 13:44:43.906119 3709 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-294ps through plugin: invalid network status for Mar 15 13:44:44 minikube kubelet[3709]: W0315 13:44:44.949094 3709 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-294ps through plugin: invalid network status for Mar 15 13:44:44 minikube kubelet[3709]: W0315 13:44:44.978473 3709 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-v8876 through plugin: invalid network status for Mar 15 13:44:45 minikube kubelet[3709]: W0315 13:44:45.998098 3709 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-294ps through plugin: invalid network status for

==> storage-provisioner [dcc32e267491] <==

The operating system version: Ubuntu 18.04.4 LTS, 5.3.0-40-generic

The deployment that fails (contains PV definition): mongo.deployment.yml.txt

Before applying the deployment, the folder structure was created on the minikube node using

minikube ssh

Then creating the folder

mkdir -p /tmp/data/db
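The two steps above can also be run non-interactively (a sketch; note that /tmp inside the minikube VM is generally not persisted across `minikube stop`/`start`, so the directory may need recreating after a restart):

```shell
# Create the hostPath directory for the local PV inside the minikube VM.
minikube ssh -- sudo mkdir -p /tmp/data/db

# Verify it exists before applying the deployment.
minikube ssh -- ls -ld /tmp/data/db
```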
govargo commented 4 years ago

I can reproduce this issue with minikube v1.8.2.

[Additional information] (Sorry, I still cannot resolve this)

When mongo.deployment.yml is deployed, I checked the storage-provisioner's log. The storage-provisioner outputs the following:

storage-provisioner E0319 15:55:11.655434       1 controller.go:584] Error getting claim "default/mongo-pvc"'s StorageClass's fields: StorageClass "local-storage" not found
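For reference, local PVs are statically provisioned, so the claim is not meant to be handled by a dynamic provisioner at all. A minimal sketch of the local-storage StorageClass the PVC references (an assumption based on the standard local-volume setup; the actual definition is in the attached mongo.deployment.yml):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
# No dynamic provisioner: PVs of this class are created by hand.
provisioner: kubernetes.io/no-provisioner
# Delay binding until a pod is scheduled so node affinity is honored.
volumeBindingMode: WaitForFirstConsumer
```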

It seems the storage-provisioner's controller cannot get the StorageClass API resource. The latest minikube uses sig-storage-lib-external-provisioner v4.0.0.

I found the relevant source code, but it looks correct to me:

https://github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/blob/d22b74e900af4bf90174d259c3e52c3680b41ab4/controller/controller.go#L1415-L1422

In my opinion, this may be a Kubernetes API version problem. If so, it would explain why the storage-provisioner worked before the minikube version upgrade.

I checked the sig-storage-lib-external-provisioner releases and found the latest tag, v4.1.0, which says "Works with 1.5-1.17 Kubernetes clusters". I tried deploying a locally built storage-provisioner using v4.1.0 to my minikube cluster.

But it stopped working:

$ stern storage-provisioner
+ storage-provisioner › storage-provisioner
storage-provisioner storage-provisioner I0319 17:20:36.308607       1 storage_provisioner.go:115] Initializing the Minikube storage provisioner...
storage-provisioner storage-provisioner I0319 17:20:36.315801       1 storage_provisioner.go:140] Storage provisioner initialized, now starting service!
storage-provisioner storage-provisioner I0319 17:20:36.315905       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
storage-provisioner storage-provisioner I0319 17:20:53.723798       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
storage-provisioner storage-provisioner I0319 17:20:53.722422       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c6a6e39b-fdb8-4d14-b64a-06a58a1535e7", APIVersion:"v1", ResourceVersion:"13948", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_4a5e3d60-0c2e-4d16-ba6b-e42f444539e4 became leader
storage-provisioner storage-provisioner I0319 17:20:53.730880       1 controller.go:780] Starting provisioner controller k8s.io/minikube-hostpath_minikube_4a5e3d60-0c2e-4d16-ba6b-e42f444539e4!
storage-provisioner storage-provisioner I0319 17:20:53.831839       1 controller.go:829] Started provisioner controller k8s.io/minikube-hostpath_minikube_4a5e3d60-0c2e-4d16-ba6b-e42f444539e4!
(stopped)

[Added (2020/03/20)]

Apparently, the storage-provisioner built on macOS does not work properly. I debugged why it fails, but in the end I could not figure it out.
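One plausible explanation (my assumption, not verified in this thread): `go build` on macOS defaults to a darwin binary, which cannot run inside the linux-based provisioner image. Cross-compiling explicitly sidesteps that (the package path below is illustrative):

```shell
# Target linux/amd64 regardless of the build host; disable CGO for a static binary.
GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -o storage-provisioner ./cmd/storage-provisioner
```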

I also tried sig-storage-local-static-provisioner. The local-static-provisioner does not work either (the Pod stays Pending since the PVC and PV are not bound).

govargo commented 4 years ago

Finally, I understood why the PV and PVC are not bound.

minikube is now adding a multi-node feature, so the node name of minikube became "m01".

$ kubectl get nodes
NAME   STATUS   ROLES    AGE   VERSION
m01    Ready    master   29m   v1.18.0-rc.1

This is why the PV and PVC are not bound: the PV mongo-pv has node affinity to the hostname minikube. If you change that value to m01, mongo-db will run. @alex-gab

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv
spec:
  capacity:
    storage: 1Gi
  # volumeMode block feature gate enabled by default with 1.13+
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  # StorageClass has a reclaim policy default so it'll be "inherited" by the PV
  # persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /tmp/data/db
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
-         - minikube 
+         - m01
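The scheduler message "didn't find available persistent volumes to bind" boils down to this label match: at least one of the PV's required nodeSelectorTerms must match the node's labels. A rough Python sketch of that check (a hypothetical helper for illustration only, not minikube or Kubernetes code):

```python
def pv_matches_node(node_selector_terms, node_labels):
    """Return True if any term matches the node (terms are ORed,
    expressions within a term are ANDed), mirroring how required
    node affinity is evaluated for a local PV."""
    for term in node_selector_terms:
        if all(
            expr["operator"] == "In" and node_labels.get(expr["key"]) in expr["values"]
            for expr in term.get("matchExpressions", [])
        ):
            return True
    return False

# The PV from this issue requires hostname "minikube"...
terms = [{"matchExpressions": [
    {"key": "kubernetes.io/hostname", "operator": "In", "values": ["minikube"]}
]}]

# ...but after the v1.8.x upgrade the node's hostname label is "m01".
node_labels = {"kubernetes.io/hostname": "m01"}

print(pv_matches_node(terms, node_labels))                             # False -> Pod stays Pending
print(pv_matches_node(terms, {"kubernetes.io/hostname": "minikube"}))  # True
```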
govargo commented 4 years ago

Related issue:

https://github.com/kubernetes/kubernetes/issues/88229

alex-gab commented 4 years ago

Thank you @govargo, I confirm that after upgrading to the latest stable minikube release (v1.8.2) and performing the changes you suggested, everything works fine!

Just a small note (I do not know whether this is an issue or not): while the node name has changed to m01 when issuing the command:

$ kubectl get nodes
NAME   STATUS   ROLES    AGE   VERSION
m01    Ready    master   14m   v1.17.3

the hostname of the node still remains minikube:

$ minikube ssh
                         _             _            
            _         _ ( )           ( )           
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __  
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

$ 
$ hostname
minikube
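Since the API node name and the OS hostname can diverge like this, the value that a local PV's node affinity actually matches is the node's kubernetes.io/hostname label, not the output of `hostname`. One way to check it (a sketch; requires a running cluster):

```shell
# Node name as registered with the API server.
kubectl get nodes -o name

# The label value that kubernetes.io/hostname node affinity matches.
kubectl get nodes -o jsonpath='{.items[0].metadata.labels.kubernetes\.io/hostname}'
```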
govargo commented 4 years ago

I am relieved that mongo-db works.

> Just a small note (I do not know whether this is an issue or not): while the node name has changed to m01 when issuing the command:

I think this is not an issue. The experimental multi-node PR has been merged to the master branch (not released yet): https://github.com/kubernetes/minikube/pull/6787

Multi-node usage will likely look like the following:

$ minikube start --nodes=2

$ minikube node stop node2
$ minikube node start node2

The hostname has been minikube for a long time, but it will change in the future, and minikube ssh will change along with it.

alex-gab commented 4 years ago

Hello @govargo, a small observation that may or may not matter: after upgrading to minikube v1.9.1, kubectl get nodes now reports minikube instead of m01.

$ kubectl get nodes
NAME       STATUS   ROLES    AGE     VERSION
minikube   Ready    master   8m25s   v1.18.0

$ minikube status
m01
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

$ minikube version
minikube version: v1.9.1
commit: d8747aec7ebf8332ddae276d5f8fb42d3152b5a1
govargo commented 4 years ago

@alex-gab Thank you for your comment! Unfortunately, it looks like a bug; I confirmed it in the latest minikube version.

I opened a new issue, #7452, where this will be tracked and fixed.