kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

error during start with `--kubernetes-version=stable` during k8s upgrade #8156

Closed (robrich closed this issue 3 years ago)

robrich commented 4 years ago

Steps to reproduce the issue:

  1. Running on Windows 10 Pro 1909.
  2. The existing cluster was previously running minikube 1.9.2 and k8s 1.18.0 and was started with the same ... --kubernetes-version=stable command.
  3. Upgrade minikube.exe to 1.10.1.
  4. minikube start --vm-driver hyperv --kubernetes-version=stable
  5. The minikube VM is now running, but k8s is not.
  6. Change the start command to minikube start --vm-driver hyperv --kubernetes-version=v1.18.2
  7. k8s is now running as expected. (A consolidated reproduction sketch follows this list.)
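For quick replay, the commands from the steps above can be condensed into one sketch. It assumes minikube.exe 1.10.1 is already on PATH and Hyper-V is enabled, and it only restates the flags shown above rather than introducing new ones.

```
# Cluster was originally created on minikube 1.9.2 with the "stable" alias:
minikube start --vm-driver hyperv --kubernetes-version=stable

# After upgrading minikube.exe to 1.10.1, the same command leaves the VM running
# but Kubernetes down (INVALID_KUBERNETES_VERSION, see output below):
minikube start --vm-driver hyperv --kubernetes-version=stable

# Pinning an explicit semver tag brings the cluster up as expected:
minikube start --vm-driver hyperv --kubernetes-version=v1.18.2
```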

Full output of failed command:

* minikube v1.10.1 on Microsoft Windows 10 Pro 10.0.18363 Build 18363
  - MINIKUBE_ACTIVE_DOCKERD=minikube
* Using the hyperv driver based on existing profile
E0514 18:13:23.038256    1944 start.go:988] Error parsing old version "stable": No Major.Minor.Patch elements found
E0514 18:13:23.040047    1944 start.go:988] Error parsing old version "stable": No Major.Minor.Patch elements found
* Starting control plane node minikube in cluster minikube
* Downloading Kubernetes v1.18.2 preload ...
    > preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4: 525.43 MiB
* Restarting existing hyperv VM for "minikube" ...
* Preparing Kubernetes v1.18.2 on Docker 18.09.9 ...
*
* [INVALID_KUBERNETES_VERSION] Failed to update cluster kubeadm images: semver: No Major.Minor.Patch elements found
* Suggestion: Specify --kubernetes-version in v<major>.<minor.<build> form. example: 'v1.1.14'
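The repeated `Error parsing old version "stable"` lines (and the `/var/lib/minikube/binaries/stable/kubectl` path later in the logs) suggest the literal alias was persisted in the existing profile and is re-parsed as a semver on restart. Below is a minimal way to check and recover, assuming the default profile location (`%USERPROFILE%\.minikube\profiles\minikube\config.json` on Windows, `~/.minikube/...` in a bash-style shell) and the `KubernetesVersion` field name; neither is confirmed by this report.

```
# Inspect the saved profile for the stored Kubernetes version string
# (path and field name are assumptions about the default layout):
grep -i kubernetesversion ~/.minikube/profiles/minikube/config.json

# Recover by restarting with an explicit tag, as in step 6 above:
minikube start --vm-driver hyperv --kubernetes-version=v1.18.2

# Or, if the profile is disposable, recreate it with an explicit version:
minikube delete
minikube start --vm-driver hyperv --kubernetes-version=v1.18.2
```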

Full output of minikube start command used, if not already included:

Optional: Full output of minikube logs command:

``` * ==> Docker <== * -- Logs begin at Fri 2020-05-15 01:14:39 UTC, end at Fri 2020-05-15 01:19:45 UTC. -- * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.553087909Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1 * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.553363109Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1 * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.553378609Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1 * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.553414509Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1 * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.553424309Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1 * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.553433509Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1 * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.553442209Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1 * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.553451809Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1 * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.553547409Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2 * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.553619409Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1 * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.554995509Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1 * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.555024109Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1 * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.555069309Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1 * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.555079909Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1 * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.555089309Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1 * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.555098209Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1 * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.555106209Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1 * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.555114709Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1 * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.555123209Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." 
type=io.containerd.grpc.v1 * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.555132009Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1 * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.555140509Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1 * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.555194509Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1 * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.555206609Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1 * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.555215609Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1 * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.555266309Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1 * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.555490609Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock" * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.555571809Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock" * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.556036209Z" level=info msg="containerd successfully booted in 0.030045s" * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.557840409Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000047100, READY" module=grpc * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.575475309Z" level=info msg="systemd-resolved is running, so using resolvconf: /run/systemd/resolve/resolv.conf" * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.587057909Z" level=info msg="parsed scheme: \"unix\"" module=grpc * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.587180209Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.587647409Z" level=info msg="ccResolverWrapper: sending new addresses to cc: [{unix:///var/run/docker/containerd/containerd.sock 0 }]" module=grpc * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.587866509Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.587899509Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0000476b0, CONNECTING" module=grpc * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.588742909Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0000476b0, READY" module=grpc * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.590121209Z" level=info msg="parsed scheme: \"unix\"" module=grpc * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.590266509Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.590486309Z" level=info msg="ccResolverWrapper: sending new addresses to cc: [{unix:///var/run/docker/containerd/containerd.sock 0 }]" module=grpc * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.590694009Z" level=info msg="ClientConn 
switching balancer to \"pick_first\"" module=grpc * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.590826909Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000047980, CONNECTING" module=grpc * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.591047609Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000047980, READY" module=grpc * May 15 01:15:17 minikube dockerd[2362]: time="2020-05-15T01:15:17.748829009Z" level=info msg="[graphdriver] using prior storage driver: overlay2" * May 15 01:15:18 minikube dockerd[2362]: time="2020-05-15T01:15:18.460659709Z" level=info msg="Graph migration to content-addressability took 0.00 seconds" * May 15 01:15:18 minikube dockerd[2362]: time="2020-05-15T01:15:18.461019109Z" level=warning msg="Your kernel does not support cgroup blkio weight" * May 15 01:15:18 minikube dockerd[2362]: time="2020-05-15T01:15:18.461045109Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" * May 15 01:15:18 minikube dockerd[2362]: time="2020-05-15T01:15:18.461055709Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device" * May 15 01:15:18 minikube dockerd[2362]: time="2020-05-15T01:15:18.461065009Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device" * May 15 01:15:18 minikube dockerd[2362]: time="2020-05-15T01:15:18.461074909Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device" * May 15 01:15:18 minikube dockerd[2362]: time="2020-05-15T01:15:18.461084909Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device" * May 15 01:15:18 minikube dockerd[2362]: time="2020-05-15T01:15:18.461696009Z" level=info msg="Loading containers: start." * May 15 01:15:19 minikube dockerd[2362]: time="2020-05-15T01:15:19.374091009Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" * May 15 01:15:19 minikube dockerd[2362]: time="2020-05-15T01:15:19.616160409Z" level=info msg="Loading containers: done." * May 15 01:15:19 minikube dockerd[2362]: time="2020-05-15T01:15:19.648062809Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n" * May 15 01:15:19 minikube dockerd[2362]: time="2020-05-15T01:15:19.648980709Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH" * May 15 01:15:19 minikube dockerd[2362]: time="2020-05-15T01:15:19.680106709Z" level=info msg="Docker daemon" commit=039a7df9ba graphdriver(s)=overlay2 version=18.09.9 * May 15 01:15:19 minikube dockerd[2362]: time="2020-05-15T01:15:19.681608509Z" level=info msg="Daemon has completed initialization" * May 15 01:15:19 minikube dockerd[2362]: time="2020-05-15T01:15:19.707835009Z" level=info msg="API listen on /var/run/docker.sock" * May 15 01:15:19 minikube dockerd[2362]: time="2020-05-15T01:15:19.707867709Z" level=info msg="API listen on [::]:2376" * May 15 01:15:19 minikube systemd[1]: Started Docker Application Container Engine. 
* * ==> container status <== * time="2020-05-15T01:19:47Z" level=fatal msg="failed to connect: failed to connect, make sure you are running as root and the runtime has been started: context deadline exceeded" * CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES * 81fd99ed22ed d3e55153f52f "kube-controller-man…" 2 days ago Exited (2) 2 days ago k8s_kube-controller-manager_kube-controller-manager-minikube_kube-system_3016593d20758bbfe68aba26604a8e3d_13 * f1dea7edb1b7 a31f78c7c8ce "kube-scheduler --au…" 2 days ago Exited (2) 2 days ago k8s_kube-scheduler_kube-scheduler-minikube_kube-system_5795d0c442cb997ff93c49feeb9f6386_13 * 92e8aa0ad497 29024c9c6e70 "/usr/bin/dumb-init …" 6 days ago Exited (137) 2 days ago k8s_nginx-ingress-controller_nginx-ingress-controller-6fc5bcc8c9-mlqv9_kube-system_98bb2c67-c1a4-4689-b253-948eb77311d4_25 * 2394323cd63d 4689081edb10 "/storage-provisioner" 6 days ago Exited (2) 2 days ago k8s_storage-provisioner_storage-provisioner_kube-system_37323912-6250-4009-9f30-4dc93d14d7cc_23 * 3a22a495c545 k8s.gcr.io/pause:3.2 "/pause" 6 days ago Exited (0) 2 days ago k8s_POD_nginx-ingress-controller-6fc5bcc8c9-mlqv9_kube-system_98bb2c67-c1a4-4689-b253-948eb77311d4_10 * 56b7e78a0297 k8s.gcr.io/pause:3.2 "/pause" 6 days ago Exited (0) 2 days ago k8s_POD_storage-provisioner_kube-system_37323912-6250-4009-9f30-4dc93d14d7cc_15 * 13cb74b7038f 67da37a9a360 "/coredns -conf /etc…" 6 days ago Exited (0) 2 days ago k8s_coredns_coredns-66bff467f8-5pt62_kube-system_fb7a8046-3629-4ec1-b3fd-b5f235ac8d7e_8 * e320df0fb73d 67da37a9a360 "/coredns -conf /etc…" 6 days ago Exited (0) 2 days ago k8s_coredns_coredns-66bff467f8-t2prt_kube-system_c7a2a0ad-a298-4500-b85b-365a97a2833c_8 * db4a42a2b581 43940c34f24f "/usr/local/bin/kube…" 6 days ago Exited (2) 2 days ago k8s_kube-proxy_kube-proxy-d9lww_kube-system_a299b53f-bd53-4bea-a2ca-a7380efba5fa_6 * a6859b3827f2 k8s.gcr.io/pause:3.2 "/pause" 6 days ago Exited (0) 2 days ago k8s_POD_kube-proxy-d9lww_kube-system_a299b53f-bd53-4bea-a2ca-a7380efba5fa_6 * a6b4d6731b61 k8s.gcr.io/pause:3.2 "/pause" 6 days ago Exited (0) 2 days ago k8s_POD_coredns-66bff467f8-5pt62_kube-system_fb7a8046-3629-4ec1-b3fd-b5f235ac8d7e_6 * 5396e3b4f3f6 k8s.gcr.io/pause:3.2 "/pause" 6 days ago Exited (0) 2 days ago k8s_POD_coredns-66bff467f8-t2prt_kube-system_c7a2a0ad-a298-4500-b85b-365a97a2833c_6 * b06835fc34fd 303ce5db0e90 "etcd --advertise-cl…" 6 days ago Exited (0) 2 days ago k8s_etcd_etcd-minikube_kube-system_749fa983cc6c15d9a2da52fc6bbad2e5_7 * 345bb7be2aad d3e55153f52f "kube-controller-man…" 6 days ago Exited (255) 2 days ago k8s_kube-controller-manager_kube-controller-manager-minikube_kube-system_3016593d20758bbfe68aba26604a8e3d_12 * f94decc51888 a31f78c7c8ce "kube-scheduler --au…" 6 days ago Exited (255) 2 days ago k8s_kube-scheduler_kube-scheduler-minikube_kube-system_5795d0c442cb997ff93c49feeb9f6386_12 * e93e744525ff 74060cea7f70 "kube-apiserver --ad…" 6 days ago Exited (137) 2 days ago k8s_kube-apiserver_kube-apiserver-minikube_kube-system_36bdf954ca3c7ba1d1a6064d389812cb_7 * a19df4e92d3a k8s.gcr.io/pause:3.2 "/pause" 6 days ago Exited (0) 2 days ago k8s_POD_kube-controller-manager-minikube_kube-system_3016593d20758bbfe68aba26604a8e3d_6 * cf517e7424f6 k8s.gcr.io/pause:3.2 "/pause" 6 days ago Exited (0) 2 days ago k8s_POD_kube-apiserver-minikube_kube-system_36bdf954ca3c7ba1d1a6064d389812cb_6 * 0bb32f3f0c97 k8s.gcr.io/pause:3.2 "/pause" 6 days ago Exited (0) 2 days ago k8s_POD_etcd-minikube_kube-system_749fa983cc6c15d9a2da52fc6bbad2e5_6 * 4231cf75ef63 
k8s.gcr.io/pause:3.2 "/pause" 6 days ago Exited (0) 2 days ago k8s_POD_kube-scheduler-minikube_kube-system_5795d0c442cb997ff93c49feeb9f6386_6 * * ==> coredns [13cb74b7038f] <== * I0508 18:45:21.201471 1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-08 18:44:59.949538284 +0000 UTC m=+0.275201657) (total time: 21.251790656s): * Trace[2019727887]: [21.251790656s] [21.251790656s] END * E0508 18:45:21.201974 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused * I0508 18:45:21.201891 1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-08 18:44:59.992437846 +0000 UTC m=+0.318101319) (total time: 21.209440096s): * Trace[1427131847]: [21.209440096s] [21.209440096s] END * E0508 18:45:21.202062 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused * I0508 18:45:21.201938 1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-08 18:45:00.197835304 +0000 UTC m=+0.523498777) (total time: 21.004097238s): * Trace[939984059]: [21.004097238s] [21.004097238s] END * E0508 18:45:21.202074 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused * .:53 * [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7 * CoreDNS-1.6.7 * linux/amd64, go1.13.6, da7f65b * [INFO] plugin/ready: Still waiting on: "kubernetes" * [INFO] plugin/ready: Still waiting on: "kubernetes" * [INFO] SIGTERM: Shutting down servers then terminating * [INFO] plugin/health: Going into lameduck mode for 5s * * ==> coredns [e320df0fb73d] <== * I0508 18:45:21.201482 1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-08 18:44:59.99250204 +0000 UTC m=+0.318173113) (total time: 21.2088992s): * Trace[2019727887]: [21.2088992s] [21.2088992s] END * E0508 18:45:21.201498 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused * I0508 18:45:21.201559 1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-08 18:45:00.198299689 +0000 UTC m=+0.523970662) (total time: 21.003244752s): * Trace[1427131847]: [21.003244752s] [21.003244752s] END * E0508 18:45:21.201568 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused * I0508 18:45:21.201615 1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-08 
18:44:59.949529385 +0000 UTC m=+0.275200358) (total time: 21.252079956s): * Trace[939984059]: [21.252079956s] [21.252079956s] END * E0508 18:45:21.201820 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused * .:53 * [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7 * CoreDNS-1.6.7 * linux/amd64, go1.13.6, da7f65b * [INFO] plugin/ready: Still waiting on: "kubernetes" * [INFO] plugin/ready: Still waiting on: "kubernetes" * [INFO] SIGTERM: Shutting down servers then terminating * [INFO] plugin/health: Going into lameduck mode for 5s * * ==> describe nodes <== E0514 18:19:46.943251 7216 logs.go:178] command /bin/bash -c "sudo /var/lib/minikube/binaries/stable/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/stable/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1 stdout: stderr: sudo: /var/lib/minikube/binaries/stable/kubectl: command not found output: "\n** stderr ** \nsudo: /var/lib/minikube/binaries/stable/kubectl: command not found\n\n** /stderr **" * * ==> dmesg <== * [May15 01:14] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. * [ +0.031296] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug, * * this clock source is slow. Consider trying other clock sources * [ +23.850956] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2 * [ +1.779630] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons * [ +5.795071] systemd-fstab-generator[1174]: Ignoring "noauto" for root device * [ +0.006861] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:35 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling. * [ +0.000003] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.) * [ +5.269116] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack. * [ +0.136942] vboxguest: loading out-of-tree module taints kernel. * [ +0.005377] vboxguest: PCI device not found, probably running on physical hardware. 
* [May15 01:15] systemd-fstab-generator[2344]: Ignoring "noauto" for root device * [ +0.094421] systemd-fstab-generator[2353]: Ignoring "noauto" for root device * [ +23.528545] systemd-fstab-generator[2736]: Ignoring "noauto" for root device * [May15 01:16] NFSD: Unable to end grace period: -110 * * ==> etcd [b06835fc34fd] <== * 2020-05-12 20:36:10.758521 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/kube-apiserver-minikube.160d270c530c643d\" " with result "range_response_count:1 size:862" took too long (192.902586ms) to execute * 2020-05-12 20:36:11.036783 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:294" took too long (129.172689ms) to execute * WARNING: 2020/05/12 20:36:15 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing" * 2020-05-12 20:36:19.533305 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/kube-controller-manager-minikube.160d21e087f8795e\" " with result "range_response_count:0 size:6" took too long (144.007762ms) to execute * 2020-05-12 20:36:20.563388 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-scheduler-minikube\" " with result "range_response_count:1 size:4002" took too long (135.429475ms) to execute * 2020-05-12 20:36:51.550227 I | mvcc: store.index: compact 366506 * 2020-05-12 20:36:51.561367 I | mvcc: finished scheduled compaction at 366506 (took 9.460684ms) * 2020-05-12 20:38:37.025298 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:173" took too long (123.370261ms) to execute * 2020-05-12 20:41:51.574258 I | mvcc: store.index: compact 367161 * 2020-05-12 20:41:51.578268 I | mvcc: finished scheduled compaction at 367161 (took 2.721294ms) * 2020-05-12 20:46:51.584892 I | mvcc: store.index: compact 367861 * 2020-05-12 20:46:51.600454 I | mvcc: finished scheduled compaction at 367861 (took 15.281161ms) * 2020-05-12 20:49:27.912073 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:304" took too long (147.442815ms) to execute * 2020-05-12 20:49:30.105003 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:18" took too long (138.188239ms) to execute * 2020-05-12 20:49:31.872539 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:286" took too long (262.460013ms) to execute * 2020-05-12 20:49:31.873004 W | etcdserver: read-only range request "key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" limit:500 " with result "range_response_count:0 size:6" took too long (263.53241ms) to execute * 2020-05-12 20:50:37.075709 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:173" took too long (128.52056ms) to execute * 2020-05-12 20:50:38.223769 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:18" took too long (141.370725ms) to execute * 2020-05-12 20:50:38.225141 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-scheduler\" " with result "range_response_count:1 size:435" took too long (1.010438125s) to execute * 2020-05-12 20:50:38.238814 W | etcdserver: read-only range request 
"key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:453" took too long (534.041786ms) to execute * 2020-05-12 20:50:38.669806 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:18" took too long (322.485346ms) to execute * 2020-05-12 20:50:38.670017 W | etcdserver: read-only range request "key:\"/registry/endpointslices/default/kubernetes\" " with result "range_response_count:1 size:490" took too long (383.851583ms) to execute * 2020-05-12 20:50:38.671580 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:304" took too long (371.258217ms) to execute * 2020-05-12 20:50:41.320126 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-scheduler\" " with result "range_response_count:1 size:435" took too long (422.983979ms) to execute * 2020-05-12 20:50:41.320757 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:304" took too long (562.983609ms) to execute * 2020-05-12 20:50:41.627616 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:285" took too long (191.126394ms) to execute * 2020-05-12 20:50:45.599424 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:304" took too long (120.57888ms) to execute * 2020-05-12 20:50:45.911548 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-scheduler\" " with result "range_response_count:1 size:435" took too long (166.533358ms) to execute * 2020-05-12 20:50:46.079295 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:286" took too long (107.658514ms) to execute * 2020-05-12 20:50:48.288301 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:286" took too long (144.133617ms) to execute * 2020-05-12 20:51:21.033833 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:18" took too long (128.121958ms) to execute * 2020-05-12 20:51:29.834277 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:286" took too long (125.120866ms) to execute * 2020-05-12 20:51:31.791550 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:303" took too long (297.500505ms) to execute * 2020-05-12 20:51:32.856997 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/ingress-controller-leader-nginx\" " with result "range_response_count:1 size:456" took too long (120.921477ms) to execute * 2020-05-12 20:51:37.143749 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:220" took too long (150.613297ms) to execute * 2020-05-12 20:51:38.660986 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:285" took too long (168.606849ms) to execute * 2020-05-12 20:51:40.218042 W | etcdserver: read-only range 
request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:304" took too long (117.495786ms) to execute * 2020-05-12 20:51:49.300876 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:285" took too long (117.087887ms) to execute * 2020-05-12 20:51:50.946556 W | etcdserver: read-only range request "key:\"/registry/podtemplates\" range_end:\"/registry/podtemplatet\" count_only:true " with result "range_response_count:0 size:6" took too long (108.43481ms) to execute * 2020-05-12 20:51:51.619910 I | mvcc: store.index: compact 368561 * 2020-05-12 20:51:51.692151 I | mvcc: finished scheduled compaction at 368561 (took 71.14201ms) * 2020-05-12 20:53:16.713892 W | wal: sync duration of 2.126503941s, expected less than 1s * 2020-05-12 20:53:16.758826 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:304" took too long (2.157009658s) to execute * 2020-05-12 20:53:16.760741 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-scheduler\" " with result "range_response_count:1 size:435" took too long (1.518048489s) to execute * 2020-05-12 20:53:38.119407 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:304" took too long (148.740996ms) to execute * 2020-05-12 20:53:38.365913 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:286" took too long (133.453938ms) to execute * 2020-05-12 20:53:46.970440 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-scheduler\" " with result "range_response_count:1 size:435" took too long (351.431744ms) to execute * 2020-05-12 20:53:46.971395 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:304" took too long (531.626155ms) to execute * 2020-05-12 20:55:45.088844 I | etcdserver: start to snapshot (applied: 590059, lastsnap: 580058) * 2020-05-12 20:55:45.113635 I | etcdserver: saved snapshot at index 590059 * 2020-05-12 20:55:45.114360 I | etcdserver: compacted raft log at 585059 * 2020-05-12 20:55:50.280912 I | pkg/fileutil: purged file /var/lib/minikube/etcd/member/snap/0000000000000011-0000000000083d96.snap successfully * 2020-05-12 20:56:51.691210 I | mvcc: store.index: compact 369247 * 2020-05-12 20:56:51.692703 I | mvcc: finished scheduled compaction at 369247 (took 742.598µs) * 2020-05-12 20:56:58.387437 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:303" took too long (139.720613ms) to execute * 2020-05-12 20:57:54.026647 W | etcdserver: read-only range request "key:\"/registry/poddisruptionbudgets\" range_end:\"/registry/poddisruptionbudgett\" count_only:true " with result "range_response_count:0 size:6" took too long (115.585979ms) to execute * 2020-05-12 20:57:54.027296 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:286" took too long (220.909086ms) to execute * 2020-05-12 20:57:55.089573 N | pkg/osutil: received terminated signal, shutting down... 
* WARNING: 2020/05/12 20:57:55 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * 2020-05-12 20:57:55.373176 I | etcdserver: skipped leadership transfer for single voting member cluster * * ==> kernel <== * 01:19:48 up 5 min, 0 users, load average: 0.02, 0.13, 0.08 * Linux minikube 4.15.0 #1 SMP Wed Sep 18 07:44:58 PDT 2019 x86_64 GNU/Linux * PRETTY_NAME="Buildroot 2018.05.3" * * ==> kube-apiserver [e93e744525ff] <== * W0512 20:58:03.645090 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:03.700357 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:03.701613 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:03.718690 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:03.772655 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:03.789589 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:03.834590 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:03.842365 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:03.872410 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:03.908705 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:03.931879 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... 
* W0512 20:58:03.932498 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:03.988122 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:03.996836 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:04.145780 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:04.164401 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:04.174209 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:04.217407 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:04.223170 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:04.235187 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:04.242970 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:04.246884 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:04.258374 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:04.277235 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:04.296689 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. 
Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:04.301783 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:04.312374 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:04.350079 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:04.351093 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:04.422055 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:04.452269 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:04.512641 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:04.541597 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:04.574761 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:04.597482 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:04.624861 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:04.647760 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:04.690230 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... 
* W0512 20:58:04.707604 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:04.718859 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:04.731944 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:04.786742 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:04.791111 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:04.888660 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:04.944852 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:05.011056 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:05.011342 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:05.051896 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:05.052178 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:05.080473 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:05.092963 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:05.101407 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. 
Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:05.146856 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:05.193032 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:05.200852 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:05.267103 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:05.314848 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:05.349443 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:05.353774 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W0512 20:58:05.356241 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... 
* * ==> kube-controller-manager [345bb7be2aad] <== * I0508 18:45:19.128585 1 controllermanager.go:533] Started "pv-protection" * I0508 18:45:19.128732 1 pv_protection_controller.go:83] Starting PV protection controller * I0508 18:45:19.128740 1 shared_informer.go:223] Waiting for caches to sync for PV protection * I0508 18:45:19.279037 1 controllermanager.go:533] Started "serviceaccount" * I0508 18:45:19.279092 1 serviceaccounts_controller.go:117] Starting service account controller * I0508 18:45:19.279099 1 shared_informer.go:223] Waiting for caches to sync for service account * I0508 18:45:19.428400 1 controllermanager.go:533] Started "daemonset" * I0508 18:45:19.428596 1 daemon_controller.go:257] Starting daemon sets controller * I0508 18:45:19.428605 1 shared_informer.go:223] Waiting for caches to sync for daemon sets * I0508 18:45:19.727846 1 controllermanager.go:533] Started "disruption" * I0508 18:45:19.728283 1 disruption.go:331] Starting disruption controller * I0508 18:45:19.728432 1 shared_informer.go:223] Waiting for caches to sync for disruption * I0508 18:45:19.878680 1 controllermanager.go:533] Started "cronjob" * W0508 18:45:19.878926 1 controllermanager.go:525] Skipping "nodeipam" * I0508 18:45:19.878979 1 cronjob_controller.go:97] Starting CronJob Manager * I0508 18:45:20.039689 1 controllermanager.go:533] Started "persistentvolume-binder" * I0508 18:45:20.042016 1 pv_controller_base.go:295] Starting persistent volume controller * I0508 18:45:20.043779 1 shared_informer.go:223] Waiting for caches to sync for persistent volume * I0508 18:45:20.042967 1 shared_informer.go:223] Waiting for caches to sync for resource quota * I0508 18:45:20.078991 1 shared_informer.go:223] Waiting for caches to sync for garbage collector * W0508 18:45:20.095061 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist * I0508 18:45:20.096311 1 shared_informer.go:230] Caches are synced for namespace * I0508 18:45:20.104032 1 shared_informer.go:230] Caches are synced for certificate-csrsigning * I0508 18:45:20.121352 1 shared_informer.go:230] Caches are synced for PVC protection * I0508 18:45:20.128695 1 shared_informer.go:230] Caches are synced for certificate-csrapproving * I0508 18:45:20.128917 1 shared_informer.go:230] Caches are synced for PV protection * I0508 18:45:20.130026 1 shared_informer.go:230] Caches are synced for TTL * I0508 18:45:20.130180 1 shared_informer.go:230] Caches are synced for endpoint_slice * I0508 18:45:20.144439 1 shared_informer.go:230] Caches are synced for persistent volume * I0508 18:45:20.159773 1 shared_informer.go:230] Caches are synced for endpoint * I0508 18:45:20.161274 1 shared_informer.go:230] Caches are synced for expand * I0508 18:45:20.166564 1 shared_informer.go:230] Caches are synced for GC * I0508 18:45:20.178786 1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator * I0508 18:45:20.180190 1 shared_informer.go:230] Caches are synced for service account * I0508 18:45:20.181331 1 shared_informer.go:230] Caches are synced for HPA * I0508 18:45:20.228962 1 shared_informer.go:230] Caches are synced for job * I0508 18:45:20.332380 1 shared_informer.go:230] Caches are synced for taint * I0508 18:45:20.332968 1 taint_manager.go:187] Starting NoExecuteTaintManager * I0508 18:45:20.333487 1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: * W0508 18:45:20.334206 1 
node_lifecycle_controller.go:1048] Missing timestamp for Node minikube. Assuming now as a timestamp. * I0508 18:45:20.334568 1 node_lifecycle_controller.go:1249] Controller detected that zone is now in state Normal. * I0508 18:45:20.334215 1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"a6e64099-086c-43ff-b176-f86e5d68b0b3", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller * I0508 18:45:20.336033 1 shared_informer.go:230] Caches are synced for bootstrap_signer * I0508 18:45:20.379510 1 shared_informer.go:230] Caches are synced for ReplicaSet * I0508 18:45:20.384122 1 shared_informer.go:230] Caches are synced for deployment * I0508 18:45:20.397445 1 shared_informer.go:230] Caches are synced for stateful set * I0508 18:45:20.428884 1 shared_informer.go:230] Caches are synced for daemon sets * I0508 18:45:20.580510 1 shared_informer.go:230] Caches are synced for ReplicationController * I0508 18:45:20.584804 1 shared_informer.go:230] Caches are synced for attach detach * I0508 18:45:20.628856 1 shared_informer.go:230] Caches are synced for disruption * I0508 18:45:20.629112 1 disruption.go:339] Sending events to api server. * I0508 18:45:20.647691 1 shared_informer.go:230] Caches are synced for resource quota * I0508 18:45:20.675651 1 shared_informer.go:230] Caches are synced for resource quota * I0508 18:45:20.679910 1 shared_informer.go:230] Caches are synced for garbage collector * I0508 18:45:20.743600 1 shared_informer.go:230] Caches are synced for garbage collector * I0508 18:45:20.744685 1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage * E0512 20:36:13.865050 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://192.168.184.131:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: context deadline exceeded * I0512 20:36:13.910444 1 event.go:278] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"", Name:"", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_564ef06a-bf90-4b10-8c08-786eb677a5d6 stopped leading * I0512 20:36:13.913364 1 leaderelection.go:277] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition * F0512 20:36:13.915228 1 controllermanager.go:279] leaderelection lost * * ==> kube-controller-manager [81fd99ed22ed] <== * I0512 20:36:42.793671 1 controllermanager.go:533] Started "statefulset" * I0512 20:36:42.793847 1 stateful_set.go:146] Starting stateful set controller * I0512 20:36:42.794292 1 shared_informer.go:223] Waiting for caches to sync for stateful set * I0512 20:36:42.953955 1 controllermanager.go:533] Started "namespace" * I0512 20:36:42.954071 1 namespace_controller.go:200] Starting namespace controller * I0512 20:36:42.954078 1 shared_informer.go:223] Waiting for caches to sync for namespace * I0512 20:36:43.093063 1 controllermanager.go:533] Started "clusterrole-aggregation" * I0512 20:36:43.093130 1 clusterroleaggregation_controller.go:149] Starting ClusterRoleAggregator * I0512 20:36:43.095531 1 shared_informer.go:223] Waiting for caches to sync for ClusterRoleAggregator * I0512 20:36:43.243989 1 controllermanager.go:533] Started "job" * I0512 20:36:43.244183 1 job_controller.go:144] Starting job controller * I0512 20:36:43.244192 1 shared_informer.go:223] Waiting for caches 
to sync for job * I0512 20:36:43.392572 1 controllermanager.go:533] Started "replicaset" * I0512 20:36:43.392720 1 replica_set.go:181] Starting replicaset controller * I0512 20:36:43.392729 1 shared_informer.go:223] Waiting for caches to sync for ReplicaSet * I0512 20:36:43.542930 1 controllermanager.go:533] Started "persistentvolume-expander" * I0512 20:36:43.542958 1 expand_controller.go:319] Starting expand controller * I0512 20:36:43.544906 1 shared_informer.go:223] Waiting for caches to sync for expand * I0512 20:36:43.544848 1 shared_informer.go:223] Waiting for caches to sync for resource quota * I0512 20:36:43.667188 1 shared_informer.go:230] Caches are synced for job * I0512 20:36:43.667988 1 shared_informer.go:230] Caches are synced for expand * I0512 20:36:43.668324 1 shared_informer.go:230] Caches are synced for namespace * I0512 20:36:43.691294 1 shared_informer.go:230] Caches are synced for certificate-csrapproving * I0512 20:36:43.693370 1 shared_informer.go:230] Caches are synced for ReplicaSet * I0512 20:36:43.697632 1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator * I0512 20:36:43.700088 1 shared_informer.go:230] Caches are synced for service account * I0512 20:36:43.700620 1 shared_informer.go:230] Caches are synced for PV protection * I0512 20:36:43.713878 1 shared_informer.go:230] Caches are synced for PVC protection * I0512 20:36:43.731347 1 shared_informer.go:230] Caches are synced for certificate-csrsigning * I0512 20:36:43.742939 1 shared_informer.go:230] Caches are synced for HPA * I0512 20:36:43.743842 1 shared_informer.go:230] Caches are synced for endpoint_slice * I0512 20:36:43.746498 1 shared_informer.go:223] Waiting for caches to sync for garbage collector * W0512 20:36:43.746903 1 endpointslice_controller.go:260] Error syncing endpoint slices for service "kube-system/kube-dns", retrying. Error: node "minikube" not found * I0512 20:36:43.747023 1 event.go:278] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"f796cd25-6a4d-4e29-aea8-5b0b958fca94", APIVersion:"v1", ResourceVersion:"210", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: node "minikube" not found * W0512 20:36:43.752609 1 endpointslice_controller.go:260] Error syncing endpoint slices for service "kube-system/kube-dns", retrying. 
Error: node "minikube" not found * I0512 20:36:43.752652 1 event.go:278] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"f796cd25-6a4d-4e29-aea8-5b0b958fca94", APIVersion:"v1", ResourceVersion:"210", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: node "minikube" not found * W0512 20:36:43.755120 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist * I0512 20:36:43.766578 1 shared_informer.go:230] Caches are synced for daemon sets * I0512 20:36:43.774292 1 shared_informer.go:230] Caches are synced for attach detach * I0512 20:36:43.777975 1 shared_informer.go:230] Caches are synced for TTL * I0512 20:36:43.795710 1 shared_informer.go:230] Caches are synced for taint * I0512 20:36:43.796305 1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: * I0512 20:36:43.796646 1 taint_manager.go:187] Starting NoExecuteTaintManager * W0512 20:36:43.797037 1 node_lifecycle_controller.go:1048] Missing timestamp for Node minikube. Assuming now as a timestamp. * I0512 20:36:43.799909 1 node_lifecycle_controller.go:1249] Controller detected that zone is now in state Normal. * I0512 20:36:43.797230 1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"a6e64099-086c-43ff-b176-f86e5d68b0b3", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller * I0512 20:36:43.800076 1 shared_informer.go:230] Caches are synced for endpoint * I0512 20:36:43.822163 1 shared_informer.go:230] Caches are synced for GC * I0512 20:36:43.850031 1 shared_informer.go:230] Caches are synced for deployment * I0512 20:36:43.854491 1 shared_informer.go:230] Caches are synced for persistent volume * I0512 20:36:43.895076 1 shared_informer.go:230] Caches are synced for stateful set * I0512 20:36:44.043961 1 shared_informer.go:230] Caches are synced for disruption * I0512 20:36:44.044093 1 disruption.go:339] Sending events to api server. * I0512 20:36:44.044840 1 shared_informer.go:230] Caches are synced for ReplicationController * I0512 20:36:44.151661 1 shared_informer.go:230] Caches are synced for resource quota * I0512 20:36:44.169074 1 shared_informer.go:230] Caches are synced for garbage collector * I0512 20:36:44.169224 1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage * I0512 20:36:44.192968 1 shared_informer.go:230] Caches are synced for resource quota * I0512 20:36:44.228012 1 shared_informer.go:230] Caches are synced for bootstrap_signer * I0512 20:36:44.247099 1 shared_informer.go:230] Caches are synced for garbage collector * * ==> kube-proxy [db4a42a2b581] <== * W0508 18:45:00.680799 1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy * I0508 18:45:00.738594 1 node.go:136] Successfully retrieved node IP: 192.168.184.131 * I0508 18:45:00.739482 1 server_others.go:186] Using iptables Proxier. 
* W0508 18:45:00.739618 1 server_others.go:436] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined * I0508 18:45:00.742204 1 server_others.go:447] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local * I0508 18:45:00.760437 1 server.go:583] Version: v1.18.0 * I0508 18:45:00.763870 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072 * I0508 18:45:00.763898 1 conntrack.go:52] Setting nf_conntrack_max to 131072 * I0508 18:45:00.764038 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 * I0508 18:45:00.764088 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600 * I0508 18:45:00.772069 1 config.go:315] Starting service config controller * I0508 18:45:00.772279 1 shared_informer.go:223] Waiting for caches to sync for service config * I0508 18:45:00.772419 1 config.go:133] Starting endpoints config controller * I0508 18:45:00.772548 1 shared_informer.go:223] Waiting for caches to sync for endpoints config * I0508 18:45:00.872892 1 shared_informer.go:230] Caches are synced for service config * I0508 18:45:00.872907 1 shared_informer.go:230] Caches are synced for endpoints config * * ==> kube-scheduler [f1dea7edb1b7] <== * I0512 20:36:19.801170 1 registry.go:150] Registering EvenPodsSpread predicate and priority function * I0512 20:36:19.801462 1 registry.go:150] Registering EvenPodsSpread predicate and priority function * I0512 20:36:20.857474 1 serving.go:313] Generated self-signed cert in-memory * I0512 20:36:21.601640 1 registry.go:150] Registering EvenPodsSpread predicate and priority function * I0512 20:36:21.601855 1 registry.go:150] Registering EvenPodsSpread predicate and priority function * W0512 20:36:21.617085 1 authorization.go:47] Authorization is disabled * W0512 20:36:21.617160 1 authentication.go:40] Authentication is disabled * I0512 20:36:21.617194 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251 * I0512 20:36:21.628138 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259 * I0512 20:36:21.628515 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file * I0512 20:36:21.628765 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file * I0512 20:36:21.628867 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file * I0512 20:36:21.629031 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file * I0512 20:36:21.628879 1 tlsconfig.go:240] Starting DynamicServingCertificateController * I0512 20:36:21.729262 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file * I0512 20:36:21.730842 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file * I0512 20:36:21.731039 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler... 
* I0512 20:36:37.986105 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler * I0512 20:57:54.164834 1 log.go:172] http: TLS handshake error from 127.0.0.1:49270: read tcp 127.0.0.1:10259->127.0.0.1:49270: read: connection reset by peer * * ==> kube-scheduler [f94decc51888] <== * I0508 18:44:52.660264 1 registry.go:150] Registering EvenPodsSpread predicate and priority function * I0508 18:44:52.660599 1 registry.go:150] Registering EvenPodsSpread predicate and priority function * I0508 18:44:53.504351 1 serving.go:313] Generated self-signed cert in-memory * W0508 18:44:58.209925 1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' * W0508 18:44:58.210022 1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system" * W0508 18:44:58.210033 1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous. * W0508 18:44:58.210039 1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false * I0508 18:44:58.298783 1 registry.go:150] Registering EvenPodsSpread predicate and priority function * I0508 18:44:58.298857 1 registry.go:150] Registering EvenPodsSpread predicate and priority function * W0508 18:44:58.384018 1 authorization.go:47] Authorization is disabled * W0508 18:44:58.384045 1 authentication.go:40] Authentication is disabled * I0508 18:44:58.384229 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251 * I0508 18:44:58.389402 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file * I0508 18:44:58.389838 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file * I0508 18:44:58.390638 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259 * I0508 18:44:58.391017 1 tlsconfig.go:240] Starting DynamicServingCertificateController * I0508 18:44:58.490989 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file * I0508 18:44:58.491659 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler... * I0508 18:45:16.123987 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler * E0512 20:36:13.615484 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-scheduler: Get https://192.168.184.131:8443/api/v1/namespaces/kube-system/endpoints/kube-scheduler?timeout=10s: context deadline exceeded * I0512 20:36:13.635543 1 leaderelection.go:277] failed to renew lease kube-system/kube-scheduler: timed out waiting for the condition * F0512 20:36:13.635990 1 server.go:244] leaderelection lost * * ==> kubelet <== * -- Logs begin at Fri 2020-05-15 01:14:39 UTC, end at Fri 2020-05-15 01:19:49 UTC. -- * -- No entries -- * * ==> storage-provisioner [2394323cd63d] <== ! unable to fetch logs for: describe nodes ```
medyagh commented 4 years ago

@robrich thank you for reporting this. That seems like a bug! Meanwhile, I believe you should still be able to get around it by passing the actual version.

I will make sure we fix this in our next release.
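
For example, passing the explicit version that the stable alias currently resolves to (v1.18.2 at the time of this issue) should let the existing cluster start:

minikube start --kubernetes-version=v1.18.2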

radeksm commented 4 years ago

Problem seems to be driver-independent. It happens when you try to start a cluster while one already exists, regardless of whether it is running or stopped.

[radek@c8k20 ~]$ ./minikube-linux-amd64 status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

[radek@c8k20 ~]$  ./minikube-linux-amd64  --v=4 --driver=docker  --kubernetes-version=stable  start
* minikube v1.10.1 on Centos 8.1.1911
* Using the docker driver based on existing profile
E0517 18:23:37.548452   14231 start.go:988] Error parsing old version "stable": No Major.Minor.Patch elements found
E0517 18:23:37.548857   14231 start.go:988] Error parsing old version "stable": No Major.Minor.Patch elements found
* Starting control plane node minikube in cluster minikube
* Updating the running docker "minikube" container ...
* Preparing Kubernetes v1.18.2 on Docker 19.03.2 ...
  - kubeadm.pod-network-cidr=10.244.0.0/16
*
* [INVALID_KUBERNETES_VERSION] Failed to update cluster kubeadm images: semver: No Major.Minor.Patch elements found
* Suggestion: Specify --kubernetes-version in v<major>.<minor.<build> form. example: 'v1.1.14'

IMHO we should check whether a cluster already exists and, if it does, refuse to start; see the sketch below.
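
A rough sketch of what such a guard could look like (hypothetical names and a stand-in profile struct, not minikube's actual code; it uses the same blang/semver parsing that produces the "No Major.Minor.Patch elements found" error above):

```go
package main

import (
	"fmt"
	"strings"

	"github.com/blang/semver"
)

// existingProfile is a hypothetical stand-in for the saved cluster profile.
type existingProfile struct {
	KubernetesVersion string // e.g. "v1.18.0"
}

// validateRequestedVersion refuses to reuse an existing cluster when the
// requested version is not a parseable semver (e.g. the "stable" alias),
// instead of failing later during the kubeadm image update.
func validateRequestedVersion(requested string, existing *existingProfile) error {
	if existing == nil {
		return nil // fresh cluster: the alias can be resolved normally
	}
	if _, err := semver.Parse(strings.TrimPrefix(requested, "v")); err != nil {
		return fmt.Errorf("cluster already exists at %s; --kubernetes-version=%q is not an explicit version: %v",
			existing.KubernetesVersion, requested, err)
	}
	return nil
}

func main() {
	err := validateRequestedVersion("stable", &existingProfile{KubernetesVersion: "v1.18.0"})
	fmt.Println(err) // prints the refusal instead of letting start continue
}
```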

xAt0mZ commented 4 years ago

> Problem seems to be driver-independent.

+1. I can reproduce with Docker and VirtualBox.

I don't know how minikube manages the default value, but per the doc:

> --kubernetes-version='': The Kubernetes version that the minikube VM will use (ex: v1.2.3, 'stable' for v1.18.2, 'latest' for v1.18.3-beta.0). Defaults to 'stable'.

IMO using the flag with its default value and not using the flag at all (letting minikube pick the default by itself) should behave the same.

In my testing, however, not specifying --kubernetes-version does not behave the same as specifying it explicitly.

With an update available (screenshot)

With the default kubernetes-version (stable) (screenshot)

Suggestion

Only use stable and latest as dynamic version aliases and replace them with the current version when performing the action: so minikube start --kubernetes-version=stable becomes minikube start --kubernetes-version=v1.18.2 (the current stable).

In case of a version mismatch between the existing cluster and the specified version (stable being an alias for the current stable release, which changes over time), perform the upgrade. IMO the only point of using stable instead of a specific version is to always get the current stable release under that alias instead of performing the updates manually.
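
A minimal sketch of that alias-resolution step, assuming hypothetical names and the stable/latest values from the doc quoted above:

```go
package main

import "fmt"

// Hypothetical pinned defaults; at the time of this issue the docs listed
// v1.18.2 as "stable" and v1.18.3-beta.0 as "latest".
const (
	defaultStable = "v1.18.2"
	defaultLatest = "v1.18.3-beta.0"
)

// resolveKubernetesVersion replaces the dynamic aliases with a concrete
// version up front, so "stable", "latest", and an omitted flag all feed the
// same explicit semver into the rest of the start/upgrade logic.
func resolveKubernetesVersion(requested string) string {
	switch requested {
	case "", "stable":
		return defaultStable
	case "latest":
		return defaultLatest
	default:
		return requested
	}
}

func main() {
	fmt.Println(resolveKubernetesVersion("stable"))  // v1.18.2
	fmt.Println(resolveKubernetesVersion("v1.17.5")) // v1.17.5
}
```

Resolving the alias before it is ever compared against or written into the existing profile would presumably also avoid the "Error parsing old version \"stable\"" messages, since only concrete versions would be stored.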

Btw, the bug also exists with the latest alias (screenshot).

medyagh commented 3 years ago

I believe this was fixed? @robrich @xAt0mZ do you mind verifying whether this issue was solved by the latest version of minikube?

xAt0mZ commented 3 years ago

@medyagh I confirm this issue is solved on minikube v1.12.2 :rocket:

~> minikube version
minikube version: v1.12.2
commit: be7c19d391302656d27f1f213657d925c4e1cfc2
sharifelgamal commented 3 years ago

Excellent, closing this.