kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

kvm2: [kubelet-check] Initial timeout of 40s passed: write unix /var/run/docker.sock->@: write: broken pipe #7145

Closed: deej-io closed this issue 4 years ago

deej-io commented 4 years ago

When using the kvm2 driver, the "Launching Kubernetes" step is painfully slow and often fails with a connection-timeout error. The resulting error from minikube is different every time, so I have presented only the latest one below.

In the cases where minikube start does succeed, I regularly get 503 errors from kubectl get componentstatuses, and commands like minikube dashboard often fail with timeouts.
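
To illustrate, these are the sorts of checks that fail for me (a rough sketch; the exact errors vary from run to run and the 503s are intermittent):

kubectl get componentstatuses   # intermittently returns 503s for the control-plane components
minikube dashboard              # often times out before printing the dashboard URL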

Using the virtualbox driver works perfectly, so I can fall back to it for now (see the workaround sketch below), but I am stumped as to how to resolve the kvm2 problem.
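
For anyone hitting the same thing, my workaround is simply to recreate the cluster on the virtualbox driver (a sketch using stock minikube commands; minikube delete discards the broken kvm2 VM first):

minikube delete
minikube start --driver virtualbox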

Many thanks,

Daniel

The exact command to reproduce the issue:

minikube start --driver kvm2

The full output of the command that failed:

🙄  minikube v1.8.2 on Arch rolling
✨  Using the kvm2 driver based on user configuration
🔥  Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.17.3 on Docker 19.03.6 ...
🚀  Launching Kubernetes ...

💣  Error starting cluster: init failed. output: (the kubeadm stdout/stderr, reproduced in full below): /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.17.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [m01 localhost] and IPs [192.168.39.240 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [m01 localhost] and IPs [192.168.39.240 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
    - 'docker ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'docker logs CONTAINERID'

stderr:
W0322 15:00:53.205175    2490 validation.go:28] Cannot validate kubelet config - no validator is available
W0322 15:00:53.205272    2490 validation.go:28] Cannot validate kube-proxy config - no validator is available
    [WARNING Hostname]: hostname "m01" could not be reached
    [WARNING Hostname]: hostname "m01": lookup m01 on 192.168.122.1:53: no such host
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0322 15:00:55.762983    2490 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0322 15:00:55.764418    2490 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
👉  https://github.com/kubernetes/minikube/issues/new/choose
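
For reference, the kubeadm hints above boil down to roughly the following from inside the VM (a sketch; minikube ssh opens a shell in the VM, and CONTAINERID is a placeholder for whichever container turns out to be failing):

minikube ssh
sudo systemctl status kubelet
sudo journalctl -xeu kubelet
docker ps -a | grep kube | grep -v pause
docker logs CONTAINERID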

The output of the minikube logs command:

==> Docker <==
-- Logs begin at Sun 2020-03-22 15:00:03 UTC, end at Sun 2020-03-22 15:07:29 UTC. --
Mar 22 15:00:25 minikube dockerd[2228]: time="2020-03-22T15:00:25.688968924Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
Mar 22 15:00:25 minikube dockerd[2228]: time="2020-03-22T15:00:25.689035759Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
Mar 22 15:00:25 minikube dockerd[2228]: time="2020-03-22T15:00:25.689101502Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
Mar 22 15:00:25 minikube dockerd[2228]: time="2020-03-22T15:00:25.689171313Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
Mar 22 15:00:25 minikube dockerd[2228]: time="2020-03-22T15:00:25.689243028Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
Mar 22 15:00:25 minikube dockerd[2228]: time="2020-03-22T15:00:25.689322237Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
Mar 22 15:00:25 minikube dockerd[2228]: time="2020-03-22T15:00:25.689402517Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
Mar 22 15:00:25 minikube dockerd[2228]: time="2020-03-22T15:00:25.689493217Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
Mar 22 15:00:25 minikube dockerd[2228]: time="2020-03-22T15:00:25.689571154Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
Mar 22 15:00:25 minikube dockerd[2228]: time="2020-03-22T15:00:25.689639311Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
Mar 22 15:00:25 minikube dockerd[2228]: time="2020-03-22T15:00:25.689732246Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
Mar 22 15:00:25 minikube dockerd[2228]: time="2020-03-22T15:00:25.689914798Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
Mar 22 15:00:25 minikube dockerd[2228]: time="2020-03-22T15:00:25.690024093Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
Mar 22 15:00:25 minikube dockerd[2228]: time="2020-03-22T15:00:25.690094966Z" level=info msg="containerd successfully booted in 0.006411s"
Mar 22 15:00:25 minikube dockerd[2228]: time="2020-03-22T15:00:25.697622868Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 22 15:00:25 minikube dockerd[2228]: time="2020-03-22T15:00:25.697799218Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 22 15:00:25 minikube dockerd[2228]: time="2020-03-22T15:00:25.697903394Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
Mar 22 15:00:25 minikube dockerd[2228]: time="2020-03-22T15:00:25.697981130Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 22 15:00:25 minikube dockerd[2228]: time="2020-03-22T15:00:25.698600982Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 22 15:00:25 minikube dockerd[2228]: time="2020-03-22T15:00:25.698622132Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 22 15:00:25 minikube dockerd[2228]: time="2020-03-22T15:00:25.698639094Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
Mar 22 15:00:25 minikube dockerd[2228]: time="2020-03-22T15:00:25.698650926Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 22 15:00:44 minikube dockerd[2228]: time="2020-03-22T15:00:44.281250303Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Mar 22 15:00:44 minikube dockerd[2228]: time="2020-03-22T15:00:44.281284169Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Mar 22 15:00:44 minikube dockerd[2228]: time="2020-03-22T15:00:44.281294390Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
Mar 22 15:00:44 minikube dockerd[2228]: time="2020-03-22T15:00:44.281303628Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
Mar 22 15:00:44 minikube dockerd[2228]: time="2020-03-22T15:00:44.281313126Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
Mar 22 15:00:44 minikube dockerd[2228]: time="2020-03-22T15:00:44.281321683Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
Mar 22 15:00:44 minikube dockerd[2228]: time="2020-03-22T15:00:44.281514631Z" level=info msg="Loading containers: start."
Mar 22 15:00:48 minikube dockerd[2228]: time="2020-03-22T15:00:48.176446471Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Mar 22 15:00:50 minikube dockerd[2228]: time="2020-03-22T15:00:50.937052255Z" level=info msg="Loading containers: done."
Mar 22 15:00:51 minikube dockerd[2228]: time="2020-03-22T15:00:51.991129110Z" level=info msg="Docker daemon" commit=369ce74a3c graphdriver(s)=overlay2 version=19.03.6
Mar 22 15:00:51 minikube dockerd[2228]: time="2020-03-22T15:00:51.991194536Z" level=info msg="Daemon has completed initialization"
Mar 22 15:00:52 minikube dockerd[2228]: time="2020-03-22T15:00:52.610576555Z" level=info msg="API listen on [::]:2376"
Mar 22 15:00:52 minikube systemd[1]: Started Docker Application Container Engine.
Mar 22 15:00:52 minikube dockerd[2228]: time="2020-03-22T15:00:52.611974424Z" level=info msg="API listen on /var/run/docker.sock"
Mar 22 15:01:38 minikube dockerd[2228]: time="2020-03-22T15:01:38.695102547Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/83389c771bd432e30ddabec90d4167157f03b950961a8553deadcc566c11dde5/shim.sock" debug=false pid=3228
Mar 22 15:01:42 minikube dockerd[2228]: time="2020-03-22T15:01:42.284303387Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/23aa1aa1349c4850f3a04fde4a579fecf0646130e0bb8c053bbe55d1d3459a95/shim.sock" debug=false pid=3300
Mar 22 15:01:47 minikube dockerd[2228]: time="2020-03-22T15:01:47.370284427Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/cf05dbe78fb086e910581579f3721eafb6afbfb7c0604b20ec05d575b4241f61/shim.sock" debug=false pid=3362
Mar 22 15:01:51 minikube dockerd[2228]: time="2020-03-22T15:01:51.944545922Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/78cb0318ae894f15dc8a572f4336cfbbd468baa1feffa9dd7a98bb587b67e04f/shim.sock" debug=false pid=3421
Mar 22 15:01:55 minikube dockerd[2228]: time="2020-03-22T15:01:55.186169278Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/94c541bb4f91a75b97d3d30f348ad4a9c0e1f55fbd05dbce7b0a75d33d379b7b/shim.sock" debug=false pid=3560
Mar 22 15:01:55 minikube dockerd[2228]: time="2020-03-22T15:01:55.559173445Z" level=error msg="Handler for GET /containers/78cb0318ae894f15dc8a572f4336cfbbd468baa1feffa9dd7a98bb587b67e04f/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
Mar 22 15:01:55 minikube dockerd[2228]: time="2020-03-22T15:01:55.559328325Z" level=error msg="Handler for GET /containers/78cb0318ae894f15dc8a572f4336cfbbd468baa1feffa9dd7a98bb587b67e04f/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
Mar 22 15:01:55 minikube dockerd[2228]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
Mar 22 15:01:55 minikube dockerd[2228]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
Mar 22 15:02:28 minikube dockerd[2228]: time="2020-03-22T15:02:28.094809793Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/300cbe21b0b31dd0fddfc1464500579bc96296a21841686bcd5f4a728b3c492e/shim.sock" debug=false pid=4054
Mar 22 15:02:30 minikube dockerd[2228]: time="2020-03-22T15:02:30.881344641Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/76ab1aba18a09b6fac561f09200a97995c9e052a6a47219be2d78c93bef31ead/shim.sock" debug=false pid=4121
Mar 22 15:02:31 minikube dockerd[2228]: time="2020-03-22T15:02:31.716068649Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3cd0e33e8e49eb8f63e5f28baca4199ee65c79fbd90847c40a2931afc6d30e94/shim.sock" debug=false pid=4188
Mar 22 15:03:49 minikube dockerd[2228]: time="2020-03-22T15:03:49.781546434Z" level=info msg="shim reaped" id=76ab1aba18a09b6fac561f09200a97995c9e052a6a47219be2d78c93bef31ead
Mar 22 15:03:49 minikube dockerd[2228]: time="2020-03-22T15:03:49.791739720Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 22 15:04:02 minikube dockerd[2228]: time="2020-03-22T15:04:02.461099766Z" level=info msg="shim reaped" id=3cd0e33e8e49eb8f63e5f28baca4199ee65c79fbd90847c40a2931afc6d30e94
Mar 22 15:04:02 minikube dockerd[2228]: time="2020-03-22T15:04:02.471284899Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 22 15:04:06 minikube dockerd[2228]: time="2020-03-22T15:04:06.613240856Z" level=info msg="shim reaped" id=300cbe21b0b31dd0fddfc1464500579bc96296a21841686bcd5f4a728b3c492e
Mar 22 15:04:06 minikube dockerd[2228]: time="2020-03-22T15:04:06.623446571Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 22 15:04:11 minikube dockerd[2228]: time="2020-03-22T15:04:11.886936382Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1ce6d77c9d6fcd88633f170f7d0a3191415bba5a3d10778f30500cfd8afa1ef9/shim.sock" debug=false pid=4648
Mar 22 15:04:19 minikube dockerd[2228]: time="2020-03-22T15:04:19.514043510Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5be312f51ab9ea24be6d96a61fac75c7769f7a5446bfebf94f154dae1de3d67c/shim.sock" debug=false pid=4702
Mar 22 15:04:22 minikube dockerd[2228]: time="2020-03-22T15:04:22.486829471Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/446b2f8a11f718a92ee60760e5181c11d1c2e7a0641e6778ad490394d376b8d0/shim.sock" debug=false pid=4741
Mar 22 15:05:05 minikube dockerd[2228]: time="2020-03-22T15:05:05.921368512Z" level=info msg="shim reaped" id=1ce6d77c9d6fcd88633f170f7d0a3191415bba5a3d10778f30500cfd8afa1ef9
Mar 22 15:05:05 minikube dockerd[2228]: time="2020-03-22T15:05:05.931625180Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 22 15:05:35 minikube dockerd[2228]: time="2020-03-22T15:05:35.154737055Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/88ee17a092b7ccb36b4aeab2c70b7fa9b8e63cede97fe5a704867ad0e3c0281d/shim.sock" debug=false pid=5161

==> container status <==
CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
88ee17a092b7c       b0f1517c1f4bb       2 minutes ago       Running             kube-controller-manager   2                   78cb0318ae894
446b2f8a11f71       90d27391b7808       3 minutes ago       Running             kube-apiserver            2                   23aa1aa1349c4
5be312f51ab9e       d109c0821a2b9       3 minutes ago       Running             kube-scheduler            2                   cf05dbe78fb08
1ce6d77c9d6fc       b0f1517c1f4bb       3 minutes ago       Exited              kube-controller-manager   1                   78cb0318ae894
3cd0e33e8e49e       d109c0821a2b9       5 minutes ago       Exited              kube-scheduler            1                   cf05dbe78fb08
300cbe21b0b31       90d27391b7808       5 minutes ago       Exited              kube-apiserver            1                   23aa1aa1349c4
94c541bb4f91a       303ce5db0e90d       5 minutes ago       Running             etcd                      0                   83389c771bd43
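
The Exited attempt-1 containers above are presumably where the crash details live; using the IDs from this table (inside the VM via minikube ssh), their logs can be pulled directly, and they correspond to the per-container sections further down:

docker logs 300cbe21b0b31   # kube-apiserver, attempt 1 (Exited)
docker logs 3cd0e33e8e49e   # kube-scheduler, attempt 1 (Exited)
docker logs 1ce6d77c9d6fc   # kube-controller-manager, attempt 1 (Exited)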

==> dmesg <==
[Mar22 14:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
[  +0.024709] Decoding supported only on Scalable MCA processors.
[  +2.379967] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[Mar22 15:00] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
[  +0.002124] systemd-fstab-generator[1138]: Ignoring "noauto" for root device
[  +0.004235] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
[  +0.000002] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[  +0.908128] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[  +2.406254] vboxguest: loading out-of-tree module taints kernel.
[  +0.004033] vboxguest: PCI device not found, probably running on physical hardware.
[  +3.322334] systemd-fstab-generator[1986]: Ignoring "noauto" for root device
[  +0.213877] systemd-fstab-generator[2002]: Ignoring "noauto" for root device
[  +0.229284] systemd-fstab-generator[2018]: Ignoring "noauto" for root device
[  +9.739061] kauditd_printk_skb: 59 callbacks suppressed
[ +33.780626] kauditd_printk_skb: 101 callbacks suppressed
[  +2.019059] systemd-fstab-generator[2431]: Ignoring "noauto" for root device
[  +0.586862] systemd-fstab-generator[2627]: Ignoring "noauto" for root device
[Mar22 15:02] NFSD: Unable to end grace period: -110

==> kernel <==
 15:07:29 up 7 min,  0 users,  load average: 2.71, 2.08, 1.04
Linux minikube 4.19.94 #1 SMP Fri Mar 6 11:41:28 PST 2020 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2019.02.9"

==> kube-apiserver [300cbe21b0b3] <==
Trace[2033014160]: [2.382344981s] [2.381774576s] Object stored in database
I0322 15:04:05.107047       1 dynamic_cafile_content.go:181] Shutting down request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0322 15:04:05.107147       1 controller.go:180] Shutting down kubernetes service endpoint reconciler
I0322 15:04:05.107335       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
I0322 15:04:05.107405       1 controller.go:122] Shutting down OpenAPI controller
I0322 15:04:05.107419       1 apiapproval_controller.go:197] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
I0322 15:04:05.107429       1 dynamic_cafile_content.go:181] Shutting down request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0322 15:04:05.107431       1 nonstructuralschema_controller.go:203] Shutting down NonStructuralSchemaConditionController
I0322 15:04:05.107444       1 establishing_controller.go:84] Shutting down EstablishingController
I0322 15:04:05.107455       1 naming_controller.go:299] Shutting down NamingConditionController
I0322 15:04:05.107462       1 controller.go:87] Shutting down OpenAPI AggregationController
I0322 15:04:05.107466       1 customresource_discovery_controller.go:219] Shutting down DiscoveryController
I0322 15:04:05.107474       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController
I0322 15:04:05.107477       1 autoregister_controller.go:164] Shutting down autoregister controller
I0322 15:04:05.107492       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
I0322 15:04:05.107501       1 crd_finalizer.go:275] Shutting down CRDFinalizer
I0322 15:04:05.107511       1 apiservice_controller.go:106] Shutting down APIServiceRegistrationController
I0322 15:04:05.107520       1 available_controller.go:398] Shutting down AvailableConditionController
I0322 15:04:05.107443       1 dynamic_cafile_content.go:181] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0322 15:04:05.107751       1 dynamic_serving_content.go:144] Shutting down serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
I0322 15:04:05.107766       1 dynamic_cafile_content.go:181] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
E0322 15:04:05.108100       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
E0322 15:04:05.108322       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I0322 15:04:05.108455       1 trace.go:116] Trace[1642533066]: "GuaranteedUpdate etcd3" type:*core.Event (started: 2020-03-22 15:04:02.924420628 +0000 UTC m=+94.745849689) (total time: 2.184011724s):
Trace[1642533066]: [2.184011724s] [2.184011724s] END
E0322 15:04:05.108474       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I0322 15:04:05.108494       1 secure_serving.go:222] Stopped listening on [::]:8443
E0322 15:04:05.108525       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
E0322 15:04:05.110188       1 storage_rbac.go:255] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller: Get https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: unexpected EOF
I0322 15:04:05.110236       1 trace.go:116] Trace[272172082]: "Get" url:/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller,user-agent:kube-apiserver/v1.17.3 (linux/amd64) kubernetes/06ad960,client:127.0.0.1 (started: 2020-03-22 15:04:02.924789563 +0000 UTC m=+94.746218624) (total time: 2.185410941s):
Trace[272172082]: [2.185410941s] [2.185406012s] END
E0322 15:04:05.111613       1 storage_rbac.go:255] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller: Get https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: dial tcp 127.0.0.1:8443: connect: connection refused
I0322 15:04:05.112617       1 trace.go:116] Trace[792396609]: "Patch" url:/api/v1/namespaces/kube-system/events/kube-apiserver-m01.15fea88796bfaf0f,user-agent:kubelet/v1.17.3 (linux/amd64) kubernetes/06ad960,client:127.0.0.1 (started: 2020-03-22 15:04:02.92436304 +0000 UTC m=+94.745792101) (total time: 2.188227566s):
Trace[792396609]: [2.188227566s] [2.188210043s] END
I0322 15:04:05.113651       1 trace.go:116] Trace[166035055]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-controller-manager-m01,user-agent:kubelet/v1.17.3 (linux/amd64) kubernetes/06ad960,client:127.0.0.1 (started: 2020-03-22 15:04:03.884870676 +0000 UTC m=+95.706299737) (total time: 1.228757526s):
Trace[166035055]: [1.228757526s] [1.228747687s] END
E0322 15:04:05.114940       1 storage_rbac.go:255] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller: Get https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: dial tcp 127.0.0.1:8443: connect: connection refused
E0322 15:04:05.115917       1 storage_rbac.go:255] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller: Get https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: dial tcp 127.0.0.1:8443: connect: connection refused
E0322 15:04:05.117007       1 storage_rbac.go:255] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller: Get https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: dial tcp 127.0.0.1:8443: connect: connection refused
E0322 15:04:05.118124       1 storage_rbac.go:255] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller: Get https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: dial tcp 127.0.0.1:8443: connect: connection refused
E0322 15:04:05.119202       1 storage_rbac.go:255] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller: Get https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: dial tcp 127.0.0.1:8443: connect: connection refused
E0322 15:04:05.120289       1 storage_rbac.go:255] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller: Get https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: dial tcp 127.0.0.1:8443: connect: connection refused
E0322 15:04:05.121359       1 storage_rbac.go:255] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller: Get https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: dial tcp 127.0.0.1:8443: connect: connection refused
E0322 15:04:05.122433       1 storage_rbac.go:255] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller: Get https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: dial tcp 127.0.0.1:8443: connect: connection refused
E0322 15:04:05.123517       1 storage_rbac.go:255] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller: Get https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: dial tcp 127.0.0.1:8443: connect: connection refused
E0322 15:04:05.124604       1 storage_rbac.go:284] unable to reconcile role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system: Get https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: dial tcp 127.0.0.1:8443: connect: connection refused
E0322 15:04:05.125740       1 storage_rbac.go:284] unable to reconcile role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system: Get https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: dial tcp 127.0.0.1:8443: connect: connection refused
E0322 15:04:05.126876       1 storage_rbac.go:284] unable to reconcile role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system: Get https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: dial tcp 127.0.0.1:8443: connect: connection refused
E0322 15:04:05.127929       1 storage_rbac.go:284] unable to reconcile role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system: Get https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: dial tcp 127.0.0.1:8443: connect: connection refused
E0322 15:04:05.128969       1 storage_rbac.go:284] unable to reconcile role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system: Get https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: dial tcp 127.0.0.1:8443: connect: connection refused
E0322 15:04:05.130083       1 storage_rbac.go:284] unable to reconcile role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system: Get https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: dial tcp 127.0.0.1:8443: connect: connection refused
E0322 15:04:05.131160       1 storage_rbac.go:284] unable to reconcile role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public: Get https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: dial tcp 127.0.0.1:8443: connect: connection refused
E0322 15:04:05.132240       1 storage_rbac.go:316] unable to reconcile rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system: Get https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: dial tcp 127.0.0.1:8443: connect: connection refused
E0322 15:04:05.133377       1 storage_rbac.go:316] unable to reconcile rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system: Get https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: dial tcp 127.0.0.1:8443: connect: connection refused
E0322 15:04:05.134402       1 storage_rbac.go:316] unable to reconcile rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system: Get https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: dial tcp 127.0.0.1:8443: connect: connection refused
E0322 15:04:05.135495       1 storage_rbac.go:316] unable to reconcile rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system: Get https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: dial tcp 127.0.0.1:8443: connect: connection refused
E0322 15:04:05.136599       1 storage_rbac.go:316] unable to reconcile rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system: Get https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: dial tcp 127.0.0.1:8443: connect: connection refused
E0322 15:04:05.137677       1 storage_rbac.go:316] unable to reconcile rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system: Get https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: dial tcp 127.0.0.1:8443: connect: connection refused
E0322 15:04:05.138817       1 storage_rbac.go:316] unable to reconcile rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public: Get https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: dial tcp 127.0.0.1:8443: connect: connection refused
E0322 15:04:06.561147       1 controller.go:183] StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.39.240, ResourceVersion: 0, AdditionalErrorMsg: 

==> kube-apiserver [446b2f8a11f7] <==
I0322 15:07:27.878298       1 trace.go:116] Trace[159509127]: "GuaranteedUpdate etcd3" type:*core.ServiceAccount (started: 2020-03-22 15:07:25.927823134 +0000 UTC m=+183.340684842) (total time: 1.950447564s):
Trace[159509127]: [1.950436604s] [1.950316969s] Transaction committed
I0322 15:07:27.878372       1 trace.go:116] Trace[2128943406]: "Update" url:/api/v1/namespaces/kube-system/serviceaccounts/service-controller,user-agent:kube-controller-manager/v1.17.3 (linux/amd64) kubernetes/06ad960/tokens-controller,client:127.0.0.1 (started: 2020-03-22 15:07:25.927772128 +0000 UTC m=+183.340633836) (total time: 1.950579543s):
Trace[2128943406]: [1.950546942s] [1.950517256s] Object stored in database
I0322 15:07:27.879293       1 trace.go:116] Trace[1858053111]: "GuaranteedUpdate etcd3" type:*rbac.ClusterRole (started: 2020-03-22 15:07:26.469113869 +0000 UTC m=+183.881975587) (total time: 1.410161356s):
Trace[1858053111]: [1.410108326s] [1.409883604s] Transaction committed
I0322 15:07:27.879364       1 trace.go:116] Trace[288842077]: "Update" url:/apis/rbac.authorization.k8s.io/v1/clusterroles/view,user-agent:kube-controller-manager/v1.17.3 (linux/amd64) kubernetes/06ad960/system:serviceaccount:kube-system:clusterrole-aggregation-controller,client:127.0.0.1 (started: 2020-03-22 15:07:26.468994605 +0000 UTC m=+183.881856314) (total time: 1.410351091s):
Trace[288842077]: [1.410315294s] [1.410240083s] Object stored in database
I0322 15:07:27.880796       1 trace.go:116] Trace[2040771788]: "GuaranteedUpdate etcd3" type:*rbac.ClusterRole (started: 2020-03-22 15:07:26.469402941 +0000 UTC m=+183.882264670) (total time: 1.411374643s):
Trace[2040771788]: [1.411341401s] [1.411032862s] Transaction committed
I0322 15:07:27.880865       1 trace.go:116] Trace[2035450809]: "Update" url:/apis/rbac.authorization.k8s.io/v1/clusterroles/edit,user-agent:kube-controller-manager/v1.17.3 (linux/amd64) kubernetes/06ad960/system:serviceaccount:kube-system:clusterrole-aggregation-controller,client:127.0.0.1 (started: 2020-03-22 15:07:26.469198969 +0000 UTC m=+183.882060697) (total time: 1.411647886s):
Trace[2035450809]: [1.411614083s] [1.411466526s] Object stored in database
I0322 15:07:27.881804       1 trace.go:116] Trace[1899235669]: "GuaranteedUpdate etcd3" type:*core.Node (started: 2020-03-22 15:07:26.506042017 +0000 UTC m=+183.918903725) (total time: 1.375726648s):
Trace[1899235669]: [1.375649073s] [1.370878658s] Transaction committed
I0322 15:07:27.882091       1 trace.go:116] Trace[1904346104]: "Patch" url:/api/v1/nodes/m01,user-agent:kube-controller-manager/v1.17.3 (linux/amd64) kubernetes/06ad960/system:serviceaccount:kube-system:ttl-controller,client:127.0.0.1 (started: 2020-03-22 15:07:26.505973088 +0000 UTC m=+183.918834806) (total time: 1.376086333s):
Trace[1904346104]: [1.375975344s] [1.371470377s] Object stored in database
I0322 15:07:27.881820       1 trace.go:116] Trace[1264828740]: "GuaranteedUpdate etcd3" type:*rbac.ClusterRole (started: 2020-03-22 15:07:26.469529981 +0000 UTC m=+183.882391699) (total time: 1.412273489s):
Trace[1264828740]: [1.412256658s] [1.412138807s] Transaction committed
I0322 15:07:27.882622       1 trace.go:116] Trace[867844394]: "Update" url:/apis/rbac.authorization.k8s.io/v1/clusterroles/admin,user-agent:kube-controller-manager/v1.17.3 (linux/amd64) kubernetes/06ad960/system:serviceaccount:kube-system:clusterrole-aggregation-controller,client:127.0.0.1 (started: 2020-03-22 15:07:26.469048717 +0000 UTC m=+183.881910445) (total time: 1.413543634s):
Trace[867844394]: [1.413460058s] [1.413018328s] Object stored in database
I0322 15:07:27.894102       1 trace.go:116] Trace[1032147056]: "List etcd3" key:/events,resourceVersion:0,limit:500,continue: (started: 2020-03-22 15:07:25.941463028 +0000 UTC m=+183.354324736) (total time: 1.952587872s):
Trace[1032147056]: [1.952587872s] [1.952587872s] END
I0322 15:07:27.894992       1 trace.go:116] Trace[766939830]: "List" url:/apis/events.k8s.io/v1beta1/events,user-agent:kube-controller-manager/v1.17.3 (linux/amd64) kubernetes/06ad960/shared-informers,client:127.0.0.1 (started: 2020-03-22 15:07:25.941452768 +0000 UTC m=+183.354314476) (total time: 1.953506347s):
Trace[766939830]: [1.952801484s] [1.952793919s] Listing from storage done
I0322 15:07:27.895732       1 trace.go:116] Trace[2072812300]: "Create" url:/api/v1/namespaces/default/events,user-agent:kube-controller-manager/v1.17.3 (linux/amd64) kubernetes/06ad960/system:serviceaccount:kube-system:node-controller,client:127.0.0.1 (started: 2020-03-22 15:07:26.539026502 +0000 UTC m=+183.951888220) (total time: 1.356676589s):
Trace[2072812300]: [1.356619472s] [1.356531487s] Object stored in database
I0322 15:07:27.933204       1 trace.go:116] Trace[1698190573]: "GuaranteedUpdate etcd3" type:*coordination.Lease (started: 2020-03-22 15:07:27.242410471 +0000 UTC m=+184.655272219) (total time: 690.760561ms):
Trace[1698190573]: [690.739902ms] [690.540919ms] Transaction committed
I0322 15:07:27.933436       1 trace.go:116] Trace[1472553011]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/m01,user-agent:kubelet/v1.17.3 (linux/amd64) kubernetes/06ad960,client:127.0.0.1 (started: 2020-03-22 15:07:27.242124204 +0000 UTC m=+184.654985952) (total time: 691.284634ms):
Trace[1472553011]: [691.214863ms] [690.970465ms] Object stored in database
I0322 15:07:27.938875       1 trace.go:116] Trace[1211320920]: "List etcd3" key:/resourcequotas/kube-node-lease,resourceVersion:,limit:0,continue: (started: 2020-03-22 15:07:26.024559499 +0000 UTC m=+183.437421207) (total time: 1.914281696s):
Trace[1211320920]: [1.914281696s] [1.914281696s] END
I0322 15:07:27.938999       1 trace.go:116] Trace[811011954]: "List" url:/api/v1/namespaces/kube-node-lease/resourcequotas,user-agent:kube-apiserver/v1.17.3 (linux/amd64) kubernetes/06ad960,client:127.0.0.1 (started: 2020-03-22 15:07:26.024552586 +0000 UTC m=+183.437414294) (total time: 1.914421318s):
Trace[811011954]: [1.914339024s] [1.914335959s] Listing from storage done
I0322 15:07:27.944201       1 trace.go:116] Trace[1756145935]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager,user-agent:kube-controller-manager/v1.17.3 (linux/amd64) kubernetes/06ad960/leader-election,client:127.0.0.1 (started: 2020-03-22 15:07:26.087414513 +0000 UTC m=+183.500276231) (total time: 1.856752228s):
Trace[1756145935]: [1.85672085s] [1.856709068s] About to write a response
I0322 15:07:27.946737       1 trace.go:116] Trace[1967707993]: "GuaranteedUpdate etcd3" type:*core.Node (started: 2020-03-22 15:07:26.539940898 +0000 UTC m=+183.952802616) (total time: 1.406711139s):
Trace[1967707993]: [1.350379459s] [1.349669556s] Transaction committed
I0322 15:07:27.947060       1 trace.go:116] Trace[1337748792]: "Patch" url:/api/v1/nodes/m01,user-agent:kube-controller-manager/v1.17.3 (linux/amd64) kubernetes/06ad960/system:serviceaccount:kube-system:node-controller,client:127.0.0.1 (started: 2020-03-22 15:07:26.539865296 +0000 UTC m=+183.952727004) (total time: 1.407154021s):
Trace[1337748792]: [1.350517458s] [1.349980109s] About to apply patch
I0322 15:07:29.064909       1 trace.go:116] Trace[136138932]: "Create" url:/api/v1/namespaces/kube-node-lease/serviceaccounts,user-agent:kube-controller-manager/v1.17.3 (linux/amd64) kubernetes/06ad960/system:serviceaccount:kube-system:service-account-controller,client:127.0.0.1 (started: 2020-03-22 15:07:26.024205964 +0000 UTC m=+183.437067682) (total time: 3.040667341s):
Trace[136138932]: [3.040630873s] [3.040543038s] Object stored in database
I0322 15:07:29.065806       1 trace.go:116] Trace[891539876]: "List etcd3" key:/services/specs,resourceVersion:,limit:0,continue: (started: 2020-03-22 15:07:27.944796153 +0000 UTC m=+185.357657891) (total time: 1.120980318s):
Trace[891539876]: [1.120980318s] [1.120980318s] END
I0322 15:07:29.065947       1 trace.go:116] Trace[119837963]: "List" url:/api/v1/services,user-agent:kube-apiserver/v1.17.3 (linux/amd64) kubernetes/06ad960,client:127.0.0.1 (started: 2020-03-22 15:07:27.944786595 +0000 UTC m=+185.357648333) (total time: 1.121132944s):
Trace[119837963]: [1.121049878s] [1.121043836s] Listing from storage done
I0322 15:07:29.066190       1 trace.go:116] Trace[1359994759]: "List etcd3" key:/services/specs,resourceVersion:,limit:0,continue: (started: 2020-03-22 15:07:27.943858854 +0000 UTC m=+185.356720592) (total time: 1.122307638s):
Trace[1359994759]: [1.122307638s] [1.122307638s] END
I0322 15:07:29.066287       1 trace.go:116] Trace[1078369069]: "List" url:/api/v1/services,user-agent:kube-apiserver/v1.17.3 (linux/amd64) kubernetes/06ad960,client:127.0.0.1 (started: 2020-03-22 15:07:27.943845689 +0000 UTC m=+185.356707427) (total time: 1.122393179s):
Trace[1078369069]: [1.122358614s] [1.12235137s] Listing from storage done
I0322 15:07:29.066889       1 trace.go:116] Trace[429292902]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:kube-controller-manager/v1.17.3 (linux/amd64) kubernetes/06ad960/leader-election,client:127.0.0.1 (started: 2020-03-22 15:07:27.945184943 +0000 UTC m=+185.358046681) (total time: 1.121680371s):
Trace[429292902]: [1.121649303s] [1.121636909s] About to write a response
I0322 15:07:29.067450       1 trace.go:116] Trace[145576192]: "Get" url:/api/v1/nodes/m01,user-agent:kube-controller-manager/v1.17.3 (linux/amd64) kubernetes/06ad960/system:serviceaccount:kube-system:generic-garbage-collector,client:127.0.0.1 (started: 2020-03-22 15:07:28.20390332 +0000 UTC m=+185.616765048) (total time: 863.522998ms):
Trace[145576192]: [863.464287ms] [863.451563ms] About to write a response
I0322 15:07:29.068149       1 trace.go:116] Trace[1730279724]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler,user-agent:kube-scheduler/v1.17.3 (linux/amd64) kubernetes/06ad960/leader-election,client:127.0.0.1 (started: 2020-03-22 15:07:27.942207704 +0000 UTC m=+185.355069443) (total time: 1.125918907s):
Trace[1730279724]: [1.125887689s] [1.125850129s] About to write a response
I0322 15:07:29.069013       1 trace.go:116] Trace[1722898193]: "GuaranteedUpdate etcd3" type:*rbac.ClusterRole (started: 2020-03-22 15:07:27.94882183 +0000 UTC m=+185.361683568) (total time: 1.120170457s):
Trace[1722898193]: [1.120103932s] [1.119764095s] Transaction committed
I0322 15:07:29.069284       1 trace.go:116] Trace[464997105]: "Update" url:/apis/rbac.authorization.k8s.io/v1/clusterroles/admin,user-agent:kube-controller-manager/v1.17.3 (linux/amd64) kubernetes/06ad960/system:serviceaccount:kube-system:clusterrole-aggregation-controller,client:127.0.0.1 (started: 2020-03-22 15:07:27.948625511 +0000 UTC m=+185.361487240) (total time: 1.120616814s):
Trace[464997105]: [1.120408664s] [1.120280244s] Object stored in database

==> kube-controller-manager [1ce6d77c9d6f] <==
I0322 15:04:12.957394       1 serving.go:312] Generated self-signed cert in-memory
I0322 15:04:13.409945       1 controllermanager.go:161] Version: v1.17.3
I0322 15:04:13.410756       1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0322 15:04:13.410814       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0322 15:04:13.411558       1 secure_serving.go:178] Serving securely on 127.0.0.1:10257
I0322 15:04:13.411636       1 tlsconfig.go:219] Starting DynamicServingCertificateController
I0322 15:04:13.412149       1 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
I0322 15:04:13.412258       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-controller-manager...
E0322 15:04:13.412871       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 127.0.0.1:8443: connect: connection refused
E0322 15:04:15.567643       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 127.0.0.1:8443: connect: connection refused
E0322 15:04:18.159164       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 127.0.0.1:8443: connect: connection refused
E0322 15:04:20.301475       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 127.0.0.1:8443: connect: connection refused
E0322 15:04:26.114183       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "endpoints" in API group "" in the namespace "kube-system"
I0322 15:04:54.093754       1 leaderelection.go:252] successfully acquired lease kube-system/kube-controller-manager
I0322 15:04:54.094092       1 event.go:281] Event(v1.ObjectReference{Kind:"Lease", Namespace:"kube-system", Name:"kube-controller-manager", UID:"1944c905-1340-410c-a4d9-5efe493ea761", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"209", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_184b5b5f-ebc5-4e0a-afe7-c05f9bac1912 became leader
I0322 15:04:54.094126       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"705a545a-7e79-4a06-ab75-a3af7e92e8f1", APIVersion:"v1", ResourceVersion:"207", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_184b5b5f-ebc5-4e0a-afe7-c05f9bac1912 became leader
F0322 15:05:05.864554       1 controllermanager.go:230] error building controller context: failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server ("[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\nhealthz check failed") has prevented the request from succeeding

==> kube-controller-manager [88ee17a092b7] <==
I0322 15:07:09.121181       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints
I0322 15:07:09.121202       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for controllerrevisions.apps
I0322 15:07:09.121288       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
W0322 15:07:09.121302       1 shared_informer.go:415] resyncPeriod 55983731018539 is smaller than resyncCheckPeriod 68495811866177 and the informer has already started. Changing it to 68495811866177
I0322 15:07:09.121376       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts
I0322 15:07:09.121431       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
I0322 15:07:09.121508       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges
I0322 15:07:09.121528       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.apps
I0322 15:07:09.122505       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch
I0322 15:07:09.122911       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
I0322 15:07:09.122938       1 controllermanager.go:533] Started "resourcequota"
I0322 15:07:09.124099       1 resource_quota_controller.go:271] Starting resource quota controller
I0322 15:07:09.124116       1 shared_informer.go:197] Waiting for caches to sync for resource quota
I0322 15:07:09.124241       1 resource_quota_monitor.go:303] QuotaMonitor running
I0322 15:07:17.750311       1 garbagecollector.go:129] Starting garbage collector controller
I0322 15:07:17.750327       1 shared_informer.go:197] Waiting for caches to sync for garbage collector
I0322 15:07:17.750347       1 graph_builder.go:282] GraphBuilder running
I0322 15:07:17.750700       1 controllermanager.go:533] Started "garbagecollector"
I0322 15:07:24.086323       1 controllermanager.go:533] Started "daemonset"
I0322 15:07:24.086538       1 daemon_controller.go:255] Starting daemon sets controller
I0322 15:07:24.086553       1 shared_informer.go:197] Waiting for caches to sync for daemon sets
E0322 15:07:25.928841       1 core.go:91] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0322 15:07:25.928856       1 controllermanager.go:525] Skipping "service"
I0322 15:07:25.929080       1 shared_informer.go:197] Waiting for caches to sync for resource quota
I0322 15:07:25.932476       1 shared_informer.go:197] Waiting for caches to sync for garbage collector
I0322 15:07:25.975457       1 shared_informer.go:204] Caches are synced for HPA 
I0322 15:07:25.985972       1 shared_informer.go:204] Caches are synced for job 
I0322 15:07:26.004710       1 shared_informer.go:204] Caches are synced for ReplicaSet 
I0322 15:07:26.012605       1 shared_informer.go:204] Caches are synced for namespace 
I0322 15:07:26.018696       1 shared_informer.go:204] Caches are synced for certificate-csrsigning 
I0322 15:07:26.023363       1 shared_informer.go:204] Caches are synced for service account 
I0322 15:07:26.025809       1 shared_informer.go:204] Caches are synced for certificate-csrapproving 
I0322 15:07:26.035696       1 shared_informer.go:204] Caches are synced for PV protection 
I0322 15:07:26.045328       1 shared_informer.go:204] Caches are synced for deployment 
I0322 15:07:26.048843       1 shared_informer.go:204] Caches are synced for PVC protection 
I0322 15:07:26.062906       1 shared_informer.go:204] Caches are synced for expand 
I0322 15:07:26.065769       1 shared_informer.go:204] Caches are synced for stateful set 
I0322 15:07:26.115893       1 shared_informer.go:204] Caches are synced for bootstrap_signer 
I0322 15:07:26.345909       1 shared_informer.go:204] Caches are synced for endpoint 
I0322 15:07:26.383044       1 shared_informer.go:204] Caches are synced for ReplicationController 
I0322 15:07:26.454966       1 shared_informer.go:204] Caches are synced for disruption 
I0322 15:07:26.455062       1 disruption.go:338] Sending events to api server.
I0322 15:07:26.467718       1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator 
W0322 15:07:26.480996       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="m01" does not exist
I0322 15:07:26.486718       1 shared_informer.go:204] Caches are synced for daemon sets 
I0322 15:07:26.487505       1 shared_informer.go:204] Caches are synced for attach detach 
I0322 15:07:26.504528       1 shared_informer.go:204] Caches are synced for TTL 
I0322 15:07:26.509675       1 shared_informer.go:204] Caches are synced for persistent volume 
I0322 15:07:26.537038       1 shared_informer.go:204] Caches are synced for taint 
I0322 15:07:26.537502       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
W0322 15:07:26.537781       1 node_lifecycle_controller.go:1058] Missing timestamp for Node m01. Assuming now as a timestamp.
I0322 15:07:26.538326       1 node_lifecycle_controller.go:1259] Controller detected that zone  is now in state Normal.
I0322 15:07:26.538034       1 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"m01", UID:"9b5937b1-f846-43e1-983e-ca25e495d7b7", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node m01 event: Registered Node m01 in Controller
I0322 15:07:26.538253       1 taint_manager.go:186] Starting NoExecuteTaintManager
I0322 15:07:26.564138       1 shared_informer.go:204] Caches are synced for GC 
I0322 15:07:27.924324       1 shared_informer.go:204] Caches are synced for resource quota 
I0322 15:07:27.929222       1 shared_informer.go:204] Caches are synced for resource quota 
I0322 15:07:27.932636       1 shared_informer.go:204] Caches are synced for garbage collector 
I0322 15:07:27.950558       1 shared_informer.go:204] Caches are synced for garbage collector 
I0322 15:07:27.950723       1 garbagecollector.go:138] Garbage collector: all resource monitors have synced. Proceeding to collect garbage

==> kube-scheduler [3cd0e33e8e49] <==
E0322 15:03:36.785603       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0322 15:03:36.786640       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0322 15:03:36.787561       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0322 15:03:36.788541       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0322 15:03:36.790575       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0322 15:03:36.794489       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0322 15:03:36.795455       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0322 15:03:36.799623       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0322 15:03:37.781758       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0322 15:03:37.782648       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:03:37.783621       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0322 15:03:37.784878       1 reflector.go:153] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0322 15:03:37.786741       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0322 15:03:37.787563       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0322 15:03:37.788606       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0322 15:03:37.789697       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0322 15:03:37.791308       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0322 15:03:37.795400       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0322 15:03:37.796286       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0322 15:03:37.800566       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0322 15:03:38.783017       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0322 15:03:38.783925       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:03:38.784835       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0322 15:03:38.785952       1 reflector.go:153] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0322 15:03:38.787678       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0322 15:03:38.788723       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0322 15:03:38.789700       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0322 15:03:38.790779       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0322 15:03:38.792306       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0322 15:03:38.796479       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0322 15:03:38.797637       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0322 15:03:38.801503       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0322 15:03:39.784522       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0322 15:03:39.785387       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0322 15:03:40.786597       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-scheduler...
E0322 15:03:40.787605       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0322 15:03:41.772033       1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
E0322 15:03:41.789251       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:03:42.790640       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:03:43.791970       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:03:44.793452       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:03:45.794850       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:03:46.796450       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:03:47.797790       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:03:48.799170       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:03:49.800284       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:03:50.801545       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:03:51.803243       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:03:52.804646       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:03:53.806242       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:03:54.807374       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:03:55.808815       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:03:56.810196       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:03:57.811736       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:03:58.813313       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:03:59.814561       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:04:00.815741       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:04:01.817134       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0322 15:04:02.412601       1 leaderelection.go:288] failed to renew lease kube-system/kube-scheduler: failed to tryAcquireOrRenew context deadline exceeded
F0322 15:04:02.412691       1 server.go:257] leaderelection lost

==> kube-scheduler [5be312f51ab9] <==
E0322 15:04:22.335718       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0322 15:04:22.336725       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0322 15:04:26.127144       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0322 15:04:26.129472       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:04:26.136142       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0322 15:04:26.136315       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0322 15:04:26.137238       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0322 15:04:26.137307       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0322 15:04:26.138297       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0322 15:04:26.138341       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0322 15:04:26.138380       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0322 15:04:26.138417       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0322 15:04:26.138455       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0322 15:04:26.138495       1 reflector.go:153] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0322 15:04:27.132900       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0322 15:04:27.223527       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-scheduler...
E0322 15:04:28.134851       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:04:29.136429       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:04:30.137826       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:04:31.138844       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:04:32.139887       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:04:33.141330       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:04:34.142702       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:04:35.144172       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:04:36.145569       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:04:37.146703       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:04:38.148109       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:04:39.149922       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:04:40.151355       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:04:41.152469       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:04:42.154305       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:04:43.155688       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:04:44.157803       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:04:45.159462       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:04:46.160738       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:04:47.162072       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:04:48.163602       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:04:49.164844       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:04:50.166183       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0322 15:04:50.977424       1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
E0322 15:04:51.167510       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:04:52.168873       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:04:53.170808       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:04:54.172142       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:04:55.173289       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:04:56.175177       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:04:57.176387       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:04:58.177617       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:04:59.179167       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:05:00.180360       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:05:01.181976       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:05:02.183953       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:05:03.185444       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:05:04.187071       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:05:05.188519       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:05:06.189456       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:05:07.190501       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:05:08.191634       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0322 15:05:09.192767       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0322 15:05:10.222625       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 

==> kubelet <==
-- Logs begin at Sun 2020-03-22 15:00:03 UTC, end at Sun 2020-03-22 15:07:29 UTC. --
Mar 22 15:04:14 minikube kubelet[3455]: E0322 15:04:14.407601    3455 kuberuntime_manager.go:955] getPodContainerStatuses for pod "kube-scheduler-m01_kube-system(e3025acd90e7465e66fa19c71b916366)" failed: rpc error: code = Unknown desc = Error: No such container: 5be312f51ab9ea24be6d96a61fac75c7769f7a5446bfebf94f154dae1de3d67c
Mar 22 15:04:14 minikube kubelet[3455]: E0322 15:04:14.475607    3455 kubelet_node_status.go:402] Error updating node status, will retry: error getting node "m01": Get https://localhost:8443/api/v1/nodes/m01?resourceVersion=0&timeout=10s: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:14 minikube kubelet[3455]: E0322 15:04:14.476252    3455 kubelet_node_status.go:402] Error updating node status, will retry: error getting node "m01": Get https://localhost:8443/api/v1/nodes/m01?timeout=10s: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:14 minikube kubelet[3455]: E0322 15:04:14.476614    3455 kubelet_node_status.go:402] Error updating node status, will retry: error getting node "m01": Get https://localhost:8443/api/v1/nodes/m01?timeout=10s: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:14 minikube kubelet[3455]: E0322 15:04:14.476973    3455 kubelet_node_status.go:402] Error updating node status, will retry: error getting node "m01": Get https://localhost:8443/api/v1/nodes/m01?timeout=10s: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:14 minikube kubelet[3455]: E0322 15:04:14.477307    3455 kubelet_node_status.go:402] Error updating node status, will retry: error getting node "m01": Get https://localhost:8443/api/v1/nodes/m01?timeout=10s: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:14 minikube kubelet[3455]: E0322 15:04:14.477443    3455 kubelet_node_status.go:389] Unable to update node status: update node status exceeds retry count
Mar 22 15:04:15 minikube kubelet[3455]: E0322 15:04:15.051583    3455 event.go:272] Unable to write event: 'Patch https://localhost:8443/api/v1/namespaces/kube-system/events/kube-apiserver-m01.15fea88796bfaf0f: dial tcp 127.0.0.1:8443: connect: connection refused' (may retry after sleeping)
Mar 22 15:04:15 minikube kubelet[3455]: E0322 15:04:15.118083    3455 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.RuntimeClass: Get https://localhost:8443/apis/node.k8s.io/v1beta1/runtimeclasses?allowWatchBookmarks=true&resourceVersion=1&timeout=5m31s&timeoutSeconds=331&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:15 minikube kubelet[3455]: E0322 15:04:15.118876    3455 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.CSIDriver: Get https://localhost:8443/apis/storage.k8s.io/v1beta1/csidrivers?allowWatchBookmarks=true&resourceVersion=1&timeout=6m46s&timeoutSeconds=406&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:15 minikube kubelet[3455]: E0322 15:04:15.119994    3455 reflector.go:307] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to watch *v1.Service: Get https://localhost:8443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1&timeoutSeconds=389&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:15 minikube kubelet[3455]: E0322 15:04:15.121073    3455 reflector.go:307] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to watch *v1.Node: Get https://localhost:8443/api/v1/nodes?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dm01&resourceVersion=63&timeoutSeconds=331&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:15 minikube kubelet[3455]: E0322 15:04:15.122155    3455 reflector.go:307] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: Get https://localhost:8443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=spec.nodeName%3Dm01&resourceVersion=166&timeoutSeconds=361&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:15 minikube kubelet[3455]: W0322 15:04:15.422480    3455 status_manager.go:530] Failed to get status for pod "kube-controller-manager-m01_kube-system(67b7e5352c5d7693f9bfac40cd9df88f)": Get https://localhost:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-m01: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:15 minikube kubelet[3455]: E0322 15:04:15.428290    3455 remote_runtime.go:295] ContainerStatus "446b2f8a11f718a92ee60760e5181c11d1c2e7a0641e6778ad490394d376b8d0" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 446b2f8a11f718a92ee60760e5181c11d1c2e7a0641e6778ad490394d376b8d0
Mar 22 15:04:15 minikube kubelet[3455]: E0322 15:04:15.428339    3455 kuberuntime_manager.go:955] getPodContainerStatuses for pod "kube-apiserver-m01_kube-system(7b7b36a9e42f4ed8ff7eb0c2274453d3)" failed: rpc error: code = Unknown desc = Error: No such container: 446b2f8a11f718a92ee60760e5181c11d1c2e7a0641e6778ad490394d376b8d0
Mar 22 15:04:16 minikube kubelet[3455]: E0322 15:04:16.118618    3455 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.RuntimeClass: Get https://localhost:8443/apis/node.k8s.io/v1beta1/runtimeclasses?allowWatchBookmarks=true&resourceVersion=1&timeout=7m58s&timeoutSeconds=478&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:16 minikube kubelet[3455]: E0322 15:04:16.119643    3455 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.CSIDriver: Get https://localhost:8443/apis/storage.k8s.io/v1beta1/csidrivers?allowWatchBookmarks=true&resourceVersion=1&timeout=9m31s&timeoutSeconds=571&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:16 minikube kubelet[3455]: E0322 15:04:16.120579    3455 reflector.go:307] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to watch *v1.Service: Get https://localhost:8443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1&timeoutSeconds=535&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:16 minikube kubelet[3455]: E0322 15:04:16.121674    3455 reflector.go:307] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to watch *v1.Node: Get https://localhost:8443/api/v1/nodes?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dm01&resourceVersion=63&timeoutSeconds=480&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:16 minikube kubelet[3455]: E0322 15:04:16.122745    3455 reflector.go:307] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: Get https://localhost:8443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=spec.nodeName%3Dm01&resourceVersion=166&timeoutSeconds=356&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:16 minikube kubelet[3455]: E0322 15:04:16.747582    3455 controller.go:135] failed to ensure node lease exists, will retry in 6.4s, error: Get https://localhost:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/m01?timeout=10s: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:17 minikube kubelet[3455]: E0322 15:04:17.119455    3455 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.RuntimeClass: Get https://localhost:8443/apis/node.k8s.io/v1beta1/runtimeclasses?allowWatchBookmarks=true&resourceVersion=1&timeout=5m44s&timeoutSeconds=344&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:17 minikube kubelet[3455]: E0322 15:04:17.120165    3455 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.CSIDriver: Get https://localhost:8443/apis/storage.k8s.io/v1beta1/csidrivers?allowWatchBookmarks=true&resourceVersion=1&timeout=7m29s&timeoutSeconds=449&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:17 minikube kubelet[3455]: E0322 15:04:17.121141    3455 reflector.go:307] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to watch *v1.Service: Get https://localhost:8443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1&timeoutSeconds=413&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:17 minikube kubelet[3455]: E0322 15:04:17.122156    3455 reflector.go:307] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to watch *v1.Node: Get https://localhost:8443/api/v1/nodes?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dm01&resourceVersion=63&timeoutSeconds=337&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:17 minikube kubelet[3455]: E0322 15:04:17.123371    3455 reflector.go:307] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: Get https://localhost:8443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=spec.nodeName%3Dm01&resourceVersion=166&timeoutSeconds=392&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:18 minikube kubelet[3455]: E0322 15:04:18.120313    3455 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.RuntimeClass: Get https://localhost:8443/apis/node.k8s.io/v1beta1/runtimeclasses?allowWatchBookmarks=true&resourceVersion=1&timeout=6m28s&timeoutSeconds=388&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:18 minikube kubelet[3455]: E0322 15:04:18.121007    3455 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.CSIDriver: Get https://localhost:8443/apis/storage.k8s.io/v1beta1/csidrivers?allowWatchBookmarks=true&resourceVersion=1&timeout=9m57s&timeoutSeconds=597&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:18 minikube kubelet[3455]: E0322 15:04:18.122014    3455 reflector.go:307] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to watch *v1.Service: Get https://localhost:8443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1&timeoutSeconds=597&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:18 minikube kubelet[3455]: E0322 15:04:18.123340    3455 reflector.go:307] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to watch *v1.Node: Get https://localhost:8443/api/v1/nodes?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dm01&resourceVersion=63&timeoutSeconds=516&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:18 minikube kubelet[3455]: E0322 15:04:18.124189    3455 reflector.go:307] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: Get https://localhost:8443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=spec.nodeName%3Dm01&resourceVersion=166&timeoutSeconds=517&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:19 minikube kubelet[3455]: E0322 15:04:19.120993    3455 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.RuntimeClass: Get https://localhost:8443/apis/node.k8s.io/v1beta1/runtimeclasses?allowWatchBookmarks=true&resourceVersion=1&timeout=7m37s&timeoutSeconds=457&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:19 minikube kubelet[3455]: E0322 15:04:19.121864    3455 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.CSIDriver: Get https://localhost:8443/apis/storage.k8s.io/v1beta1/csidrivers?allowWatchBookmarks=true&resourceVersion=1&timeout=5m42s&timeoutSeconds=342&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:19 minikube kubelet[3455]: E0322 15:04:19.122849    3455 reflector.go:307] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to watch *v1.Service: Get https://localhost:8443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1&timeoutSeconds=371&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:19 minikube kubelet[3455]: E0322 15:04:19.123892    3455 reflector.go:307] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to watch *v1.Node: Get https://localhost:8443/api/v1/nodes?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dm01&resourceVersion=63&timeoutSeconds=542&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:19 minikube kubelet[3455]: E0322 15:04:19.124987    3455 reflector.go:307] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: Get https://localhost:8443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=spec.nodeName%3Dm01&resourceVersion=166&timeoutSeconds=554&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:20 minikube kubelet[3455]: E0322 15:04:20.121703    3455 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.RuntimeClass: Get https://localhost:8443/apis/node.k8s.io/v1beta1/runtimeclasses?allowWatchBookmarks=true&resourceVersion=1&timeout=7m32s&timeoutSeconds=452&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:20 minikube kubelet[3455]: E0322 15:04:20.122385    3455 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.CSIDriver: Get https://localhost:8443/apis/storage.k8s.io/v1beta1/csidrivers?allowWatchBookmarks=true&resourceVersion=1&timeout=7m0s&timeoutSeconds=420&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:20 minikube kubelet[3455]: E0322 15:04:20.123490    3455 reflector.go:307] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to watch *v1.Service: Get https://localhost:8443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1&timeoutSeconds=422&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:20 minikube kubelet[3455]: E0322 15:04:20.124670    3455 reflector.go:307] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to watch *v1.Node: Get https://localhost:8443/api/v1/nodes?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dm01&resourceVersion=63&timeoutSeconds=487&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:20 minikube kubelet[3455]: E0322 15:04:20.125566    3455 reflector.go:307] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: Get https://localhost:8443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=spec.nodeName%3Dm01&resourceVersion=166&timeoutSeconds=591&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:21 minikube kubelet[3455]: E0322 15:04:21.122181    3455 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.RuntimeClass: Get https://localhost:8443/apis/node.k8s.io/v1beta1/runtimeclasses?allowWatchBookmarks=true&resourceVersion=1&timeout=6m19s&timeoutSeconds=379&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:21 minikube kubelet[3455]: E0322 15:04:21.123161    3455 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.CSIDriver: Get https://localhost:8443/apis/storage.k8s.io/v1beta1/csidrivers?allowWatchBookmarks=true&resourceVersion=1&timeout=9m53s&timeoutSeconds=593&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:21 minikube kubelet[3455]: E0322 15:04:21.124175    3455 reflector.go:307] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to watch *v1.Service: Get https://localhost:8443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1&timeoutSeconds=454&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:21 minikube kubelet[3455]: E0322 15:04:21.125238    3455 reflector.go:307] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to watch *v1.Node: Get https://localhost:8443/api/v1/nodes?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dm01&resourceVersion=63&timeoutSeconds=513&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:21 minikube kubelet[3455]: E0322 15:04:21.126288    3455 reflector.go:307] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: Get https://localhost:8443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=spec.nodeName%3Dm01&resourceVersion=166&timeoutSeconds=550&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:21 minikube kubelet[3455]: W0322 15:04:21.789183    3455 pod_container_deletor.go:75] Container "4a0947f70b041308d1476b57972bcd821b94c12f9822a91a7973ba574eb7524a" not found in pod's containers
Mar 22 15:04:21 minikube kubelet[3455]: W0322 15:04:21.789359    3455 status_manager.go:530] Failed to get status for pod "kube-controller-manager-m01_kube-system(67b7e5352c5d7693f9bfac40cd9df88f)": Get https://localhost:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-m01: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:22 minikube kubelet[3455]: E0322 15:04:22.122743    3455 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.RuntimeClass: Get https://localhost:8443/apis/node.k8s.io/v1beta1/runtimeclasses?allowWatchBookmarks=true&resourceVersion=1&timeout=8m35s&timeoutSeconds=515&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:22 minikube kubelet[3455]: E0322 15:04:22.123543    3455 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.CSIDriver: Get https://localhost:8443/apis/storage.k8s.io/v1beta1/csidrivers?allowWatchBookmarks=true&resourceVersion=1&timeout=6m14s&timeoutSeconds=374&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:22 minikube kubelet[3455]: E0322 15:04:22.124553    3455 reflector.go:307] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to watch *v1.Service: Get https://localhost:8443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1&timeoutSeconds=481&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:22 minikube kubelet[3455]: E0322 15:04:22.125696    3455 reflector.go:307] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to watch *v1.Node: Get https://localhost:8443/api/v1/nodes?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dm01&resourceVersion=63&timeoutSeconds=498&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:22 minikube kubelet[3455]: E0322 15:04:22.126740    3455 reflector.go:307] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: Get https://localhost:8443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=spec.nodeName%3Dm01&resourceVersion=166&timeoutSeconds=317&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 22 15:04:23 minikube kubelet[3455]: W0322 15:04:23.521759    3455 pod_container_deletor.go:75] Container "c9fabe85d712c88ed9222e7d83ea48e6788f69276a2b49bfc96724193c6f7e5d" not found in pod's containers
Mar 22 15:04:38 minikube kubelet[3455]: W0322 15:04:38.550843    3455 prober.go:108] No ref for container "docker://94c541bb4f91a75b97d3d30f348ad4a9c0e1f55fbd05dbce7b0a75d33d379b7b" (etcd-m01_kube-system(05286260ff435c0171122ce14f3ab37b):etcd)
Mar 22 15:05:12 minikube kubelet[3455]: E0322 15:05:12.753452    3455 pod_workers.go:191] Error syncing pod 67b7e5352c5d7693f9bfac40cd9df88f ("kube-controller-manager-m01_kube-system(67b7e5352c5d7693f9bfac40cd9df88f)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-m01_kube-system(67b7e5352c5d7693f9bfac40cd9df88f)"
Mar 22 15:05:25 minikube kubelet[3455]: E0322 15:05:25.814719    3455 remote_runtime.go:295] ContainerStatus "88ee17a092b7ccb36b4aeab2c70b7fa9b8e63cede97fe5a704867ad0e3c0281d" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 88ee17a092b7ccb36b4aeab2c70b7fa9b8e63cede97fe5a704867ad0e3c0281d
Mar 22 15:05:25 minikube kubelet[3455]: E0322 15:05:25.814781    3455 kuberuntime_manager.go:955] getPodContainerStatuses for pod "kube-controller-manager-m01_kube-system(67b7e5352c5d7693f9bfac40cd9df88f)" failed: rpc error: code = Unknown desc = Error: No such container: 88ee17a092b7ccb36b4aeab2c70b7fa9b8e63cede97fe5a704867ad0e3c0281d
Mar 22 15:07:18 minikube kubelet[3455]: W0322 15:07:18.550689    3455 prober.go:108] No ref for container "docker://94c541bb4f91a75b97d3d30f348ad4a9c0e1f55fbd05dbce7b0a75d33d379b7b" (etcd-m01_kube-system(05286260ff435c0171122ce14f3ab37b):etcd)

The operating system version: Arch Linux. Linux snowden 5.5.10-arch1-1 #1 SMP PREEMPT Wed, 18 Mar 2020 08:40:35 +0000 x86_64 GNU/Linux

tstromberg commented 4 years ago

Do you mind checking whether upgrading to minikube v1.9.2 fixes the issue? You may need to run minikube delete first to clear out the corrupted state.
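Something along these lines should do it (the exact upgrade step depends on how minikube was installed, so treat the middle line as a placeholder):

minikube delete
# upgrade minikube to v1.9.2 via your package manager or the release binary
minikube start --driver kvm2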

deej-io commented 4 years ago

Hi. Thank you for getting back to me.

I've updated to v1.9.2 from the Arch repos and continue to see similar issues:

😄  minikube v1.9.2 on Arch rolling
✨  Using the kvm2 driver based on user configuration
👍  Starting control plane node m01 in cluster minikube
🔥  Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.18.0 on Docker 19.03.8 ...
💥  initialization failed, will try again: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [192.168.39.139 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [192.168.39.139 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

    Unfortunately, an error has occurred:
        timed out waiting for the condition

    This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

    Additionally, a control plane component may have crashed or exited when started by the container runtime.
    To troubleshoot, list all containers using your preferred container runtimes CLI.

    Here is one example how you may list all Kubernetes containers running in docker:
        - 'docker ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'docker logs CONTAINERID'

stderr:
W0429 21:38:22.966334    2476 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0429 21:38:25.933310    2476 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0429 21:38:25.934837    2476 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
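For reference, the checks kubeadm suggests above can also be run from the host by wrapping them in minikube ssh, i.e. something like:

minikube ssh "sudo systemctl status kubelet"
minikube ssh "sudo journalctl -xeu kubelet"
minikube ssh "docker ps -a | grep kube | grep -v pause"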

Output from minikube logs:

==> Docker <==
-- Logs begin at Wed 2020-04-29 21:37:30 UTC, end at Wed 2020-04-29 21:45:47 UTC. --
Apr 29 21:38:20 minikube dockerd[2185]: time="2020-04-29T21:38:20.032249257Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Apr 29 21:38:21 minikube dockerd[2185]: time="2020-04-29T21:38:21.003563886Z" level=info msg="Loading containers: done."
Apr 29 21:38:21 minikube dockerd[2185]: time="2020-04-29T21:38:21.373856798Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8
Apr 29 21:38:21 minikube dockerd[2185]: time="2020-04-29T21:38:21.374510194Z" level=info msg="Daemon has completed initialization"
Apr 29 21:38:22 minikube dockerd[2185]: time="2020-04-29T21:38:22.000875316Z" level=info msg="API listen on /var/run/docker.sock"
Apr 29 21:38:22 minikube dockerd[2185]: time="2020-04-29T21:38:22.000954600Z" level=info msg="API listen on [::]:2376"
Apr 29 21:38:22 minikube systemd[1]: Started Docker Application Container Engine.
Apr 29 21:38:56 minikube dockerd[2185]: time="2020-04-29T21:38:56.882619532Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d27eed3852e26bc27f012e4fe6a0df5d841141655077a1609f8b76eb70f03790/shim.sock" debug=false pid=3266
Apr 29 21:39:16 minikube dockerd[2185]: time="2020-04-29T21:39:16.029890016Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f3506c834b0c37904fea7c8e8c7ce3f0fed094ecd8ac4883b6d82ce4d1822f1b/shim.sock" debug=false pid=3400
Apr 29 21:39:18 minikube dockerd[2185]: time="2020-04-29T21:39:18.314195590Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/cd2efbed5d4bb27d299d549940543730c4775532d7070a06de02170a1ee4613a/shim.sock" debug=false pid=3491
Apr 29 21:39:22 minikube dockerd[2185]: time="2020-04-29T21:39:22.296970438Z" level=error msg="Handler for GET /containers/cd2efbed5d4bb27d299d549940543730c4775532d7070a06de02170a1ee4613a/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
Apr 29 21:39:22 minikube dockerd[2185]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
Apr 29 21:39:25 minikube dockerd[2185]: time="2020-04-29T21:39:25.105630811Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8ee497d2b10b3941912e23e1a6bab72abc78e3f6936b3dcf8c4fd2fb27d76772/shim.sock" debug=false pid=3645
Apr 29 21:39:30 minikube dockerd[2185]: time="2020-04-29T21:39:30.782668033Z" level=info msg="shim reaped" id=8ee497d2b10b3941912e23e1a6bab72abc78e3f6936b3dcf8c4fd2fb27d76772
Apr 29 21:39:30 minikube dockerd[2185]: time="2020-04-29T21:39:30.792782319Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 29 21:39:49 minikube dockerd[2185]: time="2020-04-29T21:39:49.755644231Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d26e7a73f510eb54e60bee2fe30cccff8245bd840122b7b807a21f264204f37a/shim.sock" debug=false pid=4063
Apr 29 21:39:52 minikube dockerd[2185]: time="2020-04-29T21:39:52.483488839Z" level=error msg="2b771beddd8e5eeb0d58bd1332f0511c7a45e63327308af768ae4507da595d41 cleanup: failed to delete container from containerd: no such container"
Apr 29 21:39:55 minikube dockerd[2185]: time="2020-04-29T21:39:55.048875629Z" level=error msg="Handler for GET /containers/2b771beddd8e5eeb0d58bd1332f0511c7a45e63327308af768ae4507da595d41/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
Apr 29 21:39:55 minikube dockerd[2185]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
Apr 29 21:39:55 minikube dockerd[2185]: time="2020-04-29T21:39:55.441860431Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/149a2cf94f0c8bd63a58f827dc6622219b433c75b983db4adc5f6712afa99bfc/shim.sock" debug=false pid=4244
Apr 29 21:39:56 minikube dockerd[2185]: time="2020-04-29T21:39:56.965894916Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/473b0f93a5d0f6691c92d273c1dfd4fc16e078ae5a4c6b7a7e6251bf22e437bc/shim.sock" debug=false pid=4285
Apr 29 21:39:59 minikube dockerd[2185]: time="2020-04-29T21:39:59.510175211Z" level=error msg="Handler for GET /containers/473b0f93a5d0f6691c92d273c1dfd4fc16e078ae5a4c6b7a7e6251bf22e437bc/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
Apr 29 21:39:59 minikube dockerd[2185]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
Apr 29 21:40:28 minikube dockerd[2185]: time="2020-04-29T21:40:28.683504104Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/88d4ad283f4055c927860ba9fc7b91a8b6778226f0d0b229ad2fa1707c553c76/shim.sock" debug=false pid=4572
Apr 29 21:41:13 minikube dockerd[2185]: time="2020-04-29T21:41:13.869682213Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ab31b84140009419ec8e77461d6ac001b479c33e97cc4543741ea96703a5083b/shim.sock" debug=false pid=4764
Apr 29 21:41:30 minikube dockerd[2185]: time="2020-04-29T21:41:30.213957521Z" level=info msg="shim reaped" id=149a2cf94f0c8bd63a58f827dc6622219b433c75b983db4adc5f6712afa99bfc
Apr 29 21:41:30 minikube dockerd[2185]: time="2020-04-29T21:41:30.224131564Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 29 21:41:53 minikube dockerd[2185]: time="2020-04-29T21:41:53.247178838Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/eb2b6fe00a3cf30755188441d439826ee0337aab9884fc81e5035222dd35c02b/shim.sock" debug=false pid=4933
Apr 29 21:42:35 minikube dockerd[2185]: time="2020-04-29T21:42:35.686013096Z" level=info msg="shim reaped" id=eb2b6fe00a3cf30755188441d439826ee0337aab9884fc81e5035222dd35c02b
Apr 29 21:42:35 minikube dockerd[2185]: time="2020-04-29T21:42:35.696279427Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 29 21:42:42 minikube dockerd[2185]: time="2020-04-29T21:42:42.562157609Z" level=info msg="shim reaped" id=ab31b84140009419ec8e77461d6ac001b479c33e97cc4543741ea96703a5083b
Apr 29 21:42:42 minikube dockerd[2185]: time="2020-04-29T21:42:42.572515150Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 29 21:42:50 minikube dockerd[2185]: time="2020-04-29T21:42:50.468443422Z" level=info msg="shim reaped" id=88d4ad283f4055c927860ba9fc7b91a8b6778226f0d0b229ad2fa1707c553c76
Apr 29 21:42:50 minikube dockerd[2185]: time="2020-04-29T21:42:50.479143197Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 29 21:42:58 minikube dockerd[2185]: time="2020-04-29T21:42:58.778845340Z" level=info msg="shim reaped" id=473b0f93a5d0f6691c92d273c1dfd4fc16e078ae5a4c6b7a7e6251bf22e437bc
Apr 29 21:42:58 minikube dockerd[2185]: time="2020-04-29T21:42:58.789213067Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 29 21:43:04 minikube dockerd[2185]: time="2020-04-29T21:43:04.405690537Z" level=info msg="shim reaped" id=d26e7a73f510eb54e60bee2fe30cccff8245bd840122b7b807a21f264204f37a
Apr 29 21:43:04 minikube dockerd[2185]: time="2020-04-29T21:43:04.415998191Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 29 21:43:10 minikube dockerd[2185]: time="2020-04-29T21:43:10.877286978Z" level=info msg="shim reaped" id=cd2efbed5d4bb27d299d549940543730c4775532d7070a06de02170a1ee4613a
Apr 29 21:43:10 minikube dockerd[2185]: time="2020-04-29T21:43:10.887623845Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 29 21:43:18 minikube dockerd[2185]: time="2020-04-29T21:43:18.153162291Z" level=info msg="shim reaped" id=f3506c834b0c37904fea7c8e8c7ce3f0fed094ecd8ac4883b6d82ce4d1822f1b
Apr 29 21:43:18 minikube dockerd[2185]: time="2020-04-29T21:43:18.164114950Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 29 21:43:26 minikube dockerd[2185]: time="2020-04-29T21:43:26.314603335Z" level=info msg="shim reaped" id=d27eed3852e26bc27f012e4fe6a0df5d841141655077a1609f8b76eb70f03790
Apr 29 21:43:26 minikube dockerd[2185]: time="2020-04-29T21:43:26.324865651Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 29 21:44:16 minikube dockerd[2185]: time="2020-04-29T21:44:16.526555961Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4bfa6d5a58609acd232038688e71f53cf3d7c28231f95166f4b0338dbeb9383d/shim.sock" debug=false pid=6173
Apr 29 21:44:27 minikube dockerd[2185]: time="2020-04-29T21:44:27.926354816Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9cf1006f17e332ea6f64f79d3089aae50dd317b3b12ec919fd5104ed0dd8d4f4/shim.sock" debug=false pid=6229
Apr 29 21:44:33 minikube dockerd[2185]: time="2020-04-29T21:44:33.100569137Z" level=error msg="Handler for GET /containers/9cf1006f17e332ea6f64f79d3089aae50dd317b3b12ec919fd5104ed0dd8d4f4/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
Apr 29 21:44:33 minikube dockerd[2185]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
Apr 29 21:44:45 minikube dockerd[2185]: time="2020-04-29T21:44:45.154395787Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f35698cb2f225898c6a43e6d25cc03b008397d5c4042a2bde60a8556d893508e/shim.sock" debug=false pid=6320
Apr 29 21:44:51 minikube dockerd[2185]: time="2020-04-29T21:44:51.736488589Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/af248d1f5a56ddbefca8c59e65cc00d2edb97e8b4afdeea51a26aa3b71b44ddc/shim.sock" debug=false pid=6582
Apr 29 21:44:57 minikube dockerd[2185]: time="2020-04-29T21:44:57.131671814Z" level=info msg="shim reaped" id=af248d1f5a56ddbefca8c59e65cc00d2edb97e8b4afdeea51a26aa3b71b44ddc
Apr 29 21:44:57 minikube dockerd[2185]: time="2020-04-29T21:44:57.142006210Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 29 21:45:24 minikube dockerd[2185]: time="2020-04-29T21:45:24.116963232Z" level=error msg="262a9968ddd5133568f3ca6471bc11d73893fd8f2177c1f15513d58f82d743f7 cleanup: failed to delete container from containerd: no such container"
Apr 29 21:45:24 minikube dockerd[2185]: time="2020-04-29T21:45:24.117342780Z" level=error msg="Handler for GET /containers/262a9968ddd5133568f3ca6471bc11d73893fd8f2177c1f15513d58f82d743f7/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
Apr 29 21:45:24 minikube dockerd[2185]: time="2020-04-29T21:45:24.117610233Z" level=error msg="Handler for GET /containers/262a9968ddd5133568f3ca6471bc11d73893fd8f2177c1f15513d58f82d743f7/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
Apr 29 21:45:24 minikube dockerd[2185]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
Apr 29 21:45:24 minikube dockerd[2185]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
Apr 29 21:45:27 minikube dockerd[2185]: time="2020-04-29T21:45:27.888867151Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3bae131c1002d8d3caa725d3df069ed2c130aa4f2b1c6d7e90b3e35fccc25c92/shim.sock" debug=false pid=7255
Apr 29 21:45:28 minikube dockerd[2185]: time="2020-04-29T21:45:28.630357700Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/87019d1a86c3db5ada4e759e25de1c7c20d2c467766f870552ab82119cb76f06/shim.sock" debug=false pid=7322
Apr 29 21:45:30 minikube dockerd[2185]: time="2020-04-29T21:45:30.547154983Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/42687ce0cff5f57ebd8ec0830b0ddfff6b99271427d0513474f3bb8f124a2787/shim.sock" debug=false pid=7385

==> container status <==
CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
87019d1a86c3d       a31f78c7c8ce1       53 seconds ago      Running             kube-scheduler            0                   4bfa6d5a58609
262a9968ddd51       74060cea7f704       53 seconds ago      Created             kube-apiserver            0                   af248d1f5a56d
42687ce0cff5f       d3e55153f52fb       53 seconds ago      Running             kube-controller-manager   0                   9cf1006f17e33
3bae131c1002d       303ce5db0e90d       53 seconds ago      Running             etcd                      0                   f35698cb2f225

==> describe nodes <==
E0429 22:45:47.697019    4756 logs.go:178] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:

stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"

==> dmesg <==
[Apr29 21:37] You have booted with nomodeset. This means your GPU drivers are DISABLED
[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
[  +0.023979] Decoding supported only on Scalable MCA processors.
[  +2.289089] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[  +0.488721] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
[  +0.002237] systemd-fstab-generator[1142]: Ignoring "noauto" for root device
[  +0.004554] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
[  +0.000002] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[  +0.906074] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[  +2.656639] vboxguest: loading out-of-tree module taints kernel.
[  +0.003772] vboxguest: PCI device not found, probably running on physical hardware.
[  +3.359109] systemd-fstab-generator[1993]: Ignoring "noauto" for root device
[ +11.194772] kauditd_printk_skb: 59 callbacks suppressed
[Apr29 21:38] systemd-fstab-generator[2399]: Ignoring "noauto" for root device
[  +0.977346] systemd-fstab-generator[2620]: Ignoring "noauto" for root device
[  +9.166605] kauditd_printk_skb: 107 callbacks suppressed
[Apr29 21:39] NFSD: Unable to end grace period: -110
[Apr29 21:43] systemd-fstab-generator[5536]: Ignoring "noauto" for root device

==> etcd [3bae131c1002] <==
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-04-29 21:45:27.980386 I | etcdmain: etcd Version: 3.4.3
2020-04-29 21:45:27.980418 I | etcdmain: Git SHA: 3cf2f69b5
2020-04-29 21:45:27.980424 I | etcdmain: Go Version: go1.12.12
2020-04-29 21:45:27.980429 I | etcdmain: Go OS/Arch: linux/amd64
2020-04-29 21:45:27.980441 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-04-29 21:45:27.980516 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-04-29 21:45:27.981148 I | embed: name = minikube
2020-04-29 21:45:27.981166 I | embed: data dir = /var/lib/minikube/etcd
2020-04-29 21:45:27.981172 I | embed: member dir = /var/lib/minikube/etcd/member
2020-04-29 21:45:27.981177 I | embed: heartbeat = 100ms
2020-04-29 21:45:27.981181 I | embed: election = 1000ms
2020-04-29 21:45:27.981186 I | embed: snapshot count = 10000
2020-04-29 21:45:27.981194 I | embed: advertise client URLs = https://192.168.39.139:2379
2020-04-29 21:45:35.203748 W | wal: sync duration of 3.296108838s, expected less than 1s
2020-04-29 21:45:41.641844 I | etcdserver: starting member 3cbdd43a8949db2d in cluster 4af51893258ecb17
raft2020/04/29 21:45:41 INFO: 3cbdd43a8949db2d switched to configuration voters=()
raft2020/04/29 21:45:41 INFO: 3cbdd43a8949db2d became follower at term 0
raft2020/04/29 21:45:41 INFO: newRaft 3cbdd43a8949db2d [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
raft2020/04/29 21:45:41 INFO: 3cbdd43a8949db2d became follower at term 1
raft2020/04/29 21:45:41 INFO: 3cbdd43a8949db2d switched to configuration voters=(4376887760750500653)

==> kernel <==
 21:45:47 up 8 min,  0 users,  load average: 3.61, 2.64, 1.37
Linux minikube 4.19.107 #1 SMP Thu Mar 26 11:33:10 PDT 2020 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2019.02.10"

==> kube-apiserver [262a9968ddd5] <==

==> kube-controller-manager [42687ce0cff5] <==
I0429 21:45:30.942120       1 serving.go:313] Generated self-signed cert in-memory
I0429 21:45:31.114556       1 controllermanager.go:161] Version: v1.18.0
I0429 21:45:31.115287       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0429 21:45:31.115368       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0429 21:45:31.115701       1 secure_serving.go:178] Serving securely on 127.0.0.1:10257
I0429 21:45:31.115780       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0429 21:45:31.116425       1 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
I0429 21:45:31.116501       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-controller-manager...
E0429 21:45:31.116828       1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://192.168.39.139:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:31.117060       1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://192.168.39.139:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:33.446972       1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://192.168.39.139:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:36.749658       1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://192.168.39.139:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:39.224461       1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://192.168.39.139:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:41.679726       1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://192.168.39.139:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:45.743630       1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://192.168.39.139:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.139:8443: connect: connection refused

==> kube-scheduler [87019d1a86c3] <==
I0429 21:45:28.761875       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0429 21:45:28.762077       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0429 21:45:29.013855       1 serving.go:313] Generated self-signed cert in-memory
W0429 21:45:29.538886       1 authentication.go:297] Error looking up in-cluster authentication configuration: Get https://192.168.39.139:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 192.168.39.139:8443: connect: connection refused
W0429 21:45:29.538968       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
W0429 21:45:29.539008       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0429 21:45:29.543353       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0429 21:45:29.543372       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
W0429 21:45:29.544477       1 authorization.go:47] Authorization is disabled
W0429 21:45:29.544488       1 authentication.go:40] Authentication is disabled
I0429 21:45:29.544495       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0429 21:45:29.545488       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0429 21:45:29.545502       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
E0429 21:45:29.546066       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: Get https://192.168.39.139:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
I0429 21:45:29.546241       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0429 21:45:29.546314       1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0429 21:45:29.546585       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: Get https://192.168.39.139:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:29.546787       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: Get https://192.168.39.139:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:29.548135       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: Get https://192.168.39.139:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:29.548310       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: Get https://192.168.39.139:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:29.548327       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: Get https://192.168.39.139:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:29.548485       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: Get https://192.168.39.139:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:29.548659       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: Get https://192.168.39.139:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:29.548672       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: Get https://192.168.39.139:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:29.548927       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: Get https://192.168.39.139:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:29.549723       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: Get https://192.168.39.139:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:29.550887       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: Get https://192.168.39.139:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:29.551920       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: Get https://192.168.39.139:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:29.552946       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: Get https://192.168.39.139:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:29.554145       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: Get https://192.168.39.139:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:29.555161       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: Get https://192.168.39.139:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:29.556292       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: Get https://192.168.39.139:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:29.557331       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: Get https://192.168.39.139:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:31.172906       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: Get https://192.168.39.139:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:31.369496       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: Get https://192.168.39.139:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:31.659989       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: Get https://192.168.39.139:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:31.674589       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: Get https://192.168.39.139:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:31.816289       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: Get https://192.168.39.139:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:31.850700       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: Get https://192.168.39.139:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:32.070636       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: Get https://192.168.39.139:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:32.243678       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: Get https://192.168.39.139:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:32.395271       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: Get https://192.168.39.139:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:35.589262       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: Get https://192.168.39.139:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:35.863575       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: Get https://192.168.39.139:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:36.208376       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: Get https://192.168.39.139:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:37.124814       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: Get https://192.168.39.139:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:37.180465       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: Get https://192.168.39.139:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:37.296715       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: Get https://192.168.39.139:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:37.515381       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: Get https://192.168.39.139:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:37.649807       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: Get https://192.168.39.139:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:38.177332       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: Get https://192.168.39.139:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:42.390921       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: Get https://192.168.39.139:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:45.050057       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: Get https://192.168.39.139:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:45.099093       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: Get https://192.168.39.139:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:47.030297       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: Get https://192.168.39.139:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:47.537357       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: Get https://192.168.39.139:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused

==> kubelet <==
-- Logs begin at Wed 2020-04-29 21:37:30 UTC, end at Wed 2020-04-29 21:45:47 UTC. --
Apr 29 21:45:42 minikube kubelet[6914]: E0429 21:45:42.528081    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:42 minikube kubelet[6914]: E0429 21:45:42.628255    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:42 minikube kubelet[6914]: E0429 21:45:42.728462    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:42 minikube kubelet[6914]: E0429 21:45:42.828623    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:42 minikube kubelet[6914]: E0429 21:45:42.928800    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:43 minikube kubelet[6914]: E0429 21:45:43.028948    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:43 minikube kubelet[6914]: E0429 21:45:43.129073    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:43 minikube kubelet[6914]: E0429 21:45:43.229210    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:43 minikube kubelet[6914]: E0429 21:45:43.329363    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:43 minikube kubelet[6914]: E0429 21:45:43.429542    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:43 minikube kubelet[6914]: E0429 21:45:43.529736    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:43 minikube kubelet[6914]: E0429 21:45:43.629915    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:43 minikube kubelet[6914]: E0429 21:45:43.730080    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:43 minikube kubelet[6914]: E0429 21:45:43.830273    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:43 minikube kubelet[6914]: E0429 21:45:43.930427    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:44 minikube kubelet[6914]: E0429 21:45:44.030570    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:44 minikube kubelet[6914]: E0429 21:45:44.130722    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:44 minikube kubelet[6914]: E0429 21:45:44.230874    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:44 minikube kubelet[6914]: E0429 21:45:44.331021    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:44 minikube kubelet[6914]: E0429 21:45:44.431190    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:44 minikube kubelet[6914]: E0429 21:45:44.531672    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:44 minikube kubelet[6914]: E0429 21:45:44.631827    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:44 minikube kubelet[6914]: E0429 21:45:44.732369    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:44 minikube kubelet[6914]: E0429 21:45:44.832940    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:44 minikube kubelet[6914]: E0429 21:45:44.933195    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:45 minikube kubelet[6914]: E0429 21:45:45.033499    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:45 minikube kubelet[6914]: E0429 21:45:45.133744    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:45 minikube kubelet[6914]: E0429 21:45:45.234171    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:45 minikube kubelet[6914]: E0429 21:45:45.334463    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:45 minikube kubelet[6914]: E0429 21:45:45.435155    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:45 minikube kubelet[6914]: E0429 21:45:45.535975    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:45 minikube kubelet[6914]: E0429 21:45:45.636296    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:45 minikube kubelet[6914]: E0429 21:45:45.737524    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:45 minikube kubelet[6914]: E0429 21:45:45.839174    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:45 minikube kubelet[6914]: E0429 21:45:45.939771    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:46 minikube kubelet[6914]: E0429 21:45:46.040004    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:46 minikube kubelet[6914]: E0429 21:45:46.140234    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:46 minikube kubelet[6914]: E0429 21:45:46.240485    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:46 minikube kubelet[6914]: E0429 21:45:46.282869    6914 controller.go:136] failed to ensure node lease exists, will retry in 7s, error: Get https://192.168.39.139:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube?timeout=10s: dial tcp 192.168.39.139:8443: connect: connection refused
Apr 29 21:45:46 minikube kubelet[6914]: E0429 21:45:46.340718    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:46 minikube kubelet[6914]: E0429 21:45:46.440981    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:46 minikube kubelet[6914]: E0429 21:45:46.542258    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:46 minikube kubelet[6914]: E0429 21:45:46.642491    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:46 minikube kubelet[6914]: E0429 21:45:46.742434    6914 event.go:269] Unable to write event: 'Post https://192.168.39.139:8443/api/v1/namespaces/default/events: dial tcp 192.168.39.139:8443: connect: connection refused' (may retry after sleeping)
Apr 29 21:45:46 minikube kubelet[6914]: E0429 21:45:46.743154    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:46 minikube kubelet[6914]: E0429 21:45:46.820328    6914 eviction_manager.go:255] eviction manager: failed to get summary stats: failed to get node info: node "minikube" not found
Apr 29 21:45:46 minikube kubelet[6914]: E0429 21:45:46.843329    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:46 minikube kubelet[6914]: E0429 21:45:46.849836    6914 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.39.139:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
Apr 29 21:45:46 minikube kubelet[6914]: I0429 21:45:46.937354    6914 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
Apr 29 21:45:46 minikube kubelet[6914]: E0429 21:45:46.943489    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:46 minikube kubelet[6914]: I0429 21:45:46.957486    6914 kubelet_node_status.go:70] Attempting to register node minikube
Apr 29 21:45:46 minikube kubelet[6914]: E0429 21:45:46.957798    6914 kubelet_node_status.go:92] Unable to register node "minikube" with API server: Post https://192.168.39.139:8443/api/v1/nodes: dial tcp 192.168.39.139:8443: connect: connection refused
Apr 29 21:45:47 minikube kubelet[6914]: E0429 21:45:47.043675    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:47 minikube kubelet[6914]: E0429 21:45:47.143976    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:47 minikube kubelet[6914]: E0429 21:45:47.244178    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:47 minikube kubelet[6914]: E0429 21:45:47.344365    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:47 minikube kubelet[6914]: E0429 21:45:47.444524    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:47 minikube kubelet[6914]: E0429 21:45:47.544672    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:47 minikube kubelet[6914]: E0429 21:45:47.644821    6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:47 minikube kubelet[6914]: E0429 21:45:47.744946    6914 kubelet.go:2267] node "minikube" not found

❗  unable to fetch logs for: describe nodes

tstromberg commented 4 years ago

The apiserver is being hung up somehow. These errors in your Docker log are very unusual:

Apr 29 21:45:24 minikube dockerd[2185]: time="2020-04-29T21:45:24.116963232Z" level=error msg="262a9968ddd5133568f3ca6471bc11d73893fd8f2177c1f15513d58f82d743f7 cleanup: failed to delete container from containerd: no such container"
Apr 29 21:45:24 minikube dockerd[2185]: time="2020-04-29T21:45:24.117342780Z" level=error msg="Handler for GET /containers/262a9968ddd5133568f3ca6471bc11d73893fd8f2177c1f15513d58f82d743f7/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
Apr 29 21:45:24 minikube dockerd[2185]: time="2020-04-29T21:45:24.117610233Z" level=error msg="Handler for GET /containers/262a9968ddd5133568f3ca6471bc11d73893fd8f2177c1f15513d58f82d743f7/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
Apr 29 21:45:24 minikube dockerd[2185]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
Apr 29 21:45:24 minikube dockerd[2185]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
Apr 29 21:45:27 minikube dockerd[2185]: time="2020-04-29T21:45:27.888867151Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3bae131c1002d8d3caa725d3df069ed2c130aa4f2b1c6d7e90b3e35fccc25c92/shim.sock" debug=false pid=7255

I see the broken pipe error referenced at https://github.com/moby/moby/issues/22221, but I have no idea how or why it would be triggered in this environment. Most reports of this error involve overloaded or slow VMs. The load within your VM seems OK: 21:45:47 up 8 min, 0 users, load average: 3.61, 2.64, 1.37

It isn't guaranteed, but I wonder whether minikube delete followed by minikube start gets past this error at all.

Any chance I can get you to try that, and report back with the output of:

minikube ssh "sudo dmesg"
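That is, the whole sequence would be roughly:

minikube delete
minikube start --driver kvm2
minikube ssh "sudo dmesg"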

Thanks!

deej-io commented 4 years ago

Thanks for all of your help so far. I honestly have no idea what I'm looking for, so I appreciate you trawling through all of these logs.

I ran minikube delete and minikube start --driver=kvm2 again and got the same broken pipe error. The VM is still running after the error, so I managed to run dmesg on it.

Out of curiosity, I also ran an Ubuntu image via virt-manager and CentOS via Vagrant (with --provider libvirt) to see whether all VMs ran slowly, but both seemed reasonably responsive. I'm not sure whether that means anything here, though.
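A minimal version of the Vagrant check would be something like this (the box name is just illustrative):

vagrant init centos/7
vagrant up --provider=libvirt
vagrant ssh -c "uptime"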

Please see the relevant logs below:

minikube start --driver=kvm2

😄  minikube v1.9.2 on Arch rolling
✨  Using the kvm2 driver based on user configuration
👍  Starting control plane node m01 in cluster minikube
🔥  Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.18.0 on Docker 19.03.8 ...
💥  initialization failed, will try again: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [192.168.39.51 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [192.168.39.51 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

    Unfortunately, an error has occurred:
        timed out waiting for the condition

    This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

    Additionally, a control plane component may have crashed or exited when started by the container runtime.
    To troubleshoot, list all containers using your preferred container runtimes CLI.

    Here is one example how you may list all Kubernetes containers running in docker:
        - 'docker ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'docker logs CONTAINERID'

stderr:
W0502 13:22:45.461081    2479 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0502 13:22:49.097975    2479 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0502 13:22:49.107964    2479 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
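
Side note on the suggestions above: they have to be run inside the minikube VM, not on the host. A minimal sketch of that loop, assuming the default profile name and that curl is present in the ISO:

    minikube ssh
    # inside the VM:
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet | tail -n 50
    # the health endpoint that kubeadm polls:
    curl -sSL http://localhost:10248/healthz
    # list control-plane containers and inspect a failing one:
    docker ps -a | grep kube | grep -v pause
    docker logs <CONTAINER ID>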

minikube logs:

==> Docker <==
-- Logs begin at Sat 2020-05-02 13:21:30 UTC, end at Sat 2020-05-02 13:26:52 UTC. --
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.031289760Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.031305765Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.031320085Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.031333865Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.031406731Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.031455515Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.031970880Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032010927Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032055502Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032071767Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032086769Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032101400Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032115150Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032129810Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032143650Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032157631Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032171661Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032206756Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032225305Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032240037Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032253757Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032402145Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032466212Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032481044Z" level=info msg="containerd successfully booted in 0.004083s"
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.040508210Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.040651617Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.040761914Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.040859563Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.041676323Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.041701718Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.041720758Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.041733807Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 02 13:22:24 minikube dockerd[2189]: time="2020-05-02T13:22:24.752699727Z" level=warning msg="Your kernel does not support cgroup blkio weight"
May 02 13:22:24 minikube dockerd[2189]: time="2020-05-02T13:22:24.753331067Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
May 02 13:22:24 minikube dockerd[2189]: time="2020-05-02T13:22:24.753425760Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
May 02 13:22:24 minikube dockerd[2189]: time="2020-05-02T13:22:24.753501856Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
May 02 13:22:24 minikube dockerd[2189]: time="2020-05-02T13:22:24.753576982Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
May 02 13:22:24 minikube dockerd[2189]: time="2020-05-02T13:22:24.753649673Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
May 02 13:22:24 minikube dockerd[2189]: time="2020-05-02T13:22:24.753951476Z" level=info msg="Loading containers: start."
May 02 13:22:37 minikube dockerd[2189]: time="2020-05-02T13:22:37.116382805Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
May 02 13:22:41 minikube dockerd[2189]: time="2020-05-02T13:22:41.217990970Z" level=info msg="Loading containers: done."
May 02 13:22:43 minikube dockerd[2189]: time="2020-05-02T13:22:43.017991265Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8
May 02 13:22:43 minikube dockerd[2189]: time="2020-05-02T13:22:43.018501481Z" level=info msg="Daemon has completed initialization"
May 02 13:22:44 minikube dockerd[2189]: time="2020-05-02T13:22:44.376320083Z" level=info msg="API listen on /var/run/docker.sock"
May 02 13:22:44 minikube systemd[1]: Started Docker Application Container Engine.
May 02 13:22:44 minikube dockerd[2189]: time="2020-05-02T13:22:44.377061591Z" level=info msg="API listen on [::]:2376"
May 02 13:23:31 minikube dockerd[2189]: time="2020-05-02T13:23:31.469876190Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5e626516c30d06a8ac8952a320d0117d56666a53132649000c2b5ae666a5dc37/shim.sock" debug=false pid=3329
May 02 13:23:35 minikube dockerd[2189]: time="2020-05-02T13:23:35.364775831Z" level=error msg="Handler for GET /containers/5e626516c30d06a8ac8952a320d0117d56666a53132649000c2b5ae666a5dc37/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
May 02 13:23:35 minikube dockerd[2189]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
May 02 13:23:38 minikube dockerd[2189]: time="2020-05-02T13:23:38.179414191Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/add8015f706c306a27f8e23fb735d2d9f3d3c46f23ae7b8834f477ea8c00a781/shim.sock" debug=false pid=3369
May 02 13:23:58 minikube dockerd[2189]: time="2020-05-02T13:23:58.485098468Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ea4e2c9a7c4cb1692a885c79f21f01074f764cd85f05019cb53c7525845b0049/shim.sock" debug=false pid=3475
May 02 13:24:09 minikube dockerd[2189]: time="2020-05-02T13:24:09.414385232Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9abbbecba551cb426178312c873f36102b9ae444efd7542da490bdb3727384ca/shim.sock" debug=false pid=3691
May 02 13:24:21 minikube dockerd[2189]: time="2020-05-02T13:24:21.357659786Z" level=info msg="shim reaped" id=9abbbecba551cb426178312c873f36102b9ae444efd7542da490bdb3727384ca
May 02 13:24:21 minikube dockerd[2189]: time="2020-05-02T13:24:21.369113205Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 02 13:25:15 minikube dockerd[2189]: time="2020-05-02T13:25:15.686564107Z" level=info msg="shim reaped" id=ea4e2c9a7c4cb1692a885c79f21f01074f764cd85f05019cb53c7525845b0049
May 02 13:25:15 minikube dockerd[2189]: time="2020-05-02T13:25:15.696727714Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 02 13:25:40 minikube dockerd[2189]: time="2020-05-02T13:25:40.360936572Z" level=info msg="shim reaped" id=add8015f706c306a27f8e23fb735d2d9f3d3c46f23ae7b8834f477ea8c00a781
May 02 13:25:40 minikube dockerd[2189]: time="2020-05-02T13:25:40.371235877Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 02 13:26:01 minikube dockerd[2189]: time="2020-05-02T13:26:01.782519827Z" level=info msg="shim reaped" id=5e626516c30d06a8ac8952a320d0117d56666a53132649000c2b5ae666a5dc37
May 02 13:26:01 minikube dockerd[2189]: time="2020-05-02T13:26:01.792643419Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
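
The "write unix /var/run/docker.sock->@: write: broken pipe" at 13:23:35 above is the same error as in the issue title. To watch the daemon hit it live, the Docker unit can be followed from the host (again assuming the default profile):

    minikube ssh "sudo journalctl -u docker -f"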

==> container status <==
time="2020-05-02T13:26:54Z" level=fatal msg="failed to connect: failed to connect, make sure you are running as root and the runtime has been started: context deadline exceeded"
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
6008a0593584        a31f78c7c8ce        "kube-scheduler --au…"   2 minutes ago       Created                                 k8s_kube-scheduler_kube-scheduler-minikube_kube-system_5795d0c442cb997ff93c49feeb9f6386_0
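
The kube-scheduler container above is stuck in "Created" and never reaches "Running". A sketch for digging into it inside the VM, reusing the ID from the listing:

    docker inspect 6008a0593584 --format '{{.State.Status}}: {{.State.Error}}'
    docker logs 6008a0593584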

==> describe nodes <==
E0502 14:26:54.272492   14824 logs.go:178] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:

stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
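
The connection refused on localhost:8443 just means the apiserver never came up. A quick check for a listener inside the VM (assuming ss ships in the ISO):

    minikube ssh "sudo ss -tlnp | grep 8443 || echo nothing listening on 8443"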

==> dmesg <==
[May 2 13:21] You have booted with nomodeset. This means your GPU drivers are DISABLED
[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
[  +0.024014] Decoding supported only on Scalable MCA processors.
[  +2.395424] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[  +0.528220] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
[  +0.002202] systemd-fstab-generator[1142]: Ignoring "noauto" for root device
[  +0.005645] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
[  +0.000002] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[  +0.945739] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[  +2.883956] vboxguest: loading out-of-tree module taints kernel.
[  +0.002680] vboxguest: PCI device not found, probably running on physical hardware.
[  +2.976374] systemd-fstab-generator[1995]: Ignoring "noauto" for root device
[ +11.253354] kauditd_printk_skb: 59 callbacks suppressed
[May 2 13:22] kauditd_printk_skb: 71 callbacks suppressed
[  +7.593342] systemd-fstab-generator[2403]: Ignoring "noauto" for root device
[  +1.106280] systemd-fstab-generator[2630]: Ignoring "noauto" for root device
[  +9.212981] kauditd_printk_skb: 26 callbacks suppressed
[May 2 13:23] NFSD: Unable to end grace period: -110

==> kernel <==
 13:26:54 up 5 min,  0 users,  load average: 1.37, 1.67, 0.80
Linux minikube 4.19.107 #1 SMP Thu Mar 26 11:33:10 PDT 2020 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2019.02.10"

==> kube-scheduler [6008a0593584] <==

==> kubelet <==
-- Logs begin at Sat 2020-05-02 13:21:30 UTC, end at Sat 2020-05-02 13:26:54 UTC. --
May 02 13:24:35 minikube systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
May 02 13:24:35 minikube systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
May 02 13:24:35 minikube systemd[1]: Started kubelet: The Kubernetes Node Agent.
May 02 13:24:35 minikube kubelet[4082]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 02 13:24:35 minikube kubelet[4082]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 02 13:24:35 minikube kubelet[4082]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 02 13:24:35 minikube kubelet[4082]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 02 13:24:35 minikube kubelet[4082]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 02 13:24:35 minikube kubelet[4082]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 02 13:24:35 minikube kubelet[4082]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 02 13:24:35 minikube kubelet[4082]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 02 13:24:35 minikube kubelet[4082]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 02 13:24:35 minikube kubelet[4082]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 02 13:24:35 minikube kubelet[4082]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 02 13:24:35 minikube kubelet[4082]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 02 13:24:35 minikube kubelet[4082]: I0502 13:24:35.684976    4082 server.go:417] Version: v1.18.0
May 02 13:24:35 minikube kubelet[4082]: I0502 13:24:35.687123    4082 plugins.go:100] No cloud provider specified.
May 02 13:24:35 minikube kubelet[4082]: I0502 13:24:35.687251    4082 server.go:837] Client rotation is on, will bootstrap in background
May 02 13:24:35 minikube kubelet[4082]: I0502 13:24:35.690258    4082 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.559772    4082 server.go:646] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.559995    4082 container_manager_linux.go:266] container manager verified user specified cgroup-root exists: []
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.560010    4082 container_manager_linux.go:271] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.560411    4082 topology_manager.go:126] [topologymanager] Creating topology manager with none policy
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.560421    4082 container_manager_linux.go:301] [topologymanager] Initializing Topology Manager with none policy
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.560425    4082 container_manager_linux.go:306] Creating device plugin manager: true
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.560468    4082 client.go:75] Connecting to docker on unix:///var/run/docker.sock
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.560490    4082 client.go:92] Start docker client with request timeout=2m0s
May 02 13:24:41 minikube kubelet[4082]: W0502 13:24:41.567581    4082 docker_service.go:561] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.567611    4082 docker_service.go:238] Hairpin mode set to "hairpin-veth"
May 02 13:24:41 minikube kubelet[4082]: W0502 13:24:41.567718    4082 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.570212    4082 docker_service.go:253] Docker cri networking managed by kubernetes.io/no-op
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.577418    4082 docker_service.go:258] Docker Info: &{ID:P3PS:GBTV:37DI:PEWB:WAFB:ESMR:4BMG:BE3S:C2S3:BPWF:MR5Z:YPBO Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem <unknown>] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2020-05-02T13:24:41.570928951Z LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:4.19.107 OperatingSystem:Buildroot 2019.02.10 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0007703f0 NCPU:2 MemTotal:3840438272 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:minikube Labels:[provider=kvm2] ExperimentalBuild:false ServerVersion:19.03.8 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:[]}
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.577489    4082 docker_service.go:271] Setting cgroupDriver to systemd
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.583913    4082 remote_runtime.go:59] parsed scheme: ""
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.583935    4082 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.583968    4082 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock  <nil> 0 <nil>}] <nil> <nil>}
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.583978    4082 clientconn.go:933] ClientConn switching balancer to "pick_first"
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.584018    4082 remote_image.go:50] parsed scheme: ""
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.584026    4082 remote_image.go:50] scheme "" not registered, fallback to default scheme
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.584038    4082 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock  <nil> 0 <nil>}] <nil> <nil>}
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.584045    4082 clientconn.go:933] ClientConn switching balancer to "pick_first"
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.584073    4082 kubelet.go:292] Adding pod path: /etc/kubernetes/manifests
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.584092    4082 kubelet.go:317] Watching apiserver
May 02 13:24:41 minikube kubelet[4082]: E0502 13:24:41.590714    4082 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:526: Failed to list *v1.Node: Get https://192.168.39.51:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.39.51:8443: connect: connection refused
May 02 13:24:41 minikube kubelet[4082]: E0502 13:24:41.590873    4082 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:517: Failed to list *v1.Service: Get https://192.168.39.51:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.39.51:8443: connect: connection refused
May 02 13:24:41 minikube kubelet[4082]: E0502 13:24:41.591000    4082 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.39.51:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.39.51:8443: connect: connection refused
May 02 13:24:41 minikube kubelet[4082]: E0502 13:24:41.595438    4082 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:526: Failed to list *v1.Node: Get https://192.168.39.51:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.39.51:8443: connect: connection refused
May 02 13:24:41 minikube kubelet[4082]: E0502 13:24:41.595566    4082 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.39.51:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.39.51:8443: connect: connection refused
May 02 13:24:41 minikube kubelet[4082]: E0502 13:24:41.595674    4082 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:517: Failed to list *v1.Service: Get https://192.168.39.51:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.39.51:8443: connect: connection refused
May 02 13:24:43 minikube kubelet[4082]: E0502 13:24:43.323866    4082 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.39.51:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.39.51:8443: connect: connection refused
May 02 13:24:44 minikube kubelet[4082]: E0502 13:24:44.387713    4082 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:526: Failed to list *v1.Node: Get https://192.168.39.51:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.39.51:8443: connect: connection refused
May 02 13:24:44 minikube systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
May 02 13:24:44 minikube kubelet[4082]: E0502 13:24:44.683019    4082 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:517: Failed to list *v1.Service: Get https://192.168.39.51:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.39.51:8443: connect: connection refused
May 02 13:24:47 minikube kubelet[4082]: E0502 13:24:47.659246    4082 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.39.51:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.39.51:8443: connect: connection refused
May 02 13:24:47 minikube kubelet[4082]: E0502 13:24:47.820341    4082 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
May 02 13:24:47 minikube kubelet[4082]:         For verbose messaging see aws.Config.CredentialsChainVerboseErrors
May 02 13:24:47 minikube kubelet[4082]: I0502 13:24:47.828446    4082 kuberuntime_manager.go:211] Container runtime docker initialized, version: 19.03.8, apiVersion: 1.40.0
May 02 13:24:47 minikube kubelet[4082]: I0502 13:24:47.828874    4082 server.go:1125] Started kubelet
May 02 13:24:47 minikube systemd[1]: kubelet.service: Succeeded.
May 02 13:24:47 minikube systemd[1]: Stopped kubelet: The Kubernetes Node Agent.

❗  unable to fetch logs for: describe nodes
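
The kubelet section above shows systemd restarting the unit (restart counter at 5) and then stopping it again seconds after "Started kubelet", which lines up with the healthz probe failures earlier. One way to confirm the crash loop is still going, assuming the ISO's systemd is new enough to expose NRestarts:

    minikube ssh "sudo systemctl show kubelet -p NRestarts -p ActiveState"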
Running minikube logs again a few seconds later produced identical output.

minikube ssh "sudo dmesg":

[    0.000000] Linux version 4.19.107 (jenkins@jenkins) (gcc version 7.4.0 (Buildroot 2019.02.10)) #1 SMP Thu Mar 26 11:33:10 PDT 2020
[    0.000000] Command line: BOOT_IMAGE=/boot/bzImage root=/dev/sr0 loglevel=3 console=ttyS0 noembed nomodeset norestore waitusb=10 random.trust_cpu=on hw_rng_model=virtio systemd.legacy_systemd_cgroup_controller=yes initrd=/boot/initrd
[    0.000000] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
[    0.000000] x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
[    0.000000] x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
[    0.000000] BIOS-provided physical RAM map:
[    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
[    0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
[    0.000000] BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000100000000-0x000000012e6fffff] usable
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] SMBIOS 2.8 present.
[    0.000000] DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS ?-20191223_100556-anatol 04/01/2014
[    0.000000] Hypervisor detected: KVM
[    0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00
[    0.000000] kvm-clock: cpu 0, msr 1087ca001, primary cpu clock
[    0.000000] kvm-clock: using sched offset of 554728778 cycles
[    0.000001] clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
[    0.000002] tsc: Detected 3393.624 MHz processor
[    0.000428] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
[    0.000429] e820: remove [mem 0x000a0000-0x000fffff] usable
[    0.000432] last_pfn = 0x12e700 max_arch_pfn = 0x400000000
[    0.000459] MTRR default type: write-back
[    0.000460] MTRR fixed ranges enabled:
[    0.000460]   00000-9FFFF write-back
[    0.000461]   A0000-BFFFF uncachable
[    0.000461]   C0000-FFFFF write-protect
[    0.000462] MTRR variable ranges enabled:
[    0.000463]   0 base 00C0000000 mask FFC0000000 uncachable
[    0.000463]   1 disabled
[    0.000463]   2 disabled
[    0.000463]   3 disabled
[    0.000464]   4 disabled
[    0.000464]   5 disabled
[    0.000464]   6 disabled
[    0.000464]   7 disabled
[    0.000474] x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT
[    0.000481] last_pfn = 0xbffdb max_arch_pfn = 0x400000000
[    0.002431] found SMP MP-table at [mem 0x000f5c50-0x000f5c5f]
[    0.002471] Scanning 1 areas for low memory corruption
[    0.002492] Using GB pages for direct mapping
[    0.002493] BRK [0x108a01000, 0x108a01fff] PGTABLE
[    0.002494] BRK [0x108a02000, 0x108a02fff] PGTABLE
[    0.002495] BRK [0x108a03000, 0x108a03fff] PGTABLE
[    0.002508] BRK [0x108a04000, 0x108a04fff] PGTABLE
[    0.002510] BRK [0x108a05000, 0x108a05fff] PGTABLE
[    0.002559] BRK [0x108a06000, 0x108a06fff] PGTABLE
[    0.002569] BRK [0x108a07000, 0x108a07fff] PGTABLE
[    0.002576] BRK [0x108a08000, 0x108a08fff] PGTABLE
[    0.002595] RAMDISK: [mem 0x75db3000-0x7fffffff]
[    0.002608] ACPI: Early table checksum verification disabled
[    0.002634] ACPI: RSDP 0x00000000000F5A20 000014 (v00 BOCHS )
[    0.002640] ACPI: RSDT 0x00000000BFFE15A2 000030 (v01 BOCHS  BXPCRSDT 00000001 BXPC 00000001)
[    0.002643] ACPI: FACP 0x00000000BFFE1476 000074 (v01 BOCHS  BXPCFACP 00000001 BXPC 00000001)
[    0.002646] ACPI: DSDT 0x00000000BFFE0040 001436 (v01 BOCHS  BXPCDSDT 00000001 BXPC 00000001)
[    0.002648] ACPI: FACS 0x00000000BFFE0000 000040
[    0.002650] ACPI: APIC 0x00000000BFFE14EA 000080 (v01 BOCHS  BXPCAPIC 00000001 BXPC 00000001)
[    0.002651] ACPI: HPET 0x00000000BFFE156A 000038 (v01 BOCHS  BXPCHPET 00000001 BXPC 00000001)
[    0.002656] ACPI: Local APIC address 0xfee00000
[    0.002951] No NUMA configuration found
[    0.002952] Faking a node at [mem 0x0000000000000000-0x000000012e6fffff]
[    0.002955] NODE_DATA(0) allocated [mem 0x12e6fc000-0x12e6fffff]
[    0.003292] Zone ranges:
[    0.003292]   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
[    0.003293]   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
[    0.003294]   Normal   [mem 0x0000000100000000-0x000000012e6fffff]
[    0.003295] Movable zone start for each node
[    0.003295] Early memory node ranges
[    0.003296]   node   0: [mem 0x0000000000001000-0x000000000009efff]
[    0.003296]   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
[    0.003297]   node   0: [mem 0x0000000100000000-0x000000012e6fffff]
[    0.004013] Zeroed struct page in unavailable ranges: 6535 pages
[    0.004015] Initmem setup node 0 [mem 0x0000000000001000-0x000000012e6fffff]
[    0.004016] On node 0 totalpages: 976505
[    0.004017]   DMA zone: 64 pages used for memmap
[    0.004017]   DMA zone: 21 pages reserved
[    0.004018]   DMA zone: 3998 pages, LIFO batch:0
[    0.004077]   DMA32 zone: 12224 pages used for memmap
[    0.004078]   DMA32 zone: 782299 pages, LIFO batch:63
[    0.022226]   Normal zone: 2972 pages used for memmap
[    0.022228]   Normal zone: 190208 pages, LIFO batch:63
[    0.027775] ACPI: PM-Timer IO Port: 0x608
[    0.027778] ACPI: Local APIC address 0xfee00000
[    0.027782] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
[    0.027813] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
[    0.027814] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[    0.027816] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
[    0.027816] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[    0.027817] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
[    0.027818] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
[    0.027819] ACPI: IRQ0 used by override.
[    0.027819] ACPI: IRQ5 used by override.
[    0.027820] ACPI: IRQ9 used by override.
[    0.027820] ACPI: IRQ10 used by override.
[    0.027820] ACPI: IRQ11 used by override.
[    0.027822] Using ACPI (MADT) for SMP configuration information
[    0.027823] ACPI: HPET id: 0x8086a201 base: 0xfed00000
[    0.027829] smpboot: Allowing 2 CPUs, 0 hotplug CPUs
[    0.027848] PM: Registered nosave memory: [mem 0x00000000-0x00000fff]
[    0.027849] PM: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
[    0.027849] PM: Registered nosave memory: [mem 0x000a0000-0x000effff]
[    0.027850] PM: Registered nosave memory: [mem 0x000f0000-0x000fffff]
[    0.027851] PM: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
[    0.027851] PM: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
[    0.027852] PM: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
[    0.027852] PM: Registered nosave memory: [mem 0xff000000-0xfffbffff]
[    0.027852] PM: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
[    0.027854] [mem 0xc0000000-0xfeffbfff] available for PCI devices
[    0.027854] Booting paravirtualized kernel on KVM
[    0.027857] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
[    0.129794] random: get_random_bytes called from start_kernel+0x8f/0x4bc with crng_init=0
[    0.129800] setup_percpu: NR_CPUS:64 nr_cpumask_bits:64 nr_cpu_ids:2 nr_node_ids:1
[    0.130114] percpu: Embedded 43 pages/cpu s137176 r8192 d30760 u1048576
[    0.130117] pcpu-alloc: s137176 r8192 d30760 u1048576 alloc=1*2097152
[    0.130118] pcpu-alloc: [0] 0 1
[    0.130139] KVM setup async PF for cpu 0
[    0.130143] kvm-stealtime: cpu 0, msr 12a615200
[    0.130147] PV qspinlock hash table entries: 256 (order: 0, 4096 bytes)
[    0.130151] Built 1 zonelists, mobility grouping on.  Total pages: 961224
[    0.130151] Policy zone: Normal
[    0.130152] Kernel command line: BOOT_IMAGE=/boot/bzImage root=/dev/sr0 loglevel=3 console=ttyS0 noembed nomodeset norestore waitusb=10 random.trust_cpu=on hw_rng_model=virtio systemd.legacy_systemd_cgroup_controller=yes initrd=/boot/initrd
[    0.130207] You have booted with nomodeset. This means your GPU drivers are DISABLED
[    0.130207] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[    0.130207] Unless you actually understand what nomodeset does, you should reboot without enabling it
[    0.139713] Calgary: detecting Calgary via BIOS EBDA area
[    0.139714] Calgary: Unable to locate Rio Grande table in EBDA - bailing!
[    0.147414] Memory: 3580100K/3906020K available (14348K kernel code, 1636K rwdata, 3440K rodata, 1428K init, 2356K bss, 325920K reserved, 0K cma-reserved)
[    0.147732] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
[    0.148030] rcu: Hierarchical RCU implementation.
[    0.148030] rcu:     RCU event tracing is enabled.
[    0.148031] rcu:     RCU restricting CPUs from NR_CPUS=64 to nr_cpu_ids=2.
[    0.148032] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
[    0.148147] NR_IRQS: 4352, nr_irqs: 440, preallocated irqs: 16
[    0.148408] Console: colour *CGA 80x25
[    0.148445] console [ttyS0] enabled
[    0.148452] ACPI: Core revision 20180810
[    0.148624] clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
[    0.148695] hpet clockevent registered
[    0.148715] APIC: Switch to symmetric I/O mode setup
[    0.148717] KVM setup pv IPIs
[    0.149671] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
[    0.149687] tsc: Marking TSC unstable due to TSCs unsynchronized
[    0.149694] Calibrating delay loop (skipped) preset value.. 6787.24 BogoMIPS (lpj=3393624)
[    0.149696] pid_max: default: 32768 minimum: 301
[    0.149708] Security Framework initialized
[    0.149709] SELinux:  Initializing.
[    0.151426] Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes)
[    0.151726] Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes)
[    0.151734] Mount-cache hash table entries: 8192 (order: 4, 65536 bytes)
[    0.151739] Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes)
[    0.151956] Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
[    0.151957] Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
[    0.151960] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
[    0.151961] Spectre V2 : Mitigation: Full AMD retpoline
[    0.151961] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
[    0.151962] Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
[    0.152179] Freeing SMP alternatives memory: 44K
[    0.153984] TSC deadline timer enabled
[    0.153997] smpboot: CPU0: AMD Ryzen 7 1700X Eight-Core Processor (family: 0x17, model: 0x1, stepping: 0x1)
[    0.154056] Performance Events: Fam17h core perfctr, AMD PMU driver.
[    0.154070] ... version:                0
[    0.154071] ... bit width:              48
[    0.154071] ... generic registers:      6
[    0.154071] ... value mask:             0000ffffffffffff
[    0.154072] ... max period:             00007fffffffffff
[    0.154072] ... fixed-purpose events:   0
[    0.154073] ... event mask:             000000000000003f
[    0.154102] rcu: Hierarchical SRCU implementation.
[    0.154220] random: crng done (trusting CPU's manufacturer)
[    0.154221] Decoding supported only on Scalable MCA processors.
[    0.154260] smp: Bringing up secondary CPUs ...
[    0.154335] x86: Booting SMP configuration:
[    0.154336] .... node  #0, CPUs:      #1
[    0.001382] kvm-clock: cpu 1, msr 1087ca041, secondary cpu clock
[    0.154889] KVM setup async PF for cpu 1
[    0.154889] kvm-stealtime: cpu 1, msr 12a715200
[    0.154889] smp: Brought up 1 node, 2 CPUs
[    0.154889] smpboot: Max logical packages: 2
[    0.154889] smpboot: Total of 2 processors activated (13574.49 BogoMIPS)
[    0.154926] devtmpfs: initialized
[    0.155706] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
[    0.155709] futex hash table entries: 512 (order: 3, 32768 bytes)
[    0.155717] kworker/u4:0 (23) used greatest stack depth: 14576 bytes left
[    0.155800] RTC time: 13:21:26, date: 05/02/20
[    0.155889] NET: Registered protocol family 16
[    0.156012] audit: initializing netlink subsys (disabled)
[    0.156030] audit: type=2000 audit(1588425687.655:1): state=initialized audit_enabled=0 res=1
[    0.156131] cpuidle: using governor menu
[    0.157091] KVM setup pv remote TLB flush
[    0.157091] ACPI: bus type PCI registered
[    0.157091] PCI: Using configuration type 1 for base access
[    0.157091] PCI: Using configuration type 1 for extended access
[    0.159024] kworker/u4:0 (49) used greatest stack depth: 14112 bytes left
[    0.162129] kworker/u4:0 (356) used greatest stack depth: 14056 bytes left
[    0.163970] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
[    0.163994] cryptd: max_cpu_qlen set to 1000
[    0.163994] ACPI: Added _OSI(Module Device)
[    0.163994] ACPI: Added _OSI(Processor Device)
[    0.163994] ACPI: Added _OSI(3.0 _SCP Extensions)
[    0.163994] ACPI: Added _OSI(Processor Aggregator Device)
[    0.163994] ACPI: Added _OSI(Linux-Dell-Video)
[    0.163994] ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
[    0.164166] ACPI: 1 ACPI AML tables successfully acquired and loaded
[    0.165199] ACPI: Interpreter enabled
[    0.165209] ACPI: (supports S0 S3 S4 S5)
[    0.165210] ACPI: Using IOAPIC for interrupt routing
[    0.165222] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[    0.165287] ACPI: Enabled 2 GPEs in block 00 to 0F
[    0.166938] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
[    0.166942] acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI]
[    0.166945] acpi PNP0A03:00: _OSC failed (AE_NOT_FOUND); disabling ASPM
[    0.166971] PCI host bridge to bus 0000:00
[    0.166973] pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
[    0.166974] pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
[    0.166975] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
[    0.166976] pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
[    0.166977] pci_bus 0000:00: root bus resource [mem 0x140000000-0x1bfffffff window]
[    0.166978] pci_bus 0000:00: root bus resource [bus 00-ff]
[    0.167016] pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
[    0.167522] pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
[    0.168119] pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
[    0.169589] pci 0000:00:01.1: reg 0x20: [io  0xc220-0xc22f]
[    0.170270] pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io  0x01f0-0x01f7]
[    0.170270] pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io  0x03f6]
[    0.170271] pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io  0x0170-0x0177]
[    0.170272] pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io  0x0376]
[    0.170502] pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
[    0.171905] pci 0000:00:01.2: reg 0x20: [io  0xc180-0xc19f]
[    0.172903] pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
[    0.173285] pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
[    0.173293] pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
[    0.173510] pci 0000:00:02.0: [1af4:1000] type 00 class 0x020000
[    0.174134] pci 0000:00:02.0: reg 0x10: [io  0xc1a0-0xc1bf]
[    0.175131] pci 0000:00:02.0: reg 0x14: [mem 0xfebc2000-0xfebc2fff]
[    0.177082] pci 0000:00:02.0: reg 0x20: [mem 0xfebec000-0xfebeffff 64bit pref]
[    0.177693] pci 0000:00:02.0: reg 0x30: [mem 0xfeb40000-0xfeb7ffff pref]
[    0.178301] pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
[    0.179040] pci 0000:00:03.0: reg 0x10: [io  0xc1c0-0xc1df]
[    0.179692] pci 0000:00:03.0: reg 0x14: [mem 0xfebc3000-0xfebc3fff]
[    0.182036] pci 0000:00:03.0: reg 0x20: [mem 0xfebf0000-0xfebf3fff 64bit pref]
[    0.182692] pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
[    0.183249] pci 0000:00:04.0: [1000:0012] type 00 class 0x010000
[    0.184357] pci 0000:00:04.0: reg 0x10: [io  0xc000-0xc0ff]
[    0.185485] pci 0000:00:04.0: reg 0x14: [mem 0xfebc4000-0xfebc43ff]
[    0.186056] pci 0000:00:04.0: reg 0x18: [mem 0xfebc0000-0xfebc1fff]
[    0.188930] pci 0000:00:05.0: [1af4:1001] type 00 class 0x010000
[    0.190695] pci 0000:00:05.0: reg 0x10: [io  0xc100-0xc17f]
[    0.192693] pci 0000:00:05.0: reg 0x14: [mem 0xfebc5000-0xfebc5fff]
[    0.195697] pci 0000:00:05.0: reg 0x20: [mem 0xfebf4000-0xfebf7fff 64bit pref]
[    0.198543] pci 0000:00:06.0: [1af4:1002] type 00 class 0x00ff00
[    0.199662] pci 0000:00:06.0: reg 0x10: [io  0xc1e0-0xc1ff]
[    0.201550] pci 0000:00:06.0: reg 0x20: [mem 0xfebf8000-0xfebfbfff 64bit pref]
[    0.202801] pci 0000:00:07.0: [1af4:1005] type 00 class 0x00ff00
[    0.203430] pci 0000:00:07.0: reg 0x10: [io  0xc200-0xc21f]
[    0.205533] pci 0000:00:07.0: reg 0x20: [mem 0xfebfc000-0xfebfffff 64bit pref]
[    0.206888] ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11)
[    0.206969] ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11)
[    0.207039] ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11)
[    0.207107] ACPI: PCI Interrupt Link [LNKD] (IRQs 5 10 *11)
[    0.207145] ACPI: PCI Interrupt Link [LNKS] (IRQs *9)
[    0.207286] vgaarb: loaded
[    0.207286] SCSI subsystem initialized
[    0.207286] libata version 3.00 loaded.
[    0.207286] ACPI: bus type USB registered
[    0.207286] usbcore: registered new interface driver usbfs
[    0.207286] usbcore: registered new interface driver hub
[    0.207286] usbcore: registered new device driver usb
[    0.207286] pps_core: LinuxPPS API ver. 1 registered
[    0.207286] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
[    0.207286] PTP clock support registered
[    0.207286] EDAC MC: Ver: 3.0.0
[    0.207738] Advanced Linux Sound Architecture Driver Initialized.
[    0.207751] PCI: Using ACPI for IRQ routing
[    0.207752] PCI: pci_cache_line_size set to 64 bytes
[    0.207940] e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
[    0.207941] e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
[    0.207941] e820: reserve RAM buffer [mem 0x12e700000-0x12fffffff]
[    0.208013] NetLabel: Initializing
[    0.208014] NetLabel:  domain hash size = 128
[    0.208014] NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
[    0.208023] NetLabel:  unlabeled traffic allowed by default
[    0.208043] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
[    0.208043] hpet0: 3 comparators, 64-bit 100.000000 MHz counter
[    0.210729] clocksource: Switched to clocksource kvm-clock
[    0.220590] VFS: Disk quotas dquot_6.6.0
[    0.220600] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[    0.220644] pnp: PnP ACPI init
[    0.220709] pnp 00:00: Plug and Play ACPI device, IDs PNP0b00 (active)
[    0.220735] pnp 00:01: Plug and Play ACPI device, IDs PNP0303 (active)
[    0.220753] pnp 00:02: Plug and Play ACPI device, IDs PNP0f13 (active)
[    0.220759] pnp 00:03: [dma 2]
[    0.220770] pnp 00:03: Plug and Play ACPI device, IDs PNP0700 (active)
[    0.220840] pnp 00:04: Plug and Play ACPI device, IDs PNP0501 (active)
[    0.220984] pnp: PnP ACPI: found 5 devices
[    0.228191] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
[    0.228199] pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
[    0.228200] pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
[    0.228201] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
[    0.228202] pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
[    0.228203] pci_bus 0000:00: resource 8 [mem 0x140000000-0x1bfffffff window]
[    0.228240] NET: Registered protocol family 2
[    0.228343] tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes)
[    0.228352] TCP established hash table entries: 32768 (order: 6, 262144 bytes)
[    0.228393] TCP bind hash table entries: 32768 (order: 7, 524288 bytes)
[    0.228436] TCP: Hash tables configured (established 32768 bind 32768)
[    0.228453] UDP hash table entries: 2048 (order: 4, 65536 bytes)
[    0.228462] UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes)
[    0.228491] NET: Registered protocol family 1
[    0.228617] RPC: Registered named UNIX socket transport module.
[    0.228618] RPC: Registered udp transport module.
[    0.228619] RPC: Registered tcp transport module.
[    0.228620] RPC: Registered tcp NFSv4.1 backchannel transport module.
[    0.228809] pci 0000:00:01.0: PIIX3: Enabling Passive Release
[    0.228829] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
[    0.228847] pci 0000:00:01.0: Activating ISA DMA hang workarounds
[    0.252446] PCI Interrupt Link [LNKD] enabled at IRQ 11
[    0.276822] pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x690 took 46810 usecs
[    0.276958] PCI: CLS 0 bytes, default 64
[    0.277003] Unpacking initramfs...
[    2.422891] Freeing initrd memory: 166196K
[    2.422895] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
[    2.422896] software IO TLB: mapped [mem 0xbbfdb000-0xbffdb000] (64MB)
[    2.423353] Scanning for low memory corruption every 60 seconds
[    2.423727] Initialise system trusted keyrings
[    2.423878] workingset: timestamp_bits=40 max_order=20 bucket_order=0
[    2.426354] NFS: Registering the id_resolver key type
[    2.426357] Key type id_resolver registered
[    2.426357] Key type id_legacy registered
[    2.426360] nfs4filelayout_init: NFSv4 File Layout Driver Registering...
[    2.426361] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
[    2.426701] fuse init (API version 7.27)
[    2.426759] SGI XFS with ACLs, security attributes, no debug enabled
[    2.428217] NET: Registered protocol family 38
[    2.428218] Key type asymmetric registered
[    2.428219] Asymmetric key parser 'x509' registered
[    2.428225] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
[    2.428227] io scheduler noop registered
[    2.428227] io scheduler deadline registered
[    2.428244] io scheduler cfq registered (default)
[    2.428245] io scheduler mq-deadline registered
[    2.428246] io scheduler kyber registered
[    2.428356] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
[    2.428430] ACPI: Power Button [PWRF]
[    2.440863] PCI Interrupt Link [LNKB] enabled at IRQ 10
[    2.453914] PCI Interrupt Link [LNKC] enabled at IRQ 11
[    2.466882] PCI Interrupt Link [LNKA] enabled at IRQ 10
[    2.495423] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[    2.519155] 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
[    2.519427] Non-volatile memory driver v1.3
[    2.519814] Linux agpgart interface v0.103
[    2.520942] loop: module loaded
[    2.521755] virtio_blk virtio2: [vda] 39062500 512-byte logical blocks (20.0 GB/18.6 GiB)
[    2.524884] VMware PVSCSI driver - version 1.0.7.0-k
[    2.525058] ata_piix 0000:00:01.1: version 2.13
[    2.525825] scsi host0: ata_piix
[    2.525993] scsi host1: ata_piix
[    2.526064] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc220 irq 14
[    2.526066] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc228 irq 15
[    2.526174] tun: Universal TUN/TAP device driver, 1.6
[    2.529836] e100: Intel(R) PRO/100 Network Driver, 3.5.24-k2-NAPI
[    2.529836] e100: Copyright(c) 1999-2006 Intel Corporation
[    2.529849] e1000: Intel(R) PRO/1000 Network Driver - version 7.3.21-k8-NAPI
[    2.529850] e1000: Copyright (c) 1999-2006 Intel Corporation.
[    2.529864] e1000e: Intel(R) PRO/1000 Network Driver - 3.2.6-k
[    2.529865] e1000e: Copyright(c) 1999 - 2015 Intel Corporation.
[    2.529881] sky2: driver version 1.30
[    2.529929] VMware vmxnet3 virtual NIC driver - version 1.4.16.0-k-NAPI
[    2.530013] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[    2.530014] ehci-pci: EHCI PCI platform driver
[    2.530019] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
[    2.530029] ohci-pci: OHCI PCI platform driver
[    2.530034] uhci_hcd: USB Universal Host Controller Interface driver
[    2.542542] uhci_hcd 0000:00:01.2: UHCI Host Controller
[    2.542576] uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
[    2.542746] uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c180
[    2.542809] usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 4.19
[    2.542810] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[    2.542811] usb usb1: Product: UHCI Host Controller
[    2.542812] usb usb1: Manufacturer: Linux 4.19.107 uhci_hcd
[    2.542812] usb usb1: SerialNumber: 0000:00:01.2
[    2.542872] hub 1-0:1.0: USB hub found
[    2.542875] hub 1-0:1.0: 2 ports detected
[    2.542961] usbcore: registered new interface driver usblp
[    2.542968] usbcore: registered new interface driver usb-storage
[    2.542984] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
[    2.543593] serio: i8042 KBD port at 0x60,0x64 irq 1
[    2.543597] serio: i8042 AUX port at 0x60,0x64 irq 12
[    2.543788] rtc_cmos 00:00: RTC can wake from S4
[    2.544189] rtc_cmos 00:00: registered as rtc0
[    2.544213] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
[    2.544744] rtc_cmos 00:00: alarms up to one day, y3k, 114 bytes nvram, hpet irqs
[    2.544965] device-mapper: ioctl: 4.39.0-ioctl (2018-04-03) initialised: dm-devel@redhat.com
[    2.545683] hidraw: raw HID events driver (C) Jiri Kosina
[    2.545879] usbcore: registered new interface driver usbhid
[    2.545880] usbhid: USB HID core driver
[    2.546024] netem: version 1.3
[    2.546167] Initializing XFRM netlink socket
[    2.546266] NET: Registered protocol family 10
[    2.546569] Segment Routing with IPv6
[    2.546656] sit: IPv6, IPv4 and MPLS over IPv4 tunneling driver
[    2.546798] NET: Registered protocol family 17
[    2.546814] Key type dns_resolver registered
[    2.546818] Key type ceph registered
[    2.546877] libceph: loaded (mon/osd proto 15/24)
[    2.547108] mce: Using 10 MCE banks
[    2.547124] AVX2 version of gcm_enc/dec engaged.
[    2.547124] AES CTR mode by8 optimization enabled
[    2.547495] sched_clock: Marking stable (2547106485, 382834)->(2569669536, -22180217)
[    2.547656] registered taskstats version 1
[    2.547657] Loading compiled-in X.509 certificates
[    2.547917]   Magic number: 4:344:381
[    2.547946] console [netcon0] enabled
[    2.547947] netconsole: network logging started
[    2.548001] cfg80211: Loading compiled-in X.509 certificates for regulatory database
[    2.549277] cfg80211: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
[    2.549281] ALSA device list:
[    2.549282]   No soundcards found.
[    2.549645] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[    2.549647] cfg80211: failed to load regulatory.db
[    2.685865] Freeing unused kernel image memory: 1428K
[    2.690718] Write protecting the kernel read-only data: 20480k
[    2.691359] Freeing unused kernel image memory: 2004K
[    2.691488] Freeing unused kernel image memory: 656K
[    2.691490] Run /init as init process
[    3.074023] systemd[1]: systemd 240 running in system mode. (-PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK +SYSVINIT +UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid)
[    3.074046] systemd[1]: Detected virtualization kvm.
[    3.074048] systemd[1]: Detected architecture x86-64.
[    3.077711] systemd[1]: Set hostname to <minikube>.
[    3.077730] systemd[1]: Initializing machine ID from KVM UUID.
[    3.077865] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
[    3.080067] systemd-fstab-generator[1142]: Ignoring "noauto" for root device
[    3.085712] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
[    3.085714] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[    3.091516] systemd[1]: /usr/lib/systemd/system/vmtoolsd.service:7: PIDFile= references path below legacy directory /var/run/, updating /var/run/vmtoolsd.pid \xe2\x86\x92 /run/vmtoolsd.pid; please update the unit file accordingly.
[    3.094768] systemd[1]: /usr/lib/systemd/system/rpc-statd.service:13: PIDFile= references path below legacy directory /var/run/, updating /var/run/rpc.statd.pid \xe2\x86\x92 /run/rpc.statd.pid; please update the unit file accordingly.
[    3.164497] input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
[    3.381137] systemd-journald[1170]: Received request to flush runtime journal from PID 1
[    3.506434] kvm: Nested Virtualization enabled
[    3.506435] kvm: Nested Paging enabled
[    3.845665]  vda: vda1
[    3.901702] fdisk (1775) used greatest stack depth: 14032 bytes left
[    3.902798]  vda: vda1
[    4.031453] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[    4.031455] NFSD: starting 90-second grace period (net f0000098)
[    4.031745] rpc.nfsd (1814) used greatest stack depth: 13640 bytes left
[    6.567335] mkfs.ext4 (1791) used greatest stack depth: 13264 bytes left
[    6.851980] EXT4-fs (vda1): mounted filesystem with ordered data mode. Opts: (null)
[    6.915409] vboxguest: loading out-of-tree module taints kernel.
[    6.918089] vboxguest: PCI device not found, probably running on physical hardware.
[    9.894463] systemd-fstab-generator[1995]: Ignoring "noauto" for root device
[   15.446726] bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
[   15.447747] Bridge firewalling registered
[   15.455976] audit: type=1325 audit(1588425702.963:2): table=nat family=2 entries=0
[   15.456471] audit: type=1300 audit(1588425702.963:2): arch=c000003e syscall=313 success=yes exit=0 a0=5 a1=41a8e6 a2=0 a3=5 items=0 ppid=57 pid=2039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=kernel key=(null)
[   15.456604] audit: type=1327 audit(1588425702.963:2): proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0069707461626C655F6E6174
[   15.476092] audit: type=1325 audit(1588425702.984:3): table=nat family=2 entries=5
[   15.476095] audit: type=1300 audit(1588425702.984:3): arch=c000003e syscall=54 success=yes exit=0 a0=5 a1=0 a2=40 a3=210ca60 items=0 ppid=2005 pid=2068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-legacy-multi" subj=kernel key=(null)
[   15.476097] audit: type=1327 audit(1588425702.984:3): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
[   15.477704] audit: type=1325 audit(1588425702.985:4): table=filter family=2 entries=4
[   15.477707] audit: type=1300 audit(1588425702.985:4): arch=c000003e syscall=54 success=yes exit=0 a0=5 a1=0 a2=40 a3=2064940 items=0 ppid=2005 pid=2070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-legacy-multi" subj=kernel key=(null)
[   15.477709] audit: type=1327 audit(1588425702.985:4): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
[   15.479410] audit: type=1325 audit(1588425702.987:5): table=filter family=2 entries=6
[   21.147817] kauditd_printk_skb: 59 callbacks suppressed
[   21.147818] audit: type=1325 audit(1588425708.659:25): table=filter family=2 entries=23
[   21.147822] audit: type=1300 audit(1588425708.659:25): arch=c000003e syscall=54 success=yes exit=0 a0=5 a1=0 a2=40 a3=fba9c0 items=0 ppid=2005 pid=2128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-legacy-multi" subj=kernel key=(null)
[   21.147824] audit: type=1327 audit(1588425708.659:25): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
[   21.148984] audit: type=1325 audit(1588425708.660:26): table=filter family=2 entries=22
[   21.148987] audit: type=1300 audit(1588425708.660:26): arch=c000003e syscall=54 success=yes exit=0 a0=5 a1=0 a2=40 a3=bbe7a0 items=0 ppid=2005 pid=2129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-legacy-multi" subj=kernel key=(null)
[   21.148989] audit: type=1327 audit(1588425708.660:26): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
[   32.447478] dockerd (2010) used greatest stack depth: 12224 bytes left
[   57.240294] audit: type=1325 audit(1588425744.760:27): table=nat family=2 entries=11
[   57.240298] audit: type=1300 audit(1588425744.760:27): arch=c000003e syscall=54 success=yes exit=0 a0=5 a1=0 a2=40 a3=1ab8740 items=0 ppid=2189 pid=2220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-legacy-multi" subj=kernel key=(null)
[   57.240301] audit: type=1327 audit(1588425744.760:27): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4400505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
[   57.241395] audit: type=1325 audit(1588425744.761:28): table=nat family=2 entries=10
[   57.241398] audit: type=1300 audit(1588425744.761:28): arch=c000003e syscall=54 success=yes exit=0 a0=5 a1=0 a2=40 a3=78b570 items=0 ppid=2189 pid=2221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-legacy-multi" subj=kernel key=(null)
[   57.241400] audit: type=1327 audit(1588425744.761:28): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D44004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C0000002D2D647374003132372E302E302E302F38002D6A00444F434B4552
[   57.244618] audit: type=1325 audit(1588425744.764:29): table=nat family=2 entries=9
[   57.244622] audit: type=1300 audit(1588425744.764:29): arch=c000003e syscall=54 success=yes exit=0 a0=5 a1=0 a2=40 a3=2346ec0 items=0 ppid=2189 pid=2225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-legacy-multi" subj=kernel key=(null)
[   57.244624] audit: type=1327 audit(1588425744.764:29): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4600444F434B4552
[   57.245478] audit: type=1325 audit(1588425744.765:30): table=nat family=2 entries=8
[   69.598736] kauditd_printk_skb: 71 callbacks suppressed
[   69.598738] audit: type=1325 audit(1588425757.118:54): table=nat family=2 entries=9
[   69.598744] audit: type=1300 audit(1588425757.118:54): arch=c000003e syscall=54 success=yes exit=0 a0=5 a1=0 a2=40 a3=9b6ce0 items=0 ppid=2189 pid=2287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-legacy-multi" subj=kernel key=(null)
[   69.598748] audit: type=1327 audit(1588425757.118:54): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445
[   69.601257] audit: type=1325 audit(1588425757.121:55): table=nat family=2 entries=10
[   69.601383] audit: type=1300 audit(1588425757.121:55): arch=c000003e syscall=54 success=yes exit=0 a0=5 a1=0 a2=40 a3=a09830 items=0 ppid=2189 pid=2289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-legacy-multi" subj=kernel key=(null)
[   69.601579] audit: type=1327 audit(1588425757.121:55): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E
[   69.604259] audit: type=1325 audit(1588425757.124:56): table=filter family=2 entries=17
[   69.604367] audit: type=1300 audit(1588425757.124:56): arch=c000003e syscall=54 success=yes exit=0 a0=5 a1=0 a2=40 a3=13f6eb0 items=0 ppid=2189 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-legacy-multi" subj=kernel key=(null)
[   69.604571] audit: type=1327 audit(1588425757.124:56): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054
[   69.606280] audit: type=1325 audit(1588425757.126:57): table=filter family=2 entries=18
[   77.192078] systemd-fstab-generator[2403]: Ignoring "noauto" for root device
[   78.298358] systemd-fstab-generator[2630]: Ignoring "noauto" for root device
[   87.511339] kauditd_printk_skb: 26 callbacks suppressed
[   87.511340] audit: type=1325 audit(1588425775.032:66): table=nat family=2 entries=11
[   87.511426] audit: type=1300 audit(1588425775.032:66): arch=c000003e syscall=54 success=yes exit=0 a0=4 a1=0 a2=40 a3=d0f2f0 items=0 ppid=2675 pid=2806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-legacy-multi" subj=kernel key=(null)
[   87.511487] audit: type=1327 audit(1588425775.032:66): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174
[   87.515574] audit: type=1325 audit(1588425775.036:67): table=nat family=2 entries=13
[   87.515672] audit: type=1300 audit(1588425775.036:67): arch=c000003e syscall=54 success=yes exit=0 a0=4 a1=0 a2=40 a3=1790100 items=0 ppid=2675 pid=2812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-legacy-multi" subj=kernel key=(null)
[   87.515760] audit: type=1327 audit(1588425775.036:67): proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D7365742D786D61726B00307830303030383030302F30783030303038303030
[   87.517393] audit: type=1325 audit(1588425775.038:68): table=filter family=2 entries=23
[   87.517463] audit: type=1300 audit(1588425775.038:68): arch=c000003e syscall=54 success=yes exit=0 a0=4 a1=0 a2=40 a3=266e260 items=0 ppid=2675 pid=2817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-legacy-multi" subj=kernel key=(null)
[   87.517533] audit: type=1327 audit(1588425775.038:68): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572
[   87.520188] audit: type=1325 audit(1588425775.041:69): table=filter family=2 entries=25
[  125.533349] NFSD: Unable to end grace period: -110

medyagh commented 4 years ago

@djrollins I am curious whether this would help:

minikube start --driver=kvm2 --force-systemd=true

I have a feeling something is killing your apiserver.

Also, do you mind trying to see whether you have the same problem with the docker driver?

minikube delete
minikube start --driver=docker

Today we are releasing minikube v1.10.0; I recommend trying again with the latest version.
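For context, --force-systemd makes the container runtime inside the VM use the systemd cgroup driver instead of cgroupfs. To confirm which driver Docker actually ended up with, something like this should work once the VM is up (illustrative only, assuming the default profile):

minikube ssh "docker info 2>/dev/null | grep -i cgroup"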

deej-io commented 4 years ago

Hi @medyagh. Thank you for taking a look at this.

The docker driver works with no issues.

I also downloaded the latest version of minikube and tried the --force-systemd=true flag, but I still get launch issues:

minikube start --driver=kvm2 --force-systemd=true

😄  minikube v1.10.0 on Arch rolling
✨  Using the kvm2 driver based on user configuration
💾  Downloading driver docker-machine-driver-kvm2:
    docker-machine-driver-kvm2.sha256: 65 B / 65 B [-------] 100.00% ? p/s 0s
    docker-machine-driver-kvm2: 13.88 MiB / 13.88 MiB 100.00% 9.52 MiB p/s 2
💿  Downloading VM boot image ...
    minikube-v1.10.0.iso.sha256: 65 B / 65 B [-------------] 100.00% ? p/s 0s
    minikube-v1.10.0.iso: 174.99 MiB / 174.99 MiB [] 100.00% 26.66 MiB p/s 7s
👍  Starting control plane node minikube in cluster minikube
💾  Downloading Kubernetes v1.18.1 preload ...
    preloaded-images-k8s-v3-v1.18.1-docker-overlay2-amd64.tar.lz4: 525.47 MiB
🔥  Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.18.1 on Docker 19.03.8 ...
💥  initialization failed, will try again: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.1:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [192.168.39.236 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [192.168.39.236 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.

Here is one example how you may list all Kubernetes containers running in docker:
    - 'docker ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'docker logs CONTAINERID'
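Since the kubelet runs inside the minikube VM, the checks above have to be executed there. From the host, an illustrative sequence (assuming the default profile) would be:

minikube ssh "sudo systemctl status kubelet"
minikube ssh "sudo journalctl -xeu kubelet | tail -n 50"
minikube ssh "docker ps -a | grep kube | grep -v pause"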

stderr:
W0512 19:09:52.827791 2656 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0512 19:09:56.307424 2656 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0512 19:09:56.308684 2656 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

💣  Error starting cluster: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.1:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.

Here is one example how you may list all Kubernetes containers running in docker:
    - 'docker ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'docker logs CONTAINERID'

stderr:
W0512 19:19:15.048292 5880 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0512 19:19:16.556720 5880 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0512 19:19:16.557723 5880 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
👉  https://github.com/kubernetes/minikube/issues/new/choose

❌  [NONE_KUBELET] failed to start node startup failed: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.1:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.

Here is one example how you may list all Kubernetes containers running in docker:
    - 'docker ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'docker logs CONTAINERID'

stderr:
W0512 19:19:15.048292 5880 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0512 19:19:16.556720 5880 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0512 19:19:16.557723 5880 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

💡  Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
⁉️   Related issue: https://github.com/kubernetes/minikube/issues/4172
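Putting that suggestion together with the kvm2 driver, the retry would look like this (a sketch of the suggested workaround, not yet verified here):

minikube delete
minikube start --driver=kvm2 --extra-config=kubelet.cgroup-driver=systemd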

minikube logs

==> Docker <== -- Logs begin at Tue 2020-05-12 19:08:57 UTC, end at Tue 2020-05-12 20:24:14 UTC. -- May 12 19:09:46 minikube dockerd[2388]: time="2020-05-12T19:09:46.545568802Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device" May 12 19:09:46 minikube dockerd[2388]: time="2020-05-12T19:09:46.545678747Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device" May 12 19:09:46 minikube dockerd[2388]: time="2020-05-12T19:09:46.545760077Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device" May 12 19:09:46 minikube dockerd[2388]: time="2020-05-12T19:09:46.546035067Z" level=info msg="Loading containers: start." May 12 19:09:49 minikube dockerd[2388]: time="2020-05-12T19:09:49.126981148Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 12 19:09:50 minikube dockerd[2388]: time="2020-05-12T19:09:50.689913233Z" level=info msg="Loading containers: done." May 12 19:09:51 minikube dockerd[2388]: time="2020-05-12T19:09:51.092986657Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8 May 12 19:09:51 minikube dockerd[2388]: time="2020-05-12T19:09:51.093056522Z" level=info msg="Daemon has completed initialization" May 12 19:09:51 minikube systemd[1]: Started Docker Application Container Engine. May 12 19:09:51 minikube dockerd[2388]: time="2020-05-12T19:09:51.828245859Z" level=info msg="API listen on /var/run/docker.sock" May 12 19:09:51 minikube dockerd[2388]: time="2020-05-12T19:09:51.828309753Z" level=info msg="API listen on [::]:2376" May 12 19:10:46 minikube dockerd[2388]: time="2020-05-12T19:10:46.500258353Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b4dbd808cd92c50bc7439ff59304cdd7542661e2b3bb1f81dee0006ce32fc9d2/shim.sock" debug=false pid=3505 May 12 19:10:55 minikube dockerd[2388]: time="2020-05-12T19:10:55.532995888Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fd3bfaf58d94f74e547194b2fde8676928109ce2f71a38309df0f64968f8a2a4/shim.sock" debug=false pid=3587 May 12 19:11:01 minikube dockerd[2388]: time="2020-05-12T19:11:01.525102451Z" level=error msg="Handler for GET /containers/fd3bfaf58d94f74e547194b2fde8676928109ce2f71a38309df0f64968f8a2a4/json returned error: write unix /var/run/docker.sock->@: write: broken pipe" May 12 19:11:01 minikube dockerd[2388]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11) May 12 19:11:04 minikube dockerd[2388]: time="2020-05-12T19:11:04.220642569Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2d6ff8819b100304220cdebf505e9f64d547079fff6ebd05c131d1859b73ba33/shim.sock" debug=false pid=3643 May 12 19:11:13 minikube dockerd[2388]: time="2020-05-12T19:11:13.021201919Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/08e5132a22b3d568f1a7d6ad9ddaab0072638a61eb6b5eb5a7c3e9cbf60be104/shim.sock" debug=false pid=3856 May 12 19:11:29 minikube dockerd[2388]: time="2020-05-12T19:11:29.570527847Z" level=info msg="shim reaped" id=08e5132a22b3d568f1a7d6ad9ddaab0072638a61eb6b5eb5a7c3e9cbf60be104 May 12 19:11:29 minikube dockerd[2388]: time="2020-05-12T19:11:29.581619862Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete" May 12 19:11:40 minikube dockerd[2388]: 
time="2020-05-12T19:11:40.734021120Z" level=error msg="Handler for GET /containers/e57baec238b26c907d82d689a02df8eebce2bd7d89d41d77c27cd8abc45de611/json returned error: write unix /var/run/docker.sock->@: write: broken pipe" May 12 19:11:40 minikube dockerd[2388]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11) May 12 19:12:06 minikube dockerd[2388]: time="2020-05-12T19:12:06.194158286Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/eda6d03f53fa129f1f38b692cd8c021dfd34b5d9a54e010dec2e7841ac9461dd/shim.sock" debug=false pid=4880 May 12 19:12:06 minikube dockerd[2388]: time="2020-05-12T19:12:06.817675241Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3730c003eabc4e382d127019c237f50f879d23c5a475fb7d509df1062446b194/shim.sock" debug=false pid=4921 May 12 19:12:10 minikube dockerd[2388]: time="2020-05-12T19:12:10.413667427Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0611f4010117054abe8ce50a1e0c84593d516175e22dbdc8e7955a492003c0c8/shim.sock" debug=false pid=5003 May 12 19:12:12 minikube dockerd[2388]: time="2020-05-12T19:12:12.547489394Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/68d545fc1c09efee01da1f82485721d97140498d4fb5a45fcc75355f05702785/shim.sock" debug=false pid=5111 May 12 19:12:23 minikube dockerd[2388]: time="2020-05-12T19:12:23.467473876Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fe58b35561b5ecb46475b219da45d0bfd2997149bdfaaea85061ba18e835a519/shim.sock" debug=false pid=5248 May 12 19:13:49 minikube dockerd[2388]: time="2020-05-12T19:13:49.823033471Z" level=info msg="shim reaped" id=eda6d03f53fa129f1f38b692cd8c021dfd34b5d9a54e010dec2e7841ac9461dd May 12 19:13:49 minikube dockerd[2388]: time="2020-05-12T19:13:49.833335016Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete" May 12 19:14:13 minikube dockerd[2388]: time="2020-05-12T19:14:13.094187335Z" level=info msg="shim reaped" id=fe58b35561b5ecb46475b219da45d0bfd2997149bdfaaea85061ba18e835a519 May 12 19:14:13 minikube dockerd[2388]: time="2020-05-12T19:14:13.104578843Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete" May 12 19:15:20 minikube dockerd[2388]: time="2020-05-12T19:15:20.141822345Z" level=info msg="shim reaped" id=68d545fc1c09efee01da1f82485721d97140498d4fb5a45fcc75355f05702785 May 12 19:15:20 minikube dockerd[2388]: time="2020-05-12T19:15:20.152087242Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete" May 12 19:16:34 minikube dockerd[2388]: time="2020-05-12T19:16:34.510236830Z" level=info msg="shim reaped" id=0611f4010117054abe8ce50a1e0c84593d516175e22dbdc8e7955a492003c0c8 May 12 19:16:34 minikube dockerd[2388]: time="2020-05-12T19:16:34.520469455Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete" May 12 19:17:14 minikube dockerd[2388]: time="2020-05-12T19:17:14.197389958Z" level=info msg="shim reaped" id=3730c003eabc4e382d127019c237f50f879d23c5a475fb7d509df1062446b194 May 12 19:17:14 minikube dockerd[2388]: time="2020-05-12T19:17:14.207618790Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete" May 12 19:17:55 minikube dockerd[2388]: 
time="2020-05-12T19:17:55.183301407Z" level=info msg="shim reaped" id=2d6ff8819b100304220cdebf505e9f64d547079fff6ebd05c131d1859b73ba33 May 12 19:17:55 minikube dockerd[2388]: time="2020-05-12T19:17:55.193625628Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete" May 12 19:18:21 minikube dockerd[2388]: time="2020-05-12T19:18:21.370582015Z" level=info msg="shim reaped" id=b4dbd808cd92c50bc7439ff59304cdd7542661e2b3bb1f81dee0006ce32fc9d2 May 12 19:18:21 minikube dockerd[2388]: time="2020-05-12T19:18:21.380929360Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete" May 12 19:18:41 minikube dockerd[2388]: time="2020-05-12T19:18:41.311648960Z" level=info msg="shim reaped" id=fd3bfaf58d94f74e547194b2fde8676928109ce2f71a38309df0f64968f8a2a4 May 12 19:18:41 minikube dockerd[2388]: time="2020-05-12T19:18:41.321735778Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete" May 12 19:20:03 minikube dockerd[2388]: time="2020-05-12T19:20:03.179720359Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3cfbc83cef166850ab2f58b68b4e228d40a6219a595ad74e8fea14b52f5712d5/shim.sock" debug=false pid=6784 May 12 19:20:11 minikube dockerd[2388]: time="2020-05-12T19:20:11.952008842Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/bd94ebdd85dee2bf02605d7317b147939088159c95c21c2674f064b2363f59ab/shim.sock" debug=false pid=6837 May 12 19:20:15 minikube dockerd[2388]: time="2020-05-12T19:20:15.000024082Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c733620d71d07006a6424951d79b412bb7549175be65b913e22991b3531f2ad0/shim.sock" debug=false pid=7000 May 12 19:20:15 minikube dockerd[2388]: time="2020-05-12T19:20:15.046696772Z" level=error msg="Handler for GET /containers/bd94ebdd85dee2bf02605d7317b147939088159c95c21c2674f064b2363f59ab/json returned error: write unix /var/run/docker.sock->@: write: broken pipe" May 12 19:20:15 minikube dockerd[2388]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11) May 12 19:20:17 minikube dockerd[2388]: time="2020-05-12T19:20:17.705908924Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4dcf508c3dcf888e5dba2f78d356b48c121f7d8355f94c410a84ab245c6c0a14/shim.sock" debug=false pid=7052 May 12 19:20:18 minikube dockerd[2388]: time="2020-05-12T19:20:18.732809311Z" level=info msg="shim reaped" id=bd94ebdd85dee2bf02605d7317b147939088159c95c21c2674f064b2363f59ab May 12 19:20:18 minikube dockerd[2388]: time="2020-05-12T19:20:18.743215131Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete" May 12 19:20:45 minikube dockerd[2388]: time="2020-05-12T19:20:45.200217924Z" level=error msg="4ede6bfb72a05c4a6e87f6868989ba6e036a5ef3f59d5cd9cc8fbb975ac081bd cleanup: failed to delete container from containerd: no such container" May 12 19:20:47 minikube dockerd[2388]: time="2020-05-12T19:20:47.610010110Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b1639bb1b94223a13572fd9a8d3971e1ddde007e2e8abeac16916b2812ea17e7/shim.sock" debug=false pid=7653 May 12 19:20:48 minikube dockerd[2388]: time="2020-05-12T19:20:48.383552398Z" level=info msg="shim containerd-shim started" 
address="/containerd-shim/moby/4589f55c1a61a53bdde6c8a073861b8e16ce589ab36050ba93ba4f76665dd2eb/shim.sock" debug=false pid=7692 May 12 19:20:50 minikube dockerd[2388]: time="2020-05-12T19:20:50.430946805Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/72ccfe6bc01d43b172fdb0fe2602d8ff6797c4a77780b5cade8e2480b7df12bc/shim.sock" debug=false pid=7868 May 12 19:20:51 minikube dockerd[2388]: time="2020-05-12T19:20:51.919459275Z" level=error msg="Handler for GET /containers/4589f55c1a61a53bdde6c8a073861b8e16ce589ab36050ba93ba4f76665dd2eb/json returned error: write unix /var/run/docker.sock->@: write: broken pipe" May 12 19:20:51 minikube dockerd[2388]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11) May 12 19:21:11 minikube dockerd[2388]: time="2020-05-12T19:21:11.453711074Z" level=info msg="shim reaped" id=72ccfe6bc01d43b172fdb0fe2602d8ff6797c4a77780b5cade8e2480b7df12bc May 12 19:21:11 minikube dockerd[2388]: time="2020-05-12T19:21:11.464000418Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" May 12 19:21:14 minikube dockerd[2388]: time="2020-05-12T19:21:14.582445007Z" level=error msg="Handler for GET /containers/72ccfe6bc01d43b172fdb0fe2602d8ff6797c4a77780b5cade8e2480b7df12bc/json returned error: write unix /var/run/docker.sock->@: write: broken pipe" May 12 19:21:14 minikube dockerd[2388]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)

==> container status <==
CONTAINER       IMAGE           CREATED             STATE     NAME                      ATTEMPT   POD ID
b1639bb1b9422   6c9320041a7b5   About an hour ago   Running   kube-scheduler            0         4dcf508c3dcf8
4589f55c1a61a   d1ccdd18e6ed8   About an hour ago   Running   kube-controller-manager   0         c733620d71d07
72ccfe6bc01d4   a595af0107f98   About an hour ago   Exited    kube-apiserver            0         3cfbc83cef166
4ede6bfb72a05   303ce5db0e90d   About an hour ago   Created   etcd                      0         bd94ebdd85dee
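This table is the crux of the dump: etcd never got past Created, and kube-apiserver has already Exited. The generic recipe from the kubeadm error message applies directly; a sketch, using the truncated container IDs from the table above:

# From inside the VM, list the control-plane containers and read the dead ones' logs.
minikube ssh
docker ps -a | grep kube | grep -v pause
docker logs 72ccfe6bc01d   # the exited kube-apiserver (its output also appears below)
docker logs 4ede6bfb72a0   # etcd; a container stuck in Created has no logs, matching the empty etcd section below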

==> describe nodes <==
E0512 21:24:14.206796 17130 logs.go:178] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:

stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"

==> dmesg <==
[May12 19:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.025094] Decoding supported only on Scalable MCA processors.
[ +2.548743] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +0.547588] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
[ +0.002497] systemd-fstab-generator[1143]: Ignoring "noauto" for root device
[ +0.005881] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
[ +0.000002] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[ +0.993796] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[May12 19:09] vboxguest: loading out-of-tree module taints kernel.
[ +0.003267] vboxguest: PCI device not found, probably running on physical hardware.
[ +3.688908] systemd-fstab-generator[2006]: Ignoring "noauto" for root device
[ +0.084174] systemd-fstab-generator[2016]: Ignoring "noauto" for root device
[ +14.457236] systemd-fstab-generator[2210]: Ignoring "noauto" for root device
[ +15.346051] kauditd_printk_skb: 65 callbacks suppressed
[ +6.717910] systemd-fstab-generator[2377]: Ignoring "noauto" for root device
[ +3.634444] kauditd_printk_skb: 107 callbacks suppressed
[ +5.483658] systemd-fstab-generator[2595]: Ignoring "noauto" for root device
[ +1.207715] systemd-fstab-generator[2803]: Ignoring "noauto" for root device
[May12 19:10] kauditd_printk_skb: 107 callbacks suppressed
[ +56.096301] NFSD: Unable to end grace period: -110
[May12 19:19] systemd-fstab-generator[6020]: Ignoring "noauto" for root device

==> etcd [4ede6bfb72a0] <==

==> kernel <==
20:24:14 up 1:15, 0 users, load average: 0.21, 0.22, 0.25
Linux minikube 4.19.107 #1 SMP Mon May 11 14:51:04 PDT 2020 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2019.02.10"

==> kube-apiserver [72ccfe6bc01d] <==
Flag --insecure-port has been deprecated, This flag will be removed in a future version.
I0512 19:20:50.590725 1 server.go:656] external host was not specified, using 192.168.39.236
I0512 19:20:50.590995 1 server.go:153] Version: v1.18.1
I0512 19:20:51.398743 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0512 19:20:51.398801 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0512 19:20:51.399738 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0512 19:20:51.399776 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0512 19:20:51.401286 1 client.go:361] parsed scheme: "endpoint"
I0512 19:20:51.401358 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]
W0512 19:20:51.401578 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I0512 19:20:52.399236 1 client.go:361] parsed scheme: "endpoint"
I0512 19:20:52.399293 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]
W0512 19:20:52.399656 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0512 19:20:52.401903 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0512 19:20:53.400060 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0512 19:20:54.157920 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0512 19:20:54.758522 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0512 19:20:56.229453 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0512 19:20:57.193226 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0512 19:20:59.664697 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0512 19:21:01.666702 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0512 19:21:05.234763 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0512 19:21:07.270222 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
panic: context deadline exceeded

goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/registry/customresourcedefinition.NewREST(0xc0002e4d90, 0x50e5040, 0xc00015eb40, 0xc00015ed68)
	/workspace/anago-v1.18.1-beta.0.38+49aac775931dd1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/registry/customresourcedefinition/etcd.go:56 +0x3e7
k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/apiserver.completedConfig.New(0xc0002198c0, 0xc0002aa008, 0x51a38a0, 0x77427d8, 0x10, 0x0, 0x0)
	/workspace/anago-v1.18.1-beta.0.38+49aac775931dd1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/apiserver/apiserver.go:145 +0x14ef
k8s.io/kubernetes/cmd/kube-apiserver/app.createAPIExtensionsServer(0xc0002aa000, 0x51a38a0, 0x77427d8, 0x0, 0x50e4c00, 0xc0001105e0)
	/workspace/anago-v1.18.1-beta.0.38+49aac775931dd1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/apiextensions.go:102 +0x59
k8s.io/kubernetes/cmd/kube-apiserver/app.CreateServerChain(0xc00051d600, 0xc0001d8ba0, 0x4559d51, 0xc, 0xc0009b1c48)
	/workspace/anago-v1.18.1-beta.0.38+49aac775931dd1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/server.go:186 +0x2b8
k8s.io/kubernetes/cmd/kube-apiserver/app.Run(0xc00051d600, 0xc0001d8ba0, 0x0, 0x0)
	/workspace/anago-v1.18.1-beta.0.38+49aac775931dd1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/server.go:155 +0x101
k8s.io/kubernetes/cmd/kube-apiserver/app.NewAPIServerCommand.func1(0xc0000cd680, 0xc0006971e0, 0x0, 0x1a, 0x0, 0x0)
	/workspace/anago-v1.18.1-beta.0.38+49aac775931dd1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/server.go:122 +0x104
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc0000cd680, 0xc00004c1d0, 0x1a, 0x1b, 0xc0000cd680, 0xc00004c1d0)
	/workspace/anago-v1.18.1-beta.0.38+49aac775931dd1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:826 +0x460
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc0000cd680, 0x160e5e2970f54de7, 0x7724600, 0xc00006a750)
	/workspace/anago-v1.18.1-beta.0.38+49aac775931dd1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:914 +0x2fb
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
	/workspace/anago-v1.18.1-beta.0.38+49aac775931dd1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:864
main.main()
	_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-apiserver/apiserver.go:43 +0xcd
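So the apiserver panic is downstream damage rather than the root cause: with the etcd container stuck in Created, every dial to 127.0.0.1:2379 above is refused, and after roughly fifteen seconds of retries the apiserver gives up with `context deadline exceeded` while wiring up its CRD storage. The question the rest of the logs have to answer is why etcd (and the kubelet that should be running it) never came up.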

==> kube-controller-manager [4589f55c1a61] <== E0512 20:20:59.842495 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:03.478994 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:06.046677 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:10.050771 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:13.478817 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:16.926150 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:20.551689 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:24.843236 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:28.806542 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:32.924087 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:36.050867 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:39.825471 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get 
https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:43.654725 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:46.442510 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:50.627082 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:54.547982 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:58.720883 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:02.535346 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:06.873408 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:11.195995 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:13.607874 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:15.637804 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:19.502026 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:21.515771 1 leaderelection.go:320] error retrieving resource lock 
kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:25.237115 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:29.334163 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:31.614863 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:34.270586 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:36.626119 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:40.149377 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:43.334855 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:45.977980 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:48.717196 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:52.131962 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:55.419829 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:58.260486 1 
leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:01.246123 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:03.890003 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:07.132564 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:10.394724 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:14.127198 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:16.817262 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:21.182680 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:25.131093 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:27.416795 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:29.912032 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:32.516376 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: 
connection refused E0512 20:23:35.253846 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:39.222031 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:41.768047 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:45.401072 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:48.264683 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:50.459693 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:54.556942 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:56.931641 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:24:01.273027 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:24:03.558935 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:24:05.595981 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:24:09.353967 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial 
tcp 192.168.39.236:8443: connect: connection refused E0512 20:24:13.266204 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused

==> kube-scheduler [b1639bb1b942] <== E0512 20:19:05.647225 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:19:19.337765 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:19:20.968273 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:19:27.409878 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:19:27.670998 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:19:36.597874 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:19:38.757404 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:19:44.729456 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:19:50.927294 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:19:58.244434 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:20:10.857905 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:20:12.322198 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Node: 
Get https://control-plane.minikube.internal:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:20:16.105334 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:20:17.736846 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:20:18.065712 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:20:19.076513 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:20:22.834996 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:20:35.547453 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:20:43.988468 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:20:48.766115 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:20:50.546117 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:20:54.808796 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:00.714038 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:02.482442 1 reflector.go:178] 
k8s.io/client-go/informers/factory.go:135: Failed to list v1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:09.072815 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:11.837046 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:21.767145 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:22.548846 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:34.082777 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:41.395737 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:43.568710 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:43.820049 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:45.407292 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:55.957862 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:00.423490 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:05.365916 1 reflector.go:178] 
k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused
E0512 20:22:10.221550 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused
E0512 20:22:15.907561 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused
E0512 20:22:16.149820 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused
E0512 20:22:19.339476 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused
E0512 20:22:23.410785 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused
E0512 20:22:33.140545 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused
E0512 20:22:42.147722 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused
E0512 20:22:47.476680 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused
E0512 20:22:49.684092 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused
E0512 20:22:50.995786 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused
E0512 20:22:51.655045 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused
E0512 20:22:58.927357 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused
E0512 20:23:03.181384 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused
E0512 20:23:08.177571 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused
E0512 20:23:26.056188 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused
E0512 20:23:26.197130 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused
E0512 20:23:37.015536 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused
E0512 20:23:39.048145 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused
E0512 20:23:40.092269 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused
E0512 20:23:42.023689 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused
E0512 20:23:52.387052 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused
E0512 20:24:00.828939 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused
E0512 20:24:05.718486 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused
E0512 20:24:07.789335 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused

==> kubelet <==
-- Logs begin at Tue 2020-05-12 19:08:57 UTC, end at Tue 2020-05-12 20:24:14 UTC. --
May 12 20:24:11 minikube kubelet[26616]: E0512 20:24:11.056462 26616 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.RuntimeClass: Get https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused
May 12 20:24:11 minikube kubelet[26616]: I0512 20:24:11.135052 26616 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 12 20:24:11 minikube kubelet[26616]: E0512 20:24:11.138016 26616 kubelet.go:2267] node "minikube" not found
May 12 20:24:11 minikube kubelet[26616]: E0512 20:24:11.160978 26616 kubelet.go:1845] skipping pod synchronization - container runtime status check may not have completed yet
May 12 20:24:11 minikube kubelet[26616]: I0512 20:24:11.161629 26616 kubelet_node_status.go:70] Attempting to register node minikube
May 12 20:24:11 minikube kubelet[26616]: E0512 20:24:11.161887 26616 kubelet_node_status.go:92] Unable to register node "minikube" with API server: Post https://control-plane.minikube.internal:8443/api/v1/nodes: dial tcp 192.168.39.236:8443: connect: connection refused
May 12 20:24:11 minikube kubelet[26616]: I0512 20:24:11.165625 26616 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 12 20:24:11 minikube kubelet[26616]: F0512 20:24:11.183710 26616 kubelet.go:1383] Failed to start ContainerManager failed to build map of initial containers from runtime: no PodsandBox found with Id 'bd94ebdd85dee2bf02605d7317b147939088159c95c21c2674f064b2363f59ab'
May 12 20:24:11 minikube systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
May 12 20:24:11 minikube systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 12 20:24:11 minikube systemd[1]: kubelet.service: Service RestartSec=600ms expired, scheduling restart.
May 12 20:24:11 minikube systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 526.
May 12 20:24:11 minikube systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
May 12 20:24:11 minikube systemd[1]: Started kubelet: The Kubernetes Node Agent.
May 12 20:24:11 minikube kubelet[26843]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 12 20:24:11 minikube kubelet[26843]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 12 20:24:11 minikube kubelet[26843]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 12 20:24:11 minikube kubelet[26843]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 12 20:24:11 minikube kubelet[26843]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 12 20:24:11 minikube kubelet[26843]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 12 20:24:11 minikube kubelet[26843]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 12 20:24:11 minikube kubelet[26843]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 12 20:24:11 minikube kubelet[26843]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 12 20:24:11 minikube kubelet[26843]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 12 20:24:11 minikube kubelet[26843]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 12 20:24:11 minikube kubelet[26843]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 12 20:24:11 minikube kubelet[26843]: I0512 20:24:11.907767 26843 server.go:417] Version: v1.18.1
May 12 20:24:11 minikube kubelet[26843]: I0512 20:24:11.909087 26843 plugins.go:100] No cloud provider specified.
May 12 20:24:11 minikube kubelet[26843]: I0512 20:24:11.909189 26843 server.go:837] Client rotation is on, will bootstrap in background
May 12 20:24:11 minikube kubelet[26843]: I0512 20:24:11.913787 26843 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 12 20:24:11 minikube kubelet[26843]: I0512 20:24:11.975566 26843 server.go:646] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
May 12 20:24:11 minikube kubelet[26843]: I0512 20:24:11.975998 26843 container_manager_linux.go:266] container manager verified user specified cgroup-root exists: []
May 12 20:24:11 minikube kubelet[26843]: I0512 20:24:11.976020 26843 container_manager_linux.go:271] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
May 12 20:24:11 minikube kubelet[26843]: I0512 20:24:11.976100 26843 topology_manager.go:126] [topologymanager] Creating topology manager with none policy
May 12 20:24:11 minikube kubelet[26843]: I0512 20:24:11.976112 26843 container_manager_linux.go:301] [topologymanager] Initializing Topology Manager with none policy
May 12 20:24:11 minikube kubelet[26843]: I0512 20:24:11.976120 26843 container_manager_linux.go:306] Creating device plugin manager: true
May 12 20:24:11 minikube kubelet[26843]: I0512 20:24:11.976189 26843 client.go:75] Connecting to docker on unix:///var/run/docker.sock
May 12 20:24:11 minikube kubelet[26843]: I0512 20:24:11.976211 26843 client.go:92] Start docker client with request timeout=2m0s
May 12 20:24:11 minikube kubelet[26843]: W0512 20:24:11.981580 26843 docker_service.go:561] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
May 12 20:24:11 minikube kubelet[26843]: I0512 20:24:11.981612 26843 docker_service.go:238] Hairpin mode set to "hairpin-veth"
May 12 20:24:11 minikube kubelet[26843]: I0512 20:24:11.988524 26843 docker_service.go:253] Docker cri networking managed by kubernetes.io/no-op
May 12 20:24:11 minikube kubelet[26843]: I0512 20:24:11.996264 26843 docker_service.go:258] Docker Info: &{ID:PRY2:GDW4:3K6F:5HJE:6XX3:LC46:UGLY:HPWO:OZGR:PQHC:FDGX:ZYTW Containers:7 ContainersRunning:5 ContainersPaused:0 ContainersStopped:2 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem ] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:63 SystemTime:2020-05-12T20:24:11.989231961Z LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:4.19.107 OperatingSystem:Buildroot 2019.02.10 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc000757180 NCPU:2 MemTotal:3841503232 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:minikube Labels:[provider=kvm2] ExperimentalBuild:false ServerVersion:19.03.8 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster: Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:[]}
May 12 20:24:11 minikube kubelet[26843]: I0512 20:24:11.996478 26843 docker_service.go:271] Setting cgroupDriver to systemd
May 12 20:24:12 minikube kubelet[26843]: I0512 20:24:12.004224 26843 remote_runtime.go:59] parsed scheme: ""
May 12 20:24:12 minikube kubelet[26843]: I0512 20:24:12.004238 26843 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
May 12 20:24:12 minikube kubelet[26843]: I0512 20:24:12.004262 26843 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0 }] }
May 12 20:24:12 minikube kubelet[26843]: I0512 20:24:12.004269 26843 clientconn.go:933] ClientConn switching balancer to "pick_first"
May 12 20:24:12 minikube kubelet[26843]: I0512 20:24:12.004296 26843 remote_image.go:50] parsed scheme: ""
May 12 20:24:12 minikube kubelet[26843]: I0512 20:24:12.004301 26843 remote_image.go:50] scheme "" not registered, fallback to default scheme
May 12 20:24:12 minikube kubelet[26843]: I0512 20:24:12.004308 26843 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0 }] }
May 12 20:24:12 minikube kubelet[26843]: I0512 20:24:12.004313 26843 clientconn.go:933] ClientConn switching balancer to "pick_first"
May 12 20:24:12 minikube kubelet[26843]: I0512 20:24:12.004336 26843 kubelet.go:292] Adding pod path: /etc/kubernetes/manifests
May 12 20:24:12 minikube kubelet[26843]: I0512 20:24:12.004352 26843 kubelet.go:317] Watching apiserver
May 12 20:24:12 minikube kubelet[26843]: E0512 20:24:12.007553 26843 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:526: Failed to list v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused
May 12 20:24:12 minikube kubelet[26843]: E0512 20:24:12.007956 26843 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:517: Failed to list v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused
May 12 20:24:12 minikube kubelet[26843]: E0512 20:24:12.008072 26843 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused
May 12 20:24:12 minikube kubelet[26843]: E0512 20:24:12.008194 26843 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:526: Failed to list v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused
May 12 20:24:12 minikube kubelet[26843]: E0512 20:24:12.008910 26843 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:517: Failed to list v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused
May 12 20:24:12 minikube kubelet[26843]: E0512 20:24:12.010699 26843 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused
May 12 20:24:14 minikube kubelet[26843]: E0512 20:24:14.245905 26843 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:526: Failed to list *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused

❗ unable to fetch logs for: describe nodes

I am happy using the docker or virtualbox drivers for now, but I'm really interested in understanding what is causing these errors.
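
For anyone digging into this later: the fatal kubelet error in the logs above ("no PodsandBox found with Id ...") suggests the kubelet lost track of a container the runtime had already created. One way to look at the crashed control-plane containers would be something like the following (a sketch, assuming the kvm2 VM is still running and the cluster uses the Docker runtime):

minikube ssh
# inside the VM: list all kube-* containers, including exited ones,
# to see which control-plane component is crash-looping
sudo docker ps -a --filter name=kube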

Many thanks, Daniel

medyagh commented 4 years ago

Thank you @djrollins, I am glad that the other drivers work. I will keep this issue open so we can find the root cause and fix it for all other users who might have the same problem as you. If anyone else has this issue, please comment and let us know. Thank you again.

tstromberg commented 4 years ago

I looked at the most recent logs and still don't have a clue what may be going on here. The logs look OK to me.

medyagh commented 4 years ago

Regrettably, there isn't enough information in this issue to make it actionable, and enough time has passed that it is likely difficult to replicate. If you can provide any additional details, such as:

- the exact minikube start command line used, preferably with --alsologtostderr -v=8 added
- the full output of the command
- the full output of minikube logs

please feel free to do so at any point. Thank you for sharing your experience!
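
One way to capture all of the details above in a single pass (a sketch; the file names are arbitrary):

minikube start --driver=kvm2 --alsologtostderr -v=8 2>&1 | tee minikube-start.log
minikube logs > minikube-logs.txt 2>&1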

Meanwhile, have you tried out the newest Docker driver with the latest version of minikube? You could try:

minikube delete
minikube start --driver=docker

For more information on the Docker driver, check out: https://minikube.sigs.k8s.io/docs/drivers/docker/
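
As a quick sanity check after recreating the cluster with the Docker driver, something like this should confirm the control plane is reachable (a sketch; assumes kubectl is pointed at the new minikube context):

minikube status
kubectl get nodes
kubectl get componentstatuses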

deej-io commented 4 years ago

Just a quick update as everything is working fine now:

I recently noticed that CPU virtualisation was disabled in my BIOS for some reason. I never thought to check it, as I just assumed libvirtd wouldn't even start if that were the case. Anyway, I switched it back on and everything seems to run fine now.
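
For anyone else who hits this: hardware virtualisation can be verified from the host before suspecting the driver (a sketch for Linux hosts; virt-host-validate ships with libvirt):

# prints a non-zero count if VT-x/AMD-V is exposed to the OS
grep -cE 'vmx|svm' /proc/cpuinfo
# libvirt's own host checks, including whether KVM is usable
virt-host-validate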

Thank you for all of your help, but it seems like this one might have been user error!