deej-io closed this issue 4 years ago
Do you mind checking if upgrading to minikube v1.9.2 fixes the issue? You may need to run minikube delete first to delete the corrupt state.
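For reference, the full reset would look something like the following sketch (the kvm2 driver flag is assumed from the later output in this thread; drop it if you rely on autodetection):

# remove the existing, possibly corrupt cluster state
minikube delete

# recreate the cluster with the upgraded minikube binary
minikube start --driver=kvm2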
Hi. Thank you for getting back to me.
I've updated to v1.9.2 from the Arch repos and continue to see similar issues:
😄 minikube v1.9.2 on Arch rolling
✨ Using the kvm2 driver based on user configuration
👍 Starting control plane node m01 in cluster minikube
🔥 Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
🐳 Preparing Kubernetes v1.18.0 on Docker 19.03.8 ...
💥 initialization failed, will try again: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [192.168.39.139 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [192.168.39.139 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0429 21:38:22.966334 2476 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0429 21:38:25.933310 2476 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0429 21:38:25.934837 2476 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
Output from minikube logs:
==> Docker <==
-- Logs begin at Wed 2020-04-29 21:37:30 UTC, end at Wed 2020-04-29 21:45:47 UTC. --
Apr 29 21:38:20 minikube dockerd[2185]: time="2020-04-29T21:38:20.032249257Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Apr 29 21:38:21 minikube dockerd[2185]: time="2020-04-29T21:38:21.003563886Z" level=info msg="Loading containers: done."
Apr 29 21:38:21 minikube dockerd[2185]: time="2020-04-29T21:38:21.373856798Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8
Apr 29 21:38:21 minikube dockerd[2185]: time="2020-04-29T21:38:21.374510194Z" level=info msg="Daemon has completed initialization"
Apr 29 21:38:22 minikube dockerd[2185]: time="2020-04-29T21:38:22.000875316Z" level=info msg="API listen on /var/run/docker.sock"
Apr 29 21:38:22 minikube dockerd[2185]: time="2020-04-29T21:38:22.000954600Z" level=info msg="API listen on [::]:2376"
Apr 29 21:38:22 minikube systemd[1]: Started Docker Application Container Engine.
Apr 29 21:38:56 minikube dockerd[2185]: time="2020-04-29T21:38:56.882619532Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d27eed3852e26bc27f012e4fe6a0df5d841141655077a1609f8b76eb70f03790/shim.sock" debug=false pid=3266
Apr 29 21:39:16 minikube dockerd[2185]: time="2020-04-29T21:39:16.029890016Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f3506c834b0c37904fea7c8e8c7ce3f0fed094ecd8ac4883b6d82ce4d1822f1b/shim.sock" debug=false pid=3400
Apr 29 21:39:18 minikube dockerd[2185]: time="2020-04-29T21:39:18.314195590Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/cd2efbed5d4bb27d299d549940543730c4775532d7070a06de02170a1ee4613a/shim.sock" debug=false pid=3491
Apr 29 21:39:22 minikube dockerd[2185]: time="2020-04-29T21:39:22.296970438Z" level=error msg="Handler for GET /containers/cd2efbed5d4bb27d299d549940543730c4775532d7070a06de02170a1ee4613a/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
Apr 29 21:39:22 minikube dockerd[2185]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
Apr 29 21:39:25 minikube dockerd[2185]: time="2020-04-29T21:39:25.105630811Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8ee497d2b10b3941912e23e1a6bab72abc78e3f6936b3dcf8c4fd2fb27d76772/shim.sock" debug=false pid=3645
Apr 29 21:39:30 minikube dockerd[2185]: time="2020-04-29T21:39:30.782668033Z" level=info msg="shim reaped" id=8ee497d2b10b3941912e23e1a6bab72abc78e3f6936b3dcf8c4fd2fb27d76772
Apr 29 21:39:30 minikube dockerd[2185]: time="2020-04-29T21:39:30.792782319Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 29 21:39:49 minikube dockerd[2185]: time="2020-04-29T21:39:49.755644231Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d26e7a73f510eb54e60bee2fe30cccff8245bd840122b7b807a21f264204f37a/shim.sock" debug=false pid=4063
Apr 29 21:39:52 minikube dockerd[2185]: time="2020-04-29T21:39:52.483488839Z" level=error msg="2b771beddd8e5eeb0d58bd1332f0511c7a45e63327308af768ae4507da595d41 cleanup: failed to delete container from containerd: no such container"
Apr 29 21:39:55 minikube dockerd[2185]: time="2020-04-29T21:39:55.048875629Z" level=error msg="Handler for GET /containers/2b771beddd8e5eeb0d58bd1332f0511c7a45e63327308af768ae4507da595d41/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
Apr 29 21:39:55 minikube dockerd[2185]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
Apr 29 21:39:55 minikube dockerd[2185]: time="2020-04-29T21:39:55.441860431Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/149a2cf94f0c8bd63a58f827dc6622219b433c75b983db4adc5f6712afa99bfc/shim.sock" debug=false pid=4244
Apr 29 21:39:56 minikube dockerd[2185]: time="2020-04-29T21:39:56.965894916Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/473b0f93a5d0f6691c92d273c1dfd4fc16e078ae5a4c6b7a7e6251bf22e437bc/shim.sock" debug=false pid=4285
Apr 29 21:39:59 minikube dockerd[2185]: time="2020-04-29T21:39:59.510175211Z" level=error msg="Handler for GET /containers/473b0f93a5d0f6691c92d273c1dfd4fc16e078ae5a4c6b7a7e6251bf22e437bc/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
Apr 29 21:39:59 minikube dockerd[2185]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
Apr 29 21:40:28 minikube dockerd[2185]: time="2020-04-29T21:40:28.683504104Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/88d4ad283f4055c927860ba9fc7b91a8b6778226f0d0b229ad2fa1707c553c76/shim.sock" debug=false pid=4572
Apr 29 21:41:13 minikube dockerd[2185]: time="2020-04-29T21:41:13.869682213Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ab31b84140009419ec8e77461d6ac001b479c33e97cc4543741ea96703a5083b/shim.sock" debug=false pid=4764
Apr 29 21:41:30 minikube dockerd[2185]: time="2020-04-29T21:41:30.213957521Z" level=info msg="shim reaped" id=149a2cf94f0c8bd63a58f827dc6622219b433c75b983db4adc5f6712afa99bfc
Apr 29 21:41:30 minikube dockerd[2185]: time="2020-04-29T21:41:30.224131564Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 29 21:41:53 minikube dockerd[2185]: time="2020-04-29T21:41:53.247178838Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/eb2b6fe00a3cf30755188441d439826ee0337aab9884fc81e5035222dd35c02b/shim.sock" debug=false pid=4933
Apr 29 21:42:35 minikube dockerd[2185]: time="2020-04-29T21:42:35.686013096Z" level=info msg="shim reaped" id=eb2b6fe00a3cf30755188441d439826ee0337aab9884fc81e5035222dd35c02b
Apr 29 21:42:35 minikube dockerd[2185]: time="2020-04-29T21:42:35.696279427Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 29 21:42:42 minikube dockerd[2185]: time="2020-04-29T21:42:42.562157609Z" level=info msg="shim reaped" id=ab31b84140009419ec8e77461d6ac001b479c33e97cc4543741ea96703a5083b
Apr 29 21:42:42 minikube dockerd[2185]: time="2020-04-29T21:42:42.572515150Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 29 21:42:50 minikube dockerd[2185]: time="2020-04-29T21:42:50.468443422Z" level=info msg="shim reaped" id=88d4ad283f4055c927860ba9fc7b91a8b6778226f0d0b229ad2fa1707c553c76
Apr 29 21:42:50 minikube dockerd[2185]: time="2020-04-29T21:42:50.479143197Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 29 21:42:58 minikube dockerd[2185]: time="2020-04-29T21:42:58.778845340Z" level=info msg="shim reaped" id=473b0f93a5d0f6691c92d273c1dfd4fc16e078ae5a4c6b7a7e6251bf22e437bc
Apr 29 21:42:58 minikube dockerd[2185]: time="2020-04-29T21:42:58.789213067Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 29 21:43:04 minikube dockerd[2185]: time="2020-04-29T21:43:04.405690537Z" level=info msg="shim reaped" id=d26e7a73f510eb54e60bee2fe30cccff8245bd840122b7b807a21f264204f37a
Apr 29 21:43:04 minikube dockerd[2185]: time="2020-04-29T21:43:04.415998191Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 29 21:43:10 minikube dockerd[2185]: time="2020-04-29T21:43:10.877286978Z" level=info msg="shim reaped" id=cd2efbed5d4bb27d299d549940543730c4775532d7070a06de02170a1ee4613a
Apr 29 21:43:10 minikube dockerd[2185]: time="2020-04-29T21:43:10.887623845Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 29 21:43:18 minikube dockerd[2185]: time="2020-04-29T21:43:18.153162291Z" level=info msg="shim reaped" id=f3506c834b0c37904fea7c8e8c7ce3f0fed094ecd8ac4883b6d82ce4d1822f1b
Apr 29 21:43:18 minikube dockerd[2185]: time="2020-04-29T21:43:18.164114950Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 29 21:43:26 minikube dockerd[2185]: time="2020-04-29T21:43:26.314603335Z" level=info msg="shim reaped" id=d27eed3852e26bc27f012e4fe6a0df5d841141655077a1609f8b76eb70f03790
Apr 29 21:43:26 minikube dockerd[2185]: time="2020-04-29T21:43:26.324865651Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 29 21:44:16 minikube dockerd[2185]: time="2020-04-29T21:44:16.526555961Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4bfa6d5a58609acd232038688e71f53cf3d7c28231f95166f4b0338dbeb9383d/shim.sock" debug=false pid=6173
Apr 29 21:44:27 minikube dockerd[2185]: time="2020-04-29T21:44:27.926354816Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9cf1006f17e332ea6f64f79d3089aae50dd317b3b12ec919fd5104ed0dd8d4f4/shim.sock" debug=false pid=6229
Apr 29 21:44:33 minikube dockerd[2185]: time="2020-04-29T21:44:33.100569137Z" level=error msg="Handler for GET /containers/9cf1006f17e332ea6f64f79d3089aae50dd317b3b12ec919fd5104ed0dd8d4f4/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
Apr 29 21:44:33 minikube dockerd[2185]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
Apr 29 21:44:45 minikube dockerd[2185]: time="2020-04-29T21:44:45.154395787Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f35698cb2f225898c6a43e6d25cc03b008397d5c4042a2bde60a8556d893508e/shim.sock" debug=false pid=6320
Apr 29 21:44:51 minikube dockerd[2185]: time="2020-04-29T21:44:51.736488589Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/af248d1f5a56ddbefca8c59e65cc00d2edb97e8b4afdeea51a26aa3b71b44ddc/shim.sock" debug=false pid=6582
Apr 29 21:44:57 minikube dockerd[2185]: time="2020-04-29T21:44:57.131671814Z" level=info msg="shim reaped" id=af248d1f5a56ddbefca8c59e65cc00d2edb97e8b4afdeea51a26aa3b71b44ddc
Apr 29 21:44:57 minikube dockerd[2185]: time="2020-04-29T21:44:57.142006210Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 29 21:45:24 minikube dockerd[2185]: time="2020-04-29T21:45:24.116963232Z" level=error msg="262a9968ddd5133568f3ca6471bc11d73893fd8f2177c1f15513d58f82d743f7 cleanup: failed to delete container from containerd: no such container"
Apr 29 21:45:24 minikube dockerd[2185]: time="2020-04-29T21:45:24.117342780Z" level=error msg="Handler for GET /containers/262a9968ddd5133568f3ca6471bc11d73893fd8f2177c1f15513d58f82d743f7/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
Apr 29 21:45:24 minikube dockerd[2185]: time="2020-04-29T21:45:24.117610233Z" level=error msg="Handler for GET /containers/262a9968ddd5133568f3ca6471bc11d73893fd8f2177c1f15513d58f82d743f7/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
Apr 29 21:45:24 minikube dockerd[2185]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
Apr 29 21:45:24 minikube dockerd[2185]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
Apr 29 21:45:27 minikube dockerd[2185]: time="2020-04-29T21:45:27.888867151Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3bae131c1002d8d3caa725d3df069ed2c130aa4f2b1c6d7e90b3e35fccc25c92/shim.sock" debug=false pid=7255
Apr 29 21:45:28 minikube dockerd[2185]: time="2020-04-29T21:45:28.630357700Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/87019d1a86c3db5ada4e759e25de1c7c20d2c467766f870552ab82119cb76f06/shim.sock" debug=false pid=7322
Apr 29 21:45:30 minikube dockerd[2185]: time="2020-04-29T21:45:30.547154983Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/42687ce0cff5f57ebd8ec0830b0ddfff6b99271427d0513474f3bb8f124a2787/shim.sock" debug=false pid=7385
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
87019d1a86c3d a31f78c7c8ce1 53 seconds ago Running kube-scheduler 0 4bfa6d5a58609
262a9968ddd51 74060cea7f704 53 seconds ago Created kube-apiserver 0 af248d1f5a56d
42687ce0cff5f d3e55153f52fb 53 seconds ago Running kube-controller-manager 0 9cf1006f17e33
3bae131c1002d 303ce5db0e90d 53 seconds ago Running etcd 0 f35698cb2f225
==> describe nodes <==
E0429 22:45:47.697019 4756 logs.go:178] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
==> dmesg <==
[Apr29 21:37] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.023979] Decoding supported only on Scalable MCA processors.
[ +2.289089] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +0.488721] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
[ +0.002237] systemd-fstab-generator[1142]: Ignoring "noauto" for root device
[ +0.004554] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
[ +0.000002] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[ +0.906074] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[ +2.656639] vboxguest: loading out-of-tree module taints kernel.
[ +0.003772] vboxguest: PCI device not found, probably running on physical hardware.
[ +3.359109] systemd-fstab-generator[1993]: Ignoring "noauto" for root device
[ +11.194772] kauditd_printk_skb: 59 callbacks suppressed
[Apr29 21:38] systemd-fstab-generator[2399]: Ignoring "noauto" for root device
[ +0.977346] systemd-fstab-generator[2620]: Ignoring "noauto" for root device
[ +9.166605] kauditd_printk_skb: 107 callbacks suppressed
[Apr29 21:39] NFSD: Unable to end grace period: -110
[Apr29 21:43] systemd-fstab-generator[5536]: Ignoring "noauto" for root device
==> etcd [3bae131c1002] <==
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-04-29 21:45:27.980386 I | etcdmain: etcd Version: 3.4.3
2020-04-29 21:45:27.980418 I | etcdmain: Git SHA: 3cf2f69b5
2020-04-29 21:45:27.980424 I | etcdmain: Go Version: go1.12.12
2020-04-29 21:45:27.980429 I | etcdmain: Go OS/Arch: linux/amd64
2020-04-29 21:45:27.980441 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-04-29 21:45:27.980516 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-04-29 21:45:27.981148 I | embed: name = minikube
2020-04-29 21:45:27.981166 I | embed: data dir = /var/lib/minikube/etcd
2020-04-29 21:45:27.981172 I | embed: member dir = /var/lib/minikube/etcd/member
2020-04-29 21:45:27.981177 I | embed: heartbeat = 100ms
2020-04-29 21:45:27.981181 I | embed: election = 1000ms
2020-04-29 21:45:27.981186 I | embed: snapshot count = 10000
2020-04-29 21:45:27.981194 I | embed: advertise client URLs = https://192.168.39.139:2379
2020-04-29 21:45:35.203748 W | wal: sync duration of 3.296108838s, expected less than 1s
2020-04-29 21:45:41.641844 I | etcdserver: starting member 3cbdd43a8949db2d in cluster 4af51893258ecb17
raft2020/04/29 21:45:41 INFO: 3cbdd43a8949db2d switched to configuration voters=()
raft2020/04/29 21:45:41 INFO: 3cbdd43a8949db2d became follower at term 0
raft2020/04/29 21:45:41 INFO: newRaft 3cbdd43a8949db2d [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
raft2020/04/29 21:45:41 INFO: 3cbdd43a8949db2d became follower at term 1
raft2020/04/29 21:45:41 INFO: 3cbdd43a8949db2d switched to configuration voters=(4376887760750500653)
==> kernel <==
21:45:47 up 8 min, 0 users, load average: 3.61, 2.64, 1.37
Linux minikube 4.19.107 #1 SMP Thu Mar 26 11:33:10 PDT 2020 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2019.02.10"
==> kube-apiserver [262a9968ddd5] <==
==> kube-controller-manager [42687ce0cff5] <==
I0429 21:45:30.942120 1 serving.go:313] Generated self-signed cert in-memory
I0429 21:45:31.114556 1 controllermanager.go:161] Version: v1.18.0
I0429 21:45:31.115287 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0429 21:45:31.115368 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0429 21:45:31.115701 1 secure_serving.go:178] Serving securely on 127.0.0.1:10257
I0429 21:45:31.115780 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0429 21:45:31.116425 1 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
I0429 21:45:31.116501 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-controller-manager...
E0429 21:45:31.116828 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://192.168.39.139:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:31.117060 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://192.168.39.139:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:33.446972 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://192.168.39.139:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:36.749658 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://192.168.39.139:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:39.224461 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://192.168.39.139:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:41.679726 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://192.168.39.139:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:45.743630 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://192.168.39.139:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.139:8443: connect: connection refused
==> kube-scheduler [87019d1a86c3] <==
I0429 21:45:28.761875 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0429 21:45:28.762077 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0429 21:45:29.013855 1 serving.go:313] Generated self-signed cert in-memory
W0429 21:45:29.538886 1 authentication.go:297] Error looking up in-cluster authentication configuration: Get https://192.168.39.139:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 192.168.39.139:8443: connect: connection refused
W0429 21:45:29.538968 1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
W0429 21:45:29.539008 1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0429 21:45:29.543353 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0429 21:45:29.543372 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
W0429 21:45:29.544477 1 authorization.go:47] Authorization is disabled
W0429 21:45:29.544488 1 authentication.go:40] Authentication is disabled
I0429 21:45:29.544495 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0429 21:45:29.545488 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0429 21:45:29.545502 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
E0429 21:45:29.546066 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: Get https://192.168.39.139:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
I0429 21:45:29.546241 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0429 21:45:29.546314 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0429 21:45:29.546585 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: Get https://192.168.39.139:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:29.546787 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: Get https://192.168.39.139:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:29.548135 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: Get https://192.168.39.139:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:29.548310 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: Get https://192.168.39.139:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:29.548327 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: Get https://192.168.39.139:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:29.548485 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: Get https://192.168.39.139:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:29.548659 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: Get https://192.168.39.139:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:29.548672 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: Get https://192.168.39.139:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:29.548927 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: Get https://192.168.39.139:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:29.549723 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: Get https://192.168.39.139:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:29.550887 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: Get https://192.168.39.139:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:29.551920 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: Get https://192.168.39.139:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:29.552946 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: Get https://192.168.39.139:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:29.554145 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: Get https://192.168.39.139:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:29.555161 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: Get https://192.168.39.139:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:29.556292 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: Get https://192.168.39.139:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:29.557331 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: Get https://192.168.39.139:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:31.172906 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: Get https://192.168.39.139:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:31.369496 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: Get https://192.168.39.139:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:31.659989 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: Get https://192.168.39.139:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:31.674589 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: Get https://192.168.39.139:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:31.816289 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: Get https://192.168.39.139:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:31.850700 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: Get https://192.168.39.139:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:32.070636 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: Get https://192.168.39.139:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:32.243678 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: Get https://192.168.39.139:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:32.395271 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: Get https://192.168.39.139:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:35.589262 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: Get https://192.168.39.139:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:35.863575 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: Get https://192.168.39.139:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:36.208376 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: Get https://192.168.39.139:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:37.124814 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: Get https://192.168.39.139:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:37.180465 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: Get https://192.168.39.139:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:37.296715 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: Get https://192.168.39.139:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:37.515381 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: Get https://192.168.39.139:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:37.649807 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: Get https://192.168.39.139:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:38.177332 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: Get https://192.168.39.139:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:42.390921 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: Get https://192.168.39.139:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:45.050057 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: Get https://192.168.39.139:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:45.099093 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: Get https://192.168.39.139:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:47.030297 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: Get https://192.168.39.139:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
E0429 21:45:47.537357 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: Get https://192.168.39.139:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
==> kubelet <==
-- Logs begin at Wed 2020-04-29 21:37:30 UTC, end at Wed 2020-04-29 21:45:47 UTC. --
Apr 29 21:45:42 minikube kubelet[6914]: E0429 21:45:42.528081 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:42 minikube kubelet[6914]: E0429 21:45:42.628255 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:42 minikube kubelet[6914]: E0429 21:45:42.728462 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:42 minikube kubelet[6914]: E0429 21:45:42.828623 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:42 minikube kubelet[6914]: E0429 21:45:42.928800 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:43 minikube kubelet[6914]: E0429 21:45:43.028948 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:43 minikube kubelet[6914]: E0429 21:45:43.129073 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:43 minikube kubelet[6914]: E0429 21:45:43.229210 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:43 minikube kubelet[6914]: E0429 21:45:43.329363 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:43 minikube kubelet[6914]: E0429 21:45:43.429542 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:43 minikube kubelet[6914]: E0429 21:45:43.529736 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:43 minikube kubelet[6914]: E0429 21:45:43.629915 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:43 minikube kubelet[6914]: E0429 21:45:43.730080 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:43 minikube kubelet[6914]: E0429 21:45:43.830273 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:43 minikube kubelet[6914]: E0429 21:45:43.930427 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:44 minikube kubelet[6914]: E0429 21:45:44.030570 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:44 minikube kubelet[6914]: E0429 21:45:44.130722 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:44 minikube kubelet[6914]: E0429 21:45:44.230874 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:44 minikube kubelet[6914]: E0429 21:45:44.331021 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:44 minikube kubelet[6914]: E0429 21:45:44.431190 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:44 minikube kubelet[6914]: E0429 21:45:44.531672 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:44 minikube kubelet[6914]: E0429 21:45:44.631827 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:44 minikube kubelet[6914]: E0429 21:45:44.732369 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:44 minikube kubelet[6914]: E0429 21:45:44.832940 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:44 minikube kubelet[6914]: E0429 21:45:44.933195 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:45 minikube kubelet[6914]: E0429 21:45:45.033499 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:45 minikube kubelet[6914]: E0429 21:45:45.133744 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:45 minikube kubelet[6914]: E0429 21:45:45.234171 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:45 minikube kubelet[6914]: E0429 21:45:45.334463 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:45 minikube kubelet[6914]: E0429 21:45:45.435155 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:45 minikube kubelet[6914]: E0429 21:45:45.535975 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:45 minikube kubelet[6914]: E0429 21:45:45.636296 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:45 minikube kubelet[6914]: E0429 21:45:45.737524 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:45 minikube kubelet[6914]: E0429 21:45:45.839174 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:45 minikube kubelet[6914]: E0429 21:45:45.939771 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:46 minikube kubelet[6914]: E0429 21:45:46.040004 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:46 minikube kubelet[6914]: E0429 21:45:46.140234 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:46 minikube kubelet[6914]: E0429 21:45:46.240485 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:46 minikube kubelet[6914]: E0429 21:45:46.282869 6914 controller.go:136] failed to ensure node lease exists, will retry in 7s, error: Get https://192.168.39.139:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube?timeout=10s: dial tcp 192.168.39.139:8443: connect: connection refused
Apr 29 21:45:46 minikube kubelet[6914]: E0429 21:45:46.340718 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:46 minikube kubelet[6914]: E0429 21:45:46.440981 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:46 minikube kubelet[6914]: E0429 21:45:46.542258 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:46 minikube kubelet[6914]: E0429 21:45:46.642491 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:46 minikube kubelet[6914]: E0429 21:45:46.742434 6914 event.go:269] Unable to write event: 'Post https://192.168.39.139:8443/api/v1/namespaces/default/events: dial tcp 192.168.39.139:8443: connect: connection refused' (may retry after sleeping)
Apr 29 21:45:46 minikube kubelet[6914]: E0429 21:45:46.743154 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:46 minikube kubelet[6914]: E0429 21:45:46.820328 6914 eviction_manager.go:255] eviction manager: failed to get summary stats: failed to get node info: node "minikube" not found
Apr 29 21:45:46 minikube kubelet[6914]: E0429 21:45:46.843329 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:46 minikube kubelet[6914]: E0429 21:45:46.849836 6914 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.39.139:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.39.139:8443: connect: connection refused
Apr 29 21:45:46 minikube kubelet[6914]: I0429 21:45:46.937354 6914 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
Apr 29 21:45:46 minikube kubelet[6914]: E0429 21:45:46.943489 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:46 minikube kubelet[6914]: I0429 21:45:46.957486 6914 kubelet_node_status.go:70] Attempting to register node minikube
Apr 29 21:45:46 minikube kubelet[6914]: E0429 21:45:46.957798 6914 kubelet_node_status.go:92] Unable to register node "minikube" with API server: Post https://192.168.39.139:8443/api/v1/nodes: dial tcp 192.168.39.139:8443: connect: connection refused
Apr 29 21:45:47 minikube kubelet[6914]: E0429 21:45:47.043675 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:47 minikube kubelet[6914]: E0429 21:45:47.143976 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:47 minikube kubelet[6914]: E0429 21:45:47.244178 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:47 minikube kubelet[6914]: E0429 21:45:47.344365 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:47 minikube kubelet[6914]: E0429 21:45:47.444524 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:47 minikube kubelet[6914]: E0429 21:45:47.544672 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:47 minikube kubelet[6914]: E0429 21:45:47.644821 6914 kubelet.go:2267] node "minikube" not found
Apr 29 21:45:47 minikube kubelet[6914]: E0429 21:45:47.744946 6914 kubelet.go:2267] node "minikube" not found
❗ unable to fetch logs for: describe nodes
The apiserver is being hung up somehow. These errors in your Docker log are very unusual:
Apr 29 21:45:24 minikube dockerd[2185]: time="2020-04-29T21:45:24.116963232Z" level=error msg="262a9968ddd5133568f3ca6471bc11d73893fd8f2177c1f15513d58f82d743f7 cleanup: failed to delete container from containerd: no such container"
Apr 29 21:45:24 minikube dockerd[2185]: time="2020-04-29T21:45:24.117342780Z" level=error msg="Handler for GET /containers/262a9968ddd5133568f3ca6471bc11d73893fd8f2177c1f15513d58f82d743f7/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
Apr 29 21:45:24 minikube dockerd[2185]: time="2020-04-29T21:45:24.117610233Z" level=error msg="Handler for GET /containers/262a9968ddd5133568f3ca6471bc11d73893fd8f2177c1f15513d58f82d743f7/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
Apr 29 21:45:24 minikube dockerd[2185]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
Apr 29 21:45:24 minikube dockerd[2185]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
Apr 29 21:45:27 minikube dockerd[2185]: time="2020-04-29T21:45:27.888867151Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3bae131c1002d8d3caa725d3df069ed2c130aa4f2b1c6d7e90b3e35fccc25c92/shim.sock" debug=false pid=7255
I see the broken pipe error referenced at https://github.com/moby/moby/issues/22221, but have no idea how or why it would be triggered in this environment. Most references to this error seem to involve overloaded or slow VMs. The load within your VM seems OK: 21:45:47 up 8 min, 0 users, load average: 3.61, 2.64, 1.37
It isn't guaranteed, but I wonder if minikube delete and minikube start get past this error at all.
Any chance I can get you to try that, and report back with the output of:
minikube ssh "sudo dmesg"
Thanks!
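If it's easier, capturing that output to a file to attach here is just a redirect (the filename is only illustrative):

# grab the kernel ring buffer from inside the minikube VM
minikube ssh "sudo dmesg" > minikube-dmesg.txt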
Thanks for all of your help so far. I've honestly no idea what I'm looking for, so I appreciate you trawling through all of these logs.
I ran minikube delete and minikube start --driver=kvm2 again and got the same broken pipe error. The VM is still running after the error, so I managed to run dmesg on it.
Out of curiosity, I also ran an Ubuntu image via virt-manager and CentOS via Vagrant (with --provider libvirt) to see whether all VMs ran slowly, but both seemed reasonably responsive. I'm not sure if that means anything at all here though.
Please see the relevant logs below:
minikube start --driver=kvm2
😄 minikube v1.9.2 on Arch rolling
✨ Using the kvm2 driver based on user configuration
👍 Starting control plane node m01 in cluster minikube
🔥 Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
🐳 Preparing Kubernetes v1.18.0 on Docker 19.03.8 ...
💥 initialization failed, will try again: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [192.168.39.51 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [192.168.39.51 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0502 13:22:45.461081 2479 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0502 13:22:49.097975 2479 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0502 13:22:49.107964 2479 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
Output from minikube logs:
==> Docker <==
-- Logs begin at Sat 2020-05-02 13:21:30 UTC, end at Sat 2020-05-02 13:26:52 UTC. --
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.031289760Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.031305765Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.031320085Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.031333865Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.031406731Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.031455515Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.031970880Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032010927Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032055502Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032071767Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032086769Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032101400Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032115150Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032129810Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032143650Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032157631Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032171661Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032206756Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032225305Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032240037Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032253757Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032402145Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032466212Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032481044Z" level=info msg="containerd successfully booted in 0.004083s"
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.040508210Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.040651617Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.040761914Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.040859563Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.041676323Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.041701718Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.041720758Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.041733807Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 02 13:22:24 minikube dockerd[2189]: time="2020-05-02T13:22:24.752699727Z" level=warning msg="Your kernel does not support cgroup blkio weight"
May 02 13:22:24 minikube dockerd[2189]: time="2020-05-02T13:22:24.753331067Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
May 02 13:22:24 minikube dockerd[2189]: time="2020-05-02T13:22:24.753425760Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
May 02 13:22:24 minikube dockerd[2189]: time="2020-05-02T13:22:24.753501856Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
May 02 13:22:24 minikube dockerd[2189]: time="2020-05-02T13:22:24.753576982Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
May 02 13:22:24 minikube dockerd[2189]: time="2020-05-02T13:22:24.753649673Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
May 02 13:22:24 minikube dockerd[2189]: time="2020-05-02T13:22:24.753951476Z" level=info msg="Loading containers: start."
May 02 13:22:37 minikube dockerd[2189]: time="2020-05-02T13:22:37.116382805Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
May 02 13:22:41 minikube dockerd[2189]: time="2020-05-02T13:22:41.217990970Z" level=info msg="Loading containers: done."
May 02 13:22:43 minikube dockerd[2189]: time="2020-05-02T13:22:43.017991265Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8
May 02 13:22:43 minikube dockerd[2189]: time="2020-05-02T13:22:43.018501481Z" level=info msg="Daemon has completed initialization"
May 02 13:22:44 minikube dockerd[2189]: time="2020-05-02T13:22:44.376320083Z" level=info msg="API listen on /var/run/docker.sock"
May 02 13:22:44 minikube systemd[1]: Started Docker Application Container Engine.
May 02 13:22:44 minikube dockerd[2189]: time="2020-05-02T13:22:44.377061591Z" level=info msg="API listen on [::]:2376"
May 02 13:23:31 minikube dockerd[2189]: time="2020-05-02T13:23:31.469876190Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5e626516c30d06a8ac8952a320d0117d56666a53132649000c2b5ae666a5dc37/shim.sock" debug=false pid=3329
May 02 13:23:35 minikube dockerd[2189]: time="2020-05-02T13:23:35.364775831Z" level=error msg="Handler for GET /containers/5e626516c30d06a8ac8952a320d0117d56666a53132649000c2b5ae666a5dc37/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
May 02 13:23:35 minikube dockerd[2189]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
May 02 13:23:38 minikube dockerd[2189]: time="2020-05-02T13:23:38.179414191Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/add8015f706c306a27f8e23fb735d2d9f3d3c46f23ae7b8834f477ea8c00a781/shim.sock" debug=false pid=3369
May 02 13:23:58 minikube dockerd[2189]: time="2020-05-02T13:23:58.485098468Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ea4e2c9a7c4cb1692a885c79f21f01074f764cd85f05019cb53c7525845b0049/shim.sock" debug=false pid=3475
May 02 13:24:09 minikube dockerd[2189]: time="2020-05-02T13:24:09.414385232Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9abbbecba551cb426178312c873f36102b9ae444efd7542da490bdb3727384ca/shim.sock" debug=false pid=3691
May 02 13:24:21 minikube dockerd[2189]: time="2020-05-02T13:24:21.357659786Z" level=info msg="shim reaped" id=9abbbecba551cb426178312c873f36102b9ae444efd7542da490bdb3727384ca
May 02 13:24:21 minikube dockerd[2189]: time="2020-05-02T13:24:21.369113205Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 02 13:25:15 minikube dockerd[2189]: time="2020-05-02T13:25:15.686564107Z" level=info msg="shim reaped" id=ea4e2c9a7c4cb1692a885c79f21f01074f764cd85f05019cb53c7525845b0049
May 02 13:25:15 minikube dockerd[2189]: time="2020-05-02T13:25:15.696727714Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 02 13:25:40 minikube dockerd[2189]: time="2020-05-02T13:25:40.360936572Z" level=info msg="shim reaped" id=add8015f706c306a27f8e23fb735d2d9f3d3c46f23ae7b8834f477ea8c00a781
May 02 13:25:40 minikube dockerd[2189]: time="2020-05-02T13:25:40.371235877Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 02 13:26:01 minikube dockerd[2189]: time="2020-05-02T13:26:01.782519827Z" level=info msg="shim reaped" id=5e626516c30d06a8ac8952a320d0117d56666a53132649000c2b5ae666a5dc37
May 02 13:26:01 minikube dockerd[2189]: time="2020-05-02T13:26:01.792643419Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
==> container status <==
time="2020-05-02T13:26:54Z" level=fatal msg="failed to connect: failed to connect, make sure you are running as root and the runtime has been started: context deadline exceeded"
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6008a0593584 a31f78c7c8ce "kube-scheduler --au…" 2 minutes ago Created k8s_kube-scheduler_kube-scheduler-minikube_kube-system_5795d0c442cb997ff93c49feeb9f6386_0
==> describe nodes <==
E0502 14:26:54.272492 14824 logs.go:178] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
==> dmesg <==
[May 2 13:21] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.024014] Decoding supported only on Scalable MCA processors.
[ +2.395424] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +0.528220] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
[ +0.002202] systemd-fstab-generator[1142]: Ignoring "noauto" for root device
[ +0.005645] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
[ +0.000002] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[ +0.945739] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[ +2.883956] vboxguest: loading out-of-tree module taints kernel.
[ +0.002680] vboxguest: PCI device not found, probably running on physical hardware.
[ +2.976374] systemd-fstab-generator[1995]: Ignoring "noauto" for root device
[ +11.253354] kauditd_printk_skb: 59 callbacks suppressed
[May 2 13:22] kauditd_printk_skb: 71 callbacks suppressed
[ +7.593342] systemd-fstab-generator[2403]: Ignoring "noauto" for root device
[ +1.106280] systemd-fstab-generator[2630]: Ignoring "noauto" for root device
[ +9.212981] kauditd_printk_skb: 26 callbacks suppressed
[May 2 13:23] NFSD: Unable to end grace period: -110
==> kernel <==
13:26:54 up 5 min, 0 users, load average: 1.37, 1.67, 0.80
Linux minikube 4.19.107 #1 SMP Thu Mar 26 11:33:10 PDT 2020 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2019.02.10"
==> kube-scheduler [6008a0593584] <==
==> kubelet <==
-- Logs begin at Sat 2020-05-02 13:21:30 UTC, end at Sat 2020-05-02 13:26:54 UTC. --
May 02 13:24:35 minikube systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
May 02 13:24:35 minikube systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
May 02 13:24:35 minikube systemd[1]: Started kubelet: The Kubernetes Node Agent.
May 02 13:24:35 minikube kubelet[4082]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 02 13:24:35 minikube kubelet[4082]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 02 13:24:35 minikube kubelet[4082]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 02 13:24:35 minikube kubelet[4082]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 02 13:24:35 minikube kubelet[4082]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 02 13:24:35 minikube kubelet[4082]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 02 13:24:35 minikube kubelet[4082]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 02 13:24:35 minikube kubelet[4082]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 02 13:24:35 minikube kubelet[4082]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 02 13:24:35 minikube kubelet[4082]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 02 13:24:35 minikube kubelet[4082]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 02 13:24:35 minikube kubelet[4082]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 02 13:24:35 minikube kubelet[4082]: I0502 13:24:35.684976 4082 server.go:417] Version: v1.18.0
May 02 13:24:35 minikube kubelet[4082]: I0502 13:24:35.687123 4082 plugins.go:100] No cloud provider specified.
May 02 13:24:35 minikube kubelet[4082]: I0502 13:24:35.687251 4082 server.go:837] Client rotation is on, will bootstrap in background
May 02 13:24:35 minikube kubelet[4082]: I0502 13:24:35.690258 4082 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.559772 4082 server.go:646] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.559995 4082 container_manager_linux.go:266] container manager verified user specified cgroup-root exists: []
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.560010 4082 container_manager_linux.go:271] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.560411 4082 topology_manager.go:126] [topologymanager] Creating topology manager with none policy
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.560421 4082 container_manager_linux.go:301] [topologymanager] Initializing Topology Manager with none policy
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.560425 4082 container_manager_linux.go:306] Creating device plugin manager: true
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.560468 4082 client.go:75] Connecting to docker on unix:///var/run/docker.sock
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.560490 4082 client.go:92] Start docker client with request timeout=2m0s
May 02 13:24:41 minikube kubelet[4082]: W0502 13:24:41.567581 4082 docker_service.go:561] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.567611 4082 docker_service.go:238] Hairpin mode set to "hairpin-veth"
May 02 13:24:41 minikube kubelet[4082]: W0502 13:24:41.567718 4082 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.570212 4082 docker_service.go:253] Docker cri networking managed by kubernetes.io/no-op
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.577418 4082 docker_service.go:258] Docker Info: &{ID:P3PS:GBTV:37DI:PEWB:WAFB:ESMR:4BMG:BE3S:C2S3:BPWF:MR5Z:YPBO Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem <unknown>] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2020-05-02T13:24:41.570928951Z LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:4.19.107 OperatingSystem:Buildroot 2019.02.10 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0007703f0 NCPU:2 MemTotal:3840438272 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:minikube Labels:[provider=kvm2] ExperimentalBuild:false ServerVersion:19.03.8 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:[]}
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.577489 4082 docker_service.go:271] Setting cgroupDriver to systemd
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.583913 4082 remote_runtime.go:59] parsed scheme: ""
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.583935 4082 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.583968 4082 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock <nil> 0 <nil>}] <nil> <nil>}
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.583978 4082 clientconn.go:933] ClientConn switching balancer to "pick_first"
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.584018 4082 remote_image.go:50] parsed scheme: ""
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.584026 4082 remote_image.go:50] scheme "" not registered, fallback to default scheme
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.584038 4082 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock <nil> 0 <nil>}] <nil> <nil>}
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.584045 4082 clientconn.go:933] ClientConn switching balancer to "pick_first"
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.584073 4082 kubelet.go:292] Adding pod path: /etc/kubernetes/manifests
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.584092 4082 kubelet.go:317] Watching apiserver
May 02 13:24:41 minikube kubelet[4082]: E0502 13:24:41.590714 4082 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:526: Failed to list *v1.Node: Get https://192.168.39.51:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.39.51:8443: connect: connection refused
May 02 13:24:41 minikube kubelet[4082]: E0502 13:24:41.590873 4082 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:517: Failed to list *v1.Service: Get https://192.168.39.51:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.39.51:8443: connect: connection refused
May 02 13:24:41 minikube kubelet[4082]: E0502 13:24:41.591000 4082 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.39.51:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.39.51:8443: connect: connection refused
May 02 13:24:41 minikube kubelet[4082]: E0502 13:24:41.595438 4082 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:526: Failed to list *v1.Node: Get https://192.168.39.51:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.39.51:8443: connect: connection refused
May 02 13:24:41 minikube kubelet[4082]: E0502 13:24:41.595566 4082 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.39.51:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.39.51:8443: connect: connection refused
May 02 13:24:41 minikube kubelet[4082]: E0502 13:24:41.595674 4082 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:517: Failed to list *v1.Service: Get https://192.168.39.51:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.39.51:8443: connect: connection refused
May 02 13:24:43 minikube kubelet[4082]: E0502 13:24:43.323866 4082 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.39.51:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.39.51:8443: connect: connection refused
May 02 13:24:44 minikube kubelet[4082]: E0502 13:24:44.387713 4082 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:526: Failed to list *v1.Node: Get https://192.168.39.51:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.39.51:8443: connect: connection refused
May 02 13:24:44 minikube systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
May 02 13:24:44 minikube kubelet[4082]: E0502 13:24:44.683019 4082 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:517: Failed to list *v1.Service: Get https://192.168.39.51:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.39.51:8443: connect: connection refused
May 02 13:24:47 minikube kubelet[4082]: E0502 13:24:47.659246 4082 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.39.51:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.39.51:8443: connect: connection refused
May 02 13:24:47 minikube kubelet[4082]: E0502 13:24:47.820341 4082 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
May 02 13:24:47 minikube kubelet[4082]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
May 02 13:24:47 minikube kubelet[4082]: I0502 13:24:47.828446 4082 kuberuntime_manager.go:211] Container runtime docker initialized, version: 19.03.8, apiVersion: 1.40.0
May 02 13:24:47 minikube kubelet[4082]: I0502 13:24:47.828874 4082 server.go:1125] Started kubelet
May 02 13:24:47 minikube systemd[1]: kubelet.service: Succeeded.
May 02 13:24:47 minikube systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
❗ unable to fetch logs for: describe nodes
minikube logs (run a second time):
==> Docker <==
-- Logs begin at Sat 2020-05-02 13:21:30 UTC, end at Sat 2020-05-02 13:26:58 UTC. --
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.031289760Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.031305765Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.031320085Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.031333865Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.031406731Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.031455515Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.031970880Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032010927Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032055502Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032071767Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032086769Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032101400Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032115150Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032129810Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032143650Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032157631Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032171661Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032206756Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032225305Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032240037Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032253757Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032402145Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032466212Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.032481044Z" level=info msg="containerd successfully booted in 0.004083s"
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.040508210Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.040651617Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.040761914Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.040859563Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.041676323Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.041701718Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.041720758Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
May 02 13:22:00 minikube dockerd[2189]: time="2020-05-02T13:22:00.041733807Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 02 13:22:24 minikube dockerd[2189]: time="2020-05-02T13:22:24.752699727Z" level=warning msg="Your kernel does not support cgroup blkio weight"
May 02 13:22:24 minikube dockerd[2189]: time="2020-05-02T13:22:24.753331067Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
May 02 13:22:24 minikube dockerd[2189]: time="2020-05-02T13:22:24.753425760Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
May 02 13:22:24 minikube dockerd[2189]: time="2020-05-02T13:22:24.753501856Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
May 02 13:22:24 minikube dockerd[2189]: time="2020-05-02T13:22:24.753576982Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
May 02 13:22:24 minikube dockerd[2189]: time="2020-05-02T13:22:24.753649673Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
May 02 13:22:24 minikube dockerd[2189]: time="2020-05-02T13:22:24.753951476Z" level=info msg="Loading containers: start."
May 02 13:22:37 minikube dockerd[2189]: time="2020-05-02T13:22:37.116382805Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
May 02 13:22:41 minikube dockerd[2189]: time="2020-05-02T13:22:41.217990970Z" level=info msg="Loading containers: done."
May 02 13:22:43 minikube dockerd[2189]: time="2020-05-02T13:22:43.017991265Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8
May 02 13:22:43 minikube dockerd[2189]: time="2020-05-02T13:22:43.018501481Z" level=info msg="Daemon has completed initialization"
May 02 13:22:44 minikube dockerd[2189]: time="2020-05-02T13:22:44.376320083Z" level=info msg="API listen on /var/run/docker.sock"
May 02 13:22:44 minikube systemd[1]: Started Docker Application Container Engine.
May 02 13:22:44 minikube dockerd[2189]: time="2020-05-02T13:22:44.377061591Z" level=info msg="API listen on [::]:2376"
May 02 13:23:31 minikube dockerd[2189]: time="2020-05-02T13:23:31.469876190Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5e626516c30d06a8ac8952a320d0117d56666a53132649000c2b5ae666a5dc37/shim.sock" debug=false pid=3329
May 02 13:23:35 minikube dockerd[2189]: time="2020-05-02T13:23:35.364775831Z" level=error msg="Handler for GET /containers/5e626516c30d06a8ac8952a320d0117d56666a53132649000c2b5ae666a5dc37/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
May 02 13:23:35 minikube dockerd[2189]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
May 02 13:23:38 minikube dockerd[2189]: time="2020-05-02T13:23:38.179414191Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/add8015f706c306a27f8e23fb735d2d9f3d3c46f23ae7b8834f477ea8c00a781/shim.sock" debug=false pid=3369
May 02 13:23:58 minikube dockerd[2189]: time="2020-05-02T13:23:58.485098468Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ea4e2c9a7c4cb1692a885c79f21f01074f764cd85f05019cb53c7525845b0049/shim.sock" debug=false pid=3475
May 02 13:24:09 minikube dockerd[2189]: time="2020-05-02T13:24:09.414385232Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9abbbecba551cb426178312c873f36102b9ae444efd7542da490bdb3727384ca/shim.sock" debug=false pid=3691
May 02 13:24:21 minikube dockerd[2189]: time="2020-05-02T13:24:21.357659786Z" level=info msg="shim reaped" id=9abbbecba551cb426178312c873f36102b9ae444efd7542da490bdb3727384ca
May 02 13:24:21 minikube dockerd[2189]: time="2020-05-02T13:24:21.369113205Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 02 13:25:15 minikube dockerd[2189]: time="2020-05-02T13:25:15.686564107Z" level=info msg="shim reaped" id=ea4e2c9a7c4cb1692a885c79f21f01074f764cd85f05019cb53c7525845b0049
May 02 13:25:15 minikube dockerd[2189]: time="2020-05-02T13:25:15.696727714Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 02 13:25:40 minikube dockerd[2189]: time="2020-05-02T13:25:40.360936572Z" level=info msg="shim reaped" id=add8015f706c306a27f8e23fb735d2d9f3d3c46f23ae7b8834f477ea8c00a781
May 02 13:25:40 minikube dockerd[2189]: time="2020-05-02T13:25:40.371235877Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 02 13:26:01 minikube dockerd[2189]: time="2020-05-02T13:26:01.782519827Z" level=info msg="shim reaped" id=5e626516c30d06a8ac8952a320d0117d56666a53132649000c2b5ae666a5dc37
May 02 13:26:01 minikube dockerd[2189]: time="2020-05-02T13:26:01.792643419Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
==> container status <==
time="2020-05-02T13:27:00Z" level=fatal msg="failed to connect: failed to connect, make sure you are running as root and the runtime has been started: context deadline exceeded"
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6008a0593584 a31f78c7c8ce "kube-scheduler --au…" 2 minutes ago Created k8s_kube-scheduler_kube-scheduler-minikube_kube-system_5795d0c442cb997ff93c49feeb9f6386_0
==> describe nodes <==
E0502 14:27:00.197515 14870 logs.go:178] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
==> dmesg <==
[May 2 13:21] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.024014] Decoding supported only on Scalable MCA processors.
[ +2.395424] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +0.528220] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
[ +0.002202] systemd-fstab-generator[1142]: Ignoring "noauto" for root device
[ +0.005645] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
[ +0.000002] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[ +0.945739] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[ +2.883956] vboxguest: loading out-of-tree module taints kernel.
[ +0.002680] vboxguest: PCI device not found, probably running on physical hardware.
[ +2.976374] systemd-fstab-generator[1995]: Ignoring "noauto" for root device
[ +11.253354] kauditd_printk_skb: 59 callbacks suppressed
[May 2 13:22] kauditd_printk_skb: 71 callbacks suppressed
[ +7.593342] systemd-fstab-generator[2403]: Ignoring "noauto" for root device
[ +1.106280] systemd-fstab-generator[2630]: Ignoring "noauto" for root device
[ +9.212981] kauditd_printk_skb: 26 callbacks suppressed
[May 2 13:23] NFSD: Unable to end grace period: -110
==> kernel <==
13:27:00 up 5 min, 0 users, load average: 1.26, 1.65, 0.80
Linux minikube 4.19.107 #1 SMP Thu Mar 26 11:33:10 PDT 2020 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2019.02.10"
==> kube-scheduler [6008a0593584] <==
==> kubelet <==
-- Logs begin at Sat 2020-05-02 13:21:30 UTC, end at Sat 2020-05-02 13:27:00 UTC. --
May 02 13:24:35 minikube systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
May 02 13:24:35 minikube systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
May 02 13:24:35 minikube systemd[1]: Started kubelet: The Kubernetes Node Agent.
May 02 13:24:35 minikube kubelet[4082]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 02 13:24:35 minikube kubelet[4082]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 02 13:24:35 minikube kubelet[4082]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 02 13:24:35 minikube kubelet[4082]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 02 13:24:35 minikube kubelet[4082]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 02 13:24:35 minikube kubelet[4082]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 02 13:24:35 minikube kubelet[4082]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 02 13:24:35 minikube kubelet[4082]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 02 13:24:35 minikube kubelet[4082]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 02 13:24:35 minikube kubelet[4082]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 02 13:24:35 minikube kubelet[4082]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 02 13:24:35 minikube kubelet[4082]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 02 13:24:35 minikube kubelet[4082]: I0502 13:24:35.684976 4082 server.go:417] Version: v1.18.0
May 02 13:24:35 minikube kubelet[4082]: I0502 13:24:35.687123 4082 plugins.go:100] No cloud provider specified.
May 02 13:24:35 minikube kubelet[4082]: I0502 13:24:35.687251 4082 server.go:837] Client rotation is on, will bootstrap in background
May 02 13:24:35 minikube kubelet[4082]: I0502 13:24:35.690258 4082 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.559772 4082 server.go:646] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.559995 4082 container_manager_linux.go:266] container manager verified user specified cgroup-root exists: []
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.560010 4082 container_manager_linux.go:271] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.560411 4082 topology_manager.go:126] [topologymanager] Creating topology manager with none policy
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.560421 4082 container_manager_linux.go:301] [topologymanager] Initializing Topology Manager with none policy
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.560425 4082 container_manager_linux.go:306] Creating device plugin manager: true
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.560468 4082 client.go:75] Connecting to docker on unix:///var/run/docker.sock
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.560490 4082 client.go:92] Start docker client with request timeout=2m0s
May 02 13:24:41 minikube kubelet[4082]: W0502 13:24:41.567581 4082 docker_service.go:561] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.567611 4082 docker_service.go:238] Hairpin mode set to "hairpin-veth"
May 02 13:24:41 minikube kubelet[4082]: W0502 13:24:41.567718 4082 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.570212 4082 docker_service.go:253] Docker cri networking managed by kubernetes.io/no-op
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.577418 4082 docker_service.go:258] Docker Info: &{ID:P3PS:GBTV:37DI:PEWB:WAFB:ESMR:4BMG:BE3S:C2S3:BPWF:MR5Z:YPBO Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem <unknown>] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2020-05-02T13:24:41.570928951Z LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:4.19.107 OperatingSystem:Buildroot 2019.02.10 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0007703f0 NCPU:2 MemTotal:3840438272 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:minikube Labels:[provider=kvm2] ExperimentalBuild:false ServerVersion:19.03.8 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:[]}
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.577489 4082 docker_service.go:271] Setting cgroupDriver to systemd
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.583913 4082 remote_runtime.go:59] parsed scheme: ""
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.583935 4082 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.583968 4082 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock <nil> 0 <nil>}] <nil> <nil>}
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.583978 4082 clientconn.go:933] ClientConn switching balancer to "pick_first"
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.584018 4082 remote_image.go:50] parsed scheme: ""
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.584026 4082 remote_image.go:50] scheme "" not registered, fallback to default scheme
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.584038 4082 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock <nil> 0 <nil>}] <nil> <nil>}
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.584045 4082 clientconn.go:933] ClientConn switching balancer to "pick_first"
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.584073 4082 kubelet.go:292] Adding pod path: /etc/kubernetes/manifests
May 02 13:24:41 minikube kubelet[4082]: I0502 13:24:41.584092 4082 kubelet.go:317] Watching apiserver
May 02 13:24:41 minikube kubelet[4082]: E0502 13:24:41.590714 4082 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:526: Failed to list *v1.Node: Get https://192.168.39.51:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.39.51:8443: connect: connection refused
May 02 13:24:41 minikube kubelet[4082]: E0502 13:24:41.590873 4082 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:517: Failed to list *v1.Service: Get https://192.168.39.51:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.39.51:8443: connect: connection refused
May 02 13:24:41 minikube kubelet[4082]: E0502 13:24:41.591000 4082 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.39.51:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.39.51:8443: connect: connection refused
May 02 13:24:41 minikube kubelet[4082]: E0502 13:24:41.595438 4082 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:526: Failed to list *v1.Node: Get https://192.168.39.51:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.39.51:8443: connect: connection refused
May 02 13:24:41 minikube kubelet[4082]: E0502 13:24:41.595566 4082 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.39.51:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.39.51:8443: connect: connection refused
May 02 13:24:41 minikube kubelet[4082]: E0502 13:24:41.595674 4082 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:517: Failed to list *v1.Service: Get https://192.168.39.51:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.39.51:8443: connect: connection refused
May 02 13:24:43 minikube kubelet[4082]: E0502 13:24:43.323866 4082 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.39.51:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.39.51:8443: connect: connection refused
May 02 13:24:44 minikube kubelet[4082]: E0502 13:24:44.387713 4082 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:526: Failed to list *v1.Node: Get https://192.168.39.51:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.39.51:8443: connect: connection refused
May 02 13:24:44 minikube systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
May 02 13:24:44 minikube kubelet[4082]: E0502 13:24:44.683019 4082 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:517: Failed to list *v1.Service: Get https://192.168.39.51:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.39.51:8443: connect: connection refused
May 02 13:24:47 minikube kubelet[4082]: E0502 13:24:47.659246 4082 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.39.51:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.39.51:8443: connect: connection refused
May 02 13:24:47 minikube kubelet[4082]: E0502 13:24:47.820341 4082 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
May 02 13:24:47 minikube kubelet[4082]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
May 02 13:24:47 minikube kubelet[4082]: I0502 13:24:47.828446 4082 kuberuntime_manager.go:211] Container runtime docker initialized, version: 19.03.8, apiVersion: 1.40.0
May 02 13:24:47 minikube kubelet[4082]: I0502 13:24:47.828874 4082 server.go:1125] Started kubelet
May 02 13:24:47 minikube systemd[1]: kubelet.service: Succeeded.
May 02 13:24:47 minikube systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
❗ unable to fetch logs for: describe nodes
minikube ssh "sudo dmesg":
[ 0.000000] Linux version 4.19.107 (jenkins@jenkins) (gcc version 7.4.0 (Buildroot 2019.02.10)) #1 SMP Thu Mar 26 11:33:10 PDT 2020
[ 0.000000] Command line: BOOT_IMAGE=/boot/bzImage root=/dev/sr0 loglevel=3 console=ttyS0 noembed nomodeset norestore waitusb=10 random.trust_cpu=on hw_rng_model=virtio systemd.legacy_systemd_cgroup_controller=yes initrd=/boot/initrd
[ 0.000000] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
[ 0.000000] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
[ 0.000000] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
[ 0.000000] x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
[ 0.000000] x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
[ 0.000000] BIOS-provided physical RAM map:
[ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
[ 0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
[ 0.000000] BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000100000000-0x000000012e6fffff] usable
[ 0.000000] NX (Execute Disable) protection: active
[ 0.000000] SMBIOS 2.8 present.
[ 0.000000] DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS ?-20191223_100556-anatol 04/01/2014
[ 0.000000] Hypervisor detected: KVM
[ 0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00
[ 0.000000] kvm-clock: cpu 0, msr 1087ca001, primary cpu clock
[ 0.000000] kvm-clock: using sched offset of 554728778 cycles
[ 0.000001] clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
[ 0.000002] tsc: Detected 3393.624 MHz processor
[ 0.000428] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
[ 0.000429] e820: remove [mem 0x000a0000-0x000fffff] usable
[ 0.000432] last_pfn = 0x12e700 max_arch_pfn = 0x400000000
[ 0.000459] MTRR default type: write-back
[ 0.000460] MTRR fixed ranges enabled:
[ 0.000460] 00000-9FFFF write-back
[ 0.000461] A0000-BFFFF uncachable
[ 0.000461] C0000-FFFFF write-protect
[ 0.000462] MTRR variable ranges enabled:
[ 0.000463] 0 base 00C0000000 mask FFC0000000 uncachable
[ 0.000463] 1 disabled
[ 0.000463] 2 disabled
[ 0.000463] 3 disabled
[ 0.000464] 4 disabled
[ 0.000464] 5 disabled
[ 0.000464] 6 disabled
[ 0.000464] 7 disabled
[ 0.000474] x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
[ 0.000481] last_pfn = 0xbffdb max_arch_pfn = 0x400000000
[ 0.002431] found SMP MP-table at [mem 0x000f5c50-0x000f5c5f]
[ 0.002471] Scanning 1 areas for low memory corruption
[ 0.002492] Using GB pages for direct mapping
[ 0.002493] BRK [0x108a01000, 0x108a01fff] PGTABLE
[ 0.002494] BRK [0x108a02000, 0x108a02fff] PGTABLE
[ 0.002495] BRK [0x108a03000, 0x108a03fff] PGTABLE
[ 0.002508] BRK [0x108a04000, 0x108a04fff] PGTABLE
[ 0.002510] BRK [0x108a05000, 0x108a05fff] PGTABLE
[ 0.002559] BRK [0x108a06000, 0x108a06fff] PGTABLE
[ 0.002569] BRK [0x108a07000, 0x108a07fff] PGTABLE
[ 0.002576] BRK [0x108a08000, 0x108a08fff] PGTABLE
[ 0.002595] RAMDISK: [mem 0x75db3000-0x7fffffff]
[ 0.002608] ACPI: Early table checksum verification disabled
[ 0.002634] ACPI: RSDP 0x00000000000F5A20 000014 (v00 BOCHS )
[ 0.002640] ACPI: RSDT 0x00000000BFFE15A2 000030 (v01 BOCHS BXPCRSDT 00000001 BXPC 00000001)
[ 0.002643] ACPI: FACP 0x00000000BFFE1476 000074 (v01 BOCHS BXPCFACP 00000001 BXPC 00000001)
[ 0.002646] ACPI: DSDT 0x00000000BFFE0040 001436 (v01 BOCHS BXPCDSDT 00000001 BXPC 00000001)
[ 0.002648] ACPI: FACS 0x00000000BFFE0000 000040
[ 0.002650] ACPI: APIC 0x00000000BFFE14EA 000080 (v01 BOCHS BXPCAPIC 00000001 BXPC 00000001)
[ 0.002651] ACPI: HPET 0x00000000BFFE156A 000038 (v01 BOCHS BXPCHPET 00000001 BXPC 00000001)
[ 0.002656] ACPI: Local APIC address 0xfee00000
[ 0.002951] No NUMA configuration found
[ 0.002952] Faking a node at [mem 0x0000000000000000-0x000000012e6fffff]
[ 0.002955] NODE_DATA(0) allocated [mem 0x12e6fc000-0x12e6fffff]
[ 0.003292] Zone ranges:
[ 0.003292] DMA [mem 0x0000000000001000-0x0000000000ffffff]
[ 0.003293] DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
[ 0.003294] Normal [mem 0x0000000100000000-0x000000012e6fffff]
[ 0.003295] Movable zone start for each node
[ 0.003295] Early memory node ranges
[ 0.003296] node 0: [mem 0x0000000000001000-0x000000000009efff]
[ 0.003296] node 0: [mem 0x0000000000100000-0x00000000bffdafff]
[ 0.003297] node 0: [mem 0x0000000100000000-0x000000012e6fffff]
[ 0.004013] Zeroed struct page in unavailable ranges: 6535 pages
[ 0.004015] Initmem setup node 0 [mem 0x0000000000001000-0x000000012e6fffff]
[ 0.004016] On node 0 totalpages: 976505
[ 0.004017] DMA zone: 64 pages used for memmap
[ 0.004017] DMA zone: 21 pages reserved
[ 0.004018] DMA zone: 3998 pages, LIFO batch:0
[ 0.004077] DMA32 zone: 12224 pages used for memmap
[ 0.004078] DMA32 zone: 782299 pages, LIFO batch:63
[ 0.022226] Normal zone: 2972 pages used for memmap
[ 0.022228] Normal zone: 190208 pages, LIFO batch:63
[ 0.027775] ACPI: PM-Timer IO Port: 0x608
[ 0.027778] ACPI: Local APIC address 0xfee00000
[ 0.027782] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
[ 0.027813] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
[ 0.027814] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[ 0.027816] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
[ 0.027816] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[ 0.027817] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
[ 0.027818] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
[ 0.027819] ACPI: IRQ0 used by override.
[ 0.027819] ACPI: IRQ5 used by override.
[ 0.027820] ACPI: IRQ9 used by override.
[ 0.027820] ACPI: IRQ10 used by override.
[ 0.027820] ACPI: IRQ11 used by override.
[ 0.027822] Using ACPI (MADT) for SMP configuration information
[ 0.027823] ACPI: HPET id: 0x8086a201 base: 0xfed00000
[ 0.027829] smpboot: Allowing 2 CPUs, 0 hotplug CPUs
[ 0.027848] PM: Registered nosave memory: [mem 0x00000000-0x00000fff]
[ 0.027849] PM: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
[ 0.027849] PM: Registered nosave memory: [mem 0x000a0000-0x000effff]
[ 0.027850] PM: Registered nosave memory: [mem 0x000f0000-0x000fffff]
[ 0.027851] PM: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
[ 0.027851] PM: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
[ 0.027852] PM: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
[ 0.027852] PM: Registered nosave memory: [mem 0xff000000-0xfffbffff]
[ 0.027852] PM: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
[ 0.027854] [mem 0xc0000000-0xfeffbfff] available for PCI devices
[ 0.027854] Booting paravirtualized kernel on KVM
[ 0.027857] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
[ 0.129794] random: get_random_bytes called from start_kernel+0x8f/0x4bc with crng_init=0
[ 0.129800] setup_percpu: NR_CPUS:64 nr_cpumask_bits:64 nr_cpu_ids:2 nr_node_ids:1
[ 0.130114] percpu: Embedded 43 pages/cpu s137176 r8192 d30760 u1048576
[ 0.130117] pcpu-alloc: s137176 r8192 d30760 u1048576 alloc=1*2097152
[ 0.130118] pcpu-alloc: [0] 0 1
[ 0.130139] KVM setup async PF for cpu 0
[ 0.130143] kvm-stealtime: cpu 0, msr 12a615200
[ 0.130147] PV qspinlock hash table entries: 256 (order: 0, 4096 bytes)
[ 0.130151] Built 1 zonelists, mobility grouping on. Total pages: 961224
[ 0.130151] Policy zone: Normal
[ 0.130152] Kernel command line: BOOT_IMAGE=/boot/bzImage root=/dev/sr0 loglevel=3 console=ttyS0 noembed nomodeset norestore waitusb=10 random.trust_cpu=on hw_rng_model=virtio systemd.legacy_systemd_cgroup_controller=yes initrd=/boot/initrd
[ 0.130207] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ 0.130207] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ 0.130207] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ 0.139713] Calgary: detecting Calgary via BIOS EBDA area
[ 0.139714] Calgary: Unable to locate Rio Grande table in EBDA - bailing!
[ 0.147414] Memory: 3580100K/3906020K available (14348K kernel code, 1636K rwdata, 3440K rodata, 1428K init, 2356K bss, 325920K reserved, 0K cma-reserved)
[ 0.147732] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
[ 0.148030] rcu: Hierarchical RCU implementation.
[ 0.148030] rcu: RCU event tracing is enabled.
[ 0.148031] rcu: RCU restricting CPUs from NR_CPUS=64 to nr_cpu_ids=2.
[ 0.148032] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
[ 0.148147] NR_IRQS: 4352, nr_irqs: 440, preallocated irqs: 16
[ 0.148408] Console: colour *CGA 80x25
[ 0.148445] console [ttyS0] enabled
[ 0.148452] ACPI: Core revision 20180810
[ 0.148624] clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
[ 0.148695] hpet clockevent registered
[ 0.148715] APIC: Switch to symmetric I/O mode setup
[ 0.148717] KVM setup pv IPIs
[ 0.149671] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
[ 0.149687] tsc: Marking TSC unstable due to TSCs unsynchronized
[ 0.149694] Calibrating delay loop (skipped) preset value.. 6787.24 BogoMIPS (lpj=3393624)
[ 0.149696] pid_max: default: 32768 minimum: 301
[ 0.149708] Security Framework initialized
[ 0.149709] SELinux: Initializing.
[ 0.151426] Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes)
[ 0.151726] Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes)
[ 0.151734] Mount-cache hash table entries: 8192 (order: 4, 65536 bytes)
[ 0.151739] Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes)
[ 0.151956] Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
[ 0.151957] Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
[ 0.151960] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
[ 0.151961] Spectre V2 : Mitigation: Full AMD retpoline
[ 0.151961] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
[ 0.151962] Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
[ 0.152179] Freeing SMP alternatives memory: 44K
[ 0.153984] TSC deadline timer enabled
[ 0.153997] smpboot: CPU0: AMD Ryzen 7 1700X Eight-Core Processor (family: 0x17, model: 0x1, stepping: 0x1)
[ 0.154056] Performance Events: Fam17h core perfctr, AMD PMU driver.
[ 0.154070] ... version: 0
[ 0.154071] ... bit width: 48
[ 0.154071] ... generic registers: 6
[ 0.154071] ... value mask: 0000ffffffffffff
[ 0.154072] ... max period: 00007fffffffffff
[ 0.154072] ... fixed-purpose events: 0
[ 0.154073] ... event mask: 000000000000003f
[ 0.154102] rcu: Hierarchical SRCU implementation.
[ 0.154220] random: crng done (trusting CPU's manufacturer)
[ 0.154221] Decoding supported only on Scalable MCA processors.
[ 0.154260] smp: Bringing up secondary CPUs ...
[ 0.154335] x86: Booting SMP configuration:
[ 0.154336] .... node #0, CPUs: #1
[ 0.001382] kvm-clock: cpu 1, msr 1087ca041, secondary cpu clock
[ 0.154889] KVM setup async PF for cpu 1
[ 0.154889] kvm-stealtime: cpu 1, msr 12a715200
[ 0.154889] smp: Brought up 1 node, 2 CPUs
[ 0.154889] smpboot: Max logical packages: 2
[ 0.154889] smpboot: Total of 2 processors activated (13574.49 BogoMIPS)
[ 0.154926] devtmpfs: initialized
[ 0.155706] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
[ 0.155709] futex hash table entries: 512 (order: 3, 32768 bytes)
[ 0.155717] kworker/u4:0 (23) used greatest stack depth: 14576 bytes left
[ 0.155800] RTC time: 13:21:26, date: 05/02/20
[ 0.155889] NET: Registered protocol family 16
[ 0.156012] audit: initializing netlink subsys (disabled)
[ 0.156030] audit: type=2000 audit(1588425687.655:1): state=initialized audit_enabled=0 res=1
[ 0.156131] cpuidle: using governor menu
[ 0.157091] KVM setup pv remote TLB flush
[ 0.157091] ACPI: bus type PCI registered
[ 0.157091] PCI: Using configuration type 1 for base access
[ 0.157091] PCI: Using configuration type 1 for extended access
[ 0.159024] kworker/u4:0 (49) used greatest stack depth: 14112 bytes left
[ 0.162129] kworker/u4:0 (356) used greatest stack depth: 14056 bytes left
[ 0.163970] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
[ 0.163994] cryptd: max_cpu_qlen set to 1000
[ 0.163994] ACPI: Added _OSI(Module Device)
[ 0.163994] ACPI: Added _OSI(Processor Device)
[ 0.163994] ACPI: Added _OSI(3.0 _SCP Extensions)
[ 0.163994] ACPI: Added _OSI(Processor Aggregator Device)
[ 0.163994] ACPI: Added _OSI(Linux-Dell-Video)
[ 0.163994] ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
[ 0.164166] ACPI: 1 ACPI AML tables successfully acquired and loaded
[ 0.165199] ACPI: Interpreter enabled
[ 0.165209] ACPI: (supports S0 S3 S4 S5)
[ 0.165210] ACPI: Using IOAPIC for interrupt routing
[ 0.165222] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[ 0.165287] ACPI: Enabled 2 GPEs in block 00 to 0F
[ 0.166938] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
[ 0.166942] acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI]
[ 0.166945] acpi PNP0A03:00: _OSC failed (AE_NOT_FOUND); disabling ASPM
[ 0.166971] PCI host bridge to bus 0000:00
[ 0.166973] pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
[ 0.166974] pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
[ 0.166975] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
[ 0.166976] pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
[ 0.166977] pci_bus 0000:00: root bus resource [mem 0x140000000-0x1bfffffff window]
[ 0.166978] pci_bus 0000:00: root bus resource [bus 00-ff]
[ 0.167016] pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
[ 0.167522] pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
[ 0.168119] pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
[ 0.169589] pci 0000:00:01.1: reg 0x20: [io 0xc220-0xc22f]
[ 0.170270] pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
[ 0.170270] pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
[ 0.170271] pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
[ 0.170272] pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
[ 0.170502] pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
[ 0.171905] pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
[ 0.172903] pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
[ 0.173285] pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
[ 0.173293] pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
[ 0.173510] pci 0000:00:02.0: [1af4:1000] type 00 class 0x020000
[ 0.174134] pci 0000:00:02.0: reg 0x10: [io 0xc1a0-0xc1bf]
[ 0.175131] pci 0000:00:02.0: reg 0x14: [mem 0xfebc2000-0xfebc2fff]
[ 0.177082] pci 0000:00:02.0: reg 0x20: [mem 0xfebec000-0xfebeffff 64bit pref]
[ 0.177693] pci 0000:00:02.0: reg 0x30: [mem 0xfeb40000-0xfeb7ffff pref]
[ 0.178301] pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
[ 0.179040] pci 0000:00:03.0: reg 0x10: [io 0xc1c0-0xc1df]
[ 0.179692] pci 0000:00:03.0: reg 0x14: [mem 0xfebc3000-0xfebc3fff]
[ 0.182036] pci 0000:00:03.0: reg 0x20: [mem 0xfebf0000-0xfebf3fff 64bit pref]
[ 0.182692] pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
[ 0.183249] pci 0000:00:04.0: [1000:0012] type 00 class 0x010000
[ 0.184357] pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc0ff]
[ 0.185485] pci 0000:00:04.0: reg 0x14: [mem 0xfebc4000-0xfebc43ff]
[ 0.186056] pci 0000:00:04.0: reg 0x18: [mem 0xfebc0000-0xfebc1fff]
[ 0.188930] pci 0000:00:05.0: [1af4:1001] type 00 class 0x010000
[ 0.190695] pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc17f]
[ 0.192693] pci 0000:00:05.0: reg 0x14: [mem 0xfebc5000-0xfebc5fff]
[ 0.195697] pci 0000:00:05.0: reg 0x20: [mem 0xfebf4000-0xfebf7fff 64bit pref]
[ 0.198543] pci 0000:00:06.0: [1af4:1002] type 00 class 0x00ff00
[ 0.199662] pci 0000:00:06.0: reg 0x10: [io 0xc1e0-0xc1ff]
[ 0.201550] pci 0000:00:06.0: reg 0x20: [mem 0xfebf8000-0xfebfbfff 64bit pref]
[ 0.202801] pci 0000:00:07.0: [1af4:1005] type 00 class 0x00ff00
[ 0.203430] pci 0000:00:07.0: reg 0x10: [io 0xc200-0xc21f]
[ 0.205533] pci 0000:00:07.0: reg 0x20: [mem 0xfebfc000-0xfebfffff 64bit pref]
[ 0.206888] ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11)
[ 0.206969] ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11)
[ 0.207039] ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11)
[ 0.207107] ACPI: PCI Interrupt Link [LNKD] (IRQs 5 10 *11)
[ 0.207145] ACPI: PCI Interrupt Link [LNKS] (IRQs *9)
[ 0.207286] vgaarb: loaded
[ 0.207286] SCSI subsystem initialized
[ 0.207286] libata version 3.00 loaded.
[ 0.207286] ACPI: bus type USB registered
[ 0.207286] usbcore: registered new interface driver usbfs
[ 0.207286] usbcore: registered new interface driver hub
[ 0.207286] usbcore: registered new device driver usb
[ 0.207286] pps_core: LinuxPPS API ver. 1 registered
[ 0.207286] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
[ 0.207286] PTP clock support registered
[ 0.207286] EDAC MC: Ver: 3.0.0
[ 0.207738] Advanced Linux Sound Architecture Driver Initialized.
[ 0.207751] PCI: Using ACPI for IRQ routing
[ 0.207752] PCI: pci_cache_line_size set to 64 bytes
[ 0.207940] e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
[ 0.207941] e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
[ 0.207941] e820: reserve RAM buffer [mem 0x12e700000-0x12fffffff]
[ 0.208013] NetLabel: Initializing
[ 0.208014] NetLabel: domain hash size = 128
[ 0.208014] NetLabel: protocols = UNLABELED CIPSOv4 CALIPSO
[ 0.208023] NetLabel: unlabeled traffic allowed by default
[ 0.208043] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
[ 0.208043] hpet0: 3 comparators, 64-bit 100.000000 MHz counter
[ 0.210729] clocksource: Switched to clocksource kvm-clock
[ 0.220590] VFS: Disk quotas dquot_6.6.0
[ 0.220600] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[ 0.220644] pnp: PnP ACPI init
[ 0.220709] pnp 00:00: Plug and Play ACPI device, IDs PNP0b00 (active)
[ 0.220735] pnp 00:01: Plug and Play ACPI device, IDs PNP0303 (active)
[ 0.220753] pnp 00:02: Plug and Play ACPI device, IDs PNP0f13 (active)
[ 0.220759] pnp 00:03: [dma 2]
[ 0.220770] pnp 00:03: Plug and Play ACPI device, IDs PNP0700 (active)
[ 0.220840] pnp 00:04: Plug and Play ACPI device, IDs PNP0501 (active)
[ 0.220984] pnp: PnP ACPI: found 5 devices
[ 0.228191] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
[ 0.228199] pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
[ 0.228200] pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
[ 0.228201] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
[ 0.228202] pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
[ 0.228203] pci_bus 0000:00: resource 8 [mem 0x140000000-0x1bfffffff window]
[ 0.228240] NET: Registered protocol family 2
[ 0.228343] tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes)
[ 0.228352] TCP established hash table entries: 32768 (order: 6, 262144 bytes)
[ 0.228393] TCP bind hash table entries: 32768 (order: 7, 524288 bytes)
[ 0.228436] TCP: Hash tables configured (established 32768 bind 32768)
[ 0.228453] UDP hash table entries: 2048 (order: 4, 65536 bytes)
[ 0.228462] UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes)
[ 0.228491] NET: Registered protocol family 1
[ 0.228617] RPC: Registered named UNIX socket transport module.
[ 0.228618] RPC: Registered udp transport module.
[ 0.228619] RPC: Registered tcp transport module.
[ 0.228620] RPC: Registered tcp NFSv4.1 backchannel transport module.
[ 0.228809] pci 0000:00:01.0: PIIX3: Enabling Passive Release
[ 0.228829] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
[ 0.228847] pci 0000:00:01.0: Activating ISA DMA hang workarounds
[ 0.252446] PCI Interrupt Link [LNKD] enabled at IRQ 11
[ 0.276822] pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x690 took 46810 usecs
[ 0.276958] PCI: CLS 0 bytes, default 64
[ 0.277003] Unpacking initramfs...
[ 2.422891] Freeing initrd memory: 166196K
[ 2.422895] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
[ 2.422896] software IO TLB: mapped [mem 0xbbfdb000-0xbffdb000] (64MB)
[ 2.423353] Scanning for low memory corruption every 60 seconds
[ 2.423727] Initialise system trusted keyrings
[ 2.423878] workingset: timestamp_bits=40 max_order=20 bucket_order=0
[ 2.426354] NFS: Registering the id_resolver key type
[ 2.426357] Key type id_resolver registered
[ 2.426357] Key type id_legacy registered
[ 2.426360] nfs4filelayout_init: NFSv4 File Layout Driver Registering...
[ 2.426361] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
[ 2.426701] fuse init (API version 7.27)
[ 2.426759] SGI XFS with ACLs, security attributes, no debug enabled
[ 2.428217] NET: Registered protocol family 38
[ 2.428218] Key type asymmetric registered
[ 2.428219] Asymmetric key parser 'x509' registered
[ 2.428225] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
[ 2.428227] io scheduler noop registered
[ 2.428227] io scheduler deadline registered
[ 2.428244] io scheduler cfq registered (default)
[ 2.428245] io scheduler mq-deadline registered
[ 2.428246] io scheduler kyber registered
[ 2.428356] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
[ 2.428430] ACPI: Power Button [PWRF]
[ 2.440863] PCI Interrupt Link [LNKB] enabled at IRQ 10
[ 2.453914] PCI Interrupt Link [LNKC] enabled at IRQ 11
[ 2.466882] PCI Interrupt Link [LNKA] enabled at IRQ 10
[ 2.495423] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[ 2.519155] 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
[ 2.519427] Non-volatile memory driver v1.3
[ 2.519814] Linux agpgart interface v0.103
[ 2.520942] loop: module loaded
[ 2.521755] virtio_blk virtio2: [vda] 39062500 512-byte logical blocks (20.0 GB/18.6 GiB)
[ 2.524884] VMware PVSCSI driver - version 1.0.7.0-k
[ 2.525058] ata_piix 0000:00:01.1: version 2.13
[ 2.525825] scsi host0: ata_piix
[ 2.525993] scsi host1: ata_piix
[ 2.526064] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc220 irq 14
[ 2.526066] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc228 irq 15
[ 2.526174] tun: Universal TUN/TAP device driver, 1.6
[ 2.529836] e100: Intel(R) PRO/100 Network Driver, 3.5.24-k2-NAPI
[ 2.529836] e100: Copyright(c) 1999-2006 Intel Corporation
[ 2.529849] e1000: Intel(R) PRO/1000 Network Driver - version 7.3.21-k8-NAPI
[ 2.529850] e1000: Copyright (c) 1999-2006 Intel Corporation.
[ 2.529864] e1000e: Intel(R) PRO/1000 Network Driver - 3.2.6-k
[ 2.529865] e1000e: Copyright(c) 1999 - 2015 Intel Corporation.
[ 2.529881] sky2: driver version 1.30
[ 2.529929] VMware vmxnet3 virtual NIC driver - version 1.4.16.0-k-NAPI
[ 2.530013] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[ 2.530014] ehci-pci: EHCI PCI platform driver
[ 2.530019] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
[ 2.530029] ohci-pci: OHCI PCI platform driver
[ 2.530034] uhci_hcd: USB Universal Host Controller Interface driver
[ 2.542542] uhci_hcd 0000:00:01.2: UHCI Host Controller
[ 2.542576] uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
[ 2.542746] uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c180
[ 2.542809] usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 4.19
[ 2.542810] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[ 2.542811] usb usb1: Product: UHCI Host Controller
[ 2.542812] usb usb1: Manufacturer: Linux 4.19.107 uhci_hcd
[ 2.542812] usb usb1: SerialNumber: 0000:00:01.2
[ 2.542872] hub 1-0:1.0: USB hub found
[ 2.542875] hub 1-0:1.0: 2 ports detected
[ 2.542961] usbcore: registered new interface driver usblp
[ 2.542968] usbcore: registered new interface driver usb-storage
[ 2.542984] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
[ 2.543593] serio: i8042 KBD port at 0x60,0x64 irq 1
[ 2.543597] serio: i8042 AUX port at 0x60,0x64 irq 12
[ 2.543788] rtc_cmos 00:00: RTC can wake from S4
[ 2.544189] rtc_cmos 00:00: registered as rtc0
[ 2.544213] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
[ 2.544744] rtc_cmos 00:00: alarms up to one day, y3k, 114 bytes nvram, hpet irqs
[ 2.544965] device-mapper: ioctl: 4.39.0-ioctl (2018-04-03) initialised: dm-devel@redhat.com
[ 2.545683] hidraw: raw HID events driver (C) Jiri Kosina
[ 2.545879] usbcore: registered new interface driver usbhid
[ 2.545880] usbhid: USB HID core driver
[ 2.546024] netem: version 1.3
[ 2.546167] Initializing XFRM netlink socket
[ 2.546266] NET: Registered protocol family 10
[ 2.546569] Segment Routing with IPv6
[ 2.546656] sit: IPv6, IPv4 and MPLS over IPv4 tunneling driver
[ 2.546798] NET: Registered protocol family 17
[ 2.546814] Key type dns_resolver registered
[ 2.546818] Key type ceph registered
[ 2.546877] libceph: loaded (mon/osd proto 15/24)
[ 2.547108] mce: Using 10 MCE banks
[ 2.547124] AVX2 version of gcm_enc/dec engaged.
[ 2.547124] AES CTR mode by8 optimization enabled
[ 2.547495] sched_clock: Marking stable (2547106485, 382834)->(2569669536, -22180217)
[ 2.547656] registered taskstats version 1
[ 2.547657] Loading compiled-in X.509 certificates
[ 2.547917] Magic number: 4:344:381
[ 2.547946] console [netcon0] enabled
[ 2.547947] netconsole: network logging started
[ 2.548001] cfg80211: Loading compiled-in X.509 certificates for regulatory database
[ 2.549277] cfg80211: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
[ 2.549281] ALSA device list:
[ 2.549282] No soundcards found.
[ 2.549645] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ 2.549647] cfg80211: failed to load regulatory.db
[ 2.685865] Freeing unused kernel image memory: 1428K
[ 2.690718] Write protecting the kernel read-only data: 20480k
[ 2.691359] Freeing unused kernel image memory: 2004K
[ 2.691488] Freeing unused kernel image memory: 656K
[ 2.691490] Run /init as init process
[ 3.074023] systemd[1]: systemd 240 running in system mode. (-PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK +SYSVINIT +UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid)
[ 3.074046] systemd[1]: Detected virtualization kvm.
[ 3.074048] systemd[1]: Detected architecture x86-64.
[ 3.077711] systemd[1]: Set hostname to <minikube>.
[ 3.077730] systemd[1]: Initializing machine ID from KVM UUID.
[ 3.077865] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
[ 3.080067] systemd-fstab-generator[1142]: Ignoring "noauto" for root device
[ 3.085712] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
[ 3.085714] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[ 3.091516] systemd[1]: /usr/lib/systemd/system/vmtoolsd.service:7: PIDFile= references path below legacy directory /var/run/, updating /var/run/vmtoolsd.pid \xe2\x86\x92 /run/vmtoolsd.pid; please update the unit file accordingly.
[ 3.094768] systemd[1]: /usr/lib/systemd/system/rpc-statd.service:13: PIDFile= references path below legacy directory /var/run/, updating /var/run/rpc.statd.pid \xe2\x86\x92 /run/rpc.statd.pid; please update the unit file accordingly.
[ 3.164497] input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
[ 3.381137] systemd-journald[1170]: Received request to flush runtime journal from PID 1
[ 3.506434] kvm: Nested Virtualization enabled
[ 3.506435] kvm: Nested Paging enabled
[ 3.845665] vda: vda1
[ 3.901702] fdisk (1775) used greatest stack depth: 14032 bytes left
[ 3.902798] vda: vda1
[ 4.031453] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[ 4.031455] NFSD: starting 90-second grace period (net f0000098)
[ 4.031745] rpc.nfsd (1814) used greatest stack depth: 13640 bytes left
[ 6.567335] mkfs.ext4 (1791) used greatest stack depth: 13264 bytes left
[ 6.851980] EXT4-fs (vda1): mounted filesystem with ordered data mode. Opts: (null)
[ 6.915409] vboxguest: loading out-of-tree module taints kernel.
[ 6.918089] vboxguest: PCI device not found, probably running on physical hardware.
[ 9.894463] systemd-fstab-generator[1995]: Ignoring "noauto" for root device
[ 15.446726] bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
[ 15.447747] Bridge firewalling registered
[ 15.455976] audit: type=1325 audit(1588425702.963:2): table=nat family=2 entries=0
[ 15.456471] audit: type=1300 audit(1588425702.963:2): arch=c000003e syscall=313 success=yes exit=0 a0=5 a1=41a8e6 a2=0 a3=5 items=0 ppid=57 pid=2039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=kernel key=(null)
[ 15.456604] audit: type=1327 audit(1588425702.963:2): proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0069707461626C655F6E6174
[ 15.476092] audit: type=1325 audit(1588425702.984:3): table=nat family=2 entries=5
[ 15.476095] audit: type=1300 audit(1588425702.984:3): arch=c000003e syscall=54 success=yes exit=0 a0=5 a1=0 a2=40 a3=210ca60 items=0 ppid=2005 pid=2068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-legacy-multi" subj=kernel key=(null)
[ 15.476097] audit: type=1327 audit(1588425702.984:3): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
[ 15.477704] audit: type=1325 audit(1588425702.985:4): table=filter family=2 entries=4
[ 15.477707] audit: type=1300 audit(1588425702.985:4): arch=c000003e syscall=54 success=yes exit=0 a0=5 a1=0 a2=40 a3=2064940 items=0 ppid=2005 pid=2070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-legacy-multi" subj=kernel key=(null)
[ 15.477709] audit: type=1327 audit(1588425702.985:4): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
[ 15.479410] audit: type=1325 audit(1588425702.987:5): table=filter family=2 entries=6
[ 21.147817] kauditd_printk_skb: 59 callbacks suppressed
[ 21.147818] audit: type=1325 audit(1588425708.659:25): table=filter family=2 entries=23
[ 21.147822] audit: type=1300 audit(1588425708.659:25): arch=c000003e syscall=54 success=yes exit=0 a0=5 a1=0 a2=40 a3=fba9c0 items=0 ppid=2005 pid=2128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-legacy-multi" subj=kernel key=(null)
[ 21.147824] audit: type=1327 audit(1588425708.659:25): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
[ 21.148984] audit: type=1325 audit(1588425708.660:26): table=filter family=2 entries=22
[ 21.148987] audit: type=1300 audit(1588425708.660:26): arch=c000003e syscall=54 success=yes exit=0 a0=5 a1=0 a2=40 a3=bbe7a0 items=0 ppid=2005 pid=2129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-legacy-multi" subj=kernel key=(null)
[ 21.148989] audit: type=1327 audit(1588425708.660:26): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
[ 32.447478] dockerd (2010) used greatest stack depth: 12224 bytes left
[ 57.240294] audit: type=1325 audit(1588425744.760:27): table=nat family=2 entries=11
[ 57.240298] audit: type=1300 audit(1588425744.760:27): arch=c000003e syscall=54 success=yes exit=0 a0=5 a1=0 a2=40 a3=1ab8740 items=0 ppid=2189 pid=2220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-legacy-multi" subj=kernel key=(null)
[ 57.240301] audit: type=1327 audit(1588425744.760:27): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4400505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
[ 57.241395] audit: type=1325 audit(1588425744.761:28): table=nat family=2 entries=10
[ 57.241398] audit: type=1300 audit(1588425744.761:28): arch=c000003e syscall=54 success=yes exit=0 a0=5 a1=0 a2=40 a3=78b570 items=0 ppid=2189 pid=2221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-legacy-multi" subj=kernel key=(null)
[ 57.241400] audit: type=1327 audit(1588425744.761:28): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D44004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C0000002D2D647374003132372E302E302E302F38002D6A00444F434B4552
[ 57.244618] audit: type=1325 audit(1588425744.764:29): table=nat family=2 entries=9
[ 57.244622] audit: type=1300 audit(1588425744.764:29): arch=c000003e syscall=54 success=yes exit=0 a0=5 a1=0 a2=40 a3=2346ec0 items=0 ppid=2189 pid=2225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-legacy-multi" subj=kernel key=(null)
[ 57.244624] audit: type=1327 audit(1588425744.764:29): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4600444F434B4552
[ 57.245478] audit: type=1325 audit(1588425744.765:30): table=nat family=2 entries=8
[ 69.598736] kauditd_printk_skb: 71 callbacks suppressed
[ 69.598738] audit: type=1325 audit(1588425757.118:54): table=nat family=2 entries=9
[ 69.598744] audit: type=1300 audit(1588425757.118:54): arch=c000003e syscall=54 success=yes exit=0 a0=5 a1=0 a2=40 a3=9b6ce0 items=0 ppid=2189 pid=2287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-legacy-multi" subj=kernel key=(null)
[ 69.598748] audit: type=1327 audit(1588425757.118:54): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445
[ 69.601257] audit: type=1325 audit(1588425757.121:55): table=nat family=2 entries=10
[ 69.601383] audit: type=1300 audit(1588425757.121:55): arch=c000003e syscall=54 success=yes exit=0 a0=5 a1=0 a2=40 a3=a09830 items=0 ppid=2189 pid=2289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-legacy-multi" subj=kernel key=(null)
[ 69.601579] audit: type=1327 audit(1588425757.121:55): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E
[ 69.604259] audit: type=1325 audit(1588425757.124:56): table=filter family=2 entries=17
[ 69.604367] audit: type=1300 audit(1588425757.124:56): arch=c000003e syscall=54 success=yes exit=0 a0=5 a1=0 a2=40 a3=13f6eb0 items=0 ppid=2189 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-legacy-multi" subj=kernel key=(null)
[ 69.604571] audit: type=1327 audit(1588425757.124:56): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054
[ 69.606280] audit: type=1325 audit(1588425757.126:57): table=filter family=2 entries=18
[ 77.192078] systemd-fstab-generator[2403]: Ignoring "noauto" for root device
[ 78.298358] systemd-fstab-generator[2630]: Ignoring "noauto" for root device
[ 87.511339] kauditd_printk_skb: 26 callbacks suppressed
[ 87.511340] audit: type=1325 audit(1588425775.032:66): table=nat family=2 entries=11
[ 87.511426] audit: type=1300 audit(1588425775.032:66): arch=c000003e syscall=54 success=yes exit=0 a0=4 a1=0 a2=40 a3=d0f2f0 items=0 ppid=2675 pid=2806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-legacy-multi" subj=kernel key=(null)
[ 87.511487] audit: type=1327 audit(1588425775.032:66): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174
[ 87.515574] audit: type=1325 audit(1588425775.036:67): table=nat family=2 entries=13
[ 87.515672] audit: type=1300 audit(1588425775.036:67): arch=c000003e syscall=54 success=yes exit=0 a0=4 a1=0 a2=40 a3=1790100 items=0 ppid=2675 pid=2812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-legacy-multi" subj=kernel key=(null)
[ 87.515760] audit: type=1327 audit(1588425775.036:67): proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D7365742D786D61726B00307830303030383030302F30783030303038303030
[ 87.517393] audit: type=1325 audit(1588425775.038:68): table=filter family=2 entries=23
[ 87.517463] audit: type=1300 audit(1588425775.038:68): arch=c000003e syscall=54 success=yes exit=0 a0=4 a1=0 a2=40 a3=266e260 items=0 ppid=2675 pid=2817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-legacy-multi" subj=kernel key=(null)
[ 87.517533] audit: type=1327 audit(1588425775.038:68): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572
[ 87.520188] audit: type=1325 audit(1588425775.041:69): table=filter family=2 entries=25
[ 125.533349] NFSD: Unable to end grace period: -110
@djrollins I am curious if this would help:
minikube start --driver=kvm2 --force-systemd=true
I have a feeling something is killing your apiserver.
Also, do you mind trying to see if you have the same problem with the docker driver?
minikube delete
minikube start --driver=docker
Today we are releasing minikube v1.10.0, so I recommend trying it with the latest version.
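Roughly, the full sequence I have in mind would be something like this (just a sketch; the kubelet status check at the end is an extra assumption on my part, not something minikube asks for):
# wipe the old cluster state, then retry kvm2 with systemd as the cgroup manager
minikube delete
minikube start --driver=kvm2 --force-systemd=true
# separately, compare against the docker driver
minikube delete
minikube start --driver=docker
# if the kvm2 start still hangs, check the kubelet inside the VM
minikube ssh "sudo systemctl status kubelet"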
Hi @medyagh. Thank you for taking a look at this.
The docker driver works with no issues.
I also downloaded the latest version of minikube and tried the --force-systemd=true flag, and I still get launch issues:
minikube start --driver=kvm2 --force-systemd=true
docker-machine-driver-kvm2.sha256: 65 B / 65 B [-------] 100.00% ? p/s 0s
docker-machine-driver-kvm2: 13.88 MiB / 13.88 MiB 100.00% 9.52 MiB p/s 2
💿 Downloading VM boot image ...
minikube-v1.10.0.iso.sha256: 65 B / 65 B [-------------] 100.00% ? p/s 0s
minikube-v1.10.0.iso: 174.99 MiB / 174.99 MiB [] 100.00% 26.66 MiB p/s 7s
👍 Starting control plane node minikube in cluster minikube
💾 Downloading Kubernetes v1.18.1 preload ...
preloaded-images-k8s-v3-v1.18.1-docker-overlay2-amd64.tar.lz4: 525.47 MiB
🔥 Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
🐳 Preparing Kubernetes v1.18.1 on Docker 19.03.8 ...
💥 initialization failed, will try again: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.1:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [192.168.39.236 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [192.168.39.236 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0512 19:09:52.827791 2656 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0512 19:09:56.307424 2656 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0512 19:09:56.308684 2656 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
💣 Error starting cluster: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.1:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.18.1 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Using existing apiserver-kubelet-client certificate and key on disk [certs] Using existing front-proxy-ca certificate authority [certs] Using existing front-proxy-client certificate and key on disk [certs] Using existing etcd/ca certificate authority [certs] Using existing etcd/server certificate and key on disk [certs] Using existing etcd/peer certificate and key on disk [certs] Using existing etcd/healthcheck-client certificate and key on disk [certs] Using existing apiserver-etcd-client certificate and key on disk [certs] Using the existing "sa" key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr: W0512 19:19:15.048292 5880 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' W0512 19:19:16.556720 5880 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC" W0512 19:19:16.557723 5880 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC" error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher
😿 minikube is exiting due to an error. If the above message is not useful, open an issue: 👉 https://github.com/kubernetes/minikube/issues/new/choose
❌ [NONE_KUBELET] failed to start node startup failed: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.1:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.18.1 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Using existing apiserver-kubelet-client certificate and key on disk [certs] Using existing front-proxy-ca certificate authority [certs] Using existing front-proxy-client certificate and key on disk [certs] Using existing etcd/ca certificate authority [certs] Using existing etcd/server certificate and key on disk [certs] Using existing etcd/peer certificate and key on disk [certs] Using existing etcd/healthcheck-client certificate and key on disk [certs] Using existing apiserver-etcd-client certificate and key on disk [certs] Using the existing "sa" key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr: W0512 19:19:15.048292 5880 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' W0512 19:19:16.556720 5880 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC" W0512 19:19:16.557723 5880 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC" error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher
💡 Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
⁉️ Related issue: https://github.com/kubernetes/minikube/issues/4172
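If it helps, I assume the suggested workaround would translate into something like the following on a fresh attempt (just restating the hint above as a command; I have not verified this invocation):
minikube delete
minikube start --driver=kvm2 --extra-config=kubelet.cgroup-driver=systemd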
minikube logs
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
b1639bb1b9422 6c9320041a7b5 About an hour ago Running kube-scheduler 0 4dcf508c3dcf8
4589f55c1a61a d1ccdd18e6ed8 About an hour ago Running kube-controller-manager 0 c733620d71d07
72ccfe6bc01d4 a595af0107f98 About an hour ago Exited kube-apiserver 0 3cfbc83cef166
4ede6bfb72a05 303ce5db0e90d About an hour ago Created etcd 0 bd94ebdd85dee
==> describe nodes <==
E0512 21:24:14.206796 17130 logs.go:178] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output: "\n stderr \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n /stderr "
==> dmesg <==
[May12 19:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.025094] Decoding supported only on Scalable MCA processors.
[ +2.548743] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +0.547588] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
[ +0.002497] systemd-fstab-generator[1143]: Ignoring "noauto" for root device
[ +0.005881] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
[ +0.000002] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[ +0.993796] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[May12 19:09] vboxguest: loading out-of-tree module taints kernel.
[ +0.003267] vboxguest: PCI device not found, probably running on physical hardware.
[ +3.688908] systemd-fstab-generator[2006]: Ignoring "noauto" for root device
[ +0.084174] systemd-fstab-generator[2016]: Ignoring "noauto" for root device
[ +14.457236] systemd-fstab-generator[2210]: Ignoring "noauto" for root device
[ +15.346051] kauditd_printk_skb: 65 callbacks suppressed
[ +6.717910] systemd-fstab-generator[2377]: Ignoring "noauto" for root device
[ +3.634444] kauditd_printk_skb: 107 callbacks suppressed
[ +5.483658] systemd-fstab-generator[2595]: Ignoring "noauto" for root device
[ +1.207715] systemd-fstab-generator[2803]: Ignoring "noauto" for root device
[May12 19:10] kauditd_printk_skb: 107 callbacks suppressed
[ +56.096301] NFSD: Unable to end grace period: -110
[May12 19:19] systemd-fstab-generator[6020]: Ignoring "noauto" for root device
==> etcd [4ede6bfb72a0] <==
==> kernel <==
20:24:14 up 1:15, 0 users, load average: 0.21, 0.22, 0.25
Linux minikube 4.19.107 #1 SMP Mon May 11 14:51:04 PDT 2020 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2019.02.10"
==> kube-apiserver [72ccfe6bc01d] <==
Flag --insecure-port has been deprecated, This flag will be removed in a future version.
I0512 19:20:50.590725 1 server.go:656] external host was not specified, using 192.168.39.236
I0512 19:20:50.590995 1 server.go:153] Version: v1.18.1
I0512 19:20:51.398743 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0512 19:20:51.398801 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0512 19:20:51.399738 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0512 19:20:51.399776 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0512 19:20:51.401286 1 client.go:361] parsed scheme: "endpoint"
I0512 19:20:51.401358 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/registry/customresourcedefinition.NewREST(0xc0002e4d90, 0x50e5040, 0xc00015eb40, 0xc00015ed68)
	/workspace/anago-v1.18.1-beta.0.38+49aac775931dd1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/registry/customresourcedefinition/etcd.go:56 +0x3e7
k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/apiserver.completedConfig.New(0xc0002198c0, 0xc0002aa008, 0x51a38a0, 0x77427d8, 0x10, 0x0, 0x0)
	/workspace/anago-v1.18.1-beta.0.38+49aac775931dd1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/apiserver/apiserver.go:145 +0x14ef
k8s.io/kubernetes/cmd/kube-apiserver/app.createAPIExtensionsServer(0xc0002aa000, 0x51a38a0, 0x77427d8, 0x0, 0x50e4c00, 0xc0001105e0)
	/workspace/anago-v1.18.1-beta.0.38+49aac775931dd1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/apiextensions.go:102 +0x59
k8s.io/kubernetes/cmd/kube-apiserver/app.CreateServerChain(0xc00051d600, 0xc0001d8ba0, 0x4559d51, 0xc, 0xc0009b1c48)
	/workspace/anago-v1.18.1-beta.0.38+49aac775931dd1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/server.go:186 +0x2b8
k8s.io/kubernetes/cmd/kube-apiserver/app.Run(0xc00051d600, 0xc0001d8ba0, 0x0, 0x0)
	/workspace/anago-v1.18.1-beta.0.38+49aac775931dd1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/server.go:155 +0x101
k8s.io/kubernetes/cmd/kube-apiserver/app.NewAPIServerCommand.func1(0xc0000cd680, 0xc0006971e0, 0x0, 0x1a, 0x0, 0x0)
	/workspace/anago-v1.18.1-beta.0.38+49aac775931dd1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/server.go:122 +0x104
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc0000cd680, 0xc00004c1d0, 0x1a, 0x1b, 0xc0000cd680, 0xc00004c1d0)
	/workspace/anago-v1.18.1-beta.0.38+49aac775931dd1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:826 +0x460
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc0000cd680, 0x160e5e2970f54de7, 0x7724600, 0xc00006a750)
	/workspace/anago-v1.18.1-beta.0.38+49aac775931dd1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:914 +0x2fb
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
	/workspace/anago-v1.18.1-beta.0.38+49aac775931dd1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:864
main.main()
	_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-apiserver/apiserver.go:43 +0xcd
==> kube-controller-manager [4589f55c1a61] <== E0512 20:20:59.842495 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:03.478994 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:06.046677 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:10.050771 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:13.478817 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:16.926150 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:20.551689 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:24.843236 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:28.806542 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:32.924087 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:36.050867 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:39.825471 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get 
https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:43.654725 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:46.442510 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:50.627082 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:54.547982 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:58.720883 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:02.535346 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:06.873408 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:11.195995 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:13.607874 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:15.637804 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:19.502026 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:21.515771 1 leaderelection.go:320] error retrieving resource lock 
kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:25.237115 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:29.334163 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:31.614863 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:34.270586 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:36.626119 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:40.149377 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:43.334855 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:45.977980 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:48.717196 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:52.131962 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:55.419829 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:58.260486 1 
leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:01.246123 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:03.890003 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:07.132564 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:10.394724 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:14.127198 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:16.817262 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:21.182680 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:25.131093 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:27.416795 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:29.912032 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:32.516376 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: 
connection refused E0512 20:23:35.253846 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:39.222031 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:41.768047 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:45.401072 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:48.264683 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:50.459693 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:54.556942 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:56.931641 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:24:01.273027 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:24:03.558935 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:24:05.595981 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:24:09.353967 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial 
tcp 192.168.39.236:8443: connect: connection refused E0512 20:24:13.266204 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 192.168.39.236:8443: connect: connection refused
==> kube-scheduler [b1639bb1b942] <== E0512 20:19:05.647225 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:19:19.337765 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:19:20.968273 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:19:27.409878 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:19:27.670998 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:19:36.597874 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:19:38.757404 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:19:44.729456 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:19:50.927294 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:19:58.244434 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:20:10.857905 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:20:12.322198 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Node: 
Get https://control-plane.minikube.internal:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:20:16.105334 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:20:17.736846 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:20:18.065712 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:20:19.076513 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:20:22.834996 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:20:35.547453 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:20:43.988468 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:20:48.766115 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:20:50.546117 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:20:54.808796 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:00.714038 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:02.482442 1 reflector.go:178] 
k8s.io/client-go/informers/factory.go:135: Failed to list v1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:09.072815 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:11.837046 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:21.767145 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:22.548846 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:34.082777 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:41.395737 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:43.568710 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:43.820049 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:45.407292 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:21:55.957862 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:00.423490 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:05.365916 1 reflector.go:178] 
k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:10.221550 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:15.907561 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:16.149820 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:19.339476 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:23.410785 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:33.140545 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:42.147722 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:47.476680 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:49.684092 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:50.995786 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:51.655045 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Service: Get 
https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:22:58.927357 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:03.181384 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:08.177571 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:26.056188 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:26.197130 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:37.015536 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:39.048145 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:40.092269 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:42.023689 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:23:52.387052 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:24:00.828939 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:24:05.718486 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list 
v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused E0512 20:24:07.789335 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused
==> kubelet <==
-- Logs begin at Tue 2020-05-12 19:08:57 UTC, end at Tue 2020-05-12 20:24:14 UTC. --
May 12 20:24:11 minikube kubelet[26616]: E0512 20:24:11.056462 26616 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.RuntimeClass: Get https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 192.168.39.236:8443: connect: connection refused
May 12 20:24:11 minikube kubelet[26616]: I0512 20:24:11.135052 26616 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 12 20:24:11 minikube kubelet[26616]: E0512 20:24:11.138016 26616 kubelet.go:2267] node "minikube" not found
May 12 20:24:11 minikube kubelet[26616]: E0512 20:24:11.160978 26616 kubelet.go:1845] skipping pod synchronization - container runtime status check may not have completed yet
May 12 20:24:11 minikube kubelet[26616]: I0512 20:24:11.161629 26616 kubelet_node_status.go:70] Attempting to register node minikube
May 12 20:24:11 minikube kubelet[26616]: E0512 20:24:11.161887 26616 kubelet_node_status.go:92] Unable to register node "minikube" with API server: Post https://control-plane.minikube.internal:8443/api/v1/nodes: dial tcp 192.168.39.236:8443: connect: connection refused
May 12 20:24:11 minikube kubelet[26616]: I0512 20:24:11.165625 26616 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 12 20:24:11 minikube kubelet[26616]: F0512 20:24:11.183710 26616 kubelet.go:1383] Failed to start ContainerManager failed to build map of initial containers from runtime: no PodsandBox found with Id 'bd94ebdd85dee2bf02605d7317b147939088159c95c21c2674f064b2363f59ab'
May 12 20:24:11 minikube systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
May 12 20:24:11 minikube systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 12 20:24:11 minikube systemd[1]: kubelet.service: Service RestartSec=600ms expired, scheduling restart.
May 12 20:24:11 minikube systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 526.
May 12 20:24:11 minikube systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
May 12 20:24:11 minikube systemd[1]: Started kubelet: The Kubernetes Node Agent.
May 12 20:24:11 minikube kubelet[26843]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 12 20:24:11 minikube kubelet[26843]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 12 20:24:11 minikube kubelet[26843]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 12 20:24:11 minikube kubelet[26843]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 12 20:24:11 minikube kubelet[26843]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 12 20:24:11 minikube kubelet[26843]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 12 20:24:11 minikube kubelet[26843]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 12 20:24:11 minikube kubelet[26843]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 12 20:24:11 minikube kubelet[26843]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 12 20:24:11 minikube kubelet[26843]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 12 20:24:11 minikube kubelet[26843]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 12 20:24:11 minikube kubelet[26843]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 12 20:24:11 minikube kubelet[26843]: I0512 20:24:11.907767 26843 server.go:417] Version: v1.18.1
May 12 20:24:11 minikube kubelet[26843]: I0512 20:24:11.909087 26843 plugins.go:100] No cloud provider specified.
May 12 20:24:11 minikube kubelet[26843]: I0512 20:24:11.909189 26843 server.go:837] Client rotation is on, will bootstrap in background
May 12 20:24:11 minikube kubelet[26843]: I0512 20:24:11.913787 26843 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 12 20:24:11 minikube kubelet[26843]: I0512 20:24:11.975566 26843 server.go:646] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
May 12 20:24:11 minikube kubelet[26843]: I0512 20:24:11.975998 26843 container_manager_linux.go:266] container manager verified user specified cgroup-root exists: []
May 12 20:24:11 minikube kubelet[26843]: I0512 20:24:11.976020 26843 container_manager_linux.go:271] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
May 12 20:24:11 minikube kubelet[26843]: I0512 20:24:11.976100 26843 topology_manager.go:126] [topologymanager] Creating topology manager with none policy
May 12 20:24:11 minikube kubelet[26843]: I0512 20:24:11.976112 26843 container_manager_linux.go:301] [topologymanager] Initializing Topology Manager with none policy
May 12 20:24:11 minikube kubelet[26843]: I0512 20:24:11.976120 26843 container_manager_linux.go:306] Creating device plugin manager: true
May 12 20:24:11 minikube kubelet[26843]: I0512 20:24:11.976189 26843 client.go:75] Connecting to docker on unix:///var/run/docker.sock
May 12 20:24:11 minikube kubelet[26843]: I0512 20:24:11.976211 26843 client.go:92] Start docker client with request timeout=2m0s
May 12 20:24:11 minikube kubelet[26843]: W0512 20:24:11.981580 26843 docker_service.go:561] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
May 12 20:24:11 minikube kubelet[26843]: I0512 20:24:11.981612 26843 docker_service.go:238] Hairpin mode set to "hairpin-veth"
May 12 20:24:11 minikube kubelet[26843]: I0512 20:24:11.988524 26843 docker_service.go:253] Docker cri networking managed by kubernetes.io/no-op
May 12 20:24:11 minikube kubelet[26843]: I0512 20:24:11.996264 26843 docker_service.go:258] Docker Info: &{ID:PRY2:GDW4:3K6F:5HJE:6XX3:LC46:UGLY:HPWO:OZGR:PQHC:FDGX:ZYTW Containers:7 ContainersRunning:5 ContainersPaused:0 ContainersStopped:2 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem
❗ unable to fetch logs for: describe nodes
I am happy using the docker or virtualbox drivers for now, but I am really interested in understanding what is causing these errors.
Many thanks, Daniel
Thank you @djrollins, I am glad that the other drivers work. I will keep this issue open so we can find the root cause and fix it for all other users who might have the same problem as you. If anyone else hits this issue, please comment and let us know. Thank you again.
I looked at the most recent logs and still don't have a clue what may be going on here. The logs look OK to me.
Regrettably, there isn't enough information in this issue to make it actionable, and a long enough duration has passed, so this issue is likely difficult to replicate. If you can provide any additional details, such as:
- The exact minikube start command line used: preferably with --alsologtostderr -v=8 added
- The full output of the command
- The full output of "minikube logs"

Please feel free to do so at any point. Thank you for sharing your experience!
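For reference, a start invocation that captures all of that might look something like the following; the kvm2 driver and the log file names here are just examples, not a prescription:

minikube start --driver=kvm2 --alsologtostderr -v=8 2>&1 | tee minikube-start.log
minikube logs > minikube-logs.txt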
Meanwhile, have you tried out the newest driver, the Docker driver, with the latest version of minikube? You could try minikube delete followed by minikube start --driver=docker, as shown below.
For more information on the Docker driver, check out: https://minikube.sigs.k8s.io/docs/drivers/docker/
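Putting that together, the suggested sequence is roughly the following; note that minikube delete discards the existing cluster state:

minikube delete
minikube start --driver=docker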
Just a quick update as everything is working fine now:
I recently noticed that CPU virtualisation was disabled in my BIOS for some reason. I never thought to check it, as I just assumed libvirtd wouldn't even start if that was the case. Anyway, I switched it on and everything seems to run fine now.
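(For anyone else who ends up here: a quick way to confirm on the host that the virtualisation extensions are actually enabled is something like the commands below; virt-host-validate assumes the libvirt client tools are installed.)

grep -cE '(vmx|svm)' /proc/cpuinfo    # prints 0 if VT-x/AMD-V is not visible to the OS
virt-host-validate                    # libvirt's own check for usable hardware virtualisation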
Thank you for all of your help, but it seems like this one might have been user error!
When using the kvm2 driver, the "Launching Kubernetes" task is painfully slow and often fails with an error due to some connection timeout. The resulting error from minikube is different every time, so I have just presented the latest below.
In the case where minikube up does succeed, I regularly get 503 errors when running kubectl get componentstatuses, and commands like minikube dashboard often fail with timeouts. Using the virtualbox driver works perfectly fine, so I can use it for now. But I am a little stumped as to how to resolve this.
Many thanks,
Daniel
The exact command to reproduce the issue:
The full output of the command that failed:
The output of the minikube logs command:
The operating system version: Arch Linux. Linux snowden 5.5.10-arch1-1 #1 SMP PREEMPT Wed, 18 Mar 2020 08:40:35 +0000 x86_64 GNU/Linux