kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

host reuse on older ISO: Failed to preload container runtime Docker #6938

Closed: prasadkatti closed this issue 4 years ago

prasadkatti commented 4 years ago

The exact command to reproduce the issue: run minikube start on macOS with minikube v1.8.1.

The full output of the command that failed:

$ minikube start
😄 minikube v1.8.1 on Darwin 10.14.6
✨ Using the hyperkit driver based on existing profile
💾 Downloading driver docker-machine-driver-hyperkit:

docker-machine-driver-hyperkit.sha256: 65 B / 65 B [---] 100.00% ? p/s 0s
docker-machine-driver-hyperkit: 10.90 MiB / 10.90 MiB 100.00% 8.69 MiB p/s
🔑 The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /Users/pk186040/.minikube/bin/docker-machine-driver-hyperkit
$ sudo chmod u+s /Users/pk186040/.minikube/bin/docker-machine-driver-hyperkit

Password:
💿 Downloading VM boot image ...

minikube-v1.8.0.iso.sha256: 65 B / 65 B [--------------] 100.00% ? p/s 0s
minikube-v1.8.0.iso: 173.56 MiB / 173.56 MiB [] 100.00% 10.09 MiB p/s 18s
⌛ Reconfiguring existing host ...
🏃 Using the running hyperkit "minikube" VM ...
💾 Downloading preloaded images tarball for k8s v1.17.3 ...
preloaded-images-k8s-v1-v1.17.3-docker-overlay2.tar.lz4: 499.26 MiB / 499
E0307 00:43:09.645355 22278 config.go:71] Failed to preload container runtime Docker: extracting tarball: stderr tar: invalid option -- 'I'
BusyBox v1.29.3 (2020-02-20 00:24:17 PST) multi-call binary.

Usage: tar c|x|t [-hvokO] [-f TARFILE] [-C DIR] [-T FILE] [-X FILE] [--exclude PATTERN]... [FILE]...

Create, extract, or list files from a tar file

c   Create
x   Extract
t   List
-f FILE Name of TARFILE ('-' for stdin/out)
-C DIR  Change to DIR before operation
-v  Verbose
-O  Extract to stdout
-o  Don't restore user:group
-k  Don't replace existing files
-h  Follow symlinks
-T FILE File with names to include
-X FILE File with glob patterns to exclude
--exclude PATTERN   Glob pattern to exclude

/stderr : sudo tar -I lz4 -C /var -xvf /preloaded.tar.lz4: Process exited with status 1
stdout:

stderr: tar: invalid option -- 'I'
BusyBox v1.29.3 (2020-02-20 00:24:17 PST) multi-call binary.

(BusyBox tar usage text repeated, identical to the listing above)

, falling back to caching images
🐳 Preparing Kubernetes v1.17.3 on Docker 19.03.6 ...
🚀 Launching Kubernetes ...
🌟 Enabling addons: default-storageclass, storage-provisioner
🏄 Done! kubectl is now configured to use "minikube"
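The failing step is the preload extraction inside the VM. GNU tar accepts -I PROG to filter an archive through a decompressor, but the BusyBox tar in the reused VM's older boot image does not (see the usage text above), so the command exits with status 1 and minikube falls back to the slower image-caching path. As a sketch of the difference and of the usual way out of the stale-host situation (whether an lz4 binary exists inside this guest image is an assumption, not something verified here):

# What minikube ran inside the VM; -I is a GNU tar extension,
# so BusyBox tar rejects it:
$ sudo tar -I lz4 -C /var -xvf /preloaded.tar.lz4
tar: invalid option -- 'I'

# BusyBox-compatible equivalent: decompress separately and pipe the
# result to tar, using -f - to read the archive from stdin:
$ lz4 -dc /preloaded.tar.lz4 | sudo tar -x -f - -C /var

# Clean fix when an older host is being reused: recreate the VM so the
# guest userland matches the running minikube release:
$ minikube delete
$ minikube start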

The output of the minikube logs command:

==> Docker <== -- Logs begin at Sat 2020-03-07 01:25:02 UTC, end at Sat 2020-03-07 08:58:46 UTC. -- Mar 07 08:52:33 minikube dockerd[10037]: time="2020-03-07T08:52:33.227871610Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c8fb204e7c521f4990cbd63346d4fdb9f1053cc0d0573491708e9eabfd6c8ccc/shim.sock" debug=false pid=26774 Mar 07 08:52:33 minikube dockerd[10037]: time="2020-03-07T08:52:33.499967324Z" level=info msg="shim reaped" id=c8fb204e7c521f4990cbd63346d4fdb9f1053cc0d0573491708e9eabfd6c8ccc Mar 07 08:52:33 minikube dockerd[10037]: time="2020-03-07T08:52:33.510219035Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete" Mar 07 08:52:53 minikube dockerd[10037]: time="2020-03-07T08:52:53.482653762Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e29e1be084980e764ae7799e5853ebcc6e6f964befca25a25fe2169e4a175985/shim.sock" debug=false pid=27458 Mar 07 08:52:53 minikube dockerd[10037]: time="2020-03-07T08:52:53.749118042Z" level=info msg="shim reaped" id=e29e1be084980e764ae7799e5853ebcc6e6f964befca25a25fe2169e4a175985 Mar 07 08:52:53 minikube dockerd[10037]: time="2020-03-07T08:52:53.759351586Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete" Mar 07 08:53:10 minikube dockerd[10037]: time="2020-03-07T08:53:10.747603187Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/eb5d9d4e69ca5417375be8fef3276349b17d78c7645593474aedbb9c37caaa6c/shim.sock" debug=false pid=27798 Mar 07 08:53:10 minikube dockerd[10037]: time="2020-03-07T08:53:10.975606482Z" level=info msg="shim reaped" id=eb5d9d4e69ca5417375be8fef3276349b17d78c7645593474aedbb9c37caaa6c Mar 07 08:53:10 minikube dockerd[10037]: time="2020-03-07T08:53:10.982072034Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete" Mar 07 08:53:31 minikube dockerd[10037]: time="2020-03-07T08:53:31.275665320Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e89336c8d6e9dcc374bb54ccbe9b6a950f3b024ee2d72c9eda74749296ac3245/shim.sock" debug=false pid=28497 Mar 07 08:53:31 minikube dockerd[10037]: time="2020-03-07T08:53:31.512679634Z" level=info msg="shim reaped" id=e89336c8d6e9dcc374bb54ccbe9b6a950f3b024ee2d72c9eda74749296ac3245 Mar 07 08:53:31 minikube dockerd[10037]: time="2020-03-07T08:53:31.523017777Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete" Mar 07 08:53:54 minikube dockerd[10037]: time="2020-03-07T08:53:54.516634669Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9eb745e91091899b1dd479e6fddc10bdd9eaf553fc40182315000f96ef9b7f65/shim.sock" debug=false pid=28858 Mar 07 08:53:54 minikube dockerd[10037]: time="2020-03-07T08:53:54.779206185Z" level=info msg="shim reaped" id=9eb745e91091899b1dd479e6fddc10bdd9eaf553fc40182315000f96ef9b7f65 Mar 07 08:53:54 minikube dockerd[10037]: time="2020-03-07T08:53:54.790182789Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete" Mar 07 08:54:09 minikube dockerd[10037]: time="2020-03-07T08:54:09.898473877Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2ee8f29530687843a84a49dd160523ccecba8e36fee3d15b1c660508b419ca62/shim.sock" debug=false pid=29488 Mar 07 08:54:10 minikube dockerd[10037]: time="2020-03-07T08:54:10.171680200Z" level=info 
msg="shim reaped" id=2ee8f29530687843a84a49dd160523ccecba8e36fee3d15b1c660508b419ca62 Mar 07 08:54:10 minikube dockerd[10037]: time="2020-03-07T08:54:10.181746001Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete" Mar 07 08:54:26 minikube dockerd[10037]: time="2020-03-07T08:54:26.222311472Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f944e5931c2a236a00e6b1d65f82f0cdc726ecb14e5a61ec6ac312076008f5c7/shim.sock" debug=false pid=29840 Mar 07 08:54:26 minikube dockerd[10037]: time="2020-03-07T08:54:26.445838547Z" level=info msg="shim reaped" id=f944e5931c2a236a00e6b1d65f82f0cdc726ecb14e5a61ec6ac312076008f5c7 Mar 07 08:54:26 minikube dockerd[10037]: time="2020-03-07T08:54:26.456022411Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete" Mar 07 08:54:46 minikube dockerd[10037]: time="2020-03-07T08:54:46.997520879Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3576461142ee2b5edd3b0a64e044fdb34cc142d17fad563df8aeb7f15a6035fc/shim.sock" debug=false pid=30503 Mar 07 08:54:47 minikube dockerd[10037]: time="2020-03-07T08:54:47.232393026Z" level=info msg="shim reaped" id=3576461142ee2b5edd3b0a64e044fdb34cc142d17fad563df8aeb7f15a6035fc Mar 07 08:54:47 minikube dockerd[10037]: time="2020-03-07T08:54:47.242482957Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete" Mar 07 08:55:07 minikube dockerd[10037]: time="2020-03-07T08:55:07.264900434Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f858267071f5247b64b66dbb5eedb0a786b497427e3cb899efa8410ad9e75052/shim.sock" debug=false pid=30865 Mar 07 08:55:07 minikube dockerd[10037]: time="2020-03-07T08:55:07.502180788Z" level=info msg="shim reaped" id=f858267071f5247b64b66dbb5eedb0a786b497427e3cb899efa8410ad9e75052 Mar 07 08:55:07 minikube dockerd[10037]: time="2020-03-07T08:55:07.512270040Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete" Mar 07 08:55:23 minikube dockerd[10037]: time="2020-03-07T08:55:23.954771941Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9f2defadabf0c3b8d3a6e00c7549921a16e6ca2f2ae9ecd2e24911f61ab67c9c/shim.sock" debug=false pid=31507 Mar 07 08:55:24 minikube dockerd[10037]: time="2020-03-07T08:55:24.184653745Z" level=info msg="shim reaped" id=9f2defadabf0c3b8d3a6e00c7549921a16e6ca2f2ae9ecd2e24911f61ab67c9c Mar 07 08:55:24 minikube dockerd[10037]: time="2020-03-07T08:55:24.194958637Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete" Mar 07 08:55:38 minikube dockerd[10037]: time="2020-03-07T08:55:38.179447155Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6456d077255aa515399720c4e1809f1542040d2dd7acffda5b9b1efd481301d2/shim.sock" debug=false pid=31830 Mar 07 08:55:38 minikube dockerd[10037]: time="2020-03-07T08:55:38.410727345Z" level=info msg="shim reaped" id=6456d077255aa515399720c4e1809f1542040d2dd7acffda5b9b1efd481301d2 Mar 07 08:55:38 minikube dockerd[10037]: time="2020-03-07T08:55:38.420667883Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete" Mar 07 08:56:03 minikube dockerd[10037]: time="2020-03-07T08:56:03.303183590Z" level=info msg="shim containerd-shim started" 
address="/containerd-shim/moby/fee752e4718c685028c6bb208ada7a7ed7de6b42651966362a88396bdc12cf6a/shim.sock" debug=false pid=32551 Mar 07 08:56:03 minikube dockerd[10037]: time="2020-03-07T08:56:03.552184896Z" level=info msg="shim reaped" id=fee752e4718c685028c6bb208ada7a7ed7de6b42651966362a88396bdc12cf6a Mar 07 08:56:03 minikube dockerd[10037]: time="2020-03-07T08:56:03.562279119Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete" Mar 07 08:56:24 minikube dockerd[10037]: time="2020-03-07T08:56:24.572900908Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d73a4c5e9c0acfff493a139743b9fe87207e48d08282963e37f6baf006de6b1d/shim.sock" debug=false pid=464 Mar 07 08:56:24 minikube dockerd[10037]: time="2020-03-07T08:56:24.818731047Z" level=info msg="shim reaped" id=d73a4c5e9c0acfff493a139743b9fe87207e48d08282963e37f6baf006de6b1d Mar 07 08:56:24 minikube dockerd[10037]: time="2020-03-07T08:56:24.828763007Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete" Mar 07 08:56:40 minikube dockerd[10037]: time="2020-03-07T08:56:40.296127213Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e36021884ed23c35fd24fe2b907ddf19153d0200fca760d1803181053872e28f/shim.sock" debug=false pid=1009 Mar 07 08:56:40 minikube dockerd[10037]: time="2020-03-07T08:56:40.557957049Z" level=info msg="shim reaped" id=e36021884ed23c35fd24fe2b907ddf19153d0200fca760d1803181053872e28f Mar 07 08:56:40 minikube dockerd[10037]: time="2020-03-07T08:56:40.568089485Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete" Mar 07 08:56:59 minikube dockerd[10037]: time="2020-03-07T08:56:59.775420868Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2a7dd0910e02635c8b219977f9ee83f53a8b95d7f562177cb725cddf6b026a5b/shim.sock" debug=false pid=1479 Mar 07 08:57:00 minikube dockerd[10037]: time="2020-03-07T08:56:59.999946409Z" level=info msg="shim reaped" id=2a7dd0910e02635c8b219977f9ee83f53a8b95d7f562177cb725cddf6b026a5b Mar 07 08:57:00 minikube dockerd[10037]: time="2020-03-07T08:57:00.010116657Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete" Mar 07 08:57:19 minikube dockerd[10037]: time="2020-03-07T08:57:19.274921479Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2935b5d2fc1e47fe8c81feb03ad98eb0c6fb3832dd9e596d8d4e5d46a5d07c7f/shim.sock" debug=false pid=2215 Mar 07 08:57:19 minikube dockerd[10037]: time="2020-03-07T08:57:19.520553806Z" level=info msg="shim reaped" id=2935b5d2fc1e47fe8c81feb03ad98eb0c6fb3832dd9e596d8d4e5d46a5d07c7f Mar 07 08:57:19 minikube dockerd[10037]: time="2020-03-07T08:57:19.530745552Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete" Mar 07 08:57:38 minikube dockerd[10037]: time="2020-03-07T08:57:38.481643442Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fff2a312d7f64d009673516af317c5be6f77547cdbcc9fd659aac682a5e3a732/shim.sock" debug=false pid=2568 Mar 07 08:57:38 minikube dockerd[10037]: time="2020-03-07T08:57:38.703719392Z" level=info msg="shim reaped" id=fff2a312d7f64d009673516af317c5be6f77547cdbcc9fd659aac682a5e3a732 Mar 07 08:57:38 minikube dockerd[10037]: time="2020-03-07T08:57:38.714080099Z" level=info msg="ignoring event" module=libcontainerd 
namespace=moby topic=/tasks/delete type="events.TaskDelete" Mar 07 08:57:55 minikube dockerd[10037]: time="2020-03-07T08:57:55.947469349Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/19366314df2a2a59d05445986765f963314fea97d91ffd4e055d6c39a4ac7737/shim.sock" debug=false pid=3082 Mar 07 08:57:56 minikube dockerd[10037]: time="2020-03-07T08:57:56.287883433Z" level=info msg="shim reaped" id=19366314df2a2a59d05445986765f963314fea97d91ffd4e055d6c39a4ac7737 Mar 07 08:57:56 minikube dockerd[10037]: time="2020-03-07T08:57:56.299378802Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete" Mar 07 08:58:06 minikube dockerd[10037]: time="2020-03-07T08:58:06.589517838Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b4594e86f3fa81e31dc70534a6f6d78f5ac3b1ee5383eedfc891f7f378770892/shim.sock" debug=false pid=3529 Mar 07 08:58:06 minikube dockerd[10037]: time="2020-03-07T08:58:06.852208094Z" level=info msg="shim reaped" id=b4594e86f3fa81e31dc70534a6f6d78f5ac3b1ee5383eedfc891f7f378770892 Mar 07 08:58:06 minikube dockerd[10037]: time="2020-03-07T08:58:06.862609803Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete" Mar 07 08:58:34 minikube dockerd[10037]: time="2020-03-07T08:58:34.437714241Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/cfde064cd9e809c1e0fb82281bb77b7a82b367286272d75ec28fe61229081824/shim.sock" debug=false pid=4257 Mar 07 08:58:34 minikube dockerd[10037]: time="2020-03-07T08:58:34.681394709Z" level=info msg="shim reaped" id=cfde064cd9e809c1e0fb82281bb77b7a82b367286272d75ec28fe61229081824 Mar 07 08:58:34 minikube dockerd[10037]: time="2020-03-07T08:58:34.695702122Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"

==> container status <== time="2020-03-07T08:58:48Z" level=fatal msg="failed to connect: failed to connect, make sure you are running as root and the runtime has been started: context deadline exceeded" CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES cfde064cd9e8 90d27391b780 "kube-apiserver --ad…" 14 seconds ago Exited (1) 13 seconds ago k8s_kube-apiserver_kube-apiserver-m01_kube-system_843fdef6ad4425676e0cf00e2b405f5d_55 032a96727d3c 303ce5db0e90 "etcd --advertise-cl…" 15 minutes ago Up 15 minutes k8s_etcd_etcd-m01_kube-system_3753eb30cd4909ae02795771db823bb6_0 c61affc04199 k8s.gcr.io/pause:3.1 "/pause" 15 minutes ago Up 15 minutes k8s_POD_etcd-m01_kube-system_3753eb30cd4909ae02795771db823bb6_0 e6a5e4907cca d109c0821a2b "kube-scheduler --au…" 15 minutes ago Up 15 minutes k8s_kube-scheduler_kube-scheduler-m01_kube-system_e3025acd90e7465e66fa19c71b916366_10 4bb26e750c91 b0f1517c1f4b "kube-controller-man…" 15 minutes ago Up 15 minutes k8s_kube-controller-manager_kube-controller-manager-m01_kube-system_67b7e5352c5d7693f9bfac40cd9df88f_9 cd94e4fcc4b3 k8s.gcr.io/pause:3.1 "/pause" 15 minutes ago Up 15 minutes k8s_POD_kube-scheduler-m01_kube-system_e3025acd90e7465e66fa19c71b916366_10 f2d71f5621b5 k8s.gcr.io/pause:3.1 "/pause" 15 minutes ago Up 15 minutes k8s_POD_kube-controller-manager-m01_kube-system_67b7e5352c5d7693f9bfac40cd9df88f_11 de6165f0accd k8s.gcr.io/pause:3.1 "/pause" 15 minutes ago Up 15 minutes k8s_POD_kube-apiserver-m01_kube-system_843fdef6ad4425676e0cf00e2b405f5d_8 64991637316b d109c0821a2b "kube-scheduler --au…" 26 hours ago Exited (2) 16 minutes ago k8s_kube-scheduler_kube-scheduler-minikube_kube-system_e3025acd90e7465e66fa19c71b916366_9 9fb58023370f k8s.gcr.io/pause:3.1 "/pause" 26 hours ago Exited (0) 16 minutes ago k8s_POD_kube-scheduler-minikube_kube-system_e3025acd90e7465e66fa19c71b916366_9 f041302fa986 a90209bb39e3 "nginx -g 'daemon of…" 26 hours ago Exited (0) 16 minutes ago k8s_echoserver_hello-minikube-5cd8589d76-g46wp_default_b675124f-a89b-4289-99ba-450c95a63a88_2 c603327a318e k8s.gcr.io/pause:3.1 "/pause" 26 hours ago Exited (0) 16 minutes ago k8s_POD_hello-minikube-5cd8589d76-g46wp_default_b675124f-a89b-4289-99ba-450c95a63a88_2 9eb9f79a6264 b0f1517c1f4b "kube-controller-man…" 26 hours ago Exited (2) 16 minutes ago k8s_kube-controller-manager_kube-controller-manager-minikube_kube-system_67b7e5352c5d7693f9bfac40cd9df88f_8 8c7aee89b568 k8s.gcr.io/pause:3.1 "/pause" 26 hours ago Exited (0) 16 minutes ago k8s_POD_kube-controller-manager-minikube_kube-system_67b7e5352c5d7693f9bfac40cd9df88f_10 01a181751732 4689081edb10 "/storage-provisioner" 28 hours ago Exited (2) 26 hours ago k8s_storage-provisioner_storage-provisioner_kube-system_ecd0272a-2765-44d2-b243-d26cc5fb3548_9 683316f57f35 nginx "nginx -g 'daemon of…" 28 hours ago Exited (0) 26 hours ago k8s_nginx_nginx-86c57db685-jrzss_default_dd2ed034-000a-424e-bac2-d6cd138eaa39_1 28fb3131d414 70f311871ae1 "/coredns -conf /etc…" 28 hours ago Exited (0) 26 hours ago k8s_coredns_coredns-6955765f44-dgc8v_kube-system_b6f300c8-ad52-4ee2-89c8-0c9b0ba483c2_6 29909eec7da0 ae853e93800d "/usr/local/bin/kube…" 28 hours ago Exited (2) 26 hours ago k8s_kube-proxy_kube-proxy-6g2g4_kube-system_332ca3f8-facc-4627-9a74-06df2581a591_8 02ebfc024e86 70f311871ae1 "/coredns -conf /etc…" 28 hours ago Exited (0) 26 hours ago k8s_coredns_coredns-6955765f44-6wlbs_kube-system_df5a880b-de8f-46c7-b2a7-4c6fd20c67f5_6 c8725d879eac k8s.gcr.io/pause:3.1 "/pause" 28 hours ago Exited (0) 26 hours ago 
k8s_POD_nginx-86c57db685-jrzss_default_dd2ed034-000a-424e-bac2-d6cd138eaa39_1 63d2f19c62de k8s.gcr.io/pause:3.1 "/pause" 28 hours ago Exited (0) 26 hours ago k8s_POD_kube-proxy-6g2g4_kube-system_332ca3f8-facc-4627-9a74-06df2581a591_8 ec1599197621 k8s.gcr.io/pause:3.1 "/pause" 28 hours ago Exited (0) 26 hours ago k8s_POD_coredns-6955765f44-dgc8v_kube-system_b6f300c8-ad52-4ee2-89c8-0c9b0ba483c2_7 ef72f2619380 k8s.gcr.io/pause:3.1 "/pause" 28 hours ago Exited (0) 26 hours ago k8s_POD_coredns-6955765f44-6wlbs_kube-system_df5a880b-de8f-46c7-b2a7-4c6fd20c67f5_6 061d86d9b1c4 k8s.gcr.io/pause:3.1 "/pause" 28 hours ago Exited (0) 26 hours ago k8s_POD_storage-provisioner_kube-system_ecd0272a-2765-44d2-b243-d26cc5fb3548_6 17f41fc49450 303ce5db0e90 "etcd --advertise-cl…" 28 hours ago Exited (0) 26 hours ago k8s_etcd_etcd-minikube_kube-system_8231cd179f46356704fffd1de2a53955_6 46bc7a52747e k8s.gcr.io/pause:3.1 "/pause" 28 hours ago Exited (0) 26 hours ago k8s_POD_etcd-minikube_kube-system_8231cd179f46356704fffd1de2a53955_7
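Worth noting in the status above: kube-apiserver is crash-looping (restart count 55, the newest attempt exited 13 seconds ago) while etcd, kube-scheduler, and kube-controller-manager stay up, and the shim started/reaped pairs repeating roughly every 20 seconds in the Docker log are that same loop. A generic way to pull the crashing container's output from inside the VM (inspection sketch only, not a step from the original report; the ID is the one shown above):

$ minikube ssh
# list apiserver containers and their restart state
$ docker ps -a --filter name=kube-apiserver --format '{{.ID}} {{.Status}} {{.Names}}'
# read the most recent crash's output
$ docker logs cfde064cd9e8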

==> coredns [02ebfc024e86] <== .:53 [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7 CoreDNS-1.6.5 linux/amd64, go1.13.4, c2fd1b2 [INFO] plugin/ready: Still waiting on: "kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes" I0306 04:49:10.697548 1 trace.go:82] Trace[591074266]: "Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch" (started: 2020-03-06 04:48:40.695314778 +0000 UTC m=+0.146673528) (total time: 30.002100219s): Trace[591074266]: [30.002100219s] [30.002100219s] END E0306 04:49:10.697758 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E0306 04:49:10.697758 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E0306 04:49:10.697758 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E0306 04:49:10.697758 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout I0306 04:49:10.698378 1 trace.go:82] Trace[1201310217]: "Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch" (started: 2020-03-06 04:48:40.695428387 +0000 UTC m=+0.146787135) (total time: 30.002927261s): Trace[1201310217]: [30.002927261s] [30.002927261s] END E0306 04:49:10.698587 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E0306 04:49:10.698587 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E0306 04:49:10.698587 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E0306 04:49:10.698587 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout I0306 04:49:10.699189 1 trace.go:82] Trace[538829256]: "Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch" (started: 2020-03-06 04:48:40.698931736 +0000 UTC m=+0.150290478) (total time: 30.000246527s): Trace[538829256]: [30.000246527s] [30.000246527s] END E0306 04:49:10.699223 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Endpoints: Get 
https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E0306 04:49:10.699223 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E0306 04:49:10.699223 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E0306 04:49:10.699223 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout [INFO] plugin/ready: Still waiting on: "kubernetes" E0306 06:55:31.036174 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=243149&timeout=9m34s&timeoutSeconds=574&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E0306 06:55:31.036355 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=231217&timeout=8m36s&timeoutSeconds=516&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E0306 06:55:31.036424 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?resourceVersion=231217&timeout=6m51s&timeoutSeconds=411&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E0306 06:55:31.036174 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=243149&timeout=9m34s&timeoutSeconds=574&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E0306 06:55:31.036174 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=243149&timeout=9m34s&timeoutSeconds=574&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E0306 06:55:31.036174 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=243149&timeout=9m34s&timeoutSeconds=574&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E0306 06:55:31.036355 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=231217&timeout=8m36s&timeoutSeconds=516&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E0306 06:55:31.036355 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=231217&timeout=8m36s&timeoutSeconds=516&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E0306 
06:55:31.036355 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=231217&timeout=8m36s&timeoutSeconds=516&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E0306 06:55:31.036424 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?resourceVersion=231217&timeout=6m51s&timeoutSeconds=411&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E0306 06:55:31.036424 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?resourceVersion=231217&timeout=6m51s&timeoutSeconds=411&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E0306 06:55:31.036424 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?resourceVersion=231217&timeout=6m51s&timeoutSeconds=411&watch=true: dial tcp 10.96.0.1:443: connect: connection refused [INFO] SIGTERM: Shutting down servers then terminating [INFO] plugin/health: Going into lameduck mode for 5s

==> coredns [28fb3131d414] <== [INFO] plugin/ready: Still waiting on: "kubernetes" .:53 [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7 CoreDNS-1.6.5 linux/amd64, go1.13.4, c2fd1b2 [INFO] plugin/ready: Still waiting on: "kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes" I0306 04:49:10.700468 1 trace.go:82] Trace[1257420654]: "Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch" (started: 2020-03-06 04:48:40.695650699 +0000 UTC m=+0.071269985) (total time: 30.00473718s): Trace[1257420654]: [30.00473718s] [30.00473718s] END E0306 04:49:10.700562 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E0306 04:49:10.700562 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E0306 04:49:10.700562 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E0306 04:49:10.700562 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout I0306 04:49:10.700863 1 trace.go:82] Trace[1117763322]: "Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch" (started: 2020-03-06 04:48:40.700153274 +0000 UTC m=+0.075772535) (total time: 30.00069515s): Trace[1117763322]: [30.00069515s] [30.00069515s] END E0306 04:49:10.700953 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E0306 04:49:10.700953 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E0306 04:49:10.700953 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E0306 04:49:10.700953 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout I0306 04:49:10.701557 1 trace.go:82] Trace[1555033039]: "Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch" (started: 2020-03-06 04:48:40.695682082 +0000 UTC m=+0.071301358) (total time: 30.00585841s): Trace[1555033039]: [30.00585841s] [30.00585841s] END E0306 04:49:10.701594 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list 
v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E0306 04:49:10.701594 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E0306 04:49:10.701594 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E0306 04:49:10.701594 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E0306 06:55:31.032854 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?resourceVersion=231217&timeout=9m47s&timeoutSeconds=587&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E0306 06:55:31.032938 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=243149&timeout=5m34s&timeoutSeconds=334&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E0306 06:55:31.032985 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=231217&timeout=6m52s&timeoutSeconds=412&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E0306 06:55:31.032854 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?resourceVersion=231217&timeout=9m47s&timeoutSeconds=587&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E0306 06:55:31.032854 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?resourceVersion=231217&timeout=9m47s&timeoutSeconds=587&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E0306 06:55:31.032854 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?resourceVersion=231217&timeout=9m47s&timeoutSeconds=587&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E0306 06:55:31.032938 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=243149&timeout=5m34s&timeoutSeconds=334&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E0306 06:55:31.032938 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=243149&timeout=5m34s&timeoutSeconds=334&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E0306 06:55:31.032938 1
reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=243149&timeout=5m34s&timeoutSeconds=334&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E0306 06:55:31.032985 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=231217&timeout=6m52s&timeoutSeconds=412&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E0306 06:55:31.032985 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=231217&timeout=6m52s&timeoutSeconds=412&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E0306 06:55:31.032985 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=231217&timeout=6m52s&timeoutSeconds=412&watch=true: dial tcp 10.96.0.1:443: connect: connection refused [INFO] SIGTERM: Shutting down servers then terminating [INFO] plugin/health: Going into lameduck mode for 5s
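Both coredns containers log the same sequence: list calls to the in-cluster apiserver VIP at 10.96.0.1:443 time out, the later watch calls get connection refused once the apiserver goes down, and the pods end with a SIGTERM. To tell a dead apiserver apart from a broken service VIP, a hedged pair of checks (the curl probe assumes curl is available in the guest image):

# from the host: is the kubernetes service backed by an endpoint?
$ kubectl get endpoints kubernetes
# from inside the VM: probe the VIP the pods are dialing
$ minikube ssh -- curl -sk https://10.96.0.1:443/healthz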

==> dmesg <== [Mar 6 21:28] ERROR: earlyprintk= earlyser already used [ +0.000000] You have booted with nomodeset. This means your GPU drivers are DISABLED [ +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly [ +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it [ +0.006455] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xC0, should be 0x1D (20180810/tbprint-177) [Mar 6 21:29] ACPI Error: Could not enable RealTimeClock event (20180810/evxfevnt-184) [ +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20180810/evxface-620) [ +0.005986] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2 [ +1.823812] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument [ +0.005440] systemd-fstab-generator[1096]: Ignoring "noauto" for root device [ +0.002768] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling. [ +0.000002] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.) [ +0.933979] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack. [ +0.056134] vboxguest: loading out-of-tree module taints kernel. [ +0.004214] vboxguest: PCI device not found, probably running on physical hardware. [ +3.059953] systemd-fstab-generator[1856]: Ignoring "noauto" for root device [ +0.154427] systemd-fstab-generator[1872]: Ignoring "noauto" for root device [ +4.414437] systemd-fstab-generator[2395]: Ignoring "noauto" for root device [ +9.444321] kauditd_printk_skb: 107 callbacks suppressed [ +8.201117] kauditd_printk_skb: 32 callbacks suppressed [ +21.557968] kauditd_printk_skb: 56 callbacks suppressed [Mar 6 21:30] kauditd_printk_skb: 2 callbacks suppressed [Mar 6 21:31] NFSD: Unable to end grace period: -110 [Mar 6 22:29] clocksource: timekeeping watchdog on CPU0: Marking clocksource 'tsc' as unstable because the skew is too large: [ +0.000096] clocksource: 'hpet' wd_now: 70e06bcc wd_last: 7001de97 mask: ffffffff [ +0.000058] clocksource: 'tsc' cs_now: 8a2d25685ec cs_last: 8a1fc16f538 mask: ffffffffffffffff [ +0.000222] TSC found unstable after boot, most likely due to broken BIOS. Use 'tsc=unstable'. [ +11.392967] hrtimer: interrupt took 1351932 ns [Mar 6 22:59] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. [Mar 6 23:02] systemd-fstab-generator[25121]: Ignoring "noauto" for root device [ +15.946110] kauditd_printk_skb: 11 callbacks suppressed [ +22.268856] kauditd_printk_skb: 56 callbacks suppressed [Mar 6 23:03] kauditd_printk_skb: 2 callbacks suppressed [Mar 7 08:33] kauditd_printk_skb: 8 callbacks suppressed [Mar 7 08:42] systemd-fstab-generator[10005]: Ignoring "noauto" for root device [ +0.203922] systemd-fstab-generator[10021]: Ignoring "noauto" for root device [ +0.849182] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. [ +0.000003] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
[ +0.011860] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. [ +0.000002] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. [ +0.075393] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. [ +0.000003] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. [ +0.101479] kauditd_printk_skb: 8 callbacks suppressed [Mar 7 08:43] systemd-fstab-generator[10496]: Ignoring "noauto" for root device [ +13.438925] kauditd_printk_skb: 48 callbacks suppressed

==> kernel <==
08:58:48 up 11:29, 0 users, load average: 0.74, 0.57, 0.52
Linux minikube 4.19.94 #1 SMP Thu Feb 20 00:37:50 PST 2020 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2019.02.9"

==> kube-apiserver [cfde064cd9e8] <==
api/all=true|false controls all API versions
api/ga=true|false controls all API versions of the form v[0-9]+
api/beta=true|false controls all API versions of the form v[0-9]+beta[0-9]+
api/alpha=true|false controls all API versions of the form v[0-9]+alpha[0-9]+
api/legacy is deprecated, and will be removed in a future version

Egress selector flags:

  --egress-selector-config-file string   File with apiserver egress selector configuration.

Admission flags:

  --admission-control strings              Admission is divided into two phases. In the first phase, only mutating admission plugins run. In the second phase, only validating admission plugins run. The names in the below list may represent a validating plugin, a mutating plugin, or both. The order of plugins in which they are passed to this flag does not matter. Comma-delimited list of: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. (DEPRECATED: Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.)
  --admission-control-config-file string   File with admission control configuration.
  --disable-admission-plugins strings      admission plugins that should be disabled although they are in the default enabled plugins list (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, RuntimeClass, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
  --enable-admission-plugins strings       admission plugins that should be enabled in addition to default enabled ones (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, RuntimeClass, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.

Metrics flags:

  --show-hidden-metrics-for-version string   The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful, other values will not be allowed. The format is <major>.<minor>, e.g.: '1.16'. The purpose of this format is make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that.

Misc flags:

  --allow-privileged                          If true, allow privileged containers. [default=false]
  --apiserver-count int                       The number of apiservers running in the cluster, must be a positive number. (In use when --endpoint-reconciler-type=master-count is enabled.) (default 1)
  --enable-aggregator-routing                 Turns on aggregator routing requests to endpoints IP rather than cluster IP.
  --endpoint-reconciler-type string           Use an endpoint reconciler (master-count, lease, none) (default "lease")
  --event-ttl duration                        Amount of time to retain events. (default 1h0m0s)
  --kubelet-certificate-authority string      Path to a cert file for the certificate authority.
  --kubelet-client-certificate string         Path to a client cert file for TLS.
  --kubelet-client-key string                 Path to a client key file for TLS.
  --kubelet-https                             Use https for kubelet connections. (default true)
  --kubelet-preferred-address-types strings   List of the preferred NodeAddressTypes to use for kubelet connections. (default [Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP])
  --kubelet-timeout duration                  Timeout for kubelet operations. (default 5s)
  --kubernetes-service-node-port int          If non-zero, the Kubernetes master service (which apiserver creates/maintains) will be of type NodePort, using this as the value of the port. If zero, the Kubernetes master service will be of type ClusterIP.
  --max-connection-bytes-per-sec int          If non-zero, throttle each user connection to this number of bytes/sec. Currently only applies to long-running requests.
  --proxy-client-cert-file string             Client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins. It is expected that this cert includes a signature from the CA in the --requestheader-client-ca-file flag. That CA is published in the 'extension-apiserver-authentication' configmap in the kube-system namespace. Components receiving calls from kube-aggregator should use that CA to perform their half of the mutual TLS verification.
  --proxy-client-key-file string              Private key for the client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins.
  --service-account-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)
  --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.
  --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)

Global flags:

  --add-dir-header                   If true, adds the file directory to the header
  --alsologtostderr                  log to standard error as well as files

  -h, --help                             help for kube-apiserver
      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)
      --log-dir string                   If non-empty, write log files in this directory
      --log-file string                  If non-empty, use this log file
      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)
      --logtostderr                      log to standard error instead of files (default true)
      --skip-headers                     If true, avoid header prefixes in the log messages
      --skip-log-headers                 If true, avoid headers when opening log files
      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)
  -v, --v Level                          number for the log level verbosity
      --version version[=true]           Print version information and quit
      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging
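The section above is the crashing kube-apiserver's own help text, which it typically prints when started with a flag it does not recognize, so a bad or stale flag in the static pod manifest is a plausible culprit. If apiserver flags need to be set through minikube rather than by editing the manifest, the usual passthrough is --extra-config (the plugin list below is only an example, not a recommendation from this report):

$ minikube start --extra-config=apiserver.enable-admission-plugins=NodeRestriction,PodSecurityPolicy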

==> kube-controller-manager [4bb26e750c91] <== I0307 08:43:48.116445 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts I0307 08:43:48.116883 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions I0307 08:43:48.117194 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for statefulsets.apps I0307 08:43:48.117528 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps I0307 08:43:48.117870 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io I0307 08:43:48.118211 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io I0307 08:43:48.118562 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io I0307 08:43:48.118655 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.apps I0307 08:43:48.118711 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch I0307 08:43:48.119122 1 controllermanager.go:533] Started "resourcequota" I0307 08:43:48.119228 1 resource_quota_controller.go:271] Starting resource quota controller I0307 08:43:48.119252 1 shared_informer.go:197] Waiting for caches to sync for resource quota I0307 08:43:48.119310 1 resource_quota_monitor.go:303] QuotaMonitor running I0307 08:43:48.153077 1 controllermanager.go:533] Started "namespace" I0307 08:43:48.153117 1 namespace_controller.go:200] Starting namespace controller I0307 08:43:48.153914 1 shared_informer.go:197] Waiting for caches to sync for namespace I0307 08:43:48.163230 1 controllermanager.go:533] Started "bootstrapsigner" I0307 08:43:48.163539 1 shared_informer.go:197] Waiting for caches to sync for bootstrap_signer I0307 08:43:48.216423 1 controllermanager.go:533] Started "persistentvolume-expander" I0307 08:43:48.216983 1 expand_controller.go:319] Starting expand controller I0307 08:43:48.217229 1 shared_informer.go:197] Waiting for caches to sync for expand W0307 08:43:48.262732 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist I0307 08:43:48.268663 1 shared_informer.go:204] Caches are synced for certificate-csrsigning I0307 08:43:48.269136 1 shared_informer.go:204] Caches are synced for certificate-csrapproving I0307 08:43:48.270739 1 shared_informer.go:204] Caches are synced for persistent volume I0307 08:43:48.271676 1 shared_informer.go:204] Caches are synced for bootstrap_signer I0307 08:43:48.274382 1 shared_informer.go:204] Caches are synced for daemon sets I0307 08:43:48.295639 1 shared_informer.go:204] Caches are synced for PV protection I0307 08:43:48.315080 1 shared_informer.go:204] Caches are synced for ReplicaSet I0307 08:43:48.316337 1 shared_informer.go:204] Caches are synced for PVC protection I0307 08:43:48.317160 1 shared_informer.go:204] Caches are synced for GC I0307 08:43:48.318151 1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator I0307 08:43:48.318721 1 shared_informer.go:204] Caches are synced for TTL I0307 08:43:48.319109 1 shared_informer.go:204] Caches are synced for expand I0307 08:43:48.326582 1 shared_informer.go:204] Caches are synced for HPA I0307 08:43:48.335468 1 shared_informer.go:204] Caches are synced for stateful set I0307 08:43:48.337757 1 
shared_informer.go:204] Caches are synced for job I0307 08:43:48.348999 1 shared_informer.go:204] Caches are synced for taint I0307 08:43:48.349273 1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: I0307 08:43:48.349582 1 taint_manager.go:186] Starting NoExecuteTaintManager W0307 08:43:48.349760 1 node_lifecycle_controller.go:1058] Missing timestamp for Node minikube. Assuming now as a timestamp. I0307 08:43:48.350886 1 node_lifecycle_controller.go:1259] Controller detected that zone is now in state Normal. I0307 08:43:48.352175 1 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"527ec157-f937-4081-915b-37a7760fe20b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller I0307 08:43:48.355366 1 shared_informer.go:204] Caches are synced for namespace I0307 08:43:48.365770 1 shared_informer.go:204] Caches are synced for service account I0307 08:43:48.417949 1 shared_informer.go:204] Caches are synced for ReplicationController I0307 08:43:48.652761 1 shared_informer.go:204] Caches are synced for attach detach I0307 08:43:48.717685 1 shared_informer.go:204] Caches are synced for deployment I0307 08:43:48.765195 1 shared_informer.go:204] Caches are synced for disruption I0307 08:43:48.765260 1 disruption.go:338] Sending events to api server. I0307 08:43:48.768195 1 shared_informer.go:204] Caches are synced for endpoint I0307 08:43:48.819793 1 shared_informer.go:204] Caches are synced for resource quota I0307 08:43:48.827611 1 shared_informer.go:204] Caches are synced for garbage collector I0307 08:43:48.827690 1 garbagecollector.go:138] Garbage collector: all resource monitors have synced. Proceeding to collect garbage I0307 08:43:49.316708 1 shared_informer.go:197] Waiting for caches to sync for garbage collector I0307 08:43:49.316783 1 shared_informer.go:204] Caches are synced for garbage collector I0307 08:43:49.861236 1 shared_informer.go:197] Waiting for caches to sync for resource quota I0307 08:43:49.861301 1 shared_informer.go:204] Caches are synced for resource quota I0307 08:44:28.367732 1 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"527ec157-f937-4081-915b-37a7760fe20b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node minikube status is now: NodeNotReady I0307 08:44:28.459970 1 node_lifecycle_controller.go:1209] Controller detected that all Nodes are not-Ready. Entering master disruption mode.

==> kube-controller-manager [9eb9f79a6264] <==
I0306 06:58:48.556484 1 disruption.go:330] Starting disruption controller
I0306 06:58:48.556502 1 shared_informer.go:197] Waiting for caches to sync for disruption
I0306 06:58:48.708018 1 controllermanager.go:533] Started "csrcleaner"
I0306 06:58:48.708221 1 cleaner.go:81] Starting CSR cleaner controller
I0306 06:58:48.861018 1 controllermanager.go:533] Started "persistentvolume-binder"
I0306 06:58:48.861375 1 pv_controller_base.go:294] Starting persistent volume controller
I0306 06:58:48.861532 1 shared_informer.go:197] Waiting for caches to sync for persistent volume
I0306 06:58:49.009406 1 controllermanager.go:533] Started "pvc-protection"
I0306 06:58:49.009486 1 pvc_protection_controller.go:100] Starting PVC protection controller
I0306 06:58:49.009509 1 shared_informer.go:197] Waiting for caches to sync for PVC protection
I0306 06:58:49.812333 1 controllermanager.go:533] Started "garbagecollector"
I0306 06:58:49.812692 1 garbagecollector.go:129] Starting garbage collector controller
I0306 06:58:49.812711 1 shared_informer.go:197] Waiting for caches to sync for garbage collector
I0306 06:58:49.812738 1 graph_builder.go:282] GraphBuilder running
I0306 06:58:49.828480 1 controllermanager.go:533] Started "cronjob"
I0306 06:58:49.829322 1 cronjob_controller.go:97] Starting CronJob Manager
I0306 06:58:49.844762 1 controllermanager.go:533] Started "ttl"
W0306 06:58:49.844805 1 controllermanager.go:525] Skipping "ttl-after-finished"
I0306 06:58:49.844963 1 shared_informer.go:197] Waiting for caches to sync for resource quota
I0306 06:58:49.845416 1 ttl_controller.go:116] Starting TTL controller
I0306 06:58:49.845437 1 shared_informer.go:197] Waiting for caches to sync for TTL
W0306 06:58:49.891330 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0306 06:58:49.910598 1 shared_informer.go:204] Caches are synced for PV protection
I0306 06:58:49.910712 1 shared_informer.go:204] Caches are synced for PVC protection
I0306 06:58:49.910883 1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator
I0306 06:58:49.911157 1 shared_informer.go:204] Caches are synced for certificate-csrapproving
I0306 06:58:49.912511 1 shared_informer.go:204] Caches are synced for certificate-csrsigning
I0306 06:58:49.932523 1 shared_informer.go:204] Caches are synced for namespace
I0306 06:58:49.939503 1 shared_informer.go:204] Caches are synced for endpoint
I0306 06:58:49.946602 1 shared_informer.go:204] Caches are synced for TTL
I0306 06:58:49.959467 1 shared_informer.go:204] Caches are synced for ReplicationController
I0306 06:58:49.961528 1 shared_informer.go:204] Caches are synced for bootstrap_signer
I0306 06:58:49.965702 1 shared_informer.go:204] Caches are synced for taint
I0306 06:58:49.966703 1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone:
W0306 06:58:49.966879 1 node_lifecycle_controller.go:1058] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0306 06:58:49.967254 1 node_lifecycle_controller.go:1259] Controller detected that zone is now in state Normal.
I0306 06:58:49.967830 1 taint_manager.go:186] Starting NoExecuteTaintManager
I0306 06:58:49.969219 1 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"527ec157-f937-4081-915b-37a7760fe20b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
I0306 06:58:49.968396 1 shared_informer.go:204] Caches are synced for GC
I0306 06:58:49.975016 1 shared_informer.go:204] Caches are synced for service account
I0306 06:58:49.987840 1 shared_informer.go:204] Caches are synced for job
I0306 06:58:50.012284 1 shared_informer.go:204] Caches are synced for attach detach
I0306 06:58:50.062039 1 shared_informer.go:204] Caches are synced for persistent volume
I0306 06:58:50.070948 1 shared_informer.go:204] Caches are synced for expand
I0306 06:58:50.215430 1 shared_informer.go:204] Caches are synced for HPA
I0306 06:58:50.250062 1 shared_informer.go:204] Caches are synced for ReplicaSet
I0306 06:58:50.259823 1 shared_informer.go:204] Caches are synced for daemon sets
I0306 06:58:50.310470 1 shared_informer.go:204] Caches are synced for stateful set
I0306 06:58:50.363769 1 shared_informer.go:204] Caches are synced for deployment
I0306 06:58:50.418680 1 shared_informer.go:204] Caches are synced for garbage collector
I0306 06:58:50.418850 1 garbagecollector.go:138] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0306 06:58:50.445620 1 shared_informer.go:204] Caches are synced for resource quota
I0306 06:58:50.457102 1 shared_informer.go:204] Caches are synced for disruption
I0306 06:58:50.457121 1 disruption.go:338] Sending events to api server.
I0306 06:58:50.463904 1 shared_informer.go:204] Caches are synced for resource quota
I0306 06:58:51.308850 1 shared_informer.go:197] Waiting for caches to sync for garbage collector
I0306 06:58:51.308981 1 shared_informer.go:204] Caches are synced for garbage collector
I0307 04:23:18.334163 1 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"527ec157-f937-4081-915b-37a7760fe20b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node minikube status is now: NodeNotReady
I0307 04:23:19.258066 1 node_lifecycle_controller.go:1209] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0307 04:23:29.260103 1 node_lifecycle_controller.go:1236] Controller detected that some Nodes are Ready. Exiting master disruption mode.

==> kube-proxy [29909eec7da0] <==
W0306 04:48:40.879474 1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
I0306 04:48:40.900865 1 node.go:135] Successfully retrieved node IP: 192.168.64.3
I0306 04:48:40.900890 1 server_others.go:145] Using iptables Proxier.
W0306 04:48:40.901549 1 proxier.go:286] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0306 04:48:40.901737 1 server.go:571] Version: v1.17.3
I0306 04:48:40.903128 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0306 04:48:40.903179 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0306 04:48:40.903851 1 conntrack.go:83] Setting conntrack hashsize to 32768
I0306 04:48:40.909191 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0306 04:48:40.909263 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0306 04:48:40.914005 1 config.go:313] Starting service config controller
I0306 04:48:40.914047 1 shared_informer.go:197] Waiting for caches to sync for service config
I0306 04:48:40.914152 1 config.go:131] Starting endpoints config controller
I0306 04:48:40.914160 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I0306 04:48:41.014342 1 shared_informer.go:204] Caches are synced for endpoints config
I0306 04:48:41.014427 1 shared_informer.go:204] Caches are synced for service config
E0306 06:55:31.054834 1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=243149&timeout=6m32s&timeoutSeconds=392&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:55:31.054942 1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch v1.Service: Get https://localhost:8443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=231217&timeout=7m5s&timeoutSeconds=425&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused

==> kube-scheduler [64991637316b] <==
E0306 06:56:12.100883 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:12.101704 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:12.103534 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:12.105799 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:12.107981 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:12.109524 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:12.112039 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:12.113163 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:12.114677 1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list v1.ConfigMap: Get https://localhost:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:12.114685 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.CSINode: Get https://localhost:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:12.116340 1 reflector.go:153] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:13.104725 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:13.104765 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:13.105607 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:13.105740 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:13.107211 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:13.109740 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:13.111434 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:13.113197 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:13.114230 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:13.116012 1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list v1.ConfigMap: Get https://localhost:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:13.116928 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.CSINode: Get https://localhost:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:13.118405 1 reflector.go:153] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:14.106360 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:14.107451 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:14.109007 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:14.109950 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:14.111432 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:14.112578 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:14.113934 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:14.115358 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:14.116674 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:14.118231 1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list v1.ConfigMap: Get https://localhost:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:14.119581 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.CSINode: Get https://localhost:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:14.120910 1 reflector.go:153] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:15.110665 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:15.111555 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:15.111661 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:15.111982 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:15.113198 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:15.113769 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:15.114961 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:15.117228 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:15.117997 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:15.119403 1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list v1.ConfigMap: Get https://localhost:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:15.120717 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.CSINode: Get https://localhost:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:15.121759 1 reflector.go:153] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:16.112814 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:16.113705 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:16.115121 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:16.116867 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:16.118863 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:16.121173 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:16.121857 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:16.122924 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:16.125454 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:16.125461 1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list v1.ConfigMap: Get https://localhost:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:16.127145 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.CSINode: Get https://localhost:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:16.128525 1 reflector.go:153] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0306 06:56:17.118414 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused

==> kube-scheduler [e6a5e4907cca] <==
I0307 08:43:24.847082 1 serving.go:312] Generated self-signed cert in-memory
W0307 08:43:25.352605 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W0307 08:43:25.353263 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W0307 08:43:26.080354 1 authorization.go:47] Authorization is disabled
W0307 08:43:26.080900 1 authentication.go:92] Authentication is disabled
I0307 08:43:26.081358 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0307 08:43:26.083705 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0307 08:43:26.085427 1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0307 08:43:26.085932 1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0307 08:43:26.085481 1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0307 08:43:26.096534 1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0307 08:43:26.085525 1 tlsconfig.go:219] Starting DynamicServingCertificateController
I0307 08:43:26.186501 1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0307 08:43:26.187047 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler...
I0307 08:43:26.197463 1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0307 08:43:41.857199 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler

==> kubelet <==
-- Logs begin at Sat 2020-03-07 01:25:02 UTC, end at Sat 2020-03-07 08:58:48 UTC. --
Mar 07 08:58:43 minikube kubelet[3732]: E0307 08:58:43.457353 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:43 minikube kubelet[3732]: E0307 08:58:43.558258 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:43 minikube kubelet[3732]: E0307 08:58:43.659123 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:43 minikube kubelet[3732]: E0307 08:58:43.724146 3732 eviction_manager.go:246] eviction manager: failed to get summary stats: failed to get node info: node "m01" not found
Mar 07 08:58:43 minikube kubelet[3732]: E0307 08:58:43.759532 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:43 minikube kubelet[3732]: E0307 08:58:43.860569 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:43 minikube kubelet[3732]: E0307 08:58:43.961054 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:44 minikube kubelet[3732]: E0307 08:58:44.061939 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:44 minikube kubelet[3732]: E0307 08:58:44.162520 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:44 minikube kubelet[3732]: E0307 08:58:44.264227 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:44 minikube kubelet[3732]: E0307 08:58:44.366438 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:44 minikube kubelet[3732]: E0307 08:58:44.468381 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:44 minikube kubelet[3732]: E0307 08:58:44.569408 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:44 minikube kubelet[3732]: E0307 08:58:44.670114 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:44 minikube kubelet[3732]: E0307 08:58:44.771355 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:44 minikube kubelet[3732]: E0307 08:58:44.872131 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:44 minikube kubelet[3732]: E0307 08:58:44.972914 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:45 minikube kubelet[3732]: E0307 08:58:45.074166 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:45 minikube kubelet[3732]: E0307 08:58:45.174556 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:45 minikube kubelet[3732]: E0307 08:58:45.275627 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:45 minikube kubelet[3732]: E0307 08:58:45.377470 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:45 minikube kubelet[3732]: E0307 08:58:45.478391 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:45 minikube kubelet[3732]: E0307 08:58:45.580075 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:45 minikube kubelet[3732]: E0307 08:58:45.681495 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:45 minikube kubelet[3732]: E0307 08:58:45.782287 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:45 minikube kubelet[3732]: E0307 08:58:45.885857 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:45 minikube kubelet[3732]: E0307 08:58:45.986603 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:46 minikube kubelet[3732]: E0307 08:58:46.064552 3732 controller.go:135] failed to ensure node lease exists, will retry in 7s, error: leases.coordination.k8s.io "m01" is forbidden: User "system:node:minikube" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease": can only access node lease with the same name as the requesting node
Mar 07 08:58:46 minikube kubelet[3732]: E0307 08:58:46.087225 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:46 minikube kubelet[3732]: E0307 08:58:46.188029 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:46 minikube kubelet[3732]: E0307 08:58:46.288386 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:46 minikube kubelet[3732]: E0307 08:58:46.388706 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:46 minikube kubelet[3732]: E0307 08:58:46.489175 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:46 minikube kubelet[3732]: E0307 08:58:46.589952 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:46 minikube kubelet[3732]: E0307 08:58:46.690750 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:46 minikube kubelet[3732]: E0307 08:58:46.792757 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:46 minikube kubelet[3732]: E0307 08:58:46.896520 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:46 minikube kubelet[3732]: E0307 08:58:46.998974 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:47 minikube kubelet[3732]: E0307 08:58:47.099455 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:47 minikube kubelet[3732]: E0307 08:58:47.199743 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:47 minikube kubelet[3732]: E0307 08:58:47.301008 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:47 minikube kubelet[3732]: E0307 08:58:47.401781 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:47 minikube kubelet[3732]: E0307 08:58:47.504634 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:47 minikube kubelet[3732]: E0307 08:58:47.607279 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:47 minikube kubelet[3732]: I0307 08:58:47.665119 3732 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
Mar 07 08:58:47 minikube kubelet[3732]: I0307 08:58:47.706067 3732 kubelet_node_status.go:70] Attempting to register node m01
Mar 07 08:58:47 minikube kubelet[3732]: E0307 08:58:47.708374 3732 kubelet_node_status.go:92] Unable to register node "m01" with API server: nodes "m01" is forbidden: node "minikube" is not allowed to modify node "m01"
Mar 07 08:58:47 minikube kubelet[3732]: E0307 08:58:47.709568 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:47 minikube kubelet[3732]: E0307 08:58:47.810237 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:47 minikube kubelet[3732]: E0307 08:58:47.910899 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:48 minikube kubelet[3732]: E0307 08:58:48.011786 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:48 minikube kubelet[3732]: E0307 08:58:48.112427 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:48 minikube kubelet[3732]: E0307 08:58:48.213389 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:48 minikube kubelet[3732]: E0307 08:58:48.313997 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:48 minikube kubelet[3732]: E0307 08:58:48.415147 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:48 minikube kubelet[3732]: E0307 08:58:48.515527 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:48 minikube kubelet[3732]: E0307 08:58:48.615974 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:48 minikube kubelet[3732]: E0307 08:58:48.716363 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:48 minikube kubelet[3732]: E0307 08:58:48.816585 3732 kubelet.go:2263] node "m01" not found
Mar 07 08:58:48 minikube kubelet[3732]: E0307 08:58:48.916979 3732 kubelet.go:2263] node "m01" not found

==> storage-provisioner [01a181751732] <==
E0306 06:55:31.051766 1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=81, ErrCode=NO_ERROR, debug=""
E0306 06:55:31.052812 1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=81, ErrCode=NO_ERROR, debug=""
E0306 06:55:31.052858 1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=81, ErrCode=NO_ERROR, debug=""
E0306 06:55:31.070546 1 reflector.go:315] k8s.io/minikube/vendor/github.com/r2d4/external-storage/lib/controller/controller.go:411: Failed to watch v1.PersistentVolumeClaim: Get https://10.96.0.1:443/api/v1/persistentvolumeclaims?resourceVersion=231217&timeoutSeconds=314&watch=true: dial tcp 10.96.0.1:443: getsockopt: connection refused
E0306 06:55:31.070676 1 reflector.go:315] k8s.io/minikube/vendor/github.com/r2d4/external-storage/lib/controller/controller.go:379: Failed to watch v1.StorageClass: Get https://10.96.0.1:443/apis/storage.k8s.io/v1/storageclasses?resourceVersion=231217&timeoutSeconds=309&watch=true: dial tcp 10.96.0.1:443: getsockopt: connection refused

The operating system version: macOS Mojave 10.14.6

afbjorklund commented 4 years ago

Since you are reusing an existing machine, it is running the older ISO, which doesn't have lz4 support. It should still work, though, by falling back to caching images rather than loading the preloaded tarball.
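
For anyone who wants to confirm which case they're in, one quick check from the host is whether the guest has lz4 at all (this assumes your minikube supports passing a command to minikube ssh, and the echoed message is just illustrative):

$ minikube ssh "command -v lz4 || echo 'no lz4 on this ISO'"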

afbjorklund commented 4 years ago

We could check whether the lz4 binary is available before trying to pass it to tar via the -I flag. That would avoid the scary output, especially since tar dumps its entire usage text (twice)...
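
Roughly this, sketched in shell (the real check would live in minikube's Go code, so the exact shape here is illustrative, not the actual implementation):

# Only attempt the lz4-compressed preload if the guest can decompress it;
# otherwise skip it and let the image-caching fallback run.
if command -v lz4 >/dev/null 2>&1; then
  sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
else
  echo 'lz4 not available in the guest; falling back to caching images' >&2
fi

On ISOs that do ship lz4, piping would also sidestep tar's -I flag entirely (BusyBox tar accepts '-' for stdin per its own usage text): lz4 -dc /preloaded.tar.lz4 | sudo tar -C /var -xf -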

tstromberg commented 4 years ago

Thanks for the report! The error is mostly harmless; it just indicates that a start-time optimization isn't available. It definitely looks scarier than it should.

If it bothers anyone, you can run minikube delete to create a new cluster from the latest ISO. On the plus side, it'll boot 10 seconds faster with this optimization.
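
For example (note that minikube delete removes the existing cluster and any state inside it):

$ minikube delete
$ minikube start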

hejunion commented 4 years ago

I was able to start after running minikube delete twice; it started OK on the 3rd attempt (using Git Bash on Windows 10): minikube start --vm-driver=virtualbox

Just wait longer. It seems more time is needed at the "Preparing Kubernetes v1.17.3 on Docker 19.03.6 ..." step.