kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

hyperv: Client.Timeout exceeded while awaiting headers #7945

Closed · hatharom closed this issue 4 years ago

hatharom commented 4 years ago

Steps to reproduce the issue: Minikube was working fine until I upgraded to 1.9.2.

1. `minikube delete`, then `minikube start` works fine (started with the Hyper-V driver).
2. `minikube status` shows:

```
m01
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
```

3. Launching `helm init`: Tiller reports ImagePullBackOff because of a timeout. At first I thought it was a Helm issue, but then I SSH'd into minikube.
4. `minikube ssh`

5. Inside minikube I ran the following commands:
   - `ping xyz.com` -> works fine, got a response
   - `curl xyz.com` -> just hangs
   - `docker pull <anything>`* -> timeout

*By "anything" I mean really anything:

- Helm's Tiller: `docker pull gcr.io/kubernetes-helm/tiller:v2.16.5`
- a common image: `docker pull postgres`
- an image already on minikube: `docker pull k8s.gcr.io/etcd`
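To separate name resolution from raw TCP connectivity inside the VM, a minimal set of checks looks roughly like this (a sketch, not from the original report; `xyz.com` is the same placeholder host as above):

```shell
# run inside `minikube ssh`
cat /etc/resolv.conf                     # which DNS server the VM is using
nslookup registry-1.docker.io            # does name resolution work at all?
ping -c 3 8.8.8.8                        # ICMP to a raw IP, bypassing DNS
curl -v --max-time 10 https://xyz.com/   # does the TCP/TLS handshake complete?
```

If ping succeeds but curl stalls before printing any TLS handshake output, the failure is in the TCP path (typically the Hyper-V virtual switch/NAT or an MTU mismatch) rather than in DNS.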

Full output of failed command:

```
Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
```
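Before wading through the full logs, it is also worth ruling out stale proxy settings inherited by the Docker daemon in the VM (again a sketch, run from the host; `minikube ssh -- <cmd>` executes a single command in the VM):

```shell
minikube ssh -- systemctl show docker --property=Environment  # any HTTP_PROXY/HTTPS_PROXY leftovers?
minikube ssh -- docker info 2>/dev/null | grep -i proxy       # proxy config as the daemon sees it
```

If neither command shows a proxy, the timeouts below point back at the Hyper-V networking layer itself.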

Optional: Full output of minikube logs command:

$ minikube logs * ==> Docker <== * -- Logs begin at Thu 2020-04-30 07:59:35 UTC, end at Thu 2020-04-30 09:59:33 UTC. -- * Apr 30 08:00:14 minikube dockerd[2688]: time="2020-04-30T08:00:14.711284600Z" level=info msg="API listen on [::]:2376" * Apr 30 08:00:28 minikube dockerd[2688]: time="2020-04-30T08:00:28.651397300Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ef0a42a61e59d54484d5e3dca1dbde7b1676393e9d8c3066fea90c9e24933f7b/shim.sock" debug=false pid=3621 * Apr 30 08:00:28 minikube dockerd[2688]: time="2020-04-30T08:00:28.664849000Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/bad8b13fde24ac80feefb5b6cd09e54d9d4cef2b23a02170f403c01c9f832d9a/shim.sock" debug=false pid=3631 * Apr 30 08:00:28 minikube dockerd[2688]: time="2020-04-30T08:00:28.686555300Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ec83da431042b02371c3f6d53ab67a9b9af16c6f6464319a3583d281035704ed/shim.sock" debug=false pid=3658 * Apr 30 08:00:28 minikube dockerd[2688]: time="2020-04-30T08:00:28.708696800Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/647ee256d82deb113eb9251617f128a7509593b1ee0c7c096f6a0a79d95b8cc0/shim.sock" debug=false pid=3670 * Apr 30 08:00:28 minikube dockerd[2688]: time="2020-04-30T08:00:28.969709500Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f0f0630f87332f5df0ebe2ccfd66ba941a23f296f0403a310c862b4ab35884bc/shim.sock" debug=false pid=3843 * Apr 30 08:00:29 minikube dockerd[2688]: time="2020-04-30T08:00:29.023688100Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/08094dd06697df364dbb906bd768941664bb38d1b5770de564841d32befd7038/shim.sock" debug=false pid=3885 * Apr 30 08:00:29 minikube dockerd[2688]: time="2020-04-30T08:00:29.058693100Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c839bfc1f3f405053fe546b5fa25ab7ea2aaeed9dfed9afc367f4d3e26458cd6/shim.sock" debug=false pid=3913 * Apr 30 08:00:29 minikube dockerd[2688]: time="2020-04-30T08:00:29.059996700Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8b163373344ed14d7be800064e1e0582f4c4ff4a3be0b356a6af37270d655356/shim.sock" debug=false pid=3920 * Apr 30 08:00:46 minikube dockerd[2688]: time="2020-04-30T08:00:46.307373100Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/47b98e64f2b06255ef246126421ff8af20c2269e2ee1ec5895d171e5690557c0/shim.sock" debug=false pid=4575 * Apr 30 08:00:46 minikube dockerd[2688]: time="2020-04-30T08:00:46.315352200Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c13956052d8540708754f3406e394f8a21095c36e771870fcc18cf43f41d746a/shim.sock" debug=false pid=4588 * Apr 30 08:00:46 minikube dockerd[2688]: time="2020-04-30T08:00:46.558905300Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/485ffd631e6d0de764da76348302e47b59b3ac0a326fc1f0e486450f9f4c4b58/shim.sock" debug=false pid=4694 * Apr 30 08:00:46 minikube dockerd[2688]: time="2020-04-30T08:00:46.683873300Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a6829d30ee59b55b6046fea7a13c4e37430973028e60b23aec2e85c832a22979/shim.sock" debug=false pid=4732 * Apr 30 08:00:46 minikube dockerd[2688]: time="2020-04-30T08:00:46.706928400Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b8c1db1a64cc67211332e3d62f460833b2d67d510e9aedb7df70fe9317c90b07/shim.sock" debug=false pid=4744 * Apr 30 
08:00:46 minikube dockerd[2688]: time="2020-04-30T08:00:46.806368100Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/edf02143dbc6eeab75995ce5fcc7479c8810957e262c0af046e97eca95da1d82/shim.sock" debug=false pid=4793 * Apr 30 08:00:50 minikube dockerd[2688]: time="2020-04-30T08:00:50.623484400Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/641be2c34ec8382dfab0d2368d49b06840d6fa3fdacc0d7d2dcfb5c7ad9be24b/shim.sock" debug=false pid=4920 * Apr 30 08:00:50 minikube dockerd[2688]: time="2020-04-30T08:00:50.822945200Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e8826d2d70a665749f02502e0d4a1735370a4414256f9a7b460292fd548f41cd/shim.sock" debug=false pid=4959 * Apr 30 08:05:13 minikube dockerd[2688]: time="2020-04-30T08:05:13.910813900Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/38de437dd21748c1b4a05f6433d4f7b9b29529df88cf4e78de85892da7839115/shim.sock" debug=false pid=6038 * Apr 30 08:05:29 minikube dockerd[2688]: time="2020-04-30T08:05:29.194112900Z" level=warning msg="Error getting v2 registry: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:05:29 minikube dockerd[2688]: time="2020-04-30T08:05:29.194185300Z" level=info msg="Attempting next endpoint for pull after error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:05:29 minikube dockerd[2688]: time="2020-04-30T08:05:29.194256500Z" level=error msg="Handler for POST /images/create returned error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:05:59 minikube dockerd[2688]: time="2020-04-30T08:05:59.760031300Z" level=warning msg="Error getting v2 registry: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:05:59 minikube dockerd[2688]: time="2020-04-30T08:05:59.760099000Z" level=info msg="Attempting next endpoint for pull after error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:05:59 minikube dockerd[2688]: time="2020-04-30T08:05:59.760185700Z" level=error msg="Handler for POST /images/create returned error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:06:42 minikube dockerd[2688]: time="2020-04-30T08:06:42.763022800Z" level=warning msg="Error getting v2 registry: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:06:42 minikube dockerd[2688]: time="2020-04-30T08:06:42.763077600Z" level=info msg="Attempting next endpoint for pull after error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:06:42 minikube dockerd[2688]: time="2020-04-30T08:06:42.763127000Z" level=error msg="Handler for POST /images/create returned error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:07:50 minikube dockerd[2688]: time="2020-04-30T08:07:50.767014200Z" level=warning msg="Error getting v2 registry: Get 
https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:07:50 minikube dockerd[2688]: time="2020-04-30T08:07:50.767084400Z" level=info msg="Attempting next endpoint for pull after error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:07:50 minikube dockerd[2688]: time="2020-04-30T08:07:50.767136600Z" level=error msg="Handler for POST /images/create returned error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:09:33 minikube dockerd[2688]: time="2020-04-30T08:09:33.760216000Z" level=warning msg="Error getting v2 registry: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:09:33 minikube dockerd[2688]: time="2020-04-30T08:09:33.760714600Z" level=info msg="Attempting next endpoint for pull after error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:09:33 minikube dockerd[2688]: time="2020-04-30T08:09:33.760755000Z" level=error msg="Handler for POST /images/create returned error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:12:30 minikube dockerd[2688]: time="2020-04-30T08:12:30.824905200Z" level=warning msg="Error getting v2 registry: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:12:30 minikube dockerd[2688]: time="2020-04-30T08:12:30.824955400Z" level=info msg="Attempting next endpoint for pull after error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:12:30 minikube dockerd[2688]: time="2020-04-30T08:12:30.824988400Z" level=error msg="Handler for POST /images/create returned error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:17:51 minikube dockerd[2688]: time="2020-04-30T08:17:51.816553500Z" level=warning msg="Error getting v2 registry: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:17:51 minikube dockerd[2688]: time="2020-04-30T08:17:51.816617600Z" level=info msg="Attempting next endpoint for pull after error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:17:51 minikube dockerd[2688]: time="2020-04-30T08:17:51.816654500Z" level=error msg="Handler for POST /images/create returned error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:21:49 minikube dockerd[2688]: time="2020-04-30T08:21:49.678425900Z" level=warning msg="Error getting v2 registry: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:21:49 minikube dockerd[2688]: time="2020-04-30T08:21:49.678509300Z" level=info msg="Attempting next endpoint for pull after error: Get https://gcr.io/v2/: net/http: request canceled while waiting for 
connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:21:49 minikube dockerd[2688]: time="2020-04-30T08:21:49.678598800Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:23:18 minikube dockerd[2688]: time="2020-04-30T08:23:18.842087000Z" level=warning msg="Error getting v2 registry: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:23:18 minikube dockerd[2688]: time="2020-04-30T08:23:18.842180500Z" level=info msg="Attempting next endpoint for pull after error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:23:18 minikube dockerd[2688]: time="2020-04-30T08:23:18.842243000Z" level=error msg="Handler for POST /images/create returned error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:23:27 minikube dockerd[2688]: time="2020-04-30T08:23:27.755916600Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:23:27 minikube dockerd[2688]: time="2020-04-30T08:23:27.755982800Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:23:27 minikube dockerd[2688]: time="2020-04-30T08:23:27.756036800Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:28:43 minikube dockerd[2688]: time="2020-04-30T08:28:43.828352400Z" level=warning msg="Error getting v2 registry: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:28:43 minikube dockerd[2688]: time="2020-04-30T08:28:43.828409100Z" level=info msg="Attempting next endpoint for pull after error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:28:43 minikube dockerd[2688]: time="2020-04-30T08:28:43.828444200Z" level=error msg="Handler for POST /images/create returned error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:32:13 minikube dockerd[2688]: time="2020-04-30T08:32:13.561733400Z" level=warning msg="Error getting v2 registry: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:32:13 minikube dockerd[2688]: time="2020-04-30T08:32:13.561800600Z" level=error msg="Not continuing with pull after error: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:32:13 minikube dockerd[2688]: time="2020-04-30T08:32:13.561943000Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for 
connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:34:00 minikube dockerd[2688]: time="2020-04-30T08:34:00.841845100Z" level=warning msg="Error getting v2 registry: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:34:00 minikube dockerd[2688]: time="2020-04-30T08:34:00.842788600Z" level=info msg="Attempting next endpoint for pull after error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:34:00 minikube dockerd[2688]: time="2020-04-30T08:34:00.842912400Z" level=error msg="Handler for POST /images/create returned error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:39:28 minikube dockerd[2688]: time="2020-04-30T08:39:28.882234700Z" level=warning msg="Error getting v2 registry: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:39:28 minikube dockerd[2688]: time="2020-04-30T08:39:28.882306300Z" level=info msg="Attempting next endpoint for pull after error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * Apr 30 08:39:28 minikube dockerd[2688]: time="2020-04-30T08:39:28.882359200Z" level=error msg="Handler for POST /images/create returned error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * * ==> container status <== * CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID * e8826d2d70a66 4689081edb103 44 minutes ago Running storage-provisioner 0 641be2c34ec83 * edf02143dbc6e 43940c34f24f3 44 minutes ago Running kube-proxy 0 485ffd631e6d0 * b8c1db1a64cc6 67da37a9a360e 44 minutes ago Running coredns 0 c13956052d854 * a6829d30ee59b 67da37a9a360e 44 minutes ago Running coredns 0 47b98e64f2b06 * 8b163373344ed 303ce5db0e90d 44 minutes ago Running etcd 0 ec83da431042b * c839bfc1f3f40 a31f78c7c8ce1 44 minutes ago Running kube-scheduler 0 647ee256d82de * 08094dd06697d d3e55153f52fb 44 minutes ago Running kube-controller-manager 0 bad8b13fde24a * f0f0630f87332 74060cea7f704 44 minutes ago Running kube-apiserver 0 ef0a42a61e59d * * ==> coredns [a6829d30ee59] <== * .:53 * [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7 * CoreDNS-1.6.7 * linux/amd64, go1.13.6, da7f65b * [INFO] plugin/ready: Still waiting on: "kubernetes" * [INFO] plugin/ready: Still waiting on: "kubernetes" * [INFO] plugin/ready: Still waiting on: "kubernetes" * I0430 08:01:16.835978 1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-04-30 08:00:46.835179 +0000 UTC m=+0.025744101) (total time: 30.0006646s): * Trace[2019727887]: [30.0006646s] [30.0006646s] END * E0430 08:01:16.837152 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * I0430 08:01:16.836522 1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-04-30 08:00:46.835774 +0000 UTC m=+0.026339101) (total time: 30.0007074s): * Trace[1427131847]: 
[30.0007074s] [30.0007074s] END * E0430 08:01:16.837274 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * I0430 08:01:16.836530 1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-04-30 08:00:46.8356098 +0000 UTC m=+0.026174901) (total time: 30.0008928s): * Trace[939984059]: [30.0008928s] [30.0008928s] END * E0430 08:01:16.837335 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * * ==> coredns [b8c1db1a64cc] <== * [INFO] plugin/ready: Still waiting on: "kubernetes" * .:53 * [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7 * CoreDNS-1.6.7 * linux/amd64, go1.13.6, da7f65b * [INFO] plugin/ready: Still waiting on: "kubernetes" * [INFO] plugin/ready: Still waiting on: "kubernetes" * I0430 08:01:16.885460 1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-04-30 08:00:46.8848241 +0000 UTC m=+0.020665501) (total time: 30.0005588s): * Trace[2019727887]: [30.0005588s] [30.0005588s] END * E0430 08:01:16.885510 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * I0430 08:01:16.887965 1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-04-30 08:00:46.8854012 +0000 UTC m=+0.021242701) (total time: 30.0025469s): * Trace[1427131847]: [30.0025469s] [30.0025469s] END * E0430 08:01:16.887977 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * I0430 08:01:16.888073 1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-04-30 08:00:46.8875724 +0000 UTC m=+0.023413901) (total time: 30.0004716s): * Trace[939984059]: [30.0004716s] [30.0004716s] END * E0430 08:01:16.888085 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * * ==> describe nodes <== * Name: minikube * Roles: master * Labels: beta.kubernetes.io/arch=amd64 * beta.kubernetes.io/os=linux * kubernetes.io/arch=amd64 * kubernetes.io/hostname=minikube * kubernetes.io/os=linux * minikube.k8s.io/commit=93af9c1e43cab9618e301bc9fa720c63d5efa393 * minikube.k8s.io/name=minikube * minikube.k8s.io/updated_at=2020_04_30T10_00_37_0700 * minikube.k8s.io/version=v1.9.2 * node-role.kubernetes.io/master= * Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock * node.alpha.kubernetes.io/ttl: 0 * volumes.kubernetes.io/controller-managed-attach-detach: true * CreationTimestamp: Thu, 30 Apr 2020 08:00:34 +0000 * Taints: * Unschedulable: false * Lease: * HolderIdentity: minikube * AcquireTime: * RenewTime: Thu, 30 Apr 2020 08:44:45 
+0000 * Conditions: * Type Status LastHeartbeatTime LastTransitionTime Reason Message * ---- ------ ----------------- ------------------ ------ ------- * MemoryPressure False Thu, 30 Apr 2020 08:40:53 +0000 Thu, 30 Apr 2020 08:00:28 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available * DiskPressure False Thu, 30 Apr 2020 08:40:53 +0000 Thu, 30 Apr 2020 08:00:28 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure * PIDPressure False Thu, 30 Apr 2020 08:40:53 +0000 Thu, 30 Apr 2020 08:00:28 +0000 KubeletHasSufficientPID kubelet has sufficient PID available * Ready True Thu, 30 Apr 2020 08:40:53 +0000 Thu, 30 Apr 2020 08:00:43 +0000 KubeletReady kubelet is posting ready status * Addresses: * InternalIP: 172.17.181.94 * Hostname: minikube * Capacity: * cpu: 2 * ephemeral-storage: 17784752Ki * hugepages-2Mi: 0 * memory: 3925652Ki * pods: 110 * Allocatable: * cpu: 2 * ephemeral-storage: 17784752Ki * hugepages-2Mi: 0 * memory: 3925652Ki * pods: 110 * System Info: * Machine ID: 1102b6568e8647a8abc99988c4d600e6 * System UUID: 99a0fccf-b3b5-f34b-9a6d-315e74982f1f * Boot ID: fd0ea0d2-daa1-4ac7-959d-432d1d7d860d * Kernel Version: 4.19.107 * OS Image: Buildroot 2019.02.10 * Operating System: linux * Architecture: amd64 * Container Runtime Version: docker://19.3.8 * Kubelet Version: v1.18.0 * Kube-Proxy Version: v1.18.0 * Non-terminated Pods: (9 in total) * Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE * --------- ---- ------------ ---------- --------------- ------------- --- * kube-system coredns-66bff467f8-htrg2 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 44m * kube-system coredns-66bff467f8-qwr2j 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 44m * kube-system etcd-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 44m * kube-system kube-apiserver-minikube 250m (12%) 0 (0%) 0 (0%) 0 (0%) 44m * kube-system kube-controller-manager-minikube 200m (10%) 0 (0%) 0 (0%) 0 (0%) 44m * kube-system kube-proxy-zdtnt 0 (0%) 0 (0%) 0 (0%) 0 (0%) 44m * kube-system kube-scheduler-minikube 100m (5%) 0 (0%) 0 (0%) 0 (0%) 44m * kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 44m * kube-system tiller-deploy-754f98dbfc-vpr4k 0 (0%) 0 (0%) 0 (0%) 0 (0%) 39m * Allocated resources: * (Total limits may be over 100 percent, i.e., overcommitted.) * Resource Requests Limits * -------- -------- ------ * cpu 750m (37%) 0 (0%) * memory 140Mi (3%) 340Mi (8%) * ephemeral-storage 0 (0%) 0 (0%) * hugepages-2Mi 0 (0%) 0 (0%) * Events: * Type Reason Age From Message * ---- ------ ---- ---- ------- * Normal NodeHasSufficientMemory 44m (x5 over 44m) kubelet, minikube Node minikube status is now: NodeHasSufficientMemory * Normal NodeHasNoDiskPressure 44m (x4 over 44m) kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure * Normal NodeHasSufficientPID 44m (x4 over 44m) kubelet, minikube Node minikube status is now: NodeHasSufficientPID * Normal Starting 44m kubelet, minikube Starting kubelet. 
* Normal NodeHasSufficientMemory 44m kubelet, minikube Node minikube status is now: NodeHasSufficientMemory * Normal NodeHasNoDiskPressure 44m kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure * Normal NodeHasSufficientPID 44m kubelet, minikube Node minikube status is now: NodeHasSufficientPID * Normal NodeNotReady 44m kubelet, minikube Node minikube status is now: NodeNotReady * Normal NodeAllocatableEnforced 44m kubelet, minikube Updated Node Allocatable limit across pods * Normal NodeReady 44m kubelet, minikube Node minikube status is now: NodeReady * Normal Starting 44m kube-proxy, minikube Starting kube-proxy. * * ==> dmesg <== * [Apr30 07:59] smpboot: 128 Processors exceeds NR_CPUS limit of 64 * [ +0.113586] You have booted with nomodeset. This means your GPU drivers are DISABLED * [ +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly * [ +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it * [ +0.104510] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. * [ +0.014452] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug, * * this clock source is slow. Consider trying other clock sources * [ +2.238888] Unstable clock detected, switching default tracing clock to "global" * If you want to keep using the local clock, then add: * "trace_clock=local" * on the kernel command line * [ +0.000036] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2 * [ +0.750787] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons * [ +0.239152] hrtimer: interrupt took 6302300 ns * [ +0.311908] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument * [ +0.003299] systemd-fstab-generator[1285]: Ignoring "noauto" for root device * [ +0.007149] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling. * [ +0.000002] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.) * [ +4.248845] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack. * [ +0.958035] vboxguest: loading out-of-tree module taints kernel. * [ +0.003387] vboxguest: PCI device not found, probably running on physical hardware. * [ +10.537940] systemd-fstab-generator[2463]: Ignoring "noauto" for root device * [Apr30 08:00] kauditd_printk_skb: 65 callbacks suppressed * [ +1.819714] systemd-fstab-generator[2902]: Ignoring "noauto" for root device * [ +1.247219] systemd-fstab-generator[3112]: Ignoring "noauto" for root device * [ +10.122864] kauditd_printk_skb: 107 callbacks suppressed * [ +9.257892] systemd-fstab-generator[4236]: Ignoring "noauto" for root device * [ +9.108993] kauditd_printk_skb: 32 callbacks suppressed * [Apr30 08:01] kauditd_printk_skb: 44 callbacks suppressed * [ +18.776391] NFSD: Unable to end grace period: -110 * [Apr30 08:05] kauditd_printk_skb: 2 callbacks suppressed * * ==> etcd [8b163373344e] <== * 2020-04-30 08:00:30.745913 W | auth: simple token is not cryptographically signed * 2020-04-30 08:00:30.752781 I | etcdserver: starting server... 
[version: 3.4.3, cluster version: to_be_decided] * 2020-04-30 08:00:30.754791 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = * 2020-04-30 08:00:30.754967 I | embed: listening for metrics on http://127.0.0.1:2381 * 2020-04-30 08:00:30.755184 I | embed: listening for peers on 172.17.181.94:2380 * 2020-04-30 08:00:30.755320 I | etcdserver: 31ff663ce1468d35 as single-node; fast-forwarding 9 ticks (election ticks 10) * raft2020/04/30 08:00:30 INFO: 31ff663ce1468d35 switched to configuration voters=(3602710638583254325) * 2020-04-30 08:00:30.755834 I | etcdserver/membership: added member 31ff663ce1468d35 [https://172.17.181.94:2380] to cluster 83fd5329ee03fc6f * raft2020/04/30 08:00:30 INFO: 31ff663ce1468d35 is starting a new election at term 1 * raft2020/04/30 08:00:30 INFO: 31ff663ce1468d35 became candidate at term 2 * raft2020/04/30 08:00:30 INFO: 31ff663ce1468d35 received MsgVoteResp from 31ff663ce1468d35 at term 2 * raft2020/04/30 08:00:30 INFO: 31ff663ce1468d35 became leader at term 2 * raft2020/04/30 08:00:30 INFO: raft.node: 31ff663ce1468d35 elected leader 31ff663ce1468d35 at term 2 * 2020-04-30 08:00:30.933768 I | etcdserver: setting up the initial cluster version to 3.4 * 2020-04-30 08:00:30.935612 N | etcdserver/membership: set the initial cluster version to 3.4 * 2020-04-30 08:00:30.935753 I | etcdserver/api: enabled capabilities for version 3.4 * 2020-04-30 08:00:30.935831 I | etcdserver: published {Name:minikube ClientURLs:[https://172.17.181.94:2379]} to cluster 83fd5329ee03fc6f * 2020-04-30 08:00:30.936027 I | embed: ready to serve client requests * 2020-04-30 08:00:30.936675 I | embed: ready to serve client requests * 2020-04-30 08:00:30.937593 I | embed: serving client requests on 172.17.181.94:2379 * 2020-04-30 08:00:30.943165 I | embed: serving client requests on 127.0.0.1:2379 * 2020-04-30 08:00:31.260260 W | etcdserver: read-only range request "key:\"/registry/apiextensions.k8s.io/customresourcedefinitions\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitiont\" count_only:true " with result "range_response_count:0 size:4" took too long (229.1695ms) to execute * 2020-04-30 08:00:31.260725 W | etcdserver: read-only range request "key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" limit:10000 " with result "range_response_count:0 size:4" took too long (229.7834ms) to execute * 2020-04-30 08:00:31.261063 W | etcdserver: read-only range request "key:\"/registry/apiextensions.k8s.io/customresourcedefinitions\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitiont\" count_only:true " with result "range_response_count:0 size:4" took too long (176.952ms) to execute * 2020-04-30 08:00:31.261394 W | etcdserver: read-only range request "key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" limit:10000 " with result "range_response_count:0 size:4" took too long (177.3469ms) to execute * 2020-04-30 08:00:31.261722 W | etcdserver: read-only range request "key:\"/registry/podtemplates\" range_end:\"/registry/podtemplatet\" count_only:true " with result "range_response_count:0 size:4" took too long (153.1651ms) to execute * 2020-04-30 08:00:31.262041 W | etcdserver: read-only range request "key:\"/registry/podtemplates/\" 
range_end:\"/registry/podtemplates0\" limit:10000 " with result "range_response_count:0 size:4" took too long (153.5366ms) to execute * 2020-04-30 08:00:31.262625 W | etcdserver: read-only range request "key:\"/registry/events\" range_end:\"/registry/eventt\" count_only:true " with result "range_response_count:0 size:4" took too long (139.3454ms) to execute * 2020-04-30 08:00:31.263871 W | etcdserver: read-only range request "key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" limit:10000 " with result "range_response_count:0 size:4" took too long (126.7576ms) to execute * 2020-04-30 08:00:31.265043 W | etcdserver: read-only range request "key:\"/registry/limitranges\" range_end:\"/registry/limitranget\" count_only:true " with result "range_response_count:0 size:4" took too long (122.1374ms) to execute * 2020-04-30 08:00:31.270191 W | etcdserver: read-only range request "key:\"/registry/resourcequotas\" range_end:\"/registry/resourcequotat\" count_only:true " with result "range_response_count:0 size:4" took too long (113.3211ms) to execute * 2020-04-30 08:00:31.270883 W | etcdserver: read-only range request "key:\"/registry/resourcequotas/\" range_end:\"/registry/resourcequotas0\" limit:10000 " with result "range_response_count:0 size:4" took too long (113.5468ms) to execute * 2020-04-30 08:00:31.271335 W | etcdserver: read-only range request "key:\"/registry/secrets\" range_end:\"/registry/secrett\" count_only:true " with result "range_response_count:0 size:4" took too long (111.6251ms) to execute * 2020-04-30 08:00:31.271691 W | etcdserver: read-only range request "key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" limit:10000 " with result "range_response_count:0 size:4" took too long (112.7108ms) to execute * 2020-04-30 08:00:31.283167 W | etcdserver: read-only range request "key:\"/registry/persistentvolumeclaims/\" range_end:\"/registry/persistentvolumeclaims0\" limit:10000 " with result "range_response_count:0 size:4" took too long (105.8657ms) to execute * 2020-04-30 08:00:31.285177 W | etcdserver: read-only range request "key:\"/registry/configmaps\" range_end:\"/registry/configmapt\" count_only:true " with result "range_response_count:0 size:4" took too long (102.303ms) to execute * 2020-04-30 08:00:31.286270 W | etcdserver: read-only range request "key:\"/registry/configmaps/\" range_end:\"/registry/configmaps0\" limit:10000 " with result "range_response_count:0 size:4" took too long (103.454ms) to execute * 2020-04-30 08:00:31.412831 W | etcdserver: read-only range request "key:\"/registry/namespaces/\" range_end:\"/registry/namespaces0\" limit:10000 " with result "range_response_count:0 size:4" took too long (223.9462ms) to execute * 2020-04-30 08:00:31.413358 W | etcdserver: read-only range request "key:\"/registry/services/endpoints\" range_end:\"/registry/services/endpointt\" count_only:true " with result "range_response_count:0 size:4" took too long (209.8838ms) to execute * 2020-04-30 08:00:31.422363 W | etcdserver: read-only range request "key:\"/registry/pods/\" range_end:\"/registry/pods0\" limit:10000 " with result "range_response_count:0 size:4" took too long (208.1ms) to execute * 2020-04-30 08:00:31.426235 W | etcdserver: read-only range request "key:\"/registry/minions\" range_end:\"/registry/miniont\" count_only:true " with result "range_response_count:0 size:4" took too long (214.2749ms) to execute * 2020-04-30 08:00:31.437829 W | etcdserver: read-only range request "key:\"/registry/pods\" range_end:\"/registry/podt\" count_only:true " 
with result "range_response_count:0 size:4" took too long (223.5133ms) to execute * 2020-04-30 08:00:31.439623 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/\" range_end:\"/registry/serviceaccounts0\" limit:10000 " with result "range_response_count:0 size:4" took too long (212.7922ms) to execute * 2020-04-30 08:00:31.439968 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts\" range_end:\"/registry/serviceaccountt\" count_only:true " with result "range_response_count:0 size:4" took too long (212.7686ms) to execute * 2020-04-30 08:00:31.440432 W | etcdserver: read-only range request "key:\"/registry/services/specs\" range_end:\"/registry/services/spect\" count_only:true " with result "range_response_count:0 size:4" took too long (201.7522ms) to execute * 2020-04-30 08:00:31.440895 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" limit:10000 " with result "range_response_count:0 size:4" took too long (226.4057ms) to execute * 2020-04-30 08:10:30.994756 I | mvcc: store.index: compact 1078 * 2020-04-30 08:10:31.017480 I | mvcc: finished scheduled compaction at 1078 (took 21.6614ms) * 2020-04-30 08:15:31.006469 I | mvcc: store.index: compact 1762 * 2020-04-30 08:15:31.029995 I | mvcc: finished scheduled compaction at 1762 (took 22.7554ms) * 2020-04-30 08:20:31.017081 I | mvcc: store.index: compact 2424 * 2020-04-30 08:20:31.020486 I | mvcc: finished scheduled compaction at 2424 (took 2.6514ms) * 2020-04-30 08:25:31.026835 I | mvcc: store.index: compact 3081 * 2020-04-30 08:25:31.029510 I | mvcc: finished scheduled compaction at 3081 (took 2.027ms) * 2020-04-30 08:30:31.035516 I | mvcc: store.index: compact 3738 * 2020-04-30 08:30:31.036892 I | mvcc: finished scheduled compaction at 3738 (took 982.8µs) * 2020-04-30 08:35:31.048647 I | mvcc: store.index: compact 4395 * 2020-04-30 08:35:31.051323 I | mvcc: finished scheduled compaction at 4395 (took 2.04ms) * 2020-04-30 08:40:31.060440 I | mvcc: store.index: compact 5052 * 2020-04-30 08:40:31.061989 I | mvcc: finished scheduled compaction at 5052 (took 951.6µs) * * ==> kernel <== * 08:44:52 up 45 min, 1 user, load average: 0.32, 0.43, 0.30 * Linux minikube 4.19.107 #1 SMP Thu Mar 26 11:33:10 PDT 2020 x86_64 GNU/Linux * PRETTY_NAME="Buildroot 2019.02.10" * * ==> kube-apiserver [f0f0630f8733] <== * W0430 08:00:32.039970 1 genericapiserver.go:409] Skipping API discovery.k8s.io/v1alpha1 because it has no resources. * W0430 08:00:32.049083 1 genericapiserver.go:409] Skipping API node.k8s.io/v1alpha1 because it has no resources. * W0430 08:00:32.063079 1 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources. * W0430 08:00:32.065623 1 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources. * W0430 08:00:32.076872 1 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources. * W0430 08:00:32.091977 1 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources. * W0430 08:00:32.092019 1 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources. * I0430 08:00:32.100965 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. 
* I0430 08:00:32.100987 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. * I0430 08:00:32.102458 1 client.go:361] parsed scheme: "endpoint" * I0430 08:00:32.102497 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0430 08:00:32.110150 1 client.go:361] parsed scheme: "endpoint" * I0430 08:00:32.110192 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0430 08:00:34.138110 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt * I0430 08:00:34.138297 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt * I0430 08:00:34.138553 1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key * I0430 08:00:34.138728 1 secure_serving.go:178] Serving securely on [::]:8443 * I0430 08:00:34.138764 1 available_controller.go:387] Starting AvailableConditionController * I0430 08:00:34.138783 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller * I0430 08:00:34.138798 1 tlsconfig.go:240] Starting DynamicServingCertificateController * I0430 08:00:34.139042 1 controller.go:81] Starting OpenAPI AggregationController * I0430 08:00:34.139124 1 autoregister_controller.go:141] Starting autoregister controller * I0430 08:00:34.139183 1 cache.go:32] Waiting for caches to sync for autoregister controller * I0430 08:00:34.139351 1 apiservice_controller.go:94] Starting APIServiceRegistrationController * I0430 08:00:34.139413 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller * I0430 08:00:34.140341 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller * I0430 08:00:34.140419 1 shared_informer.go:223] Waiting for caches to sync for cluster_authentication_trust_controller * I0430 08:00:34.140783 1 crd_finalizer.go:266] Starting CRDFinalizer * I0430 08:00:34.140912 1 crdregistration_controller.go:111] Starting crd-autoregister controller * I0430 08:00:34.140985 1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister * I0430 08:00:34.153342 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt * I0430 08:00:34.153517 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt * I0430 08:00:34.153852 1 controller.go:86] Starting OpenAPI controller * I0430 08:00:34.153934 1 customresource_discovery_controller.go:209] Starting DiscoveryController * I0430 08:00:34.154005 1 naming_controller.go:291] Starting NamingConditionController * I0430 08:00:34.154076 1 establishing_controller.go:76] Starting EstablishingController * I0430 08:00:34.154146 1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController * I0430 08:00:34.160655 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController * E0430 08:00:34.176310 1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.17.181.94, ResourceVersion: 0, AdditionalErrorMsg: * I0430 08:00:34.241507 1 shared_informer.go:230] 
Caches are synced for crd-autoregister * I0430 08:00:34.241770 1 cache.go:39] Caches are synced for autoregister controller * I0430 08:00:34.242085 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller * I0430 08:00:34.242200 1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller * I0430 08:00:34.256054 1 cache.go:39] Caches are synced for AvailableConditionController controller * I0430 08:00:35.138199 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue). * I0430 08:00:35.138285 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). * I0430 08:00:35.153492 1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000 * I0430 08:00:35.163308 1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000 * I0430 08:00:35.163347 1 storage_scheduling.go:143] all system priority classes are created successfully or already exist. * I0430 08:00:35.597729 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io * I0430 08:00:35.637950 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io * W0430 08:00:35.785471 1 lease.go:224] Resetting endpoints for master service "kubernetes" to [172.17.181.94] * I0430 08:00:35.786435 1 controller.go:606] quota admission added evaluator for: endpoints * I0430 08:00:35.790861 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io * I0430 08:00:37.139215 1 controller.go:606] quota admission added evaluator for: serviceaccounts * I0430 08:00:37.167179 1 controller.go:606] quota admission added evaluator for: deployments.apps * I0430 08:00:37.193534 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io * I0430 08:00:37.338600 1 controller.go:606] quota admission added evaluator for: daemonsets.apps * I0430 08:00:45.656780 1 controller.go:606] quota admission added evaluator for: replicasets.apps * I0430 08:00:45.704508 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps * * ==> kube-controller-manager [08094dd06697] <== * I0430 08:00:44.400267 1 ttl_controller.go:118] Starting TTL controller * I0430 08:00:44.400367 1 shared_informer.go:223] Waiting for caches to sync for TTL * I0430 08:00:45.303834 1 garbagecollector.go:133] Starting garbage collector controller * I0430 08:00:45.303999 1 shared_informer.go:223] Waiting for caches to sync for garbage collector * I0430 08:00:45.303889 1 controllermanager.go:533] Started "garbagecollector" * I0430 08:00:45.304039 1 graph_builder.go:282] GraphBuilder running * I0430 08:00:45.326464 1 controllermanager.go:533] Started "cronjob" * I0430 08:00:45.326504 1 core.go:239] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true. 
* W0430 08:00:45.326512 1 controllermanager.go:525] Skipping "route" * I0430 08:00:45.326934 1 cronjob_controller.go:97] Starting CronJob Manager * I0430 08:00:45.347059 1 controllermanager.go:533] Started "replicationcontroller" * I0430 08:00:45.347107 1 replica_set.go:181] Starting replicationcontroller controller * I0430 08:00:45.347113 1 shared_informer.go:223] Waiting for caches to sync for ReplicationController * I0430 08:00:45.550095 1 controllermanager.go:533] Started "serviceaccount" * I0430 08:00:45.550678 1 shared_informer.go:223] Waiting for caches to sync for resource quota * I0430 08:00:45.557647 1 serviceaccounts_controller.go:117] Starting service account controller * I0430 08:00:45.557675 1 shared_informer.go:223] Waiting for caches to sync for service account * W0430 08:00:45.570066 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist * I0430 08:00:45.600511 1 shared_informer.go:230] Caches are synced for TTL * I0430 08:00:45.600523 1 shared_informer.go:230] Caches are synced for endpoint_slice * I0430 08:00:45.600690 1 shared_informer.go:230] Caches are synced for taint * I0430 08:00:45.600912 1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: * W0430 08:00:45.601048 1 node_lifecycle_controller.go:1048] Missing timestamp for Node minikube. Assuming now as a timestamp. * I0430 08:00:45.601136 1 node_lifecycle_controller.go:1249] Controller detected that zone is now in state Normal. * I0430 08:00:45.601776 1 taint_manager.go:187] Starting NoExecuteTaintManager * I0430 08:00:45.602101 1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"261b7ed6-bc08-4aab-83ec-e13b9acb411d", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller * I0430 08:00:45.622396 1 shared_informer.go:230] Caches are synced for job * I0430 08:00:45.647283 1 shared_informer.go:230] Caches are synced for ReplicationController * I0430 08:00:45.649880 1 shared_informer.go:230] Caches are synced for HPA * I0430 08:00:45.650125 1 shared_informer.go:230] Caches are synced for disruption * I0430 08:00:45.650187 1 disruption.go:339] Sending events to api server. 
* I0430 08:00:45.650442 1 shared_informer.go:230] Caches are synced for endpoint * I0430 08:00:45.651102 1 shared_informer.go:230] Caches are synced for GC * I0430 08:00:45.651200 1 shared_informer.go:230] Caches are synced for bootstrap_signer * I0430 08:00:45.652270 1 shared_informer.go:230] Caches are synced for deployment * I0430 08:00:45.659058 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"eed70628-180d-49fb-9078-0c0426223e9d", APIVersion:"apps/v1", ResourceVersion:"184", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 2 * I0430 08:00:45.661688 1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator * I0430 08:00:45.662337 1 shared_informer.go:230] Caches are synced for ReplicaSet * I0430 08:00:45.668967 1 shared_informer.go:230] Caches are synced for certificate-csrsigning * I0430 08:00:45.673604 1 shared_informer.go:230] Caches are synced for daemon sets * I0430 08:00:45.683971 1 shared_informer.go:230] Caches are synced for certificate-csrapproving * I0430 08:00:45.687693 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"6a5aac79-f62e-4a5a-a95f-871e05c0b111", APIVersion:"apps/v1", ResourceVersion:"345", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-htrg2 * I0430 08:00:45.687862 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"6a5aac79-f62e-4a5a-a95f-871e05c0b111", APIVersion:"apps/v1", ResourceVersion:"345", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-qwr2j * I0430 08:00:45.701297 1 shared_informer.go:230] Caches are synced for PVC protection * I0430 08:00:45.752810 1 shared_informer.go:230] Caches are synced for stateful set * I0430 08:00:45.775546 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"52c1eda2-b302-4c0a-8138-2d0e64f090b5", APIVersion:"apps/v1", ResourceVersion:"195", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-zdtnt * I0430 08:00:45.800112 1 shared_informer.go:230] Caches are synced for PV protection * I0430 08:00:45.808280 1 shared_informer.go:230] Caches are synced for persistent volume * I0430 08:00:45.850755 1 shared_informer.go:230] Caches are synced for expand * I0430 08:00:46.050793 1 shared_informer.go:230] Caches are synced for resource quota * I0430 08:00:46.053889 1 shared_informer.go:230] Caches are synced for resource quota * I0430 08:00:46.059375 1 shared_informer.go:230] Caches are synced for service account * I0430 08:00:46.076252 1 shared_informer.go:230] Caches are synced for namespace * I0430 08:00:46.151398 1 shared_informer.go:230] Caches are synced for attach detach * I0430 08:00:46.204254 1 shared_informer.go:230] Caches are synced for garbage collector * I0430 08:00:46.204295 1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. 
Proceeding to collect garbage * I0430 08:00:46.800802 1 shared_informer.go:223] Waiting for caches to sync for garbage collector * I0430 08:00:46.800840 1 shared_informer.go:230] Caches are synced for garbage collector * I0430 08:05:13.435200 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"tiller-deploy", UID:"96dc145a-8fa6-4e2f-ac07-29cfe580713b", APIVersion:"apps/v1", ResourceVersion:"1011", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set tiller-deploy-754f98dbfc to 1 * I0430 08:05:13.454760 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"tiller-deploy-754f98dbfc", UID:"72ee5b7c-e19a-46f3-9d6d-6e39ad94f3b4", APIVersion:"apps/v1", ResourceVersion:"1012", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: tiller-deploy-754f98dbfc-vpr4k * * ==> kube-proxy [edf02143dbc6] <== * W0430 08:00:47.008388 1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy * I0430 08:00:47.015473 1 node.go:136] Successfully retrieved node IP: 172.17.181.94 * I0430 08:00:47.015677 1 server_others.go:186] Using iptables Proxier. * W0430 08:00:47.015707 1 server_others.go:436] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined * I0430 08:00:47.015715 1 server_others.go:447] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local * I0430 08:00:47.015968 1 server.go:583] Version: v1.18.0 * I0430 08:00:47.016421 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072 * I0430 08:00:47.016531 1 conntrack.go:52] Setting nf_conntrack_max to 131072 * I0430 08:00:47.016904 1 conntrack.go:83] Setting conntrack hashsize to 32768 * I0430 08:00:47.021626 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 * I0430 08:00:47.021701 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600 * I0430 08:00:47.022172 1 config.go:315] Starting service config controller * I0430 08:00:47.022302 1 shared_informer.go:223] Waiting for caches to sync for service config * I0430 08:00:47.023088 1 config.go:133] Starting endpoints config controller * I0430 08:00:47.023250 1 shared_informer.go:223] Waiting for caches to sync for endpoints config * I0430 08:00:47.122695 1 shared_informer.go:230] Caches are synced for service config * I0430 08:00:47.123474 1 shared_informer.go:230] Caches are synced for endpoints config * * ==> kube-scheduler [c839bfc1f3f4] <== * I0430 08:00:29.435761 1 registry.go:150] Registering EvenPodsSpread predicate and priority function * I0430 08:00:29.435810 1 registry.go:150] Registering EvenPodsSpread predicate and priority function * I0430 08:00:31.270925 1 serving.go:313] Generated self-signed cert in-memory * W0430 08:00:34.204096 1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' * W0430 08:00:34.204267 1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system" * W0430 08:00:34.204351 1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous. 
* W0430 08:00:34.204406 1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
* I0430 08:00:34.259865 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* I0430 08:00:34.260079 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* W0430 08:00:34.261377 1 authorization.go:47] Authorization is disabled
* W0430 08:00:34.261390 1 authentication.go:40] Authentication is disabled
* I0430 08:00:34.261428 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
* I0430 08:00:34.263005 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
* I0430 08:00:34.263219 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0430 08:00:34.263231 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0430 08:00:34.263265 1 tlsconfig.go:240] Starting DynamicServingCertificateController
* E0430 08:00:34.267989 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
[two rounds of the same transient startup errors omitted: reflector.go:178 "... is forbidden: User "system:kube-scheduler" cannot list resource ..." for services, nodes, poddisruptionbudgets, persistentvolumeclaims, storageclasses, persistentvolumes, pods and csinodes]
* I0430 08:00:37.167025 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler...
* I0430 08:00:37.197450 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
* I0430 08:00:37.263355 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Logs begin at Thu 2020-04-30 07:59:35 UTC, end at Thu 2020-04-30 09:59:33 UTC. --
* Apr 30 08:32:28 minikube kubelet[4245]: E0430 08:32:28.749848 4245 pod_workers.go:191] Error syncing pod b0d0c578-1f84-4c97-a449-cb6a27943371 ("tiller-deploy-754f98dbfc-vpr4k_kube-system(b0d0c578-1f84-4c97-a449-cb6a27943371)"), skipping: failed to "StartContainer" for "tiller" with ImagePullBackOff: "Back-off pulling image \"gcr.io/kubernetes-helm/tiller:v2.16.5\""
[the same ImagePullBackOff entry repeats every 11-15 seconds through Apr 30 08:44:28; each periodic pull retry fails as below]
* Apr 30 08:34:00 minikube kubelet[4245]: E0430 08:34:00.843662 4245 remote_image.go:113] PullImage "gcr.io/kubernetes-helm/tiller:v2.16.5" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
* Apr 30 08:34:00 minikube kubelet[4245]: E0430 08:34:00.843781 4245 kuberuntime_image.go:50] Pull image "gcr.io/kubernetes-helm/tiller:v2.16.5" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
* Apr 30 08:34:00 minikube kubelet[4245]: E0430 08:34:00.844199 4245 kuberuntime_manager.go:801] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
* Apr 30 08:34:00 minikube kubelet[4245]: E0430 08:34:00.844500 4245 pod_workers.go:191] Error syncing pod b0d0c578-1f84-4c97-a449-cb6a27943371 ("tiller-deploy-754f98dbfc-vpr4k_kube-system(b0d0c578-1f84-4c97-a449-cb6a27943371)"), skipping: failed to "StartContainer" for "tiller" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
[an identical ErrImagePull burst recurs at Apr 30 08:39:28]
*
* ==> storage-provisioner [e8826d2d70a6] <==
govargo commented 4 years ago

Hello, thank you for sharing.

ping xyz.com -> works well, got response
curl xyz.com -> just hangs

ICMP (ping) works, but HTTP connections fail, so this looks like a networking or proxy problem.
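To narrow that down from inside the VM, it helps to separate DNS resolution from TCP connectivity. A diagnostic sketch (xyz.com stands in for any external host):

minikube ssh
nslookup xyz.com                               # DNS: does the name resolve at all?
curl -v --connect-timeout 5 https://xyz.com/   # TCP/TLS: does a connection open within 5 seconds?

If the name resolves but curl -v stalls after "Trying ...", outbound TCP from the VM is blocked (virtual switch, firewall, or proxy) rather than DNS.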

Is your machine behind a VPN or a proxy? And could you check that the proxy environment variables are set correctly? A sketch of that check follows below.

See the VPN and proxy documentation: https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/
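For example, the environment the Docker daemon inside the VM actually picked up can be inspected, and proxy settings can be passed through at start time, roughly like this (a sketch; the proxy host and port are placeholders, not values from this issue):

# inside the VM: inspect the proxy environment
minikube ssh
env | grep -i proxy
sudo systemctl show docker --property=Environment

# on the host, when a proxy is in use (placeholder values)
minikube start --docker-env HTTP_PROXY=http://<proxy-host>:<port> --docker-env HTTPS_PROXY=http://<proxy-host>:<port> --docker-env NO_PROXY=localhost,127.0.0.1

With no proxy in play, env | grep -i proxy should print nothing; if it prints stale values, the daemon likely inherited them from an earlier start.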

Also helpful would be the full output of the minikube start --alsologtostderr -v=1 command.

hatharom commented 4 years ago

I am not behind a proxy. Outside of minikube I can reach everything without any proxy settings. When I launch Docker for Windows I can still pull anything, step into a container, and reach everything on the net, all without any proxy configuration. minikube 1.9.0 was also working fine.

Here are the logs for minikube start:

PS C:\Windows\system32> minikube start --alsologtostderr -v=1
I0430 17:13:08.182341 11732 notify.go:125] Checking for updates...
* minikube v1.9.2 on Microsoft Windows 10 Pro 10.0.17763 Build 17763
* Using the hyperv driver based on existing profile
I0430 17:13:09.811851 11732 start.go:310] selected driver: hyperv
[validation of the "hyperv" driver against the existing profile omitted (Memory:4000 CPUs:2 DiskSize:20000, KubernetesVersion:v1.18.0, node m01 at 172.17.181.94:8443)]
I0430 17:13:09.815839 11732 start.go:1004] Using suggested 4000MB memory alloc based on sys=16277MB, container=0MB
* Starting control plane node m01 in cluster minikube
I0430 17:13:09.822840 11732 preload.go:123] Found C:\Users\my_name\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0430 17:13:09.827840 11732 fix.go:53] fixHost starting: m01
I0430 17:13:10.321838 11732 fix.go:105] recreateIfNeeded on minikube: state=Stopped err=
W0430 17:13:10.323840 11732 fix.go:130] unexpected machine state, will restart:
* Restarting existing hyperv VM for "minikube" ...
I0430 17:13:11.861387 11732 main.go:110] libmachine: Waiting for host to start...
[repeated PowerShell polling of ( Hyper-V\Get-VM minikube ).state and its network adapter omitted until the VM reports an IP]
I0430 17:13:22.254824 11732 main.go:110] libmachine: [stdout =====>] : 172.17.181.94
I0430 17:13:22.258835 11732 machine.go:86] provisioning docker machine ...
[SSH provisioning omitted: hostname set to "minikube", certificates transferred to /etc/docker, /lib/systemd/system/docker.service written (dockerd with -H tcp://0.0.0.0:2376, TLS, --insecure-registry 10.96.0.0/12), docker restarted]
I0430 17:13:33.485687 11732 machine.go:89] provisioned docker machine in 11.2268518s
[post-start directory creation and guest clock check omitted (delta=1.479405s, within tolerance); fixHost completed within 26.5430855s]
I0430 17:13:39.025166 11732 ssh_runner.go:101] Run: curl -sS -m 2 https://k8s.gcr.io/
* Preparing Kubernetes v1.18.0 on Docker 19.03.8 ...
I0430 17:13:40.324165 11732 ssh_runner.go:141] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.2980007s)
W0430 17:13:40.324165 11732 start.go:442] [curl -sS -m 2 https://k8s.gcr.io/] failed: curl -sS -m 2 https://k8s.gcr.io/: Process exited with status 7
stdout:
stderr:
curl: (7) Failed to connect to k8s.gcr.io port 443: Connection timed out
! This VM is having trouble accessing https://k8s.gcr.io
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
[certificate reuse, preloaded image list (k8s.gcr.io/kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy:v1.18.0, etcd:3.4.3-0, coredns:1.6.7, pause:3.2, dashboard:v2.0.0-rc6, storage-provisioner:v1.8.1) and kubeadm/kubelet config generation for 172.17.181.94 omitted]
I0430 17:13:40.755170 11732
ssh_runner.go:101] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0430 17:13:40.762167 11732 certs.go:370] hashing: -rw-r--r-- 1 root root 1066 Apr 29 09:31 /usr/share/ca-certificates/minikubeCA.pem I0430 17:13:40.762167 11732 ssh_runner.go:174] Transferring 534 bytes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.new I0430 17:13:40.764166 11732 ssh_runner.go:193] 10-kubeadm.conf.new: copied 534 bytes I0430 17:13:40.773165 11732 ssh_runner.go:174] Transferring 349 bytes to /lib/systemd/system/kubelet.service.new I0430 17:13:40.774165 11732 ssh_runner.go:193] kubelet.service.new: copied 349 bytes I0430 17:13:40.776166 11732 ssh_runner.go:101] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0430 17:13:40.784166 11732 ssh_runner.go:101] Run: /bin/bash -c "pgrep kubelet && diff -u /lib/systemd/system/kubelet.service /lib/systemd/system/kubelet.service.new && diff -u /etc/systemd/system/kubelet.service.d/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.new" I0430 17:13:40.792167 11732 ssh_runner.go:101] Run: /bin/bash -c "sudo cp /lib/systemd/system/kubelet.service.new /lib/systemd/system/kubelet.service && sudo cp /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.new /etc/systemd/system/kubelet.service.d/10-kubeadm.conf && sudo systemctl daemon-reload && sudo systemctl restart kubelet" I0430 17:13:40.800169 11732 ssh_runner.go:101] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0430 17:13:40.888173 11732 kubeadm.go:278] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.9.0.iso Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:0 NodeName:} Nodes:[{Name:m01 IP:172.17.181.94 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]} I0430 17:13:40.896164 11732 ssh_runner.go:101] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0430 17:13:40.935168 11732 ssh_runner.go:101] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0430 17:13:40.942164 11732 kubeadm.go:289] found existing configuration files, will attempt cluster restart I0430 17:13:40.942164 11732 kubeadm.go:434] restartCluster start I0430 17:13:40.955167 11732 ssh_runner.go:101] Run: sudo test -d /data/minikube I0430 17:13:40.962166 11732 kubeadm.go:149] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1 stdout: stderr: I0430 17:13:40.964167 11732 kapi.go:58] client config for minikube: 
&rest.Config{Host:"https://172.17.181.94:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\my_name\\.minikube\\profiles\\minikube\\client.crt", KeyFile:"C:\\Users\\my_name\\.minikube\\profiles\\minikube\\client.key", CAFile:"C:\\Users\\my_name\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16297b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil)} I0430 17:13:40.990170 11732 ssh_runner.go:101] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new I0430 17:13:41.002165 11732 kubeadm.go:404] needs reset: configs differ: ** stderr ** diff: can't stat '/var/tmp/minikube/kubeadm.yaml': No such file or directory ** /stderr ** I0430 17:13:41.019165 11732 ssh_runner.go:101] Run: sudo /bin/bash -c "grep https://172.17.181.94:8443 /etc/kubernetes/admin.conf || sudo rm -f /etc/kubernetes/admin.conf" I0430 17:13:41.042173 11732 ssh_runner.go:101] Run: sudo /bin/bash -c "grep https://172.17.181.94:8443 /etc/kubernetes/kubelet.conf || sudo rm -f /etc/kubernetes/kubelet.conf" I0430 17:13:41.066164 11732 ssh_runner.go:101] Run: sudo /bin/bash -c "grep https://172.17.181.94:8443 /etc/kubernetes/controller-manager.conf || sudo rm -f /etc/kubernetes/controller-manager.conf" I0430 17:13:41.094165 11732 ssh_runner.go:101] Run: sudo /bin/bash -c "grep https://172.17.181.94:8443 /etc/kubernetes/scheduler.conf || sudo rm -f /etc/kubernetes/scheduler.conf" I0430 17:13:41.116165 11732 ssh_runner.go:101] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0430 17:13:41.122164 11732 kubeadm.go:495] resetting cluster from /var/tmp/minikube/kubeadm.yaml I0430 17:13:41.122164 11732 ssh_runner.go:101] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml" I0430 17:13:41.505410 11732 ssh_runner.go:101] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml" I0430 17:13:42.431426 11732 ssh_runner.go:101] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml" I0430 17:13:42.510423 11732 ssh_runner.go:101] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml" I0430 17:13:42.589434 11732 api_server.go:46] waiting for apiserver process to appear ... 
I0430 17:13:42.601432 11732 ssh_runner.go:101] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0430 17:13:43.119888 11732 ssh_runner.go:101] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0430 17:13:43.634073 11732 ssh_runner.go:101] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0430 17:13:44.134209 11732 ssh_runner.go:101] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0430 17:13:44.635331 11732 ssh_runner.go:101] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0430 17:13:45.123972 11732 ssh_runner.go:101] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0430 17:13:45.620928 11732 ssh_runner.go:101] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0430 17:13:46.132190 11732 ssh_runner.go:101] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0430 17:13:46.634087 11732 ssh_runner.go:101] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0430 17:13:47.132856 11732 ssh_runner.go:101] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0430 17:13:47.621833 11732 ssh_runner.go:101] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0430 17:13:48.134342 11732 ssh_runner.go:101] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0430 17:13:48.621148 11732 ssh_runner.go:101] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0430 17:13:49.120738 11732 ssh_runner.go:101] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0430 17:13:49.625623 11732 ssh_runner.go:101] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0430 17:13:50.120440 11732 ssh_runner.go:101] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0430 17:13:50.620569 11732 ssh_runner.go:101] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0430 17:13:51.120520 11732 ssh_runner.go:101] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0430 17:13:51.129434 11732 api_server.go:66] duration metric: took 8.5399997s to wait for apiserver process to appear ... I0430 17:13:51.129434 11732 api_server.go:82] waiting for apiserver healthz status ... 
I0430 17:13:51.129434 11732 api_server.go:184] Checking apiserver healthz at https://172.17.181.94:8443/healthz ...W0430 17:13:56.327688 11732 api_server.go:202] https://172.17.181.94:8443/healthz response: &{Status:403 Forbidden StatusCode:403 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Content-Length:[192] Content-Type:[application/json] Date:[Thu, 30 Apr 2020 15:13:56 GMT] X-Content-Type-Options:[nosniff]] Body:0xc00051d740 ContentLength:192 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0007b9300 TLS:0xc0002fc000} I0430 17:13:56.829696 11732 api_server.go:184] Checking apiserver healthz at https://172.17.181.94:8443/healthz ...W0430 17:13:56.836691 11732 api_server.go:202] https://172.17.181.94:8443/healthz response: &{Status:500 Internal Server Error StatusCode:500 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Content-Length:[864] Content-Type:[text/plain; charset=utf-8] Date:[Thu, 30 Apr 2020 15:13:56 GMT] X-Content-Type-Options:[nosniff]] Body:0xc00073d5c0 ContentLength:864 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005d0d00 TLS:0xc00091c0b0} I0430 17:13:57.329519 11732 api_server.go:184] Checking apiserver healthz at https://172.17.181.94:8443/healthz ...W0430 17:13:57.339518 11732 api_server.go:202] https://172.17.181.94:8443/healthz response: &{Status:500 Internal Server Error StatusCode:500 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Content-Length:[843] Content-Type:[text/plain; charset=utf-8] Date:[Thu, 30 Apr 2020 15:13:57 GMT] X-Content-Type-Options:[nosniff]] Body:0xc00020e340 ContentLength:843 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0007b9400 TLS:0xc0000cb6b0} I0430 17:13:57.829401 11732 api_server.go:184] Checking apiserver healthz at https://172.17.181.94:8443/healthz ...I0430 17:13:57.850394 11732 api_server.go:135] control plane version: v1.18.0 I0430 17:13:57.851396 11732 api_server.go:125] duration metric: took 6.7219617s to wait for apiserver health ... I0430 17:13:57.852397 11732 system_pods.go:37] waiting for kube-system pods to appear ... I0430 17:13:57.875395 11732 system_pods.go:55] 9 kube-system pods found I0430 17:13:57.875395 11732 system_pods.go:57] "coredns-66bff467f8-htrg2" [47a3897a-1614-4162-ba99-1eca6a8c40e4] Running I0430 17:13:57.878395 11732 system_pods.go:57] "coredns-66bff467f8-qwr2j" [7e50ebf9-a586-4af0-86a4-21e02cffdc76] Running I0430 17:13:57.881396 11732 system_pods.go:57] "etcd-minikube" [f1a9adda-d932-4fcd-a7b9-01871629b456] Running I0430 17:13:57.881396 11732 system_pods.go:57] "kube-apiserver-minikube" [7d22cb03-067e-4a73-a60d-71740e220641] Running I0430 17:13:57.885396 11732 system_pods.go:57] "kube-controller-manager-minikube" [f0e6a376-41f4-46ad-a471-ed5fe9c5ad53] Running I0430 17:13:57.888399 11732 system_pods.go:57] "kube-proxy-zdtnt" [84052b20-54af-4554-b930-3fbdf82a166c] Running I0430 17:13:57.889400 11732 system_pods.go:57] "kube-scheduler-minikube" [35ed6731-d9a8-44a1-82d8-1b297f32a8e8] Running I0430 17:13:57.889400 11732 system_pods.go:57] "storage-provisioner" [9e4f9f68-fb87-43b9-a5b3-0c43715755b7] RunningI0430 17:13:57.890397 11732 system_pods.go:57] "tiller-deploy-754f98dbfc-vpr4k" [b0d0c578-1f84-4c97-a449-cb6a27943371] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller]) I0430 17:13:57.890397 11732 system_pods.go:68] duration metric: took 38.0003ms to wait for pod list to return data ... 
I0430 17:13:57.891397 11732 ssh_runner.go:101] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml" I0430 17:13:58.278986 11732 ssh_runner.go:101] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj" I0430 17:13:58.289985 11732 ops.go:35] apiserver oom_adj: -16 I0430 17:13:58.289985 11732 kubeadm.go:438] restartCluster took 17.3468207s I0430 17:13:58.290985 11732 kubeadm.go:280] StartCluster complete in 17.4028119s I0430 17:13:58.290985 11732 settings.go:123] acquiring lock: {Name:mk077ffaa166fdc3c37acd23d75c77f3186480d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0430 17:13:58.291984 11732 settings.go:131] Updating kubeconfig: C:\Users\my_name/.kube/config I0430 17:13:58.292984 11732 lock.go:35] WriteFile acquiring C:\Users\my_name/.kube/config: {Name:mk9cc6b81dc07244eacd4bb0014e7e9aa821d85c Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0430 17:13:58.293985 11732 addons.go:292] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[] I0430 17:13:58.294984 11732 addons.go:60] IsEnabled "dashboard" = false (listed in config=false) I0430 17:13:58.294984 11732 addons.go:60] IsEnabled "efk" = false (listed in config=false) I0430 17:13:58.295984 11732 addons.go:60] IsEnabled "istio" = false (listed in config=false) I0430 17:13:58.295984 11732 addons.go:60] IsEnabled "registry" = false (listed in config=false) I0430 17:13:58.296984 11732 addons.go:60] IsEnabled "registry-creds" = false (listed in config=false) I0430 17:13:58.296984 11732 addons.go:60] IsEnabled "default-storageclass" = false (listed in config=false) I0430 17:13:58.297983 11732 addons.go:60] IsEnabled "storage-provisioner" = false (listed in config=false) I0430 17:13:58.298984 11732 addons.go:60] IsEnabled "storage-provisioner-gluster" = false (listed in config=false) I0430 17:13:58.299983 11732 addons.go:60] IsEnabled "istio-provisioner" = false (listed in config=false) I0430 17:13:58.300984 11732 addons.go:60] IsEnabled "nvidia-driver-installer" = false (listed in config=false) I0430 17:13:58.300984 11732 addons.go:60] IsEnabled "helm-tiller" = false (listed in config=false) I0430 17:13:58.301984 11732 addons.go:60] IsEnabled "logviewer" = false (listed in config=false) I0430 17:13:58.301984 11732 addons.go:60] IsEnabled "ingress-dns" = false (listed in config=false) I0430 17:13:58.301984 11732 addons.go:60] IsEnabled "ingress" = false (listed in config=false) I0430 17:13:58.302985 11732 addons.go:60] IsEnabled "metrics-server" = false (listed in config=false) I0430 17:13:58.302985 11732 addons.go:60] IsEnabled "registry-aliases" = false (listed in config=false) I0430 17:13:58.303983 11732 addons.go:60] IsEnabled "freshpod" = false (listed in config=false) I0430 17:13:58.303983 11732 addons.go:60] IsEnabled "nvidia-gpu-device-plugin" = false (listed in config=false) I0430 17:13:58.304983 11732 addons.go:60] IsEnabled "gvisor" = false (listed in config=false) * Enabling addons: default-storageclass, storage-provisioner I0430 17:13:58.306983 11732 addons.go:46] Setting default-storageclass=true in profile "minikube" I0430 17:13:58.307983 11732 addons.go:242] enableOrDisableStorageClasses default-storageclass=true on "minikube" I0430 17:13:58.335985 11732 main.go:110] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0430 17:13:58.939984 11732 main.go:110] libmachine: [stdout =====>] : Running 
I0430 17:13:58.939984 11732 main.go:110] libmachine: [stderr =====>] : I0430 17:13:58.952984 11732 addons.go:105] Setting addon default-storageclass=true in "minikube" I0430 17:13:58.953985 11732 addons.go:60] IsEnabled "default-storageclass" = false (listed in config=false) W0430 17:13:58.953985 11732 addons.go:120] addon default-storageclass should already be in state true I0430 17:13:58.954985 11732 host.go:65] Checking if "minikube" exists ... I0430 17:13:58.956985 11732 main.go:110] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0430 17:13:59.542983 11732 main.go:110] libmachine: [stdout =====>] : Running I0430 17:13:59.542983 11732 main.go:110] libmachine: [stderr =====>] : I0430 17:13:59.543984 11732 main.go:110] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0430 17:14:00.134989 11732 main.go:110] libmachine: [stdout =====>] : Running I0430 17:14:00.134989 11732 main.go:110] libmachine: [stderr =====>] : I0430 17:14:00.135984 11732 main.go:110] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0] I0430 17:14:01.189984 11732 main.go:110] libmachine: [stdout =====>] : 172.17.181.94 I0430 17:14:01.189984 11732 main.go:110] libmachine: [stderr =====>] : I0430 17:14:01.228983 11732 addons.go:209] installing /etc/kubernetes/addons/storageclass.yaml I0430 17:14:01.270984 11732 ssh_runner.go:174] Transferring 271 bytes to /etc/kubernetes/addons/storageclass.yaml I0430 17:14:01.273986 11732 ssh_runner.go:193] storageclass.yaml: copied 271 bytes I0430 17:14:01.307985 11732 ssh_runner.go:101] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml I0430 17:14:01.893298 11732 addons.go:71] Writing out "minikube" config to set default-storageclass=true... I0430 17:14:01.895295 11732 addons.go:46] Setting storage-provisioner=true in profile "minikube" I0430 17:14:01.895295 11732 addons.go:105] Setting addon storage-provisioner=true in "minikube" I0430 17:14:01.896298 11732 addons.go:60] IsEnabled "storage-provisioner" = false (listed in config=false) W0430 17:14:01.896298 11732 addons.go:120] addon storage-provisioner should already be in state true I0430 17:14:01.896298 11732 host.go:65] Checking if "minikube" exists ... 
I0430 17:14:01.897294 11732 main.go:110] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0430 17:14:02.454304 11732 main.go:110] libmachine: [stdout =====>] : Running I0430 17:14:02.454304 11732 main.go:110] libmachine: [stderr =====>] : I0430 17:14:02.455295 11732 main.go:110] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0430 17:14:03.002304 11732 main.go:110] libmachine: [stdout =====>] : Running I0430 17:14:03.002304 11732 main.go:110] libmachine: [stderr =====>] : I0430 17:14:03.003295 11732 main.go:110] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0] I0430 17:14:03.770399 11732 main.go:110] libmachine: [stdout =====>] : 172.17.181.94 I0430 17:14:03.770399 11732 main.go:110] libmachine: [stderr =====>] : I0430 17:14:03.811566 11732 addons.go:209] installing /etc/kubernetes/addons/storage-provisioner.yaml I0430 17:14:03.852578 11732 ssh_runner.go:174] Transferring 1709 bytes to /etc/kubernetes/addons/storage-provisioner.yaml I0430 17:14:03.854562 11732 ssh_runner.go:193] storage-provisioner.yaml: copied 1709 bytes I0430 17:14:03.876564 11732 ssh_runner.go:101] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml I0430 17:14:04.106563 11732 addons.go:71] Writing out "minikube" config to set storage-provisioner=true... I0430 17:14:04.108566 11732 addons.go:294] enableAddons completed in 5.8145815s * Done! kubectl is now configured to use "minikube" I0430 17:14:06.481381 11732 start.go:454] kubectl: 1.15.5, cluster: 1.18.0 (minor skew: 3) ! C:\Program Files\Docker\Docker\resources\bin\kubectl.exe is v1.15.5, which may be incompatible with Kubernetes v1.18.0. * You can also use 'minikube kubectl -- get pods' to invoke a matching version
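
For reference, the failing check minikube runs at startup can be reproduced by hand from inside the VM; this is just the probe from the log above, assuming the default profile name minikube (pause:3.2 is only an example image, any pull behaves the same):

minikube ssh
# inside the VM: the same reachability probe minikube runs at startup
curl -sS -m 2 https://k8s.gcr.io/
# any pull times out the same way
docker pull k8s.gcr.io/pause:3.2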

edit: Forgot to mention one difference from 1.9.0: 1.9.0 was started with a plain minikube start, but before starting 1.9.2 I set the default driver to hyperv. (Since then I have been unable to unset it, because minikube says I can't set it to 'none'.)
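
For context, this is roughly how the default was set, and what I would expect to clear it again (a sketch, not a confirmed fix):

minikube config set driver hyperv   # how the default got set
minikube config unset driver        # what I'd expect to clear it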

tstromberg commented 4 years ago

@hatharom - my guess is that the Virtual Switch that minikube is using does not have internet access. Can you include the output of minikube start --alsologtostderr -v=1?

I think the solution here might involve reconfiguring or adding a new Virtual Switch and specifying --hyperv-external-switch, but in the meantime, it sounds like you can use minikube start --driver=docker as well.
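
A rough sketch of the external-switch route, run from an elevated PowerShell; the switch name "minikube-external" and the adapter name "Ethernet" are placeholders for whatever exists on the host:

# create an external switch bound to the physical adapter (placeholder names)
New-VMSwitch -Name "minikube-external" -NetAdapterName "Ethernet" -AllowManagementOS $true
# recreate the VM on that switch
minikube delete
minikube start --driver=hyperv --hyperv-virtual-switch "minikube-external"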

hatharom commented 4 years ago

Thanks for your response. An external switch for Hyper-V didn't solve it, so I changed the driver back to docker.
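
Roughly what the switch back amounted to, for anyone landing here with the same symptom:

minikube delete
minikube config set driver docker
minikube start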