kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

OOM + kubectl - Unable to connect to the server: net/http: TLS handshake timeout #5933

Closed · pnisbettmtc closed this issue 4 years ago

pnisbettmtc commented 4 years ago

I get "Unable to connect to the server: net/http: TLS handshake timeout" after using minikube for a while . It works for a while then stops working with with above message.

This happens on Windows 10 using both Hyper-V and VirtualBox as the VM host. After working with this technology for a few weeks, I have come to the conclusion that it's flaky as hell. In terms of using Kubernetes, my experience with minikube really discourages me from recommending Kubernetes to my company as a viable solution. The number of times minikube crashes or responds with a stupid message recommending I delete the cluster that I have spent hours creating is a joke.

tstromberg commented 4 years ago

My apologies for the poor experience here. It sounds like either a VPN or firewall is interfering, or the apiserver is crashing.
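
When kubectl cannot even complete the TLS handshake, it cannot tell you which of the two it is, so it helps to look from inside the VM instead. A minimal sketch using standard minikube and Docker commands (a hypothetical spot check, not output from this thread):

# Does minikube itself think the cluster components are up?
minikube status

# Open a shell in the VM and inspect the control plane directly.
minikube ssh
docker ps -a | grep kube-apiserver   # repeated restarts or Exited entries mean the apiserver is crashing
free -m                              # very little free memory points at OOM rather than a VPN or firewall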

Do you mind sharing what version of minikube you are on, along with the output of minikube logs and kubectl describe node when this happens?

That should help us figure out the root cause. Thanks!
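
For reference, the requested information can be gathered roughly like this (the file names are just examples):

minikube version                 # minikube release in use
minikube logs > minikube.log     # full cluster logs, as pasted below
kubectl describe node > node.txt # node conditions such as MemoryPressure (only works while the apiserver is reachable)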

pnisbettmtc commented 4 years ago

OK. Thanks.

I've tried it on a Windows 10 desktop inside a company network and a Windows 10 laptop outside the company network. Neither is behind a VPN.

kubectl describe node currently shows the TLS handshake timeout message from above.

Here is the log output:

C:\Users\pnisbett\_cloudnative_book\cloudnative-abundantsunshine\cloudnative-statelessness>kubectl describe node
Unable to connect to the server: net/http: TLS handshake timeout
C:\Users\pnisbett\_cloudnative_book\cloudnative-abundantsunshine\cloudnative-statelessness>minikube logs
* ==> Docker <==
* -- Logs begin at Sun 2019-11-17 05:59:47 UTC, end at Sun 2019-11-17 06:10:16 UTC. -- * Nov 17 06:00:20 minikube dockerd[2800]: time="2019-11-17T06:00:20.230121400Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device" * Nov 17 06:00:20 minikube dockerd[2800]: time="2019-11-17T06:00:20.230145800Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device" * Nov 17 06:00:20 minikube dockerd[2800]: time="2019-11-17T06:00:20.230903200Z" level=info msg="Loading containers: start." * Nov 17 06:00:21 minikube dockerd[2800]: time="2019-11-17T06:00:21.421692700Z" level=warning msg="b3a07e529816a9aaf3464d34bf92d095eb7def29ffb4bb25d63f4f2c1078eb33 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/b3a07e529816a9aaf3464d34bf92d095eb7def29ffb4bb25d63f4f2c1078eb33/mounts/shm, flags: 0x2: no such file or directory" * Nov 17 06:00:21 minikube dockerd[2800]: time="2019-11-17T06:00:21.421689700Z" level=warning msg="48576a001e33c7fd24215379e1f427983813a5b13c3ed47874f5107d64d63ac2 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/48576a001e33c7fd24215379e1f427983813a5b13c3ed47874f5107d64d63ac2/mounts/shm, flags: 0x2: no such file or directory" * Nov 17 06:00:21 minikube dockerd[2800]: time="2019-11-17T06:00:21.920639200Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" * Nov 17 06:00:22 minikube dockerd[2800]: time="2019-11-17T06:00:22.532897100Z" level=info msg="Loading containers: done." * Nov 17 06:00:22 minikube dockerd[2800]: time="2019-11-17T06:00:22.543611200Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n" * Nov 17 06:00:22 minikube dockerd[2800]: time="2019-11-17T06:00:22.544125900Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH" * Nov 17 06:00:22 minikube dockerd[2800]: time="2019-11-17T06:00:22.561564900Z" level=info msg="Docker daemon" commit=039a7df9ba graphdriver(s)=overlay2 version=18.09.9 * Nov 17 06:00:22 minikube dockerd[2800]: time="2019-11-17T06:00:22.562262900Z" level=info msg="Daemon has completed initialization" * Nov 17 06:00:22 minikube dockerd[2800]: time="2019-11-17T06:00:22.587861700Z" level=info msg="API listen on /var/run/docker.sock" * Nov 17 06:00:22 minikube systemd[1]: Started Docker Application Container Engine. 
* Nov 17 06:00:22 minikube dockerd[2800]: time="2019-11-17T06:00:22.588978400Z" level=info msg="API listen on [::]:2376" * Nov 17 06:01:49 minikube dockerd[2800]: time="2019-11-17T06:01:49.620786455Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n" * Nov 17 06:01:49 minikube dockerd[2800]: time="2019-11-17T06:01:49.621676855Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH" * Nov 17 06:01:49 minikube dockerd[2800]: time="2019-11-17T06:01:49.632214655Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n" * Nov 17 06:01:49 minikube dockerd[2800]: time="2019-11-17T06:01:49.632762755Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH" * Nov 17 06:01:49 minikube dockerd[2800]: time="2019-11-17T06:01:49.665698255Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n" * Nov 17 06:01:49 minikube dockerd[2800]: time="2019-11-17T06:01:49.666340755Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH" * Nov 17 06:01:49 minikube dockerd[2800]: time="2019-11-17T06:01:49.677772855Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n" * Nov 17 06:01:49 minikube dockerd[2800]: time="2019-11-17T06:01:49.678192455Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH" * Nov 17 06:02:09 minikube dockerd[2800]: time="2019-11-17T06:02:09.980655355Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n" * Nov 17 06:02:09 minikube dockerd[2800]: time="2019-11-17T06:02:09.982917055Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH" * Nov 17 06:02:11 minikube dockerd[2800]: time="2019-11-17T06:02:11.126793555Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5d96d49d2a30110904a1bf47a4f0f5a9c1c630d74288321fc451d658ca465a21/shim.sock" debug=false pid=3529 * Nov 17 06:02:11 minikube dockerd[2800]: time="2019-11-17T06:02:11.317727855Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9b3d3dbffe1f5cf49617fbc1140696316b23be0d3844b183d9587284f0718c32/shim.sock" debug=false pid=3573 * Nov 17 06:02:11 minikube dockerd[2800]: time="2019-11-17T06:02:11.366140655Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8fb5d386fa2a9463b44e7add4241e306e08ee175f596403facbebe396bddfd13/shim.sock" debug=false pid=3589 * Nov 17 06:02:11 minikube dockerd[2800]: time="2019-11-17T06:02:11.415011955Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9922908bd097dd4ca38c06aba8307b4664128bf769a33fcb0c29a4812e103562/shim.sock" debug=false pid=3615 * Nov 17 06:02:11 minikube dockerd[2800]: time="2019-11-17T06:02:11.494922455Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b9b768ab801809bdda1a6584bfd61178292b28dd29aac180e681f8e006512518/shim.sock" 
debug=false pid=3649 * Nov 17 06:02:11 minikube dockerd[2800]: time="2019-11-17T06:02:11.663958755Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/712f8b871b15c6a8d8fcd7129fe6dc044872a5e000d2b943612f7cd5dfa1ee49/shim.sock" debug=false pid=3745 * Nov 17 06:02:12 minikube dockerd[2800]: time="2019-11-17T06:02:12.025759255Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a3dbaa6f8a4c4e227712bab9b7812ac83499cba43852c3d706c48a7ec9f04baf/shim.sock" debug=false pid=3804 * Nov 17 06:02:12 minikube dockerd[2800]: time="2019-11-17T06:02:12.133224355Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d9b5c2e24aec23f52fd984f68c1c08a46939245c393d1c855a1699caab1bfada/shim.sock" debug=false pid=3834 * Nov 17 06:02:12 minikube dockerd[2800]: time="2019-11-17T06:02:12.161879555Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/dcb882d7392fa64775223c8bade26714bdf81b44d2f3305cd4935b59ac319e49/shim.sock" debug=false pid=3853 * Nov 17 06:02:12 minikube dockerd[2800]: time="2019-11-17T06:02:12.190878955Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/aeb8146fc0907d0d71edca10d563f34d4cd7505b411f39744131025846588c21/shim.sock" debug=false pid=3875 * Nov 17 06:02:20 minikube dockerd[2800]: time="2019-11-17T06:02:20.192775570Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b884e4340a5c6bce36406eebb760adf69f618a9d706665fce4e4e553e6f81e03/shim.sock" debug=false pid=4060 * Nov 17 06:02:20 minikube dockerd[2800]: time="2019-11-17T06:02:20.318129525Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b9b8843e11f8f997605760bc5fc302fdda52ed8fb3eed1645dd3bba761c9b2b3/shim.sock" debug=false pid=4095 * Nov 17 06:02:20 minikube dockerd[2800]: time="2019-11-17T06:02:20.656319476Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/60e87c58135646103c22bf78e8ff2a1523a1ae04cbbae52201da43f0b8dc6083/shim.sock" debug=false pid=4183 * Nov 17 06:02:20 minikube dockerd[2800]: time="2019-11-17T06:02:20.726633782Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/20fb864da160c2c1b06b032f9f8b1449592035bc7a821bcafc7f50a42df8b232/shim.sock" debug=false pid=4202 * Nov 17 06:02:20 minikube dockerd[2800]: time="2019-11-17T06:02:20.810054876Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2ca616fcaca0f7664c1b60c3ce21c35ca5aae3f1cf091b288b0c53e19c5c1cb4/shim.sock" debug=false pid=4242 * Nov 17 06:02:20 minikube dockerd[2800]: time="2019-11-17T06:02:20.973262553Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fa0d2d0f2df229127f470032f16e6dfe99cea043225fe97c7f796f6959721df0/shim.sock" debug=false pid=4291 * Nov 17 06:02:21 minikube dockerd[2800]: time="2019-11-17T06:02:21.992723135Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fe957c7ab5d6137f90a9159e026d8fa80a80086770719695bab53b3e481d8cc4/shim.sock" debug=false pid=4471 * Nov 17 06:02:22 minikube dockerd[2800]: time="2019-11-17T06:02:22.667755436Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8d7f28ed751ecf33a5ceaae730a6609f5b9e1435607f780fb68b32bfd5e4a4d5/shim.sock" debug=false pid=4552 * Nov 17 06:02:22 minikube dockerd[2800]: time="2019-11-17T06:02:22.720184676Z" level=info msg="shim containerd-shim started" 
address="/containerd-shim/moby/adb6e91a6622604ef8adbc570488d80a611b3e3a83f9fd5767ed3c42f77b1f4a/shim.sock" debug=false pid=4569 * Nov 17 06:02:22 minikube dockerd[2800]: time="2019-11-17T06:02:22.991846054Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3adb1537fc43362c1e7d94d4a9716407f24b3faea34a778497f4c9aaffcc22de/shim.sock" debug=false pid=4628 * Nov 17 06:02:23 minikube dockerd[2800]: time="2019-11-17T06:02:23.057650170Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2deb1c21d16f930e206b627d293a0b2c33bd388b6a105dafc26a2d1d69d24dac/shim.sock" debug=false pid=4640 * Nov 17 06:02:23 minikube dockerd[2800]: time="2019-11-17T06:02:23.118602065Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5de327d39dbea2bdccf8e00101a49b0449aa61494a40572cdde43c3111f259f6/shim.sock" debug=false pid=4654 * Nov 17 06:02:24 minikube dockerd[2800]: time="2019-11-17T06:02:24.462611325Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3c89349465804fb816bfaa668e07a95a2bf5bcc9351dbf3dc3e6d692b9934039/shim.sock" debug=false pid=4843 * Nov 17 06:02:24 minikube dockerd[2800]: time="2019-11-17T06:02:24.602408333Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a5abb87dcc49827a0854b017d97853873d4fd6a04e1c4dec01c0cf431c691689/shim.sock" debug=false pid=4870 * Nov 17 06:02:24 minikube dockerd[2800]: time="2019-11-17T06:02:24.713449951Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/48962cbf47c896c1c6a7b2fbd61d5d4698edbe7ae71ae2ae3690d42ca438fa96/shim.sock" debug=false pid=4918 * Nov 17 06:02:25 minikube dockerd[2800]: time="2019-11-17T06:02:25.892065791Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/24fa5be67c6ac6cba739d50dd8f91139a13e099d29ac91308bd6b5711016fa17/shim.sock" debug=false pid=5063 * Nov 17 06:02:54 minikube dockerd[2800]: time="2019-11-17T06:02:54.266980061Z" level=info msg="shim reaped" id=fa0d2d0f2df229127f470032f16e6dfe99cea043225fe97c7f796f6959721df0 * Nov 17 06:02:54 minikube dockerd[2800]: time="2019-11-17T06:02:54.283843827Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Nov 17 06:02:54 minikube dockerd[2800]: time="2019-11-17T06:02:54.285152917Z" level=warning msg="fa0d2d0f2df229127f470032f16e6dfe99cea043225fe97c7f796f6959721df0 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/fa0d2d0f2df229127f470032f16e6dfe99cea043225fe97c7f796f6959721df0/mounts/shm, flags: 0x2: no such file or directory" * Nov 17 06:03:05 minikube dockerd[2800]: time="2019-11-17T06:03:05.172433093Z" level=info msg="shim reaped" id=aeb8146fc0907d0d71edca10d563f34d4cd7505b411f39744131025846588c21 * Nov 17 06:03:05 minikube dockerd[2800]: time="2019-11-17T06:03:05.185945088Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Nov 17 06:03:05 minikube dockerd[2800]: time="2019-11-17T06:03:05.188252670Z" level=warning msg="aeb8146fc0907d0d71edca10d563f34d4cd7505b411f39744131025846588c21 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/aeb8146fc0907d0d71edca10d563f34d4cd7505b411f39744131025846588c21/mounts/shm, flags: 0x2: no such file or directory" * Nov 17 06:03:10 minikube dockerd[2800]: time="2019-11-17T06:03:10.136669088Z" level=info msg="shim reaped" id=d9b5c2e24aec23f52fd984f68c1c08a46939245c393d1c855a1699caab1bfada * Nov 17 06:03:10 minikube 
dockerd[2800]: time="2019-11-17T06:03:10.149736287Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Nov 17 06:03:10 minikube dockerd[2800]: time="2019-11-17T06:03:10.153741257Z" level=warning msg="d9b5c2e24aec23f52fd984f68c1c08a46939245c393d1c855a1699caab1bfada cleanup: failed to unmount IPC: umount /var/lib/docker/containers/d9b5c2e24aec23f52fd984f68c1c08a46939245c393d1c855a1699caab1bfada/mounts/shm, flags: 0x2: no such file or directory" * Nov 17 06:03:14 minikube dockerd[2800]: time="2019-11-17T06:03:14.473898426Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7efc38d3407e0e06c0494d80ed00376bedaa784a1a61c9b2fdf42f83bad1b9d0/shim.sock" debug=false pid=5566
* ==> container status <== * CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID * 7efc38d3407e0 4689081edb103 7 minutes ago Running storage-provisioner 22 b884e4340a5c6 * 24fa5be67c6ac cdavisafc/cloudnative-statelessness-connectionsposts-stateful@sha256:1eb63116c784ffd30a3bb4c77ba3bebdf177abde23971fd6f06314ec78c9ce79 7 minutes ago Running connectionsposts 4 2ca616fcaca0f * 48962cbf47c89 cdavisafc/cloudnative-statelessness-posts@sha256:351fba985427c6722475bd6a930d83d0fcb9965c5d1bf43e2aac898c9e6821cb 7 minutes ago Running posts 32 20fb864da160c * a5abb87dcc498 bf261d1579144 7 minutes ago Running coredns 22 adb6e91a66226 * 3c89349465804 bf261d1579144 7 minutes ago Running coredns 22 8d7f28ed751ec * 5de327d39dbea cdavisafc/cloudnative-statelessness-connections@sha256:9405807d18ad427c636a26138b78f9195c1920558f391fdd53b12b62b2f27771 7 minutes ago Running connections 18 b9b8843e11f8f * 2deb1c21d16f9 c21b0c7400f98 7 minutes ago Running kube-proxy 11 fe957c7ab5d61 * 3adb1537fc433 6bb891430fb6e 7 minutes ago Running mysql 10 60e87c5813564 * fa0d2d0f2df22 4689081edb103 7 minutes ago Exited storage-provisioner 21 b884e4340a5c6 * aeb8146fc0907 301ddc62b80b1 8 minutes ago Exited kube-scheduler 108 b9b768ab80180 * d9b5c2e24aec2 06a629a7e51cd 8 minutes ago Exited kube-controller-manager 104 9922908bd097d * dcb882d7392fa b2756210eeabf 8 minutes ago Running etcd 27 9b3d3dbffe1f5 * 712f8b871b15c b305571ca60a5 8 minutes ago Running kube-apiserver 29 5d96d49d2a301 * a3dbaa6f8a4c4 bd12a212f9dcb 8 minutes ago Running kube-addon-manager 11 8fb5d386fa2a9 * cb965bc23d06a cdavisafc/cloudnative-statelessness-posts@sha256:351fba985427c6722475bd6a930d83d0fcb9965c5d1bf43e2aac898c9e6821cb 16 minutes ago Exited posts 31 eb86b4d501914 * df48b232be453 cdavisafc/cloudnative-statelessness-connections@sha256:9405807d18ad427c636a26138b78f9195c1920558f391fdd53b12b62b2f27771 16 minutes ago Exited connections 17 c6a9c6650ad85 * 071b34eb0f59a bf261d1579144 24 minutes ago Exited coredns 21 f5c83a0827d97 * 48576a001e33c b305571ca60a5 24 minutes ago Exited kube-apiserver 28 c3b31fa9f68f8 * efd30464b0342 bf261d1579144 24 minutes ago Exited coredns 21 06bab6f2b10e8 * b7b6fdc778290 b2756210eeabf 33 minutes ago Exited etcd 26 5117eed3c547c * 0bd716af82428 cdavisafc/cloudnative-statelessness-connectionsposts-stateful@sha256:1eb63116c784ffd30a3bb4c77ba3bebdf177abde23971fd6f06314ec78c9ce79 3 hours ago Exited connectionsposts 3 8a4e037fda1f3 * ed58b71a7384f 6bb891430fb6e 3 hours ago Exited mysql 9 b3d4d4b1a90a5 * 74c4604d20881 c21b0c7400f98 3 hours ago Exited kube-proxy 10 d192a0d0c4352 * b3a07e529816a bd12a212f9dcb 3 hours ago Exited kube-addon-manager 10 3f3f0854eb8f8 * * ==> coredns [071b34eb0f59] <== * 2019-11-17T05:46:00.193Z [INFO] plugin/ready: Still waiting on: "kubernetes" * .:53 * 2019-11-17T05:46:03.053Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76 * 2019-11-17T05:46:03.054Z [INFO] CoreDNS-1.6.2 * 2019-11-17T05:46:03.054Z [INFO] linux/amd64, go1.12.8, 795a3eb * CoreDNS-1.6.2 * linux/amd64, go1.12.8, 795a3eb * I1117 05:46:08.180945 1 trace.go:82] Trace[216067182]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-11-17 05:45:57.934144586 +0000 UTC m=+0.750660110) (total time: 10.234119638s): * Trace[216067182]: [10.199453277s] [10.199453277s] Objects listed * I1117 05:46:08.189315 1 trace.go:82] Trace[665094032]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-11-17 
05:45:57.92176895 +0000 UTC m=+0.738284474) (total time: 10.246114167s): * Trace[665094032]: [10.183384972s] [10.183384972s] Objects listed * I1117 05:46:08.221202 1 trace.go:82] Trace[1175849154]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-11-17 05:45:57.922557065 +0000 UTC m=+0.739072489) (total time: 10.298616567s): * Trace[1175849154]: [10.231021779s] [10.231021779s] Objects listed * 2019-11-17T05:46:09.039Z [ERROR] plugin/errors: 2 6744179341325501480.6408354322842470513. HINFO: read udp 172.17.0.3:33847->75.75.76.76:53: i/o timeout * [INFO] SIGTERM: Shutting down servers then terminating * * ==> coredns [3c8934946580] <== * E1117 06:02:55.941237 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E1117 06:02:55.943013 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E1117 06:02:55.943058 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * 2019-11-17T06:02:26.930Z [INFO] plugin/ready: Still waiting on: "kubernetes" * .:53 * 2019-11-17T06:02:30.467Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76 * 2019-11-17T06:02:30.467Z [INFO] CoreDNS-1.6.2 * 2019-11-17T06:02:30.467Z [INFO] linux/amd64, go1.12.8, 795a3eb * CoreDNS-1.6.2 * linux/amd64, go1.12.8, 795a3eb * 2019-11-17T06:02:36.930Z [INFO] plugin/ready: Still waiting on: "kubernetes" * 2019-11-17T06:02:46.930Z [INFO] plugin/ready: Still waiting on: "kubernetes" * I1117 06:02:55.941184 1 trace.go:82] Trace[1834331732]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-11-17 06:02:25.484376708 +0000 UTC m=+0.701572046) (total time: 30.403832731s): * Trace[1834331732]: [30.403832731s] [30.403832731s] END * E1117 06:02:55.941237 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E1117 06:02:55.941237 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E1117 06:02:55.941237 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * I1117 06:02:55.942974 1 trace.go:82] Trace[1733199566]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-11-17 06:02:25.48760658 +0000 UTC m=+0.704801918) (total time: 30.455333226s): * Trace[1733199566]: [30.455333226s] [30.455333226s] END * E1117 06:02:55.943013 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get 
https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E1117 06:02:55.943013 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E1117 06:02:55.943013 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * I1117 06:02:55.943048 1 trace.go:82] Trace[843711810]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-11-17 06:02:25.484616305 +0000 UTC m=+0.701811643) (total time: 30.402967239s): * Trace[843711810]: [30.402967239s] [30.402967239s] END * E1117 06:02:55.943058 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E1117 06:02:55.943058 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E1117 06:02:55.943058 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * 2019-11-17T06:02:57.124Z [INFO] plugin/ready: Still waiting on: "kubernetes" * * ==> coredns [a5abb87dcc49] <== * .:53 * 2019-11-17T06:02:30.500Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76 * 2019-11-17T06:02:30.500Z [INFO] CoreDNS-1.6.2 * 2019-11-17T06:02:30.500Z [INFO] linux/amd64, go1.12.8, 795a3eb * CoreDNS-1.6.2 * linux/amd64, go1.12.8, 795a3eb * 2019-11-17T06:02:36.873Z [INFO] plugin/ready: Still waiting on: "kubernetes" * 2019-11-17T06:02:46.830Z [INFO] plugin/ready: Still waiting on: "kubernetes" * I1117 06:02:55.932509 1 trace.go:82] Trace[1314561798]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-11-17 06:02:25.499049585 +0000 UTC m=+0.288808781) (total time: 30.388721257s): * Trace[1314561798]: [30.388721257s] [30.388721257s] END * I1117 06:02:55.932632 1 trace.go:82] Trace[624288803]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-11-17 06:02:25.499732979 +0000 UTC m=+0.289492075) (total time: 30.432873409s): * Trace[624288803]: [30.432873409s] [30.432873409s] END * E1117 06:02:55.934164 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E1117 06:02:55.934164 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E1117 06:02:55.934164 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get 
https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E1117 06:02:55.934164 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E1117 06:02:55.934239 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E1117 06:02:55.934275 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E1117 06:02:55.934239 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E1117 06:02:55.934239 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E1117 06:02:55.934239 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * I1117 06:02:55.934265 1 trace.go:82] Trace[1024197893]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-11-17 06:02:25.49954178 +0000 UTC m=+0.289300876) (total time: 30.38850656s): * Trace[1024197893]: [30.38850656s] [30.38850656s] END * E1117 06:02:55.934275 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E1117 06:02:55.934275 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E1117 06:02:55.934275 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * 2019-11-17T06:02:57.124Z [INFO] plugin/ready: Still waiting on: "kubernetes" * * ==> coredns [efd30464b034] <== * .:53 * 2019-11-17T05:46:03.053Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76 * 2019-11-17T05:46:03.054Z [INFO] CoreDNS-1.6.2 * 2019-11-17T05:46:03.054Z [INFO] linux/amd64, go1.12.8, 795a3eb * CoreDNS-1.6.2 * linux/amd64, go1.12.8, 795a3eb * 2019-11-17T05:46:08.110Z [INFO] plugin/ready: Still waiting on: "kubernetes" * I1117 05:46:08.187363 1 trace.go:82] Trace[1129130459]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-11-17 05:45:57.914653814 +0000 UTC m=+0.730998634) (total time: 10.259444121s): * Trace[1129130459]: [10.198962969s] [10.198962969s] Objects listed * I1117 05:46:08.210771 1 trace.go:82]
Trace[416288223]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-11-17 05:45:57.921085537 +0000 UTC m=+0.737430357) (total time: 10.248460611s): * Trace[416288223]: [10.21426386s] [10.21426386s] Objects listed * I1117 05:46:08.215096 1 trace.go:82] Trace[152531944]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-11-17 05:45:57.932631457 +0000 UTC m=+0.748976177) (total time: 10.236759589s): * Trace[152531944]: [10.221156791s] [10.221156791s] Objects listed * 2019-11-17T05:46:09.040Z [ERROR] plugin/errors: 2 7377168356532398295.48259277177418671. HINFO: read udp 172.17.0.2:60066->75.75.75.75:53: i/o timeout * [INFO] SIGTERM: Shutting down servers then terminating * * ==> dmesg <== * [ +25.344896] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2 * [ +0.639307] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons * [ +1.322127] systemd-fstab-generator[1168]: Ignoring "noauto" for root device * [ +0.006568] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:35 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling. * [ +0.000002] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.) * [Nov17 06:01] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack. * [ +0.240103] vboxguest: loading out-of-tree module taints kernel. * [ +0.003398] vboxguest: PCI device not found, probably running on physical hardware. * [ +22.034024] systemd-fstab-generator[2577]: Ignoring "noauto" for root device * [ +23.341113] systemd-fstab-generator[3300]: Ignoring "noauto" for root device * [Nov17 06:02] kauditd_printk_skb: 104 callbacks suppressed * [ +10.084382] kauditd_printk_skb: 20 callbacks suppressed * [ +6.124593] kauditd_printk_skb: 8 callbacks suppressed * [ +14.283101] kauditd_printk_skb: 29 callbacks suppressed * [Nov17 06:03] NFSD: Unable to end grace period: -110 * [ +3.820955] kauditd_printk_skb: 2 callbacks suppressed * [Nov17 06:10] dockerd invoked oom-killer: gfp_mask=0x14280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO), nodemask=(null), order=0, oom_score_adj=-999 * [ +0.000011] CPU: 0 PID: 2849 Comm: dockerd Tainted: G O 4.15.0 #1 * [ +0.000001] Hardware name: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS 090008 12/07/2018 * [ +0.000001] Call Trace: * [ +0.000007] dump_stack+0x5c/0x82 * [ +0.000004] dump_header+0x66/0x281 * [ +0.000003] ? cap_inode_getsecurity+0x1f0/0x1f0 * [ +0.000002] oom_kill_process+0x223/0x430 * [ +0.000001] out_of_memory+0x28d/0x490 * [ +0.000003] __alloc_pages_slowpath+0x9db/0xd60 * [ +0.000003] __alloc_pages_nodemask+0x21e/0x240 * [ +0.000002] alloc_pages_vma+0x130/0x180 * [ +0.000004] __handle_mm_fault+0x429/0xa70 * [ +0.000003] ? __switch_to_asm+0x24/0x60 * [ +0.000002] handle_mm_fault+0xa5/0x1f0 * [ +0.000003] __do_page_fault+0x235/0x4b0 * [ +0.000002] ? 
page_fault+0x36/0x60 * [ +0.000002] page_fault+0x4c/0x60 * [ +0.000003] RIP: 0033:0x45f4f3 * [ +0.000001] RSP: 002b:00007fa0c97e9c20 EFLAGS: 00010202 * [ +0.000001] Mem-Info: * [ +0.000005] active_anon:389796 inactive_anon:72606 isolated_anon:0 * active_file:58 inactive_file:103 isolated_file:0 * unevictable:0 dirty:0 writeback:0 unstable:0 * slab_reclaimable:6492 slab_unreclaimable:10309 * mapped:26819 shmem:114364 pagetables:2012 bounce:0 * free:3069 free_pcp:0 free_cma:0 * [ +0.000003] Node 0 active_anon:1559184kB inactive_anon:290424kB active_file:232kB inactive_file:412kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:107276kB dirty:0kB writeback:0kB shmem:457456kB writeback_tmp:0kB unstable:0kB all_unreclaimable? no * [ +0.000000] Node 0 DMA free:7228kB min:44kB low:56kB high:68kB active_anon:7872kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15908kB mlocked:0kB kernel_stack:136kB pagetables:36kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB * [ +0.000004] lowmem_reserve[]: 0 1797 1797 1797 * [ +0.000003] Node 0 DMA32 free:5048kB min:5400kB low:7240kB high:9080kB active_anon:1550892kB inactive_anon:290424kB active_file:732kB inactive_file:244kB unevictable:0kB writepending:0kB present:2031552kB managed:1973572kB mlocked:0kB kernel_stack:9064kB pagetables:8012kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB * [ +0.000004] lowmem_reserve[]: 0 0 0 0 * [ +0.000003] Node 0 DMA: 7*4kB (UME) 6*8kB (ME) 13*16kB (UM) 5*32kB (UM) 2*64kB (UM) 4*128kB (UME) 2*256kB (UM) 3*512kB (UME) 2*1024kB (ME) 1*2048kB (E) 0*4096kB = 7228kB * [ +0.000012] Node 0 DMA32: 172*4kB (UMEH) 101*8kB (MEH) 115*16kB (UMEH) 34*32kB (UMEH) 1*64kB (H) 2*128kB (H) 1*256kB (H) 1*512kB (H) 0*1024kB 0*2048kB 0*4096kB = 5512kB * [ +0.000013] 114553 total pagecache pages * [ +0.000003] 0 pages in swap cache * [ +0.000001] Swap cache stats: add 0, delete 0, find 0/0 * [ +0.000001] Free swap = 0kB * [ +0.000001] Total swap = 0kB * [ +0.000000] 511886 pages RAM * [ +0.000001] 0 pages HighMem/MovableOnly * [ +0.000000] 14516 pages reserved * [ +0.000161] Out of memory: Kill process 4963 (java) score 1138 or sacrifice child * [ +0.000024] Killed process 4963 (java) total-vm:2008440kB, anon-rss:275312kB, file-rss:0kB, shmem-rss:0kB * * ==> kernel <== * 06:10:18 up 9 min, 0 users, load average: 43.01, 32.79, 15.89 * Linux minikube 4.15.0 #1 SMP Wed Sep 18 07:44:58 PDT 2019 x86_64 GNU/Linux * PRETTY_NAME="Buildroot 2018.05.3" * * ==> kube-addon-manager [a3dbaa6f8a4c] <==
* find: '/etc/kubernetes/admission-controls': No such file or directory * INFO: == Generated kubectl prune whitelist flags: --prune-whitelist core/v1/ConfigMap --prune-whitelist core/v1/Endpoints --prune-whitelist core/v1/Namespace --prune-whitelist core/v1/PersistentVolumeClaim --prune-whitelist core/v1/PersistentVolume --prune-whitelist core/v1/Pod --prune-whitelist core/v1/ReplicationController --prune-whitelist core/v1/Secret --prune-whitelist core/v1/Service --prune-whitelist batch/v1/Job --prune-whitelist batch/v1beta1/CronJob --prune-whitelist apps/v1/DaemonSet --prune-whitelist apps/v1/Deployment --prune-whitelist apps/v1/ReplicaSet --prune-whitelist apps/v1/StatefulSet --prune-whitelist extensions/v1beta1/Ingress == * error: no objects passed to apply * INFO: == Kubernetes addon manager started at 2019-11-17T06:02:12+00:00 with ADDON_CHECK_INTERVAL_SEC=5 == * INFO: == Default service account in the kube-system namespace has token default-token-ck5vd == * INFO: == Entering periodical apply loop at 2019-11-17T06:02:20+00:00 == * INFO: Leader election disabled. * INFO: == Kubernetes addon ensure completed at 2019-11-17T06:02:21+00:00 == * INFO: == Reconciling with deprecated label == * INFO: == Reconciling with addon-manager label == * serviceaccount/storage-provisioner unchanged * INFO: == Kubernetes addon reconcile completed at 2019-11-17T06:02:24+00:00 == * INFO: Leader election disabled. * INFO: == Kubernetes addon ensure completed at 2019-11-17T06:02:25+00:00 == * INFO: == Reconciling with deprecated label == * error: no objects passed to apply * INFO: == Reconciling with addon-manager label == * serviceaccount/storage-provisioner unchanged * INFO: == Kubernetes addon reconcile completed at 2019-11-17T06:02:28+00:00 == * error: no objects passed to apply * error: no objects passed to apply * INFO: Leader election disabled. * INFO: == Kubernetes addon ensure completed at 2019-11-17T06:02:30+00:00 == * INFO: == Reconciling with deprecated label == * INFO: == Reconciling with addon-manager label == * serviceaccount/storage-provisioner unchanged * INFO: == Kubernetes addon reconcile completed at 2019-11-17T06:02:32+00:00 == * INFO: Leader election disabled. * INFO: == Kubernetes addon ensure completed at 2019-11-17T06:02:36+00:00 == * INFO: == Reconciling with deprecated label == * INFO: == Reconciling with addon-manager label == * serviceaccount/storage-provisioner unchanged * INFO: == Kubernetes addon reconcile completed at 2019-11-17T06:02:38+00:00 == * INFO: Leader election disabled. * error: no objects passed to apply * INFO: == Kubernetes addon ensure completed at 2019-11-17T06:02:41+00:00 == * INFO: == Reconciling with deprecated label == * INFO: == Reconciling with addon-manager label == * serviceaccount/storage-provisioner unchanged * INFO: == Kubernetes addon reconcile completed at 2019-11-17T06:02:45+00:00 == * INFO: Leader election disabled. * INFO: == Kubernetes addon ensure completed at 2019-11-17T06:02:46+00:00 == * INFO: == Reconciling with deprecated label == * error: no objects passed to apply * INFO: == Reconciling with addon-manager label == * serviceaccount/storage-provisioner unchanged * INFO: == Kubernetes addon reconcile completed at 2019-11-17T06:02:51+00:00 == * INFO: Leader election disabled. 
* INFO: == Kubernetes addon ensure completed at 2019-11-17T06:03:11+00:00 == * INFO: == Reconciling with deprecated label == * error: no objects passed to apply * INFO: == Reconciling with addon-manager label == * * ==> kube-addon-manager [b3a07e529816] <== * INFO: == Kubernetes addon reconcile completed at 2019-11-17T05:54:01+00:00 == * error: no objects passed to apply * INFO: Leader election disabled. * INFO: == Kubernetes addon ensure completed at 2019-11-17T05:54:04+00:00 == * INFO: == Reconciling with deprecated label == * INFO: == Reconciling with addon-manager label == * serviceaccount/storage-provisioner unchanged * INFO: == Kubernetes addon reconcile completed at 2019-11-17T05:54:10+00:00 == * INFO: Leader election disabled. * error when creating "/etc/kubernetes/addons/storage-provisioner.yaml": Post https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings: dial tcp 127.0.0.1:8443: connect: connection refused * error when creating "/etc/kubernetes/addons/storageclass.yaml": Post https://localhost:8443/apis/storage.k8s.io/v1/storageclasses: dial tcp 127.0.0.1:8443: connect: connection refused * INFO: == Kubernetes addon ensure completed at 2019-11-17T05:58:24+00:00 == * INFO: == Reconciling with deprecated label == * INFO: == Reconciling with addon-manager label == * error: no objects passed to apply * error when retrieving current configuration of: * Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount" * Name: "storage-provisioner", Namespace: "kube-system" * Object: &{map["apiVersion":"v1" "kind":"ServiceAccount" "metadata":map["namespace":"kube-system" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":""] "labels":map["addonmanager.kubernetes.io/mode":"Reconcile"] "name":"storage-provisioner"]]} * from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get https://localhost:8443/api/v1/namespaces/kube-system/serviceaccounts/storage-provisioner: dial tcp 127.0.0.1:8443: connect: connection refused * error when retrieving current configuration of: * INFO: == Kubernetes addon reconcile completed at 2019-11-17T05:58:25+00:00 == * INFO: Leader election disabled. * Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod" * Name: "storage-provisioner", Namespace: "kube-system" * The connection to the server localhost:8443 was refused - did you specify the right host or port? * Object: &{map["apiVersion":"v1" "kind":"Pod" "metadata":map["labels":map["addonmanager.kubernetes.io/mode":"Reconcile" "integration-test":"storage-provisioner"] "
* name":"storage-provisioner" "namespace":"kube-system" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":""]] "spec":map["containers":[map["imagePullPolicy":"IfNotPresent" "name":"storage-provisioner" "volumeMounts":[map["mountPath":"/tmp" "name":"tmp"]] "command":["/storage-provisioner"] "image":"gcr.io/k8s-minikube/storage-provisioner:v1.8.1"]] "hostNetwork":%!q(bool=true) "serviceAccountName":"storage-provisioner" "volumes":[map["hostPath":map["path":"/tmp" "type":"Directory"] "name":"tmp"]]]]} * from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get https://localhost:8443/api/v1/namespaces/kube-system/pods/storage-provisioner: dial tcp 127.0.0.1:8443: connect: connection refused * error: no objects passed to apply * error when retrieving current configuration of: * Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount" * INFO: == Kubernetes addon ensure completed at 2019-11-17T05:58:25+00:00 == * INFO: == Reconciling with deprecated label == * INFO: == Reconciling with addon-manager label == * Name: "storage-provisioner", Namespace: "kube-system" * Object: &{map["apiVersion":"v1" "kind":"ServiceAccount" "metadata":map["labels":map["addonmanager.kubernetes.io/mode":"Reconcile"] "name":"storage-provisioner" "namespace":"kube-system" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":""]]]} * from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get https://localhost:8443/api/v1/namespaces/kube-system/serviceaccounts/storage-provisioner: dial tcp 127.0.0.1:8443: connect: connection refused * error when retrieving current configuration of: * Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod" * Name: "storage-provisioner", Namespace: "kube-system" * Object: &{map["metadata":map["name":"storage-provisioner" "namespace":"kube-system" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":""] "labels":map["addonmanager.kubernetes.io/mode":"Reconcile" "integration-test":"storage-provisioner"]] "spec":map["containers":[map["image":"gcr.io/k8s-minikube/storage-provisioner:v1.8.1" "imagePullPolicy":"IfNotPresent" "name":"storage-provisioner" "volumeMounts":[map["mountPath":"/tmp" "name":"tmp"]] "command":["/storage-provisioner"]]] "hostNetwork":%!q(bool=true) "serviceAccountName":"storage-provisioner" "volumes":[map["hostPath":map["path":"/tmp" "type":"Directory"] "name":"tmp"]]] "apiVersion":"v1" "kind":"Pod"]} * from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get https://localhost:8443/api/v1/namespaces/kube-system/pods/storage-provisioner: dial tcp 127.0.0.1:8443: connect: connection refused * INFO: == Kubernetes addon reconcile completed at 2019-11-17T05:58:25+00:00 == * INFO: Leader election disabled. * The connection to the server localhost:8443 was refused - did you specify the right host or port? 
* INFO: == Kubernetes addon ensure completed at 2019-11-17T05:58:30+00:00 == * error: no objects passed to apply * INFO: == Reconciling with deprecated label == * INFO: == Reconciling with addon-manager label == * error when retrieving current configuration of: * Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount" * Name: "storage-provisioner", Namespace: "kube-system" * Object: &{map["apiVersion":"v1" "kind":"ServiceAccount" "metadata":map["labels":map["addonmanager.kubernetes.io/mode":"Reconcile"] "name":"storage-provisioner" "namespace":"kube-system" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":""]]]} * from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get https://localhost:8443/api/v1/namespaces/kube-system/serviceaccounts/storage-provisioner: dial tcp 127.0.0.1:8443: connect: connection refused * error when retrieving current configuration of: * Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod" * Name: "storage-provisioner", Namespace: "kube-system" * Object: &{map["apiVersion":"v1" "kind":"Pod" "metadata":map["labels":map["addonmanager.kubernetes.io/mode":"Reconcile" "integration-test":"storage-provisioner"] "name":"storage-provisioner" "namespace":"kube-system" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":""]] "spec":map["hostNetwork":%!q(bool=true) "serviceAccountName":"storage-provisioner" "volumes":[map["hostPath":map["path":"/tmp" "type":"Directory"] "name":"tmp"]] "containers":[map["volumeMounts":[map["mountPath":"/tmp" "name":"tmp"]] "command":["/storage-provisioner"] "image":"gcr.io/k8s-minikube/storage-provisioner:v1.8.1" "imagePullPolicy":"IfNotPresent" "name":"storage-provisioner"]]]]} * from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get https://localhost:8443/api/v1/namespaces/kube-system/pods/storage-provisioner: dial tcp 127.0.0.1:8443: connect: connection refused * INFO: == Kubernetes addon reconcile completed at 2019-11-17T05:58:30+00:00 == * * ==> kube-apiserver [48576a001e33] <== * W1117 05:58:28.561543 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:28.587423 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:28.609126 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:28.624183 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:28.645245 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:28.668779 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. 
Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:28.669352 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:28.671977 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:28.678184 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:28.687396 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:28.693151 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:28.718656 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:28.732956 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:28.740236 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:28.753121 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:28.765012 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:28.869119 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:28.879091 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:28.891414 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... 
* W1117 05:58:28.899415 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:28.918766 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:28.943638 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:29.001633 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:29.022603 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:29.045579 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:29.064550 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:29.071512 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:29.082742 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:29.084137 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:29.084567 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:29.132482 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:29.132810 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:29.153542 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. 
Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:29.157708 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:29.162008 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:29.162043 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:29.170768 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:29.170973 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:29.215463 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:29.220698 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:29.240643 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:29.240643 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:29.244417 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:29.287783 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:29.293366 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:29.300273 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... 
* W1117 05:58:29.315829 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:29.382957 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:29.476991 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:29.570132 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:29.638353 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:31.647831 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:31.803603 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:31.858405 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:31.946559 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:32.005566 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:32.009234 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:32.060143 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:32.144926 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * W1117 05:58:32.217792 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. 
Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... * * ==> kube-apiserver [712f8b871b15] <== * /workspace/anago-v1.16.0-rc.2.1+2bd9643cee5b3b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/cachecontrol.go:31 +0xa8 * net/http.HandlerFunc.ServeHTTP(0xc00138c400, 0x7b10de0, 0xc00f320c40, 0xc00f36e500) * /usr/local/go/src/net/http/server.go:1995 +0x44 * k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.WithLogging.func1(0x7b04720, 0xc00e965698, 0xc00f35ab00) * /workspace/anago-v1.16.0-rc.2.1+2bd9643cee5b3b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:89 +0x29c * net/http.HandlerFunc.ServeHTTP(0xc00138c420, 0x7b04720, 0xc00e965698, 0xc00f35ab00) * /usr/local/go/src/net/http/server.go:1995 +0x44 * k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.withPanicRecovery.func1(0x7b04720, 0xc00e965698, 0xc00f35ab00) * /workspace/anago-v1.16.0-rc.2.1+2bd9643cee5b3b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/wrap.go:51 +0x105 * net/http.HandlerFunc.ServeHTTP(0xc00138c440, 0x7b04720, 0xc00e965698, 0xc00f35ab00) * /usr/local/go/src/net/http/server.go:1995 +0x44 * k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*APIServerHandler).ServeHTTP(0xc002259e90, 0x7b04720, 0xc00e965698, 0xc00f35ab00) * /workspace/anago-v1.16.0-rc.2.1+2bd9643cee5b3b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:189 +0x51 * net/http.serverHandler.ServeHTTP(0xc00441a9c0, 0x7b04720, 0xc00e965698, 0xc00f35ab00) * /usr/local/go/src/net/http/server.go:2774 +0xa8 * net/http.initNPNRequest.ServeHTTP(0xc00d99f500, 0xc00441a9c0, 0x7b04720, 0xc00e965698, 0xc00f35ab00) * /usr/local/go/src/net/http/server.go:3323 +0x8d * k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*serverConn).runHandler(0xc0085b1380, 0xc00e965698, 0xc00f35ab00, 0xc00b849580) * /workspace/anago-v1.16.0-rc.2.1+2bd9643cee5b3b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/server.go:2125 +0x89 * created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*serverConn).processHeaders * /workspace/anago-v1.16.0-rc.2.1+2bd9643cee5b3b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/server.go:1859 +0x4f4 * W1117 06:10:16.961981 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting... * W1117 06:10:16.966833 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting... * W1117 06:10:16.967023 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting... * W1117 06:10:16.967054 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting... 
* W1117 06:10:16.967337 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting... * W1117 06:10:16.968021 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting... * W1117 06:10:16.968047 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting... * W1117 06:10:16.968119 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting... * I1117 06:10:16.969218 1 trace.go:116] Trace[693534505]: "GuaranteedUpdate etcd3" type:*core.Event (started: 2019-11-17 06:03:18.09530005 +0000 UTC m=+66.045727196) (total time: 6m58.869674069s): * Trace[693534505]: [1.575448809s] [1.575448809s] initial value restored * Trace[693534505]: [6m58.869674069s] [6m57.29422526s] END * W1117 06:10:16.979848 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting... * W1117 06:10:16.982272 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting... * W1117 06:10:16.982296 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting... * W1117 06:10:16.983146 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting... * W1117 06:10:16.983226 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting... * W1117 06:10:16.984017 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting... * W1117 06:10:16.984123 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting... * W1117 06:10:17.004697 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting... * W1117 06:10:17.006265 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting... 
* I1117 06:10:17.269690 1 log.go:172] http: TLS handshake error from 192.168.0.11:61980: write tcp 192.168.0.20:8443->192.168.0.11:61980: write: broken pipe * I1117 06:10:17.273003 1 log.go:172] http: TLS handshake error from 192.168.0.11:61979: write tcp 192.168.0.20:8443->192.168.0.11:61979: write: broken pipe * E1117 06:10:17.293850 1 writers.go:118] apiserver was unable to write a fallback JSON response: http: Handler timeout * E1117 06:10:17.306166 1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{} * I1117 06:10:17.320101 1 trace.go:116] Trace[762362511]: "Get" url:/api/v1/namespaces/default (started: 2019-11-17 06:03:23.334312347 +0000 UTC m=+71.284739593) (total time: 6m53.985732993s): * Trace[762362511]: [6m53.985732993s] [6m53.787960872s] END * E1117 06:10:17.323637 1 writers.go:105] apiserver was unable to write a JSON response: http: Handler timeout * E1117 06:10:17.344687 1 writers.go:105] apiserver was unable to write a JSON response: http: Handler timeout * E1117 06:10:17.364994 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} * E1117 06:10:17.388632 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} * E1117 06:10:17.405477 1 writers.go:118] apiserver was unable to write a fallback JSON response: http: Handler timeout * E1117 06:10:17.441241 1 writers.go:118] apiserver was unable to write a fallback JSON response: http: Handler timeout * I1117 06:10:17.443660 1 trace.go:116] Trace[1207966728]: "Get" url:/api/v1/namespaces/kube-system (started: 2019-11-17 06:03:20.741275299 +0000 UTC m=+68.691702445) (total time: 6m56.702199631s): * Trace[1207966728]: [6m56.702199631s] [6m56.626467701s] END * I1117 06:10:17.460643 1 trace.go:116] Trace[1348837527]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube (started: 2019-11-17 06:03:33.432719929 +0000 UTC m=+81.383147075) (total time: 6m44.027856945s): * Trace[1348837527]: [6m43.792065727s] [6m43.792065727s] About to convert to expected version * Trace[1348837527]: [6m44.027856945s] [189.81507ms] END * E1117 06:10:17.472756 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} * E1117 06:10:17.476712 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} * * ==> kube-controller-manager [d9b5c2e24aec] <== * I1117 06:02:39.108788 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io * I1117 06:02:39.108930 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges * I1117 06:02:39.109023 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates * I1117 06:02:39.109129 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for statefulsets.apps * I1117 06:02:39.109233 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch * I1117 06:02:39.109337 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints * I1117 06:02:39.109470 1 resource_quota_controller.go:271] Starting resource quota controller * I1117 06:02:39.109560 1 shared_informer.go:197] Waiting for caches to sync for resource quota * I1117 06:02:39.109658 1 resource_quota_monitor.go:303] QuotaMonitor running * I1117 
06:02:39.109772 1 controllermanager.go:534] Started "resourcequota" * I1117 06:02:39.133401 1 controllermanager.go:534] Started "namespace" * I1117 06:02:39.133611 1 namespace_controller.go:186] Starting namespace controller * I1117 06:02:39.133973 1 shared_informer.go:197] Waiting for caches to sync for namespace * I1117 06:02:39.204283 1 controllermanager.go:534] Started "replicaset" * I1117 06:02:39.204564 1 replica_set.go:182] Starting replicaset controller * I1117 06:02:39.204655 1 shared_informer.go:197] Waiting for caches to sync for ReplicaSet * I1117 06:02:39.353809 1 controllermanager.go:534] Started "csrcleaner" * W1117 06:02:39.353886 1 controllermanager.go:526] Skipping "root-ca-cert-publisher" * I1117 06:02:39.354482 1 shared_informer.go:197] Waiting for caches to sync for garbage collector * I1117 06:02:39.354770 1 cleaner.go:81] Starting CSR cleaner controller * I1117 06:02:39.423010 1 shared_informer.go:204] Caches are synced for certificate * W1117 06:02:39.435483 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist * I1117 06:02:39.435658 1 shared_informer.go:204] Caches are synced for namespace * I1117 06:02:39.442875 1 shared_informer.go:204] Caches are synced for TTL * I1117 06:02:39.454745 1 shared_informer.go:204] Caches are synced for PVC protection * I1117 06:02:39.455015 1 shared_informer.go:204] Caches are synced for service account * I1117 06:02:39.455177 1 shared_informer.go:204] Caches are synced for attach detach * I1117 06:02:39.466879 1 shared_informer.go:204] Caches are synced for GC * I1117 06:02:39.504895 1 shared_informer.go:204] Caches are synced for ReplicaSet * I1117 06:02:39.506968 1 shared_informer.go:204] Caches are synced for taint * I1117 06:02:39.507429 1 node_lifecycle_controller.go:1208] Initializing eviction metric for zone: * W1117 06:02:39.508529 1 node_lifecycle_controller.go:903] Missing timestamp for Node minikube. Assuming now as a timestamp. * I1117 06:02:39.509102 1 node_lifecycle_controller.go:1108] Controller detected that zone is now in state Normal. * I1117 06:02:39.510590 1 shared_informer.go:204] Caches are synced for job * I1117 06:02:39.510945 1 taint_manager.go:186] Starting NoExecuteTaintManager * I1117 06:02:39.511112 1 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"307b5052-bf9d-4542-9f8b-9bf9fb9a1d61", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller * I1117 06:02:39.511503 1 shared_informer.go:204] Caches are synced for stateful set * I1117 06:02:39.511597 1 shared_informer.go:204] Caches are synced for HPA * I1117 06:02:39.511974 1 shared_informer.go:204] Caches are synced for daemon sets * I1117 06:02:39.517066 1 shared_informer.go:204] Caches are synced for certificate * I1117 06:02:39.519066 1 shared_informer.go:204] Caches are synced for PV protection * I1117 06:02:39.520939 1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator * I1117 06:02:39.532052 1 shared_informer.go:204] Caches are synced for ReplicationController * I1117 06:02:39.701947 1 shared_informer.go:204] Caches are synced for disruption * I1117 06:02:39.702201 1 disruption.go:341] Sending events to api server. 
* I1117 06:02:39.743558 1 shared_informer.go:204] Caches are synced for deployment * I1117 06:02:39.743943 1 shared_informer.go:204] Caches are synced for expand * I1117 06:02:39.781094 1 shared_informer.go:204] Caches are synced for persistent volume * I1117 06:02:39.910069 1 shared_informer.go:204] Caches are synced for resource quota * I1117 06:02:39.955463 1 shared_informer.go:204] Caches are synced for endpoint * I1117 06:02:39.957383 1 shared_informer.go:204] Caches are synced for garbage collector * I1117 06:02:39.981537 1 shared_informer.go:204] Caches are synced for bootstrap_signer * I1117 06:02:39.982787 1 shared_informer.go:204] Caches are synced for garbage collector * I1117 06:02:39.982819 1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage * I1117 06:02:40.806547 1 shared_informer.go:197] Waiting for caches to sync for resource quota * I1117 06:02:40.906843 1 shared_informer.go:204] Caches are synced for resource quota * I1117 06:03:04.964893 1 leaderelection.go:287] failed to renew lease kube-system/kube-controller-manager: failed to tryAcquireOrRenew context deadline exceeded * F1117 06:03:05.034054 1 controllermanager.go:279] leaderelection lost * I1117 06:03:05.579186 1 resource_quota_controller.go:290] Shutting down resource quota controller * I1117 06:03:05.591505 1 pv_controller_base.go:298] Shutting down persistent volume controller * * ==> kube-proxy [2deb1c21d16f] <== * W1117 06:02:24.845313 1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy * I1117 06:02:25.568660 1 node.go:135] Successfully retrieved node IP: 192.168.0.20 * I1117 06:02:25.568728 1 server_others.go:149] Using iptables Proxier. * W1117 06:02:25.576460 1 proxier.go:287] clusterCIDR not specified, unable to distinguish between internal and external traffic * I1117 06:02:25.608235 1 server.go:529] Version: v1.16.0 * I1117 06:02:25.685520 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072 * I1117 06:02:25.710491 1 conntrack.go:52] Setting nf_conntrack_max to 131072 * I1117 06:02:25.719329 1 conntrack.go:83] Setting conntrack hashsize to 32768 * I1117 06:02:25.723718 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 * I1117 06:02:25.723833 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600 * I1117 06:02:25.724221 1 config.go:131] Starting endpoints config controller * I1117 06:02:25.724266 1 shared_informer.go:197] Waiting for caches to sync for endpoints config * I1117 06:02:25.724308 1 config.go:313] Starting service config controller * I1117 06:02:25.724338 1 shared_informer.go:197] Waiting for caches to sync for service config * I1117 06:02:25.867882 1 shared_informer.go:204] Caches are synced for service config * I1117 06:02:25.868084 1 shared_informer.go:204] Caches are synced for endpoints config * * ==> kube-proxy [74c4604d2088] <== * Trace[1617637697]: [18.584471964s] [18.579393172s] END * E1117 03:43:14.137543 1 proxier.go:726] Failed to ensure that filter chain INPUT jumps to KUBE-EXTERNAL-SERVICES: timed out while checking rules * I1117 03:49:19.105026 1 trace.go:116] Trace[737828289]: "iptables save" (started: 2019-11-17 03:47:35.924483125 +0000 UTC m=+717.634237097) (total time: 1m21.838133396s): * Trace[737828289]: [1m21.838133396s] [1m21.838133396s] END * I1117 03:52:58.782462 1 trace.go:116] Trace[325557888]: "iptables save" (started: 2019-11-17 03:49:32.789502151 +0000 UTC m=+834.499256123) (total 
time: 2m52.708972189s): * Trace[325557888]: [2m52.708972189s] [2m52.708972189s] END * I1117 03:53:14.144383 1 trace.go:116] Trace[1481306036]: "iptables restore" (started: 2019-11-17 03:52:59.059033723 +0000 UTC m=+1040.768787695) (total time: 14.345085708s): * Trace[1481306036]: [14.345085708s] [13.376919181s] END * E1117 04:00:16.845939 1 proxier.go:726] Failed to ensure that filter chain INPUT jumps to KUBE-EXTERNAL-SERVICES: timed out while checking rules * E1117 04:26:11.882339 1 proxier.go:726] Failed to ensure that filter chain OUTPUT jumps to KUBE-SERVICES: timed out while checking rules * E1117 04:49:48.214835 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=139049&timeout=6m12s&timeoutSeconds=372&watch=true: net/http: TLS handshake timeout * E1117 04:49:49.302922 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: Get https://localhost:8443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=138819&timeout=7m17s&timeoutSeconds=437&watch=true: net/http: TLS handshake timeout * I1117 05:01:05.090107 1 trace.go:116] Trace[1503679304]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (started: 2019-11-17 04:50:24.056759813 +0000 UTC m=+4485.766513885) (total time: 9m28.680586028s): * Trace[1503679304]: [9m28.680586028s] [9m28.680586028s] END * E1117 05:01:19.119676 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0: dial tcp: i/o timeout * I1117 05:01:29.063638 1 trace.go:116] Trace[234828594]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (started: 2019-11-17 04:49:55.610191396 +0000 UTC m=+4457.319945368) (total time: 10m59.560641452s): * Trace[234828594]: [10m59.560641452s] [10m59.560641452s] END * E1117 05:01:41.967646 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0: dial tcp: i/o timeout * I1117 05:11:35.077880 1 trace.go:116] Trace[591940071]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (started: 2019-11-17 05:01:42.286196262 +0000 UTC m=+5163.995950234) (total time: 9m47.425468186s): * Trace[591940071]: [9m47.425468186s] [9m47.425468186s] END * E1117 05:12:20.974422 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0: dial tcp: i/o timeout * I1117 05:24:55.758011 1 trace.go:116] Trace[1669628617]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (started: 2019-11-17 05:01:50.933489358 +0000 UTC m=+5172.643243430) (total time: 20m30.462217781s): * Trace[1669628617]: [20m30.462217781s] [20m30.462217781s] END * E1117 05:25:27.750411 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: 
Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0: dial tcp: i/o timeout * I1117 05:37:02.071054 1 trace.go:116] Trace[185481143]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (started: 2019-11-17 05:14:06.455035316 +0000 UTC m=+5908.164789388) (total time: 22m55.537504265s): * Trace[185481143]: [22m55.537504265s] [22m55.537504265s] END * E1117 05:37:02.091436 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0: dial tcp: i/o timeout * I1117 05:37:02.170803 1 trace.go:116] Trace[710274328]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (started: 2019-11-17 05:25:55.812335656 +0000 UTC m=+6617.522089628) (total time: 11m6.358439123s): * Trace[710274328]: [11m6.358439123s] [11m6.358439123s] END * E1117 05:37:02.170827 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: i/o timeout * E1117 05:37:09.219886 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=139049&timeout=9m33s&timeoutSeconds=573&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused * E1117 05:37:09.220322 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: Get https://localhost:8443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=138819&timeout=9m5s&timeoutSeconds=545&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused * E1117 05:37:10.648418 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=139049&timeout=6m34s&timeoutSeconds=394&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused * E1117 05:37:10.648652 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: Get https://localhost:8443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=138819&timeout=6m0s&timeoutSeconds=360&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused * E1117 05:37:11.685704 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=139049&timeout=7m41s&timeoutSeconds=461&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused * E1117 05:37:11.686066 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed 
to watch *v1.Service: Get https://localhost:8443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=138819&timeout=9m25s&timeoutSeconds=565&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused * E1117 05:37:12.687932 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: Get https://localhost:8443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=138819&timeout=9m47s&timeoutSeconds=587&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused * E1117 05:37:12.692775 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=139049&timeout=5m31s&timeoutSeconds=331&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused * E1117 05:37:13.692826 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: Get https://localhost:8443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=138819&timeout=7m15s&timeoutSeconds=435&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused * E1117 05:37:13.696719 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=139049&timeout=8m17s&timeoutSeconds=497&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused * E1117 05:37:14.698509 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: Get https://localhost:8443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=138819&timeout=8m26s&timeoutSeconds=506&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused * E1117 05:37:14.699440 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=139049&timeout=5m50s&timeoutSeconds=350&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused * E1117 05:37:15.703739 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=139049&timeout=7m34s&timeoutSeconds=454&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused * E1117 05:37:15.719191 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: Get https://localhost:8443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=138819&timeout=6m40s&timeoutSeconds=400&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused * E1117 
05:37:16.705651 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=139049&timeout=6m50s&timeoutSeconds=410&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused * E1117 05:37:16.721076 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: Get https://localhost:8443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=138819&timeout=7m50s&timeoutSeconds=470&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused * E1117 05:37:17.706639 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=139049&timeout=5m17s&timeoutSeconds=317&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused * E1117 05:37:17.721873 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: Get https://localhost:8443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=138819&timeout=6m14s&timeoutSeconds=374&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused * W1117 05:37:25.226269 1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Endpoints ended with: too old resource version: 139049 (139122) * W1117 05:37:25.226339 1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: too old resource version: 138819 (139122) * I1117 05:45:40.530896 1 trace.go:116] Trace[1751548359]: "iptables save" (started: 2019-11-17 05:45:37.362493447 +0000 UTC m=+7799.072247519) (total time: 3.063395926s): * Trace[1751548359]: [3.063395926s] [3.063395926s] END * E1117 05:45:55.485654 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: Get https://localhost:8443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=139122&timeout=5m54s&timeoutSeconds=354&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused * E1117 05:45:55.485713 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=139263&timeout=5m7s&timeoutSeconds=307&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused * E1117 05:45:56.489093 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=139263&timeout=5m30s&timeoutSeconds=330&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused * E1117 05:45:56.489091 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: Get 
https://localhost:8443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=139122&timeout=5m52s&timeoutSeconds=352&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused * E1117 05:45:57.501418 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=139263&timeout=8m46s&timeoutSeconds=526&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused * E1117 05:45:57.502426 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: Get https://localhost:8443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=139122&timeout=5m50s&timeoutSeconds=350&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused * W1117 05:46:08.583242 1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Endpoints ended with: too old resource version: 139263 (139294) * W1117 05:46:08.583322 1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: too old resource version: 139122 (139294) * * ==> kube-scheduler [aeb8146fc090] <== * I1117 06:02:14.591321 1 serving.go:319] Generated self-signed cert in-memory * W1117 06:02:19.271862 1 authentication.go:262] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' * W1117 06:02:19.271966 1 authentication.go:199] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system" * W1117 06:02:19.271986 1 authentication.go:200] Continuing without authentication configuration. This may treat all requests as anonymous. * W1117 06:02:19.271997 1 authentication.go:201] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false * I1117 06:02:19.344377 1 server.go:143] Version: v1.16.0 * I1117 06:02:19.346230 1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory * W1117 06:02:19.377005 1 authorization.go:47] Authorization is disabled * W1117 06:02:19.377036 1 authentication.go:79] Authentication is disabled * I1117 06:02:19.377074 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251 * I1117 06:02:19.378019 1 secure_serving.go:123] Serving securely on 127.0.0.1:10259 * I1117 06:02:20.524153 1 leaderelection.go:241] attempting to acquire leader lease kube-system/kube-scheduler... * I1117 06:02:39.318972 1 leaderelection.go:251] successfully acquired lease kube-system/kube-scheduler * I1117 06:03:03.929974 1 leaderelection.go:287] failed to renew lease kube-system/kube-scheduler: failed to tryAcquireOrRenew context deadline exceeded * F1117 06:03:03.997667 1 server.go:264] leaderelection lost * * ==> kubelet <== * -- Logs begin at Sun 2019-11-17 05:59:47 UTC, end at Sun 2019-11-17 06:10:23 UTC. 
-- * Nov 17 06:03:11 minikube kubelet[3359]: E1117 06:03:11.548609 3359 pod_workers.go:191] Error syncing pod c18ee741ac4ad7b2bfda7d88116f3047 ("kube-scheduler-minikube_kube-system(c18ee741ac4ad7b2bfda7d88116f3047)"), skipping: failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-minikube_kube-system(c18ee741ac4ad7b2bfda7d88116f3047)" * Nov 17 06:03:11 minikube kubelet[3359]: E1117 06:03:11.551861 3359 remote_runtime.go:261] RemoveContainer "233aa4be5bb4e3fec3eb0ad0da008586c620c8c1c25b9ad773bf6a01c6f3710e" from runtime service failed: rpc error: code = Unknown desc = failed to remove container "233aa4be5bb4e3fec3eb0ad0da008586c620c8c1c25b9ad773bf6a01c6f3710e": Error response from daemon: removal of container 233aa4be5bb4e3fec3eb0ad0da008586c620c8c1c25b9ad773bf6a01c6f3710e is already in progress * Nov 17 06:03:11 minikube kubelet[3359]: E1117 06:03:11.552052 3359 kuberuntime_gc.go:143] Failed to remove container "233aa4be5bb4e3fec3eb0ad0da008586c620c8c1c25b9ad773bf6a01c6f3710e": rpc error: code = Unknown desc = failed to remove container "233aa4be5bb4e3fec3eb0ad0da008586c620c8c1c25b9ad773bf6a01c6f3710e": Error response from daemon: removal of container 233aa4be5bb4e3fec3eb0ad0da008586c620c8c1c25b9ad773bf6a01c6f3710e is already in progress * Nov 17 06:03:12 minikube kubelet[3359]: E1117 06:03:12.579302 3359 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 75.75.75.75 75.75.76.76 2001:558:feed::2 * Nov 17 06:03:12 minikube kubelet[3359]: E1117 06:03:12.579671 3359 pod_workers.go:191] Error syncing pod c18ee741ac4ad7b2bfda7d88116f3047 ("kube-scheduler-minikube_kube-system(c18ee741ac4ad7b2bfda7d88116f3047)"), skipping: failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-minikube_kube-system(c18ee741ac4ad7b2bfda7d88116f3047)" * Nov 17 06:03:12 minikube kubelet[3359]: E1117 06:03:12.639918 3359 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 75.75.75.75 75.75.76.76 2001:558:feed::2 * Nov 17 06:03:12 minikube kubelet[3359]: E1117 06:03:12.640568 3359 pod_workers.go:191] Error syncing pod 38d78cbd438e068d417c11c848b26f09 ("kube-controller-manager-minikube_kube-system(38d78cbd438e068d417c11c848b26f09)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(38d78cbd438e068d417c11c848b26f09)" * Nov 17 06:03:12 minikube kubelet[3359]: E1117 06:03:12.925184 3359 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 75.75.75.75 75.75.76.76 2001:558:feed::2 * Nov 17 06:03:18 minikube kubelet[3359]: E1117 06:03:18.534510 3359 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 75.75.75.75 75.75.76.76 2001:558:feed::2 * Nov 17 06:05:21 minikube kubelet[3359]: E1117 06:05:21.351552 3359 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "minikube": Get https://localhost:8443/api/v1/nodes/minikube?resourceVersion=0&timeout=10s: context deadline exceeded (Client.Timeout exceeded while awaiting headers) * Nov 17 06:05:56 minikube kubelet[3359]: E1117 06:05:33.875908 3359 controller.go:170] failed 
to update node lease, error: Put https://localhost:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube?timeout=10s: context deadline exceeded (Client.Timeout exceeded while awaiting headers) * Nov 17 06:06:38 minikube kubelet[3359]: E1117 06:05:56.113619 3359 remote_runtime.go:453] Status from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded * Nov 17 06:07:04 minikube kubelet[3359]: E1117 06:05:53.296368 3359 remote_runtime.go:277] ListContainers with filter &ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},} from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded * Nov 17 06:07:42 minikube kubelet[3359]: E1117 06:06:33.700099 3359 remote_runtime.go:182] ListPodSandbox with filter &PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},} from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded * Nov 17 06:08:45 minikube kubelet[3359]: E1117 06:07:51.637526 3359 remote_image.go:71] ListImages with filter nil from image service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded * Nov 17 06:09:08 minikube kubelet[3359]: E1117 06:08:45.120238 3359 remote_runtime.go:277] ListContainers with filter &ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},} from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded * Nov 17 06:09:13 minikube kubelet[3359]: E1117 06:09:11.461579 3359 kuberuntime_container.go:340] getKubeletContainers failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded * Nov 17 06:09:20 minikube kubelet[3359]: E1117 06:09:20.296523 3359 kuberuntime_sandbox.go:210] ListPodSandbox failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded * Nov 17 06:09:26 minikube kubelet[3359]: E1117 06:09:17.555926 3359 kubelet.go:2174] Container runtime sanity check failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded * Nov 17 06:09:28 minikube kubelet[3359]: E1117 06:09:17.663751 3359 generic.go:205] GenericPLEG: Unable to retrieve pods: rpc error: code = DeadlineExceeded desc = context deadline exceeded * Nov 17 06:10:06 minikube kubelet[3359]: E1117 06:09:26.723862 3359 kuberuntime_container.go:340] getKubeletContainers failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded * Nov 17 06:10:17 minikube kubelet[3359]: E1117 06:10:17.215441 3359 kuberuntime_image.go:100] ListImages failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded * Nov 17 06:10:17 minikube kubelet[3359]: W1117 06:10:17.237992 3359 image_gc_manager.go:192] [imageGCManager] Failed to update image list: rpc error: code = DeadlineExceeded desc = context deadline exceeded * Nov 17 06:10:17 minikube kubelet[3359]: E1117 06:09:43.133493 3359 container_manager_linux.go:89] Unable to get docker version: operation timeout: context deadline exceeded * Nov 17 06:10:17 minikube kubelet[3359]: E1117 06:09:26.755158 3359 kubelet_pods.go:1027] Error listing containers: &status.statusError{Code:4, Message:"context deadline exceeded", Details:[]*any.Any(nil), XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0} * Nov 17 06:10:17 minikube kubelet[3359]: E1117 06:10:17.448332 3359 kubelet.go:1990] Failed cleaning pods: rpc error: code = DeadlineExceeded desc = context deadline exceeded * Nov 17 06:10:17 
minikube kubelet[3359]: I1117 06:10:17.465374 3359 kubelet.go:1839] skipping pod synchronization - [container runtime is down, PLEG is not healthy: pleg was last seen active 6m59.993117666s ago; threshold is 3m0s] * Nov 17 06:10:17 minikube kubelet[3359]: E1117 06:10:17.473811 3359 controller.go:170] failed to update node lease, error: Put https://localhost:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube?timeout=10s: write tcp 127.0.0.1:51598->127.0.0.1:8443: use of closed network connection * Nov 17 06:10:17 minikube kubelet[3359]: E1117 06:10:17.473314 3359 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "minikube": Get https://localhost:8443/api/v1/nodes/minikube?timeout=10s: context deadline exceeded (Client.Timeout exceeded while awaiting headers) * Nov 17 06:10:17 minikube kubelet[3359]: E1117 06:10:17.481461 3359 controller.go:170] failed to update node lease, error: Put https://localhost:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube?timeout=10s: write tcp 127.0.0.1:51598->127.0.0.1:8443: use of closed network connection * Nov 17 06:10:17 minikube kubelet[3359]: E1117 06:10:17.481718 3359 controller.go:170] failed to update node lease, error: Put https://localhost:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube?timeout=10s: write tcp 127.0.0.1:51598->127.0.0.1:8443: use of closed network connection * Nov 17 06:10:17 minikube kubelet[3359]: E1117 06:10:17.481889 3359 controller.go:170] failed to update node lease, error: Put https://localhost:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube?timeout=10s: write tcp 127.0.0.1:51598->127.0.0.1:8443: use of closed network connection * Nov 17 06:10:17 minikube kubelet[3359]: E1117 06:10:17.482112 3359 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "minikube": Get https://localhost:8443/api/v1/nodes/minikube?timeout=10s: write tcp 127.0.0.1:51598->127.0.0.1:8443: use of closed network connection * Nov 17 06:10:17 minikube kubelet[3359]: E1117 06:10:17.482333 3359 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "minikube": Get https://localhost:8443/api/v1/nodes/minikube?timeout=10s: write tcp 127.0.0.1:51598->127.0.0.1:8443: use of closed network connection * Nov 17 06:10:17 minikube kubelet[3359]: E1117 06:10:17.482525 3359 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "minikube": Get https://localhost:8443/api/v1/nodes/minikube?timeout=10s: write tcp 127.0.0.1:51598->127.0.0.1:8443: use of closed network connection * Nov 17 06:10:17 minikube kubelet[3359]: E1117 06:10:17.482606 3359 kubelet_node_status.go:375] Unable to update node status: update node status exceeds retry count * Nov 17 06:10:17 minikube kubelet[3359]: I1117 06:10:17.481940 3359 controller.go:105] failed to update lease using latest lease, fallback to ensure lease, err: failed 5 attempts to update node lease * Nov 17 06:10:17 minikube kubelet[3359]: E1117 06:10:17.484545 3359 controller.go:135] failed to ensure node lease exists, will retry in 200ms, error: Get https://localhost:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube?timeout=10s: write tcp 127.0.0.1:51598->127.0.0.1:8443: use of closed network connection * Nov 17 06:10:17 minikube kubelet[3359]: E1117 06:10:17.469822 3359 remote_image.go:87] ImageStatus "k8s.gcr.io/pause:3.1" from image service failed: rpc error: code 
= DeadlineExceeded desc = context deadline exceeded * Nov 17 06:10:17 minikube kubelet[3359]: E1117 06:10:17.514876 3359 kuberuntime_image.go:85] ImageStatus for image {"k8s.gcr.io/pause:3.1"} failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded * Nov 17 06:10:17 minikube kubelet[3359]: E1117 06:10:17.523415 3359 event.go:246] Unable to write event: 'Patch https://localhost:8443/api/v1/namespaces/kube-system/events/storage-provisioner.15d7ddde7389f386: stream error: stream ID 539; INTERNAL_ERROR' (may retry after sleeping) * Nov 17 06:10:17 minikube kubelet[3359]: W1117 06:10:17.534861 3359 status_manager.go:545] Failed to update status for pod "coredns-5644d7b6d9-zfl4t_kube-system(a1d88b03-426f-4b26-8fa2-36fa42fbfe2e)": failed to patch status "{\"status\":{\"$setElementOrder/conditions\":[{\"type\":\"Initialized\"},{\"type\":\"Ready\"},{\"type\":\"ContainersReady\"},{\"type\":\"PodScheduled\"}],\"conditions\":[{\"lastTransitionTime\":\"2019-11-17T06:03:17Z\",\"message\":null,\"reason\":null,\"status\":\"True\",\"type\":\"Ready\"},{\"lastTransitionTime\":\"2019-11-17T06:03:17Z\",\"message\":null,\"reason\":null,\"status\":\"True\",\"type\":\"ContainersReady\"}],\"containerStatuses\":[{\"containerID\":\"docker://3c89349465804fb816bfaa668e07a95a2bf5bcc9351dbf3dc3e6d692b9934039\",\"image\":\"k8s.gcr.io/coredns:1.6.2\",\"imageID\":\"docker-pullable://k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5\",\"lastState\":{\"terminated\":{\"containerID\":\"docker://071b34eb0f59a66426c841754fb0e96755b1834e7f2a99521320b7e13c52d436\",\"exitCode\":0,\"finishedAt\":\"2019-11-17T05:58:22Z\",\"reason\":\"Completed\",\"startedAt\":\"2019-11-17T05:45:56Z\"}},\"name\":\"coredns\",\"ready\":true,\"restartCount\":22,\"started\":true,\"state\":{\"running\":{\"startedAt\":\"2019-11-17T06:02:24Z\"}}}]}}" for pod "kube-system"/"coredns-5644d7b6d9-zfl4t": Patch https://localhost:8443/api/v1/namespaces/kube-system/pods/coredns-5644d7b6d9-zfl4t/status: read tcp 127.0.0.1:51598->127.0.0.1:8443: use of closed network connection * Nov 17 06:10:17 minikube kubelet[3359]: I1117 06:10:17.614037 3359 kubelet.go:1839] skipping pod synchronization - [container runtime is down, PLEG is not healthy: pleg was last seen active 7m0.14840845s ago; threshold is 3m0s] * Nov 17 06:10:17 minikube kubelet[3359]: I1117 06:10:17.844914 3359 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m0.379261584s ago; threshold is 3m0s * Nov 17 06:10:18 minikube kubelet[3359]: E1117 06:10:18.098808 3359 kubelet.go:1274] Container garbage collection failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded * Nov 17 06:10:18 minikube kubelet[3359]: E1117 06:10:18.333704 3359 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 75.75.75.75 75.75.76.76 2001:558:feed::2 * Nov 17 06:10:18 minikube kubelet[3359]: E1117 06:10:18.334562 3359 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 75.75.75.75 75.75.76.76 2001:558:feed::2 * Nov 17 06:10:18 minikube kubelet[3359]: E1117 06:10:18.348587 3359 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 75.75.75.75 75.75.76.76 2001:558:feed::2 * Nov 17 06:10:18 minikube kubelet[3359]: E1117 06:10:18.355513 3359 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 75.75.75.75 75.75.76.76 2001:558:feed::2 * Nov 17 06:10:18 minikube kubelet[3359]: E1117 06:10:18.363530 3359 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 75.75.75.75 75.75.76.76 2001:558:feed::2 * Nov 17 06:10:18 minikube kubelet[3359]: E1117 06:10:18.364226 3359 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 75.75.75.75 75.75.76.76 2001:558:feed::2 * Nov 17 06:10:18 minikube kubelet[3359]: E1117 06:10:18.392846 3359 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 75.75.75.75 75.75.76.76 2001:558:feed::2 * Nov 17 06:10:18 minikube kubelet[3359]: E1117 06:10:18.409518 3359 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 75.75.75.75 75.75.76.76 2001:558:feed::2 * Nov 17 06:10:18 minikube kubelet[3359]: E1117 06:10:18.414593 3359 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 75.75.75.75 75.75.76.76 2001:558:feed::2 * Nov 17 06:10:18 minikube kubelet[3359]: W1117 06:10:18.955557 3359 container.go:409] Failed to create summary reader for "/kubepods/besteffort/pod0729bc2f-c0de-4305-b2f6-046cbe7f58f6/48962cbf47c896c1c6a7b2fbd61d5d4698edbe7ae71ae2ae3690d42ca438fa96": none of the resources are being tracked. * Nov 17 06:10:21 minikube kubelet[3359]: W1117 06:10:21.244077 3359 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/posts-77486bffcb-wl67h through plugin: invalid network status for * Nov 17 06:10:21 minikube kubelet[3359]: E1117 06:10:21.500976 3359 pod_workers.go:191] Error syncing pod 0729bc2f-c0de-4305-b2f6-046cbe7f58f6 ("posts-77486bffcb-wl67h_default(0729bc2f-c0de-4305-b2f6-046cbe7f58f6)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 10s restarting failed container=posts pod=posts-77486bffcb-wl67h_default(0729bc2f-c0de-4305-b2f6-046cbe7f58f6)" * Nov 17 06:10:22 minikube kubelet[3359]: E1117 06:10:22.751400 3359 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 75.75.75.75 75.75.76.76 2001:558:feed::2 * Nov 17 06:10:23 minikube kubelet[3359]: W1117 06:10:23.203251 3359 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/posts-77486bffcb-wl67h through plugin: invalid network status for * Nov 17 06:10:23 minikube kubelet[3359]: E1117 06:10:23.413063 3359 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 75.75.75.75 75.75.76.76 2001:558:feed::2 * * ==> storage-provisioner [7efc38d3407e] <== * * ==> storage-provisioner [fa0d2d0f2df2] <== * F1117 06:02:53.590465 1 main.go:37] Error getting server version: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
pnisbettmtc commented 4 years ago

Update - I tried deploying the same set of apps on a Mac and the same thing happens without me doing anything. It just crashes on its own after a short, indeterminate amount of time.

I start up minikube and run kubectl get all. I see 3 pods: 1 is MySQL and 2 are very small Java apps that try to connect to MySQL. All are running. Under services there are 4 services: 3 NodePorts and the Kubernetes ClusterIP. There are 3 deployments, all ready. I walk away from the computer for 10 minutes, come back, and run kubectl get all again - I get "tls handshake timeout". I run minikube status and get:

host: running
kubelet: running
apiserver: Error
kubeconfig: Configured

That is three different environments in which this technology fails in the same way, and it fails without any action on my part. It just runs for a while and then crashes all on its own.

On the Apple machine I installed it via brew: Kubernetes v1.16.2 on Docker 18.0.9.

This is unusable at this point. Maybe I'll try it again after the bugs are fixed in a few months or a year, or maybe not at all. Frustrating, because it looked like it had potential.

tstromberg commented 4 years ago

Thanks! It appears that your VM keeps running out of memory:

[Nov17 06:10] dockerd invoked oom-killer: gfp_mask=0x14280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO), nodemask=(null), order=0, oom_score_adj=-999

also:

[ +0.000161] Out of memory: Kill process 4963 (java) score 1138 or sacrifice child
[ +0.000024] Killed process 4963 (java) total-vm:2008440kB, anon-rss:275312kB, file-rss:0kB, shmem-rss:0kB

The default VM has only 2GB of memory, which this pod is likely pushing past:

24fa5be67c6ac cdavisafc/cloudnative-statelessness-connectionsposts-stateful@sha256:1eb63116c784ffd30a3bb4c77ba3bebdf177abde23971fd6f06314ec78c9ce79 7 minutes ago Running connectionsposts 4 2ca616fcaca0f

You will need a bigger VM to use minikube with this application. Try removing your old VM using minikube delete, and then either persistently tell minikube to use more memory using:

minikube config set memory 8192

Or pass --memory 8192 to minikube start. Please let me know if this helps!
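
For reference, the whole sequence on a fresh VM (a sketch assembled from the commands above, assuming the virtualbox driver used elsewhere in this thread) would look like:

minikube delete
minikube config set memory 8192
minikube start --vm-driver virtualbox

minikube config set memory persists the default for any cluster created later, while --memory 8192 only applies to that particular minikube start.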

pnisbettmtc commented 4 years ago

OK. Thanks for looking at it. Two of the machines are laptops and only have 8GB of memory. Even increasing the VM to 4GB on those machines is a push. One of the pods is a MySQL server with a small amount of data on it. The other two are really small Java apps that pull a handful of rows from that DB. This seems like a pretty light workload.

Do you know what the memory footprint is for an empty pod with nothing running on it?

Thanks.

pnisbettmtc commented 4 years ago

Hi. I tried it on a computer with 32 GB and am getting the same error(s). I did minikube delete, then ran minikube start --vm-driver "virtualbox" --memory 8192. One of the services, cookbook-deployment-posts, worked initially, then after a restart would not start. Another service, cookbook-deployment-connections, timed out after 15 minutes.

This is the output from kubectl describe node. I'll paste the output from minikube logs below that.

C:\Users\pnisbe\cloudnative\cloudnative-abundantsunshine\cloudnative-statelessness>kubectl describe node
Name:               minikube
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 20 Nov 2019 12:35:11 -0800
Taints:             <none>
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Wed, 20 Nov 2019 13:33:45 -0800   Wed, 20 Nov 2019 12:35:06 -0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Wed, 20 Nov 2019 13:33:45 -0800   Wed, 20 Nov 2019 12:35:06 -0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Wed, 20 Nov 2019 13:33:45 -0800   Wed, 20 Nov 2019 12:35:06 -0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Wed, 20 Nov 2019 13:33:45 -0800   Wed, 20 Nov 2019 12:35:06 -0800   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.99.105
  Hostname:    minikube
Capacity:
 cpu:                2
 ephemeral-storage:  17784772Ki
 hugepages-2Mi:      0
 memory:             8163932Ki
 pods:               110
Allocatable:
 cpu:                2
 ephemeral-storage:  17784772Ki
 hugepages-2Mi:      0
 memory:             8163932Ki
 pods:               110
System Info:
 Machine ID:                 1132e5770f3f4c868d59effa0accbd3f
 System UUID:                1ffcb2be-6765-40e2-a476-052b5a2c9a76
 Boot ID:                    33290324-0476-4197-b7f9-41ea17252987
 Kernel Version:             4.19.76
 OS Image:                   Buildroot 2019.02.6
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://18.9.9
 Kubelet Version:            v1.16.2
 Kube-Proxy Version:         v1.16.2
Non-terminated Pods:         (15 in total)
  Namespace                  Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                  ----                                          ------------  ----------  ---------------  -------------  ---
  default                    connections-589dff75fc-l9zsc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         36m
  default                    connectionsposts-7689845fd8-2whvf             0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
  default                    mysql-5988544dd4-vb7xz                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         57m
  default                    posts-ddd9f5767-d7b26                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         36m
  kube-system                coredns-5644d7b6d9-4q8zd                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     59m
  kube-system                coredns-5644d7b6d9-gnlhk                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     59m
  kube-system                etcd-minikube                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         58m
  kube-system                kube-addon-manager-minikube                   5m (0%)       0 (0%)      50Mi (0%)        0 (0%)         59m
  kube-system                kube-apiserver-minikube                       250m (12%)    0 (0%)      0 (0%)           0 (0%)         58m
  kube-system                kube-controller-manager-minikube              200m (10%)    0 (0%)      0 (0%)           0 (0%)         26m
  kube-system                kube-proxy-52fps                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         59m
  kube-system                kube-scheduler-minikube                       100m (5%)     0 (0%)      0 (0%)           0 (0%)         58m
  kube-system                storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         59m
  kubernetes-dashboard       dashboard-metrics-scraper-76585494d8-pgds7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         59m
  kubernetes-dashboard       kubernetes-dashboard-57f4cb4545-pd7fs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         59m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                755m (37%)  0 (0%)
  memory             190Mi (2%)  340Mi (4%)
  ephemeral-storage  0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                From                  Message
  ----    ------                   ----               ----                  -------
  Normal  NodeHasSufficientMemory  59m (x8 over 59m)  kubelet, minikube     Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    59m (x8 over 59m)  kubelet, minikube     Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     59m (x7 over 59m)  kubelet, minikube     Node minikube status is now: NodeHasSufficientPID
  Normal  Starting                 59m                kube-proxy, minikube  Starting kube-proxy.
  Normal  Starting                 26m                kubelet, minikube     Starting kubelet.
  Normal  NodeHasSufficientMemory  26m (x8 over 26m)  kubelet, minikube     Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    26m (x7 over 26m)  kubelet, minikube     Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     26m (x8 over 26m)  kubelet, minikube     Node minikube status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  26m                kubelet, minikube     Updated Node Allocatable limit across pods
  Normal  Starting                 26m                kube-proxy, minikube  Starting kube-proxy.
  Normal  Starting                 21m                kubelet, minikube     Starting kubelet.
  Normal  NodeAllocatableEnforced  21m                kubelet, minikube     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet, minikube     Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    21m (x7 over 21m)  kubelet, minikube     Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     21m (x8 over 21m)  kubelet, minikube     Node minikube status is now: NodeHasSufficientPID
  Normal  Starting                 20m                kube-proxy, minikube  Starting kube-proxy.
C:\Users\pnisbe\cloudnative\cloudnative-abundantsunshine\cloudnative-statelessness>kubectl get all
NAME                                    READY   STATUS             RESTARTS   AGE
pod/connections-589dff75fc-l9zsc        0/1     CrashLoopBackOff   17         38m
pod/connectionsposts-7689845fd8-2whvf   1/1     Running            1          26m
pod/mysql-5988544dd4-vb7xz              1/1     Running            2          58m
pod/posts-ddd9f5767-d7b26               0/1     CrashLoopBackOff   17         38m

NAME                           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/connections-svc        NodePort    10.110.175.205   <none>        80:30133/TCP     38m
service/connectionsposts-svc   NodePort    10.100.61.194    <none>        80:32017/TCP     26m
service/kubernetes             ClusterIP   10.96.0.1        <none>        443/TCP          61m
service/mysql-svc              NodePort    10.110.134.75    <none>        3306:31014/TCP   58m
service/posts-svc              NodePort    10.111.28.164    <none>        80:30470/TCP     38m

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/connections        0/1     1            0           38m
deployment.apps/connectionsposts   1/1     1            1           26m
deployment.apps/mysql              1/1     1            1           58m
deployment.apps/posts              0/1     1            0           38m

NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/connections-589dff75fc        1         1         0       38m
replicaset.apps/connectionsposts-7689845fd8   1         1         1       26m
replicaset.apps/mysql-5988544dd4              1         1         1       58m
replicaset.apps/posts-ddd9f5767               1         1         0       38m
C:\Users\pnisbe\cloudnative\cloudnative-abundantsunshine\cloudnative-statelessness>kubectl describe node
Name:               minikube
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 20 Nov 2019 12:35:11 -0800
Taints:             <none>
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Wed, 20 Nov 2019 13:36:46 -0800   Wed, 20 Nov 2019 12:35:06 -0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Wed, 20 Nov 2019 13:36:46 -0800   Wed, 20 Nov 2019 12:35:06 -0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Wed, 20 Nov 2019 13:36:46 -0800   Wed, 20 Nov 2019 12:35:06 -0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Wed, 20 Nov 2019 13:36:46 -0800   Wed, 20 Nov 2019 12:35:06 -0800   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.99.105
  Hostname:    minikube
Capacity:
 cpu:                2
 ephemeral-storage:  17784772Ki
 hugepages-2Mi:      0
 memory:             8163932Ki
 pods:               110
Allocatable:
 cpu:                2
 ephemeral-storage:  17784772Ki
 hugepages-2Mi:      0
 memory:             8163932Ki
 pods:               110
System Info:
 Machine ID:                 1132e5770f3f4c868d59effa0accbd3f
 System UUID:                1ffcb2be-6765-40e2-a476-052b5a2c9a76
 Boot ID:                    33290324-0476-4197-b7f9-41ea17252987
 Kernel Version:             4.19.76
 OS Image:                   Buildroot 2019.02.6
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://18.9.9
 Kubelet Version:            v1.16.2
 Kube-Proxy Version:         v1.16.2
Non-terminated Pods:         (15 in total)
  Namespace                  Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                  ----                                          ------------  ----------  ---------------  -------------  ---
  default                    connections-589dff75fc-l9zsc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         39m
  default                    connectionsposts-7689845fd8-2whvf             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
  default                    mysql-5988544dd4-vb7xz                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         59m
  default                    posts-ddd9f5767-d7b26                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         38m
  kube-system                coredns-5644d7b6d9-4q8zd                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     61m
  kube-system                coredns-5644d7b6d9-gnlhk                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     61m
  kube-system                etcd-minikube                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         60m
  kube-system                kube-addon-manager-minikube                   5m (0%)       0 (0%)      50Mi (0%)        0 (0%)         61m
  kube-system                kube-apiserver-minikube                       250m (12%)    0 (0%)      0 (0%)           0 (0%)         60m
  kube-system                kube-controller-manager-minikube              200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
  kube-system                kube-proxy-52fps                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         61m
  kube-system                kube-scheduler-minikube                       100m (5%)     0 (0%)      0 (0%)           0 (0%)         60m
  kube-system                storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         61m
  kubernetes-dashboard       dashboard-metrics-scraper-76585494d8-pgds7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         61m
  kubernetes-dashboard       kubernetes-dashboard-57f4cb4545-pd7fs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         61m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                755m (37%)  0 (0%)
  memory             190Mi (2%)  340Mi (4%)
  ephemeral-storage  0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                From                  Message
  ----    ------                   ----               ----                  -------
  Normal  NodeHasSufficientMemory  61m (x8 over 61m)  kubelet, minikube     Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    61m (x8 over 61m)  kubelet, minikube     Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     61m (x7 over 61m)  kubelet, minikube     Node minikube status is now: NodeHasSufficientPID
  Normal  Starting                 61m                kube-proxy, minikube  Starting kube-proxy.
  Normal  Starting                 28m                kubelet, minikube     Starting kubelet.
  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet, minikube     Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    28m (x7 over 28m)  kubelet, minikube     Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     28m (x8 over 28m)  kubelet, minikube     Node minikube status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  28m                kubelet, minikube     Updated Node Allocatable limit across pods
  Normal  Starting                 28m                kube-proxy, minikube  Starting kube-proxy.
  Normal  Starting                 23m                kubelet, minikube     Starting kubelet.
  Normal  NodeAllocatableEnforced  23m                kubelet, minikube     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet, minikube     Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    23m (x7 over 23m)  kubelet, minikube     Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     23m (x8 over 23m)  kubelet, minikube     Node minikube status is now: NodeHasSufficientPID
  Normal  Starting                 23m                kube-proxy, minikube  Starting kube-proxy.

minikube logs output:

...
...
*
* ==> container status <==
* CONTAINER           IMAGE                                                                                                                                   CREATED              STATE               NAME                        ATTEMPT             POD ID
* 1c86b7b52d806       cdavisafc/cloudnative-statelessness-connections@sha256:9405807d18ad427c636a26138b78f9195c1920558f391fdd53b12b62b2f27771                 56 seconds ago       Exited              connections                 16                  90ccceb1da205
* 82bbdcbf3add2       cdavisafc/cloudnative-statelessness-posts@sha256:351fba985427c6722475bd6a930d83d0fcb9965c5d1bf43e2aac898c9e6821cb                       About a minute ago   Exited              posts                       16                  761287e7b4bb2
* c3767dac48559       6802d83967b99                                                                                                                           12 minutes ago       Running             kubernetes-dashboard        3                   c51d2e1006324
* c63a55785d2ee       4689081edb103                                                                                                                           12 minutes ago       Running             storage-provisioner         3                   2e65db2cd9e90
* 783b14f62c021       709901356c115                                                                                                                           13 minutes ago       Running             dashboard-metrics-scraper   2                   886058575db0b
* 4ed40de4e832a       6802d83967b99                                                                                                                           13 minutes ago       Exited              kubernetes-dashboard        2                   c51d2e1006324
* c418726dd39d9       cdavisafc/cloudnative-statelessness-connectionsposts-stateful@sha256:1eb63116c784ffd30a3bb4c77ba3bebdf177abde23971fd6f06314ec78c9ce79   13 minutes ago       Running             connectionsposts            1                   57aa4bdca25c4
* 9225fe47e042c       8454cbe08dc9f                                                                                                                           13 minutes ago       Running             kube-proxy                  2                   b605ec221306e
* 1d7c16c38ca62       bf261d1579144                                                                                                                           13 minutes ago       Running             coredns                     2                   285bcc596080a
* 9b6df92a23814       bf261d1579144                                                                                                                           13 minutes ago       Running             coredns                     2                   c96860b8e2e5b
* 294b2df0fb4ce       4689081edb103                                                                                                                           13 minutes ago       Exited              storage-provisioner         2                   2e65db2cd9e90
* 4fbd5f082acaf       6bb891430fb6e                                                                                                                           13 minutes ago       Running             mysql                       2                   53adc8a6ba2fd
* b6405881ffba6       c2c9a0406787c                                                                                                                           13 minutes ago       Running             kube-apiserver              2                   458bc70122cbe
* abac6ec681c11       6e4bffa46d70b                                                                                                                           13 minutes ago       Running             kube-controller-manager     1                   3ed6fbade5a76
* 2fe023ea0144b       ebac1ae204a2c                                                                                                                           13 minutes ago       Running             kube-scheduler              2                   a51e0f33567f8
* 4a0fbb8d2d69d       bd12a212f9dcb                                                                                                                           13 minutes ago       Running             kube-addon-manager          2                   e472c627287ef
* 8bbb81cc0696a       b2756210eeabf                                                                                                                           13 minutes ago       Running             etcd                        2                   4edfdffcca98b
* 7e3707fe9a243       cdavisafc/cloudnative-statelessness-connectionsposts-stateful@sha256:1eb63116c784ffd30a3bb4c77ba3bebdf177abde23971fd6f06314ec78c9ce79   16 minutes ago       Exited              connectionsposts            0                   8588b27fa7084
* 896a98ce2f9d4       709901356c115                                                                                                                           18 minutes ago       Exited              dashboard-metrics-scraper   1                   3d9d78abc8ad1
* 924f6dc580888       bf261d1579144                                                                                                                           18 minutes ago       Exited              coredns                     1                   4e5542272180d
* 041a9a3573935       bf261d1579144                                                                                                                           18 minutes ago       Exited              coredns                     1                   b9ba1d1adfa03
* ec769c83ee93c       8454cbe08dc9f                                                                                                                           18 minutes ago       Exited              kube-proxy                  1                   1c2f36d5df094
* 8c58ce625906d       6bb891430fb6e                                                                                                                           18 minutes ago       Exited              mysql                       1                   12660b0677675
* 7f2bec43f2724       bd12a212f9dcb                                                                                                                           18 minutes ago       Exited              kube-addon-manager          1                   38ede4257e3b5
* 5f7852a377871       ebac1ae204a2c                                                                                                                           18 minutes ago       Exited              kube-scheduler              1                   b9812f3433dd8
* 2b81d9d77e22f       b2756210eeabf                                                                                                                           18 minutes ago       Exited              etcd                        1                   a4f68377b6477
* 419be3aece395       c2c9a0406787c                                                                                                                           18 minutes ago       Exited              kube-apiserver              1                   0b06af7d8c269
* 138ded634d4d7       6e4bffa46d70b                                                                                                                           18 minutes ago       Exited              kube-controller-manager     0                   6dbd36efb2e17
*
* ==> coredns ["041a9a357393"] <==
...
* E1120 21:10:24.250816       1 reflector.go:270] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to watch *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=3312&timeout=9m18s&timeoutSeconds=558&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
* [INFO] SIGTERM: Shutting down servers then terminating
*
* ==> coredns ["1d7c16c38ca6"] <==
...
* E1120 21:14:18.206011       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
*
* ==> coredns ["924f6dc58088"] <==
..
* [INFO] SIGTERM: Shutting down servers then terminating
*
* ==> coredns ["9b6df92a2381"] <==
...
* E1120 21:14:18.123389       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
*
* ==> dmesg <==
...
* [ +15.392989] kauditd_printk_skb: 2 callbacks suppressed
* [Nov20 21:26] kauditd_printk_skb: 2 callbacks suppressed
*
* ==> kernel <==
*  21:27:00 up 14 min,  0 users,  load average: 1.19, 0.91, 0.81
* Linux minikube 4.19.76 #1 SMP Tue Oct 29 14:56:42 PDT 2019 x86_64 GNU/Linux
* PRETTY_NAME="Buildroot 2019.02.6"
*
* ==> kube-addon-manager ["4a0fbb8d2d69"] <==
...
* ==> kube-addon-manager ["7f2bec43f272"] <==
...

*
* ==> kube-apiserver ["419be3aece39"] <==
...
* I1120 21:10:02.807148       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
* I1120 21:10:24.245603       1 controller.go:182] Shutting down kubernetes service endpoint reconciler
* I1120 21:10:24.245817       1 controller.go:87] Shutting down OpenAPI AggregationController
* I1120 21:10:24.245870       1 controller.go:122] Shutting down OpenAPI controller
* I1120 21:10:24.245883       1 nonstructuralschema_controller.go:203] Shutting down NonStructuralSchemaConditionController
* I1120 21:10:24.245893       1 establishing_controller.go:84] Shutting down EstablishingController
* I1120 21:10:24.245945       1 naming_controller.go:299] Shutting down NamingConditionController
* I1120 21:10:24.245956       1 customresource_discovery_controller.go:219] Shutting down DiscoveryController
* I1120 21:10:24.245981       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
* I1120 21:10:24.245993       1 apiapproval_controller.go:197] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
* I1120 21:10:24.246001       1 available_controller.go:395] Shutting down AvailableConditionController
* I1120 21:10:24.246012       1 autoregister_controller.go:164] Shutting down autoregister controller
* I1120 21:10:24.246020       1 apiservice_controller.go:106] Shutting down APIServiceRegistrationController
* I1120 21:10:24.246035       1 crd_finalizer.go:286] Shutting down CRDFinalizer
* I1120 21:10:24.248961       1 secure_serving.go:167] Stopped listening on [::]:8443
* E1120 21:10:24.258184       1 controller.go:185] Get https://[::1]:8443/api/v1/namespaces/default/endpoints/kubernetes: dial tcp [::1]:8443: connect: connection refused
*
* ==> kube-apiserver ["b6405881ffba"] <==
...
* I1120 21:14:00.038950       1 controller.go:606] quota admission added evaluator for: endpoints
*
* ==> kube-controller-manager ["138ded634d4d"] <==
...
https://localhost:8443/apis/storage.k8s.io/v1/volumeattachments?allowWatchBookmarks=true&resourceVersion=2968&timeout=7m37s&timeoutSeconds=457&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
* E1120 21:10:24.254646       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.Ingress: Get https://localhost:8443/apis/extensions/v1beta1/ingresses?allowWatchBookmarks=true&resourceVersion=2968&timeout=6m13s&timeoutSeconds=373&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
*
* ==> kube-controller-manager ["abac6ec681c1"] <==
...
* I1120 21:14:10.159860       1 shared_informer.go:204] Caches are synced for resource quota
* I1120 21:14:10.162696       1 shared_informer.go:204] Caches are synced for endpoint
* I1120 21:14:10.175845       1 shared_informer.go:204] Caches are synced for resource quota
* I1120 21:14:10.197452       1 shared_informer.go:204] Caches are synced for garbage collector
* I1120 21:14:10.197475       1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
* I1120 21:14:10.245420       1 shared_informer.go:204] Caches are synced for garbage collector
* I1120 21:14:10.247799       1 shared_informer.go:204] Caches are synced for attach detach
*
* ==> kube-proxy ["9225fe47e042"] <==
...
* I1120 21:13:48.951738       1 shared_informer.go:204] Caches are synced for endpoints config
* I1120 21:13:48.951825       1 shared_informer.go:204] Caches are synced for service config
*
* ==> kube-proxy ["ec769c83ee93"] <==
...

* I1120 21:08:38.017169       1 shared_informer.go:204] Caches are synced for service config
* I1120 21:08:38.017216       1 shared_informer.go:204] Caches are synced for endpoints config
*
* ==> kube-scheduler ["2fe023ea0144"] <==
* I1120 21:13:38.580544       1 serving.go:319] Generated self-signed cert in-memory
...
* I1120 21:14:00.043102       1 leaderelection.go:251] successfully acquired lease kube-system/kube-scheduler
*
* ==> kube-scheduler ["5f7852a37787"] <==
* I1120 21:08:29.533878       1 serving.go:319] Generated self-signed cert in-memory
...
* E1120 21:10:24.251921       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=2968&timeout=5m42s&timeoutSeconds=342&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
*
* ==> kubelet <==
* -- Logs begin at Wed 2019-11-20 21:13:03 UTC, end at Wed 2019-11-20 21:27:02 UTC. --
* Nov 20 21:21:10 minikube kubelet[2992]: E1120 21:21:10.721251    2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:21:18 minikube kubelet[2992]: E1120 21:21:18.725538    2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:21:23 minikube kubelet[2992]: E1120 21:21:23.722863    2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:21:30 minikube kubelet[2992]: E1120 21:21:30.723743    2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:21:34 minikube kubelet[2992]: E1120 21:21:34.721210    2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:21:41 minikube kubelet[2992]: E1120 21:21:41.720600    2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:21:49 minikube kubelet[2992]: E1120 21:21:49.722852    2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:21:53 minikube kubelet[2992]: E1120 21:21:53.722862    2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:22:04 minikube kubelet[2992]: E1120 21:22:04.722878    2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:22:04 minikube kubelet[2992]: E1120 21:22:04.723227    2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:22:15 minikube kubelet[2992]: E1120 21:22:15.722691    2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:22:15 minikube kubelet[2992]: E1120 21:22:15.724430    2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:22:26 minikube kubelet[2992]: E1120 21:22:26.721238    2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:22:27 minikube kubelet[2992]: E1120 21:22:27.728381    2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:22:40 minikube kubelet[2992]: E1120 21:22:40.722019    2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:22:41 minikube kubelet[2992]: E1120 21:22:41.720589    2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:22:54 minikube kubelet[2992]: E1120 21:22:54.721649    2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:22:56 minikube kubelet[2992]: E1120 21:22:56.723021    2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:23:07 minikube kubelet[2992]: E1120 21:23:07.725189    2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:23:10 minikube kubelet[2992]: E1120 21:23:10.721639    2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:23:19 minikube kubelet[2992]: E1120 21:23:19.722636    2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:23:25 minikube kubelet[2992]: E1120 21:23:25.723072    2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:23:31 minikube kubelet[2992]: E1120 21:23:31.721563    2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:23:40 minikube kubelet[2992]: E1120 21:23:40.721843    2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:23:46 minikube kubelet[2992]: E1120 21:23:46.722027    2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:23:52 minikube kubelet[2992]: E1120 21:23:52.724589    2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:23:58 minikube kubelet[2992]: E1120 21:23:58.721728    2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:24:03 minikube kubelet[2992]: E1120 21:24:03.722293    2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:24:10 minikube kubelet[2992]: E1120 21:24:10.721321    2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:24:18 minikube kubelet[2992]: E1120 21:24:18.721816    2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:24:23 minikube kubelet[2992]: E1120 21:24:23.724293    2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:24:29 minikube kubelet[2992]: E1120 21:24:29.721567    2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:24:34 minikube kubelet[2992]: E1120 21:24:34.722722    2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:24:43 minikube kubelet[2992]: E1120 21:24:43.722199    2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:24:49 minikube kubelet[2992]: E1120 21:24:49.723104    2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:24:55 minikube kubelet[2992]: E1120 21:24:55.723681    2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:25:00 minikube kubelet[2992]: E1120 21:25:00.723777    2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:25:07 minikube kubelet[2992]: E1120 21:25:07.723381    2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:25:11 minikube kubelet[2992]: E1120 21:25:11.723051    2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:25:20 minikube kubelet[2992]: E1120 21:25:20.722108    2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:25:24 minikube kubelet[2992]: E1120 21:25:24.724102    2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:25:35 minikube kubelet[2992]: E1120 21:25:35.723806    2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:25:36 minikube kubelet[2992]: E1120 21:25:36.723952    2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:25:48 minikube kubelet[2992]: E1120 21:25:48.722551    2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:25:51 minikube kubelet[2992]: W1120 21:25:51.882627    2992 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/posts-ddd9f5767-d7b26 through plugin: invalid network status for
* Nov 20 21:25:53 minikube kubelet[2992]: W1120 21:25:53.009014    2992 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/posts-ddd9f5767-d7b26 through plugin: invalid network status for
* Nov 20 21:26:01 minikube kubelet[2992]: W1120 21:26:01.173437    2992 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/posts-ddd9f5767-d7b26 through plugin: invalid network status for
* Nov 20 21:26:01 minikube kubelet[2992]: E1120 21:26:01.201865    2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:26:02 minikube kubelet[2992]: W1120 21:26:02.220494    2992 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/posts-ddd9f5767-d7b26 through plugin: invalid network status for
* Nov 20 21:26:04 minikube kubelet[2992]: W1120 21:26:04.270541    2992 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/connections-589dff75fc-l9zsc through plugin: invalid network status for
* Nov 20 21:26:11 minikube kubelet[2992]: E1120 21:26:11.721745    2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:26:17 minikube kubelet[2992]: W1120 21:26:17.583909    2992 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/connections-589dff75fc-l9zsc through plugin: invalid network status for
* Nov 20 21:26:17 minikube kubelet[2992]: E1120 21:26:17.591916    2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:26:18 minikube kubelet[2992]: W1120 21:26:18.633288    2992 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/connections-589dff75fc-l9zsc through plugin: invalid network status for
* Nov 20 21:26:22 minikube kubelet[2992]: E1120 21:26:22.726827    2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:26:32 minikube kubelet[2992]: E1120 21:26:32.723039    2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:26:37 minikube kubelet[2992]: E1120 21:26:37.723683    2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:26:46 minikube kubelet[2992]: E1120 21:26:46.721733    2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:26:51 minikube kubelet[2992]: E1120 21:26:51.726672    2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:26:58 minikube kubelet[2992]: E1120 21:26:58.722865    2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
*
* ==> kubernetes-dashboard ["4ed40de4e832"] <==
* 2019/11/20 21:13:48 Using namespace: kubernetes-dashboard
* 2019/11/20 21:13:48 Using in-cluster config to connect to apiserver
* 2019/11/20 21:13:48 Using secret token for csrf signing
* 2019/11/20 21:13:48 Initializing csrf token from kubernetes-dashboard-csrf secret
* 2019/11/20 21:13:48 Starting overwatch
* panic: Get https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf: dial tcp 10.96.0.1:443: i/o timeout
*
* goroutine 1 [running]:
* github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc0004d2000)
*       /home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:40 +0x3b4
* github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
*       /home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:65
* github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc000343700)
*       /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:479 +0xc7
* github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc000343700)
*       /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:447 +0x47
* github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
*       /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:528
* main.main()
*       /home/travis/build/kubernetes/dashboard/src/app/backend/dashboard.go:105 +0x212
*
* ==> kubernetes-dashboard ["c3767dac4855"] <==
* 2019/11/20 21:14:37 Starting overwatch
* 2019/11/20 21:14:37 Using namespace: kubernetes-dashboard
* 2019/11/20 21:14:37 Using in-cluster config to connect to apiserver
* 2019/11/20 21:14:37 Using secret token for csrf signing
* 2019/11/20 21:14:37 Initializing csrf token from kubernetes-dashboard-csrf secret
* 2019/11/20 21:14:37 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
* 2019/11/20 21:14:37 Successful initial request to the apiserver, version: v1.16.2
* 2019/11/20 21:14:37 Generating JWE encryption key
* 2019/11/20 21:14:37 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
* 2019/11/20 21:14:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
* 2019/11/20 21:14:38 Initializing JWE encryption key from synchronized object
* 2019/11/20 21:14:38 Creating in-cluster Sidecar client
* 2019/11/20 21:14:38 Successful request to sidecar
* 2019/11/20 21:14:38 Serving insecurely on HTTP port: 9090
*
* ==> storage-provisioner ["294b2df0fb4c"] <==
* F1120 21:14:18.463717       1 main.go:37] Error getting server version: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
*
* ==> storage-provisioner ["c63a55785d2e"] <==

tstromberg commented 4 years ago

Hi. I tried it on a computer with 32 GB and am getting the same error(s).

Do you mind sharing the actual error you received?

Did a minikube delete, then ran minikube start --vm-driver "virtualbox" --memory 8192. One of the services, cookbook-deployment-posts, worked initially, then after a restart would not start.

After a restart of what, the host running minikube, or do you mean minikube start?

I went through your logs and see that it looks like minikube start was run multiple times, which is fine. I also see that your posts container is in CrashLoopBackOff. Do you mind running:

kubectl get po -A
kubectl describe pod <name of your posts pod>

As far as I can tell, the apiserver should be serving data now.
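
If it helps, one way to see why the container keeps restarting is to pull the logs from its previous (crashed) run. A minimal sketch, using the pod name that appears in the kubelet log above; substitute whatever kubectl get po shows on your side:

kubectl get po -A
kubectl describe pod posts-ddd9f5767-d7b26 -n default
kubectl logs posts-ddd9f5767-d7b26 -n default --previous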

Do you know what the memory footprint is for an empty pod with nothing running on it?

No, but I would wager it isn't more than 1MB. The main issue that I saw previously was that the minikube VM only had 2GB allocated to it, but the Java process you were running was asking for 2GB, which doesn't leave any room for Kubernetes.
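
As a rough sketch of what I mean, something like the following should leave the JVM enough headroom inside the VM. The 8192 MB and 512m figures are only examples, and deployment/posts is an assumption based on your pod names, so adjust both to your setup:

minikube delete
minikube start --vm-driver "virtualbox" --memory 8192
kubectl set env deployment/posts JAVA_TOOL_OPTIONS="-Xmx512m"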

Hope this helps. Please let me know what you find out!

sharifelgamal commented 4 years ago

@pnisbettmtc Do you still need help here?

pnisbettmtc commented 4 years ago

Thanks, but I’ve pretty much written off minikube as unusable for the time being.

I spent way too much time just getting three small pods to run, only to have one or two pods run while a third wouldn’t start because the API server crashed. On different runs, different pods wouldn’t start.

Minikube did not work as intended.

medyagh commented 4 years ago

@pnisbettmtc Apologies for the bad experience you had on Windows :( Our Windows integration tests have been broken, and we need to fix them so we can keep a better eye on the Windows user experience.

In the meantime, do you mind trying our new docker vm-driver?

If you have Docker on your Windows machine, you could try:

minikube start --vm-driver=docker
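
For reference, a minimal sequence for trying the docker driver, assuming Docker Desktop is already installed and running; the memory value is just an example:

docker version
minikube delete
minikube start --vm-driver=docker --memory 4096
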
pnisbettmtc commented 4 years ago

I will if I get the time. Thanks. Paul

omeryounus commented 4 years ago

Unable to connect to the server: net/http: TLS handshake timeout