kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Quickstart instructions for kubectl run don't work with Kubernetes 1.16 #5433

Closed nickebbitt closed 5 years ago

nickebbitt commented 5 years ago

The exact command to reproduce the issue:

Based on the Quickstart instructions:

kubectl run hello-minikube --image=k8s.gcr.io/echoserver:1.10 --port=8080

This is using minikube version 1.4.0 with its default Kubernetes version, 1.16.

The above command works after downgrading Kubernetes to 1.15 (i.e. minikube start --kubernetes-version=v1.15.0).

The full output of the command that failed:

WARNING: New generator "deployment/apps.v1beta1" specified, but it isn't available. Falling back to "run/v1".
error: no matches for kind "Deployment" in version "apps/v1beta1"
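For context (not stated explicitly in this thread): Kubernetes 1.16 removed the apps/v1beta1 and apps/v1beta2 APIs, so Deployment is served only from apps/v1 — which is why the deployment generator behind kubectl run no longer finds a matching kind. A possible workaround sketch, assuming the goal is just the quickstart Deployment plus a Service, is to create the objects explicitly instead of relying on the generator:

```shell
# Create the Deployment via `kubectl create deployment`, which uses apps/v1,
# instead of `kubectl run`'s removed apps/v1beta1 generator.
kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.10

# `kubectl run`'s --port flag previously handled exposure; do it explicitly.
kubectl expose deployment hello-minikube --type=NodePort --port=8080
```

Alternatively, staying on an older control plane (minikube start --kubernetes-version=v1.15.0) avoids the API removal entirely, as noted above.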

The output of the minikube logs command:

==> Docker <== -- Logs begin at Sun 2019-09-22 21:10:34 UTC, end at Sun 2019-09-22 21:22:41 UTC. -- Sep 22 21:10:42 minikube dockerd[2396]: time="2019-09-22T21:10:42.719602607Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 22 21:10:42 minikube dockerd[2396]: time="2019-09-22T21:10:42.719623295Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000047f50, CONNECTING" module=grpc Sep 22 21:10:42 minikube dockerd[2396]: time="2019-09-22T21:10:42.719902466Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000047f50, READY" module=grpc Sep 22 21:10:42 minikube dockerd[2396]: time="2019-09-22T21:10:42.752961755Z" level=info msg="Graph migration to content-addressability took 0.00 seconds" Sep 22 21:10:42 minikube dockerd[2396]: time="2019-09-22T21:10:42.753201564Z" level=warning msg="Your kernel does not support cgroup blkio weight" Sep 22 21:10:42 minikube dockerd[2396]: time="2019-09-22T21:10:42.753230672Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Sep 22 21:10:42 minikube dockerd[2396]: time="2019-09-22T21:10:42.753240355Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device" Sep 22 21:10:42 minikube dockerd[2396]: time="2019-09-22T21:10:42.753248379Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device" Sep 22 21:10:42 minikube dockerd[2396]: time="2019-09-22T21:10:42.753259911Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device" Sep 22 21:10:42 minikube dockerd[2396]: time="2019-09-22T21:10:42.753268730Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device" Sep 22 21:10:42 minikube dockerd[2396]: time="2019-09-22T21:10:42.753731561Z" level=info msg="Loading containers: start." 
Sep 22 21:10:42 minikube dockerd[2396]: time="2019-09-22T21:10:42.837972021Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Sep 22 21:10:42 minikube dockerd[2396]: time="2019-09-22T21:10:42.981718387Z" level=info msg="Loading containers: done." Sep 22 21:10:42 minikube dockerd[2396]: time="2019-09-22T21:10:42.998564581Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n" Sep 22 21:10:42 minikube dockerd[2396]: time="2019-09-22T21:10:42.999210605Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH" Sep 22 21:10:43 minikube dockerd[2396]: time="2019-09-22T21:10:43.031103434Z" level=info msg="Docker daemon" commit=039a7df9ba graphdriver(s)=overlay2 version=18.09.9 Sep 22 21:10:43 minikube dockerd[2396]: time="2019-09-22T21:10:43.031259354Z" level=info msg="Daemon has completed initialization" Sep 22 21:10:43 minikube systemd[1]: Started Docker Application Container Engine. 
Sep 22 21:10:43 minikube dockerd[2396]: time="2019-09-22T21:10:43.111381359Z" level=info msg="API listen on /var/run/docker.sock" Sep 22 21:10:43 minikube dockerd[2396]: time="2019-09-22T21:10:43.111458843Z" level=info msg="API listen on [::]:2376" Sep 22 21:12:01 minikube dockerd[2396]: time="2019-09-22T21:12:01.221077186Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n" Sep 22 21:12:01 minikube dockerd[2396]: time="2019-09-22T21:12:01.222096471Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH" Sep 22 21:12:01 minikube dockerd[2396]: time="2019-09-22T21:12:01.275997980Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n" Sep 22 21:12:01 minikube dockerd[2396]: time="2019-09-22T21:12:01.276743658Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH" Sep 22 21:12:01 minikube dockerd[2396]: time="2019-09-22T21:12:01.314734493Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n" Sep 22 21:12:01 minikube dockerd[2396]: time="2019-09-22T21:12:01.315263063Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH" Sep 22 21:12:01 minikube dockerd[2396]: time="2019-09-22T21:12:01.679916585Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n" Sep 22 21:12:01 minikube dockerd[2396]: time="2019-09-22T21:12:01.680433568Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH" Sep 22 21:12:05 minikube 
dockerd[2396]: time="2019-09-22T21:12:05.026670581Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n" Sep 22 21:12:05 minikube dockerd[2396]: time="2019-09-22T21:12:05.027513543Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH" Sep 22 21:12:05 minikube dockerd[2396]: time="2019-09-22T21:12:05.050097736Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n" Sep 22 21:12:05 minikube dockerd[2396]: time="2019-09-22T21:12:05.050667545Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH" Sep 22 21:12:08 minikube dockerd[2396]: time="2019-09-22T21:12:08.410591470Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n" Sep 22 21:12:08 minikube dockerd[2396]: time="2019-09-22T21:12:08.411073491Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH" Sep 22 21:12:08 minikube dockerd[2396]: time="2019-09-22T21:12:08.422545967Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n" Sep 22 21:12:08 minikube dockerd[2396]: time="2019-09-22T21:12:08.423178949Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH" Sep 22 21:12:13 minikube dockerd[2396]: time="2019-09-22T21:12:13.527623920Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n" Sep 22 21:12:13 minikube dockerd[2396]: 
time="2019-09-22T21:12:13.528827684Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH" Sep 22 21:12:14 minikube dockerd[2396]: time="2019-09-22T21:12:14.183166349Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/15a51e44f97da33972c7cf16ba6b8e5757b68cf77e6adcac1dabbcf35cd85e7b/shim.sock" debug=false pid=3455 Sep 22 21:12:14 minikube dockerd[2396]: time="2019-09-22T21:12:14.194823163Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/36890e6de3cc5c3874b1297db9b8a81f0e27caf72b0b9a4091071fc1368cbc0a/shim.sock" debug=false pid=3465 Sep 22 21:12:14 minikube dockerd[2396]: time="2019-09-22T21:12:14.205534195Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4ae4d020727e2cd031051bf4d92da93628f7b65aa382dff3922cc3822690a60a/shim.sock" debug=false pid=3483 Sep 22 21:12:14 minikube dockerd[2396]: time="2019-09-22T21:12:14.215432324Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3adee2b78e0e177c4256390d0f2740e6a40d104c07af3b21e0b8d17f639a9269/shim.sock" debug=false pid=3489 Sep 22 21:12:14 minikube dockerd[2396]: time="2019-09-22T21:12:14.269389918Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/740321fe71cf01cc8228ec071257de8ebe33da32c72ee25923a0fc7356834e92/shim.sock" debug=false pid=3507 Sep 22 21:12:14 minikube dockerd[2396]: time="2019-09-22T21:12:14.578965933Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3dd2228844d8eb4b0d86e93b816a8c33fbdad69820ca7598ecb4ca7ffae528d3/shim.sock" debug=false pid=3674 Sep 22 21:12:14 minikube dockerd[2396]: time="2019-09-22T21:12:14.591171772Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7584f4aedfff38997bdcfcb5436c0837222a134f7c7c69db8976f756a7fd28ba/shim.sock" debug=false pid=3688 Sep 22 21:12:14 minikube dockerd[2396]: time="2019-09-22T21:12:14.593333218Z" 
level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a08da7b221f9086f2bcf8e4935370fac7be045bbef36d0ba8b4510ca5f4a2b35/shim.sock" debug=false pid=3693 Sep 22 21:12:14 minikube dockerd[2396]: time="2019-09-22T21:12:14.615029942Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/04543b0c1c4823adde4dfa91d9d293bd5c1c742dc0dbb8ee7b32faae84406671/shim.sock" debug=false pid=3704 Sep 22 21:12:14 minikube dockerd[2396]: time="2019-09-22T21:12:14.718414445Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b74f2dd18237367528143593df635aef6ef7f7b0185bae58ae455c1ab58e46aa/shim.sock" debug=false pid=3766 Sep 22 21:12:31 minikube dockerd[2396]: time="2019-09-22T21:12:31.628386572Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ac2f38898d56b40b812f02a706aefb87c78eea26be5fb44474e78649f922095b/shim.sock" debug=false pid=4104 Sep 22 21:12:32 minikube dockerd[2396]: time="2019-09-22T21:12:32.020015596Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6acc4cece7e4a1768fe96bbc3333168169497c34567fc0835e4b4dc470adf2f7/shim.sock" debug=false pid=4171 Sep 22 21:12:32 minikube dockerd[2396]: time="2019-09-22T21:12:32.063484700Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6d6f14ad05d03ab576ed90f27457cbe22a08cd1cc17c929db0a25511fe5e1417/shim.sock" debug=false pid=4187 Sep 22 21:12:32 minikube dockerd[2396]: time="2019-09-22T21:12:32.122325328Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/078c3f6ad769a924332a1503a1595aacaf7a339555fa2df28f314c6b1a7ed317/shim.sock" debug=false pid=4201 Sep 22 21:12:32 minikube dockerd[2396]: time="2019-09-22T21:12:32.811463378Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3f89c95730f3768febf3654e169f65ecc2cb9baa9eddf3c3501cf78e691765da/shim.sock" debug=false pid=4358 Sep 22 21:12:33 minikube dockerd[2396]: 
time="2019-09-22T21:12:33.148699592Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4009f65d3ee5f80c89f1efcc7c24db072a412184b97fa137db209769fc06e92e/shim.sock" debug=false pid=4412 Sep 22 21:12:35 minikube dockerd[2396]: time="2019-09-22T21:12:35.169612076Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b5bfe6ea5e510b0c8ec388e01ed8e817daddf334decce05c20f599509797da75/shim.sock" debug=false pid=4526 Sep 22 21:12:35 minikube dockerd[2396]: time="2019-09-22T21:12:35.345098134Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c969f8c957847ba84233a23799136ce1ccc9a2d39e456c4803aca078b30a8792/shim.sock" debug=false pid=4568 Sep 22 21:12:37 minikube dockerd[2396]: time="2019-09-22T21:12:37.936146440Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/97bbe5483acb8e1f952444a6a9a71642cd892daeae0403d73061fd545fb8a450/shim.sock" debug=false pid=4671 Sep 22 21:12:38 minikube dockerd[2396]: time="2019-09-22T21:12:38.002268934Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/14a7f31cbadc91737084b8e95f714fb1be93353866255ec4ecd99aa77a0f64a3/shim.sock" debug=false pid=4691 Sep 22 21:12:38 minikube dockerd[2396]: time="2019-09-22T21:12:38.683858432Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0a1625f5f04a7d5ba1e18b4408cb7f0476dd7eb54b989567bd3a1ab22b154bc9/shim.sock" debug=false pid=4805 Sep 22 21:12:44 minikube dockerd[2396]: time="2019-09-22T21:12:44.052001798Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b5f9d8a7e502381f737912750da3821447e1c422f691f6a73a20eb855a5c3630/shim.sock" debug=false pid=4929

==> container status <==
CONTAINER      IMAGE                                                          CREATED          STATE    NAME                        ATTEMPT  POD ID
b5f9d8a7e5023  kubernetesui/metrics-scraper@sha256:35fcae4fd9232a541a8cb08f2853117ba7231750b75c2cb3b6a58a2aaa57f878  9 minutes ago  Running  dashboard-metrics-scraper  0  97bbe5483acb8
0a1625f5f04a7  6802d83967b99  10 minutes ago  Running  kubernetes-dashboard       0  14a7f31cbadc9
c969f8c957847  4689081edb103  10 minutes ago  Running  storage-provisioner        0  b5bfe6ea5e510
4009f65d3ee5f  bf261d1579144  10 minutes ago  Running  coredns                    0  6d6f14ad05d03
3f89c95730f37  bf261d1579144  10 minutes ago  Running  coredns                    0  6acc4cece7e4a
078c3f6ad769a  c21b0c7400f98  10 minutes ago  Running  kube-proxy                 0  ac2f38898d56b
b74f2dd182373  bd12a212f9dcb  10 minutes ago  Running  kube-addon-manager         0  740321fe71cf0
a08da7b221f90  06a629a7e51cd  10 minutes ago  Running  kube-controller-manager    0  3adee2b78e0e1
04543b0c1c482  b305571ca60a5  10 minutes ago  Running  kube-apiserver             0  4ae4d020727e2
3dd2228844d8e  b2756210eeabf  10 minutes ago  Running  etcd                       0  36890e6de3cc5
7584f4aedfff3  301ddc62b80b1  10 minutes ago  Running  kube-scheduler             0  15a51e44f97da

==> coredns [3f89c95730f3] <==
.:53
2019-09-22T21:12:38.499Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
2019-09-22T21:12:38.499Z [INFO] CoreDNS-1.6.2
2019-09-22T21:12:38.499Z [INFO] linux/amd64, go1.12.8, 795a3eb
CoreDNS-1.6.2
linux/amd64, go1.12.8, 795a3eb
2019-09-22T21:12:42.895Z [INFO] plugin/ready: Still waiting on: "kubernetes"
2019-09-22T21:12:52.895Z [INFO] plugin/ready: Still waiting on: "kubernetes"
2019-09-22T21:13:02.896Z [INFO] plugin/ready: Still waiting on: "kubernetes"
I0922 21:13:03.530997 1 trace.go:82] Trace[1084923015]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-09-22 21:12:33.519801252 +0000 UTC m=+0.305590694) (total time: 30.011157233s):
Trace[1084923015]: [30.011157233s] [30.011157233s] END
E0922 21:13:03.531100 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0922 21:13:03.531100 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0922 21:13:03.531100 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0922 21:13:03.531229 1 trace.go:82] Trace[1054401793]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-09-22 21:12:33.519469394 +0000 UTC m=+0.305488780) (total time: 30.011510622s):
Trace[1054401793]: [30.011510622s] [30.011510622s] END
E0922 21:13:03.531237 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0922 21:13:03.531237 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0922 21:13:03.531237 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0922 21:13:03.532295 1 trace.go:82] Trace[733308622]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-09-22 21:12:33.531491921 +0000 UTC m=+0.317281249) (total time: 30.000774934s):
Trace[733308622]: [30.000774934s] [30.000774934s] END
E0922 21:13:03.532380 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0922 21:13:03.532380 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0922 21:13:03.532380 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout

==> coredns [4009f65d3ee5] <==
E0922 21:13:03.531244 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0922 21:13:03.531546 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0922 21:13:03.533177 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
2019-09-22T21:12:34.872Z [INFO] plugin/ready: Still waiting on: "kubernetes"
.:53
2019-09-22T21:12:38.503Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
2019-09-22T21:12:38.503Z [INFO] CoreDNS-1.6.2
2019-09-22T21:12:38.503Z [INFO] linux/amd64, go1.12.8, 795a3eb
CoreDNS-1.6.2
linux/amd64, go1.12.8, 795a3eb
2019-09-22T21:12:44.873Z [INFO] plugin/ready: Still waiting on: "kubernetes"
2019-09-22T21:12:54.875Z [INFO] plugin/ready: Still waiting on: "kubernetes"
I0922 21:13:03.531212 1 trace.go:82] Trace[1993390614]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-09-22 21:12:33.519863308 +0000 UTC m=+0.121846578) (total time: 30.011311765s):
Trace[1993390614]: [30.011311765s] [30.011311765s] END
E0922 21:13:03.531244 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0922 21:13:03.531244 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0922 21:13:03.531515 1 trace.go:82] Trace[1790402991]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-09-22 21:12:33.530169937 +0000 UTC m=+0.132153194) (total time: 30.001326286s):
Trace[1790402991]: [30.001326286s] [30.001326286s] END
E0922 21:13:03.531546 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0922 21:13:03.531546 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0922 21:13:03.533162 1 trace.go:82] Trace[764555674]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-09-22 21:12:33.519905621 +0000 UTC m=+0.121888861) (total time: 30.013233504s):
Trace[764555674]: [30.013233504s] [30.013233504s] END
E0922 21:13:03.533177 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0922 21:13:03.533177 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout

==> dmesg <== [ +5.001187] hpet1: lost 318 rtc interrupts [ +5.000998] hpet1: lost 318 rtc interrupts [ +5.001410] hpet1: lost 318 rtc interrupts [Sep22 21:18] hpet1: lost 319 rtc interrupts [ +5.000333] hpet1: lost 318 rtc interrupts [ +5.001267] hpet1: lost 318 rtc interrupts [ +5.002489] hpet1: lost 318 rtc interrupts [ +5.001031] hpet1: lost 318 rtc interrupts [ +5.000741] hpet1: lost 318 rtc interrupts [ +5.004061] hpet1: lost 318 rtc interrupts [ +4.999427] hpet1: lost 318 rtc interrupts [ +5.000472] hpet1: lost 318 rtc interrupts [ +5.001030] hpet1: lost 318 rtc interrupts [ +5.002166] hpet1: lost 318 rtc interrupts [ +5.000709] hpet1: lost 318 rtc interrupts [Sep22 21:19] hpet1: lost 319 rtc interrupts [ +5.002029] hpet1: lost 318 rtc interrupts [ +5.001022] hpet1: lost 318 rtc interrupts [ +5.000774] hpet1: lost 318 rtc interrupts [ +5.001120] hpet1: lost 318 rtc interrupts [ +5.001215] hpet1: lost 318 rtc interrupts [ +5.001959] hpet1: lost 318 rtc interrupts [ +5.000477] hpet1: lost 318 rtc interrupts [ +5.000974] hpet1: lost 318 rtc interrupts [ +5.001287] hpet1: lost 318 rtc interrupts [ +5.000491] hpet1: lost 319 rtc interrupts [ +5.002020] hpet1: lost 318 rtc interrupts [Sep22 21:20] hpet1: lost 318 rtc interrupts [ +5.002410] hpet1: lost 318 rtc interrupts [ +5.002040] hpet1: lost 318 rtc interrupts [ +5.002879] hpet1: lost 318 rtc interrupts [ +5.000712] hpet1: lost 318 rtc interrupts [ +5.000762] hpet1: lost 319 rtc interrupts [ +5.001570] hpet1: lost 319 rtc interrupts [ +5.002836] hpet1: lost 320 rtc interrupts [ +5.001355] hpet1: lost 318 rtc interrupts [ +5.001537] hpet1: lost 318 rtc interrupts [ +5.001684] hpet1: lost 318 rtc interrupts [ +5.001419] hpet1: lost 318 rtc interrupts [Sep22 21:21] hpet1: lost 318 rtc interrupts [ +5.000653] hpet1: lost 318 rtc interrupts [ +5.001583] hpet1: lost 318 rtc interrupts [ +5.001620] hpet1: lost 318 rtc interrupts [ +4.999864] hpet1: lost 318 rtc interrupts [ +5.001014] hpet1: lost 318 rtc interrupts [ 
+5.001293] hpet1: lost 318 rtc interrupts [ +5.001231] hpet1: lost 318 rtc interrupts [ +5.001422] hpet1: lost 319 rtc interrupts [ +5.002305] hpet1: lost 318 rtc interrupts [ +5.001051] hpet1: lost 318 rtc interrupts [ +5.000916] hpet1: lost 318 rtc interrupts [Sep22 21:22] hpet1: lost 318 rtc interrupts [ +5.000971] hpet1: lost 318 rtc interrupts [ +5.001674] hpet1: lost 318 rtc interrupts [ +5.000617] hpet1: lost 318 rtc interrupts [ +5.001668] hpet1: lost 318 rtc interrupts [ +5.001565] hpet1: lost 318 rtc interrupts [ +5.000662] hpet1: lost 318 rtc interrupts [ +5.000760] hpet1: lost 318 rtc interrupts [ +5.001348] hpet1: lost 320 rtc interrupts

==> kernel <==
21:22:41 up 12 min, 0 users, load average: 1.45, 1.10, 0.73
Linux minikube 4.15.0 #1 SMP Wed Sep 18 07:44:58 PDT 2019 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2018.05.3"

==> kube-addon-manager [b74f2dd18237] <== deployment.apps/dashboard-metrics-scraper unchanged deployment.apps/kubernetes-dashboard unchanged namespace/kubernetes-dashboard unchanged role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged serviceaccount/kubernetes-dashboard unchanged secret/kubernetes-dashboard-certs unchanged secret/kubernetes-dashboard-csrf unchanged secret/kubernetes-dashboard-key-holder unchanged service/kubernetes-dashboard unchanged service/dashboard-metrics-scraper unchanged serviceaccount/storage-provisioner unchanged INFO: == Kubernetes addon reconcile completed at 2019-09-22T21:22:27+00:00 == INFO: Leader election disabled. INFO: == Kubernetes addon ensure completed at 2019-09-22T21:22:27+00:00 == INFO: == Reconciling with deprecated label == error: no objects passed to apply INFO: == Reconciling with addon-manager label == clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged configmap/kubernetes-dashboard-settings unchanged deployment.apps/dashboard-metrics-scraper unchanged deployment.apps/kubernetes-dashboard unchanged namespace/kubernetes-dashboard unchanged role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged error: no objects passed to apply error: no objects passed to apply serviceaccount/kubernetes-dashboard unchanged secret/kubernetes-dashboard-certs unchanged secret/kubernetes-dashboard-csrf unchanged secret/kubernetes-dashboard-key-holder unchanged service/kubernetes-dashboard unchanged service/dashboard-metrics-scraper unchanged serviceaccount/storage-provisioner unchanged INFO: == Kubernetes addon reconcile completed at 2019-09-22T21:22:31+00:00 == INFO: Leader election disabled. 
INFO: == Kubernetes addon ensure completed at 2019-09-22T21:22:33+00:00 == INFO: == Reconciling with deprecated label == INFO: == Reconciling with addon-manager label == clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged configmap/kubernetes-dashboard-settings unchanged deployment.apps/dashboard-metrics-scraper unchanged deployment.apps/kubernetes-dashboard unchanged namespace/kubernetes-dashboard unchanged role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged serviceaccount/kubernetes-dashboard unchanged secret/kubernetes-dashboard-certs unchanged secret/kubernetes-dashboard-csrf unchanged secret/kubernetes-dashboard-key-holder unchanged service/kubernetes-dashboard unchanged service/dashboard-metrics-scraper unchanged serviceaccount/storage-provisioner unchanged INFO: == Kubernetes addon reconcile completed at 2019-09-22T21:22:37+00:00 == INFO: Leader election disabled. INFO: == Kubernetes addon ensure completed at 2019-09-22T21:22:37+00:00 == INFO: == Reconciling with deprecated label == INFO: == Reconciling with addon-manager label ==

==> kube-apiserver [04543b0c1c48] <== I0922 21:12:17.730079 1 client.go:361] parsed scheme: "endpoint" I0922 21:12:17.730211 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] I0922 21:12:17.744326 1 client.go:361] parsed scheme: "endpoint" I0922 21:12:17.744993 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] I0922 21:12:17.765689 1 client.go:361] parsed scheme: "endpoint" I0922 21:12:17.767214 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] I0922 21:12:17.783531 1 client.go:361] parsed scheme: "endpoint" I0922 21:12:17.783563 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] W0922 21:12:17.898597 1 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources. W0922 21:12:17.918084 1 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources. W0922 21:12:17.958850 1 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources. W0922 21:12:17.976962 1 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources. W0922 21:12:17.987898 1 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources. W0922 21:12:18.004964 1 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources. W0922 21:12:18.005013 1 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources. I0922 21:12:18.014093 1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass. 
I0922 21:12:18.014113 1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota. I0922 21:12:18.015854 1 client.go:361] parsed scheme: "endpoint" I0922 21:12:18.015884 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] I0922 21:12:18.024435 1 client.go:361] parsed scheme: "endpoint" I0922 21:12:18.024623 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] I0922 21:12:20.127375 1 secure_serving.go:123] Serving securely on [::]:8443 I0922 21:12:20.128600 1 crd_finalizer.go:274] Starting CRDFinalizer I0922 21:12:20.128858 1 apiservice_controller.go:94] Starting APIServiceRegistrationController I0922 21:12:20.128884 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller I0922 21:12:20.128894 1 available_controller.go:383] Starting AvailableConditionController I0922 21:12:20.128963 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller I0922 21:12:20.129124 1 autoregister_controller.go:140] Starting autoregister controller I0922 21:12:20.129143 1 cache.go:32] Waiting for caches to sync for autoregister controller I0922 21:12:20.129530 1 controller.go:81] Starting OpenAPI AggregationController I0922 21:12:20.184347 1 controller.go:85] Starting OpenAPI controller I0922 21:12:20.184395 1 customresource_discovery_controller.go:208] Starting DiscoveryController I0922 21:12:20.184654 1 naming_controller.go:288] Starting NamingConditionController I0922 21:12:20.184838 1 establishing_controller.go:73] Starting EstablishingController I0922 21:12:20.185133 1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController I0922 21:12:20.185458 1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController I0922 
21:12:20.185857 1 crdregistration_controller.go:111] Starting crd-autoregister controller I0922 21:12:20.185878 1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister E0922 21:12:20.199827 1 controller.go:154] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.99.104, ResourceVersion: 0, AdditionalErrorMsg: I0922 21:12:20.344639 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller I0922 21:12:20.344670 1 cache.go:39] Caches are synced for AvailableConditionController controller I0922 21:12:20.344687 1 cache.go:39] Caches are synced for autoregister controller I0922 21:12:20.397741 1 shared_informer.go:204] Caches are synced for crd-autoregister I0922 21:12:21.127849 1 controller.go:107] OpenAPI AggregationController: Processing item I0922 21:12:21.127916 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue). I0922 21:12:21.128123 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). I0922 21:12:21.135863 1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000 I0922 21:12:21.152432 1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000 I0922 21:12:21.152602 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist. 
I0922 21:12:22.911519 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io I0922 21:12:23.191794 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io I0922 21:12:23.377969 1 controller.go:606] quota admission added evaluator for: endpoints W0922 21:12:23.484547 1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.99.104] I0922 21:12:23.635298 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io I0922 21:12:24.679511 1 controller.go:606] quota admission added evaluator for: serviceaccounts I0922 21:12:24.952486 1 controller.go:606] quota admission added evaluator for: deployments.apps I0922 21:12:25.315984 1 controller.go:606] quota admission added evaluator for: daemonsets.apps I0922 21:12:31.184757 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps I0922 21:12:31.267049 1 controller.go:606] quota admission added evaluator for: events.events.k8s.io I0922 21:12:31.368444 1 controller.go:606] quota admission added evaluator for: replicasets.apps

==> kube-controller-manager [a08da7b221f9] <== I0922 21:12:30.316262 1 controllermanager.go:534] Started "pv-protection" I0922 21:12:30.316304 1 pv_protection_controller.go:81] Starting PV protection controller I0922 21:12:30.316320 1 shared_informer.go:197] Waiting for caches to sync for PV protection I0922 21:12:30.565068 1 controllermanager.go:534] Started "podgc" I0922 21:12:30.565121 1 gc_controller.go:75] Starting GC controller I0922 21:12:30.565137 1 shared_informer.go:197] Waiting for caches to sync for GC I0922 21:12:30.815695 1 controllermanager.go:534] Started "replicaset" I0922 21:12:30.815925 1 replica_set.go:182] Starting replicaset controller I0922 21:12:30.815949 1 shared_informer.go:197] Waiting for caches to sync for ReplicaSet I0922 21:12:30.966303 1 controllermanager.go:534] Started "csrapproving" I0922 21:12:30.966321 1 certificate_controller.go:113] Starting certificate controller I0922 21:12:30.966812 1 shared_informer.go:197] Waiting for caches to sync for certificate I0922 21:12:31.115106 1 controllermanager.go:534] Started "csrcleaner" I0922 21:12:31.115626 1 cleaner.go:81] Starting CSR cleaner controller I0922 21:12:31.116088 1 shared_informer.go:197] Waiting for caches to sync for garbage collector I0922 21:12:31.131263 1 shared_informer.go:197] Waiting for caches to sync for resource quota W0922 21:12:31.140946 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist I0922 21:12:31.151873 1 shared_informer.go:204] Caches are synced for service account I0922 21:12:31.156701 1 shared_informer.go:204] Caches are synced for job I0922 21:12:31.165237 1 shared_informer.go:204] Caches are synced for GC I0922 21:12:31.166135 1 shared_informer.go:204] Caches are synced for TTL I0922 21:12:31.167298 1 shared_informer.go:204] Caches are synced for certificate I0922 21:12:31.167332 1 shared_informer.go:204] Caches 
are synced for bootstrap_signer I0922 21:12:31.173280 1 shared_informer.go:204] Caches are synced for HPA I0922 21:12:31.178212 1 shared_informer.go:204] Caches are synced for endpoint I0922 21:12:31.181073 1 shared_informer.go:204] Caches are synced for daemon sets I0922 21:12:31.201335 1 event.go:255] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"980a148d-3ec2-498f-a13b-72403498a0d9", APIVersion:"apps/v1", ResourceVersion:"208", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-fxmd6 I0922 21:12:31.203283 1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator I0922 21:12:31.205202 1 shared_informer.go:204] Caches are synced for taint I0922 21:12:31.205365 1 node_lifecycle_controller.go:1208] Initializing eviction metric for zone: W0922 21:12:31.205642 1 node_lifecycle_controller.go:903] Missing timestamp for Node minikube. Assuming now as a timestamp. I0922 21:12:31.205938 1 node_lifecycle_controller.go:1108] Controller detected that zone is now in state Normal. 
I0922 21:12:31.207866 1 taint_manager.go:186] Starting NoExecuteTaintManager I0922 21:12:31.208700 1 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"44fcb7a6-704b-4403-8c9d-80b03c403fce", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller I0922 21:12:31.216240 1 shared_informer.go:204] Caches are synced for certificate I0922 21:12:31.218820 1 shared_informer.go:204] Caches are synced for PV protection I0922 21:12:31.218994 1 shared_informer.go:204] Caches are synced for ReplicaSet I0922 21:12:31.230423 1 shared_informer.go:204] Caches are synced for namespace E0922 21:12:31.296953 1 clusterroleaggregation_controller.go:180] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again I0922 21:12:31.366192 1 shared_informer.go:204] Caches are synced for deployment I0922 21:12:31.377217 1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"13c934d0-cb70-4666-b2f3-50b20d265591", APIVersion:"apps/v1", ResourceVersion:"191", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-5644d7b6d9 to 2 I0922 21:12:31.396879 1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"0c56814e-89b4-4423-9b9c-99a202dbfadb", APIVersion:"apps/v1", ResourceVersion:"345", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-fssrp I0922 21:12:31.419367 1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"0c56814e-89b4-4423-9b9c-99a202dbfadb", APIVersion:"apps/v1", ResourceVersion:"345", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-k85b5 
I0922 21:12:31.517155 1 shared_informer.go:204] Caches are synced for disruption I0922 21:12:31.517269 1 disruption.go:341] Sending events to api server. I0922 21:12:31.518178 1 shared_informer.go:204] Caches are synced for attach detach I0922 21:12:31.527218 1 shared_informer.go:204] Caches are synced for ReplicationController I0922 21:12:31.571276 1 shared_informer.go:204] Caches are synced for expand I0922 21:12:31.571363 1 shared_informer.go:204] Caches are synced for stateful set I0922 21:12:31.571628 1 shared_informer.go:204] Caches are synced for persistent volume I0922 21:12:31.622023 1 shared_informer.go:204] Caches are synced for PVC protection I0922 21:12:31.675227 1 shared_informer.go:204] Caches are synced for resource quota I0922 21:12:31.729139 1 shared_informer.go:204] Caches are synced for garbage collector I0922 21:12:31.731958 1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage I0922 21:12:31.733919 1 shared_informer.go:204] Caches are synced for garbage collector I0922 21:12:31.734132 1 shared_informer.go:204] Caches are synced for resource quota I0922 21:12:37.424314 1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper", UID:"0f2044cd-ae99-49e1-8124-ecf9f1bb73b5", APIVersion:"apps/v1", ResourceVersion:"421", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set dashboard-metrics-scraper-76585494d8 to 1 I0922 21:12:37.433249 1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-76585494d8", UID:"a663f206-8b7f-4b12-98ed-868776e78703", APIVersion:"apps/v1", ResourceVersion:"422", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-76585494d8-m6c4s I0922 21:12:37.518411 1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", 
Name:"kubernetes-dashboard", UID:"5cdffcec-af57-456e-ad1e-869354047f4c", APIVersion:"apps/v1", ResourceVersion:"433", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-57f4cb4545 to 1 I0922 21:12:37.536935 1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-57f4cb4545", UID:"9a5cec2a-b936-4805-b2ec-0091e96ad0f6", APIVersion:"apps/v1", ResourceVersion:"434", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-57f4cb4545-w8skk

==> kube-proxy [078c3f6ad769] <== W0922 21:12:33.541721 1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy I0922 21:12:33.598130 1 node.go:135] Successfully retrieved node IP: 10.0.2.15 I0922 21:12:33.598174 1 server_others.go:149] Using iptables Proxier. W0922 21:12:33.599557 1 proxier.go:287] clusterCIDR not specified, unable to distinguish between internal and external traffic I0922 21:12:33.601032 1 server.go:529] Version: v1.16.0 I0922 21:12:33.607974 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072 I0922 21:12:33.608005 1 conntrack.go:52] Setting nf_conntrack_max to 131072 I0922 21:12:33.609969 1 conntrack.go:83] Setting conntrack hashsize to 32768 I0922 21:12:33.615813 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 I0922 21:12:33.616040 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600 I0922 21:12:33.617180 1 config.go:313] Starting service config controller I0922 21:12:33.618236 1 shared_informer.go:197] Waiting for caches to sync for service config I0922 21:12:33.630125 1 config.go:131] Starting endpoints config controller I0922 21:12:33.630173 1 shared_informer.go:197] Waiting for caches to sync for endpoints config I0922 21:12:33.733674 1 shared_informer.go:204] Caches are synced for service config I0922 21:12:33.733872 1 shared_informer.go:204] Caches are synced for endpoints config

==> kube-scheduler [7584f4aedfff] <== I0922 21:12:16.193107 1 serving.go:319] Generated self-signed cert in-memory W0922 21:12:20.252812 1 authentication.go:262] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' W0922 21:12:20.252843 1 authentication.go:199] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system" W0922 21:12:20.252851 1 authentication.go:200] Continuing without authentication configuration. This may treat all requests as anonymous. W0922 21:12:20.252928 1 authentication.go:201] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false I0922 21:12:20.256085 1 server.go:143] Version: v1.16.0 I0922 21:12:20.256150 1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory W0922 21:12:20.268538 1 authorization.go:47] Authorization is disabled W0922 21:12:20.268883 1 authentication.go:79] Authentication is disabled I0922 21:12:20.268988 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251 I0922 21:12:20.269582 1 secure_serving.go:123] Serving securely on 127.0.0.1:10259 E0922 21:12:20.353029 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0922 21:12:20.363729 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E0922 21:12:20.363857 1 
reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0922 21:12:20.364193 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0922 21:12:20.364463 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0922 21:12:20.364470 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0922 21:12:20.366963 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0922 21:12:20.367193 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0922 21:12:20.367227 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0922 21:12:20.367304 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot 
list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0922 21:12:20.367339 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0922 21:12:21.368820 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E0922 21:12:21.369098 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0922 21:12:21.371181 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0922 21:12:21.373349 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0922 21:12:21.373725 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0922 21:12:21.376110 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0922 21:12:21.380158 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.StorageClass: 
storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0922 21:12:21.380547 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0922 21:12:21.382951 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0922 21:12:21.386832 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0922 21:12:21.390136 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope I0922 21:12:23.373001 1 leaderelection.go:241] attempting to acquire leader lease kube-system/kube-scheduler... I0922 21:12:23.380917 1 leaderelection.go:251] successfully acquired lease kube-system/kube-scheduler

==> kubelet <== -- Logs begin at Sun 2019-09-22 21:10:34 UTC, end at Sun 2019-09-22 21:22:41 UTC. -- Sep 22 21:12:17 minikube kubelet[3365]: E0922 21:12:17.589407 3365 kubelet.go:2267] node "minikube" not found Sep 22 21:12:17 minikube kubelet[3365]: E0922 21:12:17.689781 3365 kubelet.go:2267] node "minikube" not found Sep 22 21:12:17 minikube kubelet[3365]: E0922 21:12:17.790147 3365 kubelet.go:2267] node "minikube" not found Sep 22 21:12:17 minikube kubelet[3365]: E0922 21:12:17.890650 3365 kubelet.go:2267] node "minikube" not found Sep 22 21:12:17 minikube kubelet[3365]: E0922 21:12:17.991053 3365 kubelet.go:2267] node "minikube" not found Sep 22 21:12:18 minikube kubelet[3365]: E0922 21:12:18.092299 3365 kubelet.go:2267] node "minikube" not found Sep 22 21:12:18 minikube kubelet[3365]: E0922 21:12:18.192723 3365 kubelet.go:2267] node "minikube" not found Sep 22 21:12:18 minikube kubelet[3365]: E0922 21:12:18.293060 3365 kubelet.go:2267] node "minikube" not found Sep 22 21:12:18 minikube kubelet[3365]: E0922 21:12:18.393660 3365 kubelet.go:2267] node "minikube" not found Sep 22 21:12:18 minikube kubelet[3365]: E0922 21:12:18.493909 3365 kubelet.go:2267] node "minikube" not found Sep 22 21:12:18 minikube kubelet[3365]: E0922 21:12:18.594359 3365 kubelet.go:2267] node "minikube" not found Sep 22 21:12:18 minikube kubelet[3365]: E0922 21:12:18.695011 3365 kubelet.go:2267] node "minikube" not found Sep 22 21:12:18 minikube kubelet[3365]: E0922 21:12:18.795724 3365 kubelet.go:2267] node "minikube" not found Sep 22 21:12:18 minikube kubelet[3365]: E0922 21:12:18.895937 3365 kubelet.go:2267] node "minikube" not found Sep 22 21:12:18 minikube kubelet[3365]: E0922 21:12:18.997337 3365 kubelet.go:2267] node "minikube" not found Sep 22 21:12:19 minikube kubelet[3365]: E0922 21:12:19.098184 3365 kubelet.go:2267] node "minikube" not found Sep 22 21:12:19 minikube kubelet[3365]: E0922 21:12:19.198597 3365 kubelet.go:2267] node "minikube" not found Sep 22 21:12:19 minikube 
kubelet[3365]: E0922 21:12:19.299161 3365 kubelet.go:2267] node "minikube" not found Sep 22 21:12:19 minikube kubelet[3365]: E0922 21:12:19.400117 3365 kubelet.go:2267] node "minikube" not found Sep 22 21:12:19 minikube kubelet[3365]: E0922 21:12:19.500738 3365 kubelet.go:2267] node "minikube" not found Sep 22 21:12:19 minikube kubelet[3365]: E0922 21:12:19.602910 3365 kubelet.go:2267] node "minikube" not found Sep 22 21:12:19 minikube kubelet[3365]: E0922 21:12:19.703568 3365 kubelet.go:2267] node "minikube" not found Sep 22 21:12:19 minikube kubelet[3365]: E0922 21:12:19.804143 3365 kubelet.go:2267] node "minikube" not found Sep 22 21:12:19 minikube kubelet[3365]: E0922 21:12:19.904667 3365 kubelet.go:2267] node "minikube" not found Sep 22 21:12:20 minikube kubelet[3365]: E0922 21:12:20.005463 3365 kubelet.go:2267] node "minikube" not found Sep 22 21:12:20 minikube kubelet[3365]: E0922 21:12:20.106582 3365 kubelet.go:2267] node "minikube" not found Sep 22 21:12:20 minikube kubelet[3365]: E0922 21:12:20.208103 3365 kubelet.go:2267] node "minikube" not found Sep 22 21:12:20 minikube kubelet[3365]: E0922 21:12:20.308329 3365 kubelet.go:2267] node "minikube" not found Sep 22 21:12:20 minikube kubelet[3365]: E0922 21:12:20.354008 3365 controller.go:220] failed to get node "minikube" when trying to set owner ref to the node lease: nodes "minikube" not found Sep 22 21:12:20 minikube kubelet[3365]: I0922 21:12:20.377399 3365 kubelet_node_status.go:75] Successfully registered node minikube Sep 22 21:12:20 minikube kubelet[3365]: I0922 21:12:20.388257 3365 reconciler.go:154] Reconciler: start to sync state Sep 22 21:12:20 minikube kubelet[3365]: E0922 21:12:20.429443 3365 controller.go:135] failed to ensure node lease exists, will retry in 3.2s, error: namespaces "kube-node-lease" not found Sep 22 21:12:31 minikube kubelet[3365]: I0922 21:12:31.343103 3365 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: 
"kubernetes.io/configmap/b9742a62-b9b9-446c-aa49-d08157f90296-kube-proxy") pod "kube-proxy-fxmd6" (UID: "b9742a62-b9b9-446c-aa49-d08157f90296") Sep 22 21:12:31 minikube kubelet[3365]: I0922 21:12:31.343784 3365 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/b9742a62-b9b9-446c-aa49-d08157f90296-xtables-lock") pod "kube-proxy-fxmd6" (UID: "b9742a62-b9b9-446c-aa49-d08157f90296") Sep 22 21:12:31 minikube kubelet[3365]: I0922 21:12:31.343869 3365 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/b9742a62-b9b9-446c-aa49-d08157f90296-lib-modules") pod "kube-proxy-fxmd6" (UID: "b9742a62-b9b9-446c-aa49-d08157f90296") Sep 22 21:12:31 minikube kubelet[3365]: I0922 21:12:31.343932 3365 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-c8gpw" (UniqueName: "kubernetes.io/secret/b9742a62-b9b9-446c-aa49-d08157f90296-kube-proxy-token-c8gpw") pod "kube-proxy-fxmd6" (UID: "b9742a62-b9b9-446c-aa49-d08157f90296") Sep 22 21:12:31 minikube kubelet[3365]: I0922 21:12:31.545221 3365 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8c107f94-1328-44cc-8f29-9d2a69fd536d-config-volume") pod "coredns-5644d7b6d9-fssrp" (UID: "8c107f94-1328-44cc-8f29-9d2a69fd536d") Sep 22 21:12:31 minikube kubelet[3365]: I0922 21:12:31.545636 3365 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7cad0fe8-ed80-497e-a145-db5831ee9fc9-config-volume") pod "coredns-5644d7b6d9-k85b5" (UID: "7cad0fe8-ed80-497e-a145-db5831ee9fc9") Sep 22 21:12:31 minikube kubelet[3365]: I0922 21:12:31.545864 3365 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-j9f7j" 
(UniqueName: "kubernetes.io/secret/8c107f94-1328-44cc-8f29-9d2a69fd536d-coredns-token-j9f7j") pod "coredns-5644d7b6d9-fssrp" (UID: "8c107f94-1328-44cc-8f29-9d2a69fd536d") Sep 22 21:12:31 minikube kubelet[3365]: I0922 21:12:31.545944 3365 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-j9f7j" (UniqueName: "kubernetes.io/secret/7cad0fe8-ed80-497e-a145-db5831ee9fc9-coredns-token-j9f7j") pod "coredns-5644d7b6d9-k85b5" (UID: "7cad0fe8-ed80-497e-a145-db5831ee9fc9") Sep 22 21:12:32 minikube kubelet[3365]: W0922 21:12:32.667156 3365 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5644d7b6d9-fssrp through plugin: invalid network status for Sep 22 21:12:32 minikube kubelet[3365]: W0922 21:12:32.670581 3365 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5644d7b6d9-fssrp through plugin: invalid network status for Sep 22 21:12:32 minikube kubelet[3365]: W0922 21:12:32.673071 3365 pod_container_deletor.go:75] Container "6acc4cece7e4a1768fe96bbc3333168169497c34567fc0835e4b4dc470adf2f7" not found in pod's containers Sep 22 21:12:32 minikube kubelet[3365]: W0922 21:12:32.841674 3365 pod_container_deletor.go:75] Container "6d6f14ad05d03ab576ed90f27457cbe22a08cd1cc17c929db0a25511fe5e1417" not found in pod's containers Sep 22 21:12:32 minikube kubelet[3365]: W0922 21:12:32.853700 3365 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5644d7b6d9-k85b5 through plugin: invalid network status for Sep 22 21:12:33 minikube kubelet[3365]: W0922 21:12:33.850647 3365 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5644d7b6d9-fssrp through plugin: invalid network status for Sep 22 21:12:33 minikube kubelet[3365]: W0922 21:12:33.886673 3365 docker_sandbox.go:394] 
failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5644d7b6d9-k85b5 through plugin: invalid network status for Sep 22 21:12:34 minikube kubelet[3365]: I0922 21:12:34.890915 3365 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-b425f" (UniqueName: "kubernetes.io/secret/9074e845-201b-42c2-af90-4714a7c24e62-storage-provisioner-token-b425f") pod "storage-provisioner" (UID: "9074e845-201b-42c2-af90-4714a7c24e62") Sep 22 21:12:34 minikube kubelet[3365]: I0922 21:12:34.891698 3365 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/9074e845-201b-42c2-af90-4714a7c24e62-tmp") pod "storage-provisioner" (UID: "9074e845-201b-42c2-af90-4714a7c24e62") Sep 22 21:12:37 minikube kubelet[3365]: I0922 21:12:37.614814 3365 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-5zwjl" (UniqueName: "kubernetes.io/secret/b20844d8-1689-41bb-be00-9663c0d2c213-kubernetes-dashboard-token-5zwjl") pod "dashboard-metrics-scraper-76585494d8-m6c4s" (UID: "b20844d8-1689-41bb-be00-9663c0d2c213") Sep 22 21:12:37 minikube kubelet[3365]: I0922 21:12:37.614882 3365 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/b20844d8-1689-41bb-be00-9663c0d2c213-tmp-volume") pod "dashboard-metrics-scraper-76585494d8-m6c4s" (UID: "b20844d8-1689-41bb-be00-9663c0d2c213") Sep 22 21:12:37 minikube kubelet[3365]: I0922 21:12:37.715711 3365 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/fb7ecd2c-246e-452e-863e-0d0277c924cb-tmp-volume") pod "kubernetes-dashboard-57f4cb4545-w8skk" (UID: "fb7ecd2c-246e-452e-863e-0d0277c924cb") Sep 22 21:12:37 minikube kubelet[3365]: I0922 21:12:37.715764 3365 
reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-5zwjl" (UniqueName: "kubernetes.io/secret/fb7ecd2c-246e-452e-863e-0d0277c924cb-kubernetes-dashboard-token-5zwjl") pod "kubernetes-dashboard-57f4cb4545-w8skk" (UID: "fb7ecd2c-246e-452e-863e-0d0277c924cb") Sep 22 21:12:38 minikube kubelet[3365]: W0922 21:12:38.561826 3365 pod_container_deletor.go:75] Container "97bbe5483acb8e1f952444a6a9a71642cd892daeae0403d73061fd545fb8a450" not found in pod's containers Sep 22 21:12:38 minikube kubelet[3365]: W0922 21:12:38.564362 3365 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-76585494d8-m6c4s through plugin: invalid network status for Sep 22 21:12:38 minikube kubelet[3365]: W0922 21:12:38.588308 3365 pod_container_deletor.go:75] Container "14a7f31cbadc91737084b8e95f714fb1be93353866255ec4ecd99aa77a0f64a3" not found in pod's containers Sep 22 21:12:38 minikube kubelet[3365]: W0922 21:12:38.592645 3365 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-57f4cb4545-w8skk through plugin: invalid network status for Sep 22 21:12:39 minikube kubelet[3365]: W0922 21:12:39.599873 3365 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-76585494d8-m6c4s through plugin: invalid network status for Sep 22 21:12:39 minikube kubelet[3365]: W0922 21:12:39.602478 3365 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-57f4cb4545-w8skk through plugin: invalid network status for Sep 22 21:12:44 minikube kubelet[3365]: W0922 21:12:44.703468 3365 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for 
kubernetes-dashboard/dashboard-metrics-scraper-76585494d8-m6c4s through plugin: invalid network status for

==> kubernetes-dashboard [0a1625f5f04a] <== 2019/09/22 21:12:38 Using namespace: kubernetes-dashboard 2019/09/22 21:12:38 Using in-cluster config to connect to apiserver 2019/09/22 21:12:38 Using secret token for csrf signing 2019/09/22 21:12:38 Starting overwatch 2019/09/22 21:12:38 Initializing csrf token from kubernetes-dashboard-csrf secret 2019/09/22 21:12:39 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf 2019/09/22 21:12:39 Successful initial request to the apiserver, version: v1.16.0 2019/09/22 21:12:39 Generating JWE encryption key 2019/09/22 21:12:39 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting 2019/09/22 21:12:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard 2019/09/22 21:12:39 Initializing JWE encryption key from synchronized object 2019/09/22 21:12:39 Creating in-cluster Sidecar client 2019/09/22 21:12:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds. 2019/09/22 21:12:39 Serving insecurely on HTTP port: 9090 2019/09/22 21:13:09 Successful request to sidecar

==> storage-provisioner [c969f8c95784] <==

The operating system version:

macOS Mojave 10.14.6

afbjorklund commented 5 years ago

Same problem that killed Helm, I guess?

https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/
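
For context: Kubernetes 1.16 removed the deprecated apps/v1beta1 (and extensions/v1beta1) Deployment APIs, which is exactly what the error message is complaining about. A minimal sketch of what a 1.16-compatible manifest looks like, with a placeholder name and the echoserver image from the quickstart (note that apps/v1 also requires spec.selector to be set explicitly):

```shell
#!/bin/sh
# Print a Deployment manifest targeting the apps/v1 API group, which
# replaces the apps/v1beta1 group removed in Kubernetes 1.16.
print_manifest() {
cat <<'EOF'
apiVersion: apps/v1      # was: apps/v1beta1 (removed in 1.16)
kind: Deployment
metadata:
  name: hello-minikube
spec:
  replicas: 1
  selector:               # required in apps/v1
    matchLabels:
      app: hello-minikube
  template:
    metadata:
      labels:
        app: hello-minikube
    spec:
      containers:
      - name: hello-minikube
        image: k8s.gcr.io/echoserver:1.10
        ports:
        - containerPort: 8080
EOF
}
print_manifest
```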

tstromberg commented 5 years ago

Will try to get this sorted out today. Thanks for the heads up!

sharifelgamal commented 5 years ago

Upgrading to kubectl 1.13.11 fixes this issue on my end. Alternatively, calling kubectl create as the error message suggests also works. It seems to be a version incompatibility between an older version of kubectl and a newer version of k8s.
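
For anyone landing here from the quickstart: with an up-to-date kubectl, the equivalent of the failing `kubectl run` line is roughly the two commands below (this is a sketch that assumes a running minikube cluster, so it can't be verified standalone):

```shell
# Requires kubectl >= 1.14 against a running cluster.
# Replaces the generator-based `kubectl run ... --port=8080` quickstart line:
kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.10
kubectl expose deployment hello-minikube --port=8080
```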

nickebbitt commented 5 years ago

That makes sense, I was playing around with Linkerd last night and it warned me kubectl was not a new enough version so I upgraded it.

I think it was v1.11 that I was on previously.

afbjorklund commented 5 years ago

You can use minikube kubectl to automatically get a matching version.

nickebbitt commented 5 years ago

Ah nice, thanks.

I wonder if it would be worth implementing a check, similar to what Linkerd does, so you can verify your install?

afbjorklund commented 5 years ago

Not familiar with what Linkerd does, but do you mean something like brew doctor, i.e. an install sanity check?

nickebbitt commented 5 years ago

Yeah, I think so. For example, Linkerd gave me this output from a linkerd check command; see the kubernetes-version section:

linkerd check --pre
kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API

kubernetes-version
------------------
√ is running the minimum Kubernetes API version
× is running the minimum kubectl version
    kubectl is on version [1.11.2], but version [1.12.0] or more recent is required
    see https://linkerd.io/checks/#kubectl-version for hints

pre-kubernetes-setup
--------------------
√ control plane namespace does not already exist
√ can create Namespaces
√ can create ClusterRoles
√ can create ClusterRoleBindings
√ can create CustomResourceDefinitions
√ can create PodSecurityPolicies
√ can create ServiceAccounts
√ can create Services
√ can create Deployments
√ can create CronJobs
√ can create ConfigMaps
√ no clock skew detected

pre-kubernetes-capability
-------------------------
√ has NET_ADMIN capability
√ has NET_RAW capability

pre-linkerd-global-resources
----------------------------
√ no ClusterRoles exist
√ no ClusterRoleBindings exist
√ no CustomResourceDefinitions exist
√ no MutatingWebhookConfigurations exist
√ no ValidatingWebhookConfigurations exist
√ no PodSecurityPolicies exist

linkerd-version
---------------
√ can determine the latest version
√ cli is up-to-date

Status check results are ×

tstromberg commented 5 years ago
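
A brew doctor-style check for minikube could start as small as comparing client and server minor versions, since kubectl's documented support policy is one minor version of skew in either direction. A rough, hypothetical sketch — the skew_ok helper and the hard-coded version strings are illustrative only, not a real minikube command:

```shell
#!/bin/sh
# Hypothetical sanity check: is the kubectl client within one minor
# version of the cluster apiserver? (kubectl supports +/-1 minor skew.)
minor() {  # extract the minor number from a version like "v1.11.2"
  echo "$1" | sed 's/^v//' | cut -d. -f2
}
skew_ok() {  # usage: skew_ok <client-version> <server-version>
  c=$(minor "$1"); s=$(minor "$2")
  d=$((c - s))
  [ "$d" -ge -1 ] && [ "$d" -le 1 ]
}
# In a real check these would come from `kubectl version` and the apiserver;
# here we use the versions from this issue (client 1.11 vs. cluster 1.16).
if skew_ok "v1.11.2" "v1.16.0"; then
  echo "kubectl version OK"
else
  echo "kubectl is too old/new for this cluster; try 'minikube kubectl'"
fi
# -> prints the "too old/new" warning for this example pair
```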

Related: #3329

nickebbitt commented 5 years ago

I'll take a look at that when I get a chance and see if I can help.

Should we close this issue?

tstromberg commented 5 years ago

Closing in preference to #3329 - thanks for the issue report!