kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "mypod": Error response from daemon (#6160)

Closed: SwEngin closed this issue 4 years ago.

SwEngin commented 4 years ago

The exact command to reproduce the issue: create a pod with a resource limit (a hypothetical reproduction is sketched below).
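The reporter's manifest is not included in the issue, so the following is a minimal hypothetical reproduction, assuming a pod whose memory limit is too small for its container process to start. Only the pod name "mypod" comes from the report; the image and limit values are illustrative assumptions:

```shell
# Hypothetical repro (values are assumptions, not the reporter's spec):
# a memory limit this small can prevent the container's init process from
# starting, which surfaces as the "write init-p: broken pipe" error below.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: app
    image: nginx
    resources:
      limits:
        memory: "4Mi"
EOF
```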

The full output of the command that failed:

Warning FailedCreatePodSandBox 14m (x13 over 14m) kubelet, minikube Failed to create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "mypod": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:315: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown
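For context, the warning above is a kubelet event rather than direct command output; the same events can be retrieved at any time with standard kubectl commands:

```shell
# Events for the failing pod (pod name taken from the report above)
kubectl describe pod mypod

# Or all recent events in the namespace, sorted by creation time
kubectl get events --sort-by=.metadata.creationTimestamp
```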

The output of the minikube logs command:

==> Docker <==
-- Logs begin at Thu 2019-12-26 08:53:07 UTC, end at Thu 2019-12-26 08:57:14 UTC. --
Dec 26 08:57:04 minikube dockerd[1945]: time="2019-12-26T08:57:04.833372377Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5cdfe40a5e48904b67a505a51066ae62995a2437cdef1fcf97c4cd93745bdcb3/shim.sock" debug=false pid=14153
Dec 26 08:57:04 minikube dockerd[1945]: time="2019-12-26T08:57:04.859249705Z" level=info msg="shim reaped" id=5cdfe40a5e48904b67a505a51066ae62995a2437cdef1fcf97c4cd93745bdcb3
Dec 26 08:57:04 minikube dockerd[1945]: time="2019-12-26T08:57:04.870242814Z" level=error msg="stream copy error: reading from a closed fifo"
Dec 26 08:57:04 minikube dockerd[1945]: time="2019-12-26T08:57:04.870391148Z" level=error msg="stream copy error: reading from a closed fifo"
Dec 26 08:57:04 minikube dockerd[1945]: time="2019-12-26T08:57:04.902900738Z" level=error msg="5cdfe40a5e48904b67a505a51066ae62995a2437cdef1fcf97c4cd93745bdcb3 cleanup: failed to delete container from containerd: no such container"
Dec 26 08:57:04 minikube dockerd[1945]: time="2019-12-26T08:57:04.903052425Z" level=error msg="Handler for POST /containers/5cdfe40a5e48904b67a505a51066ae62995a2437cdef1fcf97c4cd93745bdcb3/start returned error: OCI runtime create failed: container_linux.go:346: starting container process caused \"process_linux.go:319: getting the final child's pid from pipe caused \\"read init-p: connection reset by peer\\"\": unknown"
Dec 26 08:57:05 minikube dockerd[1945]: time="2019-12-26T08:57:05.894947608Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b8ef2959a79cc7d76c4a79408a10ef6ff35c0ac4a6f065c6843564ba57b5ee25/shim.sock" debug=false pid=14186
Dec 26 08:57:05 minikube dockerd[1945]: time="2019-12-26T08:57:05.913421025Z" level=info msg="shim reaped" id=b8ef2959a79cc7d76c4a79408a10ef6ff35c0ac4a6f065c6843564ba57b5ee25
Dec 26 08:57:05 minikube dockerd[1945]: time="2019-12-26T08:57:05.926269791Z" level=error msg="stream copy error: reading from a closed fifo"
Dec 26 08:57:05 minikube dockerd[1945]: time="2019-12-26T08:57:05.927279472Z" level=error msg="stream copy error: reading from a closed fifo"
Dec 26 08:57:05 minikube dockerd[1945]: time="2019-12-26T08:57:05.977161450Z" level=error msg="b8ef2959a79cc7d76c4a79408a10ef6ff35c0ac4a6f065c6843564ba57b5ee25 cleanup: failed to delete container from containerd: no such container"
Dec 26 08:57:05 minikube dockerd[1945]: time="2019-12-26T08:57:05.977215253Z" level=error msg="Handler for POST /containers/b8ef2959a79cc7d76c4a79408a10ef6ff35c0ac4a6f065c6843564ba57b5ee25/start returned error: OCI runtime create failed: container_linux.go:346: starting container process caused \"process_linux.go:315: copying bootstrap data to pipe caused \\"write init-p: broken pipe\\"\": unknown"
Dec 26 08:57:06 minikube dockerd[1945]: time="2019-12-26T08:57:06.967277278Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8ed054452be1c2e72637d45e699a03172497f055ebfe4f49bf6ee17dc6e2733d/shim.sock" debug=false pid=14249
Dec 26 08:57:06 minikube dockerd[1945]: time="2019-12-26T08:57:06.993097307Z" level=info msg="shim reaped" id=8ed054452be1c2e72637d45e699a03172497f055ebfe4f49bf6ee17dc6e2733d
Dec 26 08:57:07 minikube dockerd[1945]: time="2019-12-26T08:57:07.003766006Z" level=error msg="stream copy error: reading from a closed fifo"
Dec 26 08:57:07 minikube dockerd[1945]: time="2019-12-26T08:57:07.003956335Z" level=error msg="stream copy error: reading from a closed fifo"
Dec 26 08:57:07 minikube dockerd[1945]: time="2019-12-26T08:57:07.047455405Z" level=error msg="8ed054452be1c2e72637d45e699a03172497f055ebfe4f49bf6ee17dc6e2733d cleanup: failed to delete container from containerd: no such container"
Dec 26 08:57:07 minikube dockerd[1945]: time="2019-12-26T08:57:07.048170375Z" level=error msg="Handler for POST /containers/8ed054452be1c2e72637d45e699a03172497f055ebfe4f49bf6ee17dc6e2733d/start returned error: OCI runtime create failed: container_linux.go:346: starting container process caused \"process_linux.go:315: copying bootstrap data to pipe caused \\"write init-p: broken pipe\\"\": unknown"
Dec 26 08:57:08 minikube dockerd[1945]: time="2019-12-26T08:57:08.005870269Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d73a0f7f422d9b2e8b623f6f9c6cde86d63b8258f628ab818dc350a6c1b600a4/shim.sock" debug=false pid=14303
Dec 26 08:57:08 minikube dockerd[1945]: time="2019-12-26T08:57:08.027754489Z" level=info msg="shim reaped" id=d73a0f7f422d9b2e8b623f6f9c6cde86d63b8258f628ab818dc350a6c1b600a4
Dec 26 08:57:08 minikube dockerd[1945]: time="2019-12-26T08:57:08.038288095Z" level=error msg="stream copy error: reading from a closed fifo"
Dec 26 08:57:08 minikube dockerd[1945]: time="2019-12-26T08:57:08.038484026Z" level=error msg="stream copy error: reading from a closed fifo"
Dec 26 08:57:08 minikube dockerd[1945]: time="2019-12-26T08:57:08.075359243Z" level=error msg="d73a0f7f422d9b2e8b623f6f9c6cde86d63b8258f628ab818dc350a6c1b600a4 cleanup: failed to delete container from containerd: no such container"
Dec 26 08:57:08 minikube dockerd[1945]: time="2019-12-26T08:57:08.075449360Z" level=error msg="Handler for POST /containers/d73a0f7f422d9b2e8b623f6f9c6cde86d63b8258f628ab818dc350a6c1b600a4/start returned error: OCI runtime create failed: container_linux.go:346: starting container process caused \"process_linux.go:315: copying bootstrap data to pipe caused \\"write init-p: broken pipe\\"\": unknown"
Dec 26 08:57:09 minikube dockerd[1945]: time="2019-12-26T08:57:09.066929147Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/31c2e630b6bbbe71686453fecafd2088bb1d12bf5e633d04806d366763b81560/shim.sock" debug=false pid=14335
Dec 26 08:57:09 minikube dockerd[1945]: time="2019-12-26T08:57:09.086305973Z" level=info msg="shim reaped" id=31c2e630b6bbbe71686453fecafd2088bb1d12bf5e633d04806d366763b81560
Dec 26 08:57:09 minikube dockerd[1945]: time="2019-12-26T08:57:09.096659008Z" level=error msg="stream copy error: reading from a closed fifo"
Dec 26 08:57:09 minikube dockerd[1945]: time="2019-12-26T08:57:09.098955804Z" level=error msg="stream copy error: reading from a closed fifo"
Dec 26 08:57:09 minikube dockerd[1945]: time="2019-12-26T08:57:09.135527607Z" level=error msg="31c2e630b6bbbe71686453fecafd2088bb1d12bf5e633d04806d366763b81560 cleanup: failed to delete container from containerd: no such container"
Dec 26 08:57:09 minikube dockerd[1945]: time="2019-12-26T08:57:09.135636157Z" level=error msg="Handler for POST /containers/31c2e630b6bbbe71686453fecafd2088bb1d12bf5e633d04806d366763b81560/start returned error: OCI runtime create failed: container_linux.go:346: starting container process caused \"process_linux.go:315: copying bootstrap data to pipe caused \\"write init-p: broken pipe\\"\": unknown"
Dec 26 08:57:10 minikube dockerd[1945]: time="2019-12-26T08:57:10.107835987Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/016a54954678588670253e22049df77c5f58d4716462b7acee7704468a315ccd/shim.sock" debug=false pid=14375
Dec 26 08:57:10 minikube dockerd[1945]: time="2019-12-26T08:57:10.130143680Z" level=info msg="shim reaped" id=016a54954678588670253e22049df77c5f58d4716462b7acee7704468a315ccd
Dec 26 08:57:10 minikube dockerd[1945]: time="2019-12-26T08:57:10.141492042Z" level=error msg="stream copy error: reading from a closed fifo"
Dec 26 08:57:10 minikube dockerd[1945]: time="2019-12-26T08:57:10.141718503Z" level=error msg="stream copy error: reading from a closed fifo"
Dec 26 08:57:10 minikube dockerd[1945]: time="2019-12-26T08:57:10.177102529Z" level=error msg="016a54954678588670253e22049df77c5f58d4716462b7acee7704468a315ccd cleanup: failed to delete container from containerd: no such container"
Dec 26 08:57:10 minikube dockerd[1945]: time="2019-12-26T08:57:10.177227492Z" level=error msg="Handler for POST /containers/016a54954678588670253e22049df77c5f58d4716462b7acee7704468a315ccd/start returned error: OCI runtime create failed: container_linux.go:346: starting container process caused \"process_linux.go:315: copying bootstrap data to pipe caused \\"write init-p: broken pipe\\"\": unknown"
Dec 26 08:57:11 minikube dockerd[1945]: time="2019-12-26T08:57:11.179043769Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/54f1b0631177413eb309442b4f31f89aa27aa451097dfc978bbca5f4904950e4/shim.sock" debug=false pid=14411
Dec 26 08:57:11 minikube dockerd[1945]: time="2019-12-26T08:57:11.204889618Z" level=info msg="shim reaped" id=54f1b0631177413eb309442b4f31f89aa27aa451097dfc978bbca5f4904950e4
Dec 26 08:57:11 minikube dockerd[1945]: time="2019-12-26T08:57:11.216752817Z" level=error msg="stream copy error: reading from a closed fifo"
Dec 26 08:57:11 minikube dockerd[1945]: time="2019-12-26T08:57:11.216868453Z" level=error msg="stream copy error: reading from a closed fifo"
Dec 26 08:57:11 minikube dockerd[1945]: time="2019-12-26T08:57:11.257988432Z" level=error msg="54f1b0631177413eb309442b4f31f89aa27aa451097dfc978bbca5f4904950e4 cleanup: failed to delete container from containerd: no such container"
Dec 26 08:57:11 minikube dockerd[1945]: time="2019-12-26T08:57:11.258144680Z" level=error msg="Handler for POST /containers/54f1b0631177413eb309442b4f31f89aa27aa451097dfc978bbca5f4904950e4/start returned error: OCI runtime create failed: container_linux.go:346: starting container process caused \"process_linux.go:315: copying bootstrap data to pipe caused \\"write init-p: broken pipe\\"\": unknown"
Dec 26 08:57:12 minikube dockerd[1945]: time="2019-12-26T08:57:12.265605218Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3304b96a487cf5988ec7d0989c3eb00ed85472f856b09d4ae3988b45bd93e6a5/shim.sock" debug=false pid=14474
Dec 26 08:57:12 minikube dockerd[1945]: time="2019-12-26T08:57:12.284981229Z" level=info msg="shim reaped" id=3304b96a487cf5988ec7d0989c3eb00ed85472f856b09d4ae3988b45bd93e6a5
Dec 26 08:57:12 minikube dockerd[1945]: time="2019-12-26T08:57:12.295977907Z" level=error msg="stream copy error: reading from a closed fifo"
Dec 26 08:57:12 minikube dockerd[1945]: time="2019-12-26T08:57:12.296145712Z" level=error msg="stream copy error: reading from a closed fifo"
Dec 26 08:57:12 minikube dockerd[1945]: time="2019-12-26T08:57:12.336186035Z" level=error msg="3304b96a487cf5988ec7d0989c3eb00ed85472f856b09d4ae3988b45bd93e6a5 cleanup: failed to delete container from containerd: no such container"
Dec 26 08:57:12 minikube dockerd[1945]: time="2019-12-26T08:57:12.336397855Z" level=error msg="Handler for POST /containers/3304b96a487cf5988ec7d0989c3eb00ed85472f856b09d4ae3988b45bd93e6a5/start returned error: OCI runtime create failed: container_linux.go:346: starting container process caused \"process_linux.go:315: copying bootstrap data to pipe caused \\"write init-p: broken pipe\\"\": unknown"
Dec 26 08:57:13 minikube dockerd[1945]: time="2019-12-26T08:57:13.315389704Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7d1a1b460a0898c9bbcf02349bd5484c047e35d43b162957845875b22dd808e6/shim.sock" debug=false pid=14506
Dec 26 08:57:13 minikube dockerd[1945]: time="2019-12-26T08:57:13.332538772Z" level=info msg="shim reaped" id=7d1a1b460a0898c9bbcf02349bd5484c047e35d43b162957845875b22dd808e6
Dec 26 08:57:13 minikube dockerd[1945]: time="2019-12-26T08:57:13.343293711Z" level=error msg="stream copy error: reading from a closed fifo"
Dec 26 08:57:13 minikube dockerd[1945]: time="2019-12-26T08:57:13.346162504Z" level=error msg="stream copy error: reading from a closed fifo"
Dec 26 08:57:13 minikube dockerd[1945]: time="2019-12-26T08:57:13.383996233Z" level=error msg="7d1a1b460a0898c9bbcf02349bd5484c047e35d43b162957845875b22dd808e6 cleanup: failed to delete container from containerd: no such container"
Dec 26 08:57:13 minikube dockerd[1945]: time="2019-12-26T08:57:13.384202044Z" level=error msg="Handler for POST /containers/7d1a1b460a0898c9bbcf02349bd5484c047e35d43b162957845875b22dd808e6/start returned error: OCI runtime create failed: container_linux.go:346: starting container process caused \"process_linux.go:315: copying bootstrap data to pipe caused \\"write init-p: broken pipe\\"\": unknown"
Dec 26 08:57:14 minikube dockerd[1945]: time="2019-12-26T08:57:14.405672837Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fabba139a55ef84b1b762230241c432780e5b09db1576dc053aeb2446fe03b0b/shim.sock" debug=false pid=14539
Dec 26 08:57:14 minikube dockerd[1945]: time="2019-12-26T08:57:14.429226204Z" level=info msg="shim reaped" id=fabba139a55ef84b1b762230241c432780e5b09db1576dc053aeb2446fe03b0b
Dec 26 08:57:14 minikube dockerd[1945]: time="2019-12-26T08:57:14.440912411Z" level=error msg="stream copy error: reading from a closed fifo"
Dec 26 08:57:14 minikube dockerd[1945]: time="2019-12-26T08:57:14.440998263Z" level=error msg="stream copy error: reading from a closed fifo"
Dec 26 08:57:14 minikube dockerd[1945]: time="2019-12-26T08:57:14.482643789Z" level=error msg="fabba139a55ef84b1b762230241c432780e5b09db1576dc053aeb2446fe03b0b cleanup: failed to delete container from containerd: no such container"
Dec 26 08:57:14 minikube dockerd[1945]: time="2019-12-26T08:57:14.482760554Z" level=error msg="Handler for POST /containers/fabba139a55ef84b1b762230241c432780e5b09db1576dc053aeb2446fe03b0b/start returned error: OCI runtime create failed: container_linux.go:346: starting container process caused \"process_linux.go:315: copying bootstrap data to pipe caused \\"write init-p: broken pipe\\"\": unknown"

==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
536d24f37affa eb51a35975256 2 minutes ago Running kubernetes-dashboard 2 3fa3d0f6b93c7
82caa797fc686 4689081edb103 2 minutes ago Running storage-provisioner 2 e31e5e39a0145
9924e593d9953 174e0e8ef23df 2 minutes ago Running weave 2 98f281fd8eb07
fa84c68bbc70f 29024c9c6e706 2 minutes ago Running nginx-ingress-controller 2 8144c19e7c516
2cf27e6017546 k8s.gcr.io/metrics-server-amd64@sha256:49a9f12f7067d11f42c803dbe61ed2c1299959ad85cb315b25ff7eef8e6b8892 3 minutes ago Running metrics-server 1 57389855a1c99
692a56c56d8da weaveworks/weave-npc@sha256:985de9ff201677a85ce78703c515466fe45c9c73da6ee21821e89d902c21daf8 3 minutes ago Running weave-npc 0 98f281fd8eb07
42a90984b0243 70f311871ae12 3 minutes ago Running coredns 1 530dfe5ef48ec
ccde998a2a376 70f311871ae12 3 minutes ago Running coredns 1 3b5bb286f008b
5fe3e5177d826 7d54289267dc5 3 minutes ago Running kube-proxy 1 a117c90e01c0c
d48fd50759309 3b08661dc379d 3 minutes ago Running dashboard-metrics-scraper 1 c145a393a6cbe
2ec7feccc26a2 29024c9c6e706 3 minutes ago Exited nginx-ingress-controller 1 8144c19e7c516
9c5670570779c eb51a35975256 3 minutes ago Exited kubernetes-dashboard 1 3fa3d0f6b93c7
b4864a2ad94c3 174e0e8ef23df 3 minutes ago Exited weave 1 98f281fd8eb07
2a4fe2550fcab 4689081edb103 3 minutes ago Exited storage-provisioner 1 e31e5e39a0145
58d15d0aea33c 5eb3b74868724 3 minutes ago Running kube-controller-manager 1 234c0d7ac4a72
630397b74ce3d 0cae8d5cc64c7 3 minutes ago Running kube-apiserver 1 a27d39e58c193
e07815a452edd 303ce5db0e90d 3 minutes ago Running etcd 1 efcd316283315
872a91c7106c3 bd12a212f9dcb 3 minutes ago Running kube-addon-manager 1 c07b7318e35e0
9b1f3a159ebbe 78c190f736b11 3 minutes ago Running kube-scheduler 1 8434b4ef2fe21
55af93e1bfbdf k8s.gcr.io/metrics-server-amd64@sha256:49a9f12f7067d11f42c803dbe61ed2c1299959ad85cb315b25ff7eef8e6b8892 9 minutes ago Exited metrics-server 0 c0a121fbf8566
1e1d3697252cd 3b08661dc379d 9 minutes ago Exited dashboard-metrics-scraper 0 3f9d23b994bb1
ff24c3c52cd88 70f311871ae12 9 minutes ago Exited coredns 0 5de37181f5c54
0e2f51b615879 70f311871ae12 9 minutes ago Exited coredns 0 0bf9524c2f27f
d9fb8cfdb3a7f 7d54289267dc5 9 minutes ago Exited kube-proxy 0 8d02a1d6ec15c
ca861991abbf6 78c190f736b11 10 minutes ago Exited kube-scheduler 0 08c9a388186c4
f08ec117569dc 303ce5db0e90d 10 minutes ago Exited etcd 0 004cf1f823e48
7ac613b50aeaa bd12a212f9dcb 10 minutes ago Exited kube-addon-manager 0 82fa10b35cd05
4494645a1bb78 5eb3b74868724 10 minutes ago Exited kube-controller-manager 0 e2ac8ba33bc19
c99127cf08fd9 0cae8d5cc64c7 10 minutes ago Exited kube-apiserver 0 169b9b999bf25

==> coredns ["0e2f51b61587"] <== .:53 [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7 CoreDNS-1.6.5 linux/amd64, go1.13.4, c2fd1b2 [INFO] SIGTERM: Shutting down servers then terminating [INFO] plugin/health: Going into lameduck mode for 5s

==> coredns ["42a90984b024"] <== E1226 08:54:19.152043 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1226 08:54:19.152669 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1226 08:54:19.154608 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout [INFO] plugin/ready: Still waiting on: "kubernetes" .:53 [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7 CoreDNS-1.6.5 linux/amd64, go1.13.4, c2fd1b2 [INFO] plugin/ready: Still waiting on: "kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes" I1226 08:54:19.152006 1 trace.go:82] Trace[992224277]: "Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch" (started: 2019-12-26 08:53:49.148767088 +0000 UTC m=+0.109692896) (total time: 30.003161564s): Trace[992224277]: [30.003161564s] [30.003161564s] END E1226 08:54:19.152043 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1226 08:54:19.152043 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1226 08:54:19.152043 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout I1226 08:54:19.152377 1 trace.go:82] Trace[1127456286]: "Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch" (started: 2019-12-26 08:53:49.148667961 +0000 UTC m=+0.109593761) (total time: 30.003700457s): Trace[1127456286]: [30.003700457s] [30.003700457s] END E1226 08:54:19.152669 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1226 08:54:19.152669 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1226 08:54:19.152669 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout I1226 08:54:19.154581 1 trace.go:82] Trace[1872913834]: "Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch" (started: 2019-12-26 
08:53:49.154302468 +0000 UTC m=+0.115228268) (total time: 30.000269157s): Trace[1872913834]: [30.000269157s] [30.000269157s] END E1226 08:54:19.154608 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1226 08:54:19.154608 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1226 08:54:19.154608 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout

==> coredns ["ccde998a2a37"] <== E1226 08:54:19.152480 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1226 08:54:19.152966 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1226 08:54:19.154997 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&.resourceVer:53 [INFO] plugin/reload: Running configuration MD5 = 4e235fcc36969s6e76816bio9n034ebc7 0: dialCocre DNS.-9.6..5 .1li4nux/iamod 64i,megoo1t.13.4, c2fd1b2 [INFO] plugin/ready: Still waiting on: "kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes" I1226 08:54:19.152352 1 trace.go:82] Trace[1848971729]: "Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch" (started: 2019-12-26 08:53:49.148797894 +0000 UTC m=+0.099988131) (total time: 30.00350676s): Trace[1848971729]: [30.00350676s] [30.00350676s] END E1226 08:54:19.152480 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1226 08:54:19.152480 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1226 08:54:19.152480 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout I1226 08:54:19.152691 1 trace.go:82] Trace[1677074086]: "Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch" (started: 2019-12-26 08:53:49.148660266 +0000 UTC m=+0.099850527) (total time: 30.004019454s): Trace[1677074086]: [30.004019454s] [30.004019454s] END E1226 08:54:19.152966 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1226 08:54:19.152966 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1226 08:54:19.152966 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout I1226 08:54:19.154848 1 trace.go:82] Trace[476432371]: "Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch" (started: 2019-12-26 08:53:49.154320512 +0000 UTC m=+0.105510752) (total time: 30.000519367s): 
Trace[476432371]: [30.000519367s] [30.000519367s] END E1226 08:54:19.154997 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1226 08:54:19.154997 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1226 08:54:19.154997 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout [INFO] plugin/ready: Still waiting on: "kubernetes"

==> coredns ["ff24c3c52cd8"] <== E1226 08:52:37.232647 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?resourceVersion=628&timeout=5m10s&timeoutSeconds=310&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E1226 08:52:37.232853 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=421&timeout=5m22s&timeoutSeconds=322&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E1226 08:52:37.233039 1 reflector.go:283] pk./mod/k8s.io/client-go@3v0.0.0-20190620085101-78d2af79l2bgin/rtoloolasd/cachnen/ieflge coofrg.urati8: FD5 = 4e235fctc3h v1.E96966iets8: bcd9034ebc7 /10CoreD0.-1:.6.5 3/aplivnuex/dapmoints geso.1c3e.e4r,i cn2fd3134&tib2o ut=8mE11s&timeout2e:37.23=507 w a c h=t rueredflectoral tcpg o1:0283.]6.0 p:k4g3/: connes.ioc/clnieenit-gor@vf0us0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?resourceVersion=628&timeout=5m10s&timeoutSeconds=310&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E1226 08:52:37.232647 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?resourceVersion=628&timeout=5m10s&timeoutSeconds=310&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E1226 08:52:37.232647 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?resourceVersion=628&timeout=5m10s&timeoutSeconds=310&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E1226 08:52:37.232853 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=421&timeout=5m22s&timeoutSeconds=322&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E1226 08:52:37.232853 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=421&timeout=5m22s&timeoutSeconds=322&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E1226 08:52:37.232853 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=421&timeout=5m22s&timeoutSeconds=322&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E1226 08:52:37.233039 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=1334&timeout=8m21s&timeoutSeconds=501&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E1226 08:52:37.233039 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=1334&timeout=8m21s&timeoutSeconds=501&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E1226 
08:52:37.233039 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=1334&timeout=8m21s&timeoutSeconds=501&watch=true: dial tcp 10.96.0.1:443: connect: connection refused [INFO] SIGTERM: Shutting down servers then terminating [INFO] plugin/health: Going into lameduck mode for 5s

==> dmesg <==
[ +0.000003] CPU: 1 PID: 14520 Comm: exe Tainted: G O 4.19.81 #1
[ +0.000001] Hardware name: BHYVE, BIOS 1.00 03/14/2014
[ +0.000000] Call Trace:
[ +0.000006] dump_stack+0x5c/0x7b
[ +0.000002] dump_header+0x66/0x28e
[ +0.000001] oom_kill_process+0x251/0x270
[ +0.000001] out_of_memory+0x10b/0x4b0
[ +0.000002] mem_cgroup_out_of_memory+0xb0/0xd0
[ +0.000002] try_charge+0x70e/0x750
[ +0.000001] ? __alloc_pages_nodemask+0x11f/0x2a0
[ +0.000001] mem_cgroup_try_charge+0x4c/0x150
[ +0.000001] mem_cgroup_try_charge_delay+0x17/0x40
[ +0.000002] __handle_mm_fault+0x345/0xc30
[ +0.000001] ? __switch_to_asm+0x35/0x70
[ +0.000001] ? __switch_to_asm+0x41/0x70
[ +0.000002] handle_mm_fault+0xd7/0x230
[ +0.000001] __do_page_fault+0x23e/0x4c0
[ +0.000002] ? page_fault+0x8/0x30
[ +0.000000] page_fault+0x1e/0x30
[ +0.000002] RIP: 0033:0x7fbcc536f08e
[ +0.000001] Code: ff ff 66 2e 0f 1f 84 00 00 00 00 00 48 39 da 72 22 e9 76 f6 ff ff 66 0f 1f 44 00 00 48 8b 50 10 48 83 c0 18 4c 01 fa 48 39 c3 <48> 89 11 0f 86 59 f6 ff ff 8b 50 08 48 8b 08 4c 01 f9 48 83 fa 26
[ +0.000000] RSP: 002b:00007fff40502fa0 EFLAGS: 00010202
[ +0.000001] RAX: 000056124eaa8c10 RBX: 000056124eb28b48 RCX: 000056124f2b4168
[ +0.000000] RDX: 000056124ec36e10 RSI: 0000000000000000 RDI: 00007fbcc558a9f0
[ +0.000001] RBP: 00007fff405030a0 R08: 0000000000000000 R09: 0000000000000000
[ +0.000000] R10: 00007fbcc558b190 R11: 0000000000000206 R12: 0000000000000001
[ +0.000001] R13: 00007fbcc558b190 R14: 00007fbcc558b190 R15: 000056124e966000
[ +0.000076] Memory cgroup out of memory: Kill process 14520 (exe) score 1031000 or sacrifice child
[ +0.000001] Killed process 14520 (exe) total-vm:17840kB, anon-rss:1772kB, file-rss:4kB, shmem-rss:2280kB
[ +1.096305] exe invoked oom-killer: gfp_mask=0x7080c0(GFP_KERNEL_ACCOUNT|__GFP_ZERO), nodemask=(null), order=0, oom_score_adj=-999
[ +0.000003] CPU: 1 PID: 14555 Comm: exe Tainted: G O 4.19.81 #1
[ +0.000001] Hardware name: BHYVE, BIOS 1.00 03/14/2014
[ +0.000000] Call Trace:
[ +0.000005] dump_stack+0x5c/0x7b
[ +0.000003] dump_header+0x66/0x28e
[ +0.000002] oom_kill_process+0x251/0x270
[ +0.000001] out_of_memory+0x10b/0x4b0
[ +0.000002] mem_cgroup_out_of_memory+0xb0/0xd0
[ +0.000001] try_charge+0x70e/0x750
[ +0.000001] memcg_kmem_charge_memcg+0x3a/0xd0
[ +0.000002] ? __switch_to_asm+0x41/0x70
[ +0.000001] ? __switch_to_asm+0x35/0x70
[ +0.000000] memcg_kmem_charge+0x73/0x130
[ +0.000002] __alloc_pages_nodemask+0x1fe/0x2a0
[ +0.000001] pte_alloc_one+0xe/0x40
[ +0.000002] __handle_mm_fault+0xbdb/0xc30
[ +0.000002] handle_mm_fault+0xd7/0x230
[ +0.000001] __do_page_fault+0x23e/0x4c0
[ +0.000002] ? page_fault+0x8/0x30
[ +0.000000] page_fault+0x1e/0x30
[ +0.000002] RIP: 0033:0x7fe2b500d050
[ +0.000003] Code: Bad RIP value.
[ +0.000000] RSP: 002b:00007ffea7778150 EFLAGS: 00010202
[ +0.000001] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
[ +0.000000] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
[ +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
[ +0.000000] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
[ +0.000000] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[ +0.000131] Memory cgroup out of memory: Kill process 14555 (exe) score 9000 or sacrifice child
[ +0.000002] Killed process 14555 (exe) total-vm:9516kB, anon-rss:4kB, file-rss:0kB, shmem-rss:0kB
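A note not in the original report: the dmesg excerpt above shows the memory cgroup OOM killer repeatedly terminating the container's startup process ("Memory cgroup out of memory: Kill process ... (exe)"), which is the textbook cause of the "write init-p: broken pipe" OCI runtime error in the Docker log. Assuming the configured limit was simply too small, two common remedies are sketched below:

```shell
# 1. Raise the pod's memory limit so the container process can start at all;
#    the value is an assumption, tune it to the workload:
#      resources:
#        limits:
#          memory: "128Mi"

# 2. If the minikube VM itself is short on memory, recreate it with more:
minikube delete
minikube start --memory=4096
```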

==> kernel <==
08:57:15 up 4 min, 0 users, load average: 1.32, 0.92, 0.39
Linux minikube 4.19.81 #1 SMP Tue Dec 10 16:09:50 PST 2019 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2019.02.7"

==> kube-addon-manager ["7ac613b50aea"] <== service/metrics-server unchanged serviceaccount/storage-provisioner unchanged INFO: == Kubernetes addon reconcile completed at 2019-12-26T08:52:28+00:00 == INFO: Leader election disabled. INFO: == Kubernetes addon ensure completed at 2019-12-26T08:52:30+00:00 == INFO: == Reconciling with deprecated label == error: no objects passed to apply error: no objects passed to apply error: error pruning namespaced object /v1, Kind=ConfigMap: Get https://localhost:8443/api/v1/namespaces/kube-system/configmaps?labelSelector=kubernetes.io%2Fcluster-service%21%3Dtrue%2Caddonmanager.kubernetes.io%2Fmode%3DReconcile: dial tcp 127.0.0.1:8443: connect: connection refused INFO: == Reconciling with addon-manager label == clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged configmap/kubernetes-dashboard-settings unchanged deployment.apps/dashboard-metrics-scraper unchanged deployment.apps/kubernetes-dashboard unchanged namespace/kubernetes-dashboard unchanged role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged serviceaccount/kubernetes-dashboard unchanged secret/kubernetes-dashboard-certs unchanged secret/kubernetes-dashboard-csrf unchanged secret/kubernetes-dashboard-key-holder unchanged service/kubernetes-dashboard unchanged service/dashboard-metrics-scraper unchanged deployment.apps/nginx-ingress-controller unchanged serviceaccount/nginx-ingress unchanged clusterrole.rbac.authorization.k8s.io/system:nginx-ingress unchanged role.rbac.authorization.k8s.io/system::nginx-ingress-role unchanged apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io unchanged deployment.apps/metrics-server unchanged service/metrics-server unchanged serviceaccount/storage-provisioner unchanged INFO: == Kubernetes addon reconcile completed at 2019-12-26T08:52:34+00:00 == INFO: Leader election disabled. INFO: == Kubernetes addon ensure completed at 2019-12-26T08:52:35+00:00 == INFO: == Reconciling with deprecated label == INFO: == Reconciling with addon-manager label == clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged configmap/kubernetes-dashboard-settings unchanged deployment.apps/dashboard-metrics-scraper unchanged deployment.apps/kubernetes-dashboard unchanged namespace/kubernetes-dashboard unchanged role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged serviceaccount/kubernetes-dashboard unchanged secret/kubernetes-dashboard-certs unchanged secret/kubernetes-dashboard-csrf unchanged secret/kubernetes-dashboard-key-holder unchanged service/kubernetes-dashboard unchanged service/dashboard-metrics-scraper unchanged deployment.apps/nginx-ingress-controller unchanged serviceaccount/nginx-ingress unchanged clusterrole.rbac.authorization.k8s.io/system:nginx-ingress unchanged role.rbac.authorization.k8s.io/system::nginx-ingress-role unchanged apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io unchanged deployment.apps/metrics-server unchanged service/metrics-server unchanged serviceaccount/storage-provisioner unchanged INFO: == Kubernetes addon reconcile completed at 2019-12-26T08:52:37+00:00 ==

==> kube-addon-manager ["872a91c7106c"] <== error: no objects passed to apply INFO: == Kubernetes addon ensure completed at 2019-12-26T08:57:01+00:00 == INFO: == Reconciling with deprecated label == INFO: == Reconciling with addon-manager label == clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged configmap/kubernetes-dashboard-settings unchanged deployment.apps/dashboard-metrics-scraper unchanged deployment.apps/kubernetes-dashboard unchanged namespace/kubernetes-dashboard unchanged role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged serviceaccount/kubernetes-dashboard unchanged secret/kubernetes-dashboard-certs unchanged secret/kubernetes-dashboard-csrf unchanged secret/kubernetes-dashboard-key-holdee unchranrged no oebrjveicte/ kubesrnde ttes-dpashboard unchanged lsyr vice/dashboard-metrics-scra:per uonbcjhatng eda ssdepdl oym eant.lap ps/nginx-ingress-controller unchanged serviceaccount/nginx-ingress unchanged clusterrole.rbac.authorization.k8s.io/system:nginx-ingress unchanged role.rbac.authorization.k8s.io/system::nginx-ingress-role unchanged apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io unchanged deployment.apps/metrics-server unchanged service/metrics-server unchanged serviceaccount/storage-provisioner unchanged INFO: == Kubernetes addon reconcile completed at 2019-12-26T08:57:06+00:00 == INFO: Leader election disabled. INFO: == Kubernetes addon ensure completed at 2019-12-26T08:57:06+00:00 == INFO: == Reconciling with deprecated label == INFO: == Reconciling with addon-manager label == clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged configmap/kubernetes-dashboard-settings unchanged deployment.apps/dashboard-metrics-scraper unchanged deployment.apps/kubernetes-dashboard unchanged namespace/kubernetes-dashboard unchanged role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged serviceaccount/kubernetes-dashboard unchanged secret/kubernetes-dashboard-certs unchanged secret/kubernetes-dashboard-csrf unchanged secret/kubernetes-dashboard-key-holder unchanged service/kubernetes-dashboard unchanged service/dashboard-metrics-scraper unchanged deployment.apps/nginx-ingress-controller unchanged serviceaccount/nginx-ingress unchanged clusterrole.rbac.authorization.k8s.io/system:nginx-ingress unchanged role.rbac.authorization.k8s.io/system::nginx-ingress-role unchanged apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io unchanged deployment.apps/metrics-server unchanged service/metrics-server unchanged serviceaccount/storage-provisioner unchanged INFO: == Kubernetes addon reconcile completed at 2019-12-26T08:57:10+00:00 == INFO: Leader election disabled. INFO: == Kubernetes addon ensure completed at 2019-12-26T08:57:11+00:00 == INFO: == Reconciling with deprecated label == INFO: == Reconciling with addon-manager label ==

==> kube-apiserver ["630397b74ce3"] <== I1226 08:53:45.358825 1 tlsconfig.go:219] Starting DynamicServingCertificateController I1226 08:53:45.359153 1 apiservice_controller.go:94] Starting APIServiceRegistrationController I1226 08:53:45.359366 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller I1226 08:53:45.362674 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller I1226 08:53:45.362710 1 shared_informer.go:197] Waiting for caches to sync for cluster_authentication_trust_controller I1226 08:53:45.362986 1 crd_finalizer.go:263] Starting CRDFinalizer I1226 08:53:45.364886 1 autoregister_controller.go:140] Starting autoregister controller I1226 08:53:45.364940 1 cache.go:32] Waiting for caches to sync for autoregister controller I1226 08:53:45.365310 1 controller.go:85] Starting OpenAPI controller I1226 08:53:45.365411 1 customresource_discovery_controller.go:208] Starting DiscoveryController I1226 08:53:45.365528 1 naming_controller.go:288] Starting NamingConditionController I1226 08:53:45.365633 1 establishing_controller.go:73] Starting EstablishingController I1226 08:53:45.365759 1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController I1226 08:53:45.365816 1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController I1226 08:53:45.366340 1 available_controller.go:386] Starting AvailableConditionController I1226 08:53:45.366364 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller I1226 08:53:45.366385 1 controller.go:81] Starting OpenAPI AggregationController I1226 08:53:45.366429 1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt I1226 08:53:45.366441 1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt I1226 08:53:45.377488 1 crdregistration_controller.go:111] Starting crd-autoregister controller I1226 08:53:45.377497 1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister E1226 08:53:45.377626 1 controller.go:151] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.64.7, ResourceVersion: 0, AdditionalErrorMsg: I1226 08:53:45.462918 1 shared_informer.go:204] Caches are synced for cluster_authentication_trust_controller I1226 08:53:45.470653 1 cache.go:39] Caches are synced for AvailableConditionController controller I1226 08:53:45.472864 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller I1226 08:53:45.479978 1 shared_informer.go:204] Caches are synced for crd-autoregister I1226 08:53:45.493181 1 cache.go:39] Caches are synced for autoregister controller I1226 08:53:45.546786 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io I1226 08:53:46.359853 1 controller.go:107] OpenAPI AggregationController: Processing item I1226 08:53:46.359892 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue). I1226 08:53:46.359900 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). I1226 08:53:46.561611 1 storage_scheduling.go:142] all system priority classes are created successfully or already exist. 
I1226 08:53:47.353951 1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io I1226 08:53:48.300091 1 controller.go:606] quota admission added evaluator for: deployments.apps I1226 08:53:48.326223 1 controller.go:606] quota admission added evaluator for: serviceaccounts I1226 08:53:48.410406 1 controller.go:606] quota admission added evaluator for: daemonsets.apps I1226 08:53:48.423886 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io I1226 08:53:48.433307 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io E1226 08:53:50.475922 1 available_controller.go:419] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.181.239:443/apis/metrics.k8s.io/v1beta1: Get https://10.96.181.239:443/apis/metrics.k8s.io/v1beta1: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) E1226 08:53:50.479100 1 available_controller.go:419] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.181.239:443/apis/metrics.k8s.io/v1beta1: Get https://10.96.181.239:443/apis/metrics.k8s.io/v1beta1: dial tcp 10.96.181.239:443: connect: connection refused E1226 08:53:50.479616 1 available_controller.go:419] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.181.239:443/apis/metrics.k8s.io/v1beta1: Get https://10.96.181.239:443/apis/metrics.k8s.io/v1beta1: dial tcp 10.96.181.239:443: connect: connection refused E1226 08:53:50.481582 1 available_controller.go:419] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.181.239:443/apis/metrics.k8s.io/v1beta1: Get https://10.96.181.239:443/apis/metrics.k8s.io/v1beta1: dial tcp 10.96.181.239:443: connect: connection refused E1226 08:53:50.522571 1 available_controller.go:419] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.181.239:443/apis/metrics.k8s.io/v1beta1: Get https://10.96.181.239:443/apis/metrics.k8s.io/v1beta1: dial tcp 10.96.181.239:443: connect: connection refused E1226 08:53:50.603359 1 available_controller.go:419] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.181.239:443/apis/metrics.k8s.io/v1beta1: Get https://10.96.181.239:443/apis/metrics.k8s.io/v1beta1: dial tcp 10.96.181.239:443: connect: connection refused E1226 08:53:50.764638 1 available_controller.go:419] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.181.239:443/apis/metrics.k8s.io/v1beta1: Get https://10.96.181.239:443/apis/metrics.k8s.io/v1beta1: dial tcp 10.96.181.239:443: connect: connection refused E1226 08:53:51.085504 1 available_controller.go:419] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.181.239:443/apis/metrics.k8s.io/v1beta1: Get https://10.96.181.239:443/apis/metrics.k8s.io/v1beta1: dial tcp 10.96.181.239:443: connect: connection refused E1226 08:53:51.726327 1 available_controller.go:419] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.181.239:443/apis/metrics.k8s.io/v1beta1: Get https://10.96.181.239:443/apis/metrics.k8s.io/v1beta1: dial tcp 10.96.181.239:443: connect: connection refused E1226 08:53:53.007348 1 available_controller.go:419] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.181.239:443/apis/metrics.k8s.io/v1beta1: Get https://10.96.181.239:443/apis/metrics.k8s.io/v1beta1: dial tcp 
10.96.181.239:443: connect: connection refused E1226 08:53:55.570886 1 available_controller.go:419] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.181.239:443/apis/metrics.k8s.io/v1beta1: Get https://10.96.181.239:443/apis/metrics.k8s.io/v1beta1: dial tcp 10.96.181.239:443: connect: connection refused E1226 08:54:00.692210 1 available_controller.go:419] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.181.239:443/apis/metrics.k8s.io/v1beta1: Get https://10.96.181.239:443/apis/metrics.k8s.io/v1beta1: dial tcp 10.96.181.239:443: connect: connection refused I1226 08:54:01.484745 1 controller.go:606] quota admission added evaluator for: endpoints E1226 08:54:10.425814 1 available_controller.go:419] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.181.239:443/apis/metrics.k8s.io/v1beta1: Get https://10.96.181.239:443/apis/metrics.k8s.io/v1beta1: dial tcp 10.96.181.239:443: connect: connection refused E1226 08:54:12.028710 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist I1226 08:54:12.028927 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue. I1226 08:55:12.029846 1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io E1226 08:55:12.034838 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist I1226 08:55:12.034899 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue. I1226 08:57:12.035237 1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io E1226 08:57:12.037875 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist I1226 08:57:12.037909 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.

==> kube-apiserver ["c99127cf08fd"] <== I1226 08:47:14.130292 1 cache.go:39] Caches are synced for AvailableConditionController controller I1226 08:47:14.131135 1 shared_informer.go:204] Caches are synced for cluster_authentication_trust_controller I1226 08:47:14.133368 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller I1226 08:47:14.137871 1 cache.go:39] Caches are synced for autoregister controller I1226 08:47:14.189182 1 shared_informer.go:204] Caches are synced for crd-autoregister I1226 08:47:15.029023 1 controller.go:107] OpenAPI AggregationController: Processing item I1226 08:47:15.029179 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue). I1226 08:47:15.029198 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). I1226 08:47:15.037473 1 storage_scheduling.go:133] created PriorityClass system-node-critical with value 2000001000 I1226 08:47:15.042573 1 storage_scheduling.go:133] created PriorityClass system-cluster-critical with value 2000000000 I1226 08:47:15.042650 1 storage_scheduling.go:142] all system priority classes are created successfully or already exist. I1226 08:47:15.368555 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io I1226 08:47:15.404808 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io W1226 08:47:15.458788 1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.64.7] I1226 08:47:15.459539 1 controller.go:606] quota admission added evaluator for: endpoints I1226 08:47:16.232438 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io I1226 08:47:16.935654 1 controller.go:606] quota admission added evaluator for: serviceaccounts I1226 08:47:16.948572 1 controller.go:606] quota admission added evaluator for: deployments.apps I1226 08:47:17.172453 1 controller.go:606] quota admission added evaluator for: daemonsets.apps I1226 08:47:24.462976 1 controller.go:606] quota admission added evaluator for: replicasets.apps I1226 08:47:24.487997 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps I1226 08:47:26.829119 1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io W1226 08:47:26.829316 1 handler_proxy.go:97] no RequestInfo found in the context E1226 08:47:26.829763 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]] I1226 08:47:26.829815 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue. I1226 08:47:37.026068 1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io E1226 08:47:37.037393 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist I1226 08:47:37.037423 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue. 
I1226 08:48:37.037711 1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io E1226 08:48:37.040491 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist I1226 08:48:37.040530 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue. I1226 08:50:37.040830 1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io E1226 08:50:37.043378 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist I1226 08:50:37.043447 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue. I1226 08:52:15.118184 1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io E1226 08:52:15.120781 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist I1226 08:52:15.120793 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue. I1226 08:52:37.226241 1 controller.go:180] Shutting down kubernetes service endpoint reconciler I1226 08:52:37.226581 1 crdregistration_controller.go:142] Shutting down crd-autoregister controller I1226 08:52:37.226640 1 controller.go:122] Shutting down OpenAPI controller I1226 08:52:37.226670 1 autoregister_controller.go:164] Shutting down autoregister controller I1226 08:52:37.226690 1 apiservice_controller.go:106] Shutting down APIServiceRegistrationController I1226 08:52:37.226719 1 nonstructuralschema_controller.go:203] Shutting down NonStructuralSchemaConditionController I1226 08:52:37.226742 1 establishing_controller.go:84] Shutting down EstablishingController I1226 08:52:37.226793 1 naming_controller.go:299] Shutting down NamingConditionController I1226 08:52:37.226830 1 customresource_discovery_controller.go:219] Shutting down DiscoveryController I1226 08:52:37.226858 1 apiapproval_controller.go:197] Shutting down KubernetesAPIApprovalPolicyConformantConditionController I1226 08:52:37.226904 1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller I1226 08:52:37.226943 1 crd_finalizer.go:275] Shutting down CRDFinalizer I1226 08:52:37.226954 1 available_controller.go:398] Shutting down AvailableConditionController I1226 08:52:37.227321 1 dynamic_cafile_content.go:181] Shutting down request-header::/var/lib/minikube/certs/front-proxy-ca.crt I1226 08:52:37.227359 1 dynamic_cafile_content.go:181] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt I1226 08:52:37.227498 1 dynamic_cafile_content.go:181] Shutting down request-header::/var/lib/minikube/certs/front-proxy-ca.crt I1226 08:52:37.227538 1 tlsconfig.go:234] Shutting down DynamicServingCertificateController I1226 08:52:37.227854 1 dynamic_serving_content.go:144] Shutting down serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key I1226 08:52:37.227888 1 dynamic_cafile_content.go:181] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt I1226 08:52:37.234118 1 controller.go:87] Shutting down OpenAPI AggregationController I1226 08:52:37.234546 1 secure_serving.go:222] Stopped listening on [::]:8443 E1226 08:52:37.246452 1 controller.go:183] Get https://[::1]:8443/api/v1/namespaces/default/endpoints/kubernetes: dial tcp [::1]:8443: connect: connection refused

==> kube-controller-manager ["4494645a1bb7"] <== I1226 08:47:25.143572 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"nginx-ingress-controller-6fc5bcc8c9", UID:"71ac4dbf-d1fe-41b9-917f-13077dff475b", APIVersion:"apps/v1", ResourceVersion:"392", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "nginx-ingress-controller-6fc5bcc8c9-" is forbidden: error looking up service account kube-system/nginx-ingress: serviceaccount "nginx-ingress" not found E1226 08:47:25.146784 1 replica_set.go:534] sync "kube-system/nginx-ingress-controller-6fc5bcc8c9" failed with pods "nginx-ingress-controller-6fc5bcc8c9-" is forbidden: error looking up service account kube-system/nginx-ingress: serviceaccount "nginx-ingress" not found E1226 08:47:25.150711 1 replica_set.go:534] sync "kube-system/nginx-ingress-controller-6fc5bcc8c9" failed with pods "nginx-ingress-controller-6fc5bcc8c9-" is forbidden: error looking up service account kube-system/nginx-ingress: serviceaccount "nginx-ingress" not found I1226 08:47:25.150784 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"nginx-ingress-controller-6fc5bcc8c9", UID:"71ac4dbf-d1fe-41b9-917f-13077dff475b", APIVersion:"apps/v1", ResourceVersion:"395", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "nginx-ingress-controller-6fc5bcc8c9-" is forbidden: error looking up service account kube-system/nginx-ingress: serviceaccount "nginx-ingress" not found I1226 08:47:25.216988 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"metrics-server", UID:"c3f89f66-2de2-4e5a-9916-2e613590c8e0", APIVersion:"apps/v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set metrics-server-6754dbc9df to 1 I1226 08:47:25.249360 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-6754dbc9df", UID:"15862d57-27f2-45ce-99ac-7f89ae3cd469", APIVersion:"apps/v1", ResourceVersion:"409", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-6754dbc9df-zmbc8 I1226 08:47:26.163459 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"nginx-ingress-controller-6fc5bcc8c9", UID:"71ac4dbf-d1fe-41b9-917f-13077dff475b", APIVersion:"apps/v1", ResourceVersion:"395", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-ingress-controller-6fc5bcc8c9-spv5j I1226 08:47:29.430878 1 node_lifecycle_controller.go:1236] Controller detected that some Nodes are Ready. Exiting master disruption mode. 
I1226 08:47:30.157305 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper", UID:"74d1a16b-64b9-4bc2-9106-1ca412fa8be2", APIVersion:"apps/v1", ResourceVersion:"486", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set dashboard-metrics-scraper-7b64584c5c to 1 I1226 08:47:30.173314 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-7b64584c5c", UID:"932fd5db-529a-4d88-aa2f-adabfa9dc20d", APIVersion:"apps/v1", ResourceVersion:"487", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-7b64584c5c-mzr99 I1226 08:47:30.186824 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"fff232be-ecfe-4e36-b195-4bed6ac7b58c", APIVersion:"apps/v1", ResourceVersion:"490", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-79d9cd965 to 1 I1226 08:47:30.206882 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-79d9cd965", UID:"cffc46a0-16da-4179-bbb4-e0915160f93b", APIVersion:"apps/v1", ResourceVersion:"495", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-79d9cd965-pp7hg I1226 08:52:22.133603 1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"weave-net", UID:"d0db5239-6b80-4223-92e3-5858c61265d8", APIVersion:"apps/v1", ResourceVersion:"1285", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: weave-net-k7ggh E1226 08:52:22.158282 1 daemon_controller.go:290] kube-system/weave-net failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"weave-net", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/weave-net", UID:"d0db5239-6b80-4223-92e3-5858c61265d8", ResourceVersion:"1285", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712947142, loc:(time.Location)(0x6b951c0)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string{"name":"weave-net"}, Annotations:map[string]string{"cloud.weave.works/launcher-info":"{\n \"original-request\": {\n \"url\": \"/k8s/v1.10/net.yaml?k8s-version=Q2xpZW50IFZlcnNpb246IHZlcnNpb24uSW5mb3tNYWpvcjoiMSIsIE1pbm9yOiIxNyIsIEdpdFZlcnNpb246InYxLjE3LjAiLCBHaXRDb21taXQ6IjcwMTMyYjBmMTMwYWNjMGJlZDE5M2Q5YmE1OWRkMTg2ZjBlNjM0Y2YiLCBHaXRUcmVlU3RhdGU6ImNsZWFuIiwgQnVpbGREYXRlOiIyMDE5LTEyLTEzVDExOjUxOjQ0WiIsIEdvVmVyc2lvbjoiZ28xLjEzLjQiLCBDb21waWxlcjoiZ2MiLCBQbGF0Zm9ybToiZGFyd2luL2FtZDY0In0KU2VydmVyIFZlcnNpb246IHZlcnNpb24uSW5mb3tNYWpvcjoiMSIsIE1pbm9yOiIxNyIsIEdpdFZlcnNpb246InYxLjE3LjAiLCBHaXRDb21taXQ6IjcwMTMyYjBmMTMwYWNjMGJlZDE5M2Q5YmE1OWRkMTg2ZjBlNjM0Y2YiLCBHaXRUcmVlU3RhdGU6ImNsZWFuIiwgQnVpbGREYXRlOiIyMDE5LTEyLTA3VDIxOjEyOjE3WiIsIEdvVmVyc2lvbjoiZ28xLjEzLjQiLCBDb21waWxlcjoiZ2MiLCBQbGF0Zm9ybToibGludXgvYW1kNjQifQo=\",\n \"date\": \"Thu Dec 26 2019 08:52:21 GMT+0000 (UTC)\"\n },\n \"email-address\": \"support@weave.works\"\n}", "deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"cloud.weave.works/launcher-info\":\"{\n \\"original-request\\": {\n \\"url\\": 
\\"/k8s/v1.10/net.yaml?k8s-version=Q2xpZW50IFZlcnNpb246IHZlcnNpb24uSW5mb3tNYWpvcjoiMSIsIE1pbm9yOiIxNyIsIEdpdFZlcnNpb246InYxLjE3LjAiLCBHaXRDb21taXQ6IjcwMTMyYjBmMTMwYWNjMGJlZDE5M2Q5YmE1OWRkMTg2ZjBlNjM0Y2YiLCBHaXRUcmVlU3RhdGU6ImNsZWFuIiwgQnVpbGREYXRlOiIyMDE5LTEyLTEzVDExOjUxOjQ0WiIsIEdvVmVyc2lvbjoiZ28xLjEzLjQiLCBDb21waWxlcjoiZ2MiLCBQbGF0Zm9ybToiZGFyd2luL2FtZDY0In0KU2VydmVyIFZlcnNpb246IHZlcnNpb24uSW5mb3tNYWpvcjoiMSIsIE1pbm9yOiIxNyIsIEdpdFZlcnNpb246InYxLjE3LjAiLCBHaXRDb21taXQ6IjcwMTMyYjBmMTMwYWNjMGJlZDE5M2Q5YmE1OWRkMTg2ZjBlNjM0Y2YiLCBHaXRUcmVlU3RhdGU6ImNsZWFuIiwgQnVpbGREYXRlOiIyMDE5LTEyLTA3VDIxOjEyOjE3WiIsIEdvVmVyc2lvbjoiZ28xLjEzLjQiLCBDb21waWxlcjoiZ2MiLCBQbGF0Zm9ybToibGludXgvYW1kNjQifQo=\\",\n \\"date\\": \\"Thu Dec 26 2019 08:52:21 GMT+0000 (UTC)\\"\n },\n \\"email-address\\": \\"support@weave.works\\"\n}\"},\"labels\":{\"name\":\"weave-net\"},\"name\":\"weave-net\",\"namespace\":\"kube-system\"},\"spec\":{\"minReadySeconds\":5,\"selector\":{\"matchLabels\":{\"name\":\"weave-net\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"weave-net\"}},\"spec\":{\"containers\":[{\"command\":[\"/home/weave/launch.sh\"],\"env\":[{\"name\":\"HOSTNAME\",\"valueFrom\":{\"fieldRef\":{\"apiVersion\":\"v1\",\"fieldPath\":\"spec.nodeName\"}}}],\"image\":\"docker.io/weaveworks/weave-kube:2.6.0\",\"name\":\"weave\",\"readinessProbe\":{\"httpGet\":{\"host\":\"127.0.0.1\",\"path\":\"/status\",\"port\":6784}},\"resources\":{\"requests\":{\"cpu\":\"10m\"}},\"securityContext\":{\"privileged\":true},\"volumeMounts\":[{\"mountPath\":\"/weavedb\",\"name\":\"weavedb\"},{\"mountPath\":\"/host/opt\",\"name\":\"cni-bin\"},{\"mountPath\":\"/host/home\",\"name\":\"cni-bin2\"},{\"mountPath\":\"/host/etc\",\"name\":\"cni-conf\"},{\"mountPath\":\"/host/var/lib/dbus\",\"name\":\"dbus\"},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\"}]},{\"env\":[{\"name\":\"HOSTNAME\",\"valueFrom\":{\"fieldRef\":{\"apiVersion\":\"v1\",\"fieldPath\":\"spec.nodeName\"}}}],\"image\":\"docker.io/weaveworks/weave-npc:2.6.0\",\"name\":\"weave-npc\",\"resources\":{\"requests\":{\"cpu\":\"10m\"}},\"securityContext\":{\"privileged\":true},\"volumeMounts\":[{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\"}]}],\"hostNetwork\":true,\"hostPID\":true,\"restartPolicy\":\"Always\",\"securityContext\":{\"seLinuxOptions\":{}},\"serviceAccountName\":\"weave-net\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/var/lib/weave\"},\"name\":\"weavedb\"},{\"hostPath\":{\"path\":\"/opt\"},\"name\":\"cni-bin\"},{\"hostPath\":{\"path\":\"/home\"},\"name\":\"cni-bin2\"},{\"hostPath\":{\"path\":\"/etc\"},\"name\":\"cni-conf\"},{\"hostPath\":{\"path\":\"/var/lib/dbus\"},\"name\":\"dbus\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"}]}},\"updateStrategy\":{\"type\":\"RollingUpdate\"}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(v1.LabelSelector)(0xc001cb4820), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), 
Labels:map[string]string{"name":"weave-net"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"weavedb", VolumeSource:v1.VolumeSource{HostPath:(v1.HostPathVolumeSource)(0xc001cb4840), EmptyDir:(v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(v1.GitRepoVolumeSource)(nil), Secret:(v1.SecretVolumeSource)(nil), NFS:(v1.NFSVolumeSource)(nil), ISCSI:(v1.ISCSIVolumeSource)(nil), Glusterfs:(v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(v1.RBDVolumeSource)(nil), FlexVolume:(v1.FlexVolumeSource)(nil), Cinder:(v1.CinderVolumeSource)(nil), CephFS:(v1.CephFSVolumeSource)(nil), Flocker:(v1.FlockerVolumeSource)(nil), DownwardAPI:(v1.DownwardAPIVolumeSource)(nil), FC:(v1.FCVolumeSource)(nil), AzureFile:(v1.AzureFileVolumeSource)(nil), ConfigMap:(v1.ConfigMapVolumeSource)(nil), VsphereVolume:(v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(v1.QuobyteVolumeSource)(nil), AzureDisk:(v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(v1.ProjectedVolumeSource)(nil), PortworxVolume:(v1.PortworxVolumeSource)(nil), ScaleIO:(v1.ScaleIOVolumeSource)(nil), StorageOS:(v1.StorageOSVolumeSource)(nil), CSI:(v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"cni-bin", VolumeSource:v1.VolumeSource{HostPath:(v1.HostPathVolumeSource)(0xc001cb4860), EmptyDir:(v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(v1.GitRepoVolumeSource)(nil), Secret:(v1.SecretVolumeSource)(nil), NFS:(v1.NFSVolumeSource)(nil), ISCSI:(v1.ISCSIVolumeSource)(nil), Glusterfs:(v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(v1.RBDVolumeSource)(nil), FlexVolume:(v1.FlexVolumeSource)(nil), Cinder:(v1.CinderVolumeSource)(nil), CephFS:(v1.CephFSVolumeSource)(nil), Flocker:(v1.FlockerVolumeSource)(nil), DownwardAPI:(v1.DownwardAPIVolumeSource)(nil), FC:(v1.FCVolumeSource)(nil), AzureFile:(v1.AzureFileVolumeSource)(nil), ConfigMap:(v1.ConfigMapVolumeSource)(nil), VsphereVolume:(v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(v1.QuobyteVolumeSource)(nil), AzureDisk:(v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(v1.ProjectedVolumeSource)(nil), PortworxVolume:(v1.PortworxVolumeSource)(nil), ScaleIO:(v1.ScaleIOVolumeSource)(nil), StorageOS:(v1.StorageOSVolumeSource)(nil), CSI:(v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"cni-bin2", VolumeSource:v1.VolumeSource{HostPath:(v1.HostPathVolumeSource)(0xc001cb4880), EmptyDir:(v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(v1.GitRepoVolumeSource)(nil), Secret:(v1.SecretVolumeSource)(nil), NFS:(v1.NFSVolumeSource)(nil), ISCSI:(v1.ISCSIVolumeSource)(nil), Glusterfs:(v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(v1.RBDVolumeSource)(nil), FlexVolume:(v1.FlexVolumeSource)(nil), Cinder:(v1.CinderVolumeSource)(nil), CephFS:(v1.CephFSVolumeSource)(nil), Flocker:(v1.FlockerVolumeSource)(nil), DownwardAPI:(v1.DownwardAPIVolumeSource)(nil), 
FC:(v1.FCVolumeSource)(nil), AzureFile:(v1.AzureFileVolumeSource)(nil), ConfigMap:(v1.ConfigMapVolumeSource)(nil), VsphereVolume:(v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(v1.QuobyteVolumeSource)(nil), AzureDisk:(v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(v1.ProjectedVolumeSource)(nil), PortworxVolume:(v1.PortworxVolumeSource)(nil), ScaleIO:(v1.ScaleIOVolumeSource)(nil), StorageOS:(v1.StorageOSVolumeSource)(nil), CSI:(v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"cni-conf", VolumeSource:v1.VolumeSource{HostPath:(v1.HostPathVolumeSource)(0xc001cb48a0), EmptyDir:(v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(v1.GitRepoVolumeSource)(nil), Secret:(v1.SecretVolumeSource)(nil), NFS:(v1.NFSVolumeSource)(nil), ISCSI:(v1.ISCSIVolumeSource)(nil), Glusterfs:(v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(v1.RBDVolumeSource)(nil), FlexVolume:(v1.FlexVolumeSource)(nil), Cinder:(v1.CinderVolumeSource)(nil), CephFS:(v1.CephFSVolumeSource)(nil), Flocker:(v1.FlockerVolumeSource)(nil), DownwardAPI:(v1.DownwardAPIVolumeSource)(nil), FC:(v1.FCVolumeSource)(nil), AzureFile:(v1.AzureFileVolumeSource)(nil), ConfigMap:(v1.ConfigMapVolumeSource)(nil), VsphereVolume:(v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(v1.QuobyteVolumeSource)(nil), AzureDisk:(v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(v1.ProjectedVolumeSource)(nil), PortworxVolume:(v1.PortworxVolumeSource)(nil), ScaleIO:(v1.ScaleIOVolumeSource)(nil), StorageOS:(v1.StorageOSVolumeSource)(nil), CSI:(v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"dbus", VolumeSource:v1.VolumeSource{HostPath:(v1.HostPathVolumeSource)(0xc001cb48c0), EmptyDir:(v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(v1.GitRepoVolumeSource)(nil), Secret:(v1.SecretVolumeSource)(nil), NFS:(v1.NFSVolumeSource)(nil), ISCSI:(v1.ISCSIVolumeSource)(nil), Glusterfs:(v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(v1.RBDVolumeSource)(nil), FlexVolume:(v1.FlexVolumeSource)(nil), Cinder:(v1.CinderVolumeSource)(nil), CephFS:(v1.CephFSVolumeSource)(nil), Flocker:(v1.FlockerVolumeSource)(nil), DownwardAPI:(v1.DownwardAPIVolumeSource)(nil), FC:(v1.FCVolumeSource)(nil), AzureFile:(v1.AzureFileVolumeSource)(nil), ConfigMap:(v1.ConfigMapVolumeSource)(nil), VsphereVolume:(v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(v1.QuobyteVolumeSource)(nil), AzureDisk:(v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(v1.ProjectedVolumeSource)(nil), PortworxVolume:(v1.PortworxVolumeSource)(nil), ScaleIO:(v1.ScaleIOVolumeSource)(nil), StorageOS:(v1.StorageOSVolumeSource)(nil), CSI:(v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(v1.HostPathVolumeSource)(0xc001cb48e0), EmptyDir:(v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(v1.GitRepoVolumeSource)(nil), Secret:(v1.SecretVolumeSource)(nil), NFS:(v1.NFSVolumeSource)(nil), ISCSI:(v1.ISCSIVolumeSource)(nil), Glusterfs:(v1.GlusterfsVolumeSource)(nil), 
PersistentVolumeClaim:(v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(v1.RBDVolumeSource)(nil), FlexVolume:(v1.FlexVolumeSource)(nil), Cinder:(v1.CinderVolumeSource)(nil), CephFS:(v1.CephFSVolumeSource)(nil), Flocker:(v1.FlockerVolumeSource)(nil), DownwardAPI:(v1.DownwardAPIVolumeSource)(nil), FC:(v1.FCVolumeSource)(nil), AzureFile:(v1.AzureFileVolumeSource)(nil), ConfigMap:(v1.ConfigMapVolumeSource)(nil), VsphereVolume:(v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(v1.QuobyteVolumeSource)(nil), AzureDisk:(v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(v1.ProjectedVolumeSource)(nil), PortworxVolume:(v1.PortworxVolumeSource)(nil), ScaleIO:(v1.ScaleIOVolumeSource)(nil), StorageOS:(v1.StorageOSVolumeSource)(nil), CSI:(v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(v1.HostPathVolumeSource)(0xc001cb4900), EmptyDir:(v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(v1.GitRepoVolumeSource)(nil), Secret:(v1.SecretVolumeSource)(nil), NFS:(v1.NFSVolumeSource)(nil), ISCSI:(v1.ISCSIVolumeSource)(nil), Glusterfs:(v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(v1.RBDVolumeSource)(nil), FlexVolume:(v1.FlexVolumeSource)(nil), Cinder:(v1.CinderVolumeSource)(nil), CephFS:(v1.CephFSVolumeSource)(nil), Flocker:(v1.FlockerVolumeSource)(nil), DownwardAPI:(v1.DownwardAPIVolumeSource)(nil), FC:(v1.FCVolumeSource)(nil), AzureFile:(v1.AzureFileVolumeSource)(nil), ConfigMap:(v1.ConfigMapVolumeSource)(nil), VsphereVolume:(v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(v1.QuobyteVolumeSource)(nil), AzureDisk:(v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(v1.ProjectedVolumeSource)(nil), PortworxVolume:(v1.PortworxVolumeSource)(nil), ScaleIO:(v1.ScaleIOVolumeSource)(nil), StorageOS:(v1.StorageOSVolumeSource)(nil), CSI:(v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"weave", Image:"docker.io/weaveworks/weave-kube:2.6.0", Command:[]string{"/home/weave/launch.sh"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOSTNAME", Value:"", ValueFrom:(v1.EnvVarSource)(0xc001cb4920)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"weavedb", ReadOnly:false, MountPath:"/weavedb", SubPath:"", MountPropagation:(v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"cni-bin", ReadOnly:false, MountPath:"/host/opt", SubPath:"", MountPropagation:(v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"cni-bin2", ReadOnly:false, MountPath:"/host/home", SubPath:"", MountPropagation:(v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"cni-conf", ReadOnly:false, MountPath:"/host/etc", SubPath:"", MountPropagation:(v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"dbus", ReadOnly:false, MountPath:"/host/var/lib/dbus", SubPath:"", MountPropagation:(v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:false, 
MountPath:"/lib/modules", SubPath:"", MountPropagation:(v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(v1.Probe)(nil), ReadinessProbe:(v1.Probe)(0xc001c4fda0), StartupProbe:(v1.Probe)(nil), Lifecycle:(v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(v1.SecurityContext)(0xc001c38b90), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"weave-npc", Image:"docker.io/weaveworks/weave-npc:2.6.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOSTNAME", Value:"", ValueFrom:(v1.EnvVarSource)(0xc001cb4960)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(v1.Probe)(nil), ReadinessProbe:(v1.Probe)(nil), StartupProbe:(v1.Probe)(nil), Lifecycle:(v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(v1.SecurityContext)(0xc001c38c30), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(int64)(0xc00205d5a8), ActiveDeadlineSeconds:(int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"weave-net", DeprecatedServiceAccount:"weave-net", AutomountServiceAccountToken:(bool)(nil), NodeName:"", HostNetwork:true, HostPID:true, HostIPC:false, ShareProcessNamespace:(bool)(nil), SecurityContext:(v1.PodSecurityContext)(0xc001c57860), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(int32)(nil), DNSConfig:(v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(string)(nil), EnableServiceLinks:(bool)(nil), PreemptionPolicy:(v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(v1.RollingUpdateDaemonSet)(0xc001d54288)}, MinReadySeconds:5, RevisionHistoryLimit:(int32)(0xc00205d5ec)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "weave-net": the object has been modified; please apply your changes to the latest version and try again E1226 08:52:37.234212 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: 
Failed to watch *v1.Pod: Get https://localhost:8443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=1291&timeout=6m7s&timeoutSeconds=367&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.234269 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ControllerRevision: Get https://localhost:8443/apis/apps/v1/controllerrevisions?allowWatchBookmarks=true&resourceVersion=1286&timeout=5m23s&timeoutSeconds=323&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.234485 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ClusterRole: Get https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles?allowWatchBookmarks=true&resourceVersion=1280&timeout=7m32s&timeoutSeconds=452&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.234508 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://localhost:8443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1330&timeout=6m43s&timeoutSeconds=403&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.234526 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=1&timeout=5m19s&timeoutSeconds=319&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.234545 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.VolumeAttachment: Get https://localhost:8443/apis/storage.k8s.io/v1/volumeattachments?allowWatchBookmarks=true&resourceVersion=1&timeout=8m59s&timeoutSeconds=539&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.234563 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.CSIDriver: Get https://localhost:8443/apis/storage.k8s.io/v1beta1/csidrivers?allowWatchBookmarks=true&resourceVersion=1&timeout=8m34s&timeoutSeconds=514&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.234584 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: Get https://localhost:8443/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=42&timeout=5m56s&timeoutSeconds=356&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.234601 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=1&timeout=9m17s&timeoutSeconds=557&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.234668 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?allowWatchBookmarks=true&resourceVersion=1334&timeout=9m59s&timeoutSeconds=599&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.234696 1 reflector.go:320] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:8443/apis/apiextensions.k8s.io/v1/customresourcedefinitions?allowWatchBookmarks=true&resourceVersion=1&timeout=7m2s&timeoutSeconds=422&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.234715 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.DaemonSet: Get https://localhost:8443/apis/apps/v1/daemonsets?allowWatchBookmarks=true&resourceVersion=1288&timeout=9m51s&timeoutSeconds=591&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.234741 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.CronJob: Get https://localhost:8443/apis/batch/v1beta1/cronjobs?allowWatchBookmarks=true&resourceVersion=1&timeout=7m43s&timeoutSeconds=463&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.234762 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.NetworkPolicy: Get https://localhost:8443/apis/networking.k8s.io/v1/networkpolicies?allowWatchBookmarks=true&resourceVersion=1&timeout=8m21s&timeoutSeconds=501&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.234780 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Deployment: Get https://localhost:8443/apis/apps/v1/deployments?allowWatchBookmarks=true&resourceVersion=724&timeout=5m29s&timeoutSeconds=329&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.234798 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.RoleBinding: Get https://localhost:8443/apis/rbac.authorization.k8s.io/v1/rolebindings?allowWatchBookmarks=true&resourceVersion=1284&timeout=5m56s&timeoutSeconds=356&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.234815 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Role: Get https://localhost:8443/apis/rbac.authorization.k8s.io/v1/roles?allowWatchBookmarks=true&resourceVersion=1283&timeout=5m15s&timeoutSeconds=315&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.234832 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Namespace: Get https://localhost:8443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=628&timeout=5m26s&timeoutSeconds=326&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.234856 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=723&timeout=5m33s&timeoutSeconds=333&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.234883 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.RuntimeClass: Get https://localhost:8443/apis/node.k8s.io/v1beta1/runtimeclasses?allowWatchBookmarks=true&resourceVersion=1&timeout=5m48s&timeoutSeconds=348&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.234902 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://localhost:8443/api/v1/services?allowWatchBookmarks=true&resourceVersion=421&timeout=9m1s&timeoutSeconds=541&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.235595 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.Event: Get https://localhost:8443/apis/events.k8s.io/v1beta1/events?allowWatchBookmarks=true&resourceVersion=1323&timeout=6m6s&timeoutSeconds=366&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.235623 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.Ingress: Get https://localhost:8443/apis/networking.k8s.io/v1beta1/ingresses?allowWatchBookmarks=true&resourceVersion=1&timeout=5m46s&timeoutSeconds=346&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.235643 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=370&timeout=7m22s&timeoutSeconds=442&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.235684 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodSecurityPolicy: Get https://localhost:8443/apis/policy/v1beta1/podsecuritypolicies?allowWatchBookmarks=true&resourceVersion=1&timeout=6m41s&timeoutSeconds=401&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.235712 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.EndpointSlice: Get https://localhost:8443/apis/discovery.k8s.io/v1beta1/endpointslices?allowWatchBookmarks=true&resourceVersion=1&timeout=8m17s&timeoutSeconds=497&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.235732 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ValidatingWebhookConfiguration: Get https://localhost:8443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations?allowWatchBookmarks=true&resourceVersion=1&timeout=9m11s&timeoutSeconds=551&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.235752 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.MutatingWebhookConfiguration: Get https://localhost:8443/apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations?allowWatchBookmarks=true&resourceVersion=1&timeout=6m8s&timeoutSeconds=368&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.235771 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ConfigMap: Get https://localhost:8443/api/v1/configmaps?allowWatchBookmarks=true&resourceVersion=1329&timeout=7m51s&timeoutSeconds=471&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.235811 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Lease: Get https://localhost:8443/apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=1335&timeout=6m7s&timeoutSeconds=367&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.235832 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.LimitRange: Get https://localhost:8443/api/v1/limitranges?allowWatchBookmarks=true&resourceVersion=1&timeout=7m7s&timeoutSeconds=427&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.235852 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Secret: Get https://localhost:8443/api/v1/secrets?allowWatchBookmarks=true&resourceVersion=1279&timeout=6m16s&timeoutSeconds=376&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.235871 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.HorizontalPodAutoscaler: Get https://localhost:8443/apis/autoscaling/v1/horizontalpodautoscalers?allowWatchBookmarks=true&resourceVersion=1&timeout=8m47s&timeoutSeconds=527&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.235891 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.Ingress: Get https://localhost:8443/apis/extensions/v1beta1/ingresses?allowWatchBookmarks=true&resourceVersion=1&timeout=8m18s&timeoutSeconds=498&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.235909 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=1&timeout=9m27s&timeoutSeconds=567&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.235928 1 reflector.go:320] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:8443/apis/apiregistration.k8s.io/v1/apiservices?allowWatchBookmarks=true&resourceVersion=569&timeout=8m13s&timeoutSeconds=493&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.235947 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ClusterRoleBinding: Get https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?allowWatchBookmarks=true&resourceVersion=1282&timeout=6m55s&timeoutSeconds=415&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.235970 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PriorityClass: Get https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses?allowWatchBookmarks=true&resourceVersion=44&timeout=8m9s&timeoutSeconds=489&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.235989 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ServiceAccount: Get https://localhost:8443/api/v1/serviceaccounts?allowWatchBookmarks=true&resourceVersion=1281&timeout=8m7s&timeoutSeconds=487&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.236020 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PodTemplate: Get https://localhost:8443/api/v1/podtemplates?allowWatchBookmarks=true&resourceVersion=1&timeout=5m2s&timeoutSeconds=302&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.236046 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=1&timeout=6m19s&timeoutSeconds=379&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.236065 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Job: Get https://localhost:8443/apis/batch/v1/jobs?allowWatchBookmarks=true&resourceVersion=1&timeout=9m18s&timeoutSeconds=558&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.236091 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=1&timeout=7m41s&timeoutSeconds=461&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.236146 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ResourceQuota: Get https://localhost:8443/api/v1/resourcequotas?allowWatchBookmarks=true&resourceVersion=1&timeout=9m35s&timeoutSeconds=575&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.236199 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.CertificateSigningRequest: Get https://localhost:8443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=330&timeout=8m39s&timeoutSeconds=519&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
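The whole burst of "connection refused" errors above is stamped 08:52:37, which is exactly when the apiserver logged "Stopped listening on [::]:8443", so the controller-manager lost its watches because the apiserver went down, not because of a fault of its own. One way to confirm the apiserver container was restarted inside the VM (a sketch; the name filter just matches the kube-apiserver container that kubeadm/Docker creates):

minikube ssh
# Inside the VM: list apiserver containers, including exited ones
docker ps -a --filter name=kube-apiserver --format '{{.ID}} {{.Status}}'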

==> kube-controller-manager ["58d15d0aea33"] <==
I1226 08:54:10.071281 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
I1226 08:54:10.071299 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
I1226 08:54:10.071430 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
I1226 08:54:10.071620 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints
I1226 08:54:10.071674 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates
I1226 08:54:10.071688 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges
I1226 08:54:10.071712 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions
I1226 08:54:10.071885 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.apps
I1226 08:54:10.071935 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
I1226 08:54:10.071955 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
I1226 08:54:10.071974 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
I1226 08:54:10.072140 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
I1226 08:54:10.072193 1 controllermanager.go:533] Started "resourcequota"
I1226 08:54:10.072562 1 resource_quota_controller.go:271] Starting resource quota controller
I1226 08:54:10.072595 1 shared_informer.go:197] Waiting for caches to sync for resource quota
I1226 08:54:10.072621 1 resource_quota_monitor.go:303] QuotaMonitor running
I1226 08:54:10.077761 1 controllermanager.go:533] Started "tokencleaner"
W1226 08:54:10.077797 1 controllermanager.go:525] Skipping "nodeipam"
I1226 08:54:10.078426 1 tokencleaner.go:117] Starting token cleaner controller
I1226 08:54:10.078435 1 shared_informer.go:197] Waiting for caches to sync for token_cleaner
I1226 08:54:10.078440 1 shared_informer.go:204] Caches are synced for token_cleaner
W1226 08:54:10.089561 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I1226 08:54:10.106168 1 shared_informer.go:204] Caches are synced for job
I1226 08:54:10.106398 1 shared_informer.go:204] Caches are synced for PV protection
I1226 08:54:10.109289 1 shared_informer.go:204] Caches are synced for ReplicationController
I1226 08:54:10.109480 1 shared_informer.go:204] Caches are synced for PVC protection
I1226 08:54:10.111132 1 shared_informer.go:204] Caches are synced for TTL
I1226 08:54:10.117426 1 shared_informer.go:204] Caches are synced for persistent volume
I1226 08:54:10.117433 1 shared_informer.go:204] Caches are synced for namespace
I1226 08:54:10.122832 1 shared_informer.go:204] Caches are synced for disruption
I1226 08:54:10.122898 1 disruption.go:338] Sending events to api server.
I1226 08:54:10.137185 1 shared_informer.go:204] Caches are synced for expand
I1226 08:54:10.141390 1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator
I1226 08:54:10.156901 1 shared_informer.go:204] Caches are synced for HPA
I1226 08:54:10.157106 1 shared_informer.go:204] Caches are synced for service account
I1226 08:54:10.157503 1 shared_informer.go:204] Caches are synced for GC
I1226 08:54:10.157746 1 shared_informer.go:204] Caches are synced for bootstrap_signer
I1226 08:54:10.159124 1 shared_informer.go:204] Caches are synced for ReplicaSet
I1226 08:54:10.190016 1 shared_informer.go:204] Caches are synced for attach detach
I1226 08:54:10.212111 1 shared_informer.go:204] Caches are synced for deployment
I1226 08:54:10.231444 1 shared_informer.go:204] Caches are synced for taint
I1226 08:54:10.231609 1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone:
W1226 08:54:10.231700 1 node_lifecycle_controller.go:1058] Missing timestamp for Node minikube. Assuming now as a timestamp.
I1226 08:54:10.231790 1 node_lifecycle_controller.go:1259] Controller detected that zone is now in state Normal.
I1226 08:54:10.231833 1 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"d797ced8-1fe6-403c-99c8-08fa2135a543", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
I1226 08:54:10.231873 1 taint_manager.go:186] Starting NoExecuteTaintManager
I1226 08:54:10.406637 1 shared_informer.go:204] Caches are synced for endpoint
I1226 08:54:10.641307 1 shared_informer.go:204] Caches are synced for certificate-csrsigning
I1226 08:54:10.657841 1 shared_informer.go:204] Caches are synced for daemon sets
I1226 08:54:10.658140 1 shared_informer.go:204] Caches are synced for certificate-csrapproving
I1226 08:54:10.677157 1 shared_informer.go:204] Caches are synced for resource quota
I1226 08:54:10.706904 1 shared_informer.go:204] Caches are synced for stateful set
I1226 08:54:10.718187 1 shared_informer.go:204] Caches are synced for garbage collector
I1226 08:54:10.718245 1 garbagecollector.go:138] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
W1226 08:54:10.858259 1 garbagecollector.go:639] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I1226 08:54:10.858760 1 shared_informer.go:197] Waiting for caches to sync for garbage collector
I1226 08:54:10.859086 1 shared_informer.go:204] Caches are synced for garbage collector
E1226 08:54:11.858742 1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1226 08:54:11.859095 1 shared_informer.go:197] Waiting for caches to sync for resource quota
I1226 08:54:11.859263 1 shared_informer.go:204] Caches are synced for resource quota

==> kube-proxy ["5fe3e5177d82"] <==
W1226 08:53:49.278328 1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
I1226 08:53:49.295350 1 node.go:135] Successfully retrieved node IP: 192.168.64.7
I1226 08:53:49.295395 1 server_others.go:145] Using iptables Proxier.
W1226 08:53:49.296011 1 proxier.go:286] clusterCIDR not specified, unable to distinguish between internal and external traffic
I1226 08:53:49.297323 1 server.go:571] Version: v1.17.0
I1226 08:53:49.298353 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I1226 08:53:49.298391 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1226 08:53:49.298509 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I1226 08:53:49.298718 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I1226 08:53:49.304389 1 config.go:313] Starting service config controller
I1226 08:53:49.304462 1 shared_informer.go:197] Waiting for caches to sync for service config
I1226 08:53:49.304912 1 config.go:131] Starting endpoints config controller
I1226 08:53:49.304928 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I1226 08:53:49.404814 1 shared_informer.go:204] Caches are synced for service config
I1226 08:53:49.405120 1 shared_informer.go:204] Caches are synced for endpoints config

==> kube-proxy ["d9fb8cfdb3a7"] <==
W1226 08:47:26.195399 1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
I1226 08:47:26.202104 1 node.go:135] Successfully retrieved node IP: 192.168.64.7
I1226 08:47:26.202268 1 server_others.go:145] Using iptables Proxier.
W1226 08:47:26.202413 1 proxier.go:286] clusterCIDR not specified, unable to distinguish between internal and external traffic
I1226 08:47:26.202653 1 server.go:571] Version: v1.17.0
I1226 08:47:26.203165 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I1226 08:47:26.203320 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1226 08:47:26.203454 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I1226 08:47:26.203622 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I1226 08:47:26.204924 1 config.go:131] Starting endpoints config controller
I1226 08:47:26.205104 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I1226 08:47:26.205243 1 config.go:313] Starting service config controller
I1226 08:47:26.205291 1 shared_informer.go:197] Waiting for caches to sync for service config
I1226 08:47:26.305889 1 shared_informer.go:204] Caches are synced for service config
I1226 08:47:26.306786 1 shared_informer.go:204] Caches are synced for endpoints config
E1226 08:52:37.239104 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1334&timeout=7m35s&timeoutSeconds=455&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.239298 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://localhost:8443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=421&timeout=5m15s&timeoutSeconds=315&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused

==> kube-scheduler ["9b1f3a159ebb"] <==
I1226 08:53:40.591604 1 serving.go:312] Generated self-signed cert in-memory
W1226 08:53:41.560463 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W1226 08:53:41.560815 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W1226 08:53:45.405310 1 authentication.go:348] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1226 08:53:45.405330 1 authentication.go:296] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1226 08:53:45.405336 1 authentication.go:297] Continuing without authentication configuration. This may treat all requests as anonymous.
W1226 08:53:45.405340 1 authentication.go:298] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
W1226 08:53:45.530765 1 authorization.go:47] Authorization is disabled
W1226 08:53:45.530798 1 authentication.go:92] Authentication is disabled
I1226 08:53:45.530822 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I1226 08:53:45.533146 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I1226 08:53:45.533272 1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1226 08:53:45.533279 1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1226 08:53:45.533288 1 tlsconfig.go:219] Starting DynamicServingCertificateController
I1226 08:53:45.633469 1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1226 08:53:45.633738 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler...
I1226 08:54:01.490330 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler

==> kube-scheduler ["ca861991abbf"] <==
I1226 08:47:10.814714 1 serving.go:312] Generated self-signed cert in-memory
W1226 08:47:11.129823 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W1226 08:47:11.130161 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W1226 08:47:14.095941 1 authentication.go:348] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1226 08:47:14.095973 1 authentication.go:296] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1226 08:47:14.095980 1 authentication.go:297] Continuing without authentication configuration. This may treat all requests as anonymous.
W1226 08:47:14.095984 1 authentication.go:298] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
W1226 08:47:14.122867 1 authorization.go:47] Authorization is disabled
W1226 08:47:14.122898 1 authentication.go:92] Authentication is disabled
I1226 08:47:14.122908 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I1226 08:47:14.124321 1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1226 08:47:14.124487 1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1226 08:47:14.124664 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I1226 08:47:14.124910 1 tlsconfig.go:219] Starting DynamicServingCertificateController
E1226 08:47:14.126702 1 reflector.go:156] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1226 08:47:14.128361 1 reflector.go:156] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1226 08:47:14.128614 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1226 08:47:14.129053 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1226 08:47:14.129760 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1226 08:47:14.133271 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1226 08:47:14.133542 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1226 08:47:14.133785 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1226 08:47:14.133936 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1226 08:47:14.134143 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1226 08:47:14.134367 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1226 08:47:14.134569 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1226 08:47:15.128384 1 reflector.go:156] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1226 08:47:15.130281 1 reflector.go:156] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1226 08:47:15.134404 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1226 08:47:15.136721 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1226 08:47:15.137537 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1226 08:47:15.138723 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1226 08:47:15.140085 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1226 08:47:15.141399 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1226 08:47:15.142856 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1226 08:47:15.143405 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1226 08:47:15.145481 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1226 08:47:15.146939 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
I1226 08:47:16.225276 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler...
I1226 08:47:16.225740 1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1226 08:47:16.234531 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
E1226 08:47:24.498425 1 factory.go:494] pod is already present in the activeQ
E1226 08:47:26.183125 1 factory.go:494] pod is already present in the activeQ
E1226 08:47:26.408418 1 factory.go:494] pod is already present in the activeQ
E1226 08:52:37.233882 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://localhost:8443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1330&timeout=9m41s&timeoutSeconds=581&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.233988 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=1&timeout=7m13s&timeoutSeconds=433&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.234735 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=1&timeout=9m28s&timeoutSeconds=568&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.234785 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=1&timeout=7m31s&timeoutSeconds=451&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.234828 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=1&timeout=8m49s&timeoutSeconds=529&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.234868 1 reflector.go:320] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to watch *v1.Pod: Get https://localhost:8443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=1291&timeoutSeconds=495&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.234909 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=1&timeout=6m19s&timeoutSeconds=379&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.234940 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://localhost:8443/api/v1/services?allowWatchBookmarks=true&resourceVersion=421&timeout=9m11s&timeoutSeconds=551&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.235179 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: Get https://localhost:8443/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=42&timeout=8m11s&timeoutSeconds=491&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.235291 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=723&timeout=5m45s&timeoutSeconds=345&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.235355 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=370&timeout=9m47s&timeoutSeconds=587&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1226 08:52:37.235442 1 reflector.go:320] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to watch *v1.ConfigMap: Get https://localhost:8443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=155&timeout=8m29s&timeoutSeconds=509&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused

==> kubelet <== -- Logs begin at Thu 2019-12-26 08:53:07 UTC, end at Thu 2019-12-26 08:57:15 UTC. -- Dec 26 08:57:04 minikube kubelet[2352]: W1226 08:57:04.852644 2352 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/kubepods/podb0950d20-5207-4c4f-abac-ea6e877cfb01/5cdfe40a5e48904b67a505a51066ae62995a2437cdef1fcf97c4cd93745bdcb3": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/kubepods/podb0950d20-5207-4c4f-abac-ea6e877cfb01/5cdfe40a5e48904b67a505a51066ae62995a2437cdef1fcf97c4cd93745bdcb3: no such file or directory Dec 26 08:57:04 minikube kubelet[2352]: E1226 08:57:04.903412 2352 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "mypod": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:319: getting the final child's pid from pipe caused \"read init-p: connection reset by peer\"": unknown Dec 26 08:57:04 minikube kubelet[2352]: E1226 08:57:04.903470 2352 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "mypod": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:319: getting the final child's pid from pipe caused \"read init-p: connection reset by peer\"": unknown Dec 26 08:57:04 minikube kubelet[2352]: E1226 08:57:04.903483 2352 kuberuntime_manager.go:729] createPodSandbox for pod "mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "mypod": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:319: getting the final child's pid from pipe caused \"read init-p: connection reset by peer\"": unknown Dec 26 08:57:04 minikube kubelet[2352]: E1226 08:57:04.903517 2352 pod_workers.go:191] Error syncing pod b0950d20-5207-4c4f-abac-ea6e877cfb01 ("mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)"), skipping: failed to "CreatePodSandbox" for "mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)" with CreatePodSandboxError: "CreatePodSandbox for pod \"mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"mypod\": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused \"process_linux.go:319: getting the final child's pid from pipe caused \\"read init-p: connection reset by peer\\"\": unknown" Dec 26 08:57:05 minikube kubelet[2352]: W1226 08:57:05.797157 2352 pod_container_deletor.go:75] Container "5cdfe40a5e48904b67a505a51066ae62995a2437cdef1fcf97c4cd93745bdcb3" not found in pod's containers Dec 26 08:57:05 minikube kubelet[2352]: E1226 08:57:05.977702 2352 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "mypod": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:315: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown Dec 26 08:57:05 minikube kubelet[2352]: E1226 08:57:05.977737 2352 kuberuntime_sandbox.go:68] CreatePodSandbox for pod
"mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "mypod": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:315: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown Dec 26 08:57:05 minikube kubelet[2352]: E1226 08:57:05.977764 2352 kuberuntime_manager.go:729] createPodSandbox for pod "mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "mypod": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:315: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown Dec 26 08:57:05 minikube kubelet[2352]: E1226 08:57:05.977801 2352 pod_workers.go:191] Error syncing pod b0950d20-5207-4c4f-abac-ea6e877cfb01 ("mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)"), skipping: failed to "CreatePodSandbox" for "mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)" with CreatePodSandboxError: "CreatePodSandbox for pod \"mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"mypod\": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused \"process_linux.go:315: copying bootstrap data to pipe caused \\"write init-p: broken pipe\\"\": unknown" Dec 26 08:57:05 minikube kubelet[2352]: W1226 08:57:05.978098 2352 container.go:412] Failed to create summary reader for "/kubepods/podb0950d20-5207-4c4f-abac-ea6e877cfb01/b8ef2959a79cc7d76c4a79408a10ef6ff35c0ac4a6f065c6843564ba57b5ee25": none of the resources are being tracked. 
Dec 26 08:57:06 minikube kubelet[2352]: W1226 08:57:06.855419 2352 pod_container_deletor.go:75] Container "b8ef2959a79cc7d76c4a79408a10ef6ff35c0ac4a6f065c6843564ba57b5ee25" not found in pod's containers Dec 26 08:57:07 minikube kubelet[2352]: E1226 08:57:07.050058 2352 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "mypod": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:315: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown Dec 26 08:57:07 minikube kubelet[2352]: E1226 08:57:07.050147 2352 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "mypod": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:315: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown Dec 26 08:57:07 minikube kubelet[2352]: E1226 08:57:07.050160 2352 kuberuntime_manager.go:729] createPodSandbox for pod "mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "mypod": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:315: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown Dec 26 08:57:07 minikube kubelet[2352]: E1226 08:57:07.050238 2352 pod_workers.go:191] Error syncing pod b0950d20-5207-4c4f-abac-ea6e877cfb01 ("mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)"), skipping: failed to "CreatePodSandbox" for "mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)" with CreatePodSandboxError: "CreatePodSandbox for pod \"mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"mypod\": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused \"process_linux.go:315: copying bootstrap data to pipe caused \\"write init-p: broken pipe\\"\": unknown" Dec 26 08:57:07 minikube kubelet[2352]: W1226 08:57:07.052564 2352 container.go:412] Failed to create summary reader for "/kubepods/podb0950d20-5207-4c4f-abac-ea6e877cfb01/8ed054452be1c2e72637d45e699a03172497f055ebfe4f49bf6ee17dc6e2733d": none of the resources are being tracked. 
Dec 26 08:57:07 minikube kubelet[2352]: W1226 08:57:07.911780 2352 pod_container_deletor.go:75] Container "8ed054452be1c2e72637d45e699a03172497f055ebfe4f49bf6ee17dc6e2733d" not found in pod's containers Dec 26 08:57:08 minikube kubelet[2352]: E1226 08:57:08.075791 2352 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "mypod": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:315: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown Dec 26 08:57:08 minikube kubelet[2352]: E1226 08:57:08.075843 2352 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "mypod": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:315: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown Dec 26 08:57:08 minikube kubelet[2352]: E1226 08:57:08.075855 2352 kuberuntime_manager.go:729] createPodSandbox for pod "mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "mypod": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:315: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown Dec 26 08:57:08 minikube kubelet[2352]: E1226 08:57:08.075885 2352 pod_workers.go:191] Error syncing pod b0950d20-5207-4c4f-abac-ea6e877cfb01 ("mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)"), skipping: failed to "CreatePodSandbox" for "mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)" with CreatePodSandboxError: "CreatePodSandbox for pod \"mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"mypod\": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused \"process_linux.go:315: copying bootstrap data to pipe caused \\"write init-p: broken pipe\\"\": unknown" Dec 26 08:57:08 minikube kubelet[2352]: W1226 08:57:08.076062 2352 container.go:412] Failed to create summary reader for "/kubepods/podb0950d20-5207-4c4f-abac-ea6e877cfb01/d73a0f7f422d9b2e8b623f6f9c6cde86d63b8258f628ab818dc350a6c1b600a4": none of the resources are being tracked. 
Dec 26 08:57:08 minikube kubelet[2352]: W1226 08:57:08.964564 2352 pod_container_deletor.go:75] Container "d73a0f7f422d9b2e8b623f6f9c6cde86d63b8258f628ab818dc350a6c1b600a4" not found in pod's containers Dec 26 08:57:09 minikube kubelet[2352]: E1226 08:57:09.135967 2352 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "mypod": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:315: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown Dec 26 08:57:09 minikube kubelet[2352]: E1226 08:57:09.136046 2352 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "mypod": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:315: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown Dec 26 08:57:09 minikube kubelet[2352]: E1226 08:57:09.136058 2352 kuberuntime_manager.go:729] createPodSandbox for pod "mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "mypod": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:315: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown Dec 26 08:57:09 minikube kubelet[2352]: E1226 08:57:09.136088 2352 pod_workers.go:191] Error syncing pod b0950d20-5207-4c4f-abac-ea6e877cfb01 ("mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)"), skipping: failed to "CreatePodSandbox" for "mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)" with CreatePodSandboxError: "CreatePodSandbox for pod \"mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"mypod\": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused \"process_linux.go:315: copying bootstrap data to pipe caused \\"write init-p: broken pipe\\"\": unknown" Dec 26 08:57:09 minikube kubelet[2352]: W1226 08:57:09.136361 2352 container.go:412] Failed to create summary reader for "/kubepods/podb0950d20-5207-4c4f-abac-ea6e877cfb01/31c2e630b6bbbe71686453fecafd2088bb1d12bf5e633d04806d366763b81560": none of the resources are being tracked. 
Dec 26 08:57:10 minikube kubelet[2352]: W1226 08:57:10.014880 2352 pod_container_deletor.go:75] Container "31c2e630b6bbbe71686453fecafd2088bb1d12bf5e633d04806d366763b81560" not found in pod's containers Dec 26 08:57:10 minikube kubelet[2352]: E1226 08:57:10.177689 2352 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "mypod": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:315: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown Dec 26 08:57:10 minikube kubelet[2352]: E1226 08:57:10.177774 2352 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "mypod": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:315: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown Dec 26 08:57:10 minikube kubelet[2352]: E1226 08:57:10.177786 2352 kuberuntime_manager.go:729] createPodSandbox for pod "mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "mypod": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:315: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown Dec 26 08:57:10 minikube kubelet[2352]: E1226 08:57:10.177825 2352 pod_workers.go:191] Error syncing pod b0950d20-5207-4c4f-abac-ea6e877cfb01 ("mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)"), skipping: failed to "CreatePodSandbox" for "mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)" with CreatePodSandboxError: "CreatePodSandbox for pod \"mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"mypod\": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused \"process_linux.go:315: copying bootstrap data to pipe caused \\"write init-p: broken pipe\\"\": unknown" Dec 26 08:57:10 minikube kubelet[2352]: W1226 08:57:10.179353 2352 container.go:412] Failed to create summary reader for "/kubepods/podb0950d20-5207-4c4f-abac-ea6e877cfb01/016a54954678588670253e22049df77c5f58d4716462b7acee7704468a315ccd": none of the resources are being tracked. 
Dec 26 08:57:11 minikube kubelet[2352]: W1226 08:57:11.084074 2352 pod_container_deletor.go:75] Container "016a54954678588670253e22049df77c5f58d4716462b7acee7704468a315ccd" not found in pod's containers Dec 26 08:57:11 minikube kubelet[2352]: E1226 08:57:11.258414 2352 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "mypod": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:315: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown Dec 26 08:57:11 minikube kubelet[2352]: E1226 08:57:11.258443 2352 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "mypod": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:315: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown Dec 26 08:57:11 minikube kubelet[2352]: E1226 08:57:11.258458 2352 kuberuntime_manager.go:729] createPodSandbox for pod "mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "mypod": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:315: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown Dec 26 08:57:11 minikube kubelet[2352]: E1226 08:57:11.258488 2352 pod_workers.go:191] Error syncing pod b0950d20-5207-4c4f-abac-ea6e877cfb01 ("mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)"), skipping: failed to "CreatePodSandbox" for "mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)" with CreatePodSandboxError: "CreatePodSandbox for pod \"mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"mypod\": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused \"process_linux.go:315: copying bootstrap data to pipe caused \\"write init-p: broken pipe\\"\": unknown" Dec 26 08:57:11 minikube kubelet[2352]: W1226 08:57:11.260164 2352 container.go:412] Failed to create summary reader for "/kubepods/podb0950d20-5207-4c4f-abac-ea6e877cfb01/54f1b0631177413eb309442b4f31f89aa27aa451097dfc978bbca5f4904950e4": none of the resources are being tracked. 
Dec 26 08:57:12 minikube kubelet[2352]: W1226 08:57:12.165669 2352 pod_container_deletor.go:75] Container "54f1b0631177413eb309442b4f31f89aa27aa451097dfc978bbca5f4904950e4" not found in pod's containers Dec 26 08:57:12 minikube kubelet[2352]: E1226 08:57:12.336998 2352 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "mypod": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:315: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown Dec 26 08:57:12 minikube kubelet[2352]: E1226 08:57:12.337291 2352 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "mypod": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:315: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown Dec 26 08:57:12 minikube kubelet[2352]: E1226 08:57:12.337484 2352 kuberuntime_manager.go:729] createPodSandbox for pod "mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "mypod": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:315: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown Dec 26 08:57:12 minikube kubelet[2352]: E1226 08:57:12.337522 2352 pod_workers.go:191] Error syncing pod b0950d20-5207-4c4f-abac-ea6e877cfb01 ("mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)"), skipping: failed to "CreatePodSandbox" for "mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)" with CreatePodSandboxError: "CreatePodSandbox for pod \"mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"mypod\": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused \"process_linux.go:315: copying bootstrap data to pipe caused \\"write init-p: broken pipe\\"\": unknown" Dec 26 08:57:12 minikube kubelet[2352]: W1226 08:57:12.337963 2352 container.go:412] Failed to create summary reader for "/kubepods/podb0950d20-5207-4c4f-abac-ea6e877cfb01/3304b96a487cf5988ec7d0989c3eb00ed85472f856b09d4ae3988b45bd93e6a5": none of the resources are being tracked. 
Dec 26 08:57:13 minikube kubelet[2352]: W1226 08:57:13.221922 2352 pod_container_deletor.go:75] Container "3304b96a487cf5988ec7d0989c3eb00ed85472f856b09d4ae3988b45bd93e6a5" not found in pod's containers Dec 26 08:57:13 minikube kubelet[2352]: E1226 08:57:13.384724 2352 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "mypod": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:315: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown Dec 26 08:57:13 minikube kubelet[2352]: E1226 08:57:13.384760 2352 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "mypod": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:315: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown Dec 26 08:57:13 minikube kubelet[2352]: E1226 08:57:13.384772 2352 kuberuntime_manager.go:729] createPodSandbox for pod "mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "mypod": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:315: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown Dec 26 08:57:13 minikube kubelet[2352]: E1226 08:57:13.384810 2352 pod_workers.go:191] Error syncing pod b0950d20-5207-4c4f-abac-ea6e877cfb01 ("mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)"), skipping: failed to "CreatePodSandbox" for "mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)" with CreatePodSandboxError: "CreatePodSandbox for pod \"mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"mypod\": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused \"process_linux.go:315: copying bootstrap data to pipe caused \\"write init-p: broken pipe\\"\": unknown" Dec 26 08:57:13 minikube kubelet[2352]: W1226 08:57:13.386573 2352 container.go:412] Failed to create summary reader for "/kubepods/podb0950d20-5207-4c4f-abac-ea6e877cfb01/7d1a1b460a0898c9bbcf02349bd5484c047e35d43b162957845875b22dd808e6": none of the resources are being tracked. 
Dec 26 08:57:14 minikube kubelet[2352]: W1226 08:57:14.273882 2352 pod_container_deletor.go:75] Container "7d1a1b460a0898c9bbcf02349bd5484c047e35d43b162957845875b22dd808e6" not found in pod's containers Dec 26 08:57:14 minikube kubelet[2352]: E1226 08:57:14.483412 2352 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "mypod": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:315: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown Dec 26 08:57:14 minikube kubelet[2352]: E1226 08:57:14.483441 2352 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "mypod": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:315: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown Dec 26 08:57:14 minikube kubelet[2352]: E1226 08:57:14.483451 2352 kuberuntime_manager.go:729] createPodSandbox for pod "mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "mypod": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:315: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown Dec 26 08:57:14 minikube kubelet[2352]: E1226 08:57:14.483481 2352 pod_workers.go:191] Error syncing pod b0950d20-5207-4c4f-abac-ea6e877cfb01 ("mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)"), skipping: failed to "CreatePodSandbox" for "mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)" with CreatePodSandboxError: "CreatePodSandbox for pod \"mypod_mynamespace(b0950d20-5207-4c4f-abac-ea6e877cfb01)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"mypod\": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused \"process_linux.go:315: copying bootstrap data to pipe caused \\"write init-p: broken pipe\\"\": unknown" Dec 26 08:57:14 minikube kubelet[2352]: W1226 08:57:14.483487 2352 container.go:412] Failed to create summary reader for "/kubepods/podb0950d20-5207-4c4f-abac-ea6e877cfb01/fabba139a55ef84b1b762230241c432780e5b09db1576dc053aeb2446fe03b0b": none of the resources are being tracked. Dec 26 08:57:15 minikube kubelet[2352]: W1226 08:57:15.343132 2352 pod_container_deletor.go:75] Container "fabba139a55ef84b1b762230241c432780e5b09db1576dc053aeb2446fe03b0b" not found in pod's containers

==> kubernetes-dashboard ["536d24f37aff"] <== 2019/12/26 08:54:40 Starting overwatch 2019/12/26 08:54:40 Using namespace: kubernetes-dashboard 2019/12/26 08:54:40 Using in-cluster config to connect to apiserver 2019/12/26 08:54:40 Using secret token for csrf signing 2019/12/26 08:54:40 Initializing csrf token from kubernetes-dashboard-csrf secret 2019/12/26 08:54:40 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf 2019/12/26 08:54:40 Successful initial request to the apiserver, version: v1.17.0 2019/12/26 08:54:40 Generating JWE encryption key 2019/12/26 08:54:40 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting 2019/12/26 08:54:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard 2019/12/26 08:54:40 Initializing JWE encryption key from synchronized object 2019/12/26 08:54:40 Creating in-cluster Sidecar client 2019/12/26 08:54:40 Successful request to sidecar 2019/12/26 08:54:40 Serving insecurely on HTTP port: 9090

==> kubernetes-dashboard ["9c5670570779"] <== 2019/12/26 08:53:48 Starting overwatch panic: Get https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf: dial tcp 10.96.0.1:443: i/o timeout

2019/12/26 08:53:48 Using in-cluster config to connect to apiserver
2019/12/26 08:53:48 Initializing csrf token from kubernetes-dashboard-csrf secret

goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc000b0df40)
	/home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:40 +0x38b
github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
	/home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:65
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc000343c00)
	/home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:494 +0xc7
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc000343c00)
	/home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:462 +0x47
github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
	/home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:543
main.main()
	/home/travis/build/kubernetes/dashboard/src/app/backend/dashboard.go:105 +0x212

==> storage-provisioner ["2a4fe2550fca"] <== F1226 08:54:18.388874 1 main.go:37] Error getting server version: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout

==> storage-provisioner ["82caa797fc68"] <==

The output of the minikube version command:

minikube version: v1.6.2
commit: 54f28ac5d3a815d1196cd5d57d707439ee4bb392

The output of the kubectl version command:

Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-13T11:51:44Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:12:17Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}

The operating system version: macOS Mojave 10.14.6

SwEngin commented 4 years ago

Things I tried:

Neither of them worked.

SwEngin commented 4 years ago

It worked all of a sudden.

SwEngin commented 4 years ago

There is definitely a network problem with the latest release: I have to restart minikube every time a pod is created in order for its image to be pulled successfully, and the same goes for anything else that communicates with the internet.
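A quick way to sanity-check this is to run a throwaway pod that only exercises DNS and the network path out of the cluster. A minimal sketch (the pod name here is made up, and it assumes the busybox image can still be pulled, e.g. right after a minikube restart):

apiVersion: v1
kind: Pod
metadata:
  name: net-check   # hypothetical name, purely for this check
spec:
  restartPolicy: Never
  containers:
    - name: main
      image: busybox
      # nslookup exercises cluster DNS and outbound connectivity;
      # if this hangs or fails, the network problem is still present
      command: ["nslookup", "kubernetes.io"]

If the lookup times out, the problem is in the pod network rather than in any particular pod spec.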

tstromberg commented 4 years ago

I don't yet have a clear way to replicate this issue. Do you mind adding some additional details? Here is the information that would be helpful:

Thank you for sharing your experience!

tstromberg commented 4 years ago

I'm also curious about the weavenet comment, since it isn't automatically set up by minikube.

STRRL commented 4 years ago

Same issue, but no weave component in my minikube.

And the network seems to work well.

It occurs when I try an example from Kubernetes in Action.

Here is the object:

apiVersion: v1
kind: Pod
metadata:
  name: downward-env
spec:
  containers:
    - name: main
      image: busybox
      command: ["sleep", "9999999"]
      resources:
        requests:
          cpu: 15m
          memory: 100Ki
        limits:
          # cpu: 100m
          memory: 4Mi
      env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
        - name: CONTAINER_CPU_REQUEST_MILLICORES
          valueFrom:
            resourceFieldRef:
              resource: requests.cpu
              divisor: 1m
        - name: CONTAINER_MEMORY_LIMIT_KIBIBYTES
          valueFrom:
            resourceFieldRef:
              resource: limits.memory
              divisor: 1Ki

STRRL commented 4 years ago

After increasing the memory resource limit to 20Mi, it worked.

It seems to have been caused by OOM.
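For reference, here is a sketch of the fix, assuming the same pod spec as above with only the resources block changed. A 4Mi limit leaves so little room in the pod's memory cgroup that the runtime's init process can be OOM-killed before the sandbox even starts, which is consistent with the repeated "write init-p: broken pipe" errors in the logs above:

      resources:
        requests:
          cpu: 15m
          memory: 100Ki
        limits:
          # cpu: 100m
          # Raising the memory limit from 4Mi to 20Mi is what allowed the pod to start.
          memory: 20Mi

20Mi is simply the value that was tested here, not a magic threshold; anything comfortably above the runtime's startup overhead should behave the same way.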

iamshijun commented 4 years ago

After increasing the memory resource limit to 20Mi, it worked.

It seems to have been caused by OOM.

You saved my day.

SwEngin commented 4 years ago

Sorry for the late answer. After the latest update, everything works perfectly fine.

sgc109 commented 3 years ago

@STRRL I was having exactly the same problem as you (with the Kubernetes in Action example). Thank you!!!