kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Unable to build images with Minikube as the target registry #7911

Closed cmeans closed 4 years ago

cmeans commented 4 years ago

Steps to reproduce the issue:

  1. minikube start
  2. eval $(minikube docker-env)
  2a. docker images //displays minikube related images (see the check sketched after this list)
  3. docker build -t identity .
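
Not part of the original report, but a quick sanity check that step 2 really pointed the Docker client at the VM's daemon (the commands and expected values below are my suggestion, not something the reporter ran):

    # after `eval $(minikube docker-env)` the client should target the in-VM daemon
    echo $DOCKER_HOST                    # e.g. tcp://192.168.64.9:2376 (the VM IP that appears in the logs below)
    docker info --format '{{.Name}}'     # expected to print "minikube" once the env vars are in effect
    docker images | grep k8s.gcr.io      # the Kubernetes system images confirm this is the VM's daemon, not the host's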

Full output of failed command:

Sending build context to Docker daemon 126.4MB
Step 1/12 : FROM ruby:2.5.7
Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
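
The timeout above is raised by the Docker daemon inside the minikube VM, not by the host, so a reasonable first check (my sketch, not part of the report; it assumes curl and nslookup are available in the VM image) is whether the VM itself has outbound access to Docker Hub:

    # run the DNS lookup and the same HTTPS request from inside the VM
    minikube ssh -- nslookup registry-1.docker.io
    minikube ssh -- curl -sI --max-time 10 https://registry-1.docker.io/v2/
    # pulling the base image directly goes through the same daemon and should fail the same way
    docker pull ruby:2.5.7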

Full output of minikube start command used, if not already included:

↳ $ minikube start
😄  minikube v1.9.2 on Darwin 10.15.4
✨  Using the hyperkit driver based on existing profile
👍  Starting control plane node m01 in cluster minikube
🔄  Restarting existing hyperkit VM for "minikube" ...
🐳  Preparing Kubernetes v1.18.0 on Docker 19.03.8 ...
❗  This VM is having trouble accessing https://k8s.gcr.io
💡  To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
🌟  Enabling addons: dashboard, default-storageclass, ingress, ingress-dns, registry, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"
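
The proxy hint in the output above points at the documented workaround. If the Mac sits behind a proxy, something like the sketch below is what the linked page describes; the proxy host and port are placeholders, and recreating the VM so the daemon picks up the new environment is my assumption:

    # placeholders: replace proxy.example.com:3128 with the real proxy
    export HTTP_PROXY=http://proxy.example.com:3128
    export HTTPS_PROXY=http://proxy.example.com:3128
    export NO_PROXY=localhost,127.0.0.1,192.168.64.0/24   # keep the hyperkit subnet off the proxy
    minikube delete                                        # recreate the VM with the new settings
    minikube start --driver=hyperkit \
      --docker-env HTTP_PROXY=$HTTP_PROXY \
      --docker-env HTTPS_PROXY=$HTTPS_PROXY \
      --docker-env NO_PROXY=$NO_PROXY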

Optional: Full output of minikube logs command:

==> Docker <== -- Logs begin at Sat 2020-04-25 22:27:59 UTC, end at Sat 2020-04-25 22:34:24 UTC. -- Apr 25 22:29:08 minikube dockerd[1862]: time="2020-04-25T22:29:08.235889493Z" level=info msg="parsed scheme: \"unix\"" module=grpc Apr 25 22:29:08 minikube dockerd[1862]: time="2020-04-25T22:29:08.236009993Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Apr 25 22:29:08 minikube dockerd[1862]: time="2020-04-25T22:29:08.236107549Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc Apr 25 22:29:08 minikube dockerd[1862]: time="2020-04-25T22:29:08.236132307Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Apr 25 22:29:08 minikube dockerd[1862]: time="2020-04-25T22:29:08.238213979Z" level=info msg="parsed scheme: \"unix\"" module=grpc Apr 25 22:29:08 minikube dockerd[1862]: time="2020-04-25T22:29:08.238319655Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Apr 25 22:29:08 minikube dockerd[1862]: time="2020-04-25T22:29:08.238349317Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc Apr 25 22:29:08 minikube dockerd[1862]: time="2020-04-25T22:29:08.238363734Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Apr 25 22:29:08 minikube dockerd[1862]: time="2020-04-25T22:29:08.444328649Z" level=warning msg="Your kernel does not support cgroup blkio weight" Apr 25 22:29:08 minikube dockerd[1862]: time="2020-04-25T22:29:08.444417671Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Apr 25 22:29:08 minikube dockerd[1862]: time="2020-04-25T22:29:08.444435672Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device" Apr 25 22:29:08 minikube dockerd[1862]: time="2020-04-25T22:29:08.444461677Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device" Apr 25 22:29:08 minikube dockerd[1862]: time="2020-04-25T22:29:08.444472700Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device" Apr 25 22:29:08 minikube dockerd[1862]: time="2020-04-25T22:29:08.444482412Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device" Apr 25 22:29:08 minikube dockerd[1862]: time="2020-04-25T22:29:08.444867671Z" level=info msg="Loading containers: start." Apr 25 22:29:08 minikube dockerd[1862]: time="2020-04-25T22:29:08.954998726Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Apr 25 22:29:09 minikube dockerd[1862]: time="2020-04-25T22:29:09.076988843Z" level=info msg="Loading containers: done." Apr 25 22:29:09 minikube dockerd[1862]: time="2020-04-25T22:29:09.111404227Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8 Apr 25 22:29:09 minikube dockerd[1862]: time="2020-04-25T22:29:09.112323514Z" level=info msg="Daemon has completed initialization" Apr 25 22:29:09 minikube dockerd[1862]: time="2020-04-25T22:29:09.133654323Z" level=info msg="API listen on /var/run/docker.sock" Apr 25 22:29:09 minikube dockerd[1862]: time="2020-04-25T22:29:09.133773920Z" level=info msg="API listen on [::]:2376" Apr 25 22:29:09 minikube systemd[1]: Started Docker Application Container Engine. 
Apr 25 22:29:19 minikube dockerd[1862]: time="2020-04-25T22:29:19.083453582Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/582e5fed3a675bcd3edcc1736c747f15ad52ae7168e70d3c352cdb8354e72b82/shim.sock" debug=false pid=2794 Apr 25 22:29:19 minikube dockerd[1862]: time="2020-04-25T22:29:19.098420170Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0203e2b93630f18a1bfb7eeb420311163556cb3f1c153274410ce84c730c463c/shim.sock" debug=false pid=2798 Apr 25 22:29:19 minikube dockerd[1862]: time="2020-04-25T22:29:19.255657447Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a44b4e0865b1b466733fd845f091263f8f8ca3fdd6a27150aebd6a1439302cac/shim.sock" debug=false pid=2865 Apr 25 22:29:19 minikube dockerd[1862]: time="2020-04-25T22:29:19.303122050Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/988f0bdba06b769251530d9a12c81084a5fd64a62016d2f466321e97864e7a99/shim.sock" debug=false pid=2884 Apr 25 22:29:19 minikube dockerd[1862]: time="2020-04-25T22:29:19.677528885Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2137eeb991380621d50dc364d30b034353d73156552e104b9e042167378c9269/shim.sock" debug=false pid=3016 Apr 25 22:29:19 minikube dockerd[1862]: time="2020-04-25T22:29:19.717858454Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b04e5bf485e953ac8591b5b4c592390e6b7ff13684d229ed9ad38400628ecb97/shim.sock" debug=false pid=3038 Apr 25 22:29:19 minikube dockerd[1862]: time="2020-04-25T22:29:19.806222930Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/13842c78e71bbfd9a7a13f19de8c1e8555dac3e1b6c3887b7bcefa30eca05a9e/shim.sock" debug=false pid=3063 Apr 25 22:29:19 minikube dockerd[1862]: time="2020-04-25T22:29:19.827249136Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/78f9e648bc5d6d91136efe3cdbf46c634328d833c86971671ee04605ee71fa4b/shim.sock" debug=false pid=3084 Apr 25 22:29:31 minikube dockerd[1862]: time="2020-04-25T22:29:31.833970383Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/47dcf51759a887487916525836a8221681103c1ab9ba9c334bb5500313a57ee9/shim.sock" debug=false pid=3651 Apr 25 22:29:31 minikube dockerd[1862]: time="2020-04-25T22:29:31.977331379Z" level=warning msg="Published ports are discarded when using host network mode" Apr 25 22:29:32 minikube dockerd[1862]: time="2020-04-25T22:29:32.149192333Z" level=warning msg="Published ports are discarded when using host network mode" Apr 25 22:29:32 minikube dockerd[1862]: time="2020-04-25T22:29:32.749744829Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ec29fc65a1685bf6ced1f53805049c4f41058dfa3c3614019d160c0f8fa9d43d/shim.sock" debug=false pid=3713 Apr 25 22:29:32 minikube dockerd[1862]: time="2020-04-25T22:29:32.816197126Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d48b755c2353d11099161f1e0ed807d970227bab5f830b530372cc5636f43edc/shim.sock" debug=false pid=3729 Apr 25 22:29:33 minikube dockerd[1862]: time="2020-04-25T22:29:33.104276846Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ffa22f6614754216dace7f15c712f81c91312f5f7ef85811d4c9416da6566c53/shim.sock" debug=false pid=3796 Apr 25 22:29:33 minikube dockerd[1862]: time="2020-04-25T22:29:33.691986729Z" level=info msg="shim containerd-shim started" 
address="/containerd-shim/moby/1c59ecf9618633ad9c5545baaca9f356229aa13fa2910baa15817d640c9fe65f/shim.sock" debug=false pid=3883 Apr 25 22:29:33 minikube dockerd[1862]: time="2020-04-25T22:29:33.698941779Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ff3ddadfdcb06f876abc0fc2531867878a4850913bb3679514341e693a9bc310/shim.sock" debug=false pid=3889 Apr 25 22:29:33 minikube dockerd[1862]: time="2020-04-25T22:29:33.840706736Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1ecb0d17e7bfefd2dac541d8f6a81f20869702665e0737591931bdfa6b439044/shim.sock" debug=false pid=3933 Apr 25 22:29:33 minikube dockerd[1862]: time="2020-04-25T22:29:33.864576070Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0d6184489d1cf7d7cd0f2094314fa4988908d68948da795fe0042b1f093c6c42/shim.sock" debug=false pid=3944 Apr 25 22:29:33 minikube dockerd[1862]: time="2020-04-25T22:29:33.927533749Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f1a52ad6ad28fb84516e0f221eb90c27eecc123c93076fe0c559460e5bafd2ad/shim.sock" debug=false pid=3970 Apr 25 22:29:34 minikube dockerd[1862]: time="2020-04-25T22:29:34.497005816Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f2775ac142f6e48aa056e09b6bf3e8aeafd58f47b7edc1efaa3086835760d898/shim.sock" debug=false pid=4066 Apr 25 22:29:34 minikube dockerd[1862]: time="2020-04-25T22:29:34.623519864Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/babeaec1253c8022fb95bdbed399eafbc826f2112aa7ea19fc41dd3dec1d5dc8/shim.sock" debug=false pid=4077 Apr 25 22:29:35 minikube dockerd[1862]: time="2020-04-25T22:29:35.081403485Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d3c94e38c218f32d7559f031e427bc8b8d958c99aa08ec669efd1636b3d99056/shim.sock" debug=false pid=4162 Apr 25 22:29:36 minikube dockerd[1862]: time="2020-04-25T22:29:36.042552948Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/080752d8c2efd6a11c0fc7ed48ee5fea72d4e0d46ca5115b22cb42bdb8713fc0/shim.sock" debug=false pid=4280 Apr 25 22:29:37 minikube dockerd[1862]: time="2020-04-25T22:29:37.087121391Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/099ac00a10764231910d44cb4158aef6732974704c6292c25037d0dbd04b9801/shim.sock" debug=false pid=4410 Apr 25 22:29:37 minikube dockerd[1862]: time="2020-04-25T22:29:37.364612220Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/be23e6a66f1f53adfa53ab59ce556df4eca279bdcdc3e14888b4c2c8981e5297/shim.sock" debug=false pid=4444 Apr 25 22:29:38 minikube dockerd[1862]: time="2020-04-25T22:29:38.064653289Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c4031bbc66ec32681cf5fc776a4591c857dfe27be22f3c2820e6922c4a9af3eb/shim.sock" debug=false pid=4503 Apr 25 22:29:38 minikube dockerd[1862]: time="2020-04-25T22:29:38.285819550Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/99f6b11f68d41ff25fda3b45d5b11214b219c7be82772cc67919b010b128b747/shim.sock" debug=false pid=4528 Apr 25 22:29:39 minikube dockerd[1862]: time="2020-04-25T22:29:39.026902327Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b282610299e5d1a993ac00f7b29d650aa5d7add3e06292f21f70635ac22b3bdd/shim.sock" debug=false pid=4608 Apr 25 22:29:40 minikube dockerd[1862]: time="2020-04-25T22:29:39.999501068Z" level=info msg="shim containerd-shim started" 
address="/containerd-shim/moby/8b428671eea1ba28d398d55a74a14563bc1ea2089a8f157161d275b13faee2a6/shim.sock" debug=false pid=4705 Apr 25 22:29:40 minikube dockerd[1862]: time="2020-04-25T22:29:40.070961651Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c0fbf9152d3e20e02be35982008398072cf0540001c16469e2a1528e95981ac7/shim.sock" debug=false pid=4715 Apr 25 22:30:09 minikube dockerd[1862]: time="2020-04-25T22:30:09.870437757Z" level=info msg="shim reaped" id=f2775ac142f6e48aa056e09b6bf3e8aeafd58f47b7edc1efaa3086835760d898 Apr 25 22:30:09 minikube dockerd[1862]: time="2020-04-25T22:30:09.880713348Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 25 22:30:11 minikube dockerd[1862]: time="2020-04-25T22:30:11.060530125Z" level=info msg="shim reaped" id=c4031bbc66ec32681cf5fc776a4591c857dfe27be22f3c2820e6922c4a9af3eb Apr 25 22:30:11 minikube dockerd[1862]: time="2020-04-25T22:30:11.070785523Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 25 22:30:23 minikube dockerd[1862]: time="2020-04-25T22:30:23.862275029Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/010f9f4554e7fcea266c1c33533dee020e2062d8dacd3715ebd0604b22b7f372/shim.sock" debug=false pid=5354 Apr 25 22:30:31 minikube dockerd[1862]: time="2020-04-25T22:30:31.838155756Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ba7abb45cd1db9d3f0d56d788d10d50ffc077a8adbafbc6ee0262ba74fcd6214/shim.sock" debug=false pid=5426 Apr 25 22:31:30 minikube dockerd[1862]: time="2020-04-25T22:31:30.276549404Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 25 22:31:30 minikube dockerd[1862]: time="2020-04-25T22:31:30.277312192Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" ==> container status <== CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID ba7abb45cd1db cdc71b5a8a0ee 3 minutes ago Running kubernetes-dashboard 5 1c59ecf961863 010f9f4554e7f 4689081edb103 4 minutes ago Running storage-provisioner 9 ec29fc65a1685 c0fbf9152d3e2 3b08661dc379d 4 minutes ago Running dashboard-metrics-scraper 4 ff3ddadfdcb06 8b428671eea1b 29024c9c6e706 4 minutes ago Running nginx-ingress-controller 1 080752d8c2efd b282610299e5d 708bc6af7e5e5 4 minutes ago Running registry 1 f1a52ad6ad28f 99f6b11f68d41 60dc18151daf8 4 minutes ago Running registry-proxy 1 0d6184489d1cf c4031bbc66ec3 cdc71b5a8a0ee 4 minutes ago Exited kubernetes-dashboard 4 1c59ecf961863 099ac00a10764 43940c34f24f3 4 minutes ago Running kube-proxy 4 1ecb0d17e7bfe be23e6a66f1f5 67da37a9a360e 4 minutes ago Running coredns 4 ffa22f6614754 d3c94e38c218f 2a2abf12e45e3 4 minutes ago Running minikube-ingress-dns 1 d48b755c2353d babeaec1253c8 67da37a9a360e 4 minutes ago Running coredns 4 47dcf51759a88 f2775ac142f6e 4689081edb103 4 minutes ago Exited storage-provisioner 8 ec29fc65a1685 78f9e648bc5d6 303ce5db0e90d 5 minutes ago Running etcd 4 988f0bdba06b7 13842c78e71bb a31f78c7c8ce1 5 minutes ago Running kube-scheduler 4 a44b4e0865b1b 2137eeb991380 d3e55153f52fb 5 minutes ago Running kube-controller-manager 4 0203e2b93630f b04e5bf485e95 74060cea7f704 5 minutes ago Running kube-apiserver 4 
582e5fed3a675 0713fc71eecf7 quay.io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:d0b22f715fcea5598ef7f869d308b55289a3daaa12922fa52a1abf17703c88e7 4 hours ago Exited nginx-ingress-controller 0 e15a2fc4d2e96 ecb45f648c18d gcr.io/google_containers/kube-registry-proxy@sha256:1040f25a5273de0d72c54865a8efd47e3292de9fb8e5353e3fa76736b854f2da 4 hours ago Exited registry-proxy 0 b7ffc8aa67e75 6ce82e0c951bd registry.hub.docker.com/library/registry@sha256:7d081088e4bfd632a88e3f3bcd9e007ef44a796fddfe3261407a3f9f04abe1e7 4 hours ago Exited registry 0 86c461b51619c 51777d8862402 cryptexlabs/minikube-ingress-dns@sha256:d07dfd1b882d8ee70d71514434c10fdd8c54d347b5a883323154d6096f1e8c67 4 hours ago Exited minikube-ingress-dns 0 3d8c9803bc4a4 019fdb1f37cac 3b08661dc379d 4 hours ago Exited dashboard-metrics-scraper 3 0351faa1c64ed 4b8cb6b53f7ef 67da37a9a360e 4 hours ago Exited coredns 3 4603f728e6587 7d268289d822b 67da37a9a360e 4 hours ago Exited coredns 3 8082e42b729b3 86e5694249736 43940c34f24f3 4 hours ago Exited kube-proxy 3 03b0efd6a5e57 bbbae162f8ad3 d3e55153f52fb 4 hours ago Exited kube-controller-manager 3 49b7ce3dc7dbc 0566c6e4edab3 74060cea7f704 4 hours ago Exited kube-apiserver 3 073d86c9becb1 088044dedb344 303ce5db0e90d 4 hours ago Exited etcd 3 908111fbbac88 6eb85fd0f2aa7 a31f78c7c8ce1 4 hours ago Exited kube-scheduler 3 c85a9f926cc14 ==> coredns [4b8cb6b53f7e] <== I0425 18:35:02.093717 1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-04-25 18:34:32.092547115 +0000 UTC m=+0.047715423) (total time: 30.001070222s): Trace[2019727887]: [30.001070222s] [30.001070222s] END E0425 18:35:02.093738 1 reflector.go:153][pkg/mod/kNF.]i op/lculgiienntr-egod@y:0 St7l.2 /wtaitlin/cocn:e /"rkeufbleecntot.es":105: Failed to l3ist *v1N.FNO]mespacleu: iGn/t lhotatdps R/u/n1nin9g .0n.1i:g4u4r3, /gap1/.v13/.na,m sdpaacfs6?5limit=500&reNsFoOr] epVeugin/ready: Still waitingr soino:n ="0: bdernetial" cp 10.96.0.1:443: i/o timeout I0425 18:35:02.096704 [ I FO] plu itnr/aecadey. 
gSo:116] lT rwaitcieg[ 1o4n2: 1k3u1e8rne]t:es" eflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-04-25 18:34:32.09628508 +0000 UTC m=+0.051453377) (total time: 30.000399427s): Trace[1427131847]: [30.000399427s] [30.000399427s] END E0425 18:35:02.096717 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout I0425 18:35:02.096990 1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-04-25 18:34:32.096285682 +0000 UTC m=+0.051453990) (total time: 30.000687165s): Trace[939984059]: [30.000687165s] [30.000687165s] END E0425 18:35:02.097004 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout ==> coredns [7d268289d822] <== .:53 [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7 CoreDNS-1.6.7 linux/amd64, go1.13.6, da7f65b [INFO] plugin/ready: Still waiting on: "kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes" I0425 18:35:01.513169 1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-04-25 18:34:31.510398926 +0000 UTC m=+0.253461361) (total time: 30.002619883s): Trace[2019727887]: [30.002619883s] [30.002619883s] END E0425 18:35:01.513200 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout I0425 18:35:01.513412 1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-04-25 18:34:31.51017762 +0000 UTC m=+0.253240077) (total time: 30.003154849s): Trace[1427131847]: [30.003154849s] [30.003154849s] END E0425 18:35:01.513420 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout I0425 18:35:01.514954 1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-04-25 18:34:31.511677507 +0000 UTC m=+0.254739928) (total time: 30.003254317s): Trace[939984059]: [30.003254317s] [30.003254317s] END E0425 18:35:01.514965 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout ==> coredns [babeaec1253c] <== I0425 22:30:06.595314 1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-04-25 22:29:36.59208816 +0000 UTC m=+0.222324079) (total time: 30.002936061s): Trace[2019727887]: [30.002936061s] [30.002936061s] END E0425 22:30:06.595347 1 ref.lector.g:o:31 53] pkg/mod/k8[sINiFoO]c lplugti-n/orevoa.d: .un/ning c/onfcghuer/ation eMD5 =. 
geo2:51fcc:36Faile96 96o le7s6t81*6vb1c.d9034sepca7 e: GetCorteDps-:1/6/.70.96l.in.u1x/4ait4ng op: /"v1benranmteespaces?INiFOi] p5u0g0inr/resaoyu: cSetill waiting on: "kuberVetssi" n=0: dial tcp 10.96.0.1:443: i/o timeout I0425 22:30:06.596349 1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-04-25 22:29:36.594245586 +0000 UTC m=+0.224481500) (total time: 30.00082908s): Trace[1427131847]: [30.00082908s] [30.00082908s] END E0425 22:30:06.596457 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout I0425 22:30:06.597458 1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-04-25 22:29:36.592084967 +0000 UTC m=+0.222320874) (total time: 30.005351377s): Trace[939984059]: [30.005351377s] [30.005351377s] END E0425 22:30:06.597559 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout ==> coredns [be23e6a66f1f] <== I0425 22:30:09.107758 1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-04-25 22:29:39.106830757 +0000 UTC m=+0.056039158) (total time: 30.000693222s): Trace[2019727887]: [30.0006.93222s] [30.0:05036 93222s] END [IEN0F4O2]5 2lu:g3n0/re9l.o1ad7:8 Ru9n n i n c n1f irgerfalteiotno rM.Dg5o :=1 543e]2 3kgc/cmod/k896.i6o/el76en6tbcod@9v003.41b7c7/ toolsC/ocraeDhSe-/1r.6.l7ector.gloin1u0x5/a mFda6il egdo 1t.o13l6i,s t a*7v1[.INNaFmOe]s ppalcueg:inGretd ht tStsi:l//10a.i9t6i.0g.1o:44 3"/kauberne/tneasm"e spa[cINsF?]l pmliutg=in/&reeasdoyur cSetVersion=0: dial tcp 10.96.0.1:443: i/o timeout I0425 22:30:09.108668 1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/clienitl-lgow@avit.i1ng. 
2o/to"okluse/rcnaethees" eflector.go:105 (started: 2020-04-25 22:29:39.107670162 +0000 UTC m=+0.056878556) (total time: 30.000960965s): Trace[1427131847]: [30.000960965s] [30.000960965s] END E0425 22:30:09.108687 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout I0425 22:30:09.108807 1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-04-25 22:29:39.107655395 +0000 UTC m=+0.056863780) (total time: 30.001136764s): Trace[939984059]: [30.001136764s] [30.001136764s] END E0425 22:30:09.108817 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout ==> describe nodes <== Name: minikube Roles: master Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=minikube kubernetes.io/os=linux minikube.k8s.io/commit=93af9c1e43cab9618e301bc9fa720c63d5efa393 minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2020_04_25T12_42_31_0700 minikube.k8s.io/version=v1.9.2 node-role.kubernetes.io/master= Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Sat, 25 Apr 2020 17:42:26 +0000 Taints: Unschedulable: false Lease: HolderIdentity: minikube AcquireTime: RenewTime: Sat, 25 Apr 2020 22:34:21 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Sat, 25 Apr 2020 22:29:33 +0000 Sat, 25 Apr 2020 17:42:18 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Sat, 25 Apr 2020 22:29:33 +0000 Sat, 25 Apr 2020 17:42:18 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Sat, 25 Apr 2020 22:29:33 +0000 Sat, 25 Apr 2020 17:42:18 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Sat, 25 Apr 2020 22:29:33 +0000 Sat, 25 Apr 2020 17:42:47 +0000 KubeletReady kubelet is posting ready status Addresses: InternalIP: 192.168.64.9 Hostname: minikube Capacity: cpu: 2 ephemeral-storage: 16954224Ki hugepages-2Mi: 0 memory: 3936948Ki pods: 110 Allocatable: cpu: 2 ephemeral-storage: 16954224Ki hugepages-2Mi: 0 memory: 3936948Ki pods: 110 System Info: Machine ID: 2d936fc993f74e498f756e249e259f95 System UUID: fb7a11ea-0000-0000-927a-acde48001122 Boot ID: fc6b0572-2f3f-4bc3-8e86-ed276cac66ae Kernel Version: 4.19.107 OS Image: Buildroot 2019.02.10 Operating System: linux Architecture: amd64 Container Runtime Version: docker://19.3.8 Kubelet Version: v1.18.0 Kube-Proxy Version: v1.18.0 Non-terminated Pods: (14 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE --------- ---- ------------ ---------- --------------- ------------- --- kube-system coredns-66bff467f8-9gkzt 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 4h51m kube-system coredns-66bff467f8-nqk76 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 4h51m kube-system etcd-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4h51m kube-system kube-apiserver-minikube 250m (12%) 0 (0%) 0 (0%) 0 (0%) 4h51m kube-system kube-controller-manager-minikube 200m (10%) 0 (0%) 0 (0%) 0 (0%) 
4h51m kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4h49m kube-system kube-proxy-gjh8v 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4h51m kube-system kube-scheduler-minikube 100m (5%) 0 (0%) 0 (0%) 0 (0%) 4h51m kube-system nginx-ingress-controller-6d57c87cb9-ghmbv 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4h49m kube-system registry-4xt8d 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4h49m kube-system registry-proxy-g5zjd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4h49m kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4h51m kubernetes-dashboard dashboard-metrics-scraper-84bfdf55ff-jc98b 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4h49m kubernetes-dashboard kubernetes-dashboard-bc446cc64-kvr4w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4h49m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 750m (37%) 0 (0%) memory 140Mi (3%) 340Mi (8%) ephemeral-storage 0 (0%) 0 (0%) hugepages-2Mi 0 (0%) 0 (0%) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Starting 5m8s kubelet, minikube Starting kubelet. Normal NodeHasSufficientMemory 5m7s (x8 over 5m8s) kubelet, minikube Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 5m7s (x8 over 5m8s) kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 5m7s (x7 over 5m8s) kubelet, minikube Node minikube status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 5m7s kubelet, minikube Updated Node Allocatable limit across pods Normal Starting 4m45s kube-proxy, minikube Starting kube-proxy. ==> dmesg <== [Apr25 22:27] ERROR: earlyprintk= earlyser already used [ +0.000000] You have booted with nomodeset. This means your GPU drivers are DISABLED [ +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly [ +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it [ +0.182076] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xC0, should be 0x1D (20180810/tbprint-177) [ +6.438387] ACPI Error: Could not enable RealTimeClock event (20180810/evxfevnt-184) [ +0.000003] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20180810/evxface-620) [ +0.015421] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2 [ +2.841475] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument [ +0.015963] systemd-fstab-generator[1101]: Ignoring "noauto" for root device [ +0.004685] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling. [ +0.000003] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.) [Apr25 22:28] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack. [ +0.007323] vboxguest: loading out-of-tree module taints kernel. [ +0.004011] vboxguest: PCI device not found, probably running on physical hardware. 
[Apr25 22:29] systemd-fstab-generator[1852]: Ignoring "noauto" for root device [ +1.858323] systemd-fstab-generator[2145]: Ignoring "noauto" for root device [ +7.766303] kauditd_printk_skb: 107 callbacks suppressed [ +14.148898] kauditd_printk_skb: 32 callbacks suppressed [ +9.138671] kauditd_printk_skb: 47 callbacks suppressed [ +16.268631] kauditd_printk_skb: 38 callbacks suppressed [Apr25 22:30] NFSD: Unable to end grace period: -110 [ +4.882339] kauditd_printk_skb: 2 callbacks suppressed [ +6.575656] kauditd_printk_skb: 2 callbacks suppressed ==> etcd [088044dedb34] <== 2020-04-25 20:14:20.183439 I | mvcc: store.index: compact 19846 2020-04-25 20:14:20.197157 I | mvcc: finished scheduled compaction at 19846 (took 13.065193ms) 2020-04-25 20:19:20.189245 I | mvcc: store.index: compact 20542 2020-04-25 20:19:20.203548 I | mvcc: finished scheduled compaction at 20542 (took 13.514399ms) 2020-04-25 20:24:20.195055 I | mvcc: store.index: compact 21239 2020-04-25 20:24:20.214436 I | mvcc: finished scheduled compaction at 21239 (took 18.581185ms) 2020-04-25 20:29:20.203032 I | mvcc: store.index: compact 21934 2020-04-25 20:29:20.216847 I | mvcc: finished scheduled compaction at 21934 (took 13.246626ms) 2020-04-25 20:34:20.208511 I | mvcc: store.index: compact 22631 2020-04-25 20:34:20.223341 I | mvcc: finished scheduled compaction at 22631 (took 14.105298ms) 2020-04-25 20:39:20.227003 I | mvcc: store.index: compact 23328 2020-04-25 20:39:20.241281 I | mvcc: finished scheduled compaction at 23328 (took 13.790203ms) 2020-04-25 20:44:20.236230 I | mvcc: store.index: compact 24025 2020-04-25 20:44:20.250476 I | mvcc: finished scheduled compaction at 24025 (took 13.851641ms) 2020-04-25 20:49:20.240854 I | mvcc: store.index: compact 24721 2020-04-25 20:49:20.254422 I | mvcc: finished scheduled compaction at 24721 (took 13.099499ms) 2020-04-25 20:54:20.250129 I | mvcc: store.index: compact 25415 2020-04-25 20:54:20.264914 I | mvcc: finished scheduled compaction at 25415 (took 14.2195ms) 2020-04-25 20:57:33.920162 I | etcdserver: start to snapshot (applied: 30003, lastsnap: 20002) 2020-04-25 20:57:33.926290 I | etcdserver: saved snapshot at index 30003 2020-04-25 20:57:33.928144 I | etcdserver: compacted raft log at 25003 2020-04-25 20:59:20.256925 I | mvcc: store.index: compact 26111 2020-04-25 20:59:20.271897 I | mvcc: finished scheduled compaction at 26111 (took 14.650818ms) 2020-04-25 21:04:20.267007 I | mvcc: store.index: compact 26807 2020-04-25 21:04:20.282515 I | mvcc: finished scheduled compaction at 26807 (took 14.903794ms) 2020-04-25 21:09:20.278269 I | mvcc: store.index: compact 27502 2020-04-25 21:09:20.294980 I | mvcc: finished scheduled compaction at 27502 (took 13.064161ms) 2020-04-25 21:14:20.290759 I | mvcc: store.index: compact 28199 2020-04-25 21:14:20.304836 I | mvcc: finished scheduled compaction at 28199 (took 13.283951ms) 2020-04-25 21:19:20.297948 I | mvcc: store.index: compact 28894 2020-04-25 21:19:20.312730 I | mvcc: finished scheduled compaction at 28894 (took 14.255079ms) 2020-04-25 21:24:20.304062 I | mvcc: store.index: compact 29588 2020-04-25 21:24:20.317109 I | mvcc: finished scheduled compaction at 29588 (took 12.707914ms) 2020-04-25 21:29:20.309077 I | mvcc: store.index: compact 30284 2020-04-25 21:29:20.322477 I | mvcc: finished scheduled compaction at 30284 (took 13.009719ms) 2020-04-25 21:34:20.316818 I | mvcc: store.index: compact 30978 2020-04-25 21:34:20.333211 I | mvcc: finished scheduled compaction at 30978 (took 14.846488ms) 2020-04-25 21:39:20.324347 I | 
mvcc: store.index: compact 31674 2020-04-25 21:39:20.344561 I | mvcc: finished scheduled compaction at 31674 (took 19.441728ms) 2020-04-25 21:44:20.330772 I | mvcc: store.index: compact 32370 2020-04-25 21:44:20.345564 I | mvcc: finished scheduled compaction at 32370 (took 14.495316ms) 2020-04-25 21:49:20.336887 I | mvcc: store.index: compact 33066 2020-04-25 21:49:20.350299 I | mvcc: finished scheduled compaction at 33066 (took 12.887076ms) 2020-04-25 21:54:20.342449 I | mvcc: store.index: compact 33762 2020-04-25 21:54:20.356468 I | mvcc: finished scheduled compaction at 33762 (took 13.399519ms) 2020-04-25 21:59:20.354523 I | mvcc: store.index: compact 34456 2020-04-25 21:59:20.367957 I | mvcc: finished scheduled compaction at 34456 (took 12.969306ms) 2020-04-25 22:01:09.747981 I | etcdserver: start to snapshot (applied: 40004, lastsnap: 30003) 2020-04-25 22:01:09.754144 I | etcdserver: saved snapshot at index 40004 2020-04-25 22:01:09.755891 I | etcdserver: compacted raft log at 35004 2020-04-25 22:04:20.359745 I | mvcc: store.index: compact 35150 2020-04-25 22:04:20.374166 I | mvcc: finished scheduled compaction at 35150 (took 13.811339ms) 2020-04-25 22:09:20.367210 I | mvcc: store.index: compact 35846 2020-04-25 22:09:20.381452 I | mvcc: finished scheduled compaction at 35846 (took 13.730676ms) 2020-04-25 22:14:20.371773 I | mvcc: store.index: compact 36542 2020-04-25 22:14:20.385838 I | mvcc: finished scheduled compaction at 36542 (took 13.669661ms) 2020-04-25 22:19:20.378403 I | mvcc: store.index: compact 37236 2020-04-25 22:19:20.392308 I | mvcc: finished scheduled compaction at 37236 (took 13.246693ms) 2020-04-25 22:24:20.385043 I | mvcc: store.index: compact 37932 2020-04-25 22:24:20.399914 I | mvcc: finished scheduled compaction at 37932 (took 14.348423ms) ==> etcd [78f9e648bc5d] <== [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead 2020-04-25 22:29:20.568824 I | etcdmain: etcd Version: 3.4.3 2020-04-25 22:29:20.568892 I | etcdmain: Git SHA: 3cf2f69b5 2020-04-25 22:29:20.568899 I | etcdmain: Go Version: go1.12.12 2020-04-25 22:29:20.568903 I | etcdmain: Go OS/Arch: linux/amd64 2020-04-25 22:29:20.568909 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2 2020-04-25 22:29:20.568961 N | etcdmain: the server is already initialized as member before, starting as etcd member... 
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead 2020-04-25 22:29:20.569020 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 2020-04-25 22:29:20.709206 I | embed: name = minikube 2020-04-25 22:29:20.709299 I | embed: data dir = /var/lib/minikube/etcd 2020-04-25 22:29:20.709312 I | embed: member dir = /var/lib/minikube/etcd/member 2020-04-25 22:29:20.709321 I | embed: heartbeat = 100ms 2020-04-25 22:29:20.709326 I | embed: election = 1000ms 2020-04-25 22:29:20.709330 I | embed: snapshot count = 10000 2020-04-25 22:29:20.709343 I | embed: advertise client URLs = https://192.168.64.9:2379 2020-04-25 22:29:20.709348 I | embed: initial advertise peer URLs = https://192.168.64.9:2380 2020-04-25 22:29:20.709355 I | embed: initial cluster = 2020-04-25 22:29:20.805369 I | etcdserver: recovered store from snapshot at index 40004 2020-04-25 22:29:20.812266 I | mvcc: restore compact to 37932 2020-04-25 22:29:23.658989 I | etcdserver: restarting member bcc92fe27c5136b in cluster 5ab2023f7fe478e6 at commit index 44148 raft2020/04/25 22:29:23 INFO: bcc92fe27c5136b switched to configuration voters=(850216049952756587) raft2020/04/25 22:29:23 INFO: bcc92fe27c5136b became follower at term 5 raft2020/04/25 22:29:23 INFO: newRaft bcc92fe27c5136b [peers: [bcc92fe27c5136b], term: 5, commit: 44148, applied: 40004, lastindex: 44148, lastterm: 5] 2020-04-25 22:29:23.661160 I | etcdserver/api: enabled capabilities for version 3.4 2020-04-25 22:29:23.661261 I | etcdserver/membership: added member bcc92fe27c5136b [https://192.168.64.9:2380] to cluster 5ab2023f7fe478e6 from store 2020-04-25 22:29:23.661386 I | etcdserver/membership: set the cluster version to 3.4 from store 2020-04-25 22:29:23.665287 I | mvcc: restore compact to 37932 2020-04-25 22:29:23.684824 W | auth: simple token is not cryptographically signed 2020-04-25 22:29:23.689690 I | etcdserver: starting server... 
[version: 3.4.3, cluster version: 3.4] 2020-04-25 22:29:23.690823 I | etcdserver: bcc92fe27c5136b as single-node; fast-forwarding 9 ticks (election ticks 10) 2020-04-25 22:29:23.698704 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 2020-04-25 22:29:23.699200 I | embed: listening for metrics on http://127.0.0.1:2381 2020-04-25 22:29:23.702313 I | embed: listening for peers on 192.168.64.9:2380 raft2020/04/25 22:29:24 INFO: bcc92fe27c5136b is starting a new election at term 5 raft2020/04/25 22:29:24 INFO: bcc92fe27c5136b became candidate at term 6 raft2020/04/25 22:29:24 INFO: bcc92fe27c5136b received MsgVoteResp from bcc92fe27c5136b at term 6 raft2020/04/25 22:29:24 INFO: bcc92fe27c5136b became leader at term 6 raft2020/04/25 22:29:24 INFO: raft.node: bcc92fe27c5136b elected leader bcc92fe27c5136b at term 6 2020-04-25 22:29:24.667478 I | etcdserver: published {Name:minikube ClientURLs:[https://192.168.64.9:2379]} to cluster 5ab2023f7fe478e6 2020-04-25 22:29:24.667697 I | embed: ready to serve client requests 2020-04-25 22:29:24.667997 I | embed: ready to serve client requests 2020-04-25 22:29:24.670198 I | embed: serving client requests on 192.168.64.9:2379 2020-04-25 22:29:24.670860 I | embed: serving client requests on 127.0.0.1:2379 ==> kernel <== 22:34:27 up 6 min, 0 users, load average: 0.73, 0.92, 0.50 Linux minikube 4.19.107 #1 SMP Thu Mar 26 11:33:10 PDT 2020 x86_64 GNU/Linux PRETTY_NAME="Buildroot 2019.02.10" ==> kube-apiserver [0566c6e4edab] <== /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:248 +0x38a k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007dba540, 0x5147220, 0xc0006ff9d0, 0xc010692200) /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:234 +0x84 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x456239b, 0xf, 0xc007dd4480, 0xc007dba540, 0x5147220, 0xc0006ff9d0, 0xc010692200) /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:154 +0x6b1 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x5147220, 0xc0006ff9d0, 0xc010692200) /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authorization.go:64 +0x512 net/http.HandlerFunc.ServeHTTP(0xc007d87940, 0x5147220, 0xc0006ff9d0, 0xc010692200) /usr/local/go/src/net/http/server.go:2007 +0x44 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x5147220, 0xc0006ff9d0, 0xc010692200) /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/maxinflight.go:126 +0x59f net/http.HandlerFunc.ServeHTTP(0xc007dd7ec0, 0x5147220, 0xc0006ff9d0, 0xc010692200) /usr/local/go/src/net/http/server.go:2007 +0x44 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x5147220, 0xc0006ff9d0, 0xc010692200) 
/workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/impersonation.go:50 +0x1fe6 net/http.HandlerFunc.ServeHTTP(0xc007d87980, 0x5147220, 0xc0006ff9d0, 0xc010692200) /usr/local/go/src/net/http/server.go:2007 +0x44 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x5147220, 0xc0006ff9d0, 0xc010692100) /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authentication.go:70 +0x5ce net/http.HandlerFunc.ServeHTTP(0xc007d81450, 0x5147220, 0xc0006ff9d0, 0xc010692100) /usr/local/go/src/net/http/server.go:2007 +0x44 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP(0xc007dd2ac0, 0x5147220, 0xc0006ff9d0, 0xc010692100) /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:92 +0x462 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithWaitGroup.func1(0x5147220, 0xc0006ff9d0, 0xc010692100) /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/waitgroup.go:59 +0x121 net/http.HandlerFunc.ServeHTTP(0xc007dd7ef0, 0x5147220, 0xc0006ff9d0, 0xc010692100) /usr/local/go/src/net/http/server.go:2007 +0x44 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithRequestInfo.func1(0x5147220, 0xc0006ff9d0, 0xc010692000) /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/requestinfo.go:39 +0x274 net/http.HandlerFunc.ServeHTTP(0xc007dd7f20, 0x5147220, 0xc0006ff9d0, 0xc010692000) /usr/local/go/src/net/http/server.go:2007 +0x44 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.WithLogging.func1(0x513a020, 0xc005853d40, 0xc010491e00) /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:89 +0x2ca net/http.HandlerFunc.ServeHTTP(0xc007dd2ae0, 0x513a020, 0xc005853d40, 0xc010491e00) /usr/local/go/src/net/http/server.go:2007 +0x44 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.withPanicRecovery.func1(0x513a020, 0xc005853d40, 0xc010491e00) /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/wrap.go:51 +0x13e net/http.HandlerFunc.ServeHTTP(0xc007dd2b00, 0x513a020, 0xc005853d40, 0xc010491e00) /usr/local/go/src/net/http/server.go:2007 +0x44 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*APIServerHandler).ServeHTTP(0xc007dd7f50, 0x513a020, 0xc005853d40, 0xc010491e00) /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:189 +0x51 net/http.serverHandler.ServeHTTP(0xc0093d49a0, 0x513a020, 0xc005853d40, 0xc010491e00) /usr/local/go/src/net/http/server.go:2802 +0xa4 net/http.initNPNRequest.ServeHTTP(0x51546a0, 0xc007129d40, 0xc00e243c00, 0xc0093d49a0, 0x513a020, 0xc005853d40, 0xc010491e00) /usr/local/go/src/net/http/server.go:3366 +0x8d k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*serverConn).runHandler(0xc007babe00, 0xc005853d40, 0xc010491e00, 
0xc0095998e0) /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/server.go:2149 +0x9f created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*serverConn).processHeaders /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/server.go:1883 +0x4eb I0425 18:34:41.697921 1 controller.go:606] quota admission added evaluator for: endpoints I0425 18:34:49.162792 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io W0425 20:50:26.237218 1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted W0425 21:18:35.315059 1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted W0425 21:24:56.385517 1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted W0425 21:38:36.426561 1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted W0425 21:48:16.467395 1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted W0425 21:55:47.506373 1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted W0425 22:07:56.529930 1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted W0425 22:17:44.552069 1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted W0425 22:24:44.572457 1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted ==> kube-apiserver [b04e5bf485e9] <== I0425 22:29:26.178549 1 client.go:361] parsed scheme: "endpoint" I0425 22:29:26.178749 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] I0425 22:29:26.193925 1 client.go:361] parsed scheme: "endpoint" I0425 22:29:26.193951 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] W0425 22:29:26.501358 1 genericapiserver.go:409] Skipping API batch/v2alpha1 because it has no resources. W0425 22:29:26.527600 1 genericapiserver.go:409] Skipping API discovery.k8s.io/v1alpha1 because it has no resources. W0425 22:29:26.565342 1 genericapiserver.go:409] Skipping API node.k8s.io/v1alpha1 because it has no resources. W0425 22:29:26.633508 1 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources. W0425 22:29:26.645448 1 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources. W0425 22:29:26.697713 1 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources. W0425 22:29:26.773403 1 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources. W0425 22:29:26.773504 1 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources. I0425 22:29:26.810163 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. 
I0425 22:29:26.810330 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. I0425 22:29:26.814427 1 client.go:361] parsed scheme: "endpoint" I0425 22:29:26.814876 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] I0425 22:29:26.846915 1 client.go:361] parsed scheme: "endpoint" I0425 22:29:26.847095 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] I0425 22:29:30.982977 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt I0425 22:29:30.983477 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt I0425 22:29:30.984102 1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key I0425 22:29:30.984862 1 secure_serving.go:178] Serving securely on [::]:8443 I0425 22:29:30.985110 1 apiservice_controller.go:94] Starting APIServiceRegistrationController I0425 22:29:30.985288 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller I0425 22:29:30.986503 1 crd_finalizer.go:266] Starting CRDFinalizer I0425 22:29:30.987622 1 autoregister_controller.go:141] Starting autoregister controller I0425 22:29:30.987792 1 cache.go:32] Waiting for caches to sync for autoregister controller I0425 22:29:30.987958 1 available_controller.go:387] Starting AvailableConditionController I0425 22:29:30.988165 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller I0425 22:29:30.989407 1 tlsconfig.go:240] Starting DynamicServingCertificateController I0425 22:29:30.991378 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller I0425 22:29:30.991417 1 shared_informer.go:223] Waiting for caches to sync for cluster_authentication_trust_controller I0425 22:29:30.991452 1 controller.go:86] Starting OpenAPI controller I0425 22:29:30.991677 1 customresource_discovery_controller.go:209] Starting DiscoveryController I0425 22:29:30.991850 1 naming_controller.go:291] Starting NamingConditionController I0425 22:29:30.991990 1 establishing_controller.go:76] Starting EstablishingController I0425 22:29:30.992097 1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController I0425 22:29:30.992355 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController I0425 22:29:30.992540 1 crdregistration_controller.go:111] Starting crd-autoregister controller I0425 22:29:30.992768 1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister I0425 22:29:30.993102 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt I0425 22:29:30.993278 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt I0425 22:29:30.997465 1 controller.go:81] Starting OpenAPI AggregationController I0425 22:29:31.253759 1 shared_informer.go:230] Caches are synced for crd-autoregister I0425 22:29:31.273656 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io I0425 22:29:31.286814 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller I0425 22:29:31.307942 1 cache.go:39] 
Caches are synced for autoregister controller I0425 22:29:31.308517 1 cache.go:39] Caches are synced for AvailableConditionController controller I0425 22:29:31.309411 1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller E0425 22:29:31.320056 1 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service I0425 22:29:31.983291 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue). I0425 22:29:31.983400 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). I0425 22:29:32.000172 1 storage_scheduling.go:143] all system priority classes are created successfully or already exist. I0425 22:29:33.999733 1 controller.go:606] quota admission added evaluator for: serviceaccounts I0425 22:29:34.086222 1 controller.go:606] quota admission added evaluator for: deployments.apps I0425 22:29:34.291398 1 controller.go:606] quota admission added evaluator for: daemonsets.apps I0425 22:29:34.357418 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io I0425 22:29:34.379139 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io I0425 22:29:46.914022 1 controller.go:606] quota admission added evaluator for: endpoints I0425 22:29:56.962900 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io ==> kube-controller-manager [2137eeb99138] <== I0425 22:29:55.813582 1 controllermanager.go:533] Started "persistentvolume-binder" I0425 22:29:55.813861 1 pv_controller_base.go:295] Starting persistent volume controller I0425 22:29:55.813887 1 shared_informer.go:223] Waiting for caches to sync for persistent volume I0425 22:29:55.825263 1 controllermanager.go:533] Started "endpointslice" I0425 22:29:55.825334 1 endpointslice_controller.go:213] Starting endpoint slice controller I0425 22:29:55.825344 1 shared_informer.go:223] Waiting for caches to sync for endpoint_slice I0425 22:29:55.974670 1 controllermanager.go:533] Started "replicationcontroller" I0425 22:29:55.974761 1 replica_set.go:181] Starting replicationcontroller controller I0425 22:29:55.974771 1 shared_informer.go:223] Waiting for caches to sync for ReplicationController I0425 22:29:56.129633 1 controllermanager.go:533] Started "job" I0425 22:29:56.129739 1 job_controller.go:144] Starting job controller I0425 22:29:56.129781 1 shared_informer.go:223] Waiting for caches to sync for job I0425 22:29:56.276622 1 controllermanager.go:533] Started "deployment" I0425 22:29:56.276722 1 deployment_controller.go:153] Starting deployment controller I0425 22:29:56.276843 1 shared_informer.go:223] Waiting for caches to sync for deployment I0425 22:29:56.574371 1 controllermanager.go:533] Started "disruption" I0425 22:29:56.574696 1 disruption.go:331] Starting disruption controller I0425 22:29:56.574820 1 shared_informer.go:223] Waiting for caches to sync for disruption I0425 22:29:56.730340 1 controllermanager.go:533] Started "csrsigning" I0425 22:29:56.730488 1 certificate_controller.go:119] Starting certificate controller "csrsigning" I0425 22:29:56.731972 1 shared_informer.go:223] Waiting for caches to sync for certificate-csrsigning I0425 22:29:56.730507 1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key 
I0425 22:29:56.744877 1 shared_informer.go:223] Waiting for caches to sync for resource quota W0425 22:29:56.774701 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist I0425 22:29:56.818166 1 shared_informer.go:230] Caches are synced for certificate-csrapproving I0425 22:29:56.829464 1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator I0425 22:29:56.836809 1 shared_informer.go:230] Caches are synced for TTL I0425 22:29:56.845531 1 shared_informer.go:230] Caches are synced for certificate-csrsigning I0425 22:29:56.876605 1 shared_informer.go:230] Caches are synced for bootstrap_signer I0425 22:29:56.887355 1 shared_informer.go:230] Caches are synced for namespace I0425 22:29:56.904272 1 shared_informer.go:230] Caches are synced for service account I0425 22:29:56.914362 1 shared_informer.go:230] Caches are synced for persistent volume I0425 22:29:56.925808 1 shared_informer.go:230] Caches are synced for daemon sets I0425 22:29:56.926187 1 shared_informer.go:230] Caches are synced for taint I0425 22:29:56.941156 1 taint_manager.go:187] Starting NoExecuteTaintManager I0425 22:29:56.943977 1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: W0425 22:29:56.944458 1 node_lifecycle_controller.go:1048] Missing timestamp for Node minikube. Assuming now as a timestamp. I0425 22:29:56.944840 1 node_lifecycle_controller.go:1249] Controller detected that zone is now in state Normal. I0425 22:29:56.947408 1 shared_informer.go:230] Caches are synced for GC I0425 22:29:56.949950 1 shared_informer.go:230] Caches are synced for expand I0425 22:29:56.950720 1 shared_informer.go:230] Caches are synced for PVC protection I0425 22:29:56.951419 1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"bdf14ea9-20f4-4eec-b14f-a1aaaaf166c7", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller I0425 22:29:56.951803 1 shared_informer.go:230] Caches are synced for endpoint_slice I0425 22:29:56.952894 1 shared_informer.go:230] Caches are synced for job I0425 22:29:56.954433 1 shared_informer.go:230] Caches are synced for attach detach I0425 22:29:56.956597 1 shared_informer.go:230] Caches are synced for PV protection I0425 22:29:56.975376 1 shared_informer.go:230] Caches are synced for ReplicationController I0425 22:29:56.978232 1 shared_informer.go:230] Caches are synced for endpoint I0425 22:29:56.981446 1 shared_informer.go:230] Caches are synced for ReplicaSet I0425 22:29:57.081865 1 shared_informer.go:230] Caches are synced for stateful set I0425 22:29:57.225519 1 shared_informer.go:223] Waiting for caches to sync for garbage collector I0425 22:29:57.347851 1 shared_informer.go:230] Caches are synced for resource quota I0425 22:29:57.355105 1 shared_informer.go:230] Caches are synced for garbage collector I0425 22:29:57.355223 1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage I0425 22:29:57.377653 1 shared_informer.go:230] Caches are synced for deployment I0425 22:29:57.377776 1 shared_informer.go:230] Caches are synced for disruption I0425 22:29:57.380685 1 disruption.go:339] Sending events to api server. 
I0425 22:29:57.381648 1 shared_informer.go:230] Caches are synced for HPA I0425 22:29:57.414914 1 shared_informer.go:230] Caches are synced for resource quota I0425 22:29:57.427161 1 shared_informer.go:230] Caches are synced for garbage collector ==> kube-controller-manager [bbbae162f8ad] <== W0425 18:34:48.003066 1 controllermanager.go:525] Skipping "ttl-after-finished" I0425 18:34:48.003031 1 pv_protection_controller.go:83] Starting PV protection controller I0425 18:34:48.003473 1 shared_informer.go:223] Waiting for caches to sync for PV protection I0425 18:34:48.156882 1 controllermanager.go:533] Started "endpointslice" I0425 18:34:48.156957 1 endpointslice_controller.go:213] Starting endpoint slice controller I0425 18:34:48.156967 1 shared_informer.go:223] Waiting for caches to sync for endpoint_slice I0425 18:34:48.451303 1 controllermanager.go:533] Started "disruption" I0425 18:34:48.451555 1 disruption.go:331] Starting disruption controller I0425 18:34:48.451595 1 shared_informer.go:223] Waiting for caches to sync for disruption I0425 18:34:48.602375 1 controllermanager.go:533] Started "csrcleaner" I0425 18:34:48.602454 1 cleaner.go:82] Starting CSR cleaner controller I0425 18:34:48.752961 1 controllermanager.go:533] Started "attachdetach" I0425 18:34:48.753120 1 attach_detach_controller.go:338] Starting attach detach controller I0425 18:34:48.753251 1 shared_informer.go:223] Waiting for caches to sync for attach detach I0425 18:34:48.903284 1 controllermanager.go:533] Started "cronjob" I0425 18:34:48.903343 1 core.go:239] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true. W0425 18:34:48.903353 1 controllermanager.go:525] Skipping "route" I0425 18:34:48.903395 1 cronjob_controller.go:97] Starting CronJob Manager I0425 18:34:49.051591 1 node_lifecycle_controller.go:78] Sending events to api server E0425 18:34:49.052189 1 core.go:229] failed to start cloud node lifecycle controller: no cloud provider provided W0425 18:34:49.052317 1 controllermanager.go:525] Skipping "cloud-node-lifecycle" I0425 18:34:49.053057 1 shared_informer.go:223] Waiting for caches to sync for resource quota I0425 18:34:49.060203 1 shared_informer.go:223] Waiting for caches to sync for garbage collector W0425 18:34:49.135593 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist I0425 18:34:49.143743 1 shared_informer.go:230] Caches are synced for certificate-csrsigning I0425 18:34:49.154845 1 shared_informer.go:230] Caches are synced for ReplicationController I0425 18:34:49.155451 1 shared_informer.go:230] Caches are synced for certificate-csrapproving I0425 18:34:49.157372 1 shared_informer.go:230] Caches are synced for endpoint_slice I0425 18:34:49.158922 1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator I0425 18:34:49.160866 1 shared_informer.go:230] Caches are synced for ReplicaSet I0425 18:34:49.177826 1 shared_informer.go:230] Caches are synced for bootstrap_signer I0425 18:34:49.183056 1 shared_informer.go:230] Caches are synced for namespace I0425 18:34:49.201930 1 shared_informer.go:230] Caches are synced for stateful set I0425 18:34:49.206136 1 shared_informer.go:230] Caches are synced for PVC protection I0425 18:34:49.206172 1 shared_informer.go:230] Caches are synced for GC I0425 18:34:49.211283 1 shared_informer.go:230] Caches are synced for job I0425 18:34:49.218653 1 shared_informer.go:230] 
Caches are synced for service account I0425 18:34:49.231521 1 shared_informer.go:230] Caches are synced for TTL I0425 18:34:49.298961 1 shared_informer.go:230] Caches are synced for endpoint I0425 18:34:49.479721 1 shared_informer.go:230] Caches are synced for taint I0425 18:34:49.480259 1 taint_manager.go:187] Starting NoExecuteTaintManager I0425 18:34:49.481419 1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"bdf14ea9-20f4-4eec-b14f-a1aaaaf166c7", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller I0425 18:34:49.481064 1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: W0425 18:34:49.482083 1 node_lifecycle_controller.go:1048] Missing timestamp for Node minikube. Assuming now as a timestamp. I0425 18:34:49.482528 1 node_lifecycle_controller.go:1249] Controller detected that zone is now in state Normal. I0425 18:34:49.502640 1 shared_informer.go:230] Caches are synced for HPA I0425 18:34:49.503595 1 shared_informer.go:230] Caches are synced for daemon sets I0425 18:34:49.601304 1 shared_informer.go:230] Caches are synced for persistent volume I0425 18:34:49.604063 1 shared_informer.go:230] Caches are synced for PV protection I0425 18:34:49.653428 1 shared_informer.go:230] Caches are synced for expand I0425 18:34:49.656962 1 shared_informer.go:230] Caches are synced for attach detach I0425 18:34:49.657304 1 shared_informer.go:230] Caches are synced for resource quota I0425 18:34:49.660720 1 shared_informer.go:230] Caches are synced for garbage collector I0425 18:34:49.671104 1 shared_informer.go:230] Caches are synced for resource quota I0425 18:34:49.711912 1 shared_informer.go:230] Caches are synced for garbage collector I0425 18:34:49.712224 1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage I0425 18:34:49.729411 1 shared_informer.go:230] Caches are synced for deployment I0425 18:34:49.751785 1 shared_informer.go:230] Caches are synced for disruption I0425 18:34:49.752237 1 disruption.go:339] Sending events to api server. I0425 19:34:48.618007 1 cleaner.go:167] Cleaning CSR "csr-dgpcs" as it is more than 1h0m0s old and approved. ==> kube-proxy [099ac00a1076] <== W0425 22:29:40.741140 1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy I0425 22:29:40.884844 1 node.go:136] Successfully retrieved node IP: 192.168.64.9 I0425 22:29:40.885290 1 server_others.go:186] Using iptables Proxier. 
W0425 22:29:40.885804 1 server_others.go:436] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined I0425 22:29:40.885841 1 server_others.go:447] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local I0425 22:29:40.914677 1 server.go:583] Version: v1.18.0 I0425 22:29:40.945695 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072 I0425 22:29:40.945764 1 conntrack.go:52] Setting nf_conntrack_max to 131072 I0425 22:29:40.946106 1 conntrack.go:83] Setting conntrack hashsize to 32768 I0425 22:29:40.983141 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 I0425 22:29:40.984419 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600 I0425 22:29:40.987183 1 config.go:315] Starting service config controller I0425 22:29:40.987206 1 shared_informer.go:223] Waiting for caches to sync for service config I0425 22:29:41.000957 1 config.go:133] Starting endpoints config controller I0425 22:29:41.004402 1 shared_informer.go:223] Waiting for caches to sync for endpoints config I0425 22:29:41.093577 1 shared_informer.go:230] Caches are synced for service config I0425 22:29:41.106346 1 shared_informer.go:230] Caches are synced for endpoints config ==> kube-proxy [86e569424973] <== W0425 18:34:32.016639 1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy I0425 18:34:32.046739 1 node.go:136] Successfully retrieved node IP: 192.168.64.9 I0425 18:34:32.047019 1 server_others.go:186] Using iptables Proxier. W0425 18:34:32.047047 1 server_others.go:436] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined I0425 18:34:32.047315 1 server_others.go:447] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local I0425 18:34:32.074549 1 server.go:583] Version: v1.18.0 I0425 18:34:32.084289 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072 I0425 18:34:32.084330 1 conntrack.go:52] Setting nf_conntrack_max to 131072 I0425 18:34:32.100146 1 conntrack.go:83] Setting conntrack hashsize to 32768 I0425 18:34:32.107943 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 I0425 18:34:32.108011 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600 I0425 18:34:32.109161 1 config.go:315] Starting service config controller I0425 18:34:32.110186 1 shared_informer.go:223] Waiting for caches to sync for service config I0425 18:34:32.110401 1 config.go:133] Starting endpoints config controller I0425 18:34:32.110424 1 shared_informer.go:223] Waiting for caches to sync for endpoints config I0425 18:34:32.210556 1 shared_informer.go:230] Caches are synced for service config I0425 18:34:32.212054 1 shared_informer.go:230] Caches are synced for endpoints config ==> kube-scheduler [13842c78e71b] <== I0425 22:29:20.769536 1 registry.go:150] Registering EvenPodsSpread predicate and priority function I0425 22:29:20.769601 1 registry.go:150] Registering EvenPodsSpread predicate and priority function I0425 22:29:21.254042 1 serving.go:313] Generated self-signed cert in-memory W0425 22:29:31.157995 1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system. 
Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' W0425 22:29:31.158368 1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system" W0425 22:29:31.158539 1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous. W0425 22:29:31.158673 1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false I0425 22:29:31.263468 1 registry.go:150] Registering EvenPodsSpread predicate and priority function I0425 22:29:31.263491 1 registry.go:150] Registering EvenPodsSpread predicate and priority function W0425 22:29:31.277770 1 authorization.go:47] Authorization is disabled W0425 22:29:31.277820 1 authentication.go:40] Authentication is disabled I0425 22:29:31.277843 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251 I0425 22:29:31.294323 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0425 22:29:31.294366 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0425 22:29:31.302203 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259 I0425 22:29:31.304520 1 tlsconfig.go:240] Starting DynamicServingCertificateController I0425 22:29:31.395958 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0425 22:29:31.412541 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler... I0425 22:29:46.929382 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler ==> kube-scheduler [6eb85fd0f2aa] <== I0425 18:34:15.967531 1 registry.go:150] Registering EvenPodsSpread predicate and priority function I0425 18:34:15.968273 1 registry.go:150] Registering EvenPodsSpread predicate and priority function I0425 18:34:17.396190 1 serving.go:313] Generated self-signed cert in-memory W0425 18:34:25.830059 1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' W0425 18:34:25.830305 1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system" W0425 18:34:25.830324 1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous. 
W0425 18:34:25.830332 1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false I0425 18:34:25.932516 1 registry.go:150] Registering EvenPodsSpread predicate and priority function I0425 18:34:25.932536 1 registry.go:150] Registering EvenPodsSpread predicate and priority function W0425 18:34:25.942055 1 authorization.go:47] Authorization is disabled W0425 18:34:25.942102 1 authentication.go:40] Authentication is disabled I0425 18:34:25.942215 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251 I0425 18:34:25.957469 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259 I0425 18:34:25.961040 1 tlsconfig.go:240] Starting DynamicServingCertificateController I0425 18:34:25.961164 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0425 18:34:25.964878 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0425 18:34:26.067028 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0425 18:34:26.068573 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler... I0425 18:34:41.720005 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler ==> kubelet <== -- Logs begin at Sat 2020-04-25 22:27:59 UTC, end at Sat 2020-04-25 22:34:29 UTC. -- Apr 25 22:29:31 minikube kubelet[2195]: I0425 22:29:31.340660 2195 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "minikube-ingress-dns-token-5lmn5" (UniqueName: "kubernetes.io/secret/df8fdd66-c7bd-4838-89e9-24254818ef86-minikube-ingress-dns-token-5lmn5") pod "kube-ingress-dns-minikube" (UID: "df8fdd66-c7bd-4838-89e9-24254818ef86") Apr 25 22:29:31 minikube kubelet[2195]: I0425 22:29:31.340739 2195 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd39c9b9-8f30-408b-89ef-3ab410c58617-config-volume") pod "coredns-66bff467f8-9gkzt" (UID: "fd39c9b9-8f30-408b-89ef-3ab410c58617") Apr 25 22:29:31 minikube kubelet[2195]: I0425 22:29:31.340821 2195 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-9c7lk" (UniqueName: "kubernetes.io/secret/ce65243c-131f-46f9-9fc7-49fd3959815e-kubernetes-dashboard-token-9c7lk") pod "kubernetes-dashboard-bc446cc64-kvr4w" (UID: "ce65243c-131f-46f9-9fc7-49fd3959815e") Apr 25 22:29:31 minikube kubelet[2195]: I0425 22:29:31.340895 2195 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/8206b8ad-61c4-4507-8188-5360ab0af665-kube-proxy") pod "kube-proxy-gjh8v" (UID: "8206b8ad-61c4-4507-8188-5360ab0af665") Apr 25 22:29:31 minikube kubelet[2195]: I0425 22:29:31.340974 2195 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-9c7lk" (UniqueName: "kubernetes.io/secret/4365d0fd-2765-4ebe-88c5-33d51b88998b-kubernetes-dashboard-token-9c7lk") pod "dashboard-metrics-scraper-84bfdf55ff-jc98b" (UID: "4365d0fd-2765-4ebe-88c5-33d51b88998b") Apr 25 22:29:31 minikube kubelet[2195]: I0425 22:29:31.341050 2195 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-79mp6" (UniqueName: 
"kubernetes.io/secret/893c3bc7-6a93-404a-9911-5cf28ff93d2a-coredns-token-79mp6") pod "coredns-66bff467f8-nqk76" (UID: "893c3bc7-6a93-404a-9911-5cf28ff93d2a") Apr 25 22:29:31 minikube kubelet[2195]: I0425 22:29:31.341126 2195 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-6njxv" (UniqueName: "kubernetes.io/secret/ec6dffe3-23a2-421f-8188-3f6cefde49f9-storage-provisioner-token-6njxv") pod "storage-provisioner" (UID: "ec6dffe3-23a2-421f-8188-3f6cefde49f9") Apr 25 22:29:31 minikube kubelet[2195]: I0425 22:29:31.341206 2195 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-6pbdl" (UniqueName: "kubernetes.io/secret/70f3322b-3036-43dc-9033-853fd39be709-default-token-6pbdl") pod "registry-4xt8d" (UID: "70f3322b-3036-43dc-9033-853fd39be709") Apr 25 22:29:31 minikube kubelet[2195]: I0425 22:29:31.341284 2195 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/4365d0fd-2765-4ebe-88c5-33d51b88998b-tmp-volume") pod "dashboard-metrics-scraper-84bfdf55ff-jc98b" (UID: "4365d0fd-2765-4ebe-88c5-33d51b88998b") Apr 25 22:29:31 minikube kubelet[2195]: I0425 22:29:31.341360 2195 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/ce65243c-131f-46f9-9fc7-49fd3959815e-tmp-volume") pod "kubernetes-dashboard-bc446cc64-kvr4w" (UID: "ce65243c-131f-46f9-9fc7-49fd3959815e") Apr 25 22:29:31 minikube kubelet[2195]: I0425 22:29:31.341437 2195 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-79mp6" (UniqueName: "kubernetes.io/secret/fd39c9b9-8f30-408b-89ef-3ab410c58617-coredns-token-79mp6") pod "coredns-66bff467f8-9gkzt" (UID: "fd39c9b9-8f30-408b-89ef-3ab410c58617") Apr 25 22:29:31 minikube kubelet[2195]: I0425 22:29:31.341513 2195 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/893c3bc7-6a93-404a-9911-5cf28ff93d2a-config-volume") pod "coredns-66bff467f8-nqk76" (UID: "893c3bc7-6a93-404a-9911-5cf28ff93d2a") Apr 25 22:29:31 minikube kubelet[2195]: I0425 22:29:31.341593 2195 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-hvtn5" (UniqueName: "kubernetes.io/secret/8206b8ad-61c4-4507-8188-5360ab0af665-kube-proxy-token-hvtn5") pod "kube-proxy-gjh8v" (UID: "8206b8ad-61c4-4507-8188-5360ab0af665") Apr 25 22:29:31 minikube kubelet[2195]: I0425 22:29:31.341677 2195 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "nginx-ingress-token-wp2xx" (UniqueName: "kubernetes.io/secret/081a51a9-b139-48dd-96ef-d358faa9aecb-nginx-ingress-token-wp2xx") pod "nginx-ingress-controller-6d57c87cb9-ghmbv" (UID: "081a51a9-b139-48dd-96ef-d358faa9aecb") Apr 25 22:29:31 minikube kubelet[2195]: I0425 22:29:31.341739 2195 reconciler.go:157] Reconciler: start to sync state Apr 25 22:29:32 minikube kubelet[2195]: I0425 22:29:32.448595 2195 request.go:621] Throttling request took 1.137287895s, request: GET:https://192.168.64.9:8443/api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%3Dkube-proxy-token-hvtn5&limit=500&resourceVersion=0 Apr 25 22:29:32 minikube kubelet[2195]: I0425 22:29:32.857636 2195 kubelet_node_status.go:112] Node minikube was previously registered Apr 25 22:29:32 minikube kubelet[2195]: I0425 22:29:32.857804 
2195 kubelet_node_status.go:73] Successfully registered node minikube Apr 25 22:29:33 minikube kubelet[2195]: W0425 22:29:33.994535 2195 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-nqk76 through plugin: invalid network status for Apr 25 22:29:34 minikube kubelet[2195]: W0425 22:29:34.140006 2195 pod_container_deletor.go:77] Container "47dcf51759a887487916525836a8221681103c1ab9ba9c334bb5500313a57ee9" not found in pod's containers Apr 25 22:29:36 minikube kubelet[2195]: W0425 22:29:36.787497 2195 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-9gkzt through plugin: invalid network status for Apr 25 22:29:37 minikube kubelet[2195]: W0425 22:29:37.610412 2195 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-bc446cc64-kvr4w through plugin: invalid network status for Apr 25 22:29:37 minikube kubelet[2195]: W0425 22:29:37.648604 2195 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/registry-proxy-g5zjd through plugin: invalid network status for Apr 25 22:29:37 minikube kubelet[2195]: W0425 22:29:37.674536 2195 pod_container_deletor.go:77] Container "0d6184489d1cf7d7cd0f2094314fa4988908d68948da795fe0042b1f093c6c42" not found in pod's containers Apr 25 22:29:37 minikube kubelet[2195]: W0425 22:29:37.708959 2195 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-9gkzt through plugin: invalid network status for Apr 25 22:29:38 minikube kubelet[2195]: W0425 22:29:38.417457 2195 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/registry-4xt8d through plugin: invalid network status for Apr 25 22:29:39 minikube kubelet[2195]: W0425 22:29:39.111921 2195 pod_container_deletor.go:77] Container "ffa22f6614754216dace7f15c712f81c91312f5f7ef85811d4c9416da6566c53" not found in pod's containers Apr 25 22:29:39 minikube kubelet[2195]: W0425 22:29:39.130450 2195 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/nginx-ingress-controller-6d57c87cb9-ghmbv through plugin: invalid network status for Apr 25 22:29:39 minikube kubelet[2195]: W0425 22:29:39.207450 2195 pod_container_deletor.go:77] Container "1ecb0d17e7bfefd2dac541d8f6a81f20869702665e0737591931bdfa6b439044" not found in pod's containers Apr 25 22:29:39 minikube kubelet[2195]: W0425 22:29:39.285869 2195 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-84bfdf55ff-jc98b through plugin: invalid network status for Apr 25 22:29:39 minikube kubelet[2195]: W0425 22:29:39.482473 2195 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/nginx-ingress-controller-6d57c87cb9-ghmbv through plugin: invalid network status for Apr 25 22:29:39 minikube kubelet[2195]: W0425 22:29:39.644689 2195 pod_container_deletor.go:77] Container "080752d8c2efd6a11c0fc7ed48ee5fea72d4e0d46ca5115b22cb42bdb8713fc0" not found in pod's containers Apr 25 22:29:39 minikube kubelet[2195]: W0425 22:29:39.657374 2195 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/registry-4xt8d 
through plugin: invalid network status for Apr 25 22:29:40 minikube kubelet[2195]: W0425 22:29:40.455845 2195 pod_container_deletor.go:77] Container "f1a52ad6ad28fb84516e0f221eb90c27eecc123c93076fe0c559460e5bafd2ad" not found in pod's containers Apr 25 22:29:40 minikube kubelet[2195]: W0425 22:29:40.464103 2195 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-nqk76 through plugin: invalid network status for Apr 25 22:29:40 minikube kubelet[2195]: W0425 22:29:40.564888 2195 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-bc446cc64-kvr4w through plugin: invalid network status for Apr 25 22:29:40 minikube kubelet[2195]: W0425 22:29:40.603011 2195 pod_container_deletor.go:77] Container "1c59ecf9618633ad9c5545baaca9f356229aa13fa2910baa15817d640c9fe65f" not found in pod's containers Apr 25 22:29:40 minikube kubelet[2195]: W0425 22:29:40.762650 2195 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-84bfdf55ff-jc98b through plugin: invalid network status for Apr 25 22:29:42 minikube kubelet[2195]: W0425 22:29:42.023971 2195 pod_container_deletor.go:77] Container "ff3ddadfdcb06f876abc0fc2531867878a4850913bb3679514341e693a9bc310" not found in pod's containers Apr 25 22:29:43 minikube kubelet[2195]: W0425 22:29:43.053684 2195 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/registry-proxy-g5zjd through plugin: invalid network status for Apr 25 22:29:43 minikube kubelet[2195]: W0425 22:29:43.138842 2195 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/nginx-ingress-controller-6d57c87cb9-ghmbv through plugin: invalid network status for Apr 25 22:29:43 minikube kubelet[2195]: W0425 22:29:43.171854 2195 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/registry-4xt8d through plugin: invalid network status for Apr 25 22:29:43 minikube kubelet[2195]: W0425 22:29:43.197995 2195 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-nqk76 through plugin: invalid network status for Apr 25 22:29:43 minikube kubelet[2195]: W0425 22:29:43.220868 2195 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-bc446cc64-kvr4w through plugin: invalid network status for Apr 25 22:29:43 minikube kubelet[2195]: W0425 22:29:43.264143 2195 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-84bfdf55ff-jc98b through plugin: invalid network status for Apr 25 22:29:43 minikube kubelet[2195]: W0425 22:29:43.296256 2195 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-9gkzt through plugin: invalid network status for Apr 25 22:30:10 minikube kubelet[2195]: I0425 22:30:10.107677 2195 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: cab2e17fd71726b46a210a1839b5e30245ab194cc5d0fb70ec37c78a8dc60b91 Apr 25 22:30:10 minikube kubelet[2195]: I0425 22:30:10.111444 2195 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 
f2775ac142f6e48aa056e09b6bf3e8aeafd58f47b7edc1efaa3086835760d898 Apr 25 22:30:10 minikube kubelet[2195]: E0425 22:30:10.113333 2195 pod_workers.go:191] Error syncing pod ec6dffe3-23a2-421f-8188-3f6cefde49f9 ("storage-provisioner_kube-system(ec6dffe3-23a2-421f-8188-3f6cefde49f9)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ec6dffe3-23a2-421f-8188-3f6cefde49f9)" Apr 25 22:30:11 minikube kubelet[2195]: W0425 22:30:11.157648 2195 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-bc446cc64-kvr4w through plugin: invalid network status for Apr 25 22:30:11 minikube kubelet[2195]: I0425 22:30:11.168935 2195 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 3718366a5f9c6da94867caab3221750a7ce5c3b37726e78cea9b8eac926aaaa5 Apr 25 22:30:11 minikube kubelet[2195]: I0425 22:30:11.169779 2195 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: c4031bbc66ec32681cf5fc776a4591c857dfe27be22f3c2820e6922c4a9af3eb Apr 25 22:30:11 minikube kubelet[2195]: E0425 22:30:11.170351 2195 pod_workers.go:191] Error syncing pod ce65243c-131f-46f9-9fc7-49fd3959815e ("kubernetes-dashboard-bc446cc64-kvr4w_kubernetes-dashboard(ce65243c-131f-46f9-9fc7-49fd3959815e)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 10s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-bc446cc64-kvr4w_kubernetes-dashboard(ce65243c-131f-46f9-9fc7-49fd3959815e)" Apr 25 22:30:12 minikube kubelet[2195]: W0425 22:30:12.193470 2195 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-bc446cc64-kvr4w through plugin: invalid network status for Apr 25 22:30:19 minikube kubelet[2195]: W0425 22:30:19.351014 2195 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-bc446cc64-kvr4w through plugin: invalid network status for Apr 25 22:30:19 minikube kubelet[2195]: I0425 22:30:19.716759 2195 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: c4031bbc66ec32681cf5fc776a4591c857dfe27be22f3c2820e6922c4a9af3eb Apr 25 22:30:19 minikube kubelet[2195]: E0425 22:30:19.717463 2195 pod_workers.go:191] Error syncing pod ce65243c-131f-46f9-9fc7-49fd3959815e ("kubernetes-dashboard-bc446cc64-kvr4w_kubernetes-dashboard(ce65243c-131f-46f9-9fc7-49fd3959815e)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 10s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-bc446cc64-kvr4w_kubernetes-dashboard(ce65243c-131f-46f9-9fc7-49fd3959815e)" Apr 25 22:30:23 minikube kubelet[2195]: I0425 22:30:23.725917 2195 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: f2775ac142f6e48aa056e09b6bf3e8aeafd58f47b7edc1efaa3086835760d898 Apr 25 22:30:31 minikube kubelet[2195]: I0425 22:30:31.725882 2195 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: c4031bbc66ec32681cf5fc776a4591c857dfe27be22f3c2820e6922c4a9af3eb Apr 25 22:30:32 minikube kubelet[2195]: W0425 22:30:32.671910 2195 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-bc446cc64-kvr4w 
through plugin: invalid network status for ==> kubernetes-dashboard [ba7abb45cd1d] <== 2020/04/25 22:30:32 Starting overwatch 2020/04/25 22:30:32 Using namespace: kubernetes-dashboard 2020/04/25 22:30:32 Using in-cluster config to connect to apiserver 2020/04/25 22:30:32 Using secret token for csrf signing 2020/04/25 22:30:32 Initializing csrf token from kubernetes-dashboard-csrf secret 2020/04/25 22:30:32 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf 2020/04/25 22:30:32 Successful initial request to the apiserver, version: v1.18.0 2020/04/25 22:30:32 Generating JWE encryption key 2020/04/25 22:30:32 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting 2020/04/25 22:30:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard 2020/04/25 22:30:32 Initializing JWE encryption key from synchronized object 2020/04/25 22:30:32 Creating in-cluster Sidecar client 2020/04/25 22:30:32 Successful request to sidecar 2020/04/25 22:30:32 Serving insecurely on HTTP port: 9090 ==> kubernetes-dashboard [c4031bbc66ec] <== 2020/04/25 22:29:40 Using namespace: kubernetes-dashboard 2020/04/25 22:29:40 Starting overwatch 2020/04/25 22:29:40 Using in-cluster config to connect to apiserver 2020/04/25 22:29:40 Using secret token for csrf signing 2020/04/25 22:29:40 Initializing csrf token from kubernetes-dashboard-csrf secret panic: Get https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf: dial tcp 10.96.0.1:443: i/o timeout goroutine 1 [running]: github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc00000c760) /home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:40 +0x3b0 github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...) /home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:65 github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc0000fe500) /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:499 +0xc6 github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc0000fe500) /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:467 +0x47 github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...) /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:548 main.main() /home/travis/build/kubernetes/dashboard/src/app/backend/dashboard.go:105 +0x20d ==> storage-provisioner [010f9f4554e7] <== ==> storage-provisioner [f2775ac142f6] <== F0425 22:30:09.599002 1 main.go:37] Error getting server version: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
cmeans commented 4 years ago

Found some help on the minikube slack. Now using:

minikube start --driver=docker

And it's working for me. However, it is slower...so I would like a better solution, but I am functional at the moment. OK to close if you prefer.
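
For anyone landing here with the same registry timeout, here is a minimal sketch of the workaround above, assuming Docker is installed on the host; switching drivers means recreating the cluster, and the `identity` tag is just the image name from the original report:

# Recreate the cluster with the docker driver instead of hyperkit
minikube delete
minikube start --driver=docker

# Point the local docker CLI at minikube's Docker daemon and build into it
eval $(minikube docker-env)
docker build -t identity .

# The image should now be listed by the cluster's daemon
docker images | grep identity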

medyagh commented 4 years ago

@cmeans I am curious: what driver were you using before docker? And could your slower experience be because your docker has a lot of abandoned images?

Could you try to prune your docker images and see if that makes it better?
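
As a sketch of what that pruning could look like (standard Docker CLI commands, not minikube-specific; note that `-a` also removes unused tagged images, so only use it if that is acceptable):

# Remove stopped containers, dangling images, unused networks and build cache
docker system prune

# Optionally also remove every image not used by at least one container
docker image prune -a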

cmeans commented 4 years ago

@medyagh I was not specifying a driver before. In the time I've been using Minikube, which is only a few weeks now, I believe the default may have changed.

I have pruned recently (necessary to deal with another issue), but will try that again (and maybe shut down Folding@Home as well :) ).

cmeans commented 4 years ago

@medyagh Hyperkit

medyagh commented 4 years ago

@cmeans if you are on a VPN I recommend using our docker driver instead of hyperkit. And for pushing images, have you seen these docs? https://minikube.sigs.k8s.io/docs/handbook/pushing/

Does that answer your question?
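
For reference, one of the methods that handbook page covered at the time is the cache command, which copies an image built against the host's Docker daemon into the cluster's runtime; the `identity:dev` tag here is only an illustration:

# Build the image on the host as usual
docker build -t identity:dev .

# Copy it into minikube so pods can use it without pulling from a registry
minikube cache add identity:dev

In the pod spec, setting imagePullPolicy to Never (or IfNotPresent) then keeps the kubelet from trying to pull the image from Docker Hub.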

medyagh commented 4 years ago

@cmeans did using the docker driver help?

medyagh commented 4 years ago

I haven't heard back from you, so I wonder if you still have this issue. Regrettably, there isn't enough information in this issue to make it actionable, and enough time has passed that it is likely difficult to replicate.

I will close this issue for now but please feel free to reopen whenever you feel ready to provide more information.

cmeans commented 4 years ago

Sorry...still having the issue, but we've ditched trying to use minikube for the moment and will live with docker-compose only. I will revisit once docker-compose is insufficient for our needs.