I followed the documentation to install minikube. When I start minikube with the KVM driver, it warns 'VM is unable to access k8s.gcr.io, you may need to configure a proxy or set --image-repository', and the VM fails to pull images from k8s.gcr.io, even though the address is reachable from the host.
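As a first check, DNS can be tested from inside the VM (a sketch of the commands I would use; the resolver addresses are whatever the guest happens to be configured with):

```shell
# Open a shell inside the minikube VM
minikube ssh

# Inside the VM: see which nameservers the guest is using
cat /etc/resolv.conf

# Try to resolve the registry with the default resolver
nslookup k8s.gcr.io

# Compare against an explicit IPv4 resolver to separate
# "DNS server unreachable" from "name does not resolve"
nslookup k8s.gcr.io 8.8.8.8
```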
user@server:~$ kubectl get pod
NAME                              READY   STATUS         RESTARTS   AGE
hello-minikube-64b64df8c9-k8mnm   0/1     ErrImagePull   0          38s
user@server:~$
Full output of minikube start command used, if not already included:
user@server:~$ minikube start --driver=kvm2
🎉  minikube 1.9.2 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.9.2
💡  To disable this notice, run: 'minikube config set WantUpdateNotification false'
😄  minikube v1.9.0 on Ubuntu 18.04
✨  Using the kvm2 driver based on user configuration
💾  Downloading driver docker-machine-driver-kvm2:
> docker-machine-driver-kvm2.sha256: 65 B / 65 B [-------] 100.00% ? p/s 0s
> docker-machine-driver-kvm2: 13.88 MiB / 13.88 MiB 100.00% 3.57 MiB p/s 4
💿  Downloading VM boot image ...
> minikube-v1.9.0.iso.sha256: 65 B / 65 B [--------------] 100.00% ? p/s 0s
> minikube-v1.9.0.iso: 174.93 MiB / 174.93 MiB [] 100.00% 11.25 MiB p/s 16s
💾  Downloading Kubernetes v1.18.0 preload ...
> preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
🔥  Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
❗  Node may be unable to resolve external DNS records
❗  VM is unable to access k8s.gcr.io, you may need to configure a proxy or set --image-repository
🐳  Preparing Kubernetes v1.18.0 on Docker 19.03.8 ...
🌟  Enabling addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"
user@server:~$
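The warning itself suggests setting --image-repository. A sketch of that workaround (the mirror shown is only an example from the minikube docs; substitute any registry mirror reachable from your network):

```shell
# Recreate the cluster pulling images from a mirror instead of k8s.gcr.io
minikube delete
minikube start --driver=kvm2 \
  --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers
```

This sidesteps the registry lookup but not the underlying DNS problem, so in-cluster DNS (CoreDNS forwarding) may still misbehave.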
Optional: Full output of minikube logs command:
user@server:~$ minikube logs
==> Docker <==
-- Logs begin at Tue 2020-04-14 07:36:11 UTC, end at Tue 2020-04-14 08:31:35 UTC. --
Apr 14 07:37:37 minikube dockerd[2176]: time="2020-04-14T07:37:37.303377412Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/160e21150e77c2bde7edfd0f11ab85cc528f28505982369b8f6a3a07da83647e/shim.sock" debug=false pid=4398
Apr 14 07:37:37 minikube dockerd[2176]: time="2020-04-14T07:37:37.799716945Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4a28eb17d5df64a7483ededaf4c739f3c3cbf6fb7a3e2baaaa7517e0abefafe3/shim.sock" debug=false pid=4452
Apr 14 07:37:50 minikube dockerd[2176]: time="2020-04-14T07:37:50.423362992Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/294935a91bd6f21ead68ffff7d4ab226c534402eaafa0b30a9c09308a46743e1/shim.sock" debug=false pid=4572
Apr 14 07:37:51 minikube dockerd[2176]: time="2020-04-14T07:37:51.012211013Z" level=info msg="shim reaped" id=294935a91bd6f21ead68ffff7d4ab226c534402eaafa0b30a9c09308a46743e1
Apr 14 07:37:51 minikube dockerd[2176]: time="2020-04-14T07:37:51.022900318Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 14 07:38:15 minikube dockerd[2176]: time="2020-04-14T07:38:15.429892372Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/97e23d40ae9caf091bbbf543ff4e8b1383d552ca5bc5b28bbe4a10bb8295c2bd/shim.sock" debug=false pid=4736
Apr 14 07:38:15 minikube dockerd[2176]: time="2020-04-14T07:38:15.885093566Z" level=info msg="shim reaped" id=97e23d40ae9caf091bbbf543ff4e8b1383d552ca5bc5b28bbe4a10bb8295c2bd
Apr 14 07:38:15 minikube dockerd[2176]: time="2020-04-14T07:38:15.895709913Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 14 07:39:05 minikube dockerd[2176]: time="2020-04-14T07:39:05.404909217Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/55f8467f416b98747507317af0b17fc34fa7eee8fc1d790ce6e4d1f1cabf71a4/shim.sock" debug=false pid=4992
Apr 14 07:39:05 minikube dockerd[2176]: time="2020-04-14T07:39:05.865167924Z" level=info msg="shim reaped" id=55f8467f416b98747507317af0b17fc34fa7eee8fc1d790ce6e4d1f1cabf71a4
Apr 14 07:39:05 minikube dockerd[2176]: time="2020-04-14T07:39:05.875085647Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 14 07:40:36 minikube dockerd[2176]: time="2020-04-14T07:40:36.391226826Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3c0350db62bbf12b2a5c0f6f4d9801d150fe426336161568c0643e9d0ba50a12/shim.sock" debug=false pid=5377
Apr 14 07:40:36 minikube dockerd[2176]: time="2020-04-14T07:40:36.880121707Z" level=info msg="shim reaped" id=3c0350db62bbf12b2a5c0f6f4d9801d150fe426336161568c0643e9d0ba50a12
Apr 14 07:40:36 minikube dockerd[2176]: time="2020-04-14T07:40:36.890637173Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 14 07:43:21 minikube dockerd[2176]: time="2020-04-14T07:43:21.421903087Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f051cca335cc8ebd6fedeb5ba65649952df153acad3af2e89c8f31ebe1af4804/shim.sock" debug=false pid=6017
Apr 14 07:43:21 minikube dockerd[2176]: time="2020-04-14T07:43:21.917572895Z" level=info msg="shim reaped" id=f051cca335cc8ebd6fedeb5ba65649952df153acad3af2e89c8f31ebe1af4804
Apr 14 07:43:21 minikube dockerd[2176]: time="2020-04-14T07:43:21.935332797Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 14 07:48:22 minikube dockerd[2176]: time="2020-04-14T07:48:22.403491210Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7b53ab216a309a0614a0e5b12198f6a521d59380ed9b9fec6f8826d889c28200/shim.sock" debug=false pid=7161
Apr 14 07:48:22 minikube dockerd[2176]: time="2020-04-14T07:48:22.873940004Z" level=info msg="shim reaped" id=7b53ab216a309a0614a0e5b12198f6a521d59380ed9b9fec6f8826d889c28200
Apr 14 07:48:22 minikube dockerd[2176]: time="2020-04-14T07:48:22.884351644Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 14 07:53:35 minikube dockerd[2176]: time="2020-04-14T07:53:35.398886823Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/dfce5b822c0838101e5502fa9cad004db85a437f1856e68d4d93b3eb98bf03f7/shim.sock" debug=false pid=8356
Apr 14 07:53:35 minikube dockerd[2176]: time="2020-04-14T07:53:35.877206125Z" level=info msg="shim reaped" id=dfce5b822c0838101e5502fa9cad004db85a437f1856e68d4d93b3eb98bf03f7
Apr 14 07:53:35 minikube dockerd[2176]: time="2020-04-14T07:53:35.887530930Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 14 07:58:43 minikube dockerd[2176]: time="2020-04-14T07:58:43.417896924Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/818d3dc84567737ea5b9e2d72068cf3536ee962251b4f1277db2f63df2663791/shim.sock" debug=false pid=9518
Apr 14 07:58:43 minikube dockerd[2176]: time="2020-04-14T07:58:43.867335022Z" level=info msg="shim reaped" id=818d3dc84567737ea5b9e2d72068cf3536ee962251b4f1277db2f63df2663791
Apr 14 07:58:43 minikube dockerd[2176]: time="2020-04-14T07:58:43.877879819Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 14 08:03:45 minikube dockerd[2176]: time="2020-04-14T08:03:45.396968261Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/84d73f85eb68dc271ef6c64c987e89a04e79ea558153c01e14baca2de4be6590/shim.sock" debug=false pid=10658
Apr 14 08:03:45 minikube dockerd[2176]: time="2020-04-14T08:03:45.863725226Z" level=info msg="shim reaped" id=84d73f85eb68dc271ef6c64c987e89a04e79ea558153c01e14baca2de4be6590
Apr 14 08:03:45 minikube dockerd[2176]: time="2020-04-14T08:03:45.874377679Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 14 08:08:46 minikube dockerd[2176]: time="2020-04-14T08:08:46.417861960Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5bd11c9c7715d8f5ae290de1aba9aa8cf4bca3d6352269873108e33dabfcb347/shim.sock" debug=false pid=11807
Apr 14 08:08:46 minikube dockerd[2176]: time="2020-04-14T08:08:46.930598519Z" level=info msg="shim reaped" id=5bd11c9c7715d8f5ae290de1aba9aa8cf4bca3d6352269873108e33dabfcb347
Apr 14 08:08:46 minikube dockerd[2176]: time="2020-04-14T08:08:46.940904442Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 14 08:13:52 minikube dockerd[2176]: time="2020-04-14T08:13:52.421026736Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5babc1a23d85b8b508ff9967570647a126bae028df7b04c8b6c3cf08765f2e8a/shim.sock" debug=false pid=12951
Apr 14 08:13:53 minikube dockerd[2176]: time="2020-04-14T08:13:53.023072108Z" level=info msg="shim reaped" id=5babc1a23d85b8b508ff9967570647a126bae028df7b04c8b6c3cf08765f2e8a
Apr 14 08:13:53 minikube dockerd[2176]: time="2020-04-14T08:13:53.033297279Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 14 08:18:54 minikube dockerd[2176]: time="2020-04-14T08:18:54.408427446Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/185274527deeba3b7082d94577482b224c5969f1f45d4fd3a7497603376f2904/shim.sock" debug=false pid=14116
Apr 14 08:18:54 minikube dockerd[2176]: time="2020-04-14T08:18:54.867190579Z" level=info msg="shim reaped" id=185274527deeba3b7082d94577482b224c5969f1f45d4fd3a7497603376f2904
Apr 14 08:18:54 minikube dockerd[2176]: time="2020-04-14T08:18:54.880011395Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 14 08:24:04 minikube dockerd[2176]: time="2020-04-14T08:24:04.399188203Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/452f5c3c825c426f586c10e1f50841836a418922e1ecc795ab334e9c6efdb4a8/shim.sock" debug=false pid=15267
Apr 14 08:24:04 minikube dockerd[2176]: time="2020-04-14T08:24:04.868636237Z" level=info msg="shim reaped" id=452f5c3c825c426f586c10e1f50841836a418922e1ecc795ab334e9c6efdb4a8
Apr 14 08:24:04 minikube dockerd[2176]: time="2020-04-14T08:24:04.878698721Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 14 08:27:58 minikube dockerd[2176]: time="2020-04-14T08:27:58.711188716Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2f7b6919f2d09cfe93f9d3f370b9840c59b19b8d117d1d45dda639a104cacbe3/shim.sock" debug=false pid=16188
Apr 14 08:27:59 minikube dockerd[2176]: time="2020-04-14T08:27:59.044173802Z" level=warning msg="Error getting v2 registry: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on [2001:4860:4860::8888]:53: dial udp [2001:4860:4860::8888]:53: connect: cannot assign requested address"
Apr 14 08:27:59 minikube dockerd[2176]: time="2020-04-14T08:27:59.044250665Z" level=info msg="Attempting next endpoint for pull after error: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on [2001:4860:4860::8888]:53: dial udp [2001:4860:4860::8888]:53: connect: cannot assign requested address"
Apr 14 08:27:59 minikube dockerd[2176]: time="2020-04-14T08:27:59.044351553Z" level=error msg="Handler for POST /images/create returned error: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on [2001:4860:4860::8888]:53: dial udp [2001:4860:4860::8888]:53: connect: cannot assign requested address"
Apr 14 08:28:16 minikube dockerd[2176]: time="2020-04-14T08:28:16.276704026Z" level=warning msg="Error getting v2 registry: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on [2001:4860:4860::8888]:53: dial udp [2001:4860:4860::8888]:53: connect: cannot assign requested address"
Apr 14 08:28:16 minikube dockerd[2176]: time="2020-04-14T08:28:16.276876174Z" level=info msg="Attempting next endpoint for pull after error: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on [2001:4860:4860::8888]:53: dial udp [2001:4860:4860::8888]:53: connect: cannot assign requested address"
Apr 14 08:28:16 minikube dockerd[2176]: time="2020-04-14T08:28:16.277015022Z" level=error msg="Handler for POST /images/create returned error: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on [2001:4860:4860::8888]:53: dial udp [2001:4860:4860::8888]:53: connect: cannot assign requested address"
Apr 14 08:28:44 minikube dockerd[2176]: time="2020-04-14T08:28:44.281196981Z" level=warning msg="Error getting v2 registry: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on [2001:4860:4860::8888]:53: dial udp [2001:4860:4860::8888]:53: connect: cannot assign requested address"
Apr 14 08:28:44 minikube dockerd[2176]: time="2020-04-14T08:28:44.282355200Z" level=info msg="Attempting next endpoint for pull after error: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on [2001:4860:4860::8888]:53: dial udp [2001:4860:4860::8888]:53: connect: cannot assign requested address"
Apr 14 08:28:44 minikube dockerd[2176]: time="2020-04-14T08:28:44.282523830Z" level=error msg="Handler for POST /images/create returned error: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on [2001:4860:4860::8888]:53: dial udp [2001:4860:4860::8888]:53: connect: cannot assign requested address"
Apr 14 08:29:09 minikube dockerd[2176]: time="2020-04-14T08:29:09.425417710Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9f521eef73d5aa19c797e39d55bf541af8c8e936cb9b7f8fe1fb209db26562c5/shim.sock" debug=false pid=16502
Apr 14 08:29:09 minikube dockerd[2176]: time="2020-04-14T08:29:09.919072303Z" level=info msg="shim reaped" id=9f521eef73d5aa19c797e39d55bf541af8c8e936cb9b7f8fe1fb209db26562c5
Apr 14 08:29:09 minikube dockerd[2176]: time="2020-04-14T08:29:09.929955733Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 14 08:29:32 minikube dockerd[2176]: time="2020-04-14T08:29:32.279535680Z" level=warning msg="Error getting v2 registry: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on [2001:4860:4860::8888]:53: dial udp [2001:4860:4860::8888]:53: connect: cannot assign requested address"
Apr 14 08:29:32 minikube dockerd[2176]: time="2020-04-14T08:29:32.279602851Z" level=info msg="Attempting next endpoint for pull after error: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on [2001:4860:4860::8888]:53: dial udp [2001:4860:4860::8888]:53: connect: cannot assign requested address"
Apr 14 08:29:32 minikube dockerd[2176]: time="2020-04-14T08:29:32.279676944Z" level=error msg="Handler for POST /images/create returned error: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on [2001:4860:4860::8888]:53: dial udp [2001:4860:4860::8888]:53: connect: cannot assign requested address"
Apr 14 08:30:59 minikube dockerd[2176]: time="2020-04-14T08:30:59.272550308Z" level=warning msg="Error getting v2 registry: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on [2001:4860:4860::8888]:53: dial udp [2001:4860:4860::8888]:53: connect: cannot assign requested address"
Apr 14 08:30:59 minikube dockerd[2176]: time="2020-04-14T08:30:59.273435203Z" level=info msg="Attempting next endpoint for pull after error: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on [2001:4860:4860::8888]:53: dial udp [2001:4860:4860::8888]:53: connect: cannot assign requested address"
Apr 14 08:30:59 minikube dockerd[2176]: time="2020-04-14T08:30:59.273519083Z" level=error msg="Handler for POST /images/create returned error: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on [2001:4860:4860::8888]:53: dial udp [2001:4860:4860::8888]:53: connect: cannot assign requested address"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
9f521eef73d5a 4689081edb103 2 minutes ago Exited storage-provisioner 15 e4d05c3844abb
4a28eb17d5df6 67da37a9a360e 53 minutes ago Running coredns 0 160e21150e77c
656d72a52f129 67da37a9a360e 54 minutes ago Running coredns 0 44f8c3a22a566
fdcdd519d5dd7 43940c34f24f3 54 minutes ago Running kube-proxy 0 a6ffa83230ea9
1067d08efb113 a31f78c7c8ce1 54 minutes ago Running kube-scheduler 0 d7bb5caf775d5
5bbb2283394b3 d3e55153f52fb 54 minutes ago Running kube-controller-manager 0 4a4a4f4ebbc7e
a00f64e90fde1 74060cea7f704 54 minutes ago Running kube-apiserver 0 8d8e67832e4b4
3a601f872dfd2 303ce5db0e90d 54 minutes ago Running etcd 0 0d904ea6f1b09
==> coredns [4a28eb17d5df] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b
[ERROR] plugin/errors: 2 6808370645730720992.1731143422603899002. HINFO: read udp 172.17.0.3:46185->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 6808370645730720992.1731143422603899002. HINFO: read udp 172.17.0.3:56474->8.8.4.4:53: i/o timeout
[ERROR] plugin/errors: 2 6808370645730720992.1731143422603899002. HINFO: read udp 172.17.0.3:42628->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 6808370645730720992.1731143422603899002. HINFO: read udp 172.17.0.3:59740->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 6808370645730720992.1731143422603899002. HINFO: read udp 172.17.0.3:49966->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 6808370645730720992.1731143422603899002. HINFO: read udp 172.17.0.3:57281->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 6808370645730720992.1731143422603899002. HINFO: read udp 172.17.0.3:59748->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 6808370645730720992.1731143422603899002. HINFO: read udp 172.17.0.3:39256->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 6808370645730720992.1731143422603899002. HINFO: read udp 172.17.0.3:49057->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 6808370645730720992.1731143422603899002. HINFO: read udp 172.17.0.3:35337->8.8.8.8:53: i/o timeout
==> coredns [656d72a52f12] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b
[ERROR] plugin/errors: 2 4747226472794787082.6398629073703594293. HINFO: read udp 172.17.0.2:59957->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 4747226472794787082.6398629073703594293. HINFO: read udp 172.17.0.2:42320->8.8.4.4:53: i/o timeout
[ERROR] plugin/errors: 2 4747226472794787082.6398629073703594293. HINFO: read udp 172.17.0.2:46501->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 4747226472794787082.6398629073703594293. HINFO: read udp 172.17.0.2:40916->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 4747226472794787082.6398629073703594293. HINFO: read udp 172.17.0.2:55649->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 4747226472794787082.6398629073703594293. HINFO: read udp 172.17.0.2:52335->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 4747226472794787082.6398629073703594293. HINFO: read udp 172.17.0.2:46405->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 4747226472794787082.6398629073703594293. HINFO: read udp 172.17.0.2:44465->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 4747226472794787082.6398629073703594293. HINFO: read udp 172.17.0.2:49815->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 4747226472794787082.6398629073703594293. HINFO: read udp 172.17.0.2:53715->8.8.8.8:53: i/o timeout
==> describe nodes <==
Name: minikube
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=minikube
kubernetes.io/os=linux
minikube.k8s.io/commit=48fefd43444d2f8852f527c78f0141b377b1e42a
minikube.k8s.io/name=minikube
minikube.k8s.io/updated_at=2020_04_14T15_37_20_0700
minikube.k8s.io/version=v1.9.0
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 14 Apr 2020 07:37:16 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: minikube
AcquireTime: <unset>
RenewTime: Tue, 14 Apr 2020 08:31:33 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Tue, 14 Apr 2020 08:27:46 +0000 Tue, 14 Apr 2020 07:37:08 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 14 Apr 2020 08:27:46 +0000 Tue, 14 Apr 2020 07:37:08 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 14 Apr 2020 08:27:46 +0000 Tue, 14 Apr 2020 07:37:08 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 14 Apr 2020 08:27:46 +0000 Tue, 14 Apr 2020 07:37:31 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.86
Hostname: minikube
Capacity:
cpu: 2
ephemeral-storage: 16954224Ki
hugepages-2Mi: 0
memory: 5674540Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 15625012813
hugepages-2Mi: 0
memory: 5572140Ki
pods: 110
System Info:
Machine ID: 97525a0d1e6544c0a0887547b087a074
System UUID: 97525a0d-1e65-44c0-a088-7547b087a074
Boot ID: 3b04e901-9155-4fc4-8bb9-fe317a1ae34e
Kernel Version: 4.19.107
OS Image: Buildroot 2019.02.10
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://19.3.8
Kubelet Version: v1.18.0
Kube-Proxy Version: v1.18.0
Non-terminated Pods: (9 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
default hello-minikube-64b64df8c9-k8mnm 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m37s
kube-system coredns-66bff467f8-924p4 100m (5%) 0 (0%) 70Mi (1%) 170Mi (3%) 54m
kube-system coredns-66bff467f8-ws2vf 100m (5%) 0 (0%) 70Mi (1%) 170Mi (3%) 54m
kube-system etcd-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 54m
kube-system kube-apiserver-minikube 250m (12%) 0 (0%) 0 (0%) 0 (0%) 54m
kube-system kube-controller-manager-minikube 200m (10%) 0 (0%) 0 (0%) 0 (0%) 54m
kube-system kube-proxy-qsqgm 0 (0%) 0 (0%) 0 (0%) 0 (0%) 54m
kube-system kube-scheduler-minikube 100m (5%) 0 (0%) 0 (0%) 0 (0%) 54m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 54m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%) 0 (0%)
memory 140Mi (2%) 340Mi (6%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasSufficientMemory 54m (x5 over 54m) kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 54m (x5 over 54m) kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 54m (x5 over 54m) kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal Starting 54m kubelet, minikube Starting kubelet.
Normal NodeHasSufficientMemory 54m kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 54m kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 54m kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal NodeNotReady 54m kubelet, minikube Node minikube status is now: NodeNotReady
Normal NodeAllocatableEnforced 54m kubelet, minikube Updated Node Allocatable limit across pods
Normal Starting 54m kube-proxy, minikube Starting kube-proxy.
Normal NodeReady 54m kubelet, minikube Node minikube status is now: NodeReady
==> dmesg <==
[Apr14 07:36] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.055144] core: CPUID marked event: 'bus cycles' unavailable
[ +0.022976] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +3.340144] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +0.999575] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
[ +0.013747] systemd-fstab-generator[1131]: Ignoring "noauto" for root device
[ +0.004952] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
[ +0.000004] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[ +1.617844] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[ +0.495308] vboxguest: loading out-of-tree module taints kernel.
[ +0.012423] vboxguest: PCI device not found, probably running on physical hardware.
[ +5.954918] systemd-fstab-generator[1981]: Ignoring "noauto" for root device
[ +36.012892] kauditd_printk_skb: 65 callbacks suppressed
[ +0.840871] systemd-fstab-generator[2387]: Ignoring "noauto" for root device
[ +3.563719] systemd-fstab-generator[2628]: Ignoring "noauto" for root device
[Apr14 07:37] kauditd_printk_skb: 107 callbacks suppressed
[ +14.494644] systemd-fstab-generator[3681]: Ignoring "noauto" for root device
[ +7.785073] kauditd_printk_skb: 32 callbacks suppressed
[ +7.282573] kauditd_printk_skb: 38 callbacks suppressed
[Apr14 07:38] NFSD: Unable to end grace period: -110
==> etcd [3a601f872dfd] <==
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-04-14 07:37:07.968821 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-04-14 07:37:07.970016 I | embed: name = minikube
2020-04-14 07:37:07.970057 I | embed: data dir = /var/lib/minikube/etcd
2020-04-14 07:37:07.970073 I | embed: member dir = /var/lib/minikube/etcd/member
2020-04-14 07:37:07.970086 I | embed: heartbeat = 100ms
2020-04-14 07:37:07.970094 I | embed: election = 1000ms
2020-04-14 07:37:07.970100 I | embed: snapshot count = 10000
2020-04-14 07:37:07.970123 I | embed: advertise client URLs = https://192.168.39.86:2379
2020-04-14 07:37:08.003761 I | etcdserver: starting member 5e65f7c667250dae in cluster 1e2108b476944475
raft2020/04/14 07:37:08 INFO: 5e65f7c667250dae switched to configuration voters=()
raft2020/04/14 07:37:08 INFO: 5e65f7c667250dae became follower at term 0
raft2020/04/14 07:37:08 INFO: newRaft 5e65f7c667250dae [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
raft2020/04/14 07:37:08 INFO: 5e65f7c667250dae became follower at term 1
raft2020/04/14 07:37:08 INFO: 5e65f7c667250dae switched to configuration voters=(6802115243719069102)
2020-04-14 07:37:08.042659 W | auth: simple token is not cryptographically signed
2020-04-14 07:37:08.074539 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
2020-04-14 07:37:08.080779 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-04-14 07:37:08.081645 I | embed: listening for metrics on http://127.0.0.1:2381
2020-04-14 07:37:08.082038 I | embed: listening for peers on 192.168.39.86:2380
2020-04-14 07:37:08.082208 I | etcdserver: 5e65f7c667250dae as single-node; fast-forwarding 9 ticks (election ticks 10)
raft2020/04/14 07:37:08 INFO: 5e65f7c667250dae switched to configuration voters=(6802115243719069102)
2020-04-14 07:37:08.083026 I | etcdserver/membership: added member 5e65f7c667250dae [https://192.168.39.86:2380] to cluster 1e2108b476944475
raft2020/04/14 07:37:08 INFO: 5e65f7c667250dae is starting a new election at term 1
raft2020/04/14 07:37:08 INFO: 5e65f7c667250dae became candidate at term 2
raft2020/04/14 07:37:08 INFO: 5e65f7c667250dae received MsgVoteResp from 5e65f7c667250dae at term 2
raft2020/04/14 07:37:08 INFO: 5e65f7c667250dae became leader at term 2
raft2020/04/14 07:37:08 INFO: raft.node: 5e65f7c667250dae elected leader 5e65f7c667250dae at term 2
2020-04-14 07:37:08.579352 I | etcdserver: setting up the initial cluster version to 3.4
2020-04-14 07:37:08.676066 N | etcdserver/membership: set the initial cluster version to 3.4
2020-04-14 07:37:08.676178 I | etcdserver/api: enabled capabilities for version 3.4
2020-04-14 07:37:08.676240 I | etcdserver: published {Name:minikube ClientURLs:[https://192.168.39.86:2379]} to cluster 1e2108b476944475
2020-04-14 07:37:08.786232 I | embed: ready to serve client requests
2020-04-14 07:37:08.856909 I | embed: ready to serve client requests
2020-04-14 07:37:10.168041 I | embed: serving client requests on 192.168.39.86:2379
2020-04-14 07:37:10.183195 I | embed: serving client requests on 127.0.0.1:2379
2020-04-14 07:37:10.968385 W | wal: sync duration of 2.267486265s, expected less than 1s
2020-04-14 07:37:16.947278 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:0 size:4" took too long (247.664377ms) to execute
2020-04-14 07:37:16.955498 W | etcdserver: read-only range request "key:\"/registry/csinodes/minikube\" " with result "range_response_count:0 size:4" took too long (167.891068ms) to execute
2020-04-14 07:37:16.957886 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:0 size:4" took too long (174.482103ms) to execute
2020-04-14 07:37:16.958361 W | etcdserver: read-only range request "key:\"/registry/ranges/serviceips\" " with result "range_response_count:0 size:4" took too long (251.687327ms) to execute
2020-04-14 07:37:56.502740 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-scheduler\" " with result "range_response_count:1 size:575" took too long (124.779147ms) to execute
2020-04-14 07:47:10.978500 I | mvcc: store.index: compact 1131
2020-04-14 07:47:11.008430 I | mvcc: finished scheduled compaction at 1131 (took 28.912307ms)
2020-04-14 07:52:10.985869 I | mvcc: store.index: compact 1818
2020-04-14 07:52:11.001754 I | mvcc: finished scheduled compaction at 1818 (took 15.241144ms)
2020-04-14 07:57:10.993338 I | mvcc: store.index: compact 2506
2020-04-14 07:57:11.009357 I | mvcc: finished scheduled compaction at 2506 (took 15.170135ms)
2020-04-14 08:02:11.001115 I | mvcc: store.index: compact 3191
2020-04-14 08:02:11.018325 I | mvcc: finished scheduled compaction at 3191 (took 16.56625ms)
2020-04-14 08:07:11.011664 I | mvcc: store.index: compact 3876
2020-04-14 08:07:11.027522 I | mvcc: finished scheduled compaction at 3876 (took 15.034999ms)
2020-04-14 08:12:11.019994 I | mvcc: store.index: compact 4564
2020-04-14 08:12:11.036948 I | mvcc: finished scheduled compaction at 4564 (took 15.848778ms)
2020-04-14 08:17:11.030390 I | mvcc: store.index: compact 5243
2020-04-14 08:17:11.032945 I | mvcc: finished scheduled compaction at 5243 (took 1.675165ms)
2020-04-14 08:22:11.045935 I | mvcc: store.index: compact 5914
2020-04-14 08:22:11.047613 I | mvcc: finished scheduled compaction at 5914 (took 1.288768ms)
2020-04-14 08:27:11.054262 I | mvcc: store.index: compact 6580
2020-04-14 08:27:11.059377 I | mvcc: finished scheduled compaction at 6580 (took 3.620807ms)
==> kernel <==
08:31:35 up 55 min, 0 users, load average: 0.42, 0.31, 0.32
Linux minikube 4.19.107 #1 SMP Thu Mar 26 11:33:10 PDT 2020 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2019.02.10"
==> kube-apiserver [a00f64e90fde] <==
W0414 07:37:12.948281 1 genericapiserver.go:409] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W0414 07:37:12.967747 1 genericapiserver.go:409] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0414 07:37:13.004840 1 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0414 07:37:13.011071 1 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0414 07:37:13.036460 1 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0414 07:37:13.129763 1 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources.
W0414 07:37:13.129918 1 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources.
I0414 07:37:13.144401 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0414 07:37:13.144692 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0414 07:37:13.147358 1 client.go:361] parsed scheme: "endpoint"
I0414 07:37:13.147552 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}]
I0414 07:37:13.166511 1 client.go:361] parsed scheme: "endpoint"
I0414 07:37:13.166570 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}]
I0414 07:37:16.448185 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0414 07:37:16.448515 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0414 07:37:16.449154 1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
I0414 07:37:16.449597 1 secure_serving.go:178] Serving securely on [::]:8443
I0414 07:37:16.449715 1 crd_finalizer.go:266] Starting CRDFinalizer
I0414 07:37:16.449744 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0414 07:37:16.450378 1 available_controller.go:387] Starting AvailableConditionController
I0414 07:37:16.450411 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0414 07:37:16.450437 1 controller.go:81] Starting OpenAPI AggregationController
I0414 07:37:16.451734 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0414 07:37:16.451766 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0414 07:37:16.451850 1 autoregister_controller.go:141] Starting autoregister controller
I0414 07:37:16.451882 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0414 07:37:16.452471 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0414 07:37:16.452504 1 shared_informer.go:223] Waiting for caches to sync for cluster_authentication_trust_controller
I0414 07:37:16.453811 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0414 07:37:16.453843 1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
I0414 07:37:16.453908 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0414 07:37:16.454005 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0414 07:37:16.459013 1 controller.go:86] Starting OpenAPI controller
I0414 07:37:16.459392 1 customresource_discovery_controller.go:209] Starting DiscoveryController
I0414 07:37:16.459676 1 naming_controller.go:291] Starting NamingConditionController
I0414 07:37:16.459942 1 establishing_controller.go:76] Starting EstablishingController
I0414 07:37:16.460314 1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
I0414 07:37:16.460541 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
E0414 07:37:16.596213 1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.39.86, ResourceVersion: 0, AdditionalErrorMsg:
I0414 07:37:16.651623 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0414 07:37:16.653710 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0414 07:37:16.653770 1 cache.go:39] Caches are synced for autoregister controller
I0414 07:37:16.654189 1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller
I0414 07:37:16.655716 1 shared_informer.go:230] Caches are synced for crd-autoregister
I0414 07:37:17.448686 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0414 07:37:17.449241 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0414 07:37:17.464711 1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
I0414 07:37:17.474235 1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
I0414 07:37:17.474272 1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
I0414 07:37:18.521144 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0414 07:37:18.640851 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0414 07:37:18.801360 1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.39.86]
I0414 07:37:18.802701 1 controller.go:606] quota admission added evaluator for: endpoints
I0414 07:37:18.828646 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0414 07:37:18.916480 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0414 07:37:19.208001 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0414 07:37:20.500566 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0414 07:37:20.607353 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0414 07:37:26.577749 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0414 07:37:26.881918 1 controller.go:606] quota admission added evaluator for: replicasets.apps
==> kube-controller-manager [5bbb2283394b] <==
I0414 07:37:25.414176 1 controllermanager.go:533] Started "csrcleaner"
I0414 07:37:25.414279 1 cleaner.go:82] Starting CSR cleaner controller
I0414 07:37:25.665623 1 controllermanager.go:533] Started "persistentvolume-binder"
I0414 07:37:25.665729 1 pv_controller_base.go:295] Starting persistent volume controller
I0414 07:37:25.666209 1 shared_informer.go:223] Waiting for caches to sync for persistent volume
I0414 07:37:25.915605 1 controllermanager.go:533] Started "endpointslice"
I0414 07:37:25.916219 1 endpointslice_controller.go:213] Starting endpoint slice controller
I0414 07:37:25.916271 1 shared_informer.go:223] Waiting for caches to sync for endpoint_slice
I0414 07:37:26.166245 1 controllermanager.go:533] Started "deployment"
I0414 07:37:26.166560 1 deployment_controller.go:153] Starting deployment controller
I0414 07:37:26.167499 1 shared_informer.go:223] Waiting for caches to sync for deployment
I0414 07:37:26.170430 1 shared_informer.go:223] Waiting for caches to sync for garbage collector
I0414 07:37:26.210943 1 shared_informer.go:230] Caches are synced for PV protection
W0414 07:37:26.230104 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0414 07:37:26.243435 1 shared_informer.go:230] Caches are synced for namespace
I0414 07:37:26.264243 1 shared_informer.go:230] Caches are synced for service account
I0414 07:37:26.265543 1 shared_informer.go:230] Caches are synced for certificate-csrsigning
I0414 07:37:26.271918 1 shared_informer.go:230] Caches are synced for bootstrap_signer
I0414 07:37:26.308073 1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator
I0414 07:37:26.320100 1 shared_informer.go:230] Caches are synced for certificate-csrapproving
I0414 07:37:26.320835 1 shared_informer.go:230] Caches are synced for TTL
E0414 07:37:26.351605 1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
E0414 07:37:26.362348 1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
E0414 07:37:26.372686 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
I0414 07:37:26.421002 1 shared_informer.go:230] Caches are synced for GC
I0414 07:37:26.463417 1 shared_informer.go:230] Caches are synced for ReplicationController
I0414 07:37:26.463897 1 shared_informer.go:230] Caches are synced for HPA
I0414 07:37:26.489439 1 shared_informer.go:230] Caches are synced for job
I0414 07:37:26.513865 1 shared_informer.go:230] Caches are synced for endpoint
I0414 07:37:26.514859 1 shared_informer.go:230] Caches are synced for taint
I0414 07:37:26.515031 1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone:
W0414 07:37:26.515473 1 node_lifecycle_controller.go:1048] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0414 07:37:26.515602 1 node_lifecycle_controller.go:1199] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0414 07:37:26.516583 1 taint_manager.go:187] Starting NoExecuteTaintManager
I0414 07:37:26.518579 1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"0a662c04-ceca-4c81-a414-72f904c60057", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
I0414 07:37:26.519129 1 shared_informer.go:230] Caches are synced for endpoint_slice
I0414 07:37:26.552102 1 shared_informer.go:230] Caches are synced for PVC protection
I0414 07:37:26.566093 1 shared_informer.go:230] Caches are synced for daemon sets
I0414 07:37:26.566551 1 shared_informer.go:230] Caches are synced for expand
I0414 07:37:26.567892 1 shared_informer.go:223] Waiting for caches to sync for resource quota
I0414 07:37:26.568196 1 shared_informer.go:230] Caches are synced for persistent volume
I0414 07:37:26.595975 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"dadcf9a0-43af-41d2-9498-c66d70a06e38", APIVersion:"apps/v1", ResourceVersion:"233", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-qsqgm
I0414 07:37:26.616713 1 shared_informer.go:230] Caches are synced for attach detach
I0414 07:37:26.618265 1 shared_informer.go:230] Caches are synced for stateful set
E0414 07:37:26.645387 1 daemon_controller.go:292] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"dadcf9a0-43af-41d2-9498-c66d70a06e38", ResourceVersion:"233", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63722446640, loc:(*time.Location)(0x6d021e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0019f67c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0019f67e0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0019f6800), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), 
Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0019890c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0019f6820), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), 
AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0019f6840), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), 
Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0019f6880)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001a2a000), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0019c4e48), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0001001c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, 
HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0000b2e20)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0019c4e98)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
I0414 07:37:26.770075 1 shared_informer.go:230] Caches are synced for resource quota
I0414 07:37:26.777421 1 shared_informer.go:230] Caches are synced for garbage collector
I0414 07:37:26.817335 1 shared_informer.go:230] Caches are synced for ReplicaSet
I0414 07:37:26.820728 1 shared_informer.go:230] Caches are synced for resource quota
I0414 07:37:26.859951 1 shared_informer.go:230] Caches are synced for garbage collector
I0414 07:37:26.859984 1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0414 07:37:26.866747 1 shared_informer.go:230] Caches are synced for disruption
I0414 07:37:26.866824 1 disruption.go:339] Sending events to api server.
I0414 07:37:26.873742 1 shared_informer.go:230] Caches are synced for deployment
I0414 07:37:26.886532 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"4942b2b0-b1b8-4600-aad7-6b12d629dfd5", APIVersion:"apps/v1", ResourceVersion:"224", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 2
I0414 07:37:26.932675 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"824baf0c-c149-496a-abab-a9317ed992ce", APIVersion:"apps/v1", ResourceVersion:"377", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-ws2vf
I0414 07:37:26.950291 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"824baf0c-c149-496a-abab-a9317ed992ce", APIVersion:"apps/v1", ResourceVersion:"377", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-924p4
I0414 07:37:36.516561 1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0414 08:27:58.181229 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-minikube", UID:"36b84130-12f6-49f4-be40-1b87d3f48c27", APIVersion:"apps/v1", ResourceVersion:"7351", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-minikube-64b64df8c9 to 1
I0414 08:27:58.194535 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-minikube-64b64df8c9", UID:"31f6ea89-509c-4350-9c8f-3d6b4b872c92", APIVersion:"apps/v1", ResourceVersion:"7352", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-minikube-64b64df8c9-k8mnm
==> kube-proxy [fdcdd519d5dd] <==
W0414 07:37:27.918506 1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
I0414 07:37:27.940481 1 node.go:136] Successfully retrieved node IP: 192.168.39.86
I0414 07:37:27.940746 1 server_others.go:186] Using iptables Proxier.
W0414 07:37:27.940903 1 server_others.go:436] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
I0414 07:37:27.940994 1 server_others.go:447] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
I0414 07:37:27.941606 1 server.go:583] Version: v1.18.0
I0414 07:37:27.942530 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0414 07:37:27.942619 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0414 07:37:27.942947 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0414 07:37:27.943097 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0414 07:37:27.948161 1 config.go:315] Starting service config controller
I0414 07:37:27.948273 1 shared_informer.go:223] Waiting for caches to sync for service config
I0414 07:37:27.948587 1 config.go:133] Starting endpoints config controller
I0414 07:37:27.949081 1 shared_informer.go:223] Waiting for caches to sync for endpoints config
I0414 07:37:28.053195 1 shared_informer.go:230] Caches are synced for endpoints config
I0414 07:37:28.053198 1 shared_informer.go:230] Caches are synced for service config
==> kube-scheduler [1067d08efb11] <==
I0414 07:37:08.375484 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0414 07:37:08.375608 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0414 07:37:09.646996 1 serving.go:313] Generated self-signed cert in-memory
W0414 07:37:16.622312 1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0414 07:37:16.622584 1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0414 07:37:16.622691 1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
W0414 07:37:16.622857 1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0414 07:37:16.663981 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0414 07:37:16.664191 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
W0414 07:37:16.666456 1 authorization.go:47] Authorization is disabled
W0414 07:37:16.666613 1 authentication.go:40] Authentication is disabled
I0414 07:37:16.666808 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0414 07:37:16.668964 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0414 07:37:16.673979 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0414 07:37:16.675379 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
E0414 07:37:16.674864 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
I0414 07:37:16.676274 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0414 07:37:16.677014 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0414 07:37:16.674968 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0414 07:37:16.675059 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0414 07:37:16.675153 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0414 07:37:16.675235 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0414 07:37:16.675331 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0414 07:37:16.678666 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0414 07:37:16.679415 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0414 07:37:16.679781 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0414 07:37:16.680933 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0414 07:37:16.682983 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0414 07:37:16.683362 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0414 07:37:16.686145 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0414 07:37:16.686468 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0414 07:37:16.687381 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0414 07:37:16.688158 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0414 07:37:16.689409 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0414 07:37:18.494473 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0414 07:37:19.670353 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler...
I0414 07:37:19.696246 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
E0414 07:37:22.410735 1 factory.go:503] pod: kube-system/storage-provisioner is already present in unschedulable queue
I0414 07:37:23.175979 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
E0414 07:37:27.064665 1 factory.go:503] pod: kube-system/coredns-66bff467f8-924p4 is already present in the active queue
==> kubelet <==
-- Logs begin at Tue 2020-04-14 07:36:11 UTC, end at Tue 2020-04-14 08:31:36 UTC. --
Apr 14 08:29:32 minikube kubelet[3690]: I0414 08:29:32.265667 3690 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 9f521eef73d5aa19c797e39d55bf541af8c8e936cb9b7f8fe1fb209db26562c5
Apr 14 08:29:32 minikube kubelet[3690]: E0414 08:29:32.265926 3690 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 8.8.8.8 8.8.4.4 2001:4860:4860::8888
Apr 14 08:29:32 minikube kubelet[3690]: E0414 08:29:32.266177 3690 pod_workers.go:191] Error syncing pod 79e35d5f-c2eb-4093-b120-81511b5f90a4 ("storage-provisioner_kube-system(79e35d5f-c2eb-4093-b120-81511b5f90a4)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(79e35d5f-c2eb-4093-b120-81511b5f90a4)"
Apr 14 08:29:32 minikube kubelet[3690]: E0414 08:29:32.280761 3690 remote_image.go:113] PullImage "k8s.gcr.io/echoserver:1.10" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on [2001:4860:4860::8888]:53: dial udp [2001:4860:4860::8888]:53: connect: cannot assign requested address
Apr 14 08:29:32 minikube kubelet[3690]: E0414 08:29:32.280913 3690 kuberuntime_image.go:50] Pull image "k8s.gcr.io/echoserver:1.10" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on [2001:4860:4860::8888]:53: dial udp [2001:4860:4860::8888]:53: connect: cannot assign requested address
Apr 14 08:29:32 minikube kubelet[3690]: E0414 08:29:32.281121 3690 kuberuntime_manager.go:801] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on [2001:4860:4860::8888]:53: dial udp [2001:4860:4860::8888]:53: connect: cannot assign requested address
Apr 14 08:29:32 minikube kubelet[3690]: E0414 08:29:32.281246 3690 pod_workers.go:191] Error syncing pod 0fbc016a-fd4a-4e0a-bee6-e4ccd0cbc165 ("hello-minikube-64b64df8c9-k8mnm_default(0fbc016a-fd4a-4e0a-bee6-e4ccd0cbc165)"), skipping: failed to "StartContainer" for "echoserver" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on [2001:4860:4860::8888]:53: dial udp [2001:4860:4860::8888]:53: connect: cannot assign requested address"
Apr 14 08:29:40 minikube kubelet[3690]: E0414 08:29:40.266528 3690 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 8.8.8.8 8.8.4.4 2001:4860:4860::8888
Apr 14 08:29:40 minikube kubelet[3690]: E0414 08:29:40.268224 3690 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 8.8.8.8 8.8.4.4 2001:4860:4860::8888
Apr 14 08:29:41 minikube kubelet[3690]: E0414 08:29:41.266784 3690 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 8.8.8.8 8.8.4.4 2001:4860:4860::8888
Apr 14 08:29:44 minikube kubelet[3690]: I0414 08:29:44.266204 3690 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 9f521eef73d5aa19c797e39d55bf541af8c8e936cb9b7f8fe1fb209db26562c5
Apr 14 08:29:44 minikube kubelet[3690]: E0414 08:29:44.266645 3690 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 8.8.8.8 8.8.4.4 2001:4860:4860::8888
Apr 14 08:29:44 minikube kubelet[3690]: E0414 08:29:44.267128 3690 pod_workers.go:191] Error syncing pod 79e35d5f-c2eb-4093-b120-81511b5f90a4 ("storage-provisioner_kube-system(79e35d5f-c2eb-4093-b120-81511b5f90a4)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(79e35d5f-c2eb-4093-b120-81511b5f90a4)"
Apr 14 08:29:46 minikube kubelet[3690]: E0414 08:29:46.270845 3690 pod_workers.go:191] Error syncing pod 0fbc016a-fd4a-4e0a-bee6-e4ccd0cbc165 ("hello-minikube-64b64df8c9-k8mnm_default(0fbc016a-fd4a-4e0a-bee6-e4ccd0cbc165)"), skipping: failed to "StartContainer" for "echoserver" with ImagePullBackOff: "Back-off pulling image \"k8s.gcr.io/echoserver:1.10\""
Apr 14 08:29:54 minikube kubelet[3690]: E0414 08:29:54.266439 3690 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 8.8.8.8 8.8.4.4 2001:4860:4860::8888
Apr 14 08:29:56 minikube kubelet[3690]: E0414 08:29:56.266129 3690 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 8.8.8.8 8.8.4.4 2001:4860:4860::8888
Apr 14 08:29:57 minikube kubelet[3690]: I0414 08:29:57.265923 3690 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 9f521eef73d5aa19c797e39d55bf541af8c8e936cb9b7f8fe1fb209db26562c5
Apr 14 08:29:57 minikube kubelet[3690]: E0414 08:29:57.266207 3690 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 8.8.8.8 8.8.4.4 2001:4860:4860::8888
Apr 14 08:29:57 minikube kubelet[3690]: E0414 08:29:57.266591 3690 pod_workers.go:191] Error syncing pod 79e35d5f-c2eb-4093-b120-81511b5f90a4 ("storage-provisioner_kube-system(79e35d5f-c2eb-4093-b120-81511b5f90a4)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(79e35d5f-c2eb-4093-b120-81511b5f90a4)"
Apr 14 08:30:00 minikube kubelet[3690]: E0414 08:30:00.268051 3690 pod_workers.go:191] Error syncing pod 0fbc016a-fd4a-4e0a-bee6-e4ccd0cbc165 ("hello-minikube-64b64df8c9-k8mnm_default(0fbc016a-fd4a-4e0a-bee6-e4ccd0cbc165)"), skipping: failed to "StartContainer" for "echoserver" with ImagePullBackOff: "Back-off pulling image \"k8s.gcr.io/echoserver:1.10\""
Apr 14 08:30:11 minikube kubelet[3690]: I0414 08:30:11.265989 3690 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 9f521eef73d5aa19c797e39d55bf541af8c8e936cb9b7f8fe1fb209db26562c5
Apr 14 08:30:11 minikube kubelet[3690]: E0414 08:30:11.267142 3690 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 8.8.8.8 8.8.4.4 2001:4860:4860::8888
Apr 14 08:30:11 minikube kubelet[3690]: E0414 08:30:11.269272 3690 pod_workers.go:191] Error syncing pod 79e35d5f-c2eb-4093-b120-81511b5f90a4 ("storage-provisioner_kube-system(79e35d5f-c2eb-4093-b120-81511b5f90a4)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(79e35d5f-c2eb-4093-b120-81511b5f90a4)"
Apr 14 08:30:11 minikube kubelet[3690]: E0414 08:30:11.272136 3690 pod_workers.go:191] Error syncing pod 0fbc016a-fd4a-4e0a-bee6-e4ccd0cbc165 ("hello-minikube-64b64df8c9-k8mnm_default(0fbc016a-fd4a-4e0a-bee6-e4ccd0cbc165)"), skipping: failed to "StartContainer" for "echoserver" with ImagePullBackOff: "Back-off pulling image \"k8s.gcr.io/echoserver:1.10\""
Apr 14 08:30:23 minikube kubelet[3690]: E0414 08:30:23.266691 3690 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 8.8.8.8 8.8.4.4 2001:4860:4860::8888
Apr 14 08:30:23 minikube kubelet[3690]: E0414 08:30:23.277491 3690 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 8.8.8.8 8.8.4.4 2001:4860:4860::8888
Apr 14 08:30:23 minikube kubelet[3690]: I0414 08:30:23.285586 3690 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 9f521eef73d5aa19c797e39d55bf541af8c8e936cb9b7f8fe1fb209db26562c5
Apr 14 08:30:23 minikube kubelet[3690]: E0414 08:30:23.285862 3690 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 8.8.8.8 8.8.4.4 2001:4860:4860::8888
Apr 14 08:30:23 minikube kubelet[3690]: E0414 08:30:23.286144 3690 pod_workers.go:191] Error syncing pod 79e35d5f-c2eb-4093-b120-81511b5f90a4 ("storage-provisioner_kube-system(79e35d5f-c2eb-4093-b120-81511b5f90a4)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(79e35d5f-c2eb-4093-b120-81511b5f90a4)"
Apr 14 08:30:25 minikube kubelet[3690]: E0414 08:30:25.268665 3690 pod_workers.go:191] Error syncing pod 0fbc016a-fd4a-4e0a-bee6-e4ccd0cbc165 ("hello-minikube-64b64df8c9-k8mnm_default(0fbc016a-fd4a-4e0a-bee6-e4ccd0cbc165)"), skipping: failed to "StartContainer" for "echoserver" with ImagePullBackOff: "Back-off pulling image \"k8s.gcr.io/echoserver:1.10\""
Apr 14 08:30:34 minikube kubelet[3690]: I0414 08:30:34.266160 3690 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 9f521eef73d5aa19c797e39d55bf541af8c8e936cb9b7f8fe1fb209db26562c5
Apr 14 08:30:34 minikube kubelet[3690]: E0414 08:30:34.267135 3690 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 8.8.8.8 8.8.4.4 2001:4860:4860::8888
Apr 14 08:30:34 minikube kubelet[3690]: E0414 08:30:34.267398 3690 pod_workers.go:191] Error syncing pod 79e35d5f-c2eb-4093-b120-81511b5f90a4 ("storage-provisioner_kube-system(79e35d5f-c2eb-4093-b120-81511b5f90a4)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(79e35d5f-c2eb-4093-b120-81511b5f90a4)"
Apr 14 08:30:37 minikube kubelet[3690]: E0414 08:30:37.268103 3690 pod_workers.go:191] Error syncing pod 0fbc016a-fd4a-4e0a-bee6-e4ccd0cbc165 ("hello-minikube-64b64df8c9-k8mnm_default(0fbc016a-fd4a-4e0a-bee6-e4ccd0cbc165)"), skipping: failed to "StartContainer" for "echoserver" with ImagePullBackOff: "Back-off pulling image \"k8s.gcr.io/echoserver:1.10\""
Apr 14 08:30:48 minikube kubelet[3690]: E0414 08:30:48.268064 3690 pod_workers.go:191] Error syncing pod 0fbc016a-fd4a-4e0a-bee6-e4ccd0cbc165 ("hello-minikube-64b64df8c9-k8mnm_default(0fbc016a-fd4a-4e0a-bee6-e4ccd0cbc165)"), skipping: failed to "StartContainer" for "echoserver" with ImagePullBackOff: "Back-off pulling image \"k8s.gcr.io/echoserver:1.10\""
Apr 14 08:30:49 minikube kubelet[3690]: I0414 08:30:49.267391 3690 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 9f521eef73d5aa19c797e39d55bf541af8c8e936cb9b7f8fe1fb209db26562c5
Apr 14 08:30:49 minikube kubelet[3690]: E0414 08:30:49.268062 3690 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 8.8.8.8 8.8.4.4 2001:4860:4860::8888
Apr 14 08:30:49 minikube kubelet[3690]: E0414 08:30:49.269100 3690 pod_workers.go:191] Error syncing pod 79e35d5f-c2eb-4093-b120-81511b5f90a4 ("storage-provisioner_kube-system(79e35d5f-c2eb-4093-b120-81511b5f90a4)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(79e35d5f-c2eb-4093-b120-81511b5f90a4)"
Apr 14 08:30:56 minikube kubelet[3690]: E0414 08:30:56.266062 3690 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 8.8.8.8 8.8.4.4 2001:4860:4860::8888
Apr 14 08:30:58 minikube kubelet[3690]: E0414 08:30:58.266233 3690 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 8.8.8.8 8.8.4.4 2001:4860:4860::8888
Apr 14 08:30:59 minikube kubelet[3690]: E0414 08:30:59.274118 3690 remote_image.go:113] PullImage "k8s.gcr.io/echoserver:1.10" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on [2001:4860:4860::8888]:53: dial udp [2001:4860:4860::8888]:53: connect: cannot assign requested address
Apr 14 08:30:59 minikube kubelet[3690]: E0414 08:30:59.274783 3690 kuberuntime_image.go:50] Pull image "k8s.gcr.io/echoserver:1.10" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on [2001:4860:4860::8888]:53: dial udp [2001:4860:4860::8888]:53: connect: cannot assign requested address
Apr 14 08:30:59 minikube kubelet[3690]: E0414 08:30:59.275045 3690 kuberuntime_manager.go:801] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on [2001:4860:4860::8888]:53: dial udp [2001:4860:4860::8888]:53: connect: cannot assign requested address
Apr 14 08:30:59 minikube kubelet[3690]: E0414 08:30:59.275202 3690 pod_workers.go:191] Error syncing pod 0fbc016a-fd4a-4e0a-bee6-e4ccd0cbc165 ("hello-minikube-64b64df8c9-k8mnm_default(0fbc016a-fd4a-4e0a-bee6-e4ccd0cbc165)"), skipping: failed to "StartContainer" for "echoserver" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on [2001:4860:4860::8888]:53: dial udp [2001:4860:4860::8888]:53: connect: cannot assign requested address"
Apr 14 08:31:01 minikube kubelet[3690]: I0414 08:31:01.265860 3690 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 9f521eef73d5aa19c797e39d55bf541af8c8e936cb9b7f8fe1fb209db26562c5
Apr 14 08:31:01 minikube kubelet[3690]: E0414 08:31:01.266776 3690 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 8.8.8.8 8.8.4.4 2001:4860:4860::8888
Apr 14 08:31:01 minikube kubelet[3690]: E0414 08:31:01.267355 3690 pod_workers.go:191] Error syncing pod 79e35d5f-c2eb-4093-b120-81511b5f90a4 ("storage-provisioner_kube-system(79e35d5f-c2eb-4093-b120-81511b5f90a4)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(79e35d5f-c2eb-4093-b120-81511b5f90a4)"
Apr 14 08:31:04 minikube kubelet[3690]: E0414 08:31:04.267444 3690 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 8.8.8.8 8.8.4.4 2001:4860:4860::8888
Apr 14 08:31:07 minikube kubelet[3690]: E0414 08:31:07.267145 3690 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 8.8.8.8 8.8.4.4 2001:4860:4860::8888
Apr 14 08:31:12 minikube kubelet[3690]: E0414 08:31:12.267198 3690 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 8.8.8.8 8.8.4.4 2001:4860:4860::8888
Apr 14 08:31:12 minikube kubelet[3690]: E0414 08:31:12.271943 3690 pod_workers.go:191] Error syncing pod 0fbc016a-fd4a-4e0a-bee6-e4ccd0cbc165 ("hello-minikube-64b64df8c9-k8mnm_default(0fbc016a-fd4a-4e0a-bee6-e4ccd0cbc165)"), skipping: failed to "StartContainer" for "echoserver" with ImagePullBackOff: "Back-off pulling image \"k8s.gcr.io/echoserver:1.10\""
Apr 14 08:31:14 minikube kubelet[3690]: I0414 08:31:14.265915 3690 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 9f521eef73d5aa19c797e39d55bf541af8c8e936cb9b7f8fe1fb209db26562c5
Apr 14 08:31:14 minikube kubelet[3690]: E0414 08:31:14.267245 3690 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 8.8.8.8 8.8.4.4 2001:4860:4860::8888
Apr 14 08:31:14 minikube kubelet[3690]: E0414 08:31:14.267751 3690 pod_workers.go:191] Error syncing pod 79e35d5f-c2eb-4093-b120-81511b5f90a4 ("storage-provisioner_kube-system(79e35d5f-c2eb-4093-b120-81511b5f90a4)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(79e35d5f-c2eb-4093-b120-81511b5f90a4)"
Apr 14 08:31:25 minikube kubelet[3690]: E0414 08:31:25.268332 3690 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 8.8.8.8 8.8.4.4 2001:4860:4860::8888
Apr 14 08:31:25 minikube kubelet[3690]: I0414 08:31:25.274133 3690 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 9f521eef73d5aa19c797e39d55bf541af8c8e936cb9b7f8fe1fb209db26562c5
Apr 14 08:31:25 minikube kubelet[3690]: E0414 08:31:25.274333 3690 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 8.8.8.8 8.8.4.4 2001:4860:4860::8888
Apr 14 08:31:25 minikube kubelet[3690]: E0414 08:31:25.274943 3690 pod_workers.go:191] Error syncing pod 79e35d5f-c2eb-4093-b120-81511b5f90a4 ("storage-provisioner_kube-system(79e35d5f-c2eb-4093-b120-81511b5f90a4)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(79e35d5f-c2eb-4093-b120-81511b5f90a4)"
Apr 14 08:31:25 minikube kubelet[3690]: E0414 08:31:25.289771 3690 pod_workers.go:191] Error syncing pod 0fbc016a-fd4a-4e0a-bee6-e4ccd0cbc165 ("hello-minikube-64b64df8c9-k8mnm_default(0fbc016a-fd4a-4e0a-bee6-e4ccd0cbc165)"), skipping: failed to "StartContainer" for "echoserver" with ImagePullBackOff: "Back-off pulling image \"k8s.gcr.io/echoserver:1.10\""
Apr 14 08:31:28 minikube kubelet[3690]: E0414 08:31:28.266003 3690 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 8.8.8.8 8.8.4.4 2001:4860:4860::8888
==> storage-provisioner [9f521eef73d5] <==
F0414 08:29:09.837886 1 main.go:37] Error getting server version: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: connect: network is unreachable
user@server:~$
It seems this was caused by residual configuration left over from a previous libvirt installation. Purging all of the libvirt packages, reinstalling them, and restarting the service fixed the problem.
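For anyone hitting the same thing, the purge/reinstall cycle on Ubuntu 18.04 looked roughly like the following. This is a sketch, not an exact transcript: the package names (`qemu-kvm`, `libvirt-daemon-system`, `libvirt-clients`) are the usual Ubuntu 18.04 ones and may differ on your release, and `minikube delete` is included because the old VM keeps the broken network config otherwise.

```shell
# Purge libvirt/KVM packages together with their config files
# (purge, not remove, so stale /etc/libvirt state is dropped too)
sudo apt-get purge -y qemu-kvm libvirt-daemon-system libvirt-clients
sudo apt-get autoremove -y

# Reinstall and restart the libvirt daemon
sudo apt-get install -y qemu-kvm libvirt-daemon-system libvirt-clients
sudo systemctl restart libvirtd

# Recreate the minikube VM from scratch so it picks up the fresh network
minikube delete
minikube start --driver=kvm2
```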
Steps to reproduce the issue:

1. `minikube start --driver=kvm2`
2. `kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.10`
3. `kubectl get pod`
The full output of the failed command, the `minikube start` command used, and `minikube logs` are included above.

Thanks.