kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Minikube attempts to connect to internet when using http_proxy/https_proxy #5793

Closed: gclawes closed this issue 4 years ago

gclawes commented 4 years ago

Minikube attempts to connect to the internet by running nslookup k8s.io against 8.8.8.8 and 1.1.1.1, then pinging 8.8.8.8 if both lookups fail. On a corporate/closed network behind a proxy, these public DNS endpoints are not reachable. Minikube should use the local DNS resolvers instead, and possibly behave differently when http_proxy/HTTP_PROXY is set.
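
The check it runs amounts to the following (visible in the start output below):

    $ nslookup k8s.io 8.8.8.8 || nslookup k8s.io 1.1.1.1 || ping -c1 8.8.8.8

A proxy-aware variant could consult the VM's own resolver (whatever is in /etc/resolv.conf) before declaring the network unreachable. As a sketch of the suggested order only (the leading plain nslookup is the proposed behavior, not what minikube currently runs):

    $ nslookup k8s.io || nslookup k8s.io 8.8.8.8 || nslookup k8s.io 1.1.1.1 || ping -c1 8.8.8.8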

The exact command to reproduce the issue: minikube start

The full output of the command that failed:

~ [⎈ minikube:default] $ minikube start
😄  minikube v1.5.1 on Darwin 10.14.6
✨  Automatically selected the 'hyperkit' driver (alternates: [virtualbox])
💾  Downloading driver docker-machine-driver-hyperkit:
    > docker-machine-driver-hyperkit.sha256: 65 B / 65 B [---] 100.00% ? p/s 0s
    > docker-machine-driver-hyperkit: 10.79 MiB / 10.79 MiB  100.00% 7.82 MiB p
🔑  The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

    $ sudo chown root:wheel /Users/gralaw/.minikube/bin/docker-machine-driver-hyperkit
    $ sudo chmod u+s /Users/gralaw/.minikube/bin/docker-machine-driver-hyperkit

💿  Downloading VM boot image ...
    > minikube-v1.5.1.iso.sha256: 65 B / 65 B [--------------] 100.00% ? p/s 0s
    > minikube-v1.5.1.iso: 143.76 MiB / 143.76 MiB [-] 100.00% 9.60 MiB p/s 15s
🔥  Creating hyperkit VM (CPUs=2, Memory=6144MB, Disk=20000MB) ...
🌐  Found network options:
    ▪ HTTP_PROXY=http://proxy-us02.org.nasdaqomx.com:8080
    ▪ HTTPS_PROXY=http://proxy-us02.org.nasdaqomx.com:8080
    ▪ NO_PROXY=localhost,127.0.0.1,localaddress,*localdomain.com,localdomain.com,192.168.0.0/16,10.0.0.0/8,172.16.0.0/12,*nasdaq.com,nasdaq.com,*nasdaqomx.com,nasdaqomx.com,*ften.com,ften.com,*om.com,om.com,*.om,.om,minikube
    ▪ http_proxy=http://proxy-us02.org.nasdaqomx.com:8080
    ▪ https_proxy=http://proxy-us02.org.nasdaqomx.com:8080
    ▪ no_proxy=localhost,127.0.0.1,localaddress,*localdomain.com,localdomain.com,192.168.0.0/16,10.0.0.0/8,172.16.0.0/12,*nasdaq.com,nasdaq.com,*nasdaqomx.com,nasdaqomx.com,*ften.com,ften.com,*om.com,om.com,*.om,.om,minikube
⚠️  VM is unable to directly connect to the internet: command failed: nslookup k8s.io 8.8.8.8 || nslookup k8s.io 1.1.1.1 || ping -c1 8.8.8.8
stdout: ;; connection timed out; no servers could be reached

;; connection timed out; no servers could be reached

PING 8.8.8.8 (8.8.8.8): 56 data bytes

--- 8.8.8.8 ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss

stderr: : Process exited with status 1
🐳  Preparing Kubernetes v1.16.2 on Docker 18.09.9 ...
    ▪ env HTTP_PROXY=http://proxy-us02.org.nasdaqomx.com:8080
    ▪ env HTTPS_PROXY=http://proxy-us02.org.nasdaqomx.com:8080
    ▪ env NO_PROXY=localhost,127.0.0.1,localaddress,*localdomain.com,localdomain.com,192.168.0.0/16,10.0.0.0/8,172.16.0.0/12,*nasdaq.com,nasdaq.com,*nasdaqomx.com,nasdaqomx.com,*ften.com,ften.com,*om.com,om.com,*.om,.om,minikube
    ▪ env HTTP_PROXY=http://proxy-us02.org.nasdaqomx.com:8080
    ▪ env HTTPS_PROXY=http://proxy-us02.org.nasdaqomx.com:8080
    ▪ env NO_PROXY=localhost,127.0.0.1,localaddress,*localdomain.com,localdomain.com,192.168.0.0/16,10.0.0.0/8,172.16.0.0/12,*nasdaq.com,nasdaq.com,*nasdaqomx.com,nasdaqomx.com,*ften.com,ften.com,*om.com,om.com,*.om,.om,minikube
💾  Downloading kubeadm v1.16.2
💾  Downloading kubelet v1.16.2
🚜  Pulling images ...
🚀  Launching Kubernetes ...
⌛  Waiting for: apiserver
🏄  Done! kubectl is now configured to use "minikube"
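
Note that the cluster itself comes up, because image and binary downloads go through the configured proxy; only the direct-connectivity probe fails. The probe can be re-run in isolation from inside the VM (a sketch, assuming nslookup is present in the guest, which the probe itself relies on):

    $ minikube ssh "nslookup k8s.io 8.8.8.8"   # hardcoded resolver: times out behind the proxy
    $ minikube ssh "nslookup k8s.io"           # the VM's local resolver may succeed where 8.8.8.8 does not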

The output of the minikube logs command:

==> Docker <==
-- Logs begin at Wed 2019-10-30 18:19:49 UTC, end at Wed 2019-10-30 18:24:12 UTC. --
Oct 30 18:20:14 minikube dockerd[1973]: time="2019-10-30T18:20:14.488284481Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0006f6cd0, READY" module=grpc
Oct 30 18:20:14 minikube dockerd[1973]: time="2019-10-30T18:20:14.512377805Z" level=info msg="Graph migration to content-addressability took 0.00 seconds"
Oct 30 18:20:14 minikube dockerd[1973]: time="2019-10-30T18:20:14.512701647Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Oct 30 18:20:14 minikube dockerd[1973]: time="2019-10-30T18:20:14.512760933Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Oct 30 18:20:14 minikube dockerd[1973]: time="2019-10-30T18:20:14.512804229Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
Oct 30 18:20:14 minikube dockerd[1973]: time="2019-10-30T18:20:14.512845776Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
Oct 30 18:20:14 minikube dockerd[1973]: time="2019-10-30T18:20:14.512894278Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
Oct 30 18:20:14 minikube dockerd[1973]: time="2019-10-30T18:20:14.512935890Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
Oct 30 18:20:14 minikube dockerd[1973]: time="2019-10-30T18:20:14.513330666Z" level=info msg="Loading containers: start."
Oct 30 18:20:14 minikube dockerd[1973]: time="2019-10-30T18:20:14.592216234Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Oct 30 18:20:14 minikube dockerd[1973]: time="2019-10-30T18:20:14.643125147Z" level=info msg="Loading containers: done."
Oct 30 18:20:14 minikube dockerd[1973]: time="2019-10-30T18:20:14.649482198Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 30 18:20:14 minikube dockerd[1973]: time="2019-10-30T18:20:14.661201864Z" level=info msg="Docker daemon" commit=039a7df9ba graphdriver(s)=overlay2 version=18.09.9
Oct 30 18:20:14 minikube dockerd[1973]: time="2019-10-30T18:20:14.661343150Z" level=info msg="Daemon has completed initialization"
Oct 30 18:20:14 minikube dockerd[1973]: time="2019-10-30T18:20:14.671573420Z" level=info msg="API listen on /var/run/docker.sock"
Oct 30 18:20:14 minikube dockerd[1973]: time="2019-10-30T18:20:14.671625663Z" level=info msg="API listen on [::]:2376"
Oct 30 18:20:14 minikube systemd[1]: Started Docker Application Container Engine.
Oct 30 18:21:19 minikube dockerd[1973]: time="2019-10-30T18:21:19.998359284Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 30 18:21:20 minikube dockerd[1973]: time="2019-10-30T18:21:20.058413998Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 30 18:21:20 minikube dockerd[1973]: time="2019-10-30T18:21:20.109177886Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 30 18:21:20 minikube dockerd[1973]: time="2019-10-30T18:21:20.397367335Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 30 18:21:23 minikube dockerd[1973]: time="2019-10-30T18:21:23.418193729Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 30 18:21:23 minikube dockerd[1973]: time="2019-10-30T18:21:23.436952707Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 30 18:21:23 minikube dockerd[1973]: time="2019-10-30T18:21:23.484193470Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 30 18:21:23 minikube dockerd[1973]: time="2019-10-30T18:21:23.496635943Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 30 18:21:31 minikube dockerd[1973]: time="2019-10-30T18:21:31.675000272Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 30 18:21:32 minikube dockerd[1973]: time="2019-10-30T18:21:32.508260465Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8d4f699e03539c98fffe5a6b06e94df6e59e54a05b6d483f4daeb4fbfea8d1d9/shim.sock" debug=false pid=3083
Oct 30 18:21:32 minikube dockerd[1973]: time="2019-10-30T18:21:32.539641586Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b8d58d5dfc64a44edd21b58b8cdf4642f580011f6ad8d09f71ac011dfa55436d/shim.sock" debug=false pid=3102
Oct 30 18:21:32 minikube dockerd[1973]: time="2019-10-30T18:21:32.541882789Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b7916f683736ff7138051450891de0d89143afe48459a9c283a63e618589b6de/shim.sock" debug=false pid=3103
Oct 30 18:21:32 minikube dockerd[1973]: time="2019-10-30T18:21:32.558610579Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d0b78144124d3547b321c19e8a361763e5f104d1848cfb886866addc2e96d81e/shim.sock" debug=false pid=3114
Oct 30 18:21:32 minikube dockerd[1973]: time="2019-10-30T18:21:32.562180274Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/26908ce311dd12fcef3a056cbf9c2b7878a36477f7f1849cff78ba67ef942ec0/shim.sock" debug=false pid=3123
Oct 30 18:21:32 minikube dockerd[1973]: time="2019-10-30T18:21:32.864272860Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/368c642012b8e2b6661352fe8eacb63b86f2be44cd3eb8d757219fcf2a0594f0/shim.sock" debug=false pid=3265
Oct 30 18:21:32 minikube dockerd[1973]: time="2019-10-30T18:21:32.985268208Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/58cc148f9efd15a2d8995e8b3db4b1f14282d3c01e6a8ec9058bf3cb71c1aa64/shim.sock" debug=false pid=3331
Oct 30 18:21:33 minikube dockerd[1973]: time="2019-10-30T18:21:33.057039217Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1b92ed1e3bd99f5991b75dc2001be14ab09695268e9fcb332a739bd892b4800f/shim.sock" debug=false pid=3355
Oct 30 18:21:33 minikube dockerd[1973]: time="2019-10-30T18:21:33.099960550Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b2ec5b2240d80573cfd0e888af1b09fb83ddd68ce975355afd9190eda63db99b/shim.sock" debug=false pid=3380
Oct 30 18:21:41 minikube dockerd[1973]: time="2019-10-30T18:21:41.156307714Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8e24274b95e88054291743af195308447174784ad178ae0420608f00b7254c3c/shim.sock" debug=false pid=3538
Oct 30 18:21:49 minikube dockerd[1973]: time="2019-10-30T18:21:49.943121998Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/df98d8485de6e23063048989179cdb145cd7135fcf127eab0f9e8dd27a54245a/shim.sock" debug=false pid=3725
Oct 30 18:21:50 minikube dockerd[1973]: time="2019-10-30T18:21:50.028268480Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8efb5980db43b49950c22be5d0a0413d21cd63424e07082b41d440aae7e8fbcd/shim.sock" debug=false pid=3761
Oct 30 18:21:50 minikube dockerd[1973]: time="2019-10-30T18:21:50.633958012Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5ab7cf59d077bc535bcc45e73cba0d86088979bc5e83ea8e0e3daeb90dc3da30/shim.sock" debug=false pid=3877
Oct 30 18:21:50 minikube dockerd[1973]: time="2019-10-30T18:21:50.661089198Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b367b7ebc2042eba9e99254ff4a26351a5351275969a40fc5f1a96f13a207f4e/shim.sock" debug=false pid=3889
Oct 30 18:21:50 minikube dockerd[1973]: time="2019-10-30T18:21:50.696110515Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/83591015c653056fc0512c0beeb768a295bfdbf9d425c66d67f4cf19c4e15118/shim.sock" debug=false pid=3902
Oct 30 18:21:51 minikube dockerd[1973]: time="2019-10-30T18:21:51.180235848Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/aebf56be99bfc80d06246f63e474707c388849f3da9629d622b9c870a62dc4d9/shim.sock" debug=false pid=4036
Oct 30 18:21:51 minikube dockerd[1973]: time="2019-10-30T18:21:51.890904972Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/985bcc81978f2178419b9b828948d78b16f09e50e72ae61d55fc0aad88c24591/shim.sock" debug=false pid=4139
Oct 30 18:21:52 minikube dockerd[1973]: time="2019-10-30T18:21:52.086082701Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/39756431f83a98682c682351be6356f2adabe7f554c9393871421ea3c4271721/shim.sock" debug=false pid=4169
Oct 30 18:21:52 minikube dockerd[1973]: time="2019-10-30T18:21:52.592183408Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/301a3ab9622b62eaa25746d8dd382deae3ef4a9a58a34baf3962d7f180ac7dca/shim.sock" debug=false pid=4255
Oct 30 18:21:53 minikube dockerd[1973]: time="2019-10-30T18:21:53.163990901Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1f1304a08ae44181cb712d1619b1322bae4e3bbf5cb4c2c088a087da623040c3/shim.sock" debug=false pid=4363
Oct 30 18:21:53 minikube dockerd[1973]: time="2019-10-30T18:21:53.255467528Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/58f7239f51a475e24863aa33fff1549a10359c430b6bc130ee4b6188b0690821/shim.sock" debug=false pid=4379
Oct 30 18:21:54 minikube dockerd[1973]: time="2019-10-30T18:21:54.024180434Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/83e284929761b056ace13a9678d188419c8e54b31e2f6db42c1bf20f1d28c06e/shim.sock" debug=false pid=4587
Oct 30 18:21:54 minikube dockerd[1973]: time="2019-10-30T18:21:54.512085460Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f526095bd82a039b865983ef7844d4acf53cb0a4bc8b41c57a7018e199c4aadf/shim.sock" debug=false pid=4655
Oct 30 18:21:54 minikube dockerd[1973]: time="2019-10-30T18:21:54.779420771Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/604d07ddd048616c7828cbcad9f02ce48c45041c7ef31d005abb2beb00a264e6/shim.sock" debug=false pid=4704
Oct 30 18:21:54 minikube dockerd[1973]: time="2019-10-30T18:21:54.909503964Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c5adcfa8ccf6f9ed46ba1196b85d1db24082da1b7ef2473e938dc69189fa78e9/shim.sock" debug=false pid=4734
Oct 30 18:21:55 minikube dockerd[1973]: time="2019-10-30T18:21:55.132991022Z" level=info msg="shim reaped" id=c5adcfa8ccf6f9ed46ba1196b85d1db24082da1b7ef2473e938dc69189fa78e9
Oct 30 18:21:55 minikube dockerd[1973]: time="2019-10-30T18:21:55.143272027Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 30 18:21:55 minikube dockerd[1973]: time="2019-10-30T18:21:55.143497182Z" level=warning msg="c5adcfa8ccf6f9ed46ba1196b85d1db24082da1b7ef2473e938dc69189fa78e9 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/c5adcfa8ccf6f9ed46ba1196b85d1db24082da1b7ef2473e938dc69189fa78e9/mounts/shm, flags: 0x2: no such file or directory"
Oct 30 18:22:05 minikube dockerd[1973]: time="2019-10-30T18:22:05.690871190Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5ca4069a6617df8c1a4f34da0566d8089f98b098d1a2843b0bc34d4a1bbea174/shim.sock" debug=false pid=4939
Oct 30 18:22:43 minikube dockerd[1973]: time="2019-10-30T18:22:43.350198417Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5390e3a220d18632460dcee091c27f7433218f4e29967062bd614ddaccf6fd09/shim.sock" debug=false pid=5466
Oct 30 18:22:47 minikube dockerd[1973]: time="2019-10-30T18:22:47.848489397Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3ea4a282f3c7d3f6b3412ca6f5548d0bf44cc34303aa18716feb063e01ffceb9/shim.sock" debug=false pid=5883
Oct 30 18:23:12 minikube dockerd[1973]: time="2019-10-30T18:23:12.761222121Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f8669053c104835d0e7dc8a3c56a74314c6090115ea6702c814ed3edd1b3c871/shim.sock" debug=false pid=6215
Oct 30 18:23:14 minikube dockerd[1973]: time="2019-10-30T18:23:14.131231702Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ba8fff97c89f2e0ffddd21f8248f4883b019f4cf19d40e9ef564873b9f90e5b6/shim.sock" debug=false pid=6310
Oct 30 18:24:08 minikube dockerd[1973]: time="2019-10-30T18:24:08.965104307Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/20305d4447e368aeab53cddba0cc720d0b2e3de2a32571714fc8e64a9929859a/shim.sock" debug=false pid=6954

==> container status <==
CONTAINER           IMAGE                                                                                                                                    CREATED              STATE               NAME                         ATTEMPT             POD ID
20305d4447e36       k8s.gcr.io/elasticsearch@sha256:7e95b32a7a2aad0c0db5c881e4a1ce8b7e53236144ae9d9cfb5fbe5608af4ab2                                         4 seconds ago        Running             elasticsearch-logging        0                   985bcc81978f2
ba8fff97c89f2       ivans3/minikube-log-viewer@sha256:75854f45305cc47d17b04c6c588fa60777391761f951e3a34161ddf1f1b06405                                       58 seconds ago       Running             logviewer                    0                   83e284929761b
f8669053c1048       quay.io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:d0b22f715fcea5598ef7f869d308b55289a3daaa12922fa52a1abf17703c88e7   About a minute ago   Running             nginx-ingress-controller     0                   58f7239f51a47
3ea4a282f3c7d       k8s.gcr.io/metrics-server-amd64@sha256:49a9f12f7067d11f42c803dbe61ed2c1299959ad85cb315b25ff7eef8e6b8892                                  About a minute ago   Running             metrics-server               0                   1f1304a08ae44
5390e3a220d18       docker.elastic.co/kibana/kibana@sha256:cd948a9bda4622f1437afc4a3e78be6c8c25fc62f40aa0376f3d690f2436568f                                  About a minute ago   Running             kibana-logging               0                   301a3ab9622b6
5ca4069a6617d       k8s.gcr.io/fluentd-elasticsearch@sha256:d0480bbf2d0de2344036fa3f7034cf7b4b98025a89c71d7f1f1845ac0e7d5a97                                 2 minutes ago        Running             fluentd-es                   0                   39756431f83a9
c5adcfa8ccf6f       registry.hub.docker.com/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475                           2 minutes ago        Exited              elasticsearch-logging-init   0                   985bcc81978f2
604d07ddd0486       4689081edb103                                                                                                                            2 minutes ago        Running             storage-provisioner          0                   f526095bd82a0
aebf56be99bfc       8454cbe08dc9f                                                                                                                            2 minutes ago        Running             kube-proxy                   0                   b367b7ebc2042
83591015c6530       bf261d1579144                                                                                                                            2 minutes ago        Running             coredns                      0                   8efb5980db43b
5ab7cf59d077b       bf261d1579144                                                                                                                            2 minutes ago        Running             coredns                      0                   df98d8485de6e
8e24274b95e88       k8s.gcr.io/kube-addon-manager@sha256:3e315022a842d782a28e729720f21091dde21f1efea28868d65ec595ad871616                                    2 minutes ago        Running             kube-addon-manager           0                   b7916f683736f
b2ec5b2240d80       b2756210eeabf                                                                                                                            2 minutes ago        Running             etcd                         0                   d0b78144124d3
1b92ed1e3bd99       c2c9a0406787c                                                                                                                            2 minutes ago        Running             kube-apiserver               0                   26908ce311dd1
58cc148f9efd1       6e4bffa46d70b                                                                                                                            2 minutes ago        Running             kube-controller-manager      0                   b8d58d5dfc64a
368c642012b8e       ebac1ae204a2c                                                                                                                            2 minutes ago        Running             kube-scheduler               0                   8d4f699e03539

==> coredns [5ab7cf59d077] <==
2019-10-30T18:21:54.230Z [INFO] plugin/ready: Still waiting on: "kubernetes"
.:53
2019-10-30T18:21:55.946Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
2019-10-30T18:21:55.946Z [INFO] CoreDNS-1.6.2
2019-10-30T18:21:55.946Z [INFO] linux/amd64, go1.12.8, 795a3eb
CoreDNS-1.6.2
linux/amd64, go1.12.8, 795a3eb
2019-10-30T18:22:04.230Z [INFO] plugin/ready: Still waiting on: "kubernetes"
2019-10-30T18:22:14.231Z [INFO] plugin/ready: Still waiting on: "kubernetes"
I1030 18:22:20.946871       1 trace.go:82] Trace[244345545]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-10-30 18:21:50.946433849 +0000 UTC m=+0.020194602) (total time: 30.000330168s):
Trace[244345545]: [30.000330168s] [30.000330168s] END
E1030 18:22:20.946968       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1030 18:22:20.946968       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1030 18:22:20.946968       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1030 18:22:20.946968       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I1030 18:22:20.946790       1 trace.go:82] Trace[1437822607]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-10-30 18:21:50.94610846 +0000 UTC m=+0.019869220) (total time: 30.000654841s):
Trace[1437822607]: [30.000654841s] [30.000654841s] END
E1030 18:22:20.946986       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1030 18:22:20.946986       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1030 18:22:20.947267       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1030 18:22:20.946986       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I1030 18:22:20.947258       1 trace.go:82] Trace[103988784]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-10-30 18:21:50.946002038 +0000 UTC m=+0.019762828) (total time: 30.001244632s):
Trace[103988784]: [30.001244632s] [30.001244632s] END
E1030 18:22:20.947267       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1030 18:22:20.947267       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1030 18:22:20.947267       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout

==> coredns [83591015c653] <==
2019-10-30T18:21:54.977Z [INFO] plugin/ready: Still waiting on: "kubernetes"
.:53
2019-10-30T18:21:56.042Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
2019-10-30T18:21:56.042Z [INFO] CoreDNS-1.6.2
2019-10-30T18:21:56.042Z [INFO] linux/amd64, go1.12.8, 795a3eb
CoreDNS-1.6.2
linux/amd64, go1.12.8, 795a3eb
2019-10-30T18:22:04.977Z [INFO] plugin/ready: Still waiting on: "kubernetes"
2019-10-30T18:22:14.977Z [INFO] plugin/ready: Still waiting on: "kubernetes"
E1030 18:22:21.042383       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1030 18:22:21.042484       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1030 18:22:21.044641       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I1030 18:22:21.042341       1 trace.go:82] Trace[915532309]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-10-30 18:21:51.041926778 +0000 UTC m=+0.040815603) (total time: 30.000374543s):
Trace[915532309]: [30.000374543s] [30.000374543s] END
E1030 18:22:21.042383       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1030 18:22:21.042383       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1030 18:22:21.042383       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I1030 18:22:21.042477       1 trace.go:82] Trace[355497315]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-10-30 18:21:51.041516234 +0000 UTC m=+0.040405089) (total time: 30.000951809s):
Trace[355497315]: [30.000951809s] [30.000951809s] END
E1030 18:22:21.042484       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1030 18:22:21.042484       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1030 18:22:21.042484       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I1030 18:22:21.044605       1 trace.go:82] Trace[1053335357]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-10-30 18:21:51.044230141 +0000 UTC m=+0.043118999) (total time: 30.000358959s):
Trace[1053335357]: [30.000358959s] [30.000358959s] END
E1030 18:22:21.044641       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1030 18:22:21.044641       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1030 18:22:21.044641       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout

==> dmesg <==
[Oct30 18:19] ERROR: earlyprintk= earlyser already used
[  +0.000000] You have booted with nomodeset. This means your GPU drivers are DISABLED
[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
[  +0.157194] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xC0, should be 0x1D (20180810/tbprint-177)
[ +18.414127] ACPI Error: Could not enable RealTimeClock event (20180810/evxfevnt-184)
[  +0.000003] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20180810/evxface-620)
[  +0.007737] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[  +3.902259] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
[  +0.008103] systemd-fstab-generator[1083]: Ignoring "noauto" for root device
[  +0.001922] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
[  +0.000002] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[  +1.201610] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[  +0.576846] vboxguest: loading out-of-tree module taints kernel.
[  +0.004019] vboxguest: PCI device not found, probably running on physical hardware.
[  +1.943846] systemd-fstab-generator[1880]: Ignoring "noauto" for root device
[Oct30 18:21] systemd-fstab-generator[2558]: Ignoring "noauto" for root device
[ +14.673859] systemd-fstab-generator[2950]: Ignoring "noauto" for root device
[ +11.232481] kauditd_printk_skb: 62 callbacks suppressed
[ +18.193808] kauditd_printk_skb: 20 callbacks suppressed
[  +3.033504] NFSD: Unable to end grace period: -110
[Oct30 18:22] kauditd_printk_skb: 92 callbacks suppressed
[  +8.029280] kauditd_printk_skb: 8 callbacks suppressed
[Oct30 18:23] kauditd_printk_skb: 2 callbacks suppressed

==> kernel <==
 18:24:12 up 4 min,  0 users,  load average: 3.94, 1.88, 0.74
Linux minikube 4.19.76 #1 SMP Tue Oct 29 14:56:42 PDT 2019 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2019.02.6"

==> kube-addon-manager [8e24274b95e8] <==
clusterrole.rbac.authorization.k8s.io/cr-logviewer unchanged
clusterrolebinding.rbac.authorization.k8s.io/crb-logviewer unchanged
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io unchanged
deployment.apps/metrics-server unchanged
service/metrics-server unchanged
deployment.apps/registry-creds unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-10-30T18:24:02+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-10-30T18:24:06+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
replicationcontroller/elasticsearch-logging unchanged
service/elasticsearch-logging unchanged
configmap/fluentd-es-config unchanged
replicationcontroller/fluentd-es unchanged
deployment.apps/nginx-ingress-controller unchanged
serviceaccount/nginx-ingress unchanged
clusterrole.rbac.authorization.k8s.io/system:nginx-ingress unchanged
role.rbac.authorization.k8s.io/system::nginx-ingress-role unchanged
replicationcontroller/kibana-logging unchanged
service/kibana-logging unchanged
service/logviewer unchanged
deployment.apps/logviewer unchanged
serviceaccount/sa-logviewer unchanged
clusterrole.rbac.authorization.k8s.io/cr-logviewer unchanged
clusterrolebinding.rbac.authorization.k8s.io/crb-logviewer unchanged
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io unchanged
deployment.apps/metrics-server unchanged
service/metrics-server unchanged
deployment.apps/registry-creds unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-10-30T18:24:08+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-10-30T18:24:11+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
replicationcontroller/elasticsearch-logging unchanged
service/elasticsearch-logging unchanged
configmap/fluentd-es-config unchanged
replicationcontroller/fluentd-es unchanged
deployment.apps/nginx-ingress-controller unchanged
serviceaccount/nginx-ingress unchanged
clusterrole.rbac.authorization.k8s.io/system:nginx-ingress unchanged
role.rbac.authorization.k8s.io/system::nginx-ingress-role unchanged
replicationcontroller/kibana-logging unchanged
service/kibana-logging unchanged
service/logviewer unchanged
deployment.apps/logviewer unchanged
serviceaccount/sa-logviewer unchanged
clusterrole.rbac.authorization.k8s.io/cr-logviewer unchanged
clusterrolebinding.rbac.authorization.k8s.io/crb-logviewer unchanged
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io unchanged
deployment.apps/metrics-server unchanged
service/metrics-server unchanged
deployment.apps/registry-creds unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-10-30T18:24:12+00:00 ==

==> kube-apiserver [1b92ed1e3bd9] <==
W1030 18:21:35.663402       1 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W1030 18:21:35.689683       1 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W1030 18:21:35.729062       1 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W1030 18:21:35.729168       1 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I1030 18:21:35.753588       1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I1030 18:21:35.753642       1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I1030 18:21:35.756243       1 client.go:361] parsed scheme: "endpoint"
I1030 18:21:35.756317       1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]
I1030 18:21:35.770078       1 client.go:361] parsed scheme: "endpoint"
I1030 18:21:35.770133       1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]
I1030 18:21:38.367968       1 secure_serving.go:123] Serving securely on [::]:8443
I1030 18:21:38.368049       1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I1030 18:21:38.368064       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I1030 18:21:38.369159       1 available_controller.go:383] Starting AvailableConditionController
I1030 18:21:38.369301       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I1030 18:21:38.375408       1 crd_finalizer.go:274] Starting CRDFinalizer
I1030 18:21:38.375462       1 autoregister_controller.go:140] Starting autoregister controller
I1030 18:21:38.375469       1 cache.go:32] Waiting for caches to sync for autoregister controller
I1030 18:21:38.375483       1 controller.go:81] Starting OpenAPI AggregationController
I1030 18:21:38.421367       1 controller.go:85] Starting OpenAPI controller
I1030 18:21:38.421446       1 customresource_discovery_controller.go:208] Starting DiscoveryController
I1030 18:21:38.421467       1 naming_controller.go:288] Starting NamingConditionController
I1030 18:21:38.421502       1 establishing_controller.go:73] Starting EstablishingController
I1030 18:21:38.421540       1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
I1030 18:21:38.421552       1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
I1030 18:21:38.421573       1 crdregistration_controller.go:111] Starting crd-autoregister controller
I1030 18:21:38.421579       1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
E1030 18:21:38.444756       1 controller.go:154] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.64.55, ResourceVersion: 0, AdditionalErrorMsg:
I1030 18:21:38.489845       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1030 18:21:38.489902       1 cache.go:39] Caches are synced for autoregister controller
I1030 18:21:38.550422       1 shared_informer.go:204] Caches are synced for crd-autoregister
I1030 18:21:38.569682       1 cache.go:39] Caches are synced for AvailableConditionController controller
I1030 18:21:39.368221       1 controller.go:107] OpenAPI AggregationController: Processing item
I1030 18:21:39.368241       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I1030 18:21:39.368262       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1030 18:21:39.380569       1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I1030 18:21:39.393088       1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I1030 18:21:39.393125       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I1030 18:21:40.023351       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1030 18:21:40.095638       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W1030 18:21:40.274558       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.64.55]
I1030 18:21:40.275125       1 controller.go:606] quota admission added evaluator for: endpoints
I1030 18:21:41.528220       1 controller.go:606] quota admission added evaluator for: serviceaccounts
I1030 18:21:41.548252       1 controller.go:606] quota admission added evaluator for: deployments.apps
I1030 18:21:41.856573       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I1030 18:21:41.894985       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I1030 18:21:49.280436       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I1030 18:21:49.318803       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
I1030 18:21:49.459289       1 controller.go:606] quota admission added evaluator for: replicasets.apps
I1030 18:21:54.095421       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
W1030 18:21:54.095481       1 handler_proxy.go:99] no RequestInfo found in the context
E1030 18:21:54.095556       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1030 18:21:54.095564       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I1030 18:22:50.629281       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
E1030 18:22:50.648380       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
I1030 18:22:50.648454       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I1030 18:23:50.649087       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
E1030 18:23:50.673723       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
I1030 18:23:50.673744       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.

==> kube-controller-manager [58cc148f9efd] <==
I1030 18:21:49.230633       1 shared_informer.go:197] Waiting for caches to sync for garbage collector
W1030 18:21:49.238738       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I1030 18:21:49.273915       1 shared_informer.go:204] Caches are synced for daemon sets
I1030 18:21:49.277015       1 shared_informer.go:204] Caches are synced for TTL
I1030 18:21:49.278821       1 shared_informer.go:204] Caches are synced for HPA
I1030 18:21:49.278933       1 shared_informer.go:204] Caches are synced for taint
I1030 18:21:49.278974       1 node_lifecycle_controller.go:1208] Initializing eviction metric for zone:
I1030 18:21:49.279452       1 taint_manager.go:186] Starting NoExecuteTaintManager
I1030 18:21:49.279623       1 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"9cef9d0b-9486-4794-9f9e-dfa88b57320b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
W1030 18:21:49.281806       1 node_lifecycle_controller.go:903] Missing timestamp for Node minikube. Assuming now as a timestamp.
I1030 18:21:49.283721       1 node_lifecycle_controller.go:1108] Controller detected that zone  is now in state Normal.
I1030 18:21:49.288128       1 shared_informer.go:204] Caches are synced for endpoint
I1030 18:21:49.297608       1 event.go:255] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"c244450f-3600-44b1-ba96-59c71020d09e", APIVersion:"apps/v1", ResourceVersion:"186", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-lb9kg
I1030 18:21:49.316160       1 shared_informer.go:204] Caches are synced for ReplicationController
I1030 18:21:49.324467       1 shared_informer.go:204] Caches are synced for certificate
I1030 18:21:49.325340       1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator
I1030 18:21:49.326124       1 shared_informer.go:204] Caches are synced for GC
I1030 18:21:49.326645       1 shared_informer.go:204] Caches are synced for ReplicaSet
I1030 18:21:49.327202       1 shared_informer.go:204] Caches are synced for bootstrap_signer
I1030 18:21:49.327674       1 shared_informer.go:204] Caches are synced for certificate
I1030 18:21:49.349827       1 log.go:172] [INFO] signed certificate with serial number 405968615559783219729652983926054762429509778031
I1030 18:21:49.455717       1 shared_informer.go:204] Caches are synced for deployment
I1030 18:21:49.465995       1 shared_informer.go:204] Caches are synced for disruption
I1030 18:21:49.466049       1 disruption.go:341] Sending events to api server.
I1030 18:21:49.467534       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"a8049939-f35c-4891-8536-143d90baa151", APIVersion:"apps/v1", ResourceVersion:"178", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-5644d7b6d9 to 2
I1030 18:21:49.488814       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"c9c15352-8a6e-4568-a9bd-078da72ec873", APIVersion:"apps/v1", ResourceVersion:"329", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-wdf6m
I1030 18:21:49.512683       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"c9c15352-8a6e-4568-a9bd-078da72ec873", APIVersion:"apps/v1", ResourceVersion:"329", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-gz7sg
I1030 18:21:49.525121       1 shared_informer.go:204] Caches are synced for PVC protection
I1030 18:21:49.574509       1 shared_informer.go:204] Caches are synced for stateful set
I1030 18:21:49.626226       1 shared_informer.go:204] Caches are synced for expand
I1030 18:21:49.675900       1 shared_informer.go:204] Caches are synced for PV protection
I1030 18:21:49.676438       1 shared_informer.go:204] Caches are synced for job
I1030 18:21:49.677166       1 shared_informer.go:204] Caches are synced for attach detach
I1030 18:21:49.725782       1 shared_informer.go:204] Caches are synced for persistent volume
I1030 18:21:49.739755       1 shared_informer.go:204] Caches are synced for resource quota
I1030 18:21:49.789214       1 shared_informer.go:204] Caches are synced for namespace
I1030 18:21:49.824192       1 shared_informer.go:204] Caches are synced for resource quota
I1030 18:21:49.830903       1 shared_informer.go:204] Caches are synced for garbage collector
I1030 18:21:49.832084       1 shared_informer.go:204] Caches are synced for garbage collector
I1030 18:21:49.832124       1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1030 18:21:49.849831       1 shared_informer.go:204] Caches are synced for service account
I1030 18:21:51.380112       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"kube-system", Name:"elasticsearch-logging", UID:"9d492ffe-63a1-4130-b36d-ea57d13a87c5", APIVersion:"v1", ResourceVersion:"381", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: elasticsearch-logging-b9j9d
I1030 18:21:51.480274       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"kube-system", Name:"fluentd-es", UID:"686c57f2-ad47-45b4-945f-bf91cdbd4992", APIVersion:"v1", ResourceVersion:"393", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: fluentd-es-mbn5r
I1030 18:21:51.530398       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"nginx-ingress-controller", UID:"6e833e8b-1cad-474a-98a9-b16e50a5cbe8", APIVersion:"apps/v1", ResourceVersion:"400", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-ingress-controller-6fc5bcc8c9 to 1
I1030 18:21:51.541291       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"nginx-ingress-controller-6fc5bcc8c9", UID:"6f199f61-0018-4d74-9d5e-2709259fb5b4", APIVersion:"apps/v1", ResourceVersion:"401", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "nginx-ingress-controller-6fc5bcc8c9-" is forbidden: error looking up service account kube-system/nginx-ingress: serviceaccount "nginx-ingress" not found
E1030 18:21:51.552954       1 replica_set.go:450] Sync "kube-system/nginx-ingress-controller-6fc5bcc8c9" failed with pods "nginx-ingress-controller-6fc5bcc8c9-" is forbidden: error looking up service account kube-system/nginx-ingress: serviceaccount "nginx-ingress" not found
E1030 18:21:51.558486       1 replica_set.go:450] Sync "kube-system/nginx-ingress-controller-6fc5bcc8c9" failed with pods "nginx-ingress-controller-6fc5bcc8c9-" is forbidden: error looking up service account kube-system/nginx-ingress: serviceaccount "nginx-ingress" not found
I1030 18:21:51.559430       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"nginx-ingress-controller-6fc5bcc8c9", UID:"6f199f61-0018-4d74-9d5e-2709259fb5b4", APIVersion:"apps/v1", ResourceVersion:"404", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "nginx-ingress-controller-6fc5bcc8c9-" is forbidden: error looking up service account kube-system/nginx-ingress: serviceaccount "nginx-ingress" not found
E1030 18:21:51.570753       1 replica_set.go:450] Sync "kube-system/nginx-ingress-controller-6fc5bcc8c9" failed with pods "nginx-ingress-controller-6fc5bcc8c9-" is forbidden: error looking up service account kube-system/nginx-ingress: serviceaccount "nginx-ingress" not found
I1030 18:21:51.570869       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"nginx-ingress-controller-6fc5bcc8c9", UID:"6f199f61-0018-4d74-9d5e-2709259fb5b4", APIVersion:"apps/v1", ResourceVersion:"404", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "nginx-ingress-controller-6fc5bcc8c9-" is forbidden: error looking up service account kube-system/nginx-ingress: serviceaccount "nginx-ingress" not found
I1030 18:21:51.599562       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"nginx-ingress-controller-6fc5bcc8c9", UID:"6f199f61-0018-4d74-9d5e-2709259fb5b4", APIVersion:"apps/v1", ResourceVersion:"404", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-ingress-controller-6fc5bcc8c9-gjqgr
I1030 18:21:51.653794       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"kube-system", Name:"kibana-logging", UID:"1851b089-6fe2-4e04-acf1-f9b94098524e", APIVersion:"v1", ResourceVersion:"423", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kibana-logging-cgq4f
I1030 18:21:51.855947       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"logviewer", UID:"d10c5683-6ce0-4b15-8074-fcce67b34115", APIVersion:"apps/v1", ResourceVersion:"439", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set logviewer-5594c699dd to 1
I1030 18:21:52.328981       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"metrics-server", UID:"fa2b9de0-be3e-4c7c-af85-b001ffe61723", APIVersion:"apps/v1", ResourceVersion:"452", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set metrics-server-587f876775 to 1
I1030 18:21:52.403210       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-587f876775", UID:"5cfad804-95a0-4a37-9af7-6b7199246d31", APIVersion:"apps/v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-587f876775-cv5kh
I1030 18:21:52.856879       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"registry-creds", UID:"a97fe25c-5e01-476a-8e4c-f6c8a655c3d3", APIVersion:"apps/v1", ResourceVersion:"472", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set registry-creds-69f86f67f7 to 1
I1030 18:21:52.903921       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"registry-creds-69f86f67f7", UID:"98a7c17f-bc11-495f-9c93-03bcca36c5e1", APIVersion:"apps/v1", ResourceVersion:"473", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: registry-creds-69f86f67f7-842pz
I1030 18:21:52.986523       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"logviewer-5594c699dd", UID:"b66a73fe-2884-47d4-95fa-9d090f7aee9e", APIVersion:"apps/v1", ResourceVersion:"441", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: logviewer-5594c699dd-l5sdg
E1030 18:22:20.075576       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W1030 18:22:21.835303       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]

==> kube-proxy [aebf56be99bf] <==
W1030 18:21:51.469052       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
I1030 18:21:51.521615       1 node.go:135] Successfully retrieved node IP: 192.168.64.55
I1030 18:21:51.522772       1 server_others.go:149] Using iptables Proxier.
W1030 18:21:51.523387       1 proxier.go:287] clusterCIDR not specified, unable to distinguish between internal and external traffic
I1030 18:21:51.524063       1 server.go:529] Version: v1.16.2
I1030 18:21:51.530964       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I1030 18:21:51.531301       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1030 18:21:51.532957       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I1030 18:21:51.533229       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I1030 18:21:51.536180       1 config.go:313] Starting service config controller
I1030 18:21:51.536227       1 shared_informer.go:197] Waiting for caches to sync for service config
I1030 18:21:51.537609       1 config.go:131] Starting endpoints config controller
I1030 18:21:51.537678       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I1030 18:21:51.636887       1 shared_informer.go:204] Caches are synced for service config
I1030 18:21:51.640285       1 shared_informer.go:204] Caches are synced for endpoints config

==> kube-scheduler [368c642012b8] <==
I1030 18:21:34.037923       1 serving.go:319] Generated self-signed cert in-memory
W1030 18:21:38.468435       1 authentication.go:262] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1030 18:21:38.468524       1 authentication.go:199] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1030 18:21:38.468537       1 authentication.go:200] Continuing without authentication configuration. This may treat all requests as anonymous.
W1030 18:21:38.468542       1 authentication.go:201] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1030 18:21:38.491657       1 server.go:143] Version: v1.16.2
I1030 18:21:38.493640       1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W1030 18:21:38.496389       1 authorization.go:47] Authorization is disabled
W1030 18:21:38.496424       1 authentication.go:79] Authentication is disabled
I1030 18:21:38.496433       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I1030 18:21:38.497693       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
E1030 18:21:38.579207       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1030 18:21:38.579255       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1030 18:21:38.579650       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1030 18:21:38.579895       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1030 18:21:38.580280       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1030 18:21:38.580454       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1030 18:21:38.580614       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1030 18:21:38.580727       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1030 18:21:38.581244       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1030 18:21:38.581613       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1030 18:21:38.582165       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1030 18:21:39.580520       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1030 18:21:39.585280       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1030 18:21:39.585290       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1030 18:21:39.585982       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1030 18:21:39.589552       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1030 18:21:39.592590       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1030 18:21:39.593779       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1030 18:21:39.596088       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1030 18:21:39.597858       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1030 18:21:39.602377       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1030 18:21:39.605786       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
I1030 18:21:40.699098       1 leaderelection.go:241] attempting to acquire leader lease  kube-system/kube-scheduler...
I1030 18:21:40.709267       1 leaderelection.go:251] successfully acquired lease kube-system/kube-scheduler

==> kubelet <==
-- Logs begin at Wed 2019-10-30 18:19:49 UTC, end at Wed 2019-10-30 18:24:13 UTC. --
Oct 30 18:21:51 minikube kubelet[2996]: I1030 18:21:51.813916    2996 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-c7427" (UniqueName: "kubernetes.io/secret/3cf43430-3d6d-4746-b7af-dc491a95106c-default-token-c7427") pod "kibana-logging-cgq4f" (UID: "3cf43430-3d6d-4746-b7af-dc491a95106c")
Oct 30 18:21:52 minikube kubelet[2996]: I1030 18:21:52.553361    2996 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-c7427" (UniqueName: "kubernetes.io/secret/50803786-35dd-427b-abfd-efbdba26500b-default-token-c7427") pod "metrics-server-587f876775-cv5kh" (UID: "50803786-35dd-427b-abfd-efbdba26500b")
Oct 30 18:21:53 minikube kubelet[2996]: I1030 18:21:53.067746    2996 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-c7427" (UniqueName: "kubernetes.io/secret/efb28b39-0968-4f5e-9cb3-bb30112d402f-default-token-c7427") pod "registry-creds-69f86f67f7-842pz" (UID: "efb28b39-0968-4f5e-9cb3-bb30112d402f")
Oct 30 18:21:53 minikube kubelet[2996]: I1030 18:21:53.067783    2996 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/efb28b39-0968-4f5e-9cb3-bb30112d402f-gcr-creds") pod "registry-creds-69f86f67f7-842pz" (UID: "efb28b39-0968-4f5e-9cb3-bb30112d402f")
Oct 30 18:21:53 minikube kubelet[2996]: I1030 18:21:53.168928    2996 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "logs-containers-mnt-sda1" (UniqueName: "kubernetes.io/host-path/6c5b220e-b21c-4690-b614-0eca4e9b64b9-logs-containers-mnt-sda1") pod "logviewer-5594c699dd-l5sdg" (UID: "6c5b220e-b21c-4690-b614-0eca4e9b64b9")
Oct 30 18:21:53 minikube kubelet[2996]: I1030 18:21:53.169290    2996 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "sa-logviewer-token-v97jp" (UniqueName: "kubernetes.io/secret/6c5b220e-b21c-4690-b614-0eca4e9b64b9-sa-logviewer-token-v97jp") pod "logviewer-5594c699dd-l5sdg" (UID: "6c5b220e-b21c-4690-b614-0eca4e9b64b9")
Oct 30 18:21:53 minikube kubelet[2996]: I1030 18:21:53.169354    2996 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "logs" (UniqueName: "kubernetes.io/host-path/6c5b220e-b21c-4690-b614-0eca4e9b64b9-logs") pod "logviewer-5594c699dd-l5sdg" (UID: "6c5b220e-b21c-4690-b614-0eca4e9b64b9")
Oct 30 18:21:53 minikube kubelet[2996]: I1030 18:21:53.169375    2996 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "logs-containers" (UniqueName: "kubernetes.io/host-path/6c5b220e-b21c-4690-b614-0eca4e9b64b9-logs-containers") pod "logviewer-5594c699dd-l5sdg" (UID: "6c5b220e-b21c-4690-b614-0eca4e9b64b9")
Oct 30 18:21:53 minikube kubelet[2996]: I1030 18:21:53.169398    2996 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "logs-pods" (UniqueName: "kubernetes.io/host-path/6c5b220e-b21c-4690-b614-0eca4e9b64b9-logs-pods") pod "logviewer-5594c699dd-l5sdg" (UID: "6c5b220e-b21c-4690-b614-0eca4e9b64b9")
Oct 30 18:21:53 minikube kubelet[2996]: W1030 18:21:53.268210    2996 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/elasticsearch-logging-b9j9d through plugin: invalid network status for
Oct 30 18:21:53 minikube kubelet[2996]: I1030 18:21:53.370112    2996 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-rppnb" (UniqueName: "kubernetes.io/secret/ffefc30c-342b-441d-bb6b-65d4017b8796-storage-provisioner-token-rppnb") pod "storage-provisioner" (UID: "ffefc30c-342b-441d-bb6b-65d4017b8796")
Oct 30 18:21:53 minikube kubelet[2996]: I1030 18:21:53.370224    2996 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/ffefc30c-342b-441d-bb6b-65d4017b8796-tmp") pod "storage-provisioner" (UID: "ffefc30c-342b-441d-bb6b-65d4017b8796")
Oct 30 18:21:53 minikube kubelet[2996]: W1030 18:21:53.399008    2996 pod_container_deletor.go:75] Container "39756431f83a98682c682351be6356f2adabe7f554c9393871421ea3c4271721" not found in pod's containers
Oct 30 18:21:53 minikube kubelet[2996]: W1030 18:21:53.401306    2996 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/fluentd-es-mbn5r through plugin: invalid network status for
Oct 30 18:21:53 minikube kubelet[2996]: I1030 18:21:53.409204    2996 transport.go:132] certificate rotation detected, shutting down client connections to start using new credentials
Oct 30 18:21:53 minikube kubelet[2996]: W1030 18:21:53.409248    2996 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5644d7b6d9-wdf6m through plugin: invalid network status for
Oct 30 18:21:53 minikube kubelet[2996]: W1030 18:21:53.412009    2996 reflector.go:299] object-"kube-system"/"registry-creds-ecr": watch of *v1.Secret ended with: very short watch: object-"kube-system"/"registry-creds-ecr": Unexpected watch close - watch lasted less than a second and no items received
Oct 30 18:21:53 minikube kubelet[2996]: W1030 18:21:53.455628    2996 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5644d7b6d9-gz7sg through plugin: invalid network status for
Oct 30 18:21:53 minikube kubelet[2996]: W1030 18:21:53.462744    2996 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/elasticsearch-logging-b9j9d through plugin: invalid network status for
Oct 30 18:21:53 minikube kubelet[2996]: W1030 18:21:53.463503    2996 pod_container_deletor.go:75] Container "985bcc81978f2178419b9b828948d78b16f09e50e72ae61d55fc0aad88c24591" not found in pod's containers
Oct 30 18:21:53 minikube kubelet[2996]: W1030 18:21:53.610001    2996 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/kibana-logging-cgq4f through plugin: invalid network status for
Oct 30 18:21:53 minikube kubelet[2996]: W1030 18:21:53.700651    2996 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/metrics-server-587f876775-cv5kh through plugin: invalid network status for
Oct 30 18:21:53 minikube kubelet[2996]: E1030 18:21:53.710167    2996 secret.go:198] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
Oct 30 18:21:53 minikube kubelet[2996]: E1030 18:21:53.710506    2996 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/secret/efb28b39-0968-4f5e-9cb3-bb30112d402f-gcr-creds\" (\"efb28b39-0968-4f5e-9cb3-bb30112d402f\")" failed. No retries permitted until 2019-10-30 18:21:54.210480574 +0000 UTC m=+30.885265682 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"gcr-creds\" (UniqueName: \"kubernetes.io/secret/efb28b39-0968-4f5e-9cb3-bb30112d402f-gcr-creds\") pod \"registry-creds-69f86f67f7-842pz\" (UID: \"efb28b39-0968-4f5e-9cb3-bb30112d402f\") : secret \"registry-creds-gcr\" not found"
Oct 30 18:21:53 minikube kubelet[2996]: W1030 18:21:53.722004    2996 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/nginx-ingress-controller-6fc5bcc8c9-gjqgr through plugin: invalid network status for
Oct 30 18:21:54 minikube kubelet[2996]: W1030 18:21:54.262552    2996 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/logviewer-5594c699dd-l5sdg through plugin: invalid network status for
Oct 30 18:21:54 minikube kubelet[2996]: E1030 18:21:54.284423    2996 secret.go:198] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
Oct 30 18:21:54 minikube kubelet[2996]: E1030 18:21:54.284853    2996 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/secret/efb28b39-0968-4f5e-9cb3-bb30112d402f-gcr-creds\" (\"efb28b39-0968-4f5e-9cb3-bb30112d402f\")" failed. No retries permitted until 2019-10-30 18:21:55.284819071 +0000 UTC m=+31.959604200 (durationBeforeRetry 1s). Error: "MountVolume.SetUp failed for volume \"gcr-creds\" (UniqueName: \"kubernetes.io/secret/efb28b39-0968-4f5e-9cb3-bb30112d402f-gcr-creds\") pod \"registry-creds-69f86f67f7-842pz\" (UID: \"efb28b39-0968-4f5e-9cb3-bb30112d402f\") : secret \"registry-creds-gcr\" not found"
Oct 30 18:21:54 minikube kubelet[2996]: W1030 18:21:54.486434    2996 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/elasticsearch-logging-b9j9d through plugin: invalid network status for
Oct 30 18:21:54 minikube kubelet[2996]: W1030 18:21:54.699240    2996 pod_container_deletor.go:75] Container "f526095bd82a039b865983ef7844d4acf53cb0a4bc8b41c57a7018e199c4aadf" not found in pod's containers
Oct 30 18:21:54 minikube kubelet[2996]: W1030 18:21:54.711812    2996 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/fluentd-es-mbn5r through plugin: invalid network status for
Oct 30 18:21:54 minikube kubelet[2996]: W1030 18:21:54.722702    2996 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/nginx-ingress-controller-6fc5bcc8c9-gjqgr through plugin: invalid network status for
Oct 30 18:21:54 minikube kubelet[2996]: W1030 18:21:54.726994    2996 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/kibana-logging-cgq4f through plugin: invalid network status for
Oct 30 18:21:54 minikube kubelet[2996]: W1030 18:21:54.733566    2996 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/logviewer-5594c699dd-l5sdg through plugin: invalid network status for
Oct 30 18:21:54 minikube kubelet[2996]: W1030 18:21:54.747897    2996 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/metrics-server-587f876775-cv5kh through plugin: invalid network status for
Oct 30 18:21:55 minikube kubelet[2996]: E1030 18:21:55.308654    2996 secret.go:198] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
Oct 30 18:21:55 minikube kubelet[2996]: E1030 18:21:55.308796    2996 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/secret/efb28b39-0968-4f5e-9cb3-bb30112d402f-gcr-creds\" (\"efb28b39-0968-4f5e-9cb3-bb30112d402f\")" failed. No retries permitted until 2019-10-30 18:21:57.308763141 +0000 UTC m=+33.983548272 (durationBeforeRetry 2s). Error: "MountVolume.SetUp failed for volume \"gcr-creds\" (UniqueName: \"kubernetes.io/secret/efb28b39-0968-4f5e-9cb3-bb30112d402f-gcr-creds\") pod \"registry-creds-69f86f67f7-842pz\" (UID: \"efb28b39-0968-4f5e-9cb3-bb30112d402f\") : secret \"registry-creds-gcr\" not found"
Oct 30 18:21:55 minikube kubelet[2996]: W1030 18:21:55.761855    2996 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/elasticsearch-logging-b9j9d through plugin: invalid network status for
Oct 30 18:21:57 minikube kubelet[2996]: E1030 18:21:57.321415    2996 secret.go:198] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
Oct 30 18:21:57 minikube kubelet[2996]: E1030 18:21:57.321591    2996 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/secret/efb28b39-0968-4f5e-9cb3-bb30112d402f-gcr-creds\" (\"efb28b39-0968-4f5e-9cb3-bb30112d402f\")" failed. No retries permitted until 2019-10-30 18:22:01.321565644 +0000 UTC m=+37.996350763 (durationBeforeRetry 4s). Error: "MountVolume.SetUp failed for volume \"gcr-creds\" (UniqueName: \"kubernetes.io/secret/efb28b39-0968-4f5e-9cb3-bb30112d402f-gcr-creds\") pod \"registry-creds-69f86f67f7-842pz\" (UID: \"efb28b39-0968-4f5e-9cb3-bb30112d402f\") : secret \"registry-creds-gcr\" not found"
Oct 30 18:22:01 minikube kubelet[2996]: E1030 18:22:01.339360    2996 secret.go:198] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
Oct 30 18:22:01 minikube kubelet[2996]: E1030 18:22:01.339435    2996 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/secret/efb28b39-0968-4f5e-9cb3-bb30112d402f-gcr-creds\" (\"efb28b39-0968-4f5e-9cb3-bb30112d402f\")" failed. No retries permitted until 2019-10-30 18:22:09.339416259 +0000 UTC m=+46.014201363 (durationBeforeRetry 8s). Error: "MountVolume.SetUp failed for volume \"gcr-creds\" (UniqueName: \"kubernetes.io/secret/efb28b39-0968-4f5e-9cb3-bb30112d402f-gcr-creds\") pod \"registry-creds-69f86f67f7-842pz\" (UID: \"efb28b39-0968-4f5e-9cb3-bb30112d402f\") : secret \"registry-creds-gcr\" not found"
Oct 30 18:22:05 minikube kubelet[2996]: W1030 18:22:05.884603    2996 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/fluentd-es-mbn5r through plugin: invalid network status for
Oct 30 18:22:09 minikube kubelet[2996]: E1030 18:22:09.382344    2996 secret.go:198] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
Oct 30 18:22:09 minikube kubelet[2996]: E1030 18:22:09.382457    2996 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/secret/efb28b39-0968-4f5e-9cb3-bb30112d402f-gcr-creds\" (\"efb28b39-0968-4f5e-9cb3-bb30112d402f\")" failed. No retries permitted until 2019-10-30 18:22:25.382436387 +0000 UTC m=+62.057221493 (durationBeforeRetry 16s). Error: "MountVolume.SetUp failed for volume \"gcr-creds\" (UniqueName: \"kubernetes.io/secret/efb28b39-0968-4f5e-9cb3-bb30112d402f-gcr-creds\") pod \"registry-creds-69f86f67f7-842pz\" (UID: \"efb28b39-0968-4f5e-9cb3-bb30112d402f\") : secret \"registry-creds-gcr\" not found"
Oct 30 18:22:25 minikube kubelet[2996]: E1030 18:22:25.481144    2996 secret.go:198] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
Oct 30 18:22:25 minikube kubelet[2996]: E1030 18:22:25.481240    2996 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/secret/efb28b39-0968-4f5e-9cb3-bb30112d402f-gcr-creds\" (\"efb28b39-0968-4f5e-9cb3-bb30112d402f\")" failed. No retries permitted until 2019-10-30 18:22:57.481213576 +0000 UTC m=+94.155998683 (durationBeforeRetry 32s). Error: "MountVolume.SetUp failed for volume \"gcr-creds\" (UniqueName: \"kubernetes.io/secret/efb28b39-0968-4f5e-9cb3-bb30112d402f-gcr-creds\") pod \"registry-creds-69f86f67f7-842pz\" (UID: \"efb28b39-0968-4f5e-9cb3-bb30112d402f\") : secret \"registry-creds-gcr\" not found"
Oct 30 18:22:43 minikube kubelet[2996]: W1030 18:22:43.347398    2996 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/kibana-logging-cgq4f through plugin: invalid network status for
Oct 30 18:22:44 minikube kubelet[2996]: W1030 18:22:44.496371    2996 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/kibana-logging-cgq4f through plugin: invalid network status for
Oct 30 18:22:48 minikube kubelet[2996]: W1030 18:22:48.553760    2996 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/metrics-server-587f876775-cv5kh through plugin: invalid network status for
Oct 30 18:22:57 minikube kubelet[2996]: E1030 18:22:57.483569    2996 secret.go:198] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
Oct 30 18:22:57 minikube kubelet[2996]: E1030 18:22:57.483707    2996 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/secret/efb28b39-0968-4f5e-9cb3-bb30112d402f-gcr-creds\" (\"efb28b39-0968-4f5e-9cb3-bb30112d402f\")" failed. No retries permitted until 2019-10-30 18:24:01.483678619 +0000 UTC m=+158.158463731 (durationBeforeRetry 1m4s). Error: "MountVolume.SetUp failed for volume \"gcr-creds\" (UniqueName: \"kubernetes.io/secret/efb28b39-0968-4f5e-9cb3-bb30112d402f-gcr-creds\") pod \"registry-creds-69f86f67f7-842pz\" (UID: \"efb28b39-0968-4f5e-9cb3-bb30112d402f\") : secret \"registry-creds-gcr\" not found"
Oct 30 18:23:12 minikube kubelet[2996]: W1030 18:23:12.993768    2996 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/nginx-ingress-controller-6fc5bcc8c9-gjqgr through plugin: invalid network status for
Oct 30 18:23:14 minikube kubelet[2996]: W1030 18:23:14.056267    2996 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/nginx-ingress-controller-6fc5bcc8c9-gjqgr through plugin: invalid network status for
Oct 30 18:23:15 minikube kubelet[2996]: W1030 18:23:15.078216    2996 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/logviewer-5594c699dd-l5sdg through plugin: invalid network status for
Oct 30 18:23:56 minikube kubelet[2996]: E1030 18:23:56.035052    2996 kubelet.go:1682] Unable to attach or mount volumes for pod "registry-creds-69f86f67f7-842pz_kube-system(efb28b39-0968-4f5e-9cb3-bb30112d402f)": unmounted volumes=[gcr-creds], unattached volumes=[gcr-creds default-token-c7427]: timed out waiting for the condition; skipping pod
Oct 30 18:23:56 minikube kubelet[2996]: E1030 18:23:56.035121    2996 pod_workers.go:191] Error syncing pod efb28b39-0968-4f5e-9cb3-bb30112d402f ("registry-creds-69f86f67f7-842pz_kube-system(efb28b39-0968-4f5e-9cb3-bb30112d402f)"), skipping: unmounted volumes=[gcr-creds], unattached volumes=[gcr-creds default-token-c7427]: timed out waiting for the condition
Oct 30 18:24:01 minikube kubelet[2996]: E1030 18:24:01.526323    2996 secret.go:198] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
Oct 30 18:24:01 minikube kubelet[2996]: E1030 18:24:01.526400    2996 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/secret/efb28b39-0968-4f5e-9cb3-bb30112d402f-gcr-creds\" (\"efb28b39-0968-4f5e-9cb3-bb30112d402f\")" failed. No retries permitted until 2019-10-30 18:26:03.52637805 +0000 UTC m=+280.201163156 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"gcr-creds\" (UniqueName: \"kubernetes.io/secret/efb28b39-0968-4f5e-9cb3-bb30112d402f-gcr-creds\") pod \"registry-creds-69f86f67f7-842pz\" (UID: \"efb28b39-0968-4f5e-9cb3-bb30112d402f\") : secret \"registry-creds-gcr\" not found"
Oct 30 18:24:09 minikube kubelet[2996]: W1030 18:24:09.409601    2996 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/elasticsearch-logging-b9j9d through plugin: invalid network status for

==> storage-provisioner [604d07ddd048] <==

The operating system version: macOS Mojave 10.14.6

medyagh commented 4 years ago

@gclawes

That is a bug you found! You are right, we need to mind the people on corporate networks!

I would be happy to review any PR that addresses this, either by providing a way for people to disable the check or by letting them supply a custom DNS server.

medyagh commented 4 years ago

@gclawes On your corporate setup, if we wanted to auto-detect the nameserver to use for the lookup, how would we do it? Could you kindly provide more info on how we could auto-detect this and use the corp-provided nameserver?

gclawes commented 4 years ago

What's the purpose of this check? To verify that DNS resolution is working?

I think the most platform-independent approach would be to do it in Go with net.LookupHost: https://golang.org/pkg/net/?m=all#hdr-Name_Resolution https://golang.org/pkg/net/?m=all#LookupHost
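
A minimal sketch of that idea (not minikube's actual code): Go's default resolver reads the system configuration (/etc/resolv.conf on Unix), so a lookup through it respects a corporate nameserver without hard-coding any servers:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Bound the check so a blackholed network fails fast instead of hanging.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// net.DefaultResolver consults the system resolver configuration,
	// so this uses whatever nameserver DHCP or the VPN handed out.
	addrs, err := net.DefaultResolver.LookupHost(ctx, "k8s.io")
	if err != nil {
		fmt.Println("DNS check failed:", err)
		return
	}
	fmt.Println("k8s.io resolves to:", addrs)
}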

tstromberg commented 4 years ago

The purpose is to warn users if the VM is unable to directly connect to the internet. An app they run in the VM won't be able to contact an external IP directly without going through a proxy server.

I'd be open to improving the appearance of the check, removing it, and/or adding a flag to skip it entirely. Thoughts?
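
One hedged sketch of the flag idea (the flag name and gating logic here are invented for illustration; this is not actual minikube behavior): when proxy environment variables are set, a direct-to-IP probe is expected to fail, so the check could be skipped or downgraded to informational:

package main

import (
	"flag"
	"fmt"
	"os"
)

// proxyConfigured reports whether any of the common proxy environment
// variables are set in the user's shell.
func proxyConfigured() bool {
	for _, k := range []string{"HTTP_PROXY", "http_proxy", "HTTPS_PROXY", "https_proxy"} {
		if os.Getenv(k) != "" {
			return true
		}
	}
	return false
}

func main() {
	// Hypothetical flag name, chosen for this sketch only.
	skip := flag.Bool("no-connectivity-check", false, "skip the direct internet probe")
	flag.Parse()

	if *skip || proxyConfigured() {
		fmt.Println("proxy detected or check disabled; skipping direct internet probe")
		return
	}
	fmt.Println("would run the external DNS/ICMP probe here")
}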

gclawes commented 4 years ago

It looks like minikube is getting its /etc/resolv.conf set to the gateway of the hypervisor's network (in this case hyperkit). I think having the check resolve k8s.io is fine, but it shouldn't be necessary to hard-code the DNS resolvers; nslookup k8s.io or dig k8s.io should be sufficient (see the sketch after the session output below).

Liveware-Problem :: ~ % minikube ssh
                         _             _
            _         _ ( )           ( )
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

$ cat /etc/resolv.conf
# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients directly to
# all known uplink DNS servers. This file lists all configured search domains.
#
# Third party programs must not access this file directly, but only through the
# symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a different way,
# replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.

nameserver 192.168.65.1
$ ip addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 62:13:6f:a2:5b:5d brd ff:ff:ff:ff:ff:ff
    inet 192.168.65.17/24 brd 192.168.65.255 scope global dynamic eth0
       valid_lft 85023sec preferred_lft 85023sec
    inet6 fe80::6013:6fff:fea2:5b5d/64 scope link
       valid_lft forever preferred_lft forever
Liveware-Problem :: ~ % ifconfig bridge100
bridge100: flags=8a63<UP,BROADCAST,SMART,RUNNING,ALLMULTI,SIMPLEX,MULTICAST> mtu 1500
    options=3<RXCSUM,TXCSUM>
    ether a6:5e:60:dd:ef:64
    inet 192.168.65.1 netmask 0xffffff00 broadcast 192.168.65.255
    inet6 fe80::44a:652b:7d69:92b7%bridge100 prefixlen 64 secured scopeid 0x12
    Configuration:
        id 0:0:0:0:0:0 priority 0 hellotime 0 fwddelay 0
        maxage 0 holdcnt 0 proto stp maxaddr 100 timeout 1200
        root id 0:0:0:0:0:0 priority 0 ifcost 0 port 0
        ipfilter disabled flags 0x2
    member: en11 flags=3<LEARNING,DISCOVER>
            ifmaxaddr 0 port 17 priority 0 path cost 0
    Address cache:
        62:13:6f:a2:5b:5d Vlan1 en11 1198 flags=0<>
    nd6 options=201<PERFORMNUD,DAD>
    media: autoselect
    status: active
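
As a sketch of that suggestion (illustrative only; in minikube the command would actually run over SSH inside the VM, and exec.Command stands in for that transport here), dropping the hard-coded server argument makes nslookup consult the VM's /etc/resolv.conf, i.e. the hyperkit gateway shown above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// No explicit server argument: nslookup falls back to the resolvers
	// listed in /etc/resolv.conf instead of 8.8.8.8 or 1.1.1.1.
	out, err := exec.Command("nslookup", "k8s.io").CombinedOutput()
	if err != nil {
		fmt.Printf("lookup via system resolver failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("system resolver works:\n%s", out)
}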
tstromberg commented 4 years ago

@gclawes - I encourage you to take a look at #5802 and see if it seems like more appropriate behavior.

PS - Thank you for opening this bug. I'm sure others have seen this unusual new behavior and just decided that it was annoying without letting us in on the issue.

tstromberg commented 4 years ago

Fixed in v1.5.2. Thank you for reporting this issue!

nomansadiq11 commented 4 years ago

I restarted my network adapter and it works.