kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

windows tunnel: error adding route: The object exists already. #8507

Closed · doublefx closed this issue 3 years ago

doublefx commented 4 years ago

Steps to reproduce the issue:

  1. minikube delete
  2. minikube start --addons ambassador
  3. wait until the install has completed
  4. minikube.exe tunnel --alsologtostderr
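
The failure reported below comes from the route ADD command that minikube tunnel runs on Windows (route ADD 10.96.0.0 MASK 255.240.0.0 172.30.159.127, visible in the log); "The object exists already" means the routing table already holds an entry for that destination, typically one left behind by an earlier tunnel run that was not cleaned up. A minimal workaround sketch, assuming a stale route really is the cause (the 10.96.0.0 destination is taken from the log below; run from an elevated PowerShell prompt):

      # Show any existing routing-table entry for the Kubernetes service CIDR:
      route print 10.96.0.0

      # If a stale entry is listed, delete it so minikube tunnel can add it again:
      route DELETE 10.96.0.0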

Full output of failed command:

I0617 21:36:02.725276 18768 mustload.go:64] Loading cluster: minikube
I0617 21:36:02.727274 18768 main.go:110] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state
I0617 21:36:03.226240 18768 main.go:110] libmachine: [stdout =====>] : Running
I0617 21:36:03.226883 18768 main.go:110] libmachine: [stderr =====>] :
I0617 21:36:03.226883 18768 host.go:65] Checking if "minikube" exists ...
I0617 21:36:03.227884 18768 api_server.go:145] Checking apiserver status ...
I0617 21:36:03.249918 18768 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0617 21:36:03.250914 18768 main.go:110] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state
I0617 21:36:03.790202 18768 main.go:110] libmachine: [stdout =====>] : Running
I0617 21:36:03.790202 18768 main.go:110] libmachine: [stderr =====>] :
I0617 21:36:03.791169 18768 main.go:110] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0]
I0617 21:36:04.473864 18768 main.go:110] libmachine: [stdout =====>] : 172.30.159.127
I0617 21:36:04.474869 18768 main.go:110] libmachine: [stderr =====>] :
I0617 21:36:04.475866 18768 sshutil.go:44] new ssh client: &{IP:172.30.159.127 Port:22 SSHKeyPath:C:\Users\doublefx.minikube\machines\minikube\id_rsa Username:docker}
I0617 21:36:04.596486 18768 ssh_runner.go:188] Completed: sudo pgrep -xnf kube-apiserver.minikube.: (1.3455722s)
I0617 21:36:04.621490 18768 ssh_runner.go:148] Run: sudo egrep ^[0-9]+:freezer: /proc/4014/cgroup
I0617 21:36:04.627525 18768 api_server.go:161] apiserver freezer: "9:freezer:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f362b6d7d89dab1420a57093421b29a.slice/docker-ae0c1011cdb729d4cf9446cf65a586fef062b84531dfba37cd33f0fa84e2dd89.scope"
I0617 21:36:04.655503 18768 ssh_runner.go:148] Run: sudo cat /sys/fs/cgroup/freezer/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f362b6d7d89dab1420a57093421b29a.slice/docker-ae0c1011cdb729d4cf9446cf65a586fef062b84531dfba37cd33f0fa84e2dd89.scope/freezer.state
I0617 21:36:04.664490 18768 api_server.go:183] freezer state: "THAWED"
I0617 21:36:04.664490 18768 api_server.go:193] Checking apiserver healthz at https://172.30.159.127:8443/healthz ...
I0617 21:36:04.675498 18768 api_server.go:213] https://172.30.159.127:8443/healthz returned 200: ok
I0617 21:36:04.677492 18768 tunnel.go:56] Checking for tunnels to cleanup...
I0617 21:36:04.690525 18768 host.go:65] Checking if "minikube" exists ...
I0617 21:36:04.691489 18768 main.go:110] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state
I0617 21:36:05.189036 18768 main.go:110] libmachine: [stdout =====>] : Running
I0617 21:36:05.189668 18768 main.go:110] libmachine: [stderr =====>] :
I0617 21:36:05.190670 18768 main.go:110] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state
I0617 21:36:05.708256 18768 main.go:110] libmachine: [stdout =====>] : Running
I0617 21:36:05.708256 18768 main.go:110] libmachine: [stderr =====>] :
I0617 21:36:05.709256 18768 main.go:110] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0]
I0617 21:36:06.369761 18768 main.go:110] libmachine: [stdout =====>] : 172.30.159.127
I0617 21:36:06.369761 18768 main.go:110] libmachine: [stderr =====>] :
I0617 21:36:06.370752 18768 tunnel_manager.go:71] Setting up tunnel...
I0617 21:36:06.370752 18768 tunnel_manager.go:81] Started minikube tunnel.
I0617 21:36:11.376932 18768 host.go:65] Checking if "minikube" exists ...
I0617 21:36:11.377880 18768 main.go:110] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state
I0617 21:36:11.896197 18768 main.go:110] libmachine: [stdout =====>] : Running
I0617 21:36:11.896197 18768 main.go:110] libmachine: [stderr =====>] :
I0617 21:36:12.435075 18768 route_windows.go:47] Adding route for CIDR 10.96.0.0/12 to gateway 172.30.159.127
I0617 21:36:12.448977 18768 route_windows.go:49] About to run command: [route ADD 10.96.0.0 MASK 255.240.0.0 172.30.159.127]
Status:
    machine: minikube
    pid: 18768
    route: 10.96.0.0/12 -> 172.30.159.127
    minikube: Running
    services: []
    errors:
        minikube: no errors
        router: error adding route: L'ajout de l'itinéraire a échoué : L'objet existe déjà.
, 3
        loadbalancer emulator: no errors
I0617 21:36:17.406065 18768 loadbalancer_patcher.go:79] ambassador is type LoadBalancer.

Note: "L'ajout de l'itin�raire a �chou��: L'objet existe d�j�" means: Adding the route has failed: The oject exists already.

Optional: Full output of minikube logs command:

* ==> Docker <== * -- Logs begin at Wed 2020-06-17 20:19:03 UTC, end at Wed 2020-06-17 21:19:01 UTC. -- * Jun 17 20:19:28 minikube dockerd[2753]: time="2020-06-17T20:19:28.888707503Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1 * Jun 17 20:19:28 minikube dockerd[2753]: time="2020-06-17T20:19:28.888714303Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1 * Jun 17 20:19:28 minikube dockerd[2753]: time="2020-06-17T20:19:28.888720803Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1 * Jun 17 20:19:28 minikube dockerd[2753]: time="2020-06-17T20:19:28.888727803Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1 * Jun 17 20:19:28 minikube dockerd[2753]: time="2020-06-17T20:19:28.888734603Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1 * Jun 17 20:19:28 minikube dockerd[2753]: time="2020-06-17T20:19:28.888741303Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1 * Jun 17 20:19:28 minikube dockerd[2753]: time="2020-06-17T20:19:28.888747903Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1 * Jun 17 20:19:28 minikube dockerd[2753]: time="2020-06-17T20:19:28.888769703Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1 * Jun 17 20:19:28 minikube dockerd[2753]: time="2020-06-17T20:19:28.888777903Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1 * Jun 17 20:19:28 minikube dockerd[2753]: time="2020-06-17T20:19:28.888784703Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1 * Jun 17 20:19:28 minikube dockerd[2753]: time="2020-06-17T20:19:28.888792403Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1 * Jun 17 20:19:28 minikube dockerd[2753]: time="2020-06-17T20:19:28.888950703Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock" * Jun 17 20:19:28 minikube dockerd[2753]: time="2020-06-17T20:19:28.889041503Z" level=info msg=serving... 
address="/var/run/docker/containerd/containerd.sock" * Jun 17 20:19:28 minikube dockerd[2753]: time="2020-06-17T20:19:28.889064003Z" level=info msg="containerd successfully booted in 0.003117s" * Jun 17 20:19:28 minikube dockerd[2753]: time="2020-06-17T20:19:28.898678803Z" level=info msg="parsed scheme: \"unix\"" module=grpc * Jun 17 20:19:28 minikube dockerd[2753]: time="2020-06-17T20:19:28.898717803Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc * Jun 17 20:19:28 minikube dockerd[2753]: time="2020-06-17T20:19:28.898738503Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc * Jun 17 20:19:28 minikube dockerd[2753]: time="2020-06-17T20:19:28.898750203Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc * Jun 17 20:19:28 minikube dockerd[2753]: time="2020-06-17T20:19:28.899724203Z" level=info msg="parsed scheme: \"unix\"" module=grpc * Jun 17 20:19:28 minikube dockerd[2753]: time="2020-06-17T20:19:28.899750103Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc * Jun 17 20:19:28 minikube dockerd[2753]: time="2020-06-17T20:19:28.899786803Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc * Jun 17 20:19:28 minikube dockerd[2753]: time="2020-06-17T20:19:28.899797703Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc * Jun 17 20:19:29 minikube dockerd[2753]: time="2020-06-17T20:19:29.585744103Z" level=warning msg="Your kernel does not support cgroup blkio weight" * Jun 17 20:19:29 minikube dockerd[2753]: time="2020-06-17T20:19:29.585794103Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" * Jun 17 20:19:29 minikube dockerd[2753]: time="2020-06-17T20:19:29.585801103Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device" * Jun 17 20:19:29 minikube dockerd[2753]: time="2020-06-17T20:19:29.585805103Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device" * Jun 17 20:19:29 minikube dockerd[2753]: time="2020-06-17T20:19:29.585809003Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device" * Jun 17 20:19:29 minikube dockerd[2753]: time="2020-06-17T20:19:29.585815103Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device" * Jun 17 20:19:29 minikube dockerd[2753]: time="2020-06-17T20:19:29.586014903Z" level=info msg="Loading containers: start." * Jun 17 20:19:29 minikube dockerd[2753]: time="2020-06-17T20:19:29.654497603Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" * Jun 17 20:19:29 minikube dockerd[2753]: time="2020-06-17T20:19:29.683297103Z" level=info msg="Loading containers: done." 
* Jun 17 20:19:29 minikube dockerd[2753]: time="2020-06-17T20:19:29.702410703Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8 * Jun 17 20:19:29 minikube dockerd[2753]: time="2020-06-17T20:19:29.702462103Z" level=info msg="Daemon has completed initialization" * Jun 17 20:19:29 minikube dockerd[2753]: time="2020-06-17T20:19:29.716247903Z" level=info msg="API listen on /var/run/docker.sock" * Jun 17 20:19:29 minikube dockerd[2753]: time="2020-06-17T20:19:29.716325803Z" level=info msg="API listen on [::]:2376" * Jun 17 20:19:29 minikube systemd[1]: Started Docker Application Container Engine. * Jun 17 20:19:42 minikube dockerd[2753]: time="2020-06-17T20:19:42.272070603Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9917b8fd6184d1d036278a91522c10cd03bd48c1311fc6eaecbcda571727a176/shim.sock" debug=false pid=3792 * Jun 17 20:19:42 minikube dockerd[2753]: time="2020-06-17T20:19:42.285690203Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/df4a0bffd35549a38b61721436c2fdf6699efbda54d17f8d27de87a16d605b68/shim.sock" debug=false pid=3810 * Jun 17 20:19:42 minikube dockerd[2753]: time="2020-06-17T20:19:42.297191903Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b923131255d9a11df7fd12334cfff023cc0346c0062e9eb12f34db66b2c873ba/shim.sock" debug=false pid=3828 * Jun 17 20:19:42 minikube dockerd[2753]: time="2020-06-17T20:19:42.315573103Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/026497111a544c4c93566e8ba4dea9035b4fd2d33b8e01d11a5a9c6b08e89e80/shim.sock" debug=false pid=3856 * Jun 17 20:19:42 minikube dockerd[2753]: time="2020-06-17T20:19:42.448547503Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b6bac64ef9d7cdbfe111d161feec2918b369d7da0b610794214d41501990e88e/shim.sock" debug=false pid=3932 * Jun 17 20:19:42 minikube dockerd[2753]: time="2020-06-17T20:19:42.495789703Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b4723dcd34fdc571bf902b8be21baff8b4b7ee231035a0fa3ea8e7cffa803180/shim.sock" debug=false pid=3962 * Jun 17 20:19:42 minikube dockerd[2753]: time="2020-06-17T20:19:42.496159903Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ae0c1011cdb729d4cf9446cf65a586fef062b84531dfba37cd33f0fa84e2dd89/shim.sock" debug=false pid=3963 * Jun 17 20:19:42 minikube dockerd[2753]: time="2020-06-17T20:19:42.501041303Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/662fac5bf6dace1ee818d6d99a13479afc7d644e02f2d11756e1d38607540e11/shim.sock" debug=false pid=3977 * Jun 17 20:19:55 minikube dockerd[2753]: time="2020-06-17T20:19:55.855680115Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a3c93fb9ec8cd00a1d2a2acefb6fa23c24e52b82b10ba1783560a20f7f03ddb0/shim.sock" debug=false pid=4772 * Jun 17 20:19:56 minikube dockerd[2753]: time="2020-06-17T20:19:56.046274449Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/18c29f890f786ef7e42de9f2127b00b43ccbac40c40a069c46661ee00264a125/shim.sock" debug=false pid=4812 * Jun 17 20:19:56 minikube dockerd[2753]: time="2020-06-17T20:19:56.512526023Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d2e6ba7e82932d7b8c6db94c5c423feedf08cb39236a2910a8e5287a29142aed/shim.sock" debug=false pid=4930 * Jun 17 20:19:56 minikube dockerd[2753]: time="2020-06-17T20:19:56.707192714Z" level=info msg="shim containerd-shim 
started" address="/containerd-shim/moby/b15f30b16ab24890bca3f170a0c9af2cb5f038eeba2dd33711b7846aa37b9915/shim.sock" debug=false pid=4967 * Jun 17 20:19:57 minikube dockerd[2753]: time="2020-06-17T20:19:57.126589168Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/dfa62c2c846de68917539976f866c53335ed73e523a920b8749a2d9dc8263503/shim.sock" debug=false pid=5033 * Jun 17 20:19:57 minikube dockerd[2753]: time="2020-06-17T20:19:57.127732276Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fadc0860c08264aba614f052461d51f61e26816a6cd73e4dcf7238b5459a7841/shim.sock" debug=false pid=5037 * Jun 17 20:19:57 minikube dockerd[2753]: time="2020-06-17T20:19:57.295446781Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f2d1d3c52fac4d066f7b0b981fba9a185455405778609e6f2e791f5d933c6b38/shim.sock" debug=false pid=5143 * Jun 17 20:19:57 minikube dockerd[2753]: time="2020-06-17T20:19:57.365907688Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/89c063fb09af24df9471009cd506a2b4eedf2a890f4d678a8553639d47326c4d/shim.sock" debug=false pid=5184 * Jun 17 20:19:57 minikube dockerd[2753]: time="2020-06-17T20:19:57.392469179Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d5ff5abb361668a1edc6e60dae933585f595f1d90b33b75f75a4649438173ff7/shim.sock" debug=false pid=5214 * Jun 17 20:21:55 minikube dockerd[2753]: time="2020-06-17T20:21:55.333640898Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/93c8323a5e3bedc273713cbb16f579ddad342476c1237d40b3d1f3d3f209443e/shim.sock" debug=false pid=5880 * Jun 17 20:22:04 minikube dockerd[2753]: time="2020-06-17T20:22:04.949480437Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4533500b2daa7a41e575677380dcd66d3191d00f3fd6a7fed220415f2a3a3130/shim.sock" debug=false pid=6069 * Jun 17 20:22:05 minikube dockerd[2753]: time="2020-06-17T20:22:05.072559412Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0bddcd4b643352aba207d1d8151b98dfea5dbd7ab0f7ba1ea971bf8d45f4fcc8/shim.sock" debug=false pid=6122 * Jun 17 20:22:05 minikube dockerd[2753]: time="2020-06-17T20:22:05.074227421Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b6e5d1a0e98596bf4de5e4105cccc039f1fa4e860b30f47e6f87834fa085735c/shim.sock" debug=false pid=6129 * Jun 17 20:24:47 minikube dockerd[2753]: time="2020-06-17T20:24:47.432728173Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d268c0181b54d849849668d5670189140ef36a1e701cb529b4f52fd7e5df828e/shim.sock" debug=false pid=7098 * Jun 17 20:24:48 minikube dockerd[2753]: time="2020-06-17T20:24:48.490037414Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/dd102609843ffc36eac58630575566599c168a93f2e3a18b3d2bc43d0dd1c83d/shim.sock" debug=false pid=7194 * Jun 17 20:24:49 minikube dockerd[2753]: time="2020-06-17T20:24:49.804201886Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1dd102785ae5be90c55788380d8d473ce655bb1b5c5a8aadebc6ba43ff822dde/shim.sock" debug=false pid=7338 * * ==> container status <== * CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID * 1dd102785ae5b quay.io/datawire/ambassador@sha256:83a5093bb9fa1f9f928d9b1dd2342c03f4c519b99ab7fe4047124fed170d1d38 17 minutes ago Running ambassador 0 0bddcd4b64335 * dd102609843ff quay.io/datawire/ambassador@sha256:83a5093bb9fa1f9f928d9b1dd2342c03f4c519b99ab7fe4047124fed170d1d38 17 
minutes ago Running ambassador 0 b6e5d1a0e9859 * d268c0181b54d quay.io/datawire/ambassador@sha256:83a5093bb9fa1f9f928d9b1dd2342c03f4c519b99ab7fe4047124fed170d1d38 17 minutes ago Running ambassador 0 4533500b2daa7 * 93c8323a5e3be quay.io/datawire/ambassador-operator@sha256:492f33e0828a371aa23331d75c11c251b21499e31287f026269e3f6ec6da34ed 20 minutes ago Running ambassador-operator 0 f2d1d3c52fac4 * d5ff5abb36166 67da37a9a360e 21 minutes ago Running coredns 0 fadc0860c0826 * 89c063fb09af2 67da37a9a360e 21 minutes ago Running coredns 0 dfa62c2c846de * b15f30b16ab24 4689081edb103 22 minutes ago Running storage-provisioner 0 d2e6ba7e82932 * 18c29f890f786 3439b7546f29b 22 minutes ago Running kube-proxy 0 a3c93fb9ec8cd * ae0c1011cdb72 7e28efa976bd1 22 minutes ago Running kube-apiserver 0 026497111a544 * 662fac5bf6dac 76216c34ed0c7 22 minutes ago Running kube-scheduler 0 b923131255d9a * b4723dcd34fdc 303ce5db0e90d 22 minutes ago Running etcd 0 df4a0bffd3554 * b6bac64ef9d7c da26705ccb4b5 22 minutes ago Running kube-controller-manager 0 9917b8fd6184d * * ==> coredns [89c063fb09af] <== * .:53 * [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7 * CoreDNS-1.6.7 * linux/amd64, go1.13.6, da7f65b * * ==> coredns [d5ff5abb3616] <== * .:53 * [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7 * CoreDNS-1.6.7 * linux/amd64, go1.13.6, da7f65b * * ==> describe nodes <== * Name: minikube * Roles: master * Labels: beta.kubernetes.io/arch=amd64 * beta.kubernetes.io/os=linux * kubernetes.io/arch=amd64 * kubernetes.io/hostname=minikube * kubernetes.io/os=linux * minikube.k8s.io/commit=57e2f55f47effe9ce396cea42a1e0eb4f611ebbd * minikube.k8s.io/name=minikube * minikube.k8s.io/updated_at=2020_06_17T21_19_47_0700 * minikube.k8s.io/version=v1.11.0 * node-role.kubernetes.io/master= * Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock * node.alpha.kubernetes.io/ttl: 0 * volumes.kubernetes.io/controller-managed-attach-detach: true * CreationTimestamp: Wed, 17 Jun 2020 20:19:45 +0000 * Taints: * Unschedulable: false * Lease: * HolderIdentity: minikube * AcquireTime: * RenewTime: Wed, 17 Jun 2020 20:41:55 +0000 * Conditions: * Type Status LastHeartbeatTime LastTransitionTime Reason Message * ---- ------ ----------------- ------------------ ------ ------- * MemoryPressure False Wed, 17 Jun 2020 20:39:59 +0000 Wed, 17 Jun 2020 20:19:43 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available * DiskPressure False Wed, 17 Jun 2020 20:39:59 +0000 Wed, 17 Jun 2020 20:19:43 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure * PIDPressure False Wed, 17 Jun 2020 20:39:59 +0000 Wed, 17 Jun 2020 20:19:43 +0000 KubeletHasSufficientPID kubelet has sufficient PID available * Ready True Wed, 17 Jun 2020 20:39:59 +0000 Wed, 17 Jun 2020 20:19:45 +0000 KubeletReady kubelet is posting ready status * Addresses: * InternalIP: 172.30.159.127 * Hostname: minikube * Capacity: * cpu: 6 * ephemeral-storage: 17784752Ki * hugepages-2Mi: 0 * memory: 6098856Ki * pods: 110 * Allocatable: * cpu: 6 * ephemeral-storage: 17784752Ki * hugepages-2Mi: 0 * memory: 6098856Ki * pods: 110 * System Info: * Machine ID: f4b44a52d15c4222972cc066c0fc96ad * System UUID: 1f58659d-b59f-bb44-9524-e5a315b091b1 * Boot ID: b27a8600-5747-4f5a-8162-4dc731e75a41 * Kernel Version: 4.19.107 * OS Image: Buildroot 2019.02.10 * Operating System: linux * Architecture: amd64 * Container Runtime Version: docker://19.3.8 * Kubelet Version: v1.18.3 * Kube-Proxy Version: v1.18.3 
* Non-terminated Pods: (12 in total) * Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE * --------- ---- ------------ ---------- --------------- ------------- --- * ambassador ambassador-86c6c47459-lj9hb 200m (3%) 1 (16%) 300Mi (5%) 600Mi (10%) 19m * ambassador ambassador-86c6c47459-wmph4 200m (3%) 1 (16%) 300Mi (5%) 600Mi (10%) 19m * ambassador ambassador-86c6c47459-z5rhp 200m (3%) 1 (16%) 300Mi (5%) 600Mi (10%) 19m * ambassador ambassador-operator-764fcb8c6b-dhxg9 0 (0%) 0 (0%) 0 (0%) 0 (0%) 22m * kube-system coredns-66bff467f8-9fc2v 100m (1%) 0 (0%) 70Mi (1%) 170Mi (2%) 22m * kube-system coredns-66bff467f8-grvl8 100m (1%) 0 (0%) 70Mi (1%) 170Mi (2%) 22m * kube-system etcd-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 22m * kube-system kube-apiserver-minikube 250m (4%) 0 (0%) 0 (0%) 0 (0%) 22m * kube-system kube-controller-manager-minikube 200m (3%) 0 (0%) 0 (0%) 0 (0%) 22m * kube-system kube-proxy-7klqw 0 (0%) 0 (0%) 0 (0%) 0 (0%) 22m * kube-system kube-scheduler-minikube 100m (1%) 0 (0%) 0 (0%) 0 (0%) 22m * kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 22m * Allocated resources: * (Total limits may be over 100 percent, i.e., overcommitted.) * Resource Requests Limits * -------- -------- ------ * cpu 1350m (22%) 3 (50%) * memory 1040Mi (17%) 2140Mi (35%) * ephemeral-storage 0 (0%) 0 (0%) * hugepages-2Mi 0 (0%) 0 (0%) * Events: * Type Reason Age From Message * ---- ------ ---- ---- ------- * Normal NodeHasSufficientMemory 22m (x5 over 22m) kubelet, minikube Node minikube status is now: NodeHasSufficientMemory * Normal NodeHasNoDiskPressure 22m (x5 over 22m) kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure * Normal NodeHasSufficientPID 22m (x4 over 22m) kubelet, minikube Node minikube status is now: NodeHasSufficientPID * Normal Starting 22m kubelet, minikube Starting kubelet. * Normal NodeAllocatableEnforced 22m kubelet, minikube Updated Node Allocatable limit across pods * Normal NodeHasSufficientMemory 22m kubelet, minikube Node minikube status is now: NodeHasSufficientMemory * Normal NodeHasNoDiskPressure 22m kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure * Normal NodeHasSufficientPID 22m kubelet, minikube Node minikube status is now: NodeHasSufficientPID * Normal Starting 22m kube-proxy, minikube Starting kube-proxy. * * ==> dmesg <== * [Jun17 20:19] You have booted with nomodeset. This means your GPU drivers are DISABLED * [ +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly * [ +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it * [ +0.055134] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. * [ +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. * [ +0.000047] #2 #3 #4 #5 * [ +0.031669] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. * [ +0.006909] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug, * * this clock source is slow. 
Consider trying other clock sources * [ +2.060120] Unstable clock detected, switching default tracing clock to "global" * If you want to keep using the local clock, then add: * "trace_clock=local" * on the kernel command line * [ +0.000029] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2 * [ +0.423815] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons * [ +0.767020] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument * [ +0.002949] systemd-fstab-generator[1280]: Ignoring "noauto" for root device * [ +0.003499] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling. * [ +0.000001] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.) * [ +0.917243] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack. * [ +0.255732] vboxguest: loading out-of-tree module taints kernel. * [ +0.003383] vboxguest: PCI device not found, probably running on physical hardware. * [ +10.853471] systemd-fstab-generator[2468]: Ignoring "noauto" for root device * [ +0.060034] systemd-fstab-generator[2478]: Ignoring "noauto" for root device * [ +11.533937] systemd-fstab-generator[2741]: Ignoring "noauto" for root device * [ +1.863528] kauditd_printk_skb: 65 callbacks suppressed * [ +0.353620] systemd-fstab-generator[2906]: Ignoring "noauto" for root device * [ +0.635351] systemd-fstab-generator[2988]: Ignoring "noauto" for root device * [ +1.578689] systemd-fstab-generator[3241]: Ignoring "noauto" for root device * [ +9.259275] kauditd_printk_skb: 107 callbacks suppressed * [ +6.566570] systemd-fstab-generator[4381]: Ignoring "noauto" for root device * [ +8.197936] kauditd_printk_skb: 32 callbacks suppressed * [Jun17 20:20] kauditd_printk_skb: 47 callbacks suppressed * [Jun17 20:21] NFSD: Unable to end grace period: -110 * [ +50.527833] kauditd_printk_skb: 2 callbacks suppressed * [Jun17 20:22] kauditd_printk_skb: 2 callbacks suppressed * [Jun17 20:25] kauditd_printk_skb: 11 callbacks suppressed * [Jun17 20:27] kauditd_printk_skb: 26 callbacks suppressed * * ==> etcd [b4723dcd34fd] <== * [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead * 2020-06-17 20:19:42.616314 I | etcdmain: etcd Version: 3.4.3 * 2020-06-17 20:19:42.616352 I | etcdmain: Git SHA: 3cf2f69b5 * 2020-06-17 20:19:42.616355 I | etcdmain: Go Version: go1.12.12 * 2020-06-17 20:19:42.616357 I | etcdmain: Go OS/Arch: linux/amd64 * 2020-06-17 20:19:42.616364 I | etcdmain: setting maximum number of CPUs to 6, total number of available CPUs is 6 * [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead * 2020-06-17 20:19:42.616439 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = * 2020-06-17 20:19:42.617044 I | embed: name = minikube * 2020-06-17 20:19:42.617066 I | embed: data dir = /var/lib/minikube/etcd * 2020-06-17 20:19:42.617071 I | embed: member dir = /var/lib/minikube/etcd/member * 2020-06-17 20:19:42.617075 I | embed: heartbeat = 100ms * 2020-06-17 20:19:42.617078 I | embed: election = 1000ms * 2020-06-17 20:19:42.617080 I | embed: snapshot count = 10000 * 2020-06-17 20:19:42.617087 I | embed: advertise client 
URLs = https://172.30.159.127:2379 * 2020-06-17 20:19:42.630612 I | etcdserver: starting member 93c85ed28d368501 in cluster 65c73624dcdf4448 * raft2020/06/17 20:19:42 INFO: 93c85ed28d368501 switched to configuration voters=() * raft2020/06/17 20:19:42 INFO: 93c85ed28d368501 became follower at term 0 * raft2020/06/17 20:19:42 INFO: newRaft 93c85ed28d368501 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0] * raft2020/06/17 20:19:42 INFO: 93c85ed28d368501 became follower at term 1 * raft2020/06/17 20:19:42 INFO: 93c85ed28d368501 switched to configuration voters=(10648865577322841345) * 2020-06-17 20:19:42.640985 W | auth: simple token is not cryptographically signed * 2020-06-17 20:19:42.655091 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided] * 2020-06-17 20:19:42.655450 I | etcdserver: 93c85ed28d368501 as single-node; fast-forwarding 9 ticks (election ticks 10) * 2020-06-17 20:19:42.656968 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = * 2020-06-17 20:19:42.657098 I | embed: listening for peers on 172.30.159.127:2380 * 2020-06-17 20:19:42.657155 I | embed: listening for metrics on http://127.0.0.1:2381 * raft2020/06/17 20:19:42 INFO: 93c85ed28d368501 switched to configuration voters=(10648865577322841345) * 2020-06-17 20:19:42.657315 I | etcdserver/membership: added member 93c85ed28d368501 [https://172.30.159.127:2380] to cluster 65c73624dcdf4448 * raft2020/06/17 20:19:43 INFO: 93c85ed28d368501 is starting a new election at term 1 * raft2020/06/17 20:19:43 INFO: 93c85ed28d368501 became candidate at term 2 * raft2020/06/17 20:19:43 INFO: 93c85ed28d368501 received MsgVoteResp from 93c85ed28d368501 at term 2 * raft2020/06/17 20:19:43 INFO: 93c85ed28d368501 became leader at term 2 * raft2020/06/17 20:19:43 INFO: raft.node: 93c85ed28d368501 elected leader 93c85ed28d368501 at term 2 * 2020-06-17 20:19:43.231627 I | etcdserver: published {Name:minikube ClientURLs:[https://172.30.159.127:2379]} to cluster 65c73624dcdf4448 * 2020-06-17 20:19:43.231727 I | etcdserver: setting up the initial cluster version to 3.4 * 2020-06-17 20:19:43.231831 I | embed: ready to serve client requests * 2020-06-17 20:19:43.231974 I | embed: ready to serve client requests * 2020-06-17 20:19:43.233425 I | embed: serving client requests on 127.0.0.1:2379 * 2020-06-17 20:19:43.233624 I | embed: serving client requests on 172.30.159.127:2379 * 2020-06-17 20:19:43.235199 N | etcdserver/membership: set the initial cluster version to 3.4 * 2020-06-17 20:19:43.235285 I | etcdserver/api: enabled capabilities for version 3.4 * 2020-06-17 20:29:43.244803 I | mvcc: store.index: compact 1202 * 2020-06-17 20:29:43.262792 I | mvcc: finished scheduled compaction at 1202 (took 17.560115ms) * 2020-06-17 20:34:43.250618 I | mvcc: store.index: compact 1907 * 2020-06-17 20:34:43.265846 I | mvcc: finished scheduled compaction at 1907 (took 14.663368ms) * 2020-06-17 20:39:43.258212 I | mvcc: store.index: compact 2567 * 2020-06-17 20:39:43.271705 I | mvcc: finished scheduled compaction at 2567 (took 13.094383ms) * * ==> kernel <== * 20:41:56 up 22 min, 0 users, load average: 0.80, 0.29, 0.20 * Linux minikube 4.19.107 #1 SMP Thu May 28 15:07:17 PDT 2020 x86_64 GNU/Linux * PRETTY_NAME="Buildroot 2019.02.10" * * ==> kube-apiserver [ae0c1011cdb7] <== * I0617 20:22:26.933961 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: 
[{https://127.0.0.1:2379 0 }] * I0617 20:22:26.940722 1 client.go:361] parsed scheme: "endpoint" * I0617 20:22:26.940741 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0617 20:22:26.978232 1 client.go:361] parsed scheme: "endpoint" * I0617 20:22:26.978269 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0617 20:22:26.987346 1 client.go:361] parsed scheme: "endpoint" * I0617 20:22:26.987567 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0617 20:22:27.013935 1 client.go:361] parsed scheme: "endpoint" * I0617 20:22:27.013975 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0617 20:22:27.020705 1 client.go:361] parsed scheme: "endpoint" * I0617 20:22:27.020726 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0617 20:22:27.044972 1 client.go:361] parsed scheme: "endpoint" * I0617 20:22:27.045007 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0617 20:22:27.051218 1 client.go:361] parsed scheme: "endpoint" * I0617 20:22:27.051269 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0617 20:22:27.074362 1 client.go:361] parsed scheme: "endpoint" * I0617 20:22:27.074399 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0617 20:22:27.079923 1 client.go:361] parsed scheme: "endpoint" * I0617 20:22:27.079955 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0617 20:22:27.104913 1 client.go:361] parsed scheme: "endpoint" * I0617 20:22:27.104945 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0617 20:22:27.110321 1 client.go:361] parsed scheme: "endpoint" * I0617 20:22:27.110396 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0617 20:22:27.135145 1 client.go:361] parsed scheme: "endpoint" * I0617 20:22:27.135180 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0617 20:22:27.140710 1 client.go:361] parsed scheme: "endpoint" * I0617 20:22:27.140940 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0617 20:22:27.165121 1 client.go:361] parsed scheme: "endpoint" * I0617 20:22:27.165152 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0617 20:22:27.171035 1 client.go:361] parsed scheme: "endpoint" * I0617 20:22:27.171092 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0617 20:22:27.191189 1 client.go:361] parsed scheme: "endpoint" * I0617 20:22:27.191223 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0617 20:22:27.216517 1 client.go:361] parsed scheme: "endpoint" * I0617 20:22:27.216553 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0617 20:22:27.222393 1 client.go:361] parsed scheme: "endpoint" * I0617 20:22:27.222413 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0617 20:22:27.243211 1 client.go:361] parsed scheme: "endpoint" * I0617 20:22:27.243244 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: 
[{https://127.0.0.1:2379 0 }] * I0617 20:22:27.267354 1 client.go:361] parsed scheme: "endpoint" * I0617 20:22:27.267388 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0617 20:22:27.273398 1 client.go:361] parsed scheme: "endpoint" * I0617 20:22:27.273442 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0617 20:22:27.294006 1 client.go:361] parsed scheme: "endpoint" * I0617 20:22:27.294048 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0617 20:22:27.317367 1 client.go:361] parsed scheme: "endpoint" * I0617 20:22:27.317401 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0617 20:22:27.323209 1 client.go:361] parsed scheme: "endpoint" * I0617 20:22:27.323256 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0617 20:22:27.347533 1 client.go:361] parsed scheme: "endpoint" * I0617 20:22:27.347566 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0617 20:22:27.353909 1 client.go:361] parsed scheme: "endpoint" * I0617 20:22:27.353956 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0617 20:22:27.378833 1 client.go:361] parsed scheme: "endpoint" * I0617 20:22:27.378890 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0617 20:22:27.384930 1 client.go:361] parsed scheme: "endpoint" * I0617 20:22:27.384963 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0617 20:22:27.405997 1 client.go:361] parsed scheme: "endpoint" * I0617 20:22:27.406029 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * W0617 20:34:45.602640 1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted * * ==> kube-controller-manager [b6bac64ef9d7] <== * I0617 20:19:54.721222 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ambassador", Name:"ambassador-operator", UID:"be9d4bb9-0f3d-47f2-8f42-706f98c73126", APIVersion:"apps/v1", ResourceVersion:"286", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ambassador-operator-764fcb8c6b to 1 * I0617 20:19:54.724409 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"9e13b921-583e-4c4a-bece-8f22b38eb7cd", APIVersion:"apps/v1", ResourceVersion:"212", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 2 * I0617 20:19:54.726645 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ambassador", Name:"ambassador-operator-764fcb8c6b", UID:"aee91240-edaf-47fc-8793-3085948e28d3", APIVersion:"apps/v1", ResourceVersion:"359", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ambassador-operator-764fcb8c6b-dhxg9 * I0617 20:19:54.728573 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"b587a3e9-dd9d-4b87-98b8-b952c0123445", APIVersion:"apps/v1", ResourceVersion:"360", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-grvl8 * I0617 20:19:54.734049 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", 
UID:"b587a3e9-dd9d-4b87-98b8-b952c0123445", APIVersion:"apps/v1", ResourceVersion:"360", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-9fc2v * I0617 20:19:54.978410 1 shared_informer.go:230] Caches are synced for PVC protection * I0617 20:19:55.022897 1 shared_informer.go:230] Caches are synced for expand * I0617 20:19:55.120320 1 shared_informer.go:230] Caches are synced for resource quota * I0617 20:19:55.122039 1 shared_informer.go:230] Caches are synced for garbage collector * I0617 20:19:55.122074 1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage * W0617 20:19:55.122086 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist * I0617 20:19:55.122358 1 shared_informer.go:230] Caches are synced for garbage collector * I0617 20:19:55.140907 1 shared_informer.go:230] Caches are synced for daemon sets * I0617 20:19:55.149051 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"f903bd4b-d005-42e1-aa13-7e32064fcaa4", APIVersion:"apps/v1", ResourceVersion:"222", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-7klqw * E0617 20:19:55.159626 1 daemon_controller.go:292] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"f903bd4b-d005-42e1-aa13-7e32064fcaa4", ResourceVersion:"222", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63728021988, loc:(*time.Location)(0x6d09200)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0012232a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0012232c0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0012232e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), 
RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc000def680), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001223300), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001223320), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.3", 
Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001223360)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc000d2b180), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0005c58a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000490c40), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000110a20)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0005c58f8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again * I0617 20:19:55.161790 1 shared_informer.go:230] Caches are synced for attach detach * I0617 20:19:55.167276 1 shared_informer.go:230] Caches are synced for TTL * I0617 20:19:55.167337 1 shared_informer.go:230] Caches are synced for disruption * I0617 20:19:55.167346 1 disruption.go:339] Sending events to api server. 
* I0617 20:19:55.168210 1 shared_informer.go:230] Caches are synced for persistent volume * I0617 20:19:55.168367 1 shared_informer.go:230] Caches are synced for stateful set * I0617 20:19:55.168398 1 shared_informer.go:230] Caches are synced for GC * I0617 20:19:55.171340 1 shared_informer.go:230] Caches are synced for resource quota * I0617 20:19:55.221459 1 shared_informer.go:230] Caches are synced for taint * I0617 20:19:55.221585 1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: * W0617 20:19:55.221624 1 node_lifecycle_controller.go:1048] Missing timestamp for Node minikube. Assuming now as a timestamp. * I0617 20:19:55.221660 1 node_lifecycle_controller.go:1249] Controller detected that zone is now in state Normal. * I0617 20:19:55.221687 1 taint_manager.go:187] Starting NoExecuteTaintManager * I0617 20:19:55.221719 1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"9ecd0196-f6ce-499d-b17f-c9092807fc8c", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller * I0617 20:20:25.422157 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ambassadorinstallations.getambassador.io * I0617 20:20:25.422219 1 shared_informer.go:223] Waiting for caches to sync for resource quota * I0617 20:20:25.522446 1 shared_informer.go:230] Caches are synced for resource quota * I0617 20:20:25.726126 1 shared_informer.go:223] Waiting for caches to sync for garbage collector * I0617 20:20:25.726171 1 shared_informer.go:230] Caches are synced for garbage collector * I0617 20:22:04.478280 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ambassador", Name:"ambassador", UID:"7768c8dd-0eb1-4c01-9d80-82f01383fc56", APIVersion:"apps/v1", ResourceVersion:"826", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ambassador-86c6c47459 to 3 * I0617 20:22:04.486627 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ambassador", Name:"ambassador-86c6c47459", UID:"3822d053-6637-4ab9-856d-dfb26e2aed4b", APIVersion:"apps/v1", ResourceVersion:"828", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ambassador-86c6c47459-z5rhp * I0617 20:22:04.493835 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ambassador", Name:"ambassador-86c6c47459", UID:"3822d053-6637-4ab9-856d-dfb26e2aed4b", APIVersion:"apps/v1", ResourceVersion:"828", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ambassador-86c6c47459-lj9hb * I0617 20:22:04.496011 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ambassador", Name:"ambassador-86c6c47459", UID:"3822d053-6637-4ab9-856d-dfb26e2aed4b", APIVersion:"apps/v1", ResourceVersion:"828", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ambassador-86c6c47459-wmph4 * I0617 20:22:26.880093 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ratelimitservices.getambassador.io * I0617 20:22:26.880136 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for tracingservices.getambassador.io * I0617 20:22:26.880149 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for kubernetesserviceresolvers.getambassador.io * I0617 20:22:26.880163 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for projects.getambassador.io * I0617 20:22:26.880176 1 
resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for modules.getambassador.io * I0617 20:22:26.880193 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for authservices.getambassador.io * I0617 20:22:26.880206 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for consulresolvers.getambassador.io * I0617 20:22:26.880216 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for tlscontexts.getambassador.io * I0617 20:22:26.880229 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for filters.getambassador.io * I0617 20:22:26.880265 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for mappings.getambassador.io * I0617 20:22:26.880298 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for hosts.getambassador.io * I0617 20:22:26.880332 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for kubernetesendpointresolvers.getambassador.io * I0617 20:22:26.880344 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for projectrevisions.getambassador.io * I0617 20:22:26.880389 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ratelimits.getambassador.io * I0617 20:22:26.880436 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for projectcontrollers.getambassador.io * I0617 20:22:26.880467 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for tcpmappings.getambassador.io * I0617 20:22:26.880501 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for filterpolicies.getambassador.io * I0617 20:22:26.880530 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for logservices.getambassador.io * I0617 20:22:26.880742 1 shared_informer.go:223] Waiting for caches to sync for resource quota * I0617 20:22:27.481021 1 shared_informer.go:230] Caches are synced for resource quota * I0617 20:22:27.537536 1 shared_informer.go:223] Waiting for caches to sync for garbage collector * I0617 20:22:27.537587 1 shared_informer.go:230] Caches are synced for garbage collector * * ==> kube-proxy [18c29f890f78] <== * W0617 20:19:56.176355 1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy * I0617 20:19:56.181138 1 node.go:136] Successfully retrieved node IP: 172.30.159.127 * I0617 20:19:56.181168 1 server_others.go:186] Using iptables Proxier. 
* W0617 20:19:56.181173 1 server_others.go:436] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined * I0617 20:19:56.181176 1 server_others.go:447] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local * I0617 20:19:56.181463 1 server.go:583] Version: v1.18.3 * I0617 20:19:56.181730 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 196608 * I0617 20:19:56.181749 1 conntrack.go:52] Setting nf_conntrack_max to 196608 * I0617 20:19:56.181842 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 * I0617 20:19:56.181911 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600 * I0617 20:19:56.182363 1 config.go:133] Starting endpoints config controller * I0617 20:19:56.182396 1 shared_informer.go:223] Waiting for caches to sync for endpoints config * I0617 20:19:56.182738 1 config.go:315] Starting service config controller * I0617 20:19:56.182762 1 shared_informer.go:223] Waiting for caches to sync for service config * I0617 20:19:56.283065 1 shared_informer.go:230] Caches are synced for endpoints config * I0617 20:19:56.283195 1 shared_informer.go:230] Caches are synced for service config * * ==> kube-scheduler [662fac5bf6da] <== * I0617 20:19:42.645058 1 registry.go:150] Registering EvenPodsSpread predicate and priority function * I0617 20:19:42.645123 1 registry.go:150] Registering EvenPodsSpread predicate and priority function * I0617 20:19:43.069183 1 serving.go:313] Generated self-signed cert in-memory * W0617 20:19:45.184073 1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' * W0617 20:19:45.184090 1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system" * W0617 20:19:45.184096 1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous. 
* W0617 20:19:45.184100 1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false * I0617 20:19:45.193903 1 registry.go:150] Registering EvenPodsSpread predicate and priority function * I0617 20:19:45.194221 1 registry.go:150] Registering EvenPodsSpread predicate and priority function * W0617 20:19:45.195270 1 authorization.go:47] Authorization is disabled * W0617 20:19:45.195276 1 authentication.go:40] Authentication is disabled * I0617 20:19:45.195283 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251 * I0617 20:19:45.196130 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259 * I0617 20:19:45.196225 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file * I0617 20:19:45.196254 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file * I0617 20:19:45.196265 1 tlsconfig.go:240] Starting DynamicServingCertificateController * E0617 20:19:45.198625 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" * E0617 20:19:45.202707 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope * E0617 20:19:45.202788 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope * E0617 20:19:45.210419 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope * E0617 20:19:45.210509 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope * E0617 20:19:45.210561 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope * E0617 20:19:45.210605 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope * E0617 20:19:45.210646 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope * E0617 20:19:45.210683 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope * E0617 20:19:46.121723 1 reflector.go:178] 
k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope * E0617 20:19:46.168094 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope * E0617 20:19:46.240482 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope * E0617 20:19:46.398311 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" * I0617 20:19:49.297133 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler... * I0617 20:19:49.306377 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler * I0617 20:19:49.496507 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file * E0617 20:19:51.023481 1 factory.go:503] pod: kube-system/storage-provisioner is already present in unschedulable queue * E0617 20:19:54.735086 1 factory.go:503] pod: ambassador/ambassador-operator-764fcb8c6b-dhxg9 is already present in the active queue * E0617 20:19:54.741635 1 factory.go:503] pod: kube-system/coredns-66bff467f8-grvl8 is already present in the active queue * E0617 20:19:54.750117 1 factory.go:503] pod: kube-system/coredns-66bff467f8-9fc2v is already present in the active queue * * ==> kubelet <== * -- Logs begin at Wed 2020-06-17 20:19:03 UTC, end at Wed 2020-06-17 21:19:01 UTC. 
-- * Jun 17 20:19:54 minikube kubelet[4390]: I0617 20:19:54.676800 4390 kubelet_node_status.go:73] Successfully registered node minikube * Jun 17 20:19:54 minikube kubelet[4390]: I0617 20:19:54.847964 4390 topology_manager.go:233] [topologymanager] Topology Admit Handler * Jun 17 20:19:54 minikube kubelet[4390]: I0617 20:19:54.851142 4390 topology_manager.go:233] [topologymanager] Topology Admit Handler * Jun 17 20:19:54 minikube kubelet[4390]: I0617 20:19:54.855577 4390 topology_manager.go:233] [topologymanager] Topology Admit Handler * Jun 17 20:19:54 minikube kubelet[4390]: I0617 20:19:54.859676 4390 topology_manager.go:233] [topologymanager] Topology Admit Handler * Jun 17 20:19:54 minikube kubelet[4390]: I0617 20:19:54.934200 4390 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/cb4f09b70e95428483e2609cfbe01121-etcd-certs") pod "etcd-minikube" (UID: "cb4f09b70e95428483e2609cfbe01121") * Jun 17 20:19:54 minikube kubelet[4390]: I0617 20:19:54.934248 4390 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/cb4f09b70e95428483e2609cfbe01121-etcd-data") pod "etcd-minikube" (UID: "cb4f09b70e95428483e2609cfbe01121") * Jun 17 20:19:54 minikube kubelet[4390]: I0617 20:19:54.934265 4390 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/8f362b6d7d89dab1420a57093421b29a-ca-certs") pod "kube-apiserver-minikube" (UID: "8f362b6d7d89dab1420a57093421b29a") * Jun 17 20:19:55 minikube kubelet[4390]: I0617 20:19:55.034989 4390 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/6188fbbe64e28a0413e239e610f71669-flexvolume-dir") pod "kube-controller-manager-minikube" (UID: "6188fbbe64e28a0413e239e610f71669") * Jun 17 20:19:55 minikube kubelet[4390]: I0617 20:19:55.035043 4390 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/6188fbbe64e28a0413e239e610f71669-k8s-certs") pod "kube-controller-manager-minikube" (UID: "6188fbbe64e28a0413e239e610f71669") * Jun 17 20:19:55 minikube kubelet[4390]: I0617 20:19:55.035153 4390 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/8f362b6d7d89dab1420a57093421b29a-k8s-certs") pod "kube-apiserver-minikube" (UID: "8f362b6d7d89dab1420a57093421b29a") * Jun 17 20:19:55 minikube kubelet[4390]: I0617 20:19:55.035185 4390 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/8f362b6d7d89dab1420a57093421b29a-usr-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "8f362b6d7d89dab1420a57093421b29a") * Jun 17 20:19:55 minikube kubelet[4390]: I0617 20:19:55.035202 4390 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/6188fbbe64e28a0413e239e610f71669-ca-certs") pod "kube-controller-manager-minikube" (UID: "6188fbbe64e28a0413e239e610f71669") * Jun 17 20:19:55 minikube kubelet[4390]: I0617 20:19:55.035215 4390 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: 
"kubernetes.io/host-path/6188fbbe64e28a0413e239e610f71669-kubeconfig") pod "kube-controller-manager-minikube" (UID: "6188fbbe64e28a0413e239e610f71669") * Jun 17 20:19:55 minikube kubelet[4390]: I0617 20:19:55.035229 4390 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/6188fbbe64e28a0413e239e610f71669-usr-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "6188fbbe64e28a0413e239e610f71669") * Jun 17 20:19:55 minikube kubelet[4390]: I0617 20:19:55.035241 4390 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/a8caea92c80c24c844216eb1d68fe417-kubeconfig") pod "kube-scheduler-minikube" (UID: "a8caea92c80c24c844216eb1d68fe417") * Jun 17 20:19:55 minikube kubelet[4390]: I0617 20:19:55.035247 4390 reconciler.go:157] Reconciler: start to sync state * Jun 17 20:19:55 minikube kubelet[4390]: I0617 20:19:55.153449 4390 topology_manager.go:233] [topologymanager] Topology Admit Handler * Jun 17 20:19:55 minikube kubelet[4390]: I0617 20:19:55.229998 4390 topology_manager.go:233] [topologymanager] Topology Admit Handler * Jun 17 20:19:55 minikube kubelet[4390]: I0617 20:19:55.236006 4390 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/5d3f8233-ed7d-4070-86ba-a0addf8fab5a-xtables-lock") pod "kube-proxy-7klqw" (UID: "5d3f8233-ed7d-4070-86ba-a0addf8fab5a") * Jun 17 20:19:55 minikube kubelet[4390]: I0617 20:19:55.236041 4390 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/3b6626de-bf9e-4da6-b816-7953c54632f6-tmp") pod "storage-provisioner" (UID: "3b6626de-bf9e-4da6-b816-7953c54632f6") * Jun 17 20:19:55 minikube kubelet[4390]: I0617 20:19:55.236077 4390 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-vtss9" (UniqueName: "kubernetes.io/secret/3b6626de-bf9e-4da6-b816-7953c54632f6-storage-provisioner-token-vtss9") pod "storage-provisioner" (UID: "3b6626de-bf9e-4da6-b816-7953c54632f6") * Jun 17 20:19:55 minikube kubelet[4390]: I0617 20:19:55.236092 4390 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/5d3f8233-ed7d-4070-86ba-a0addf8fab5a-kube-proxy") pod "kube-proxy-7klqw" (UID: "5d3f8233-ed7d-4070-86ba-a0addf8fab5a") * Jun 17 20:19:55 minikube kubelet[4390]: I0617 20:19:55.236123 4390 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/5d3f8233-ed7d-4070-86ba-a0addf8fab5a-lib-modules") pod "kube-proxy-7klqw" (UID: "5d3f8233-ed7d-4070-86ba-a0addf8fab5a") * Jun 17 20:19:55 minikube kubelet[4390]: I0617 20:19:55.236158 4390 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-gqn7c" (UniqueName: "kubernetes.io/secret/5d3f8233-ed7d-4070-86ba-a0addf8fab5a-kube-proxy-token-gqn7c") pod "kube-proxy-7klqw" (UID: "5d3f8233-ed7d-4070-86ba-a0addf8fab5a") * Jun 17 20:19:56 minikube kubelet[4390]: I0617 20:19:56.312022 4390 topology_manager.go:233] [topologymanager] Topology Admit Handler * Jun 17 20:19:56 minikube kubelet[4390]: I0617 20:19:56.315542 4390 topology_manager.go:233] [topologymanager] Topology Admit Handler * Jun 17 20:19:56 minikube 
kubelet[4390]: I0617 20:19:56.318219 4390 topology_manager.go:233] [topologymanager] Topology Admit Handler * Jun 17 20:19:56 minikube kubelet[4390]: I0617 20:19:56.339623 4390 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c69a0e21-6660-4821-ae2f-52a730d9cb32-config-volume") pod "coredns-66bff467f8-9fc2v" (UID: "c69a0e21-6660-4821-ae2f-52a730d9cb32") * Jun 17 20:19:56 minikube kubelet[4390]: I0617 20:19:56.339651 4390 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-jnf68" (UniqueName: "kubernetes.io/secret/c69a0e21-6660-4821-ae2f-52a730d9cb32-coredns-token-jnf68") pod "coredns-66bff467f8-9fc2v" (UID: "c69a0e21-6660-4821-ae2f-52a730d9cb32") * Jun 17 20:19:56 minikube kubelet[4390]: I0617 20:19:56.339688 4390 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8864ad96-f012-4362-9986-35eddbb2860d-config-volume") pod "coredns-66bff467f8-grvl8" (UID: "8864ad96-f012-4362-9986-35eddbb2860d") * Jun 17 20:19:56 minikube kubelet[4390]: I0617 20:19:56.339709 4390 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-jnf68" (UniqueName: "kubernetes.io/secret/8864ad96-f012-4362-9986-35eddbb2860d-coredns-token-jnf68") pod "coredns-66bff467f8-grvl8" (UID: "8864ad96-f012-4362-9986-35eddbb2860d") * Jun 17 20:19:56 minikube kubelet[4390]: I0617 20:19:56.339728 4390 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ambassador-operator-token-zqpjt" (UniqueName: "kubernetes.io/secret/509c492d-9bd4-425b-a433-ab60d61d2c0e-ambassador-operator-token-zqpjt") pod "ambassador-operator-764fcb8c6b-dhxg9" (UID: "509c492d-9bd4-425b-a433-ab60d61d2c0e") * Jun 17 20:19:56 minikube kubelet[4390]: I0617 20:19:56.339779 4390 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "static-helm-values" (UniqueName: "kubernetes.io/configmap/509c492d-9bd4-425b-a433-ab60d61d2c0e-static-helm-values") pod "ambassador-operator-764fcb8c6b-dhxg9" (UID: "509c492d-9bd4-425b-a433-ab60d61d2c0e") * Jun 17 20:19:56 minikube kubelet[4390]: W0617 20:19:56.648947 4390 pod_container_deletor.go:77] Container "d2e6ba7e82932d7b8c6db94c5c423feedf08cb39236a2910a8e5287a29142aed" not found in pod's containers * Jun 17 20:19:57 minikube kubelet[4390]: W0617 20:19:57.303124 4390 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-9fc2v through plugin: invalid network status for * Jun 17 20:19:57 minikube kubelet[4390]: W0617 20:19:57.320878 4390 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-grvl8 through plugin: invalid network status for * Jun 17 20:19:57 minikube kubelet[4390]: W0617 20:19:57.488684 4390 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for ambassador/ambassador-operator-764fcb8c6b-dhxg9 through plugin: invalid network status for * Jun 17 20:19:57 minikube kubelet[4390]: W0617 20:19:57.665530 4390 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for ambassador/ambassador-operator-764fcb8c6b-dhxg9 through plugin: invalid network status for * Jun 17 20:19:57 minikube kubelet[4390]: W0617 20:19:57.668568 4390 docker_sandbox.go:400] failed to read 
pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-9fc2v through plugin: invalid network status for * Jun 17 20:19:57 minikube kubelet[4390]: W0617 20:19:57.673882 4390 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-grvl8 through plugin: invalid network status for * Jun 17 20:21:56 minikube kubelet[4390]: W0617 20:21:56.271187 4390 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for ambassador/ambassador-operator-764fcb8c6b-dhxg9 through plugin: invalid network status for * Jun 17 20:22:04 minikube kubelet[4390]: I0617 20:22:04.492932 4390 topology_manager.go:233] [topologymanager] Topology Admit Handler * Jun 17 20:22:04 minikube kubelet[4390]: I0617 20:22:04.500009 4390 topology_manager.go:233] [topologymanager] Topology Admit Handler * Jun 17 20:22:04 minikube kubelet[4390]: I0617 20:22:04.506744 4390 topology_manager.go:233] [topologymanager] Topology Admit Handler * Jun 17 20:22:04 minikube kubelet[4390]: I0617 20:22:04.663808 4390 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ambassador-token-xfd8n" (UniqueName: "kubernetes.io/secret/a3418d76-505c-4359-af35-b8333e5f9625-ambassador-token-xfd8n") pod "ambassador-86c6c47459-lj9hb" (UID: "a3418d76-505c-4359-af35-b8333e5f9625") * Jun 17 20:22:04 minikube kubelet[4390]: I0617 20:22:04.664057 4390 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ambassador-pod-info" (UniqueName: "kubernetes.io/downward-api/458d5af8-06ea-4250-bb1d-9b88af27d00a-ambassador-pod-info") pod "ambassador-86c6c47459-z5rhp" (UID: "458d5af8-06ea-4250-bb1d-9b88af27d00a") * Jun 17 20:22:04 minikube kubelet[4390]: I0617 20:22:04.664111 4390 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ambassador-token-xfd8n" (UniqueName: "kubernetes.io/secret/a94395a1-e13b-475d-872f-65053ee75095-ambassador-token-xfd8n") pod "ambassador-86c6c47459-wmph4" (UID: "a94395a1-e13b-475d-872f-65053ee75095") * Jun 17 20:22:04 minikube kubelet[4390]: I0617 20:22:04.664147 4390 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ambassador-pod-info" (UniqueName: "kubernetes.io/downward-api/a3418d76-505c-4359-af35-b8333e5f9625-ambassador-pod-info") pod "ambassador-86c6c47459-lj9hb" (UID: "a3418d76-505c-4359-af35-b8333e5f9625") * Jun 17 20:22:04 minikube kubelet[4390]: I0617 20:22:04.664183 4390 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ambassador-pod-info" (UniqueName: "kubernetes.io/downward-api/a94395a1-e13b-475d-872f-65053ee75095-ambassador-pod-info") pod "ambassador-86c6c47459-wmph4" (UID: "a94395a1-e13b-475d-872f-65053ee75095") * Jun 17 20:22:04 minikube kubelet[4390]: I0617 20:22:04.664285 4390 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ambassador-token-xfd8n" (UniqueName: "kubernetes.io/secret/458d5af8-06ea-4250-bb1d-9b88af27d00a-ambassador-token-xfd8n") pod "ambassador-86c6c47459-z5rhp" (UID: "458d5af8-06ea-4250-bb1d-9b88af27d00a") * Jun 17 20:22:05 minikube kubelet[4390]: W0617 20:22:05.130552 4390 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for ambassador/ambassador-86c6c47459-z5rhp through plugin: invalid network status for * Jun 17 20:22:05 minikube kubelet[4390]: W0617 20:22:05.280804 4390 docker_sandbox.go:400] failed to read 
pod IP from plugin/docker: Couldn't find network status for ambassador/ambassador-86c6c47459-lj9hb through plugin: invalid network status for * Jun 17 20:22:05 minikube kubelet[4390]: W0617 20:22:05.281082 4390 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for ambassador/ambassador-86c6c47459-wmph4 through plugin: invalid network status for * Jun 17 20:22:05 minikube kubelet[4390]: W0617 20:22:05.351220 4390 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for ambassador/ambassador-86c6c47459-wmph4 through plugin: invalid network status for * Jun 17 20:22:05 minikube kubelet[4390]: W0617 20:22:05.354977 4390 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for ambassador/ambassador-86c6c47459-z5rhp through plugin: invalid network status for * Jun 17 20:22:05 minikube kubelet[4390]: W0617 20:22:05.357347 4390 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for ambassador/ambassador-86c6c47459-lj9hb through plugin: invalid network status for * Jun 17 20:24:48 minikube kubelet[4390]: W0617 20:24:48.296283 4390 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for ambassador/ambassador-86c6c47459-z5rhp through plugin: invalid network status for * Jun 17 20:24:49 minikube kubelet[4390]: W0617 20:24:49.306879 4390 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for ambassador/ambassador-86c6c47459-lj9hb through plugin: invalid network status for * Jun 17 20:24:50 minikube kubelet[4390]: W0617 20:24:50.319462 4390 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for ambassador/ambassador-86c6c47459-wmph4 through plugin: invalid network status for * * ==> storage-provisioner [b15f30b16ab2] <==
dbabbitt commented 4 years ago

Yeah, weird. I rebooted after a security update was installed and got this instead:

Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.

PS C:\Windows\system32> minikube delete
* Deleting "minikube" in docker ...
* Deleting container "minikube" ...
* Removing C:\Users\577342\.minikube\machines\minikube ...
* Removed all traces of the "minikube" cluster.
PS C:\Windows\system32> cd C:\Users\577342\.minikube
PS C:\Users\577342\.minikube> ls

    Directory: C:\Users\577342\.minikube

Mode                LastWriteTime         Length Name
----                -------------         ------ ----
d-----        6/18/2020   4:24 PM                addons
d-----        6/18/2020   5:40 PM                cache
d-----        6/18/2020   4:26 PM                certs
d-----        6/19/2020  11:18 AM                config
d-----        6/18/2020   4:24 PM                files
d-----        6/18/2020   4:24 PM                logs
d-----        6/18/2020   4:51 PM                machines
d-----        6/18/2020   4:25 PM                profiles

PS C:\Users\577342\.minikube> ls machines
PS C:\Users\577342\.minikube> minikube start --driver=docker --preload=false
* minikube v1.11.0 on Microsoft Windows 10 Enterprise 10.0.16299 Build 16299
* Using the docker driver based on user configuration
* Starting control plane node minikube in cluster minikube
* Creating docker container (CPUs=2, Memory=1967MB) ...
* Preparing Kubernetes v1.18.3 on Docker 19.03.2 ...
  - kubeadm.pod-network-cidr=10.244.0.0/16
    > kubelet.sha256: 65 B / 65 B [--------------------------] 100.00% ? p/s 0s
    > kubectl.sha256: 65 B / 65 B [--------------------------] 100.00% ? p/s 0s
    > kubeadm.sha256: 65 B / 65 B [--------------------------] 100.00% ? p/s 0s
    > kubectl: 41.99 MiB / 41.99 MiB [----------------] 100.00% 5.55 MiB p/s 8s
    > kubeadm: 37.97 MiB / 37.97 MiB [---------------] 100.00% 4.04 MiB p/s 10s
    > kubelet: 108.04 MiB / 108.04 MiB [-------------] 100.00% 7.06 MiB p/s 16s
* Verifying Kubernetes components...
* Enabled addons: default-storageclass, storage-provisioner
* Done! kubectl is now configured to use "minikube"
PS C:\Users\577342\.minikube>
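
After a clean docker-driver start like this, a minimal way to re-test the original problem is to run the tunnel again and watch LoadBalancer services from a second terminal. This is only a sketch, assuming an elevated PowerShell prompt and that kubectl is on the PATH:

# Retry the tunnel with verbose logging, as in the original report.
minikube tunnel --alsologtostderr

# In a second terminal: while the tunnel is running, LoadBalancer services
# should show an EXTERNAL-IP instead of <pending> if routing is working.
kubectl get svc --all-namespaces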
tstromberg commented 4 years ago

@doublefx - Did the tunnel still work with this error message, or did it fail entirely?

doublefx commented 4 years ago

@tstromberg It failed entirely; no tunnel was opened.
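
The "error adding route: The object exists already" message points at a leftover route from a previous tunnel run. A minimal cleanup sketch, run from an elevated PowerShell prompt and assuming the route covers minikube's default service CIDR of 10.96.0.0/12 (adjust the prefix if the cluster was configured differently):

# Look for an existing route covering the assumed service CIDR.
Get-NetRoute -DestinationPrefix "10.96.0.0/12"

# If a stale entry shows up, remove it so `minikube tunnel` can re-add it cleanly.
Remove-NetRoute -DestinationPrefix "10.96.0.0/12" -Confirm:$false

# Then retry the tunnel.
minikube tunnel --alsologtostderr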

medyagh commented 4 years ago

@doublefx do you still have this issue with the Docker driver and later versions of minikube on Windows?

And do you have a firewall enabled?
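
A quick way to check both, sketched under the assumption that Windows Defender Firewall is the firewall in use:

# Show whether any firewall profile is enabled.
Get-NetFirewallProfile | Select-Object Name, Enabled

# Recreate the cluster with the docker driver and retry the tunnel.
minikube delete
minikube start --driver=docker
minikube tunnel --alsologtostderr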

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot commented 3 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

priyawadhwa commented 3 years ago

Hey @doublefx, are you still seeing this issue? If so, could you provide the following info:

Thank you!
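
For anyone hitting the same error, a minimal set of commands for gathering that kind of information (assuming the --file flag is available in the installed minikube version):

# Version and environment details.
minikube version

# Full cluster logs written to a file that can be attached to the issue.
minikube logs --file=logs.txt

# The failing command with verbose logging.
minikube tunnel --alsologtostderr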

spowelljr commented 3 years ago

Hi @doublefx, we haven't heard back from you; do you still have this issue? There isn't enough information in this issue to make it actionable, and enough time has passed that it is likely difficult to replicate now.

I will close this issue for now, but feel free to reopen it when you are ready to provide more information.