kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Enabling 'default-storageclass' returned an error: running callbacks (WSL2/Docker) #9303

Closed: davidedmonds closed this issue 3 years ago

davidedmonds commented 4 years ago

Steps to reproduce the issue:

  1. Using Docker Desktop and WSL2 on Windows 10, following the steps in https://kubernetes.io/blog/2020/05/21/wsl-docker-kubernetes-on-the-windows-desktop/ but swapping --driver=none for --driver=docker (as the none driver expects to be able to start Docker with systemd).
  2. In WSL2 Bash:

```
dave@DESKTOP-RO4RC0J:~$ minikube start --driver=docker --kubernetes-version=v1.15.12
😄  minikube v1.13.1 on Ubuntu 20.04
✨  Using the docker driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
💾  Downloading Kubernetes v1.15.12 preload ...
    preloaded-images-k8s-v6-v1.15.12-docker-overlay2-amd64.tar.lz4: 462.21 Mi
🔥  Creating docker container (CPUs=2, Memory=6300MB) ...
🐳  Preparing Kubernetes v1.15.12 on Docker 19.03.8 ...
🔎  Verifying Kubernetes components...
❗  Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://172.17.0.3:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 172.17.0.3:8443: i/o timeout]
🌟  Enabled addons: default-storageclass, storage-provisioner

❌  Exiting due to GUEST_START: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: timed out waiting for the condition

😿  If the above advice does not help, please let us know:
👉  https://github.com/kubernetes/minikube/issues/new/choose
```


(I have also tried the latest Kubernetes (1.19.x), with the same result.)
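The addon failure appears to be just a symptom of the API server at 172.17.0.3:8443 being unreachable, which can be checked by hand from WSL2. A minimal sketch, assuming the node IP and port from the log above and the kubeconfig minikube writes; adjust to whatever `minikube ip` reports on your machine:

```
# Confirm the node IP minikube is using (172.17.0.3 in the log above)
minikube ip

# Probe the healthz endpoint minikube polls; -k skips certificate
# verification and --max-time bounds the hang if packets are dropped
curl -k --max-time 5 https://172.17.0.3:8443/healthz

# The default-storageclass addon fails on this same API group
kubectl get storageclasses
```

If these also time out, that matches the `dial tcp 172.17.0.3:8443: i/o timeout` errors in the log.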

**Full output of failed command:** 
<details>
dave@DESKTOP-RO4RC0J:~$ minikube start --driver=docker --kubernetes-version=v1.15.12 --alsologtostderr
I0922 13:25:31.609372   10181 out.go:191] Setting JSON to false
I0922 13:25:31.610578   10181 start.go:102] hostinfo: {"hostname":"DESKTOP-RO4RC0J","uptime":8361,"bootTime":1600769170,"procs":77,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"4.19.128-microsoft-standard","virtualizationSystem":"","virtualizationRole":"","hostid":"044ea5fb-b2a7-482e-8c48-5e89480bdf0a"}
I0922 13:25:31.611017   10181 start.go:112] virtualization:
I0922 13:25:31.617876   10181 out.go:109] 😄  minikube v1.13.1 on Ubuntu 20.04
😄  minikube v1.13.1 on Ubuntu 20.04
I0922 13:25:31.618085   10181 notify.go:126] Checking for updates...
I0922 13:25:31.623968   10181 out.go:109] 🆕  Kubernetes 1.19.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.19.2
🆕  Kubernetes 1.19.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.19.2
I0922 13:25:31.624061   10181 driver.go:287] Setting default libvirt URI to qemu:///system
I0922 13:25:31.704411   10181 docker.go:98] docker version: linux-19.03.12
I0922 13:25:31.704468   10181 docker.go:130] overlay module found
I0922 13:25:31.712497   10181 out.go:109] ✨  Using the docker driver based on existing profile
✨  Using the docker driver based on existing profile
I0922 13:25:31.712559   10181 start.go:246] selected driver: docker
I0922 13:25:31.712563   10181 start.go:653] validating driver "docker" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b Memory:6300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.15.12 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.3 Port:8443 KubernetesVersion:v1.15.12 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s}
I0922 13:25:31.712702   10181 start.go:664] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Fix: Doc:}
I0922 13:25:31.712774   10181 cli_runner.go:110] Run: docker system info --format "{{json .}}"
I0922 13:25:31.855808   10181 cli_runner.go:110] Run: docker system info --format "{{json .}}"
I0922 13:25:31.987831   10181 start_flags.go:348] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b Memory:6300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.15.12 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.3 Port:8443 KubernetesVersion:v1.15.12 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s}
I0922 13:25:32.002486   10181 out.go:109] 👍  Starting control plane node minikube in cluster minikube
👍  Starting control plane node minikube in cluster minikube
I0922 13:25:32.101993   10181 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b in local docker daemon, skipping pull
I0922 13:25:32.102038   10181 cache.go:115] gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b exists in daemon, skipping pull
I0922 13:25:32.102047   10181 preload.go:97] Checking if preload exists for k8s version v1.15.12 and runtime docker
I0922 13:25:32.102073   10181 preload.go:105] Found local preload: /home/dave/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.15.12-docker-overlay2-amd64.tar.lz4
I0922 13:25:32.102105   10181 cache.go:53] Caching tarball of preloaded images
I0922 13:25:32.102139   10181 preload.go:131] Found /home/dave/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.15.12-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0922 13:25:32.102159   10181 cache.go:56] Finished verifying existence of preloaded tar for  v1.15.12 on docker
I0922 13:25:32.102233   10181 profile.go:150] Saving config to /home/dave/.minikube/profiles/minikube/config.json ...
I0922 13:25:32.102392   10181 cache.go:182] Successfully downloaded all kic artifacts
I0922 13:25:32.102438   10181 start.go:314] acquiring machines lock for minikube: {Name:mk8d3c101eb81ea9e5d64f1a6fecc1c563549c87 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0922 13:25:32.102672   10181 start.go:318] acquired machines lock for "minikube" in 196.303µs
I0922 13:25:32.102712   10181 start.go:94] Skipping create...Using existing machine configuration
I0922 13:25:32.102738   10181 fix.go:54] fixHost starting:
I0922 13:25:32.102901   10181 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I0922 13:25:32.204848   10181 fix.go:107] recreateIfNeeded on minikube: state=Running err=<nil>
W0922 13:25:32.204880   10181 fix.go:133] unexpected machine state, will restart: <nil>
I0922 13:25:32.216739   10181 out.go:109] 🏃  Updating the running docker "minikube" container ...
🏃  Updating the running docker "minikube" container ...
I0922 13:25:32.216807   10181 machine.go:88] provisioning docker machine ...
I0922 13:25:32.216836   10181 ubuntu.go:166] provisioning hostname "minikube"
I0922 13:25:32.216883   10181 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0922 13:25:32.323235   10181 main.go:115] libmachine: Using SSH client type: native
I0922 13:25:32.323385   10181 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7b9850] 0x7b9820 <nil>  [] 0s} 127.0.0.1 32779 <nil> <nil>}
I0922 13:25:32.323425   10181 main.go:115] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0922 13:25:32.461837   10181 main.go:115] libmachine: SSH cmd err, output: <nil>: minikube

I0922 13:25:32.461972   10181 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0922 13:25:32.585431   10181 main.go:115] libmachine: Using SSH client type: native
I0922 13:25:32.585639   10181 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7b9850] 0x7b9820 <nil>  [] 0s} 127.0.0.1 32779 <nil> <nil>}
I0922 13:25:32.585729   10181 main.go:115] libmachine: About to run SSH command:

                if ! grep -xq '.*\sminikube' /etc/hosts; then
                        if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
                        else
                                echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
                        fi
                fi
I0922 13:25:32.710274   10181 main.go:115] libmachine: SSH cmd err, output: <nil>:
I0922 13:25:32.710343   10181 ubuntu.go:172] set auth options {CertDir:/home/dave/.minikube CaCertPath:/home/dave/.minikube/certs/ca.pem CaPrivateKeyPath:/home/dave/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/dave/.minikube/machines/server.pem ServerKeyPath:/home/dave/.minikube/machines/server-key.pem ClientKeyPath:/home/dave/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/dave/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/dave/.minikube}
I0922 13:25:32.710418   10181 ubuntu.go:174] setting up certificates
I0922 13:25:32.710456   10181 provision.go:82] configureAuth start
I0922 13:25:32.710545   10181 cli_runner.go:110] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0922 13:25:32.830425   10181 provision.go:131] copyHostCerts
I0922 13:25:32.830512   10181 exec_runner.go:91] found /home/dave/.minikube/key.pem, removing ...
I0922 13:25:32.830651   10181 exec_runner.go:98] cp: /home/dave/.minikube/certs/key.pem --> /home/dave/.minikube/key.pem (1679 bytes)
I0922 13:25:32.830756   10181 exec_runner.go:91] found /home/dave/.minikube/ca.pem, removing ...
I0922 13:25:32.830849   10181 exec_runner.go:98] cp: /home/dave/.minikube/certs/ca.pem --> /home/dave/.minikube/ca.pem (1029 bytes)
I0922 13:25:32.830946   10181 exec_runner.go:91] found /home/dave/.minikube/cert.pem, removing ...
I0922 13:25:32.831054   10181 exec_runner.go:98] cp: /home/dave/.minikube/certs/cert.pem --> /home/dave/.minikube/cert.pem (1070 bytes)
I0922 13:25:32.831177   10181 provision.go:105] generating server cert: /home/dave/.minikube/machines/server.pem ca-key=/home/dave/.minikube/certs/ca.pem private-key=/home/dave/.minikube/certs/ca-key.pem org=dave.minikube san=[172.17.0.3 localhost 127.0.0.1 minikube minikube]
I0922 13:25:33.141890   10181 provision.go:159] copyRemoteCerts
I0922 13:25:33.141948   10181 ssh_runner.go:148] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0922 13:25:33.142014   10181 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0922 13:25:33.248743   10181 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/dave/.minikube/machines/minikube/id_rsa Username:docker}
I0922 13:25:33.345612   10181 ssh_runner.go:215] scp /home/dave/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0922 13:25:33.370133   10181 ssh_runner.go:215] scp /home/dave/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1029 bytes)
I0922 13:25:33.394713   10181 ssh_runner.go:215] scp /home/dave/.minikube/machines/server.pem --> /etc/docker/server.pem (1139 bytes)
I0922 13:25:33.418888   10181 provision.go:85] duration metric: configureAuth took 708.380382ms
I0922 13:25:33.418952   10181 ubuntu.go:190] setting minikube options for container-runtime
I0922 13:25:33.419146   10181 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0922 13:25:33.536421   10181 main.go:115] libmachine: Using SSH client type: native
I0922 13:25:33.536686   10181 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7b9850] 0x7b9820 <nil>  [] 0s} 127.0.0.1 32779 <nil> <nil>}
I0922 13:25:33.536748   10181 main.go:115] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0922 13:25:33.660617   10181 main.go:115] libmachine: SSH cmd err, output: <nil>: overlay

I0922 13:25:33.660674   10181 ubuntu.go:71] root file system type: overlay
I0922 13:25:33.661056   10181 provision.go:290] Updating docker unit: /lib/systemd/system/docker.service ...
I0922 13:25:33.661151   10181 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0922 13:25:33.778042   10181 main.go:115] libmachine: Using SSH client type: native
I0922 13:25:33.778255   10181 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7b9850] 0x7b9820 <nil>  [] 0s} 127.0.0.1 32779 <nil> <nil>}
I0922 13:25:33.778442   10181 main.go:115] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0922 13:25:33.922452   10181 main.go:115] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0922 13:25:33.922581   10181 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0922 13:25:34.052508   10181 main.go:115] libmachine: Using SSH client type: native
I0922 13:25:34.052689   10181 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7b9850] 0x7b9820 <nil>  [] 0s} 127.0.0.1 32779 <nil> <nil>}
I0922 13:25:34.052762   10181 main.go:115] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0922 13:25:34.185383   10181 main.go:115] libmachine: SSH cmd err, output: <nil>:
I0922 13:25:34.185430   10181 machine.go:91] provisioned docker machine in 1.968601339s
I0922 13:25:34.185442   10181 start.go:268] post-start starting for "minikube" (driver="docker")
I0922 13:25:34.185449   10181 start.go:278] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0922 13:25:34.185491   10181 ssh_runner.go:148] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0922 13:25:34.185634   10181 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0922 13:25:34.307572   10181 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/dave/.minikube/machines/minikube/id_rsa Username:docker}
I0922 13:25:34.404653   10181 ssh_runner.go:148] Run: cat /etc/os-release
I0922 13:25:34.408824   10181 main.go:115] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0922 13:25:34.408895   10181 main.go:115] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0922 13:25:34.408936   10181 main.go:115] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0922 13:25:34.408997   10181 info.go:100] Remote host: Ubuntu 20.04 LTS
I0922 13:25:34.409055   10181 filesync.go:118] Scanning /home/dave/.minikube/addons for local assets ...
I0922 13:25:34.409173   10181 filesync.go:118] Scanning /home/dave/.minikube/files for local assets ...
I0922 13:25:34.409248   10181 start.go:271] post-start completed in 223.797354ms
I0922 13:25:34.409328   10181 ssh_runner.go:148] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0922 13:25:34.409423   10181 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0922 13:25:34.513740   10181 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/dave/.minikube/machines/minikube/id_rsa Username:docker}
I0922 13:25:34.610963   10181 fix.go:56] fixHost completed within 2.508221743s
I0922 13:25:34.611031   10181 start.go:81] releasing machines lock for "minikube", held for 2.508321344s
I0922 13:25:34.611129   10181 cli_runner.go:110] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0922 13:25:34.717618   10181 ssh_runner.go:148] Run: systemctl --version
I0922 13:25:34.717657   10181 ssh_runner.go:148] Run: curl -sS -m 2 https://k8s.gcr.io/
I0922 13:25:34.717802   10181 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0922 13:25:34.717676   10181 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0922 13:25:34.838638   10181 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/dave/.minikube/machines/minikube/id_rsa Username:docker}
I0922 13:25:34.839227   10181 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/dave/.minikube/machines/minikube/id_rsa Username:docker}
I0922 13:25:35.068188   10181 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service containerd
I0922 13:25:35.083943   10181 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I0922 13:25:35.097377   10181 cruntime.go:193] skipping containerd shutdown because we are bound to it
I0922 13:25:35.097488   10181 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service crio
I0922 13:25:35.109442   10181 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I0922 13:25:35.119530   10181 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I0922 13:25:35.179231   10181 ssh_runner.go:148] Run: sudo systemctl start docker
I0922 13:25:35.188518   10181 ssh_runner.go:148] Run: docker version --format {{.Server.Version}}
I0922 13:25:35.238209   10181 out.go:109] 🐳  Preparing Kubernetes v1.15.12 on Docker 19.03.8 ...
🐳  Preparing Kubernetes v1.15.12 on Docker 19.03.8 ...
I0922 13:25:35.238275   10181 cli_runner.go:110] Run: docker network ls --filter name=bridge --format {{.ID}}
I0922 13:25:35.337113   10181 cli_runner.go:110] Run: docker network inspect 401968dfedea -f "{{range $k, $v := .Containers}}{{$v.Name}} {{end}}"
I0922 13:25:35.438637   10181 cli_runner.go:110] Run: docker network inspect --format "{{(index .IPAM.Config 0).Gateway}}" 401968dfedea
I0922 13:25:35.542507   10181 network.go:106] got host ip for mount in container by inspect docker network: 172.17.0.1
I0922 13:25:35.542611   10181 ssh_runner.go:148] Run: grep 172.17.0.1   host.minikube.internal$ /etc/hosts
I0922 13:25:35.546716   10181 preload.go:97] Checking if preload exists for k8s version v1.15.12 and runtime docker
I0922 13:25:35.546760   10181 preload.go:105] Found local preload: /home/dave/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.15.12-docker-overlay2-amd64.tar.lz4
I0922 13:25:35.546838   10181 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0922 13:25:35.576359   10181 docker.go:381] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/storage-provisioner:v3
kubernetesui/dashboard:v2.0.3
k8s.gcr.io/kube-controller-manager:v1.15.12
k8s.gcr.io/kube-apiserver:v1.15.12
k8s.gcr.io/kube-proxy:v1.15.12
k8s.gcr.io/kube-scheduler:v1.15.12
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/coredns:1.3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/pause:3.1

-- /stdout --
I0922 13:25:35.576404   10181 docker.go:319] Images already preloaded, skipping extraction
I0922 13:25:35.576457   10181 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0922 13:25:35.605783   10181 docker.go:381] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/storage-provisioner:v3
kubernetesui/dashboard:v2.0.3
k8s.gcr.io/kube-controller-manager:v1.15.12
k8s.gcr.io/kube-apiserver:v1.15.12
k8s.gcr.io/kube-proxy:v1.15.12
k8s.gcr.io/kube-scheduler:v1.15.12
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/coredns:1.3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/pause:3.1

-- /stdout --
I0922 13:25:35.605826   10181 cache_images.go:74] Images are preloaded, skipping loading
I0922 13:25:35.605889   10181 ssh_runner.go:148] Run: docker info --format {{.CgroupDriver}}
I0922 13:25:35.643960   10181 cni.go:74] Creating CNI manager for ""
I0922 13:25:35.644040   10181 cni.go:117] CNI unnecessary in this configuration, recommending no CNI
I0922 13:25:35.644046   10181 kubeadm.go:84] Using pod CIDR:
I0922 13:25:35.644098   10181 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet: AdvertiseAddress:172.17.0.3 APIServerPort:8443 KubernetesVersion:v1.15.12 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.3"]]} {Component:controllerManager ExtraArgs:map[leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0922 13:25:35.644262   10181 kubeadm.go:154] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.3
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 172.17.0.3
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "172.17.0.3"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: minikube
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      listen-metrics-urls: http://127.0.0.1:2381,http://172.17.0.3:2381
kubernetesVersion: v1.15.12
networking:
  dnsDomain: cluster.local
  podSubnet: ""
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: ""
metricsBindAddress: 172.17.0.3:10249

I0922 13:25:35.644400   10181 kubeadm.go:805] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.15.12/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --client-ca-file=/var/lib/minikube/certs/ca.crt --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.3

[Install]
 config:
{KubernetesVersion:v1.15.12 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0922 13:25:35.644521   10181 ssh_runner.go:148] Run: sudo ls /var/lib/minikube/binaries/v1.15.12
I0922 13:25:35.651376   10181 binaries.go:43] Found k8s binaries, skipping transfer
I0922 13:25:35.651437   10181 ssh_runner.go:148] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0922 13:25:35.657862   10181 ssh_runner.go:215] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
I0922 13:25:35.668319   10181 ssh_runner.go:215] scp memory --> /lib/systemd/system/kubelet.service (350 bytes)
I0922 13:25:35.679647   10181 ssh_runner.go:215] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1820 bytes)
I0922 13:25:35.690832   10181 ssh_runner.go:148] Run: grep 172.17.0.3   control-plane.minikube.internal$ /etc/hosts
I0922 13:25:35.693533   10181 certs.go:52] Setting up /home/dave/.minikube/profiles/minikube for IP: 172.17.0.3
I0922 13:25:35.693574   10181 certs.go:169] skipping minikubeCA CA generation: /home/dave/.minikube/ca.key
I0922 13:25:35.693583   10181 certs.go:169] skipping proxyClientCA CA generation: /home/dave/.minikube/proxy-client-ca.key
I0922 13:25:35.693610   10181 certs.go:269] skipping minikube-user signed cert generation: /home/dave/.minikube/profiles/minikube/client.key
I0922 13:25:35.693659   10181 certs.go:269] skipping minikube signed cert generation: /home/dave/.minikube/profiles/minikube/apiserver.key.0f3e66d0
I0922 13:25:35.693699   10181 certs.go:269] skipping aggregator signed cert generation: /home/dave/.minikube/profiles/minikube/proxy-client.key
I0922 13:25:35.693779   10181 certs.go:348] found cert: /home/dave/.minikube/certs/home/dave/.minikube/certs/ca-key.pem (1675 bytes)
I0922 13:25:35.693841   10181 certs.go:348] found cert: /home/dave/.minikube/certs/home/dave/.minikube/certs/ca.pem (1029 bytes)
I0922 13:25:35.693883   10181 certs.go:348] found cert: /home/dave/.minikube/certs/home/dave/.minikube/certs/cert.pem (1070 bytes)
I0922 13:25:35.693932   10181 certs.go:348] found cert: /home/dave/.minikube/certs/home/dave/.minikube/certs/key.pem (1679 bytes)
I0922 13:25:35.694651   10181 ssh_runner.go:215] scp /home/dave/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1350 bytes)
I0922 13:25:35.709937   10181 ssh_runner.go:215] scp /home/dave/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0922 13:25:35.724351   10181 ssh_runner.go:215] scp /home/dave/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1103 bytes)
I0922 13:25:35.739348   10181 ssh_runner.go:215] scp /home/dave/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0922 13:25:35.754272   10181 ssh_runner.go:215] scp /home/dave/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1066 bytes)
I0922 13:25:35.769493   10181 ssh_runner.go:215] scp /home/dave/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0922 13:25:35.784366   10181 ssh_runner.go:215] scp /home/dave/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1074 bytes)
I0922 13:25:35.799864   10181 ssh_runner.go:215] scp /home/dave/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0922 13:25:35.815434   10181 ssh_runner.go:215] scp /home/dave/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1066 bytes)
I0922 13:25:35.830087   10181 ssh_runner.go:215] scp memory --> /var/lib/minikube/kubeconfig (392 bytes)
I0922 13:25:35.841236   10181 ssh_runner.go:148] Run: openssl version
I0922 13:25:35.845096   10181 ssh_runner.go:148] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0922 13:25:35.851368   10181 ssh_runner.go:148] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0922 13:25:35.854177   10181 certs.go:389] hashing: -rw-r--r-- 1 root root 1066 Sep 22 11:17 /usr/share/ca-certificates/minikubeCA.pem
I0922 13:25:35.854232   10181 ssh_runner.go:148] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0922 13:25:35.858406   10181 ssh_runner.go:148] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0922 13:25:35.864324   10181 kubeadm.go:324] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b Memory:6300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.15.12 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.3 Port:8443 KubernetesVersion:v1.15.12 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s}
I0922 13:25:35.864405   10181 ssh_runner.go:148] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0922 13:25:35.895644   10181 ssh_runner.go:148] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0922 13:25:35.902100   10181 kubeadm.go:335] found existing configuration files, will attempt cluster restart
I0922 13:25:35.902176   10181 kubeadm.go:509] restartCluster start
I0922 13:25:35.902218   10181 ssh_runner.go:148] Run: sudo test -d /data/minikube
I0922 13:25:35.909565   10181 kubeadm.go:122] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:

stderr:
I0922 13:25:35.910259   10181 kubeconfig.go:93] found "minikube" server: "https://172.17.0.3:8443"
I0922 13:25:35.912929   10181 ssh_runner.go:148] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0922 13:25:35.920116   10181 api_server.go:146] Checking apiserver status ...
I0922 13:25:35.920163   10181 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0922 13:25:35.933344   10181 ssh_runner.go:148] Run: sudo egrep ^[0-9]+:freezer: /proc/1785/cgroup
I0922 13:25:35.940665   10181 api_server.go:162] apiserver freezer: "7:freezer:/docker/652b6a72b2386fe4141a4966a308d1691559ca53bb853ec61372de1807542a29/kubepods/burstable/pod8dc8c1409e93c265948b71b67c2462d3/65eee2193e93d6f00b66fa26a097f729a2f3296bce686e5eb1306d009cf8b93c"
I0922 13:25:35.940754   10181 ssh_runner.go:148] Run: sudo cat /sys/fs/cgroup/freezer/docker/652b6a72b2386fe4141a4966a308d1691559ca53bb853ec61372de1807542a29/kubepods/burstable/pod8dc8c1409e93c265948b71b67c2462d3/65eee2193e93d6f00b66fa26a097f729a2f3296bce686e5eb1306d009cf8b93c/freezer.state
I0922 13:25:35.948632   10181 api_server.go:184] freezer state: "THAWED"
I0922 13:25:35.948683   10181 api_server.go:221] Checking apiserver healthz at https://172.17.0.3:8443/healthz ...
I0922 13:26:07.865780   10181 api_server.go:231] stopped: https://172.17.0.3:8443/healthz: Get "https://172.17.0.3:8443/healthz": dial tcp 172.17.0.3:8443: connect: connection timed out
I0922 13:26:07.866002   10181 kubeadm.go:488] needs reconfigure: apiserver in state Stopped
I0922 13:26:07.866051   10181 kubeadm.go:928] stopping kube-system containers ...
I0922 13:26:07.866178   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0922 13:26:07.921121   10181 docker.go:229] Stopping containers: [b281bf20b6c7 504b0a2a29d3 bbd98161a79a be203fbdfff0 0223d4f5b216 99c22bacd6b7 29447de2825e 63c9fc70f62c 65eee2193e93 ca52af5c2503 1fbecb10cfaa 7d3cfbaf7c7a b71ba8caf4c8 7077f8795161 16a6450332b2]
I0922 13:26:07.921192   10181 ssh_runner.go:148] Run: docker stop b281bf20b6c7 504b0a2a29d3 bbd98161a79a be203fbdfff0 0223d4f5b216 99c22bacd6b7 29447de2825e 63c9fc70f62c 65eee2193e93 ca52af5c2503 1fbecb10cfaa 7d3cfbaf7c7a b71ba8caf4c8 7077f8795161 16a6450332b2
I0922 13:26:09.145032   10181 ssh_runner.go:188] Completed: docker stop b281bf20b6c7 504b0a2a29d3 bbd98161a79a be203fbdfff0 0223d4f5b216 99c22bacd6b7 29447de2825e 63c9fc70f62c 65eee2193e93 ca52af5c2503 1fbecb10cfaa 7d3cfbaf7c7a b71ba8caf4c8 7077f8795161 16a6450332b2: (1.223806816s)
I0922 13:26:09.145095   10181 ssh_runner.go:148] Run: sudo systemctl stop kubelet
I0922 13:26:09.156575   10181 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0922 13:26:09.163096   10181 kubeadm.go:150] found existing configuration files:
-rw------- 1 root root 5515 Sep 22 12:15 /etc/kubernetes/admin.conf
-rw------- 1 root root 5551 Sep 22 12:15 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 5539 Sep 22 12:15 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5499 Sep 22 12:15 /etc/kubernetes/scheduler.conf

I0922 13:26:09.163153   10181 ssh_runner.go:148] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0922 13:26:09.168709   10181 ssh_runner.go:148] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0922 13:26:09.174786   10181 ssh_runner.go:148] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0922 13:26:09.180936   10181 ssh_runner.go:148] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0922 13:26:09.187263   10181 ssh_runner.go:148] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0922 13:26:09.192997   10181 kubeadm.go:585] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0922 13:26:09.193034   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.15.12:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0922 13:26:09.219481   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.15.12:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0922 13:26:09.862217   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.15.12:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0922 13:26:09.980410   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.15.12:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0922 13:26:10.020967   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.15.12:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0922 13:26:10.075012   10181 api_server.go:48] waiting for apiserver process to appear ...
I0922 13:26:10.075075   10181 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0922 13:26:10.585153   10181 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0922 13:26:11.085204   10181 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0922 13:26:11.585176   10181 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0922 13:26:12.085236   10181 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0922 13:26:12.165668   10181 api_server.go:68] duration metric: took 2.090655007s to wait for apiserver process to appear ...
I0922 13:26:12.165701   10181 api_server.go:84] waiting for apiserver healthz status ...
I0922 13:26:12.165712   10181 api_server.go:221] Checking apiserver healthz at https://172.17.0.3:8443/healthz ...
I0922 13:26:44.345985   10181 api_server.go:231] stopped: https://172.17.0.3:8443/healthz: Get "https://172.17.0.3:8443/healthz": dial tcp 172.17.0.3:8443: connect: connection timed out
I0922 13:26:44.846718   10181 api_server.go:221] Checking apiserver healthz at https://172.17.0.3:8443/healthz ...
I0922 13:27:16.985751   10181 api_server.go:231] stopped: https://172.17.0.3:8443/healthz: Get "https://172.17.0.3:8443/healthz": dial tcp 172.17.0.3:8443: connect: connection timed out
I0922 13:27:17.346474   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0922 13:27:17.382776   10181 logs.go:206] 2 containers: [dbe918ad4b4e 65eee2193e93]
I0922 13:27:17.382849   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0922 13:27:17.413277   10181 logs.go:206] 2 containers: [fea40e8cca41 1fbecb10cfaa]
I0922 13:27:17.413385   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0922 13:27:17.442778   10181 logs.go:206] 2 containers: [a9b5ec6fc4e9 bbd98161a79a]
I0922 13:27:17.442912   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0922 13:27:17.474569   10181 logs.go:206] 2 containers: [229bf3df246b 63c9fc70f62c]
I0922 13:27:17.474679   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0922 13:27:17.505683   10181 logs.go:206] 2 containers: [46eb6b0292df 504b0a2a29d3]
I0922 13:27:17.505753   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0922 13:27:17.534375   10181 logs.go:206] 0 containers: []
W0922 13:27:17.534414   10181 logs.go:208] No container was found matching "kubernetes-dashboard"
I0922 13:27:17.534445   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0922 13:27:17.564808   10181 logs.go:206] 2 containers: [fc578cfa995d b281bf20b6c7]
I0922 13:27:17.564877   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0922 13:27:17.594917   10181 logs.go:206] 1 containers: [978dbe7dd5ed]
I0922 13:27:17.594999   10181 logs.go:120] Gathering logs for dmesg ...
I0922 13:27:17.595048   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0922 13:27:17.608769   10181 logs.go:120] Gathering logs for kube-scheduler [229bf3df246b] ...
I0922 13:27:17.608807   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 229bf3df246b"
I0922 13:27:17.641817   10181 logs.go:120] Gathering logs for kube-scheduler [63c9fc70f62c] ...
I0922 13:27:17.641859   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 63c9fc70f62c"
I0922 13:27:17.677272   10181 logs.go:120] Gathering logs for storage-provisioner [b281bf20b6c7] ...
I0922 13:27:17.677314   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 b281bf20b6c7"
I0922 13:27:17.709212   10181 logs.go:120] Gathering logs for Docker ...
I0922 13:27:17.709253   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0922 13:27:17.724432   10181 logs.go:120] Gathering logs for container status ...
I0922 13:27:17.724471   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0922 13:27:17.740375   10181 logs.go:120] Gathering logs for kube-apiserver [65eee2193e93] ...
I0922 13:27:17.740447   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 65eee2193e93"
I0922 13:27:17.792949   10181 logs.go:120] Gathering logs for etcd [fea40e8cca41] ...
I0922 13:27:17.793056   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 fea40e8cca41"
I0922 13:27:17.826716   10181 logs.go:120] Gathering logs for etcd [1fbecb10cfaa] ...
I0922 13:27:17.826774   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 1fbecb10cfaa"
I0922 13:27:17.862425   10181 logs.go:120] Gathering logs for coredns [bbd98161a79a] ...
I0922 13:27:17.862466   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 bbd98161a79a"
I0922 13:27:17.893760   10181 logs.go:120] Gathering logs for kubelet ...
I0922 13:27:17.893830   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0922 13:27:17.950790   10181 logs.go:120] Gathering logs for kube-apiserver [dbe918ad4b4e] ...
I0922 13:27:17.950839   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 dbe918ad4b4e"
I0922 13:27:18.010571   10181 logs.go:120] Gathering logs for coredns [a9b5ec6fc4e9] ...
I0922 13:27:18.010669   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 a9b5ec6fc4e9"
I0922 13:27:18.059758   10181 logs.go:120] Gathering logs for kube-controller-manager [978dbe7dd5ed] ...
I0922 13:27:18.059820   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 978dbe7dd5ed"
I0922 13:27:18.098837   10181 logs.go:120] Gathering logs for describe nodes ...
I0922 13:27:18.098879   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.15.12/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0922 13:27:18.617200   10181 logs.go:120] Gathering logs for kube-proxy [46eb6b0292df] ...
I0922 13:27:18.617253   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 46eb6b0292df"
I0922 13:27:18.652135   10181 logs.go:120] Gathering logs for kube-proxy [504b0a2a29d3] ...
I0922 13:27:18.652175   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 504b0a2a29d3"
I0922 13:27:18.684162   10181 logs.go:120] Gathering logs for storage-provisioner [fc578cfa995d] ...
I0922 13:27:18.684203   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 fc578cfa995d"
I0922 13:27:21.215031   10181 api_server.go:221] Checking apiserver healthz at https://172.17.0.3:8443/healthz ...
I0922 13:27:53.475752   10181 api_server.go:231] stopped: https://172.17.0.3:8443/healthz: Get "https://172.17.0.3:8443/healthz": dial tcp 172.17.0.3:8443: connect: connection timed out
I0922 13:27:53.846410   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0922 13:27:53.880047   10181 logs.go:206] 2 containers: [dbe918ad4b4e 65eee2193e93]
I0922 13:27:53.880149   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0922 13:27:53.910182   10181 logs.go:206] 2 containers: [fea40e8cca41 1fbecb10cfaa]
I0922 13:27:53.910247   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0922 13:27:53.942093   10181 logs.go:206] 2 containers: [a9b5ec6fc4e9 bbd98161a79a]
I0922 13:27:53.942164   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0922 13:27:53.973205   10181 logs.go:206] 2 containers: [229bf3df246b 63c9fc70f62c]
I0922 13:27:53.973300   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0922 13:27:54.004053   10181 logs.go:206] 2 containers: [46eb6b0292df 504b0a2a29d3]
I0922 13:27:54.004151   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0922 13:27:54.031660   10181 logs.go:206] 0 containers: []
W0922 13:27:54.031721   10181 logs.go:208] No container was found matching "kubernetes-dashboard"
I0922 13:27:54.031773   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0922 13:27:54.060884   10181 logs.go:206] 2 containers: [fc578cfa995d b281bf20b6c7]
I0922 13:27:54.060985   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0922 13:27:54.090869   10181 logs.go:206] 1 containers: [978dbe7dd5ed]
I0922 13:27:54.090936   10181 logs.go:120] Gathering logs for describe nodes ...
I0922 13:27:54.090968   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.15.12/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0922 13:27:54.170441   10181 logs.go:120] Gathering logs for etcd [fea40e8cca41] ...
I0922 13:27:54.170483   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 fea40e8cca41"
I0922 13:27:54.202135   10181 logs.go:120] Gathering logs for kube-scheduler [63c9fc70f62c] ...
I0922 13:27:54.202175   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 63c9fc70f62c"
I0922 13:27:54.235845   10181 logs.go:120] Gathering logs for kube-proxy [46eb6b0292df] ...
I0922 13:27:54.235889   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 46eb6b0292df"
I0922 13:27:54.266815   10181 logs.go:120] Gathering logs for dmesg ...
I0922 13:27:54.266854   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0922 13:27:54.280350   10181 logs.go:120] Gathering logs for kube-apiserver [dbe918ad4b4e] ...
I0922 13:27:54.280397   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 dbe918ad4b4e"
I0922 13:27:54.329030   10181 logs.go:120] Gathering logs for kube-apiserver [65eee2193e93] ...
I0922 13:27:54.329070   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 65eee2193e93"
I0922 13:27:54.385125   10181 logs.go:120] Gathering logs for coredns [a9b5ec6fc4e9] ...
I0922 13:27:54.385233   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 a9b5ec6fc4e9"
I0922 13:27:54.432625   10181 logs.go:120] Gathering logs for Docker ...
I0922 13:27:54.432673   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0922 13:27:54.447382   10181 logs.go:120] Gathering logs for kubelet ...
I0922 13:27:54.447424   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0922 13:27:54.496614   10181 logs.go:120] Gathering logs for coredns [bbd98161a79a] ...
I0922 13:27:54.496661   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 bbd98161a79a"
I0922 13:27:54.531749   10181 logs.go:120] Gathering logs for kube-proxy [504b0a2a29d3] ...
I0922 13:27:54.531791   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 504b0a2a29d3"
I0922 13:27:54.563893   10181 logs.go:120] Gathering logs for storage-provisioner [fc578cfa995d] ...
I0922 13:27:54.563934   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 fc578cfa995d"
I0922 13:27:54.595393   10181 logs.go:120] Gathering logs for kube-controller-manager [978dbe7dd5ed] ...
I0922 13:27:54.595437   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 978dbe7dd5ed"
I0922 13:27:54.645646   10181 logs.go:120] Gathering logs for container status ...
I0922 13:27:54.645719   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0922 13:27:54.663722   10181 logs.go:120] Gathering logs for etcd [1fbecb10cfaa] ...
I0922 13:27:54.663765   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 1fbecb10cfaa"
I0922 13:27:54.696499   10181 logs.go:120] Gathering logs for storage-provisioner [b281bf20b6c7] ...
I0922 13:27:54.696541   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 b281bf20b6c7"
I0922 13:27:54.726963   10181 logs.go:120] Gathering logs for kube-scheduler [229bf3df246b] ...
I0922 13:27:54.727007   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 229bf3df246b"
I0922 13:27:57.259904   10181 api_server.go:221] Checking apiserver healthz at https://172.17.0.3:8443/healthz ...
I0922 13:28:29.305743   10181 api_server.go:231] stopped: https://172.17.0.3:8443/healthz: Get "https://172.17.0.3:8443/healthz": dial tcp 172.17.0.3:8443: connect: connection timed out
I0922 13:28:29.346574   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0922 13:28:29.378713   10181 logs.go:206] 2 containers: [dbe918ad4b4e 65eee2193e93]
I0922 13:28:29.378837   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0922 13:28:29.412341   10181 logs.go:206] 2 containers: [fea40e8cca41 1fbecb10cfaa]
I0922 13:28:29.412474   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0922 13:28:29.445872   10181 logs.go:206] 2 containers: [a9b5ec6fc4e9 bbd98161a79a]
I0922 13:28:29.446039   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0922 13:28:29.477120   10181 logs.go:206] 2 containers: [229bf3df246b 63c9fc70f62c]
I0922 13:28:29.477192   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0922 13:28:29.508971   10181 logs.go:206] 2 containers: [46eb6b0292df 504b0a2a29d3]
I0922 13:28:29.509112   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0922 13:28:29.535912   10181 logs.go:206] 0 containers: []
W0922 13:28:29.535984   10181 logs.go:208] No container was found matching "kubernetes-dashboard"
I0922 13:28:29.536057   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0922 13:28:29.566847   10181 logs.go:206] 2 containers: [fc578cfa995d b281bf20b6c7]
I0922 13:28:29.566914   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0922 13:28:29.597647   10181 logs.go:206] 1 containers: [978dbe7dd5ed]
I0922 13:28:29.597690   10181 logs.go:120] Gathering logs for dmesg ...
I0922 13:28:29.597714   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0922 13:28:29.610951   10181 logs.go:120] Gathering logs for Docker ...
I0922 13:28:29.610989   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0922 13:28:29.625601   10181 logs.go:120] Gathering logs for container status ...
I0922 13:28:29.625659   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0922 13:28:29.641127   10181 logs.go:120] Gathering logs for etcd [1fbecb10cfaa] ...
I0922 13:28:29.641166   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 1fbecb10cfaa"
I0922 13:28:29.676321   10181 logs.go:120] Gathering logs for coredns [a9b5ec6fc4e9] ...
I0922 13:28:29.676363   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 a9b5ec6fc4e9"
I0922 13:28:29.707827   10181 logs.go:120] Gathering logs for kube-scheduler [229bf3df246b] ...
I0922 13:28:29.707911   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 229bf3df246b"
I0922 13:28:29.742736   10181 logs.go:120] Gathering logs for kube-proxy [46eb6b0292df] ...
I0922 13:28:29.742795   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 46eb6b0292df"
I0922 13:28:29.810916   10181 logs.go:120] Gathering logs for storage-provisioner [fc578cfa995d] ...
I0922 13:28:29.811021   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 fc578cfa995d"
I0922 13:28:29.859553   10181 logs.go:120] Gathering logs for describe nodes ...
I0922 13:28:29.859599   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.15.12/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0922 13:28:29.970331   10181 logs.go:120] Gathering logs for kube-apiserver [dbe918ad4b4e] ...
I0922 13:28:29.970435   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 dbe918ad4b4e"
I0922 13:28:30.032185   10181 logs.go:120] Gathering logs for etcd [fea40e8cca41] ...
I0922 13:28:30.032226   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 fea40e8cca41"
I0922 13:28:30.063551   10181 logs.go:120] Gathering logs for storage-provisioner [b281bf20b6c7] ...
I0922 13:28:30.063590   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 b281bf20b6c7"
I0922 13:28:30.094680   10181 logs.go:120] Gathering logs for kube-proxy [504b0a2a29d3] ...
I0922 13:28:30.094718   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 504b0a2a29d3"
I0922 13:28:30.127979   10181 logs.go:120] Gathering logs for kube-controller-manager [978dbe7dd5ed] ...
I0922 13:28:30.128036   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 978dbe7dd5ed"
I0922 13:28:30.169666   10181 logs.go:120] Gathering logs for kubelet ...
I0922 13:28:30.169708   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0922 13:28:30.224166   10181 logs.go:120] Gathering logs for kube-apiserver [65eee2193e93] ...
I0922 13:28:30.224211   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 65eee2193e93"
I0922 13:28:30.273506   10181 logs.go:120] Gathering logs for kube-scheduler [63c9fc70f62c] ...
I0922 13:28:30.273550   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 63c9fc70f62c"
I0922 13:28:30.308483   10181 logs.go:120] Gathering logs for coredns [bbd98161a79a] ...
I0922 13:28:30.308526   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 bbd98161a79a"
I0922 13:28:32.839098   10181 api_server.go:221] Checking apiserver healthz at https://172.17.0.3:8443/healthz ...
I0922 13:29:05.145680   10181 api_server.go:231] stopped: https://172.17.0.3:8443/healthz: Get "https://172.17.0.3:8443/healthz": dial tcp 172.17.0.3:8443: connect: connection timed out
I0922 13:29:05.346563   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0922 13:29:05.382990   10181 logs.go:206] 2 containers: [dbe918ad4b4e 65eee2193e93]
I0922 13:29:05.383077   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0922 13:29:05.414773   10181 logs.go:206] 2 containers: [fea40e8cca41 1fbecb10cfaa]
I0922 13:29:05.414842   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0922 13:29:05.444261   10181 logs.go:206] 2 containers: [a9b5ec6fc4e9 bbd98161a79a]
I0922 13:29:05.444381   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0922 13:29:05.474807   10181 logs.go:206] 2 containers: [229bf3df246b 63c9fc70f62c]
I0922 13:29:05.474897   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0922 13:29:05.506323   10181 logs.go:206] 2 containers: [46eb6b0292df 504b0a2a29d3]
I0922 13:29:05.506401   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0922 13:29:05.535374   10181 logs.go:206] 0 containers: []
W0922 13:29:05.535413   10181 logs.go:208] No container was found matching "kubernetes-dashboard"
I0922 13:29:05.535477   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0922 13:29:05.566241   10181 logs.go:206] 2 containers: [fc578cfa995d b281bf20b6c7]
I0922 13:29:05.566332   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0922 13:29:05.594897   10181 logs.go:206] 1 containers: [978dbe7dd5ed]
I0922 13:29:05.594944   10181 logs.go:120] Gathering logs for kubelet ...
I0922 13:29:05.594953   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0922 13:29:05.666665   10181 logs.go:120] Gathering logs for etcd [1fbecb10cfaa] ...
I0922 13:29:05.666709   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 1fbecb10cfaa"
I0922 13:29:05.704662   10181 logs.go:120] Gathering logs for kube-proxy [504b0a2a29d3] ...
I0922 13:29:05.704703   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 504b0a2a29d3"
I0922 13:29:05.755464   10181 logs.go:120] Gathering logs for Docker ...
I0922 13:29:05.755526   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0922 13:29:05.772001   10181 logs.go:120] Gathering logs for kube-scheduler [63c9fc70f62c] ...
I0922 13:29:05.772054   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 63c9fc70f62c"
I0922 13:29:05.808843   10181 logs.go:120] Gathering logs for kube-proxy [46eb6b0292df] ...
I0922 13:29:05.808910   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 46eb6b0292df"
I0922 13:29:05.845839   10181 logs.go:120] Gathering logs for kube-apiserver [65eee2193e93] ...
I0922 13:29:05.845881   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 65eee2193e93"
I0922 13:29:05.898725   10181 logs.go:120] Gathering logs for etcd [fea40e8cca41] ...
I0922 13:29:05.898773   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 fea40e8cca41"
I0922 13:29:05.934321   10181 logs.go:120] Gathering logs for coredns [a9b5ec6fc4e9] ...
I0922 13:29:05.934417   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 a9b5ec6fc4e9"
I0922 13:29:05.982603   10181 logs.go:120] Gathering logs for kube-scheduler [229bf3df246b] ...
I0922 13:29:05.982684   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 229bf3df246b"
I0922 13:29:06.016326   10181 logs.go:120] Gathering logs for dmesg ...
I0922 13:29:06.016365   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0922 13:29:06.030662   10181 logs.go:120] Gathering logs for storage-provisioner [fc578cfa995d] ...
I0922 13:29:06.030711   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 fc578cfa995d"
I0922 13:29:06.066926   10181 logs.go:120] Gathering logs for storage-provisioner [b281bf20b6c7] ...
I0922 13:29:06.067014   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 b281bf20b6c7"
I0922 13:29:06.100085   10181 logs.go:120] Gathering logs for container status ...
I0922 13:29:06.100125   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0922 13:29:06.119287   10181 logs.go:120] Gathering logs for describe nodes ...
I0922 13:29:06.119330   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.15.12/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0922 13:29:06.192986   10181 logs.go:120] Gathering logs for kube-apiserver [dbe918ad4b4e] ...
I0922 13:29:06.193026   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 dbe918ad4b4e"
I0922 13:29:06.247263   10181 logs.go:120] Gathering logs for coredns [bbd98161a79a] ...
I0922 13:29:06.247307   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 bbd98161a79a"
I0922 13:29:06.278476   10181 logs.go:120] Gathering logs for kube-controller-manager [978dbe7dd5ed] ...
I0922 13:29:06.278516   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 978dbe7dd5ed"
I0922 13:29:08.819741   10181 api_server.go:221] Checking apiserver healthz at https://172.17.0.3:8443/healthz ...
I0922 13:29:40.995707   10181 api_server.go:231] stopped: https://172.17.0.3:8443/healthz: Get "https://172.17.0.3:8443/healthz": dial tcp 172.17.0.3:8443: connect: connection timed out
I0922 13:29:41.346523   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0922 13:29:41.380927   10181 logs.go:206] 2 containers: [dbe918ad4b4e 65eee2193e93]
I0922 13:29:41.380995   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0922 13:29:41.408270   10181 logs.go:206] 2 containers: [fea40e8cca41 1fbecb10cfaa]
I0922 13:29:41.408338   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0922 13:29:41.436876   10181 logs.go:206] 2 containers: [a9b5ec6fc4e9 bbd98161a79a]
I0922 13:29:41.436979   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0922 13:29:41.465514   10181 logs.go:206] 2 containers: [229bf3df246b 63c9fc70f62c]
I0922 13:29:41.465605   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0922 13:29:41.497216   10181 logs.go:206] 2 containers: [46eb6b0292df 504b0a2a29d3]
I0922 13:29:41.497327   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0922 13:29:41.529678   10181 logs.go:206] 0 containers: []
W0922 13:29:41.529746   10181 logs.go:208] No container was found matching "kubernetes-dashboard"
I0922 13:29:41.529827   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0922 13:29:41.559704   10181 logs.go:206] 2 containers: [fc578cfa995d b281bf20b6c7]
I0922 13:29:41.559856   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0922 13:29:41.590243   10181 logs.go:206] 1 containers: [978dbe7dd5ed]
I0922 13:29:41.590312   10181 logs.go:120] Gathering logs for coredns [a9b5ec6fc4e9] ...
I0922 13:29:41.590367   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 a9b5ec6fc4e9"
I0922 13:29:41.622059   10181 logs.go:120] Gathering logs for storage-provisioner [b281bf20b6c7] ...
I0922 13:29:41.622104   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 b281bf20b6c7"
I0922 13:29:41.654476   10181 logs.go:120] Gathering logs for kubelet ...
I0922 13:29:41.654518   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0922 13:29:41.705921   10181 logs.go:120] Gathering logs for kube-apiserver [dbe918ad4b4e] ...
I0922 13:29:41.705978   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 dbe918ad4b4e"
I0922 13:29:41.768542   10181 logs.go:120] Gathering logs for kube-apiserver [65eee2193e93] ...
I0922 13:29:41.768642   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 65eee2193e93"
I0922 13:29:41.827763   10181 logs.go:120] Gathering logs for etcd [1fbecb10cfaa] ...
I0922 13:29:41.827807   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 1fbecb10cfaa"
I0922 13:29:41.861536   10181 logs.go:120] Gathering logs for etcd [fea40e8cca41] ...
I0922 13:29:41.861587   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 fea40e8cca41"
I0922 13:29:41.892212   10181 logs.go:120] Gathering logs for coredns [bbd98161a79a] ...
I0922 13:29:41.892272   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 bbd98161a79a"
I0922 13:29:41.924089   10181 logs.go:120] Gathering logs for kube-scheduler [229bf3df246b] ...
I0922 13:29:41.924129   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 229bf3df246b"
I0922 13:29:41.961532   10181 logs.go:120] Gathering logs for kube-proxy [504b0a2a29d3] ...
I0922 13:29:41.961572   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 504b0a2a29d3"
I0922 13:29:41.995083   10181 logs.go:120] Gathering logs for kube-controller-manager [978dbe7dd5ed] ...
I0922 13:29:41.995125   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 978dbe7dd5ed"
I0922 13:29:42.033314   10181 logs.go:120] Gathering logs for Docker ...
I0922 13:29:42.033353   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0922 13:29:42.047691   10181 logs.go:120] Gathering logs for container status ...
I0922 13:29:42.047732   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0922 13:29:42.062094   10181 logs.go:120] Gathering logs for dmesg ...
I0922 13:29:42.062136   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0922 13:29:42.075271   10181 logs.go:120] Gathering logs for kube-scheduler [63c9fc70f62c] ...
I0922 13:29:42.075312   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 63c9fc70f62c"
I0922 13:29:42.107624   10181 logs.go:120] Gathering logs for kube-proxy [46eb6b0292df] ...
I0922 13:29:42.107666   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 46eb6b0292df"
I0922 13:29:42.138589   10181 logs.go:120] Gathering logs for storage-provisioner [fc578cfa995d] ...
I0922 13:29:42.138630   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 fc578cfa995d"
I0922 13:29:42.169711   10181 logs.go:120] Gathering logs for describe nodes ...
I0922 13:29:42.169751   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.15.12/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0922 13:29:44.737952   10181 api_server.go:221] Checking apiserver healthz at https://172.17.0.3:8443/healthz ...
I0922 13:30:16.825751   10181 api_server.go:231] stopped: https://172.17.0.3:8443/healthz: Get "https://172.17.0.3:8443/healthz": dial tcp 172.17.0.3:8443: connect: connection timed out
I0922 13:30:16.846465   10181 kubeadm.go:513] restartCluster took 4m40.944262108s
W0922 13:30:16.846663   10181 out.go:145] 🤦  Unable to restart cluster, will reset it: apiserver health: apiserver healthz never reported healthy: cluster wait timed out during healthz check
🤦  Unable to restart cluster, will reset it: apiserver health: apiserver healthz never reported healthy: cluster wait timed out during healthz check
I0922 13:30:16.846766   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.15.12:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0922 13:30:19.655708   10181 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.15.12:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force": (2.808899262s)
I0922 13:30:19.655798   10181 ssh_runner.go:148] Run: sudo systemctl stop -f kubelet
I0922 13:30:19.665146   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0922 13:30:19.693388   10181 ssh_runner.go:148] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0922 13:30:19.699268   10181 kubeadm.go:211] ignoring SystemVerification for kubeadm because of docker driver
I0922 13:30:19.699340   10181 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0922 13:30:19.706177   10181 kubeadm.go:147] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0922 13:30:19.706227   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.15.12:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0922 13:30:29.773954   10181 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.15.12:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": (10.067667439s)
I0922 13:30:29.774014   10181 cni.go:74] Creating CNI manager for ""
I0922 13:30:29.774092   10181 cni.go:117] CNI unnecessary in this configuration, recommending no CNI
I0922 13:30:29.774158   10181 ssh_runner.go:148] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0922 13:30:29.774207   10181 ssh_runner.go:148] Run: sudo /var/lib/minikube/binaries/v1.15.12/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0922 13:30:29.774316   10181 ssh_runner.go:148] Run: sudo /var/lib/minikube/binaries/v1.15.12/kubectl label nodes minikube.k8s.io/version=v1.13.1 minikube.k8s.io/commit=1fd1f67f338cbab4b3e5a6e4c71c551f522ca138-dirty minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2020_09_22T13_30_29_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0922 13:30:29.874242   10181 kubeadm.go:881] duration metric: took 100.068594ms to wait for elevateKubeSystemPrivileges.
I0922 13:30:29.874318   10181 ops.go:34] apiserver oom_adj: 16
I0922 13:30:29.874351   10181 ops.go:39] adjusting apiserver oom_adj to -10
I0922 13:30:29.874385   10181 ssh_runner.go:148] Run: /bin/bash -c "echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj"
I0922 13:30:29.889568   10181 kubeadm.go:326] StartCluster complete in 4m54.025244364s
I0922 13:30:29.889616   10181 settings.go:123] acquiring lock: {Name:mkf4c03ff0e2233f564acb725323b80961a1c586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0922 13:30:29.889745   10181 settings.go:131] Updating kubeconfig:  /home/dave/.kube/config
I0922 13:30:29.890312   10181 lock.go:35] WriteFile acquiring /home/dave/.kube/config: {Name:mk27eccc2ead5f61b61563cdf3203e1fe63b24ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0922 13:30:29.890553   10181 start.go:199] Will wait wait-timeout for node ...
I0922 13:30:29.890624   10181 addons.go:359] enableAddons start: toEnable=map[], additional=[]
I0922 13:30:29.890649   10181 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.15.12/kubectl scale deployment --replicas=1 coredns -n=kube-system
I0922 13:30:29.905993   10181 out.go:109] 🔎  Verifying Kubernetes components...
I0922 13:30:29.890671   10181 addons.go:55] Setting storage-provisioner=true in profile "minikube"
🔎  Verifying Kubernetes components...
I0922 13:30:29.890679   10181 addons.go:55] Setting default-storageclass=true in profile "minikube"
I0922 13:30:29.906140   10181 api_server.go:48] waiting for apiserver process to appear ...
I0922 13:30:29.906158   10181 addons.go:274] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0922 13:30:29.906165   10181 addons.go:131] Setting addon storage-provisioner=true in "minikube"
W0922 13:30:29.906240   10181 addons.go:140] addon storage-provisioner should already be in state true
I0922 13:30:29.906265   10181 host.go:65] Checking if "minikube" exists ...
I0922 13:30:29.906173   10181 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0922 13:30:29.906561   10181 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I0922 13:30:29.906623   10181 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I0922 13:30:30.001966   10181 start.go:553] successfully scaled coredns replicas to 1
I0922 13:30:30.002019   10181 api_server.go:68] duration metric: took 111.398152ms to wait for apiserver process to appear ...
I0922 13:30:30.002059   10181 api_server.go:84] waiting for apiserver healthz status ...
I0922 13:30:30.002090   10181 api_server.go:221] Checking apiserver healthz at https://172.17.0.3:8443/healthz ...
I0922 13:30:30.038991   10181 addons.go:243] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0922 13:30:30.039032   10181 ssh_runner.go:215] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0922 13:30:30.039117   10181 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0922 13:30:30.154672   10181 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/dave/.minikube/machines/minikube/id_rsa Username:docker}
I0922 13:30:30.263392   10181 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.15.12/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
W0922 13:31:00.040794   10181 out.go:145] ❗  Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://172.17.0.3:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 172.17.0.3:8443: i/o timeout]
❗  Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://172.17.0.3:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 172.17.0.3:8443: i/o timeout]
I0922 13:31:00.062438   10181 out.go:109] 🌟  Enabled addons: default-storageclass, storage-provisioner
🌟  Enabled addons: default-storageclass, storage-provisioner
I0922 13:31:00.062504   10181 addons.go:361] enableAddons completed in 30.171882553s
I0922 13:31:02.265775   10181 api_server.go:231] stopped: https://172.17.0.3:8443/healthz: Get "https://172.17.0.3:8443/healthz": dial tcp 172.17.0.3:8443: connect: connection timed out
I0922 13:31:02.766072   10181 api_server.go:221] Checking apiserver healthz at https://172.17.0.3:8443/healthz ...
I0922 13:31:34.905754   10181 api_server.go:231] stopped: https://172.17.0.3:8443/healthz: Get "https://172.17.0.3:8443/healthz": dial tcp 172.17.0.3:8443: connect: connection timed out
I0922 13:31:35.266172   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0922 13:31:35.298634   10181 logs.go:206] 1 containers: [290c4a99b2f1]
I0922 13:31:35.298749   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0922 13:31:35.328409   10181 logs.go:206] 1 containers: [c83ad16aac46]
I0922 13:31:35.328540   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0922 13:31:35.357526   10181 logs.go:206] 1 containers: [906f6b036189]
I0922 13:31:35.357638   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0922 13:31:35.388539   10181 logs.go:206] 1 containers: [acd67483b7b6]
I0922 13:31:35.388654   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0922 13:31:35.417117   10181 logs.go:206] 1 containers: [c1a26e8eb270]
I0922 13:31:35.417216   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0922 13:31:35.447140   10181 logs.go:206] 0 containers: []
W0922 13:31:35.447200   10181 logs.go:208] No container was found matching "kubernetes-dashboard"
I0922 13:31:35.447321   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0922 13:31:35.477995   10181 logs.go:206] 1 containers: [9a1f512ff930]
I0922 13:31:35.478084   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0922 13:31:35.508997   10181 logs.go:206] 1 containers: [b1feb8dbe555]
I0922 13:31:35.509041   10181 logs.go:120] Gathering logs for kube-proxy [c1a26e8eb270] ...
I0922 13:31:35.509094   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 c1a26e8eb270"
I0922 13:31:35.558832   10181 logs.go:120] Gathering logs for Docker ...
I0922 13:31:35.558880   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0922 13:31:35.574819   10181 logs.go:120] Gathering logs for kubelet ...
I0922 13:31:35.574858   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0922 13:31:35.631451   10181 logs.go:120] Gathering logs for kube-apiserver [290c4a99b2f1] ...
I0922 13:31:35.631496   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 290c4a99b2f1"
I0922 13:31:35.683617   10181 logs.go:120] Gathering logs for etcd [c83ad16aac46] ...
I0922 13:31:35.683716   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 c83ad16aac46"
I0922 13:31:35.733494   10181 logs.go:120] Gathering logs for coredns [906f6b036189] ...
I0922 13:31:35.733546   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 906f6b036189"
I0922 13:31:35.779277   10181 logs.go:120] Gathering logs for kube-scheduler [acd67483b7b6] ...
I0922 13:31:35.779367   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 acd67483b7b6"
I0922 13:31:35.833077   10181 logs.go:120] Gathering logs for dmesg ...
I0922 13:31:35.833119   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0922 13:31:35.846235   10181 logs.go:120] Gathering logs for describe nodes ...
I0922 13:31:35.846276   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.15.12/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0922 13:31:35.914056   10181 logs.go:120] Gathering logs for storage-provisioner [9a1f512ff930] ...
I0922 13:31:35.914098   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 9a1f512ff930"
I0922 13:31:35.945936   10181 logs.go:120] Gathering logs for kube-controller-manager [b1feb8dbe555] ...
I0922 13:31:35.945977   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 b1feb8dbe555"
I0922 13:31:35.986991   10181 logs.go:120] Gathering logs for container status ...
I0922 13:31:35.987032   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0922 13:31:38.501998   10181 api_server.go:221] Checking apiserver healthz at https://172.17.0.3:8443/healthz ...
I0922 13:32:10.745744   10181 api_server.go:231] stopped: https://172.17.0.3:8443/healthz: Get "https://172.17.0.3:8443/healthz": dial tcp 172.17.0.3:8443: connect: connection timed out
I0922 13:32:10.766187   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0922 13:32:10.800026   10181 logs.go:206] 1 containers: [290c4a99b2f1]
I0922 13:32:10.800141   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0922 13:32:10.831602   10181 logs.go:206] 1 containers: [c83ad16aac46]
I0922 13:32:10.831670   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0922 13:32:10.864087   10181 logs.go:206] 1 containers: [906f6b036189]
I0922 13:32:10.864164   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0922 13:32:10.899828   10181 logs.go:206] 1 containers: [acd67483b7b6]
I0922 13:32:10.899896   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0922 13:32:10.927703   10181 logs.go:206] 1 containers: [c1a26e8eb270]
I0922 13:32:10.927770   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0922 13:32:10.956575   10181 logs.go:206] 0 containers: []
W0922 13:32:10.956658   10181 logs.go:208] No container was found matching "kubernetes-dashboard"
I0922 13:32:10.956691   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0922 13:32:10.986085   10181 logs.go:206] 1 containers: [9a1f512ff930]
I0922 13:32:10.986167   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0922 13:32:11.013001   10181 logs.go:206] 1 containers: [b1feb8dbe555]
I0922 13:32:11.013051   10181 logs.go:120] Gathering logs for Docker ...
I0922 13:32:11.013084   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0922 13:32:11.027765   10181 logs.go:120] Gathering logs for dmesg ...
I0922 13:32:11.027807   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0922 13:32:11.040625   10181 logs.go:120] Gathering logs for kube-apiserver [290c4a99b2f1] ...
I0922 13:32:11.040666   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 290c4a99b2f1"
I0922 13:32:11.089658   10181 logs.go:120] Gathering logs for kube-scheduler [acd67483b7b6] ...
I0922 13:32:11.089699   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 acd67483b7b6"
I0922 13:32:11.123581   10181 logs.go:120] Gathering logs for kube-proxy [c1a26e8eb270] ...
I0922 13:32:11.123667   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 c1a26e8eb270"
I0922 13:32:11.174889   10181 logs.go:120] Gathering logs for storage-provisioner [9a1f512ff930] ...
I0922 13:32:11.174930   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 9a1f512ff930"
I0922 13:32:11.208109   10181 logs.go:120] Gathering logs for container status ...
I0922 13:32:11.208148   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0922 13:32:11.224128   10181 logs.go:120] Gathering logs for kubelet ...
I0922 13:32:11.224167   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0922 13:32:11.279900   10181 logs.go:120] Gathering logs for describe nodes ...
I0922 13:32:11.279946   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.15.12/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0922 13:32:11.351113   10181 logs.go:120] Gathering logs for etcd [c83ad16aac46] ...
I0922 13:32:11.351152   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 c83ad16aac46"
I0922 13:32:11.382949   10181 logs.go:120] Gathering logs for coredns [906f6b036189] ...
I0922 13:32:11.382988   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 906f6b036189"
I0922 13:32:11.415809   10181 logs.go:120] Gathering logs for kube-controller-manager [b1feb8dbe555] ...
I0922 13:32:11.415851   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 b1feb8dbe555"
I0922 13:32:13.961009   10181 api_server.go:221] Checking apiserver healthz at https://172.17.0.3:8443/healthz ...
I0922 13:32:45.945948   10181 api_server.go:231] stopped: https://172.17.0.3:8443/healthz: Get "https://172.17.0.3:8443/healthz": dial tcp 172.17.0.3:8443: connect: connection timed out
I0922 13:32:46.266312   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0922 13:32:46.326097   10181 logs.go:206] 1 containers: [290c4a99b2f1]
I0922 13:32:46.326176   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0922 13:32:46.355916   10181 logs.go:206] 1 containers: [c83ad16aac46]
I0922 13:32:46.356057   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0922 13:32:46.386805   10181 logs.go:206] 1 containers: [906f6b036189]
I0922 13:32:46.386909   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0922 13:32:46.432684   10181 logs.go:206] 1 containers: [acd67483b7b6]
I0922 13:32:46.432793   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0922 13:32:46.466273   10181 logs.go:206] 1 containers: [c1a26e8eb270]
I0922 13:32:46.466460   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0922 13:32:46.496827   10181 logs.go:206] 0 containers: []
W0922 13:32:46.496886   10181 logs.go:208] No container was found matching "kubernetes-dashboard"
I0922 13:32:46.497043   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0922 13:32:46.526722   10181 logs.go:206] 1 containers: [9a1f512ff930]
I0922 13:32:46.526893   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0922 13:32:46.556536   10181 logs.go:206] 1 containers: [b1feb8dbe555]
I0922 13:32:46.556583   10181 logs.go:120] Gathering logs for kube-apiserver [290c4a99b2f1] ...
I0922 13:32:46.556606   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 290c4a99b2f1"
I0922 13:32:46.607569   10181 logs.go:120] Gathering logs for coredns [906f6b036189] ...
I0922 13:32:46.607615   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 906f6b036189"
I0922 13:32:46.639661   10181 logs.go:120] Gathering logs for kube-proxy [c1a26e8eb270] ...
I0922 13:32:46.639746   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 c1a26e8eb270"
I0922 13:32:46.671406   10181 logs.go:120] Gathering logs for storage-provisioner [9a1f512ff930] ...
I0922 13:32:46.671467   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 9a1f512ff930"
I0922 13:32:46.706674   10181 logs.go:120] Gathering logs for kube-controller-manager [b1feb8dbe555] ...
I0922 13:32:46.706786   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 b1feb8dbe555"
I0922 13:32:46.766568   10181 logs.go:120] Gathering logs for kubelet ...
I0922 13:32:46.766612   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0922 13:32:46.830153   10181 logs.go:120] Gathering logs for describe nodes ...
I0922 13:32:46.830199   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.15.12/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0922 13:32:46.922087   10181 logs.go:120] Gathering logs for kube-scheduler [acd67483b7b6] ...
I0922 13:32:46.922128   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 acd67483b7b6"
I0922 13:32:46.956698   10181 logs.go:120] Gathering logs for Docker ...
I0922 13:32:46.956744   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0922 13:32:46.972019   10181 logs.go:120] Gathering logs for container status ...
I0922 13:32:46.972064   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0922 13:32:46.986451   10181 logs.go:120] Gathering logs for dmesg ...
I0922 13:32:46.986493   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0922 13:32:47.000141   10181 logs.go:120] Gathering logs for etcd [c83ad16aac46] ...
I0922 13:32:47.000180   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 c83ad16aac46"
I0922 13:32:49.532434   10181 api_server.go:221] Checking apiserver healthz at https://172.17.0.3:8443/healthz ...
I0922 13:33:21.785788   10181 api_server.go:231] stopped: https://172.17.0.3:8443/healthz: Get "https://172.17.0.3:8443/healthz": dial tcp 172.17.0.3:8443: connect: connection timed out
I0922 13:33:22.266212   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0922 13:33:22.299124   10181 logs.go:206] 1 containers: [290c4a99b2f1]
I0922 13:33:22.299252   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0922 13:33:22.334359   10181 logs.go:206] 1 containers: [c83ad16aac46]
I0922 13:33:22.334460   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0922 13:33:22.366468   10181 logs.go:206] 1 containers: [906f6b036189]
I0922 13:33:22.366595   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0922 13:33:22.395764   10181 logs.go:206] 1 containers: [acd67483b7b6]
I0922 13:33:22.395849   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0922 13:33:22.425179   10181 logs.go:206] 1 containers: [c1a26e8eb270]
I0922 13:33:22.425284   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0922 13:33:22.457165   10181 logs.go:206] 0 containers: []
W0922 13:33:22.457246   10181 logs.go:208] No container was found matching "kubernetes-dashboard"
I0922 13:33:22.457332   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0922 13:33:22.486004   10181 logs.go:206] 1 containers: [9a1f512ff930]
I0922 13:33:22.486095   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0922 13:33:22.531621   10181 logs.go:206] 1 containers: [b1feb8dbe555]
I0922 13:33:22.531687   10181 logs.go:120] Gathering logs for dmesg ...
I0922 13:33:22.531698   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0922 13:33:22.546671   10181 logs.go:120] Gathering logs for describe nodes ...
I0922 13:33:22.546712   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.15.12/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0922 13:33:22.641476   10181 logs.go:120] Gathering logs for etcd [c83ad16aac46] ...
I0922 13:33:22.641573   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 c83ad16aac46"
I0922 13:33:22.694186   10181 logs.go:120] Gathering logs for coredns [906f6b036189] ...
I0922 13:33:22.694229   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 906f6b036189"
I0922 13:33:22.726482   10181 logs.go:120] Gathering logs for Docker ...
I0922 13:33:22.726545   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0922 13:33:22.750289   10181 logs.go:120] Gathering logs for kubelet ...
I0922 13:33:22.750378   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0922 13:33:22.820969   10181 logs.go:120] Gathering logs for kube-scheduler [acd67483b7b6] ...
I0922 13:33:22.821056   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 acd67483b7b6"
I0922 13:33:22.854108   10181 logs.go:120] Gathering logs for kube-proxy [c1a26e8eb270] ...
I0922 13:33:22.854149   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 c1a26e8eb270"
I0922 13:33:22.895258   10181 logs.go:120] Gathering logs for storage-provisioner [9a1f512ff930] ...
I0922 13:33:22.895360   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 9a1f512ff930"
I0922 13:33:22.939703   10181 logs.go:120] Gathering logs for kube-controller-manager [b1feb8dbe555] ...
I0922 13:33:22.939746   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 b1feb8dbe555"
I0922 13:33:22.981281   10181 logs.go:120] Gathering logs for container status ...
I0922 13:33:22.981319   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0922 13:33:22.998218   10181 logs.go:120] Gathering logs for kube-apiserver [290c4a99b2f1] ...
I0922 13:33:22.998260   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 290c4a99b2f1"
I0922 13:33:25.549579   10181 api_server.go:221] Checking apiserver healthz at https://172.17.0.3:8443/healthz ...
I0922 13:33:57.625704   10181 api_server.go:231] stopped: https://172.17.0.3:8443/healthz: Get "https://172.17.0.3:8443/healthz": dial tcp 172.17.0.3:8443: connect: connection timed out
I0922 13:33:57.766111   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0922 13:33:57.799238   10181 logs.go:206] 1 containers: [290c4a99b2f1]
I0922 13:33:57.799362   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0922 13:33:57.834014   10181 logs.go:206] 1 containers: [c83ad16aac46]
I0922 13:33:57.834208   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0922 13:33:57.868604   10181 logs.go:206] 1 containers: [906f6b036189]
I0922 13:33:57.868711   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0922 13:33:57.902836   10181 logs.go:206] 1 containers: [acd67483b7b6]
I0922 13:33:57.902948   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0922 13:33:57.937982   10181 logs.go:206] 1 containers: [c1a26e8eb270]
I0922 13:33:57.938048   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0922 13:33:57.968395   10181 logs.go:206] 0 containers: []
W0922 13:33:57.968448   10181 logs.go:208] No container was found matching "kubernetes-dashboard"
I0922 13:33:57.968478   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0922 13:33:57.999285   10181 logs.go:206] 1 containers: [9a1f512ff930]
I0922 13:33:57.999345   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0922 13:33:58.027376   10181 logs.go:206] 1 containers: [b1feb8dbe555]
I0922 13:33:58.027488   10181 logs.go:120] Gathering logs for coredns [906f6b036189] ...
I0922 13:33:58.027600   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 906f6b036189"
I0922 13:33:58.058823   10181 logs.go:120] Gathering logs for kube-scheduler [acd67483b7b6] ...
I0922 13:33:58.058918   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 acd67483b7b6"
I0922 13:33:58.109533   10181 logs.go:120] Gathering logs for storage-provisioner [9a1f512ff930] ...
I0922 13:33:58.109596   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 9a1f512ff930"
I0922 13:33:58.141785   10181 logs.go:120] Gathering logs for kube-controller-manager [b1feb8dbe555] ...
I0922 13:33:58.141829   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 b1feb8dbe555"
I0922 13:33:58.203359   10181 logs.go:120] Gathering logs for Docker ...
I0922 13:33:58.203444   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0922 13:33:58.233236   10181 logs.go:120] Gathering logs for kubelet ...
I0922 13:33:58.233330   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0922 13:33:58.321495   10181 logs.go:120] Gathering logs for dmesg ...
I0922 13:33:58.321542   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0922 13:33:58.335285   10181 logs.go:120] Gathering logs for kube-apiserver [290c4a99b2f1] ...
I0922 13:33:58.335327   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 290c4a99b2f1"
I0922 13:33:58.383335   10181 logs.go:120] Gathering logs for container status ...
I0922 13:33:58.383376   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0922 13:33:58.399644   10181 logs.go:120] Gathering logs for describe nodes ...
I0922 13:33:58.399682   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.15.12/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0922 13:33:58.467992   10181 logs.go:120] Gathering logs for etcd [c83ad16aac46] ...
I0922 13:33:58.468031   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 c83ad16aac46"
I0922 13:33:58.501605   10181 logs.go:120] Gathering logs for kube-proxy [c1a26e8eb270] ...
I0922 13:33:58.501671   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 c1a26e8eb270"
I0922 13:34:01.038102   10181 api_server.go:221] Checking apiserver healthz at https://172.17.0.3:8443/healthz ...
I0922 13:34:33.466133   10181 api_server.go:231] stopped: https://172.17.0.3:8443/healthz: Get "https://172.17.0.3:8443/healthz": dial tcp 172.17.0.3:8443: connect: connection timed out
I0922 13:34:33.766392   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0922 13:34:33.833194   10181 logs.go:206] 1 containers: [290c4a99b2f1]
I0922 13:34:33.833301   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0922 13:34:33.864720   10181 logs.go:206] 1 containers: [c83ad16aac46]
I0922 13:34:33.864860   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0922 13:34:33.895434   10181 logs.go:206] 1 containers: [906f6b036189]
I0922 13:34:33.895524   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0922 13:34:33.922718   10181 logs.go:206] 1 containers: [acd67483b7b6]
I0922 13:34:33.922855   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0922 13:34:33.951837   10181 logs.go:206] 1 containers: [c1a26e8eb270]
I0922 13:34:33.951934   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0922 13:34:33.981686   10181 logs.go:206] 0 containers: []
W0922 13:34:33.981746   10181 logs.go:208] No container was found matching "kubernetes-dashboard"
I0922 13:34:33.981794   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0922 13:34:34.010561   10181 logs.go:206] 1 containers: [9a1f512ff930]
I0922 13:34:34.010682   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0922 13:34:34.039786   10181 logs.go:206] 1 containers: [b1feb8dbe555]
I0922 13:34:34.039876   10181 logs.go:120] Gathering logs for kubelet ...
I0922 13:34:34.039922   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0922 13:34:34.095936   10181 logs.go:120] Gathering logs for dmesg ...
I0922 13:34:34.095981   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0922 13:34:34.110204   10181 logs.go:120] Gathering logs for etcd [c83ad16aac46] ...
I0922 13:34:34.110245   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 c83ad16aac46"
I0922 13:34:34.142123   10181 logs.go:120] Gathering logs for kube-proxy [c1a26e8eb270] ...
I0922 13:34:34.142168   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 c1a26e8eb270"
I0922 13:34:34.176031   10181 logs.go:120] Gathering logs for storage-provisioner [9a1f512ff930] ...
I0922 13:34:34.176072   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 9a1f512ff930"
I0922 13:34:34.207772   10181 logs.go:120] Gathering logs for Docker ...
I0922 13:34:34.207837   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0922 13:34:34.224238   10181 logs.go:120] Gathering logs for container status ...
I0922 13:34:34.224279   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0922 13:34:34.239830   10181 logs.go:120] Gathering logs for describe nodes ...
I0922 13:34:34.239872   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.15.12/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0922 13:34:34.302746   10181 logs.go:120] Gathering logs for kube-apiserver [290c4a99b2f1] ...
I0922 13:34:34.302788   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 290c4a99b2f1"
I0922 13:34:34.352893   10181 logs.go:120] Gathering logs for coredns [906f6b036189] ...
I0922 13:34:34.352936   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 906f6b036189"
I0922 13:34:34.381828   10181 logs.go:120] Gathering logs for kube-scheduler [acd67483b7b6] ...
I0922 13:34:34.381870   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 acd67483b7b6"
I0922 13:34:34.416671   10181 logs.go:120] Gathering logs for kube-controller-manager [b1feb8dbe555] ...
I0922 13:34:34.416723   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 b1feb8dbe555"
I0922 13:34:36.962794   10181 api_server.go:221] Checking apiserver healthz at https://172.17.0.3:8443/healthz ...
I0922 13:35:09.306002   10181 api_server.go:231] stopped: https://172.17.0.3:8443/healthz: Get "https://172.17.0.3:8443/healthz": dial tcp 172.17.0.3:8443: connect: connection timed out
I0922 13:35:09.306290   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0922 13:35:09.379031   10181 logs.go:206] 1 containers: [290c4a99b2f1]
I0922 13:35:09.379139   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0922 13:35:09.411351   10181 logs.go:206] 1 containers: [c83ad16aac46]
I0922 13:35:09.411459   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0922 13:35:09.442100   10181 logs.go:206] 1 containers: [906f6b036189]
I0922 13:35:09.442163   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0922 13:35:09.494302   10181 logs.go:206] 1 containers: [acd67483b7b6]
I0922 13:35:09.494368   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0922 13:35:09.524332   10181 logs.go:206] 1 containers: [c1a26e8eb270]
I0922 13:35:09.524436   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0922 13:35:09.565579   10181 logs.go:206] 0 containers: []
W0922 13:35:09.565640   10181 logs.go:208] No container was found matching "kubernetes-dashboard"
I0922 13:35:09.565722   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0922 13:35:09.599710   10181 logs.go:206] 1 containers: [9a1f512ff930]
I0922 13:35:09.599891   10181 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0922 13:35:09.627717   10181 logs.go:206] 1 containers: [b1feb8dbe555]
I0922 13:35:09.627810   10181 logs.go:120] Gathering logs for kube-proxy [c1a26e8eb270] ...
I0922 13:35:09.627819   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 c1a26e8eb270"
I0922 13:35:09.659403   10181 logs.go:120] Gathering logs for storage-provisioner [9a1f512ff930] ...
I0922 13:35:09.659444   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 9a1f512ff930"
I0922 13:35:09.689656   10181 logs.go:120] Gathering logs for kubelet ...
I0922 13:35:09.689750   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0922 13:35:09.745821   10181 logs.go:120] Gathering logs for dmesg ...
I0922 13:35:09.745863   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0922 13:35:09.759308   10181 logs.go:120] Gathering logs for describe nodes ...
I0922 13:35:09.759348   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.15.12/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0922 13:35:09.825187   10181 logs.go:120] Gathering logs for coredns [906f6b036189] ...
I0922 13:35:09.825231   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 906f6b036189"
I0922 13:35:09.856303   10181 logs.go:120] Gathering logs for kube-scheduler [acd67483b7b6] ...
I0922 13:35:09.856372   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 acd67483b7b6"
I0922 13:35:09.900197   10181 logs.go:120] Gathering logs for kube-apiserver [290c4a99b2f1] ...
I0922 13:35:09.900239   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 290c4a99b2f1"
I0922 13:35:09.996711   10181 logs.go:120] Gathering logs for etcd [c83ad16aac46] ...
I0922 13:35:09.996790   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 c83ad16aac46"
I0922 13:35:10.035879   10181 logs.go:120] Gathering logs for kube-controller-manager [b1feb8dbe555] ...
I0922 13:35:10.035978   10181 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 b1feb8dbe555"
I0922 13:35:10.085248   10181 logs.go:120] Gathering logs for Docker ...
I0922 13:35:10.085313   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0922 13:35:10.102401   10181 logs.go:120] Gathering logs for container status ...
I0922 13:35:10.102444   10181 ssh_runner.go:148] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0922 13:35:12.617160   10181 api_server.go:221] Checking apiserver healthz at https://172.17.0.3:8443/healthz ...
I0922 13:35:44.505801   10181 api_server.go:231] stopped: https://172.17.0.3:8443/healthz: Get "https://172.17.0.3:8443/healthz": dial tcp 172.17.0.3:8443: connect: connection timed out
I0922 13:35:44.516637   10181 out.go:109]

W0922 13:35:44.516870   10181 out.go:145] ❌  Exiting due to GUEST_START: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: timed out waiting for the condition
❌  Exiting due to GUEST_START: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: timed out waiting for the condition
W0922 13:35:44.516959   10181 out.go:145]

W0922 13:35:44.517042   10181 out.go:145] 😿  If the above advice does not help, please let us know:
😿  If the above advice does not help, please let us know:
W0922 13:35:44.517130   10181 out.go:145] 👉  https://github.com/kubernetes/minikube/issues/new/choose
👉  https://github.com/kubernetes/minikube/issues/new/choose
I0922 13:35:44.550862   10181 out.go:109]
</details>
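
For what it's worth, the failure pattern in the trace above is completely regular: every `Checking apiserver healthz at https://172.17.0.3:8443/healthz` attempt ends ~30s later in `dial tcp 172.17.0.3:8443: connect: connection timed out`, yet minikube's own SSH connection into the same container succeeds via the port Docker publishes on localhost (`127.0.0.1:32779` above). That looks like the Docker bridge network (172.17.0.0/16) simply not being routable from the WSL2 distro when the daemon lives inside Docker Desktop's VM, so anything addressed to the container IP times out. A rough way to check this from the WSL2 shell (just a sketch — the container IP and SSH port are the ones from my log and will differ per run):

```
# Bridge IP of the kicbase container -- the address minikube keeps probing
docker inspect minikube --format '{{ .NetworkSettings.IPAddress }}'

# Direct connection to the apiserver on the bridge network -- this is the
# request that keeps timing out in the log above
curl -k --connect-timeout 5 https://172.17.0.3:8443/healthz

# Same container via the localhost-published SSH port -- this is the path
# that does work (see the "new ssh client" line above)
nc -zv -w 5 127.0.0.1 32779
```

If the bridge IP is unreachable while localhost is fine, that would also explain why `kubeadm reset` / `kubeadm init` complete happily (they run inside the container over SSH) while every healthz probe from the minikube binary on the WSL2 side times out.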

**Optional: Full output of `minikube logs` command:**
<details>
dave@DESKTOP-RO4RC0J:~$ minikube logs
==> Docker <==
-- Logs begin at Tue 2020-09-22 12:14:53 UTC, end at Tue 2020-09-22 12:36:38 UTC. --
Sep 22 12:14:59 minikube dockerd[161]: time="2020-09-22T12:14:59.502190227Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Sep 22 12:14:59 minikube dockerd[161]: time="2020-09-22T12:14:59.502703135Z" level=info msg="Daemon shutdown complete"
Sep 22 12:14:59 minikube dockerd[161]: time="2020-09-22T12:14:59.502738135Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Sep 22 12:14:59 minikube systemd[1]: docker.service: Succeeded.
Sep 22 12:14:59 minikube systemd[1]: Stopped Docker Application Container Engine.
Sep 22 12:14:59 minikube systemd[1]: Starting Docker Application Container Engine...
Sep 22 12:14:59 minikube dockerd[400]: time="2020-09-22T12:14:59.541686917Z" level=info msg="Starting up"
Sep 22 12:14:59 minikube dockerd[400]: time="2020-09-22T12:14:59.543611746Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 22 12:14:59 minikube dockerd[400]: time="2020-09-22T12:14:59.543638446Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 22 12:14:59 minikube dockerd[400]: time="2020-09-22T12:14:59.543653947Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
Sep 22 12:14:59 minikube dockerd[400]: time="2020-09-22T12:14:59.543660847Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 22 12:14:59 minikube dockerd[400]: time="2020-09-22T12:14:59.545011367Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 22 12:14:59 minikube dockerd[400]: time="2020-09-22T12:14:59.545062268Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 22 12:14:59 minikube dockerd[400]: time="2020-09-22T12:14:59.545076268Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
Sep 22 12:14:59 minikube dockerd[400]: time="2020-09-22T12:14:59.545082868Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 22 12:14:59 minikube dockerd[400]: time="2020-09-22T12:14:59.570569149Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Sep 22 12:14:59 minikube dockerd[400]: time="2020-09-22T12:14:59.583616344Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Sep 22 12:14:59 minikube dockerd[400]: time="2020-09-22T12:14:59.583640144Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Sep 22 12:14:59 minikube dockerd[400]: time="2020-09-22T12:14:59.583646844Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
Sep 22 12:14:59 minikube dockerd[400]: time="2020-09-22T12:14:59.583650344Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
Sep 22 12:14:59 minikube dockerd[400]: time="2020-09-22T12:14:59.583653645Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
Sep 22 12:14:59 minikube dockerd[400]: time="2020-09-22T12:14:59.583656745Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
Sep 22 12:14:59 minikube dockerd[400]: time="2020-09-22T12:14:59.583769746Z" level=info msg="Loading containers: start."
Sep 22 12:14:59 minikube dockerd[400]: time="2020-09-22T12:14:59.584970464Z" level=warning msg="Running modprobe bridge br_netfilter failed with message: modprobe: WARNING: Module bridge not found in directory /lib/modules/4.19.128-microsoft-standard\nmodprobe: WARNING: Module br_netfilter not found in directory /lib/modules/4.19.128-microsoft-standard\n, error: exit status 1"
Sep 22 12:14:59 minikube dockerd[400]: time="2020-09-22T12:14:59.662330920Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Sep 22 12:14:59 minikube dockerd[400]: time="2020-09-22T12:14:59.700762195Z" level=info msg="Loading containers: done."
Sep 22 12:14:59 minikube dockerd[400]: time="2020-09-22T12:14:59.715696918Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8
Sep 22 12:14:59 minikube dockerd[400]: time="2020-09-22T12:14:59.715748119Z" level=info msg="Daemon has completed initialization"
Sep 22 12:14:59 minikube dockerd[400]: time="2020-09-22T12:14:59.735329411Z" level=info msg="API listen on /var/run/docker.sock"
Sep 22 12:14:59 minikube dockerd[400]: time="2020-09-22T12:14:59.735343712Z" level=info msg="API listen on [::]:2376"
Sep 22 12:14:59 minikube systemd[1]: Started Docker Application Container Engine.
Sep 22 12:15:41 minikube dockerd[400]: time="2020-09-22T12:15:41.519436966Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 22 12:26:08 minikube dockerd[400]: time="2020-09-22T12:26:08.468807871Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 22 12:26:08 minikube dockerd[400]: time="2020-09-22T12:26:08.469153076Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 22 12:26:08 minikube dockerd[400]: time="2020-09-22T12:26:08.556309102Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 22 12:26:08 minikube dockerd[400]: time="2020-09-22T12:26:08.556360903Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 22 12:26:08 minikube dockerd[400]: time="2020-09-22T12:26:08.556394903Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 22 12:26:08 minikube dockerd[400]: time="2020-09-22T12:26:08.558368431Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 22 12:26:08 minikube dockerd[400]: time="2020-09-22T12:26:08.558435932Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 22 12:26:08 minikube dockerd[400]: time="2020-09-22T12:26:08.568617575Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 22 12:26:08 minikube dockerd[400]: time="2020-09-22T12:26:08.572593031Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 22 12:26:08 minikube dockerd[400]: time="2020-09-22T12:26:08.572938936Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 22 12:26:08 minikube dockerd[400]: time="2020-09-22T12:26:08.574425857Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 22 12:26:08 minikube dockerd[400]: time="2020-09-22T12:26:08.576912892Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 22 12:26:08 minikube dockerd[400]: time="2020-09-22T12:26:08.576973593Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 22 12:26:09 minikube dockerd[400]: time="2020-09-22T12:26:09.117751000Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 22 12:30:17 minikube dockerd[400]: time="2020-09-22T12:30:17.053344480Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 22 12:30:17 minikube dockerd[400]: time="2020-09-22T12:30:17.181110662Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 22 12:30:17 minikube dockerd[400]: time="2020-09-22T12:30:17.304084576Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 22 12:30:17 minikube dockerd[400]: time="2020-09-22T12:30:17.739675649Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 22 12:30:17 minikube dockerd[400]: time="2020-09-22T12:30:17.865245100Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 22 12:30:18 minikube dockerd[400]: time="2020-09-22T12:30:18.003692330Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 22 12:30:18 minikube dockerd[400]: time="2020-09-22T12:30:18.147444534Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 22 12:30:18 minikube dockerd[400]: time="2020-09-22T12:30:18.268944728Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 22 12:30:18 minikube dockerd[400]: time="2020-09-22T12:30:18.390839128Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 22 12:30:18 minikube dockerd[400]: time="2020-09-22T12:30:18.516109874Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 22 12:30:18 minikube dockerd[400]: time="2020-09-22T12:30:18.655967424Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 22 12:30:18 minikube dockerd[400]: time="2020-09-22T12:30:18.805292106Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 22 12:30:18 minikube dockerd[400]: time="2020-09-22T12:30:18.952946565Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 22 12:30:19 minikube dockerd[400]: time="2020-09-22T12:30:19.079084923Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
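
The modprobe warning near the top of this log is expected on WSL2: the 4.19.128-microsoft-standard kernel has no module tree installed, so `bridge`/`br_netfilter` cannot be loaded inside the node. A quick way to confirm from the WSL2 shell, assuming a stock install:

```
uname -r                          # 4.19.128-microsoft-standard
ls /lib/modules/"$(uname -r)"     # no modules directory on this setup,
                                  # matching the "Module ... not found" warning
```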

==> container status <==
CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
906f6b036189c       eb516548c180f       6 minutes ago       Running             coredns                   0                   09952f9d12ec5
c1a26e8eb270d       00206e1127f2a       6 minutes ago       Running             kube-proxy                0                   35d3911b7d19d
9a1f512ff930a       bad58561c4be7       6 minutes ago       Running             storage-provisioner       0                   4c085ccb5eda3
c83ad16aac465       2c4adeb21b4ff       6 minutes ago       Running             etcd                      0                   4a32d436e465f
acd67483b7b61       196d53938faab       6 minutes ago       Running             kube-scheduler            0                   a09a431f9063e
b1feb8dbe5556       7b4d4985877a5       6 minutes ago       Running             kube-controller-manager   0                   1297f68ff1271
290c4a99b2f13       c81971987f04a       6 minutes ago       Running             kube-apiserver            0                   a72523424ab7a

==> coredns [906f6b036189] <==
.:53
2020-09-22T12:30:37.171Z [INFO] CoreDNS-1.3.1
2020-09-22T12:30:37.171Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2020-09-22T12:30:37.171Z [INFO] plugin/reload: Running configuration MD5 = 5d5369fbc12f985709b924e721217843

==> describe nodes <==
Name:               minikube
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=1fd1f67f338cbab4b3e5a6e4c71c551f522ca138-dirty
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/updated_at=2020_09_22T13_30_29_0700
                    minikube.k8s.io/version=v1.13.1
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Tue, 22 Sep 2020 12:30:26 +0000
Taints:             <none>
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Tue, 22 Sep 2020 12:36:28 +0000   Tue, 22 Sep 2020 12:30:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Tue, 22 Sep 2020 12:36:28 +0000   Tue, 22 Sep 2020 12:30:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Tue, 22 Sep 2020 12:36:28 +0000   Tue, 22 Sep 2020 12:30:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Tue, 22 Sep 2020 12:36:28 +0000   Tue, 22 Sep 2020 12:30:22 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  172.17.0.3
  Hostname:    minikube
Capacity:
 cpu:                16
 ephemeral-storage:  263174212Ki
 hugepages-2Mi:      0
 memory:             26189860Ki
 pods:               110
Allocatable:
 cpu:                16
 ephemeral-storage:  263174212Ki
 hugepages-2Mi:      0
 memory:             26189860Ki
 pods:               110
System Info:
 Machine ID:                 5851cafe564a4b0d808992c2ac513eac
 System UUID:                5851cafe564a4b0d808992c2ac513eac
 Boot ID:                    db911e1a-fa51-4d51-ad49-7e65ffdbcf31
 Kernel Version:             4.19.128-microsoft-standard
 OS Image:                   Ubuntu 20.04 LTS
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://19.3.8
 Kubelet Version:            v1.15.12
 Kube-Proxy Version:         v1.15.12
Non-terminated Pods:         (7 in total)
  Namespace                  Name                                CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                  ----                                ------------  ----------  ---------------  -------------  ---
  kube-system                coredns-5d4dd4b4db-sjzm5            100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     6m3s
  kube-system                etcd-minikube                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
  kube-system                kube-apiserver-minikube             250m (1%)     0 (0%)      0 (0%)           0 (0%)         5m10s
  kube-system                kube-controller-manager-minikube    200m (1%)     0 (0%)      0 (0%)           0 (0%)         5m6s
  kube-system                kube-proxy-gbnrt                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
  kube-system                kube-scheduler-minikube             100m (0%)     0 (0%)      0 (0%)           0 (0%)         4m48s
  kube-system                storage-provisioner                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests   Limits
  --------           --------   ------
  cpu                650m (4%)  0 (0%)
  memory             70Mi (0%)  170Mi (0%)
  ephemeral-storage  0 (0%)     0 (0%)
Events:
  Type     Reason                   Age                    From                  Message
  ----     ------                   ----                   ----                  -------
  Normal   NodeHasSufficientMemory  6m18s (x8 over 6m18s)  kubelet, minikube     Node minikube status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    6m18s (x8 over 6m18s)  kubelet, minikube     Node minikube status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     6m18s (x7 over 6m18s)  kubelet, minikube     Node minikube status is now: NodeHasSufficientPID
  Warning  readOnlySysFS            6m3s                   kube-proxy, minikube  CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)
  Normal   Starting                 6m3s                   kube-proxy, minikube  Starting kube-proxy.
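
The `readOnlySysFS` warning corresponds to `/sys` being mounted read-only inside the node container (the kube-proxy log further down shows the matching "sysfs is not writable" error). One way to confirm, assuming standard tools in the kicbase image:

```
# "ro" in the mount flags is what prevents kube-proxy from adjusting
# conntrack limits (cf. docker issue #24000 referenced in the event):
docker exec minikube sh -c "mount | grep ' /sys '"
```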

==> dmesg <==
[Sep22 11:28] WSL2: Performing memory compaction.
[ ... "WSL2: Performing memory compaction." repeated every 1-2 minutes through Sep22 12:36 ... ]
[Sep22 12:15] tee (34687): /proc/34227/oom_adj is deprecated, please use /proc/34227/oom_score_adj instead.

==> etcd [c83ad16aac46] <==
2020-09-22 12:30:22.577077 I | etcdmain: etcd Version: 3.3.10
2020-09-22 12:30:22.577152 I | etcdmain: Git SHA: 27fc7e2
2020-09-22 12:30:22.577158 I | etcdmain: Go Version: go1.10.4
2020-09-22 12:30:22.577161 I | etcdmain: Go OS/Arch: linux/amd64
2020-09-22 12:30:22.577164 I | etcdmain: setting maximum number of CPUs to 16, total number of available CPUs is 16
2020-09-22 12:30:22.577211 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-09-22 12:30:22.577730 I | embed: listening for peers on https://172.17.0.3:2380
2020-09-22 12:30:22.577783 I | embed: listening for client requests on 127.0.0.1:2379
2020-09-22 12:30:22.577803 I | embed: listening for client requests on 172.17.0.3:2379
2020-09-22 12:30:22.584366 I | etcdserver: name = minikube
2020-09-22 12:30:22.584397 I | etcdserver: data dir = /var/lib/minikube/etcd
2020-09-22 12:30:22.584404 I | etcdserver: member dir = /var/lib/minikube/etcd/member
2020-09-22 12:30:22.584407 I | etcdserver: heartbeat = 100ms
2020-09-22 12:30:22.584409 I | etcdserver: election = 1000ms
2020-09-22 12:30:22.584412 I | etcdserver: snapshot count = 10000
2020-09-22 12:30:22.584418 I | etcdserver: advertise client URLs = https://172.17.0.3:2379
2020-09-22 12:30:22.584421 I | etcdserver: initial advertise peer URLs = https://172.17.0.3:2380
2020-09-22 12:30:22.584426 I | etcdserver: initial cluster = minikube=https://172.17.0.3:2380
2020-09-22 12:30:22.595699 I | etcdserver: starting member b273bc7741bcb020 in cluster 86482fea2286a1d2
2020-09-22 12:30:22.595732 I | raft: b273bc7741bcb020 became follower at term 0
2020-09-22 12:30:22.595739 I | raft: newRaft b273bc7741bcb020 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2020-09-22 12:30:22.595741 I | raft: b273bc7741bcb020 became follower at term 1
2020-09-22 12:30:22.674726 W | auth: simple token is not cryptographically signed
2020-09-22 12:30:22.686076 I | etcdserver: starting server... [version: 3.3.10, cluster version: to_be_decided]
2020-09-22 12:30:22.686252 I | etcdserver: b273bc7741bcb020 as single-node; fast-forwarding 9 ticks (election ticks 10)
2020-09-22 12:30:22.686529 I | etcdserver/membership: added member b273bc7741bcb020 [https://172.17.0.3:2380] to cluster 86482fea2286a1d2
2020-09-22 12:30:22.687807 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-09-22 12:30:22.687911 I | embed: listening for metrics on http://172.17.0.3:2381
2020-09-22 12:30:22.687958 I | embed: listening for metrics on http://127.0.0.1:2381
2020-09-22 12:30:22.996020 I | raft: b273bc7741bcb020 is starting a new election at term 1
2020-09-22 12:30:22.996053 I | raft: b273bc7741bcb020 became candidate at term 2
2020-09-22 12:30:22.996076 I | raft: b273bc7741bcb020 received MsgVoteResp from b273bc7741bcb020 at term 2
2020-09-22 12:30:22.996085 I | raft: b273bc7741bcb020 became leader at term 2
2020-09-22 12:30:22.996088 I | raft: raft.node: b273bc7741bcb020 elected leader b273bc7741bcb020 at term 2
2020-09-22 12:30:22.996448 I | etcdserver: published {Name:minikube ClientURLs:[https://172.17.0.3:2379]} to cluster 86482fea2286a1d2
2020-09-22 12:30:22.996506 I | embed: ready to serve client requests
2020-09-22 12:30:22.996595 I | etcdserver: setting up the initial cluster version to 3.3
2020-09-22 12:30:22.996679 I | embed: ready to serve client requests
2020-09-22 12:30:22.998832 I | embed: serving client requests on 172.17.0.3:2379
2020-09-22 12:30:22.998927 I | embed: serving client requests on 127.0.0.1:2379
2020-09-22 12:30:23.000149 N | etcdserver/membership: set the initial cluster version to 3.3
2020-09-22 12:30:23.000226 I | etcdserver/api: enabled capabilities for version 3.3
proto: no coders for int
proto: no encoder for ValueSize int [GetProperties]

==> kernel <==
 12:36:40 up  2:30,  0 users,  load average: 0.24, 0.34, 0.29
Linux minikube 4.19.128-microsoft-standard #1 SMP Tue Jun 23 12:58:10 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04 LTS"

==> kube-apiserver [290c4a99b2f1] <==
E0922 12:30:25.425384       1 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
E0922 12:30:25.425397       1 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
E0922 12:30:25.425408       1 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
E0922 12:30:25.425435       1 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
E0922 12:30:25.425474       1 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
E0922 12:30:25.425542       1 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
E0922 12:30:25.425560       1 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
E0922 12:30:25.425601       1 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
E0922 12:30:25.425688       1 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
E0922 12:30:25.425781       1 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
E0922 12:30:25.425811       1 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
I0922 12:30:25.425841       1 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
I0922 12:30:25.425867       1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
I0922 12:30:25.427023       1 client.go:354] parsed scheme: ""
I0922 12:30:25.427043       1 client.go:354] scheme "" not registered, fallback to default scheme
I0922 12:30:25.427082       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0922 12:30:25.427102       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0922 12:30:25.433501       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0922 12:30:25.438199       1 client.go:354] parsed scheme: ""
I0922 12:30:25.438230       1 client.go:354] scheme "" not registered, fallback to default scheme
I0922 12:30:25.438254       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0922 12:30:25.438278       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0922 12:30:25.444423       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0922 12:30:26.504870       1 secure_serving.go:116] Serving securely on [::]:8443
I0922 12:30:26.505350       1 autoregister_controller.go:140] Starting autoregister controller
I0922 12:30:26.505382       1 cache.go:32] Waiting for caches to sync for autoregister controller
I0922 12:30:26.505447       1 crdregistration_controller.go:112] Starting crd-autoregister controller
I0922 12:30:26.505493       1 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller
I0922 12:30:26.505475       1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0922 12:30:26.505529       1 crd_finalizer.go:255] Starting CRDFinalizer
I0922 12:30:26.505533       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0922 12:30:26.505567       1 establishing_controller.go:73] Starting EstablishingController
I0922 12:30:26.505593       1 customresource_discovery_controller.go:208] Starting DiscoveryController
I0922 12:30:26.505615       1 naming_controller.go:288] Starting NamingConditionController
I0922 12:30:26.505641       1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
I0922 12:30:26.505628       1 controller.go:81] Starting OpenAPI AggregationController
I0922 12:30:26.505612       1 controller.go:83] Starting OpenAPI controller
I0922 12:30:26.505845       1 available_controller.go:376] Starting AvailableConditionController
I0922 12:30:26.506049       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
E0922 12:30:26.506992       1 controller.go:148] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.17.0.3, ResourceVersion: 0, AdditionalErrorMsg:
I0922 12:30:26.605511       1 cache.go:39] Caches are synced for autoregister controller
I0922 12:30:26.605638       1 controller_utils.go:1036] Caches are synced for crd-autoregister controller
I0922 12:30:26.606283       1 cache.go:39] Caches are synced for AvailableConditionController controller
I0922 12:30:26.656496       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0922 12:30:27.503566       1 controller.go:107] OpenAPI AggregationController: Processing item
I0922 12:30:27.503628       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0922 12:30:27.503648       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0922 12:30:27.510013       1 storage_scheduling.go:119] created PriorityClass system-node-critical with value 2000001000
I0922 12:30:27.520026       1 storage_scheduling.go:119] created PriorityClass system-cluster-critical with value 2000000000
I0922 12:30:27.520089       1 storage_scheduling.go:128] all system priority classes are created successfully or already exist.
I0922 12:30:27.928394       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0922 12:30:27.973768       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0922 12:30:28.107545       1 lease.go:223] Resetting endpoints for master service "kubernetes" to [172.17.0.3]
I0922 12:30:28.108028       1 controller.go:606] quota admission added evaluator for: endpoints
I0922 12:30:28.376979       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0922 12:30:28.774836       1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0922 12:30:29.414516       1 controller.go:606] quota admission added evaluator for: deployments.apps
I0922 12:30:29.760542       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0922 12:30:36.023824       1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0922 12:30:36.043125       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
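
Note that the apiserver comes up cleanly and serves on `[::]:8443` inside the container; only the route from WSL2 to 172.17.0.3:8443 fails. For completeness, the port mapping published for the node can be inspected with the standard docker CLI (the exact host port varies per cluster):

```
# Shows how 8443 inside the minikube container is published on the host,
# e.g. 127.0.0.1:<random-port> with the docker driver:
docker port minikube 8443
```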

==> kube-controller-manager [b1feb8dbe555] <==
I0922 12:30:33.522841       1 tokencleaner.go:116] Starting token cleaner controller
I0922 12:30:33.522860       1 controller_utils.go:1029] Waiting for caches to sync for token_cleaner controller
I0922 12:30:33.623120       1 controller_utils.go:1036] Caches are synced for token_cleaner controller
I0922 12:30:33.772483       1 controllermanager.go:532] Started "endpoint"
I0922 12:30:33.772556       1 endpoints_controller.go:166] Starting endpoint controller
I0922 12:30:33.772573       1 controller_utils.go:1029] Waiting for caches to sync for endpoint controller
I0922 12:30:34.022523       1 controllermanager.go:532] Started "deployment"
I0922 12:30:34.022590       1 deployment_controller.go:152] Starting deployment controller
I0922 12:30:34.022607       1 controller_utils.go:1029] Waiting for caches to sync for deployment controller
E0922 12:30:34.272626       1 core.go:76] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0922 12:30:34.272672       1 controllermanager.go:524] Skipping "service"
I0922 12:30:34.522571       1 controllermanager.go:532] Started "pvc-protection"
I0922 12:30:34.522607       1 pvc_protection_controller.go:100] Starting PVC protection controller
I0922 12:30:34.522630       1 controller_utils.go:1029] Waiting for caches to sync for PVC protection controller
I0922 12:30:35.223024       1 controllermanager.go:532] Started "horizontalpodautoscaling"
W0922 12:30:35.223104       1 controllermanager.go:524] Skipping "nodeipam"
I0922 12:30:35.223058       1 horizontal.go:156] Starting HPA controller
I0922 12:30:35.225538       1 controller_utils.go:1029] Waiting for caches to sync for HPA controller
I0922 12:30:35.225111       1 controller_utils.go:1029] Waiting for caches to sync for garbage collector controller
I0922 12:30:35.232662       1 controller_utils.go:1029] Waiting for caches to sync for resource quota controller
W0922 12:30:35.235723       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0922 12:30:35.272306       1 controller_utils.go:1036] Caches are synced for PV protection controller
I0922 12:30:35.272970       1 controller_utils.go:1036] Caches are synced for certificate controller
I0922 12:30:35.273073       1 controller_utils.go:1036] Caches are synced for certificate controller
I0922 12:30:35.273077       1 controller_utils.go:1036] Caches are synced for bootstrap_signer controller
I0922 12:30:35.295278       1 controller_utils.go:1036] Caches are synced for TTL controller
I0922 12:30:35.303090       1 log.go:172] [INFO] signed certificate with serial number 247459004568357080238576935397259663880471698330
I0922 12:30:35.372823       1 controller_utils.go:1036] Caches are synced for ClusterRoleAggregator controller
E0922 12:30:35.383227       1 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
I0922 12:30:35.425944       1 controller_utils.go:1036] Caches are synced for HPA controller
I0922 12:30:35.456780       1 controller_utils.go:1036] Caches are synced for attach detach controller
I0922 12:30:35.472927       1 controller_utils.go:1036] Caches are synced for endpoint controller
I0922 12:30:35.473947       1 controller_utils.go:1036] Caches are synced for GC controller
I0922 12:30:35.510181       1 controller_utils.go:1036] Caches are synced for ReplicationController controller
I0922 12:30:35.522787       1 controller_utils.go:1036] Caches are synced for PVC protection controller
I0922 12:30:35.522804       1 controller_utils.go:1036] Caches are synced for taint controller
I0922 12:30:35.522892       1 node_lifecycle_controller.go:1424] Initializing eviction metric for zone:
I0922 12:30:35.522900       1 taint_manager.go:182] Starting NoExecuteTaintManager
W0922 12:30:35.522940       1 node_lifecycle_controller.go:1036] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0922 12:30:35.523016       1 node_lifecycle_controller.go:1240] Controller detected that zone  is now in state Normal.
I0922 12:30:35.523127       1 event.go:258] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"d0e0e086-5853-4885-b567-70589761fe89", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
I0922 12:30:35.722904       1 controller_utils.go:1036] Caches are synced for job controller
I0922 12:30:35.829253       1 controller_utils.go:1036] Caches are synced for namespace controller
I0922 12:30:35.924618       1 controller_utils.go:1036] Caches are synced for service account controller
I0922 12:30:35.958281       1 controller_utils.go:1036] Caches are synced for ReplicaSet controller
I0922 12:30:35.975719       1 controller_utils.go:1036] Caches are synced for resource quota controller
I0922 12:30:36.022771       1 controller_utils.go:1036] Caches are synced for disruption controller
I0922 12:30:36.022799       1 disruption.go:338] Sending events to api server.
I0922 12:30:36.022772       1 controller_utils.go:1036] Caches are synced for deployment controller
I0922 12:30:36.025220       1 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"1de2eb1b-dbe0-4701-b349-822f6f90b2f6", APIVersion:"apps/v1", ResourceVersion:"223", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-5d4dd4b4db to 1
I0922 12:30:36.025961       1 controller_utils.go:1036] Caches are synced for garbage collector controller
I0922 12:30:36.029890       1 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5d4dd4b4db", UID:"97a10c07-0752-474f-b1da-9c6843bc75e8", APIVersion:"apps/v1", ResourceVersion:"333", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5d4dd4b4db-sjzm5
I0922 12:30:36.032906       1 controller_utils.go:1036] Caches are synced for resource quota controller
I0922 12:30:36.041559       1 controller_utils.go:1036] Caches are synced for daemon sets controller
I0922 12:30:36.064756       1 event.go:258] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"2677dd45-df65-4f6a-9b09-6c5170046684", APIVersion:"apps/v1", ResourceVersion:"213", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-gbnrt
I0922 12:30:36.072978       1 controller_utils.go:1036] Caches are synced for stateful set controller
I0922 12:30:36.077507       1 controller_utils.go:1036] Caches are synced for garbage collector controller
I0922 12:30:36.077532       1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0922 12:30:36.122968       1 controller_utils.go:1036] Caches are synced for persistent volume controller
I0922 12:30:36.124278       1 controller_utils.go:1036] Caches are synced for expand controller

==> kube-proxy [c1a26e8eb270] <==
W0922 12:30:36.868612       1 proxier.go:500] Failed to read file /lib/modules/4.19.128-microsoft-standard/modules.builtin with error open /lib/modules/4.19.128-microsoft-standard/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0922 12:30:36.870769       1 proxier.go:513] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0922 12:30:36.871540       1 proxier.go:513] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0922 12:30:36.872369       1 proxier.go:513] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0922 12:30:36.872992       1 proxier.go:513] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0922 12:30:36.873488       1 proxier.go:513] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0922 12:30:36.877953       1 server_others.go:249] Flag proxy-mode="" unknown, assuming iptables proxy
I0922 12:30:36.886353       1 server_others.go:143] Using iptables Proxier.
W0922 12:30:36.886481       1 proxier.go:321] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0922 12:30:36.886769       1 server.go:534] Version: v1.15.12
I0922 12:30:36.894255       1 conntrack.go:52] Setting nf_conntrack_max to 524288
E0922 12:30:36.894578       1 conntrack.go:127] sysfs is not writable: {Device:sysfs Path:/sys Type:sysfs Opts:[ro nosuid nodev noexec relatime] Freq:0 Pass:0} (mount options are [ro nosuid nodev noexec relatime])
I0922 12:30:36.895211       1 config.go:96] Starting endpoints config controller
I0922 12:30:36.895269       1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
I0922 12:30:36.895323       1 config.go:187] Starting service config controller
I0922 12:30:36.895355       1 controller_utils.go:1029] Waiting for caches to sync for service config controller
I0922 12:30:36.995498       1 controller_utils.go:1036] Caches are synced for endpoints config controller
I0922 12:30:36.995613       1 controller_utils.go:1036] Caches are synced for service config controller

==> kube-scheduler [acd67483b7b6] <==
I0922 12:30:22.961538       1 serving.go:319] Generated self-signed cert in-memory
W0922 12:30:24.067171       1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0922 12:30:24.067243       1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0922 12:30:24.067261       1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0922 12:30:24.070611       1 server.go:142] Version: v1.15.12
I0922 12:30:24.070722       1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W0922 12:30:24.072194       1 authorization.go:47] Authorization is disabled
W0922 12:30:24.072240       1 authentication.go:55] Authentication is disabled
I0922 12:30:24.072258       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0922 12:30:24.073935       1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
E0922 12:30:26.558435       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0922 12:30:26.562340       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0922 12:30:26.562743       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0922 12:30:26.562985       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0922 12:30:26.563038       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0922 12:30:26.563223       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0922 12:30:26.563277       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0922 12:30:26.563281       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0922 12:30:26.563463       1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0922 12:30:26.564053       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0922 12:30:27.560132       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0922 12:30:27.563429       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0922 12:30:27.564549       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0922 12:30:27.565681       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0922 12:30:27.566564       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0922 12:30:27.567841       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0922 12:30:27.657165       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0922 12:30:27.658562       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0922 12:30:27.659793       1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0922 12:30:27.660969       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope

==> kubelet <==
-- Logs begin at Tue 2020-09-22 12:14:53 UTC, end at Tue 2020-09-22 12:36:41 UTC. --
Sep 22 12:30:23 minikube kubelet[16018]: E0922 12:30:23.890400   16018 kubelet.go:2252] node "minikube" not found
Sep 22 12:30:23 minikube kubelet[16018]: E0922 12:30:23.990625   16018 kubelet.go:2252] node "minikube" not found
Sep 22 12:30:24 minikube kubelet[16018]: E0922 12:30:24.090943   16018 kubelet.go:2252] node "minikube" not found
Sep 22 12:30:24 minikube kubelet[16018]: E0922 12:30:24.191219   16018 kubelet.go:2252] node "minikube" not found
Sep 22 12:30:24 minikube kubelet[16018]: I0922 12:30:24.266084   16018 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
Sep 22 12:30:24 minikube kubelet[16018]: I0922 12:30:24.266171   16018 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
Sep 22 12:30:24 minikube kubelet[16018]: I0922 12:30:24.266099   16018 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
Sep 22 12:30:24 minikube kubelet[16018]: I0922 12:30:24.266114   16018 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
Sep 22 12:30:24 minikube kubelet[16018]: E0922 12:30:24.356867   16018 kubelet.go:2252] node "minikube" not found
Sep 22 12:30:24 minikube kubelet[16018]: E0922 12:30:24.457293   16018 kubelet.go:2252] node "minikube" not found
Sep 22 12:30:24 minikube kubelet[16018]: E0922 12:30:24.557756   16018 kubelet.go:2252] node "minikube" not found
Sep 22 12:30:24 minikube kubelet[16018]: E0922 12:30:24.658114   16018 kubelet.go:2252] node "minikube" not found
Sep 22 12:30:24 minikube kubelet[16018]: E0922 12:30:24.758390   16018 kubelet.go:2252] node "minikube" not found
Sep 22 12:30:24 minikube kubelet[16018]: E0922 12:30:24.858687   16018 kubelet.go:2252] node "minikube" not found
Sep 22 12:30:24 minikube kubelet[16018]: E0922 12:30:24.958924   16018 kubelet.go:2252] node "minikube" not found
Sep 22 12:30:25 minikube kubelet[16018]: E0922 12:30:25.059101   16018 kubelet.go:2252] node "minikube" not found
Sep 22 12:30:25 minikube kubelet[16018]: E0922 12:30:25.159314   16018 kubelet.go:2252] node "minikube" not found
Sep 22 12:30:25 minikube kubelet[16018]: E0922 12:30:25.259643   16018 kubelet.go:2252] node "minikube" not found
Sep 22 12:30:25 minikube kubelet[16018]: E0922 12:30:25.360031   16018 kubelet.go:2252] node "minikube" not found
Sep 22 12:30:25 minikube kubelet[16018]: E0922 12:30:25.460306   16018 kubelet.go:2252] node "minikube" not found
Sep 22 12:30:25 minikube kubelet[16018]: E0922 12:30:25.560627   16018 kubelet.go:2252] node "minikube" not found
Sep 22 12:30:25 minikube kubelet[16018]: E0922 12:30:25.660910   16018 kubelet.go:2252] node "minikube" not found
Sep 22 12:30:25 minikube kubelet[16018]: E0922 12:30:25.761165   16018 kubelet.go:2252] node "minikube" not found
Sep 22 12:30:25 minikube kubelet[16018]: E0922 12:30:25.861483   16018 kubelet.go:2252] node "minikube" not found
Sep 22 12:30:25 minikube kubelet[16018]: E0922 12:30:25.961805   16018 kubelet.go:2252] node "minikube" not found
Sep 22 12:30:26 minikube kubelet[16018]: E0922 12:30:26.061995   16018 kubelet.go:2252] node "minikube" not found
Sep 22 12:30:26 minikube kubelet[16018]: E0922 12:30:26.162388   16018 kubelet.go:2252] node "minikube" not found
Sep 22 12:30:26 minikube kubelet[16018]: E0922 12:30:26.262648   16018 kubelet.go:2252] node "minikube" not found
Sep 22 12:30:26 minikube kubelet[16018]: E0922 12:30:26.362847   16018 kubelet.go:2252] node "minikube" not found
Sep 22 12:30:26 minikube kubelet[16018]: E0922 12:30:26.463022   16018 kubelet.go:2252] node "minikube" not found
Sep 22 12:30:26 minikube kubelet[16018]: E0922 12:30:26.563465   16018 kubelet.go:2252] node "minikube" not found
Sep 22 12:30:26 minikube kubelet[16018]: E0922 12:30:26.656978   16018 controller.go:204] failed to get node "minikube" when trying to set owner ref to the node lease: nodes "minikube" not found
Sep 22 12:30:26 minikube kubelet[16018]: E0922 12:30:26.658383   16018 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.16371af267d9fc4c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfd299773a6d644c, ext:825515309, loc:(*time.Location)(0x7645720)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfd299773a6d644c, ext:825515309, loc:(*time.Location)(0x7645720)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Sep 22 12:30:26 minikube kubelet[16018]: E0922 12:30:26.663616   16018 kubelet.go:2252] node "minikube" not found
Sep 22 12:30:26 minikube kubelet[16018]: I0922 12:30:26.667906   16018 reconciler.go:150] Reconciler: start to sync state
Sep 22 12:30:26 minikube kubelet[16018]: I0922 12:30:26.675077   16018 kubelet_node_status.go:75] Successfully registered node minikube
Sep 22 12:30:26 minikube kubelet[16018]: E0922 12:30:26.771647   16018 controller.go:125] failed to ensure node lease exists, will retry in 1.6s, error: namespaces "kube-node-lease" not found
Sep 22 12:30:26 minikube kubelet[16018]: E0922 12:30:26.787384   16018 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.16371af272b9377f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfd2997749b1d57f, ext:1007917252, loc:(*time.Location)(0x7645720)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfd2997749b1d57f, ext:1007917252, loc:(*time.Location)(0x7645720)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Sep 22 12:30:26 minikube kubelet[16018]: E0922 12:30:26.910205   16018 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.16371af272b9b6d8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfd2997749b254d8, ext:1007949853, loc:(*time.Location)(0x7645720)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfd2997749b254d8, ext:1007949853, loc:(*time.Location)(0x7645720)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Sep 22 12:30:26 minikube kubelet[16018]: E0922 12:30:26.964264   16018 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.16371af272b9c8d0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfd2997749b266d0, ext:1007954353, loc:(*time.Location)(0x7645720)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfd2997749b266d0, ext:1007954353, loc:(*time.Location)(0x7645720)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Sep 22 12:30:26 minikube kubelet[16018]: W0922 12:30:26.993463   16018 kubelet_getters.go:301] Path "/var/lib/kubelet/pods/17fbb26a-a326-496c-bff9-847d266b6f45/volumes" does not exist
Sep 22 12:30:26 minikube kubelet[16018]: W0922 12:30:26.993571   16018 kubelet_getters.go:301] Path "/var/lib/kubelet/pods/e78e780b-0fe7-41a4-8379-05788d0470be/volumes" does not exist
Sep 22 12:30:26 minikube kubelet[16018]: W0922 12:30:26.993597   16018 kubelet_getters.go:301] Path "/var/lib/kubelet/pods/dc7513a2-a386-4272-b02e-ee78d20a18fb/volumes" does not exist
Sep 22 12:30:27 minikube kubelet[16018]: E0922 12:30:27.021190   16018 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.16371af272b9b6d8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfd2997749b254d8, ext:1007949853, loc:(*time.Location)(0x7645720)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfd299774aac4cc0, ext:1024331781, loc:(*time.Location)(0x7645720)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Sep 22 12:30:27 minikube kubelet[16018]: E0922 12:30:27.078178   16018 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.16371af272b9377f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfd2997749b1d57f, ext:1007917252, loc:(*time.Location)(0x7645720)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfd299774aac3bf4, ext:1024327381, loc:(*time.Location)(0x7645720)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Sep 22 12:30:27 minikube kubelet[16018]: E0922 12:30:27.134580   16018 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.16371af272b9c8d0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfd2997749b266d0, ext:1007954353, loc:(*time.Location)(0x7645720)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfd299774aac5684, ext:1024334281, loc:(*time.Location)(0x7645720)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Sep 22 12:30:27 minikube kubelet[16018]: E0922 12:30:27.188859   16018 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.16371af273c2b09a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfd299774abb4e9a, ext:1025315195, loc:(*time.Location)(0x7645720)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfd299774abb4e9a, ext:1025315195, loc:(*time.Location)(0x7645720)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Sep 22 12:30:27 minikube kubelet[16018]: E0922 12:30:27.245748   16018 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.16371af272b9377f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfd2997749b1d57f, ext:1007917252, loc:(*time.Location)(0x7645720)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfd2997753f88c67, ext:1180323656, loc:(*time.Location)(0x7645720)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Sep 22 12:30:27 minikube kubelet[16018]: E0922 12:30:27.518206   16018 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.16371af272b9b6d8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfd2997749b254d8, ext:1007949853, loc:(*time.Location)(0x7645720)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfd2997753f8de0b, ext:1180344556, loc:(*time.Location)(0x7645720)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Sep 22 12:30:27 minikube kubelet[16018]: E0922 12:30:27.910366   16018 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.16371af272b9c8d0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfd2997749b266d0, ext:1007954353, loc:(*time.Location)(0x7645720)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfd2997753f8e7cf, ext:1180347156, loc:(*time.Location)(0x7645720)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Sep 22 12:30:35 minikube kubelet[16018]: I0922 12:30:35.672001   16018 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/a2c60059-5a78-44f6-b0f2-f1c53c25b4c3-tmp") pod "storage-provisioner" (UID: "a2c60059-5a78-44f6-b0f2-f1c53c25b4c3")
Sep 22 12:30:35 minikube kubelet[16018]: I0922 12:30:35.672160   16018 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-hsgfm" (UniqueName: "kubernetes.io/secret/a2c60059-5a78-44f6-b0f2-f1c53c25b4c3-storage-provisioner-token-hsgfm") pod "storage-provisioner" (UID: "a2c60059-5a78-44f6-b0f2-f1c53c25b4c3")
Sep 22 12:30:36 minikube kubelet[16018]: I0922 12:30:36.173640   16018 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ea5d5a99-581d-4432-8dc7-70c1ec9c6e0c-config-volume") pod "coredns-5d4dd4b4db-sjzm5" (UID: "ea5d5a99-581d-4432-8dc7-70c1ec9c6e0c")
Sep 22 12:30:36 minikube kubelet[16018]: I0922 12:30:36.173685   16018 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-zmsbd" (UniqueName: "kubernetes.io/secret/ea5d5a99-581d-4432-8dc7-70c1ec9c6e0c-coredns-token-zmsbd") pod "coredns-5d4dd4b4db-sjzm5" (UID: "ea5d5a99-581d-4432-8dc7-70c1ec9c6e0c")
Sep 22 12:30:36 minikube kubelet[16018]: I0922 12:30:36.173704   16018 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/5d0f55b5-c7ed-49ab-af38-8af4d871371a-lib-modules") pod "kube-proxy-gbnrt" (UID: "5d0f55b5-c7ed-49ab-af38-8af4d871371a")
Sep 22 12:30:36 minikube kubelet[16018]: I0922 12:30:36.173719   16018 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-9pbbs" (UniqueName: "kubernetes.io/secret/5d0f55b5-c7ed-49ab-af38-8af4d871371a-kube-proxy-token-9pbbs") pod "kube-proxy-gbnrt" (UID: "5d0f55b5-c7ed-49ab-af38-8af4d871371a")
Sep 22 12:30:36 minikube kubelet[16018]: I0922 12:30:36.173733   16018 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/empty-dir/ea5d5a99-581d-4432-8dc7-70c1ec9c6e0c-tmp") pod "coredns-5d4dd4b4db-sjzm5" (UID: "ea5d5a99-581d-4432-8dc7-70c1ec9c6e0c")
Sep 22 12:30:36 minikube kubelet[16018]: I0922 12:30:36.173794   16018 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/5d0f55b5-c7ed-49ab-af38-8af4d871371a-kube-proxy") pod "kube-proxy-gbnrt" (UID: "5d0f55b5-c7ed-49ab-af38-8af4d871371a")
Sep 22 12:30:36 minikube kubelet[16018]: I0922 12:30:36.173820   16018 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/5d0f55b5-c7ed-49ab-af38-8af4d871371a-xtables-lock") pod "kube-proxy-gbnrt" (UID: "5d0f55b5-c7ed-49ab-af38-8af4d871371a")
Sep 22 12:30:40 minikube kubelet[16018]: I0922 12:30:40.477907   16018 transport.go:132] certificate rotation detected, shutting down client connections to start using new credentials

==> storage-provisioner [9a1f512ff930] <==
I0922 12:30:36.184208       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
I0922 12:30:36.187777       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0922 12:30:36.187882       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dca96aed-feb3-4455-a06e-0e0746f38996", APIVersion:"v1", ResourceVersion:"354", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_64d358b7-004d-4407-aa90-150ec2c9ac71 became leader
I0922 12:30:36.187913       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_minikube_64d358b7-004d-4407-aa90-150ec2c9ac71!
I0922 12:30:36.288240       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_minikube_64d358b7-004d-4407-aa90-150ec2c9ac71!
</details>
RA489 commented 4 years ago

/triage support

lyqht commented 3 years ago

Any update on this? Same setup on WSL2 (image attached).

priyawadhwa commented 3 years ago

Hey @davidedmonds @lyqht, I'm not very familiar with this error on Windows, but it looks like a number of minikube users have faced a similar issue in this thread: https://github.com/kubernetes/minikube/issues/5392

Perhaps something in that thread would provide some guidance for you?
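
In the meantime, here is a rough sketch of some checks that could help narrow things down before digging into that thread. It assumes the default `minikube` profile and that `kubectl` is installed inside WSL2; this is general triage, not a fix:

```bash
# Confirm the versions in play (a newer release may already include a fix)
minikube version
docker version

# Check whether the control-plane container is actually up
minikube status
docker ps --filter "name=minikube"

# If the API server is reachable, storage classes should list cleanly
kubectl get storageclass

# Collect full logs to attach to a report (--file writes them to disk)
minikube logs --file=minikube-logs.txt
```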

medyagh commented 3 years ago

@davidedmonds @lyqht I wonder if you have tried the latest version of minikube; we fixed WSL support in recent releases. If you still have this issue, please reopen.
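
For reference, a minimal sketch of the upgrade-and-retry path inside WSL2. It assumes a Linux amd64 install to `/usr/local/bin` (the URL below is minikube's standard latest-release download); adjust for your architecture:

```bash
# Install the latest minikube release (Linux amd64)
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube

# Remove stale cluster state so the retry starts clean
minikube delete

# Retry with the docker driver; --alsologtostderr prints verbose logs
# in case the failure persists and needs to be reported
minikube start --driver=docker --alsologtostderr
```

Note that `minikube delete` removes the existing cluster and its data, so only run it if nothing in the old cluster needs to be kept.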