kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

docker driver on Windows 10 Home #8924

Closed: ale8k closed this issue 4 years ago

ale8k commented 4 years ago

Environment: Windows 10 Home 10.0.19041, minikube v1.12.2 installed via choco, Docker 19.03.12 (WSL2 backend); full version details are in the log below.

Steps to reproduce the issue (exact commands sketched just after this list):

  1. Install minikube via choco.
  2. Run: minikube start --driver=docker (optionally with memory set to 2 or 4 GB).
  3. Wait a very long time (20 minutes to 1 hour).
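For reference, a minimal sketch of the commands as typed (assumptions: an elevated PowerShell prompt, and --memory set to whichever value was being tried at the time, e.g. 4096):

PS> choco install minikube
PS> minikube start --driver=docker --memory=4096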
Things I've tried

Full output of failed command:

I've posted as much of the output as I can; the start never finishes and just keeps looping through the same steps forever.

PS C:\WINDOWS\system32> minikube start --v=5 --alsologtostderr
I0805 17:05:22.799706   12456 out.go:191] Setting JSON to false
I0805 17:05:22.810718   12456 start.go:100] hostinfo: {"hostname":"DESKTOP-30110V3","uptime":7183,"bootTime":1596636339,"procs":334,"os":"windows","platform":"Microsoft Windows 10 Home","platformFamily":"Standalone Workstation","platformVersion":"10.0.19041 Build 19041","kernelVersion":"","virtualizationSystem":"","virtualizationRole":"","hostid":"4ed8f32a-7905-4014-83c3-2f7a79627df6"}
W0805 17:05:22.810718   12456 start.go:108] gopshost.Virtualization returned error: not implemented yet
* minikube v1.12.2 on Microsoft Windows 10 Home 10.0.19041 Build 19041
I0805 17:05:22.818705   12456 notify.go:125] Checking for updates...
I0805 17:05:22.819706   12456 driver.go:287] Setting default libvirt URI to qemu:///system
I0805 17:05:23.002739   12456 docker.go:87] docker version: linux-19.03.12
* Using the docker driver based on existing profile
I0805 17:05:23.005737   12456 start.go:229] selected driver: docker
I0805 17:05:23.006736   12456 start.go:635] validating driver "docker" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.11@sha256:6fee59db7d67ed8ae6835e4bcb02f32056dc95f11cb369c51e352b62dd198aa0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.3 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.3 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]}
I0805 17:05:23.007736   12456 start.go:646] status for docker: {Installed:true Healthy:true NeedsImprovement:false Error:<nil> Fix: Doc:}
I0805 17:05:23.035735   12456 cli_runner.go:109] Run: docker system info --format "{{json .}}"
I0805 17:05:24.471581   12456 start_flags.go:344] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.11@sha256:6fee59db7d67ed8ae6835e4bcb02f32056dc95f11cb369c51e352b62dd198aa0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.3 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.3 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]}
* Starting control plane node minikube in cluster minikube
I0805 17:05:24.652581   12456 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.11@sha256:6fee59db7d67ed8ae6835e4bcb02f32056dc95f11cb369c51e352b62dd198aa0 in local docker daemon, skipping pull
I0805 17:05:24.652581   12456 cache.go:113] gcr.io/k8s-minikube/kicbase:v0.0.11@sha256:6fee59db7d67ed8ae6835e4bcb02f32056dc95f11cb369c51e352b62dd198aa0 exists in daemon, skipping pull
I0805 17:05:24.654585   12456 preload.go:97] Checking if preload exists for k8s version v1.18.3 and runtime docker
I0805 17:05:24.655582   12456 preload.go:105] Found local preload: C:\Users\alexa\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v5-v1.18.3-docker-overlay2-amd64.tar.lz4
I0805 17:05:24.655582   12456 cache.go:51] Caching tarball of preloaded images
I0805 17:05:24.658583   12456 preload.go:131] Found C:\Users\alexa\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v5-v1.18.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0805 17:05:24.658583   12456 cache.go:54] Finished verifying existence of preloaded tar for  v1.18.3 on docker
I0805 17:05:24.659581   12456 profile.go:150] Saving config to C:\Users\alexa\.minikube\profiles\minikube\config.json ...
I0805 17:05:24.661581   12456 cache.go:181] Successfully downloaded all kic artifacts
I0805 17:05:24.662583   12456 start.go:241] acquiring machines lock for minikube: {Name:mk5120602512b6a5292922d3470881da00090fe7 Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0805 17:05:24.664584   12456 start.go:245] acquired machines lock for "minikube" in 0s
I0805 17:05:24.665583   12456 start.go:89] Skipping create...Using existing machine configuration
I0805 17:05:24.665583   12456 fix.go:53] fixHost starting:
I0805 17:05:24.724583   12456 cli_runner.go:109] Run: docker container inspect minikube --format={{.State.Status}}
I0805 17:05:24.865582   12456 fix.go:105] recreateIfNeeded on minikube: state=Running err=<nil>
W0805 17:05:24.865582   12456 fix.go:131] unexpected machine state, will restart: <nil>
* Updating the running docker "minikube" container ...
I0805 17:05:24.871582   12456 machine.go:88] provisioning docker machine ...
I0805 17:05:24.871582   12456 ubuntu.go:166] provisioning hostname "minikube"
I0805 17:05:24.899583   12456 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0805 17:05:25.049582   12456 main.go:115] libmachine: Using SSH client type: native
I0805 17:05:25.050584   12456 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7b7420] 0x7b73f0 <nil>  [] 0s} 127.0.0.1 32775 <nil> <nil>}
I0805 17:05:25.051581   12456 main.go:115] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0805 17:05:25.217625   12456 main.go:115] libmachine: SSH cmd err, output: <nil>: minikube

I0805 17:05:25.250624   12456 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0805 17:05:25.390625   12456 main.go:115] libmachine: Using SSH client type: native
I0805 17:05:25.390625   12456 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7b7420] 0x7b73f0 <nil>  [] 0s} 127.0.0.1 32775 <nil> <nil>}
I0805 17:05:25.392624   12456 main.go:115] libmachine: About to run SSH command:

                if ! grep -xq '.*\sminikube' /etc/hosts; then
                        if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
                        else
                                echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
                        fi
                fi
I0805 17:05:25.536623   12456 main.go:115] libmachine: SSH cmd err, output: <nil>:
I0805 17:05:25.536623   12456 ubuntu.go:172] set auth options {CertDir:C:\Users\alexa\.minikube CaCertPath:C:\Users\alexa\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\alexa\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\alexa\.minikube\machines\server.pem ServerKeyPath:C:\Users\alexa\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\alexa\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\alexa\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\alexa\.minikube}
I0805 17:05:25.537626   12456 ubuntu.go:174] setting up certificates
I0805 17:05:25.538625   12456 provision.go:82] configureAuth start
I0805 17:05:25.568623   12456 cli_runner.go:109] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0805 17:05:25.703624   12456 provision.go:131] copyHostCerts
I0805 17:05:25.703624   12456 vm_assets.go:95] NewFileAsset: C:\Users\alexa\.minikube\certs\cert.pem -> C:\Users\alexa\.minikube/cert.pem
I0805 17:05:25.704626   12456 exec_runner.go:91] found C:\Users\alexa\.minikube/cert.pem, removing ...
I0805 17:05:25.705624   12456 exec_runner.go:98] cp: C:\Users\alexa\.minikube\certs\cert.pem --> C:\Users\alexa\.minikube/cert.pem (1074 bytes)
I0805 17:05:25.706625   12456 vm_assets.go:95] NewFileAsset: C:\Users\alexa\.minikube\certs\key.pem -> C:\Users\alexa\.minikube/key.pem
I0805 17:05:25.707627   12456 exec_runner.go:91] found C:\Users\alexa\.minikube/key.pem, removing ...
I0805 17:05:25.708624   12456 exec_runner.go:98] cp: C:\Users\alexa\.minikube\certs\key.pem --> C:\Users\alexa\.minikube/key.pem (1679 bytes)
I0805 17:05:25.709624   12456 vm_assets.go:95] NewFileAsset: C:\Users\alexa\.minikube\certs\ca.pem -> C:\Users\alexa\.minikube/ca.pem
I0805 17:05:25.709624   12456 exec_runner.go:91] found C:\Users\alexa\.minikube/ca.pem, removing ...
I0805 17:05:25.710629   12456 exec_runner.go:98] cp: C:\Users\alexa\.minikube\certs\ca.pem --> C:\Users\alexa\.minikube/ca.pem (1034 bytes)
I0805 17:05:25.711626   12456 provision.go:105] generating server cert: C:\Users\alexa\.minikube\machines\server.pem ca-key=C:\Users\alexa\.minikube\certs\ca.pem private-key=C:\Users\alexa\.minikube\certs\ca-key.pem org=alexa.minikube san=[172.17.0.3 localhost 127.0.0.1 minikube minikube]
I0805 17:05:25.850624   12456 provision.go:159] copyRemoteCerts
I0805 17:05:25.889625   12456 ssh_runner.go:148] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0805 17:05:25.917624   12456 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0805 17:05:26.054625   12456 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32775 SSHKeyPath:C:\Users\alexa\.minikube\machines\minikube\id_rsa Username:docker}
I0805 17:05:26.168628   12456 vm_assets.go:95] NewFileAsset: C:\Users\alexa\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
I0805 17:05:26.169625   12456 ssh_runner.go:215] scp C:\Users\alexa\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0805 17:05:26.218624   12456 vm_assets.go:95] NewFileAsset: C:\Users\alexa\.minikube\certs\ca.pem -> /etc/docker/ca.pem
I0805 17:05:26.218624   12456 ssh_runner.go:215] scp C:\Users\alexa\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1034 bytes)
I0805 17:05:26.274144   12456 vm_assets.go:95] NewFileAsset: C:\Users\alexa\.minikube\machines\server.pem -> /etc/docker/server.pem
I0805 17:05:26.275144   12456 ssh_runner.go:215] scp C:\Users\alexa\.minikube\machines\server.pem --> /etc/docker/server.pem (1143 bytes)
I0805 17:05:26.324145   12456 provision.go:85] duration metric: configureAuth took 784.5206ms
I0805 17:05:26.324145   12456 ubuntu.go:190] setting minikube options for container-runtime
I0805 17:05:26.354664   12456 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0805 17:05:26.492662   12456 main.go:115] libmachine: Using SSH client type: native
I0805 17:05:26.493663   12456 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7b7420] 0x7b73f0 <nil>  [] 0s} 127.0.0.1 32775 <nil> <nil>}
I0805 17:05:26.494662   12456 main.go:115] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0805 17:05:26.626663   12456 main.go:115] libmachine: SSH cmd err, output: <nil>: overlay

I0805 17:05:26.626663   12456 ubuntu.go:71] root file system type: overlay
I0805 17:05:26.627665   12456 provision.go:290] Updating docker unit: /lib/systemd/system/docker.service ...
I0805 17:05:26.657663   12456 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0805 17:05:26.795663   12456 main.go:115] libmachine: Using SSH client type: native
I0805 17:05:26.795663   12456 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7b7420] 0x7b73f0 <nil>  [] 0s} 127.0.0.1 32775 <nil> <nil>}
I0805 17:05:26.797662   12456 main.go:115] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0805 17:05:26.961664   12456 main.go:115] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0805 17:05:26.992663   12456 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0805 17:05:27.136662   12456 main.go:115] libmachine: Using SSH client type: native
I0805 17:05:27.136662   12456 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7b7420] 0x7b73f0 <nil>  [] 0s} 127.0.0.1 32775 <nil> <nil>}
I0805 17:05:27.138663   12456 main.go:115] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0805 17:05:27.289662   12456 main.go:115] libmachine: SSH cmd err, output: <nil>:
I0805 17:05:27.289662   12456 machine.go:91] provisioned docker machine in 2.4180799s
I0805 17:05:27.290663   12456 start.go:204] post-start starting for "minikube" (driver="docker")
I0805 17:05:27.291667   12456 start.go:214] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0805 17:05:27.331669   12456 ssh_runner.go:148] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0805 17:05:27.363662   12456 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0805 17:05:27.495664   12456 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32775 SSHKeyPath:C:\Users\alexa\.minikube\machines\minikube\id_rsa Username:docker}
I0805 17:05:27.648663   12456 ssh_runner.go:148] Run: cat /etc/os-release
I0805 17:05:27.653663   12456 main.go:115] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0805 17:05:27.653663   12456 main.go:115] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0805 17:05:27.654664   12456 main.go:115] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0805 17:05:27.655662   12456 info.go:101] Remote host: Ubuntu 20.04 LTS
I0805 17:05:27.655662   12456 filesync.go:118] Scanning C:\Users\alexa\.minikube\addons for local assets ...
I0805 17:05:27.656663   12456 filesync.go:118] Scanning C:\Users\alexa\.minikube\files for local assets ...
I0805 17:05:27.657664   12456 start.go:207] post-start completed in 365.9964ms
I0805 17:05:27.658664   12456 fix.go:55] fixHost completed within 2.993081s
I0805 17:05:27.659668   12456 start.go:76] releasing machines lock for "minikube", held for 2.9940849s
I0805 17:05:27.689663   12456 cli_runner.go:109] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0805 17:05:27.818670   12456 ssh_runner.go:148] Run: curl -sS -m 2 https://k8s.gcr.io/
I0805 17:05:27.860662   12456 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0805 17:05:27.869661   12456 ssh_runner.go:148] Run: systemctl --version
I0805 17:05:27.906665   12456 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0805 17:05:28.007663   12456 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32775 SSHKeyPath:C:\Users\alexa\.minikube\machines\minikube\id_rsa Username:docker}
I0805 17:05:28.061676   12456 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32775 SSHKeyPath:C:\Users\alexa\.minikube\machines\minikube\id_rsa Username:docker}
I0805 17:05:28.258663   12456 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service containerd
I0805 17:05:28.318660   12456 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I0805 17:05:28.340662   12456 cruntime.go:192] skipping containerd shutdown because we are bound to it
I0805 17:05:28.378662   12456 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service crio
I0805 17:05:28.437664   12456 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I0805 17:05:28.497662   12456 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I0805 17:05:28.627183   12456 ssh_runner.go:148] Run: sudo systemctl start docker
I0805 17:05:28.680183   12456 ssh_runner.go:148] Run: docker version --format {{.Server.Version}}
* Preparing Kubernetes v1.18.3 on Docker 19.03.8 ...
I0805 17:05:28.762737   12456 cli_runner.go:109] Run: docker exec -t minikube dig +short host.docker.internal
I0805 17:05:28.962741   12456 network.go:57] got host ip for mount in container by digging dns: 192.168.65.2
I0805 17:05:29.013739   12456 ssh_runner.go:148] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts
I0805 17:05:29.057738   12456 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0805 17:05:29.191737   12456 preload.go:97] Checking if preload exists for k8s version v1.18.3 and runtime docker
I0805 17:05:29.191737   12456 preload.go:105] Found local preload: C:\Users\alexa\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v5-v1.18.3-docker-overlay2-amd64.tar.lz4
I0805 17:05:29.222741   12456 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0805 17:05:29.263737   12456 docker.go:381] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/storage-provisioner:v2
kubernetesui/dashboard:v2.0.1
k8s.gcr.io/kube-proxy:v1.18.3
k8s.gcr.io/kube-apiserver:v1.18.3
k8s.gcr.io/kube-scheduler:v1.18.3
k8s.gcr.io/kube-controller-manager:v1.18.3
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0

-- /stdout --
I0805 17:05:29.263737   12456 docker.go:319] Images already preloaded, skipping extraction
I0805 17:05:29.294736   12456 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0805 17:05:29.334738   12456 docker.go:381] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/storage-provisioner:v2
kubernetesui/dashboard:v2.0.1
k8s.gcr.io/kube-proxy:v1.18.3
k8s.gcr.io/kube-scheduler:v1.18.3
k8s.gcr.io/kube-apiserver:v1.18.3
k8s.gcr.io/kube-controller-manager:v1.18.3
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0

-- /stdout --
I0805 17:05:29.334738   12456 cache_images.go:69] Images are preloaded, skipping loading
I0805 17:05:29.365737   12456 ssh_runner.go:148] Run: docker info --format {{.CgroupDriver}}
I0805 17:05:29.414736   12456 cni.go:74] Creating CNI manager for ""
I0805 17:05:29.414736   12456 cni.go:117] CNI unnecessary in this configuration, recommending no CNI
I0805 17:05:29.416736   12456 kubeadm.go:84] Using pod CIDR:
I0805 17:05:29.416736   12456 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet: AdvertiseAddress:172.17.0.3 APIServerPort:8443 KubernetesVersion:v1.18.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.3"]]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0805 17:05:29.417735   12456 kubeadm.go:154] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.3
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 172.17.0.3
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "172.17.0.3"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
controllerManager:
  extraArgs:
    "leader-elect": "false"
scheduler:
  extraArgs:
    "leader-elect": "false"
kubernetesVersion: v1.18.3
networking:
  dnsDomain: cluster.local
  podSubnet: ""
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: ""
metricsBindAddress: 172.17.0.3:10249

I0805 17:05:29.418737   12456 kubeadm.go:796] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.3

[Install]
 config:
{KubernetesVersion:v1.18.3 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0805 17:05:29.458736   12456 ssh_runner.go:148] Run: sudo ls /var/lib/minikube/binaries/v1.18.3
I0805 17:05:29.477737   12456 binaries.go:43] Found k8s binaries, skipping transfer
I0805 17:05:29.516736   12456 ssh_runner.go:148] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0805 17:05:29.534736   12456 ssh_runner.go:215] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
I0805 17:05:29.584737   12456 ssh_runner.go:215] scp memory --> /lib/systemd/system/kubelet.service (349 bytes)
I0805 17:05:29.630738   12456 ssh_runner.go:215] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1730 bytes)
I0805 17:05:29.722742   12456 ssh_runner.go:148] Run: grep 172.17.0.3   control-plane.minikube.internal$ /etc/hosts
I0805 17:05:29.771738   12456 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I0805 17:05:29.903737   12456 ssh_runner.go:148] Run: sudo systemctl start kubelet
I0805 17:05:29.925736   12456 certs.go:52] Setting up C:\Users\alexa\.minikube\profiles\minikube for IP: 172.17.0.3
I0805 17:05:29.925736   12456 certs.go:169] skipping minikubeCA CA generation: C:\Users\alexa\.minikube\ca.key
I0805 17:05:29.926739   12456 certs.go:169] skipping proxyClientCA CA generation: C:\Users\alexa\.minikube\proxy-client-ca.key
I0805 17:05:29.927737   12456 certs.go:269] skipping minikube-user signed cert generation: C:\Users\alexa\.minikube\profiles\minikube\client.key
I0805 17:05:29.928736   12456 certs.go:269] skipping minikube signed cert generation: C:\Users\alexa\.minikube\profiles\minikube\apiserver.key.0f3e66d0
I0805 17:05:29.928736   12456 certs.go:269] skipping aggregator signed cert generation: C:\Users\alexa\.minikube\profiles\minikube\proxy-client.key
I0805 17:05:29.929736   12456 vm_assets.go:95] NewFileAsset: C:\Users\alexa\.minikube\profiles\minikube\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0805 17:05:29.930737   12456 vm_assets.go:95] NewFileAsset: C:\Users\alexa\.minikube\profiles\minikube\apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0805 17:05:29.930737   12456 vm_assets.go:95] NewFileAsset: C:\Users\alexa\.minikube\profiles\minikube\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0805 17:05:29.931737   12456 vm_assets.go:95] NewFileAsset: C:\Users\alexa\.minikube\profiles\minikube\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0805 17:05:29.932736   12456 vm_assets.go:95] NewFileAsset: C:\Users\alexa\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
I0805 17:05:29.933736   12456 vm_assets.go:95] NewFileAsset: C:\Users\alexa\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
I0805 17:05:29.933736   12456 vm_assets.go:95] NewFileAsset: C:\Users\alexa\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0805 17:05:29.934736   12456 vm_assets.go:95] NewFileAsset: C:\Users\alexa\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0805 17:05:29.935735   12456 certs.go:348] found cert: C:\Users\alexa\.minikube\certs\C:\Users\alexa\.minikube\certs\ca-key.pem (1675 bytes)
I0805 17:05:29.936736   12456 certs.go:348] found cert: C:\Users\alexa\.minikube\certs\C:\Users\alexa\.minikube\certs\ca.pem (1034 bytes)
I0805 17:05:29.937737   12456 certs.go:348] found cert: C:\Users\alexa\.minikube\certs\C:\Users\alexa\.minikube\certs\cert.pem (1074 bytes)
I0805 17:05:29.938737   12456 certs.go:348] found cert: C:\Users\alexa\.minikube\certs\C:\Users\alexa\.minikube\certs\key.pem (1679 bytes)
I0805 17:05:29.938737   12456 vm_assets.go:95] NewFileAsset: C:\Users\alexa\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0805 17:05:29.940735   12456 ssh_runner.go:215] scp C:\Users\alexa\.minikube\profiles\minikube\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1350 bytes)
I0805 17:05:29.989740   12456 ssh_runner.go:215] scp C:\Users\alexa\.minikube\profiles\minikube\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0805 17:05:30.039745   12456 ssh_runner.go:215] scp C:\Users\alexa\.minikube\profiles\minikube\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1103 bytes)
I0805 17:05:30.091738   12456 ssh_runner.go:215] scp C:\Users\alexa\.minikube\profiles\minikube\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0805 17:05:30.148736   12456 ssh_runner.go:215] scp C:\Users\alexa\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1066 bytes)
I0805 17:05:30.195735   12456 ssh_runner.go:215] scp C:\Users\alexa\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0805 17:05:30.257735   12456 ssh_runner.go:215] scp C:\Users\alexa\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1074 bytes)
I0805 17:05:30.303737   12456 ssh_runner.go:215] scp C:\Users\alexa\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0805 17:05:30.352736   12456 ssh_runner.go:215] scp C:\Users\alexa\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1066 bytes)
I0805 17:05:30.401739   12456 ssh_runner.go:215] scp memory --> /var/lib/minikube/kubeconfig (392 bytes)
I0805 17:05:30.486738   12456 ssh_runner.go:148] Run: openssl version
I0805 17:05:30.528734   12456 ssh_runner.go:148] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0805 17:05:30.582735   12456 ssh_runner.go:148] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0805 17:05:30.587737   12456 certs.go:389] hashing: -rw-r--r-- 1 root root 1066 Aug  5 11:35 /usr/share/ca-certificates/minikubeCA.pem
I0805 17:05:30.624737   12456 ssh_runner.go:148] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0805 17:05:30.667738   12456 ssh_runner.go:148] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0805 17:05:30.686736   12456 kubeadm.go:327] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.11@sha256:6fee59db7d67ed8ae6835e4bcb02f32056dc95f11cb369c51e352b62dd198aa0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.3 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.3 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]}
I0805 17:05:30.712742   12456 ssh_runner.go:148] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0805 17:05:30.783345   12456 ssh_runner.go:148] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0805 17:05:30.802346   12456 kubeadm.go:338] found existing configuration files, will attempt cluster restart
I0805 17:05:30.802346   12456 kubeadm.go:512] restartCluster start
I0805 17:05:30.839344   12456 ssh_runner.go:148] Run: sudo test -d /data/minikube
I0805 17:05:30.856346   12456 kubeadm.go:122] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:

stderr:
I0805 17:05:30.882347   12456 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0805 17:05:31.001347   12456 kubeadm.go:516] restartCluster took 197.0017ms
! Unable to restart cluster, will reset it: getting k8s client: client config: client config: context "minikube" does not exist
I0805 17:05:31.003346   12456 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm reset --cri-socket npipe:////./pipe/docker_engine --force"
I0805 17:06:11.402151   12456 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm reset --cri-socket npipe:////./pipe/docker_engine --force": (40.3988052s)
I0805 17:06:11.439150   12456 ssh_runner.go:148] Run: sudo systemctl stop -f kubelet
I0805 17:06:11.486163   12456 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
W0805 17:06:11.523152   12456 kubeadm.go:728] found 16 kube-system containers to stop
I0805 17:06:11.523152   12456 docker.go:229] Stopping containers: [53ad4dd86761 bc9d3a37ea3a f523ace36e6c 04139762a9d4 7556aba4efa9 6abac4191b8c 12e3d2faf872 6d5d589de4b8 ae9d0b1c32dc 2f541bfa1dc6 ea98c3523a31 e7d01cb93c7e 09e5dfb5efcd c9173d17e4c9 687eead55fa4 712d02faf62a]
I0805 17:06:11.551133   12456 ssh_runner.go:148] Run: docker stop 53ad4dd86761 bc9d3a37ea3a f523ace36e6c 04139762a9d4 7556aba4efa9 6abac4191b8c 12e3d2faf872 6d5d589de4b8 ae9d0b1c32dc 2f541bfa1dc6 ea98c3523a31 e7d01cb93c7e 09e5dfb5efcd c9173d17e4c9 687eead55fa4 712d02faf62a
I0805 17:06:21.783082   12456 ssh_runner.go:188] Completed: docker stop 53ad4dd86761 bc9d3a37ea3a f523ace36e6c 04139762a9d4 7556aba4efa9 6abac4191b8c 12e3d2faf872 6d5d589de4b8 ae9d0b1c32dc 2f541bfa1dc6 ea98c3523a31 e7d01cb93c7e 09e5dfb5efcd c9173d17e4c9 687eead55fa4 712d02faf62a: (10.2319482s)
I0805 17:06:21.827083   12456 ssh_runner.go:148] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0805 17:06:21.849085   12456 kubeadm.go:211] ignoring SystemVerification for kubeadm because of docker driver
I0805 17:06:21.888094   12456 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0805 17:06:21.908082   12456 kubeadm.go:147] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0805 17:06:21.908082   12456 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0805 17:10:44.964297   12456 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": (4m23.0552154s)
! initialization failed, will try again: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in docker:
                - 'docker ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'docker logs CONTAINERID'

stderr:
W0805 16:06:21.970862    8032 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
        [WARNING Swap]: running with swap on is not supported. Please disable swap
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0805 16:06:23.455150    8032 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0805 16:06:23.456325    8032 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

I0805 17:10:44.965296   12456 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm reset --cri-socket npipe:////./pipe/docker_engine --force"
I0805 17:11:25.363885   12456 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm reset --cri-socket npipe:////./pipe/docker_engine --force": (40.397588s)
I0805 17:11:25.399939   12456 ssh_runner.go:148] Run: sudo systemctl stop -f kubelet
I0805 17:11:25.446929   12456 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
W0805 17:11:25.483925   12456 kubeadm.go:728] found 16 kube-system containers to stop
I0805 17:11:25.484925   12456 docker.go:229] Stopping containers: [a99bccfc834a 4d784442830d 7115cfb8c701 1630cbde270a 2c8cd82bc63a fb6f7f878265 4f2f9ff3310b 56e4eb252fae 53ad4dd86761 bc9d3a37ea3a f523ace36e6c 04139762a9d4 7556aba4efa9 6abac4191b8c 12e3d2faf872 6d5d589de4b8]
I0805 17:11:25.512927   12456 ssh_runner.go:148] Run: docker stop a99bccfc834a 4d784442830d 7115cfb8c701 1630cbde270a 2c8cd82bc63a fb6f7f878265 4f2f9ff3310b 56e4eb252fae 53ad4dd86761 bc9d3a37ea3a f523ace36e6c 04139762a9d4 7556aba4efa9 6abac4191b8c 12e3d2faf872 6d5d589de4b8
I0805 17:11:35.691140   12456 ssh_runner.go:188] Completed: docker stop a99bccfc834a 4d784442830d 7115cfb8c701 1630cbde270a 2c8cd82bc63a fb6f7f878265 4f2f9ff3310b 56e4eb252fae 53ad4dd86761 bc9d3a37ea3a f523ace36e6c 04139762a9d4 7556aba4efa9 6abac4191b8c 12e3d2faf872 6d5d589de4b8: (10.1782131s)
I0805 17:11:35.691140   12456 kubeadm.go:211] ignoring SystemVerification for kubeadm because of docker driver
I0805 17:11:35.734139   12456 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0805 17:11:35.752137   12456 kubeadm.go:147] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0805 17:11:35.753139   12456 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
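
The kubeadm output above already points at the next diagnostic step: check whether the kubelet is actually running and whether the control-plane containers are crash-looping. With the docker driver those checks have to run inside the minikube container, e.g. (a sketch; CONTAINERID is a placeholder for a failing container's ID):

PS> minikube ssh
docker@minikube:~$ sudo systemctl status kubelet
docker@minikube:~$ sudo journalctl -xeu kubelet
docker@minikube:~$ docker ps -a | grep kube | grep -v pause
docker@minikube:~$ docker logs CONTAINERID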

Optional: Full output of minikube logs command:

PS>minikube logs * ==> Docker <== * -- Logs begin at Wed 2020-08-05 15:55:40 UTC, end at Wed 2020-08-05 16:14:02 UTC. -- * Aug 05 15:56:54 minikube dockerd[388]: time="2020-08-05T15:56:54.210992900Z" level=info msg="Starting up" * Aug 05 15:56:54 minikube dockerd[388]: time="2020-08-05T15:56:54.214724200Z" level=info msg="parsed scheme: \"unix\"" module=grpc * Aug 05 15:56:54 minikube dockerd[388]: time="2020-08-05T15:56:54.214797100Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc * Aug 05 15:56:54 minikube dockerd[388]: time="2020-08-05T15:56:54.214833200Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc * Aug 05 15:56:54 minikube dockerd[388]: time="2020-08-05T15:56:54.214845900Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc * Aug 05 15:56:54 minikube dockerd[388]: time="2020-08-05T15:56:54.216379200Z" level=info msg="parsed scheme: \"unix\"" module=grpc * Aug 05 15:56:54 minikube dockerd[388]: time="2020-08-05T15:56:54.216439400Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc * Aug 05 15:56:54 minikube dockerd[388]: time="2020-08-05T15:56:54.216467100Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc * Aug 05 15:56:54 minikube dockerd[388]: time="2020-08-05T15:56:54.216489400Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc * Aug 05 15:56:54 minikube dockerd[388]: time="2020-08-05T15:56:54.226824700Z" level=info msg="[graphdriver] using prior storage driver: overlay2" * Aug 05 15:56:54 minikube dockerd[388]: time="2020-08-05T15:56:54.239451100Z" level=warning msg="Your kernel does not support cgroup blkio weight" * Aug 05 15:56:54 minikube dockerd[388]: time="2020-08-05T15:56:54.239494400Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" * Aug 05 15:56:54 minikube dockerd[388]: time="2020-08-05T15:56:54.239503600Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device" * Aug 05 15:56:54 minikube dockerd[388]: time="2020-08-05T15:56:54.239509100Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device" * Aug 05 15:56:54 minikube dockerd[388]: time="2020-08-05T15:56:54.239514500Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device" * Aug 05 15:56:54 minikube dockerd[388]: time="2020-08-05T15:56:54.239519400Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device" * Aug 05 15:56:54 minikube dockerd[388]: time="2020-08-05T15:56:54.239683000Z" level=info msg="Loading containers: start." * Aug 05 15:56:54 minikube dockerd[388]: time="2020-08-05T15:56:54.240998700Z" level=warning msg="Running modprobe bridge br_netfilter failed with message: modprobe: WARNING: Module bridge not found in directory /lib/modules/4.19.104-microsoft-standard\nmodprobe: WARNING: Module br_netfilter not found in directory /lib/modules/4.19.104-microsoft-standard\n, error: exit status 1" * Aug 05 15:56:54 minikube dockerd[388]: time="2020-08-05T15:56:54.329661800Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address" * Aug 05 15:56:54 minikube dockerd[388]: time="2020-08-05T15:56:54.366283700Z" level=info msg="Loading containers: done." 
* Aug 05 15:56:54 minikube dockerd[388]: time="2020-08-05T15:56:54.382335900Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8
* Aug 05 15:56:54 minikube dockerd[388]: time="2020-08-05T15:56:54.382433700Z" level=info msg="Daemon has completed initialization"
* Aug 05 15:56:54 minikube dockerd[388]: time="2020-08-05T15:56:54.400423900Z" level=info msg="API listen on /var/run/docker.sock"
* Aug 05 15:56:54 minikube dockerd[388]: time="2020-08-05T15:56:54.400523600Z" level=info msg="API listen on [::]:2376"
* Aug 05 15:56:54 minikube systemd[1]: Started Docker Application Container Engine.
* Aug 05 16:02:04 minikube dockerd[388]: time="2020-08-05T16:02:04.466069600Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Aug 05 16:02:04 minikube dockerd[388]: time="2020-08-05T16:02:04.717819100Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Aug 05 16:02:04 minikube dockerd[388]: time="2020-08-05T16:02:04.724404900Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Aug 05 16:02:04 minikube dockerd[388]: time="2020-08-05T16:02:04.725532500Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Aug 05 16:02:04 minikube dockerd[388]: time="2020-08-05T16:02:04.727602300Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Aug 05 16:02:04 minikube dockerd[388]: time="2020-08-05T16:02:04.727718800Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Aug 05 16:02:04 minikube dockerd[388]: time="2020-08-05T16:02:04.730082900Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Aug 05 16:02:14 minikube dockerd[388]: time="2020-08-05T16:02:14.509724400Z" level=info msg="Container c264627962e807603a13bfd43d808d4edeb9ab768aa159ee324bb5b32cf0d742 failed to exit within 10 seconds of signal 15 - using the force"
* Aug 05 16:02:14 minikube dockerd[388]: time="2020-08-05T16:02:14.568084500Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Aug 05 16:03:13 minikube dockerd[388]: time="2020-08-05T16:03:13.210113000Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Aug 05 16:03:13 minikube dockerd[388]: time="2020-08-05T16:03:13.217977300Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Aug 05 16:03:13 minikube dockerd[388]: time="2020-08-05T16:03:13.226095900Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Aug 05 16:03:13 minikube dockerd[388]: time="2020-08-05T16:03:13.229267800Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Aug 05 16:03:13 minikube dockerd[388]: time="2020-08-05T16:03:13.229360500Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Aug 05 16:03:13 minikube dockerd[388]: time="2020-08-05T16:03:13.229396800Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Aug 05 16:03:13 minikube dockerd[388]: time="2020-08-05T16:03:13.229430800Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Aug 05 16:03:13 minikube dockerd[388]: time="2020-08-05T16:03:13.231810500Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Aug 05 16:06:11 minikube dockerd[388]: time="2020-08-05T16:06:11.820038200Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Aug 05 16:06:11 minikube dockerd[388]: time="2020-08-05T16:06:11.899219400Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Aug 05 16:06:11 minikube dockerd[388]: time="2020-08-05T16:06:11.899456900Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Aug 05 16:06:11 minikube dockerd[388]: time="2020-08-05T16:06:11.899484800Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Aug 05 16:06:11 minikube dockerd[388]: time="2020-08-05T16:06:11.900056700Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Aug 05 16:06:11 minikube dockerd[388]: time="2020-08-05T16:06:11.900687000Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Aug 05 16:06:11 minikube dockerd[388]: time="2020-08-05T16:06:11.918849200Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Aug 05 16:06:21 minikube dockerd[388]: time="2020-08-05T16:06:21.698922900Z" level=info msg="Container f523ace36e6cd32514daf78d82fc545cc2f401f116f8c7bb4572db387e7c523d failed to exit within 10 seconds of signal 15 - using the force"
* Aug 05 16:06:21 minikube dockerd[388]: time="2020-08-05T16:06:21.758126600Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Aug 05 16:11:25 minikube dockerd[388]: time="2020-08-05T16:11:25.814351100Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Aug 05 16:11:25 minikube dockerd[388]: time="2020-08-05T16:11:25.821710400Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Aug 05 16:11:25 minikube dockerd[388]: time="2020-08-05T16:11:25.825872800Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Aug 05 16:11:25 minikube dockerd[388]: time="2020-08-05T16:11:25.826070000Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Aug 05 16:11:25 minikube dockerd[388]: time="2020-08-05T16:11:25.826101400Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Aug 05 16:11:25 minikube dockerd[388]: time="2020-08-05T16:11:25.826122500Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Aug 05 16:11:25 minikube dockerd[388]: time="2020-08-05T16:11:25.826434900Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Aug 05 16:11:35 minikube dockerd[388]: time="2020-08-05T16:11:35.609131200Z" level=info msg="Container a99bccfc834a2322fdabe28c7af7f4cb65c4419ca6577c7cc9fc3e5c3ccca7e5 failed to exit within 10 seconds of signal 15 - using the force"
* Aug 05 16:11:35 minikube dockerd[388]: time="2020-08-05T16:11:35.669506200Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
* c362f17e7ac04 303ce5db0e90d 2 minutes ago Running etcd 4 9b888e81996a8
* 8e2c4077e83ab 76216c34ed0c7 2 minutes ago Running kube-scheduler 4 a4618dace078b
* ab3e18304cd05 da26705ccb4b5 2 minutes ago Running kube-controller-manager 4 3277f10bd9039
* c24e9af0db778 7e28efa976bd1 2 minutes ago Running kube-apiserver 4 8d9312757a688
* a99bccfc834a2 7e28efa976bd1 7 minutes ago Exited kube-apiserver 3 4d784442830d4
* 7115cfb8c7018 da26705ccb4b5 7 minutes ago Exited kube-controller-manager 3 fb6f7f8782657
* 1630cbde270ae 303ce5db0e90d 7 minutes ago Exited etcd 3 4f2f9ff3310b9
* 2c8cd82bc63ac 76216c34ed0c7 7 minutes ago Exited kube-scheduler 3 56e4eb252fae1
*
* ==> describe nodes <==
* No resources found in default namespace.
*
* ==> dmesg <==
* [Aug 5 15:11] WSL2: Performing memory compaction.
* [Aug 5 15:12] WSL2: Performing memory compaction.
* [Aug 5 15:13] WSL2: Performing memory compaction.
* [Aug 5 15:14] WSL2: Performing memory compaction.
* [Aug 5 15:15] WSL2: Performing memory compaction.
* [Aug 5 15:16] WSL2: Performing memory compaction.
* [Aug 5 15:17] WSL2: Performing memory compaction.
* [Aug 5 15:18] WSL2: Performing memory compaction.
* [Aug 5 15:19] WSL2: Performing memory compaction.
* [Aug 5 15:20] WSL2: Performing memory compaction.
* [Aug 5 15:21] WSL2: Performing memory compaction.
* [Aug 5 15:22] WSL2: Performing memory compaction.
* [Aug 5 15:23] WSL2: Performing memory compaction.
* [Aug 5 15:24] WSL2: Performing memory compaction.
* [Aug 5 15:25] WSL2: Performing memory compaction.
* [Aug 5 15:26] WSL2: Performing memory compaction.
* [Aug 5 15:27] WSL2: Performing memory compaction.
* [Aug 5 15:28] WSL2: Performing memory compaction.
* [Aug 5 15:29] WSL2: Performing memory compaction.
* [Aug 5 15:30] WSL2: Performing memory compaction.
* [Aug 5 15:31] WSL2: Performing memory compaction.
* [Aug 5 15:32] WSL2: Performing memory compaction.
* [Aug 5 15:33] WSL2: Performing memory compaction.
* [Aug 5 15:34] WSL2: Performing memory compaction.
* [Aug 5 15:35] WSL2: Performing memory compaction.
* [Aug 5 15:36] WSL2: Performing memory compaction.
* [Aug 5 15:37] WSL2: Performing memory compaction.
* [Aug 5 15:38] WSL2: Performing memory compaction.
* [Aug 5 15:39] WSL2: Performing memory compaction.
* [Aug 5 15:40] WSL2: Performing memory compaction.
* [Aug 5 15:41] WSL2: Performing memory compaction.
* [Aug 5 15:42] WSL2: Performing memory compaction.
* [Aug 5 15:43] WSL2: Performing memory compaction.
* [Aug 5 15:44] WSL2: Performing memory compaction.
* [Aug 5 15:45] WSL2: Performing memory compaction.
* [Aug 5 15:46] WSL2: Performing memory compaction.
* [Aug 5 15:47] WSL2: Performing memory compaction.
* [Aug 5 15:48] WSL2: Performing memory compaction.
* [Aug 5 15:49] WSL2: Performing memory compaction.
* [Aug 5 15:51] WSL2: Performing memory compaction.
* [Aug 5 15:52] WSL2: Performing memory compaction.
* [Aug 5 15:53] WSL2: Performing memory compaction.
* [Aug 5 15:56] WSL2: Performing memory compaction.
* [Aug 5 15:57] WSL2: Performing memory compaction.
* [Aug 5 15:58] WSL2: Performing memory compaction.
* [Aug 5 15:59] WSL2: Performing memory compaction.
* [Aug 5 16:00] WSL2: Performing memory compaction.
* [Aug 5 16:01] WSL2: Performing memory compaction.
* [Aug 5 16:02] WSL2: Performing memory compaction.
* [Aug 5 16:03] WSL2: Performing memory compaction.
* [Aug 5 16:04] WSL2: Performing memory compaction.
* [Aug 5 16:05] WSL2: Performing memory compaction.
* [Aug 5 16:06] WSL2: Performing memory compaction.
* [Aug 5 16:07] WSL2: Performing memory compaction.
* [Aug 5 16:08] WSL2: Performing memory compaction.
* [Aug 5 16:09] WSL2: Performing memory compaction.
* [Aug 5 16:10] WSL2: Performing memory compaction.
* [Aug 5 16:11] WSL2: Performing memory compaction.
* [Aug 5 16:12] WSL2: Performing memory compaction.
* [Aug 5 16:13] WSL2: Performing memory compaction.
*
* ==> etcd [1630cbde270a] <==
* [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
* 2020-08-05 16:06:25.536993 I | etcdmain: etcd Version: 3.4.3
* 2020-08-05 16:06:25.537017 I | etcdmain: Git SHA: 3cf2f69b5
* 2020-08-05 16:06:25.537020 I | etcdmain: Go Version: go1.12.12
* 2020-08-05 16:06:25.537023 I | etcdmain: Go OS/Arch: linux/amd64
* 2020-08-05 16:06:25.537027 I | etcdmain: setting maximum number of CPUs to 12, total number of available CPUs is 12
* [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
* 2020-08-05 16:06:25.537117 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
* 2020-08-05 16:06:25.537600 I | embed: name = minikube
* 2020-08-05 16:06:25.537627 I | embed: data dir = /var/lib/minikube/etcd
* 2020-08-05 16:06:25.537632 I | embed: member dir = /var/lib/minikube/etcd/member
* 2020-08-05 16:06:25.537635 I | embed: heartbeat = 100ms
* 2020-08-05 16:06:25.537637 I | embed: election = 1000ms
* 2020-08-05 16:06:25.537640 I | embed: snapshot count = 10000
* 2020-08-05 16:06:25.537645 I | embed: advertise client URLs = https://172.17.0.3:2379
* 2020-08-05 16:06:25.668763 I | etcdserver: starting member b273bc7741bcb020 in cluster 86482fea2286a1d2
* raft2020/08/05 16:06:25 INFO: b273bc7741bcb020 switched to configuration voters=()
* raft2020/08/05 16:06:25 INFO: b273bc7741bcb020 became follower at term 0
* raft2020/08/05 16:06:25 INFO: newRaft b273bc7741bcb020 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
* raft2020/08/05 16:06:25 INFO: b273bc7741bcb020 became follower at term 1
* raft2020/08/05 16:06:25 INFO: b273bc7741bcb020 switched to configuration voters=(12858828581462913056)
* 2020-08-05 16:06:25.703627 W | auth: simple token is not cryptographically signed
* 2020-08-05 16:06:25.709950 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
* 2020-08-05 16:06:25.710193 I | etcdserver: b273bc7741bcb020 as single-node; fast-forwarding 9 ticks (election ticks 10)
* raft2020/08/05 16:06:25 INFO: b273bc7741bcb020 switched to configuration voters=(12858828581462913056)
* 2020-08-05 16:06:25.710607 I | etcdserver/membership: added member b273bc7741bcb020 [https://172.17.0.3:2380] to cluster 86482fea2286a1d2
* 2020-08-05 16:06:25.711985 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
* 2020-08-05 16:06:25.712099 I | embed: listening for peers on 172.17.0.3:2380
* 2020-08-05 16:06:25.712223 I | embed: listening for metrics on http://127.0.0.1:2381
* raft2020/08/05 16:06:26 INFO: b273bc7741bcb020 is starting a new election at term 1
* raft2020/08/05 16:06:26 INFO: b273bc7741bcb020 became candidate at term 2
* raft2020/08/05 16:06:26 INFO: b273bc7741bcb020 received MsgVoteResp from b273bc7741bcb020 at term 2
* raft2020/08/05 16:06:26 INFO: b273bc7741bcb020 became leader at term 2
* raft2020/08/05 16:06:26 INFO: raft.node: b273bc7741bcb020 elected leader b273bc7741bcb020 at term 2
* 2020-08-05 16:06:26.072364 I | etcdserver: published {Name:minikube ClientURLs:[https://172.17.0.3:2379]} to cluster 86482fea2286a1d2
* 2020-08-05 16:06:26.072399 I | embed: ready to serve client requests
* 2020-08-05 16:06:26.073280 I | embed: serving client requests on 127.0.0.1:2379
* 2020-08-05 16:06:26.073369 I | etcdserver: setting up the initial cluster version to 3.4
* 2020-08-05 16:06:26.073423 I | embed: ready to serve client requests
* 2020-08-05 16:06:26.074193 I | embed: serving client requests on 172.17.0.3:2379
* 2020-08-05 16:06:26.100995 N | etcdserver/membership: set the initial cluster version to 3.4
* 2020-08-05 16:06:26.103116 I | etcdserver/api: enabled capabilities for version 3.4
* 2020-08-05 16:06:31.440769 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:4" took too long (133.9261ms) to execute
* 2020-08-05 16:06:31.440942 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/system:controller:endpointslice-controller\" " with result "range_response_count:0 size:4" took too long (133.9821ms) to execute
* 2020-08-05 16:06:39.051209 W | etcdserver: read-only range request "key:\"/registry/clusterroles/view\" " with result "range_response_count:1 size:2042" took too long (136.1533ms) to execute
* 2020-08-05 16:11:25.601218 N | pkg/osutil: received terminated signal, shutting down...
* WARNING: 2020/08/05 16:11:25 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* 2020-08-05 16:11:25.605120 I | etcdserver: skipped leadership transfer for single voting member cluster
* WARNING: 2020/08/05 16:11:25 grpc: addrConn.createTransport failed to connect to {172.17.0.3:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 172.17.0.3:2379: operation was canceled". Reconnecting...
*
* ==> etcd [c362f17e7ac0] <==
* [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
* 2020-08-05 16:11:39.319051 I | etcdmain: etcd Version: 3.4.3
* 2020-08-05 16:11:39.319074 I | etcdmain: Git SHA: 3cf2f69b5
* 2020-08-05 16:11:39.319077 I | etcdmain: Go Version: go1.12.12
* 2020-08-05 16:11:39.319080 I | etcdmain: Go OS/Arch: linux/amd64
* 2020-08-05 16:11:39.319083 I | etcdmain: setting maximum number of CPUs to 12, total number of available CPUs is 12
* [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
* 2020-08-05 16:11:39.319130 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
* 2020-08-05 16:11:39.319601 I | embed: name = minikube
* 2020-08-05 16:11:39.319627 I | embed: data dir = /var/lib/minikube/etcd
* 2020-08-05 16:11:39.319632 I | embed: member dir = /var/lib/minikube/etcd/member
* 2020-08-05 16:11:39.319635 I | embed: heartbeat = 100ms
* 2020-08-05 16:11:39.319638 I | embed: election = 1000ms
* 2020-08-05 16:11:39.319641 I | embed: snapshot count = 10000
* 2020-08-05 16:11:39.319649 I | embed: advertise client URLs = https://172.17.0.3:2379
* 2020-08-05 16:11:39.337913 I | etcdserver: starting member b273bc7741bcb020 in cluster 86482fea2286a1d2
* raft2020/08/05 16:11:39 INFO: b273bc7741bcb020 switched to configuration voters=()
* raft2020/08/05 16:11:39 INFO: b273bc7741bcb020 became follower at term 0
* raft2020/08/05 16:11:39 INFO: newRaft b273bc7741bcb020 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
* raft2020/08/05 16:11:39 INFO: b273bc7741bcb020 became follower at term 1
* raft2020/08/05 16:11:39 INFO: b273bc7741bcb020 switched to configuration voters=(12858828581462913056)
* 2020-08-05 16:11:39.400771 W | auth: simple token is not cryptographically signed
* 2020-08-05 16:11:39.405993 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
* 2020-08-05 16:11:39.406367 I | etcdserver: b273bc7741bcb020 as single-node; fast-forwarding 9 ticks (election ticks 10)
* raft2020/08/05 16:11:39 INFO: b273bc7741bcb020 switched to configuration voters=(12858828581462913056)
* 2020-08-05 16:11:39.406716 I | etcdserver/membership: added member b273bc7741bcb020 [https://172.17.0.3:2380] to cluster 86482fea2286a1d2
* 2020-08-05 16:11:39.407764 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
* 2020-08-05 16:11:39.407864 I | embed: listening for peers on 172.17.0.3:2380
* 2020-08-05 16:11:39.407948 I | embed: listening for metrics on http://127.0.0.1:2381
* raft2020/08/05 16:11:39 INFO: b273bc7741bcb020 is starting a new election at term 1
* raft2020/08/05 16:11:39 INFO: b273bc7741bcb020 became candidate at term 2
* raft2020/08/05 16:11:39 INFO: b273bc7741bcb020 received MsgVoteResp from b273bc7741bcb020 at term 2
* raft2020/08/05 16:11:39 INFO: b273bc7741bcb020 became leader at term 2
* raft2020/08/05 16:11:39 INFO: raft.node: b273bc7741bcb020 elected leader b273bc7741bcb020 at term 2
* 2020-08-05 16:11:39.998453 I | etcdserver: published {Name:minikube ClientURLs:[https://172.17.0.3:2379]} to cluster 86482fea2286a1d2
* 2020-08-05 16:11:39.998471 I | embed: ready to serve client requests
* 2020-08-05 16:11:39.998717 I | embed: ready to serve client requests
* 2020-08-05 16:11:39.998862 I | etcdserver: setting up the initial cluster version to 3.4
* 2020-08-05 16:11:40.000413 I | embed: serving client requests on 127.0.0.1:2379
* 2020-08-05 16:11:40.000730 I | embed: serving client requests on 172.17.0.3:2379
* 2020-08-05 16:11:40.001145 N | etcdserver/membership: set the initial cluster version to 3.4
* 2020-08-05 16:11:40.001334 I | etcdserver/api: enabled capabilities for version 3.4
*
* ==> kernel <==
* 16:14:03 up 2:07, 0 users, load average: 0.00, 0.07, 0.22
* Linux minikube 4.19.104-microsoft-standard #1 SMP Wed Feb 19 06:37:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
* PRETTY_NAME="Ubuntu 20.04 LTS"
*
* ==> kube-apiserver [a99bccfc834a] <==
* W0805 16:11:33.755307 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:33.991712 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:34.132396 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:34.192309 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:34.305201 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:34.315162 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:34.327306 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:34.349465 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:34.355863 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:34.445646 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:34.466657 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:34.476701 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:34.479045 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:34.481994 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:34.531370 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:34.545560 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:34.608690 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:34.619196 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:34.643451 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:34.667254 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:34.669029 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:34.737949 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:34.749186 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:34.757212 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:34.764942 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:34.793795 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:34.804647 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:34.816109 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:34.818204 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:34.852899 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:34.885338 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:34.896955 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:34.914816 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:34.926271 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:34.957390 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:34.965573 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:34.983241 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:34.992974 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:35.044965 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:35.047482 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:35.062028 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:35.089708 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:35.091476 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:35.138643 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:35.172601 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:35.247837 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:35.269377 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:35.337151 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:35.356071 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:35.384107 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:35.393569 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:35.406954 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:35.429518 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:35.456278 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:35.472720 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:35.474660 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:35.475685 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:35.493428 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:35.522128 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0805 16:11:35.534927 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
*
* ==> kube-apiserver [c24e9af0db77] <==
* I0805 16:11:41.437563 1 client.go:361] parsed scheme: "endpoint"
* I0805 16:11:41.437610 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
* I0805 16:11:41.444847 1 client.go:361] parsed scheme: "endpoint"
* I0805 16:11:41.444905 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
* W0805 16:11:41.586520 1 genericapiserver.go:409] Skipping API batch/v2alpha1 because it has no resources.
* W0805 16:11:41.594483 1 genericapiserver.go:409] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
* W0805 16:11:41.603767 1 genericapiserver.go:409] Skipping API node.k8s.io/v1alpha1 because it has no resources.
* W0805 16:11:41.619576 1 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
* W0805 16:11:41.622679 1 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
* W0805 16:11:41.635847 1 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
* W0805 16:11:41.662448 1 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources.
* W0805 16:11:41.662495 1 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources.
* I0805 16:11:41.671185 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
* I0805 16:11:41.671226 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
* I0805 16:11:41.672883 1 client.go:361] parsed scheme: "endpoint"
* I0805 16:11:41.672926 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
* I0805 16:11:41.679376 1 client.go:361] parsed scheme: "endpoint"
* I0805 16:11:41.679421 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
* I0805 16:11:43.605523 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
* I0805 16:11:43.605524 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
* I0805 16:11:43.605671 1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
* I0805 16:11:43.606149 1 secure_serving.go:178] Serving securely on [::]:8443
* I0805 16:11:43.606190 1 tlsconfig.go:240] Starting DynamicServingCertificateController
* I0805 16:11:43.606322 1 controller.go:81] Starting OpenAPI AggregationController
* I0805 16:11:43.606353 1 crd_finalizer.go:266] Starting CRDFinalizer
* I0805 16:11:43.606412 1 controller.go:86] Starting OpenAPI controller
* I0805 16:11:43.606458 1 customresource_discovery_controller.go:209] Starting DiscoveryController
* I0805 16:11:43.606483 1 naming_controller.go:291] Starting NamingConditionController
* I0805 16:11:43.606513 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
* I0805 16:11:43.606568 1 establishing_controller.go:76] Starting EstablishingController
* I0805 16:11:43.606583 1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
* I0805 16:11:43.606625 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
* I0805 16:11:43.606641 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
* I0805 16:11:43.606664 1 available_controller.go:387] Starting AvailableConditionController
* I0805 16:11:43.606670 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
* I0805 16:11:43.606762 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
* I0805 16:11:43.606796 1 shared_informer.go:223] Waiting for caches to sync for cluster_authentication_trust_controller
* I0805 16:11:43.606864 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
* I0805 16:11:43.606926 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
* E0805 16:11:43.607613 1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.17.0.3, ResourceVersion: 0, AdditionalErrorMsg:
* I0805 16:11:43.607911 1 autoregister_controller.go:141] Starting autoregister controller
* I0805 16:11:43.607924 1 cache.go:32] Waiting for caches to sync for autoregister controller
* I0805 16:11:43.607947 1 crdregistration_controller.go:111] Starting crd-autoregister controller
* I0805 16:11:43.607952 1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
* I0805 16:11:43.706863 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
* I0805 16:11:43.706876 1 cache.go:39] Caches are synced for AvailableConditionController controller
* I0805 16:11:43.707014 1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller
* I0805 16:11:43.708102 1 shared_informer.go:230] Caches are synced for crd-autoregister
* I0805 16:11:43.708108 1 cache.go:39] Caches are synced for autoregister controller
* I0805 16:11:44.605574 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
* I0805 16:11:44.605936 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
* I0805 16:11:44.611644 1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
* I0805 16:11:44.616065 1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
* I0805 16:11:44.616101 1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
* I0805 16:11:45.116095 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
* I0805 16:11:45.155608 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
* W0805 16:11:45.225461 1 lease.go:224] Resetting endpoints for master service "kubernetes" to [172.17.0.3]
* I0805 16:11:45.226441 1 controller.go:606] quota admission added evaluator for: endpoints
* I0805 16:11:45.230307 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
* I0805 16:11:45.882015 1 controller.go:606] quota admission added evaluator for: serviceaccounts
*
* ==> kube-controller-manager [7115cfb8c701] <==
* I0805 16:06:36.757990 1 controllermanager.go:533] Started "daemonset"
* I0805 16:06:36.758058 1 daemon_controller.go:257] Starting daemon sets controller
* I0805 16:06:36.758064 1 shared_informer.go:223] Waiting for caches to sync for daemon sets
* I0805 16:06:37.008255 1 controllermanager.go:533] Started "job"
* I0805 16:06:37.008299 1 job_controller.go:144] Starting job controller
* I0805 16:06:37.008313 1 shared_informer.go:223] Waiting for caches to sync for job
* I0805 16:06:37.258344 1 controllermanager.go:533] Started "replicaset"
* I0805 16:06:37.258380 1 replica_set.go:181] Starting replicaset controller
* I0805 16:06:37.258395 1 shared_informer.go:223] Waiting for caches to sync for ReplicaSet
* I0805 16:06:37.957712 1 controllermanager.go:533] Started "horizontalpodautoscaling"
* I0805 16:06:37.957811 1 horizontal.go:169] Starting HPA controller
* I0805 16:06:37.957818 1 shared_informer.go:223] Waiting for caches to sync for HPA
* I0805 16:06:38.208197 1 controllermanager.go:533] Started "statefulset"
* I0805 16:06:38.208233 1 stateful_set.go:146] Starting stateful set controller
* I0805 16:06:38.208244 1 shared_informer.go:223] Waiting for caches to sync for stateful set
* I0805 16:06:38.357800 1 node_lifecycle_controller.go:78] Sending events to api server
* E0805 16:06:38.357850 1 core.go:229] failed to start cloud node lifecycle controller: no cloud provider provided
* W0805 16:06:38.357858 1 controllermanager.go:525] Skipping "cloud-node-lifecycle"
* I0805 16:06:38.608282 1 controllermanager.go:533] Started "tokencleaner"
* I0805 16:06:38.608320 1 tokencleaner.go:118] Starting token cleaner controller
* I0805 16:06:38.608329 1 shared_informer.go:223] Waiting for caches to sync for token_cleaner
* I0805 16:06:38.608333 1 shared_informer.go:230] Caches are synced for token_cleaner
* E0805 16:06:38.858314 1 core.go:89] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
* W0805 16:06:38.858352 1 controllermanager.go:525] Skipping "service"
* I0805 16:06:38.858881 1 shared_informer.go:223] Waiting for caches to sync for resource quota
* I0805 16:06:38.860682 1 shared_informer.go:223] Waiting for caches to sync for garbage collector
* I0805 16:06:38.870834 1 shared_informer.go:230] Caches are synced for service account
* I0805 16:06:38.880981 1 shared_informer.go:230] Caches are synced for certificate-csrsigning
* I0805 16:06:38.888919 1 shared_informer.go:230] Caches are synced for certificate-csrapproving
* I0805 16:06:38.908182 1 shared_informer.go:230] Caches are synced for TTL
* I0805 16:06:38.908743 1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator
* I0805 16:06:38.963151 1 shared_informer.go:230] Caches are synced for namespace
* E0805 16:06:39.052719 1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
* E0805 16:06:39.055843 1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
* E0805 16:06:39.055928 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
* I0805 16:06:39.058330 1 shared_informer.go:230] Caches are synced for bootstrap_signer
* I0805 16:06:39.126971 1 shared_informer.go:230] Caches are synced for PV protection
* I0805 16:06:39.208246 1 shared_informer.go:230] Caches are synced for expand
* I0805 16:06:39.242366 1 shared_informer.go:230] Caches are synced for endpoint
* I0805 16:06:39.258006 1 shared_informer.go:230] Caches are synced for HPA
* I0805 16:06:39.258289 1 shared_informer.go:230] Caches are synced for daemon sets
* I0805 16:06:39.258578 1 shared_informer.go:230] Caches are synced for ReplicaSet
* I0805 16:06:39.258642 1 shared_informer.go:230] Caches are synced for endpoint_slice
* I0805 16:06:39.258734 1 shared_informer.go:230] Caches are synced for persistent volume
* I0805 16:06:39.259274 1 shared_informer.go:230] Caches are synced for taint
* I0805 16:06:39.259319 1 taint_manager.go:187] Starting NoExecuteTaintManager
* I0805 16:06:39.259870 1 shared_informer.go:230] Caches are synced for PVC protection
* I0805 16:06:39.275800 1 shared_informer.go:230] Caches are synced for deployment
* I0805 16:06:39.308155 1 shared_informer.go:230] Caches are synced for disruption
* I0805 16:06:39.308194 1 disruption.go:339] Sending events to api server.
* I0805 16:06:39.308236 1 shared_informer.go:230] Caches are synced for ReplicationController
* I0805 16:06:39.308510 1 shared_informer.go:230] Caches are synced for job
* I0805 16:06:39.308519 1 shared_informer.go:230] Caches are synced for stateful set
* I0805 16:06:39.308519 1 shared_informer.go:230] Caches are synced for GC
* I0805 16:06:39.309691 1 shared_informer.go:230] Caches are synced for attach detach
* I0805 16:06:39.459101 1 shared_informer.go:230] Caches are synced for resource quota
* I0805 16:06:39.460982 1 shared_informer.go:230] Caches are synced for garbage collector
* I0805 16:06:39.461075 1 shared_informer.go:230] Caches are synced for resource quota
* I0805 16:06:39.559982 1 shared_informer.go:230] Caches are synced for garbage collector
* I0805 16:06:39.560018 1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
*
* ==> kube-controller-manager [ab3e18304cd0] <==
* I0805 16:11:51.681497 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
* I0805 16:11:51.681514 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions
* I0805 16:11:51.681555 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.apps
* I0805 16:11:51.681571 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch
* I0805 16:11:51.681652 1 controllermanager.go:533] Started "resourcequota"
* I0805 16:11:51.681788 1 resource_quota_controller.go:272] Starting resource quota controller
* I0805 16:11:51.681818 1 shared_informer.go:223] Waiting for caches to sync for resource quota
* I0805 16:11:51.681829 1 resource_quota_monitor.go:303] QuotaMonitor running
* I0805 16:11:51.699035 1 controllermanager.go:533] Started "job"
* I0805 16:11:51.699139 1 job_controller.go:144] Starting job controller
* I0805 16:11:51.699148 1 shared_informer.go:223] Waiting for caches to sync for job
* I0805 16:11:51.829253 1 node_lifecycle_controller.go:384] Sending events to api server.
* I0805 16:11:51.829529 1 taint_manager.go:163] Sending events to api server.
* I0805 16:11:51.829623 1 node_lifecycle_controller.go:512] Controller will reconcile labels.
* I0805 16:11:51.829653 1 controllermanager.go:533] Started "nodelifecycle"
* I0805 16:11:51.829742 1 node_lifecycle_controller.go:546] Starting node controller
* I0805 16:11:51.829755 1 shared_informer.go:223] Waiting for caches to sync for taint
* I0805 16:11:52.080103 1 controllermanager.go:533] Started "persistentvolume-expander"
* I0805 16:11:52.080126 1 expand_controller.go:319] Starting expand controller
* I0805 16:11:52.080221 1 shared_informer.go:223] Waiting for caches to sync for expand
* I0805 16:11:52.330188 1 controllermanager.go:533] Started "replicationcontroller"
* I0805 16:11:52.330266 1 replica_set.go:181] Starting replicationcontroller controller
* I0805 16:11:52.330278 1 shared_informer.go:223] Waiting for caches to sync for ReplicationController
* I0805 16:11:52.580077 1 controllermanager.go:533] Started "daemonset"
* I0805 16:11:52.580744 1 shared_informer.go:223] Waiting for caches to sync for garbage collector
* I0805 16:11:52.580100 1 daemon_controller.go:257] Starting daemon sets controller
* I0805 16:11:52.580940 1 shared_informer.go:223] Waiting for caches to sync for daemon sets
* I0805 16:11:52.587430 1 shared_informer.go:230] Caches are synced for bootstrap_signer
* I0805 16:11:52.599483 1 shared_informer.go:230] Caches are synced for job
* I0805 16:11:52.601043 1 shared_informer.go:230] Caches are synced for certificate-csrsigning
* I0805 16:11:52.619383 1 shared_informer.go:230] Caches are synced for HPA
* I0805 16:11:52.630443 1 shared_informer.go:230] Caches are synced for service account
* I0805 16:11:52.659765 1 shared_informer.go:230] Caches are synced for PV protection
* I0805 16:11:52.677505 1 shared_informer.go:230] Caches are synced for TTL
* I0805 16:11:52.679671 1 shared_informer.go:230] Caches are synced for GC
* I0805 16:11:52.679792 1 shared_informer.go:230] Caches are synced for certificate-csrapproving
* I0805 16:11:52.681181 1 shared_informer.go:230] Caches are synced for daemon sets
* I0805 16:11:52.685758 1 shared_informer.go:230] Caches are synced for namespace
* I0805 16:11:52.729836 1 shared_informer.go:230] Caches are synced for endpoint_slice
* I0805 16:11:52.729956 1 shared_informer.go:230] Caches are synced for taint
* I0805 16:11:52.730019 1 taint_manager.go:187] Starting NoExecuteTaintManager
* I0805 16:11:53.029835 1 shared_informer.go:230] Caches are synced for deployment
* I0805 16:11:53.079913 1 shared_informer.go:230] Caches are synced for ReplicaSet
* I0805 16:11:53.129638 1 shared_informer.go:230] Caches are synced for disruption
* I0805 16:11:53.129691 1 shared_informer.go:230] Caches are synced for PVC protection
* I0805 16:11:53.129697 1 disruption.go:339] Sending events to api server.
* I0805 16:11:53.130041 1 shared_informer.go:230] Caches are synced for attach detach
* I0805 16:11:53.130629 1 shared_informer.go:230] Caches are synced for ReplicationController
* I0805 16:11:53.135467 1 shared_informer.go:230] Caches are synced for stateful set
* I0805 16:11:53.179864 1 shared_informer.go:230] Caches are synced for endpoint
* I0805 16:11:53.180180 1 shared_informer.go:230] Caches are synced for persistent volume
* I0805 16:11:53.180556 1 shared_informer.go:230] Caches are synced for expand
* I0805 16:11:53.182162 1 shared_informer.go:230] Caches are synced for resource quota
* I0805 16:11:53.229855 1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator
* I0805 16:11:53.235096 1 shared_informer.go:230] Caches are synced for garbage collector
* I0805 16:11:53.235146 1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
* E0805 16:11:53.250707 1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
* I0805 16:11:53.281001 1 shared_informer.go:230] Caches are synced for garbage collector
* I0805 16:11:53.431585 1 shared_informer.go:223] Waiting for caches to sync for resource quota
* I0805 16:11:53.431694 1 shared_informer.go:230] Caches are synced for resource quota
*
* ==> kube-scheduler [2c8cd82bc63a] <==
* I0805 16:06:25.532794 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* I0805 16:06:25.532861 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* I0805 16:06:26.599375 1 serving.go:313] Generated self-signed cert in-memory
* W0805 16:06:29.899978 1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
* W0805 16:06:29.900171 1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
* W0805 16:06:29.900321 1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
* W0805 16:06:29.900360 1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
* I0805 16:06:29.911574 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* I0805 16:06:29.911670 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* W0805 16:06:29.912922 1 authorization.go:47] Authorization is disabled
* W0805 16:06:29.912954 1 authentication.go:40] Authentication is disabled
* I0805 16:06:29.912967 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
* I0805 16:06:29.914329 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0805 16:06:29.914381 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0805 16:06:29.914561 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
* I0805 16:06:29.914611 1 tlsconfig.go:240] Starting DynamicServingCertificateController
* E0805 16:06:29.916491 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* E0805 16:06:29.916894 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E0805 16:06:29.916862 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
* E0805 16:06:29.916922 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* E0805 16:06:29.917031 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E0805 16:06:29.917068 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
* E0805 16:06:29.917580 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
* E0805 16:06:29.917583 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E0805 16:06:29.917653 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
* E0805 16:06:30.926829 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
* E0805 16:06:30.999074 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
* E0805 16:06:30.999134 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* E0805 16:06:30.999163 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
* E0805 16:06:30.999200 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* E0805 16:06:31.070518 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E0805 16:06:31.099616 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E0805 16:06:31.218357 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* I0805 16:06:33.014821 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kube-scheduler [8e2c4077e83a] <==
* I0805 16:11:39.202095 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* I0805 16:11:39.202161 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* I0805 16:11:40.328166 1 serving.go:313] Generated self-signed cert in-memory
* W0805 16:11:43.623703 1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
* W0805 16:11:43.623735 1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
* W0805 16:11:43.623746 1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
* W0805 16:11:43.623752 1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false * I0805 16:11:43.711112 1 registry.go:150] Registering EvenPodsSpread predicate and priority function * I0805 16:11:43.711167 1 registry.go:150] Registering EvenPodsSpread predicate and priority function * W0805 16:11:43.713203 1 authorization.go:47] Authorization is disabled * W0805 16:11:43.713225 1 authentication.go:40] Authentication is disabled * I0805 16:11:43.713235 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251 * I0805 16:11:43.714902 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259 * I0805 16:11:43.715200 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file * I0805 16:11:43.715242 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file * I0805 16:11:43.715301 1 tlsconfig.go:240] Starting DynamicServingCertificateController * E0805 16:11:43.717638 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" * E0805 16:11:43.717646 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope * E0805 16:11:43.717844 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope * E0805 16:11:43.718013 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope * E0805 16:11:43.718117 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope * E0805 16:11:43.718220 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope * E0805 16:11:43.718447 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope * E0805 16:11:43.719823 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope * E0805 16:11:43.719946 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope * E0805 16:11:44.562242 1 reflector.go:178] 
k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope * E0805 16:11:44.898624 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope * E0805 16:11:44.929326 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope * E0805 16:11:45.007010 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" * I0805 16:11:47.515568 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file * * ==> kubelet <== * -- Logs begin at Wed 2020-08-05 15:55:40 UTC, end at Wed 2020-08-05 16:14:03 UTC. -- * Aug 05 16:13:57 minikube kubelet[10929]: E0805 16:13:57.715996 10929 kubelet.go:2267] node "minikube" not found * Aug 05 16:13:57 minikube kubelet[10929]: E0805 16:13:57.816338 10929 kubelet.go:2267] node "minikube" not found * Aug 05 16:13:57 minikube kubelet[10929]: E0805 16:13:57.916585 10929 kubelet.go:2267] node "minikube" not found * Aug 05 16:13:58 minikube kubelet[10929]: E0805 16:13:58.016826 10929 kubelet.go:2267] node "minikube" not found * Aug 05 16:13:58 minikube kubelet[10929]: E0805 16:13:58.117056 10929 kubelet.go:2267] node "minikube" not found * Aug 05 16:13:58 minikube kubelet[10929]: E0805 16:13:58.217298 10929 kubelet.go:2267] node "minikube" not found * Aug 05 16:13:58 minikube kubelet[10929]: E0805 16:13:58.317593 10929 kubelet.go:2267] node "minikube" not found * Aug 05 16:13:58 minikube kubelet[10929]: I0805 16:13:58.366496 10929 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach * Aug 05 16:13:58 minikube kubelet[10929]: E0805 16:13:58.417811 10929 kubelet.go:2267] node "minikube" not found
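
The scheduler's warning above even suggests a rolebinding fix. Filled in with my own guesses (the binding name and service account below are placeholders I picked myself, untested), that would be something like:

kubectl create rolebinding extension-apiserver-authentication-reader -n kube-system --role=extension-apiserver-authentication-reader --serviceaccount=kube-system:kube-scheduler

That said, the "Caches are synced" lines suggest these RBAC errors clear up on their own once the control plane finishes bootstrapping, so they may be a symptom of the slow start rather than its cause.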

If there's anything else that would be helpful in solving this, let me know and I'll update the issue ASAP.

medyagh commented 4 years ago

@ale8k thank you for taking the time to report this issue. I believe the logs are not fully pasted; do you mind sharing the full logs so I can see the exact error message? Everything pasted so far only has info lines, not the actual error.

Also, do you mind sharing how much memory and how many CPUs your Docker Desktop has? (Docker Desktop > Settings > Resources ...)
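
If it's easier, the following should print what Docker itself sees (assuming a reasonably recent Docker CLI; the format string is just my suggestion):

docker info --format "{{.NCPU}} CPUs, {{.MemTotal}} bytes of memory"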

And have you tried

docker system prune -a

to see if that helps you?
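
Note that prune -a also removes all unused images, so minikube will re-download its base image on the next start. If you want a full reset, roughly this sequence (a sketch, untested on your setup) should do it:

docker system prune -a
minikube delete
minikube start --driver=docker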

By the way, are you using any corporate VPN or proxy?
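
If you are, minikube generally needs the proxy settings exported before start; on Windows that would look roughly like this (the proxy host and port are placeholders):

set HTTP_PROXY=http://<proxy-host>:<port>
set HTTPS_PROXY=http://<proxy-host>:<port>
set NO_PROXY=localhost,127.0.0.1,192.168.99.0/24
minikube start --driver=docker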

medyagh commented 4 years ago

/triage needs-information
/triage support

medyagh commented 4 years ago

@ale8k do you still have this issue? And do you mind pasting the full log with the latest version of minikube? I see the log is partial.
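
If it helps, minikube can write the complete log to a file you can attach here:

minikube logs --file=logs.txt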

medyagh commented 4 years ago

I haven't heard back from you, so I wonder if you still have this issue. Regrettably, there isn't enough information in this issue to make it actionable, and enough time has passed that it is likely difficult to replicate.

I will close this issue for now but please feel free to reopen whenever you feel ready to provide more information.

Chiragmodha commented 3 years ago

I have also run into this problem.

C:\Users\chirag.modha>minikube start --alsologtostderr -v=4 to debug crashes
I0221 12:36:06.055972 7884 out.go:229] Setting OutFile to fd 84 ...
I0221 12:36:06.058974 7884 out.go:276] TERM=,COLORTERM=, which probably does not support color
I0221 12:36:06.059975 7884 out.go:242] Setting ErrFile to fd 88...
I0221 12:36:06.060988 7884 out.go:276] TERM=,COLORTERM=, which probably does not support color
W0221 12:36:06.081000 7884 root.go:266] Error reading config file at C:\Users\vivek gandhi.minikube\config\config.json: open C:\Users\vivek gandhi.minikube\config\config.json: The system cannot find the file specified.
I0221 12:36:06.086232 7884 out.go:236] Setting JSON to false
I0221 12:36:06.091231 7884 start.go:106] hostinfo: {"hostname":"DESKTOP-VK9NG8Q","uptime":8720,"bootTime":1613882446,"procs":225,"os":"windows","platform":"Microsoft Windows 10 Pro","platformFamily":"Standalone Workstation","platformVersion":"10.0.19041 Build 19041","kernelVersion":"10.0.19041 Build 19041","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2db81c1c-cf2a-46d0-be6a-3d3a3ba68f4c"}
W0221 12:36:06.092289 7884 start.go:114] gopshost.Virtualization returned error: not implemented yet
I0221 12:36:06.144692 7884 out.go:119] * minikube v1.17.1 on Microsoft Windows 10 Pro 10.0.19041 Build 19041

stderr:
I0221 12:36:10.476737 7884 cli_runner.go:111] Run: podman container inspect minikube --format={{.State.Status}}
I0221 12:36:10.476737 7884 delete.go:46] couldn't inspect container "minikube" before deleting: unknown state "minikube": podman container inspect minikube --format={{.State.Status}}: exec: "podman": executable file not found in %PATH%
stdout:

stderr:
W0221 12:36:10.478229 7884 start.go:382] delete host: Docker machine "minikube" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
W0221 12:36:10.479279 7884 out.go:181] ! StartHost failed, but will try again: config: please provide an IP address
! StartHost failed, but will try again: config: please provide an IP address
I0221 12:36:10.480986 7884 start.go:392] Will try again in 5 seconds ...
I0221 12:36:15.494256 7884 start.go:313] acquiring machines lock for minikube: {Name:mk31ca9b7cf51714808dad142666d59973a346ce Clock:{} Delay:500ms Timeout:13m0s Cancel:}
I0221 12:36:15.495095 7884 start.go:317] acquired machines lock for "minikube" in 0s
I0221 12:36:15.519863 7884 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:ssh HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
I0221 12:36:15.527385 7884 start.go:126] createHost starting for "" (driver="ssh")
I0221 12:36:15.530542 7884 start.go:129] duration metric: createHost completed in 1.0118ms
I0221 12:36:15.531770 7884 start.go:80] releasing machines lock for "minikube", held for 12.7703ms
W0221 12:36:15.534927 7884 out.go:181] * Failed to start ssh bare metal machine. Running "minikube delete" may fix it: config: please provide an IP address

W0221 12:36:15.635320 7884 out.go:181] X Exiting due to GUEST_PROVISION: Failed to start host: config: please provide an IP address
X Exiting due to GUEST_PROVISION: Failed to start host: config: please provide an IP address
W0221 12:36:15.639771 7884 out.go:181]
W0221 12:36:15.642523 7884 out.go:181] * If the above advice does not help, please let us know:
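
Based on the log's own suggestion ("Running "minikube delete" may fix it") and the fallback to the ssh driver after docker/podman were not found, the next thing to try would presumably be (assuming Docker Desktop is installed and running):

minikube delete
minikube start --driver=docker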