kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Unable to start: no free private network subnets found with given parameters #11156

Closed. etix closed this issue 3 years ago.

etix commented 3 years ago

Steps to reproduce the issue:

  1. minikube start --cpus=6 --memory=12288 --disk-size='40000mb' --kubernetes-version=v1.18.18 --alsologtostderr

Details

OS: ArchLinux (kernel 5.11.15)
Minikube version: 1.19.0
Docker version: 20.10.6
Systemd: 248

I rebooted the system multiple times and also tried the kvm2 driver; the same issue occurs either way.
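
For reference, a minimal sketch of how to check which private subnets are already occupied on such a host (illustrative commands, not taken from this report; the enp38s0 interface and its 192.168.0.0/16 address come from the log below):

    # Host IPv4 addresses and routes; a 192.168.0.0/16 on enp38s0 (as in the log below)
    # overlaps every 192.168.x.0/24 candidate that minikube tries for its network.
    ip -4 addr show
    ip -4 route show

    # Subnets Docker has already allocated to its own networks.
    docker network ls
    docker network inspect bridge --format '{{range .IPAM.Config}}{{.Subnet}} {{end}}'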

Full output of the failed command:

I0420 12:05:31.935612   79734 out.go:278] Setting OutFile to fd 1 ...
I0420 12:05:31.935683   79734 out.go:330] isatty.IsTerminal(1) = true
I0420 12:05:31.935687   79734 out.go:291] Setting ErrFile to fd 2...
I0420 12:05:31.935690   79734 out.go:330] isatty.IsTerminal(2) = true
I0420 12:05:31.935761   79734 root.go:317] Updating PATH: /home/etix/.minikube/bin
W0420 12:05:31.935830   79734 root.go:292] Error reading config file at /home/etix/.minikube/config/config.json: open /home/etix/.minikube/config/config.json: no such file or directory
I0420 12:05:31.935980   79734 out.go:285] Setting JSON to false
I0420 12:05:31.953563   79734 start.go:108] hostinfo: {"hostname":"ryzen","uptime":3124,"bootTime":1618910008,"procs":392,"os":"linux","platform":"arch","platformFamily":"arch","platformVersion":"rolling","kernelVersion":"5.11.15-arch1-2","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostId":"5a508ea3-ff7e-4054-8ca3-624ea1702805"}
I0420 12:05:31.953625   79734 start.go:118] virtualization: kvm host
I0420 12:05:31.957880   79734 out.go:157] 😄  minikube v1.19.0 on Arch rolling
😄  minikube v1.19.0 on Arch rolling
I0420 12:05:31.958002   79734 notify.go:126] Checking for updates...
I0420 12:05:31.958108   79734 driver.go:322] Setting default libvirt URI to qemu:///system
I0420 12:05:31.958131   79734 global.go:103] Querying for installed drivers using PATH=/home/etix/.minikube/bin:/home/etix/go/bin:/var/lib/snapd/snap/bin:/home/etix/Android/Sdk/platform-tools:/home/etix/bin/:/opt/google-cloud-sdk/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/opt/cuda/bin:/opt/cuda/nsight_compute:/opt/cuda/nsight_systems/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl
I0420 12:05:31.981693   79734 docker.go:119] docker version: linux-20.10.6
I0420 12:05:31.981758   79734 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0420 12:05:32.029760   79734 info.go:261] docker info: {ID:W6YK:5ARX:XVU6:6MNS:LQWS:SEC3:2TN2:3RGQ:5AT4:AS5M:ZDZ5:JJIX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff false] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:false NGoroutines:40 SystemTime:2021-04-20 12:05:31.997738973 +0200 CEST LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:5.11.15-arch1-2 OperatingSystem:Arch Linux OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33648889856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ryzen Labels:[] ExperimentalBuild:true ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e.m Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e.m} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Experimental:true Name:buildx Path:/usr/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-tp-docker]] Warnings:<nil>}}
I0420 12:05:32.029876   79734 docker.go:225] overlay module found
I0420 12:05:32.029887   79734 global.go:111] docker default: true priority: 9, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I0420 12:05:32.068435   79734 global.go:111] kvm2 default: true priority: 8, state: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I0420 12:05:32.072058   79734 global.go:111] none default: false priority: 4, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:running the 'none' driver as a regular user requires sudo permissions Reason: Fix: Doc:}
I0420 12:05:32.072129   79734 global.go:111] podman default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "podman": executable file not found in $PATH Reason: Fix:Install Podman Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/}
I0420 12:05:32.072158   79734 global.go:111] ssh default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I0420 12:05:32.251829   79734 global.go:111] virtualbox default: true priority: 6, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I0420 12:05:32.251892   79734 global.go:111] vmware default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "docker-machine-driver-vmware": executable file not found in $PATH Reason: Fix:Install docker-machine-driver-vmware Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/}
I0420 12:05:32.251915   79734 driver.go:258] not recommending "ssh" due to default: false
I0420 12:05:32.251927   79734 driver.go:292] Picked: docker
I0420 12:05:32.251934   79734 driver.go:293] Alternatives: [kvm2 virtualbox ssh]
I0420 12:05:32.251940   79734 driver.go:294] Rejects: [none podman vmware]
I0420 12:05:32.256183   79734 out.go:157] ✨  Automatically selected the docker driver. Other choices: kvm2, virtualbox, ssh
✨  Automatically selected the docker driver. Other choices: kvm2, virtualbox, ssh
I0420 12:05:32.256193   79734 start.go:276] selected driver: docker
I0420 12:05:32.256197   79734 start.go:718] validating driver "docker" against <nil>
I0420 12:05:32.256208   79734 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
W0420 12:05:32.256235   79734 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
W0420 12:05:32.256262   79734 out.go:222] ❗  Your cgroup does not allow setting memory.
❗  Your cgroup does not allow setting memory.
I0420 12:05:32.257138   79734 out.go:157]     ▪ More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
    ▪ More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
I0420 12:05:32.272903   79734 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0420 12:05:32.322823   79734 info.go:261] docker info: {ID:W6YK:5ARX:XVU6:6MNS:LQWS:SEC3:2TN2:3RGQ:5AT4:AS5M:ZDZ5:JJIX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff false] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:false NGoroutines:40 SystemTime:2021-04-20 12:05:32.291343288 +0200 CEST LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:5.11.15-arch1-2 OperatingSystem:Arch Linux OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33648889856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ryzen Labels:[] ExperimentalBuild:true ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e.m Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e.m} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Experimental:true Name:buildx Path:/usr/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-tp-docker]] Warnings:<nil>}}
I0420 12:05:32.322912   79734 start_flags.go:253] no existing cluster config was found, will generate one from the flags 
I0420 12:05:32.322999   79734 start_flags.go:730] Wait components to verify : map[apiserver:true system_pods:true]
I0420 12:05:32.323016   79734 cni.go:81] Creating CNI manager for ""
I0420 12:05:32.323023   79734 cni.go:153] CNI unnecessary in this configuration, recommending no CNI
I0420 12:05:32.323029   79734 start_flags.go:270] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 Memory:12288 CPUs:6 DiskSize:40000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.18 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0420 12:05:32.323819   79734 out.go:157] 👍  Starting control plane node minikube in cluster minikube
👍  Starting control plane node minikube in cluster minikube
I0420 12:05:32.323833   79734 image.go:107] Checking for gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 in local docker daemon
I0420 12:05:32.342380   79734 cache.go:120] Beginning downloading kic base image for docker with docker
I0420 12:05:32.343327   79734 out.go:157] 🚜  Pulling base image ...
🚜  Pulling base image ...
I0420 12:05:32.343344   79734 preload.go:97] Checking if preload exists for k8s version v1.18.18 and runtime docker
I0420 12:05:32.343445   79734 cache.go:145] Downloading gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 to local daemon
I0420 12:05:32.343455   79734 image.go:162] Writing gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 to local daemon
I0420 12:05:32.461952   79734 preload.go:122] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v10-v1.18.18-docker-overlay2-amd64.tar.lz4
I0420 12:05:32.461971   79734 cache.go:54] Caching tarball of preloaded images
I0420 12:05:32.461992   79734 preload.go:97] Checking if preload exists for k8s version v1.18.18 and runtime docker
I0420 12:05:32.577409   79734 preload.go:122] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v10-v1.18.18-docker-overlay2-amd64.tar.lz4
I0420 12:05:32.581344   79734 out.go:157] 💾  Downloading Kubernetes v1.18.18 preload ...
💾  Downloading Kubernetes v1.18.18 preload ...
I0420 12:05:32.581425   79734 download.go:78] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v10-v1.18.18-docker-overlay2-amd64.tar.lz4 -> /home/etix/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.18.18-docker-overlay2-amd64.tar.lz4
    > preloaded-images-k8s-v10-v1...: 512.86 MiB / 512.86 MiB  100.00% 97.19 Mi
I0420 12:05:38.397132   79734 preload.go:160] saving checksum for preloaded-images-k8s-v10-v1.18.18-docker-overlay2-amd64.tar.lz4 ...
    > gcr.io/k8s-minikube/kicbase...: 50.60 MiB / 357.67 MiB  14.15% 3.79 MiB pI0420 12:05:38.525901   79734 preload.go:177] verifying checksumm of /home/etix/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.18.18-docker-overlay2-amd64.tar.lz4 ...
    > gcr.io/k8s-minikube/kicbase...: 58.60 MiB / 357.67 MiB  16.38% 5.01 MiB pI0420 12:05:39.252875   79734 cache.go:57] Finished verifying existence of preloaded tar for  v1.18.18 on docker
I0420 12:05:39.253059   79734 profile.go:148] Saving config to /home/etix/.minikube/profiles/minikube/config.json ...
I0420 12:05:39.253074   79734 lock.go:36] WriteFile acquiring /home/etix/.minikube/profiles/minikube/config.json: {Name:mk211bd84d854fea1d6fff2b3065e355dfb157b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
    > gcr.io/k8s-minikube/kicbase...: 357.67 MiB / 357.67 MiB  100.00% 8.24 MiB
I0420 12:06:16.610678   79734 cache.go:148] successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6
I0420 12:06:16.610695   79734 cache.go:185] Successfully downloaded all kic artifacts
I0420 12:06:16.610721   79734 start.go:313] acquiring machines lock for minikube: {Name:mk433c3bfb4f2189afb70058cf0ca504910f2a6b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0420 12:06:16.610791   79734 start.go:317] acquired machines lock for "minikube" in 55.101µs
I0420 12:06:16.610814   79734 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 Memory:12288 CPUs:6 DiskSize:40000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.18 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.18 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.18.18 ControlPlane:true Worker:true}
I0420 12:06:16.610901   79734 start.go:126] createHost starting for "" (driver="docker")
I0420 12:06:16.612007   79734 out.go:184] 🔥  Creating docker container (CPUs=6, Memory=12288MB) ...
🔥  Creating docker container (CPUs=6, Memory=12288MB) ...| I0420 12:06:16.612274   79734 start.go:160] libmachine.API.Create for "minikube" (driver="docker")
I0420 12:06:16.612302   79734 client.go:168] LocalClient.Create starting
I0420 12:06:16.612380   79734 main.go:126] libmachine: Creating CA: /home/etix/.minikube/certs/ca.pem
/ I0420 12:06:16.714535   79734 main.go:126] libmachine: Creating client certificate: /home/etix/.minikube/certs/cert.pem
- I0420 12:06:16.877286   79734 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0420 12:06:16.896229   79734 cli_runner.go:162] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0420 12:06:16.896292   79734 network_create.go:249] running [docker network inspect minikube] to gather additional debugging logs...
I0420 12:06:16.896307   79734 cli_runner.go:115] Run: docker network inspect minikube
\ W0420 12:06:16.915524   79734 cli_runner.go:162] docker network inspect minikube returned with exit code 1
I0420 12:06:16.915542   79734 network_create.go:252] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
stdout:
[]

stderr:
Error: No such network: minikube
I0420 12:06:16.915552   79734 network_create.go:254] output of [docker network inspect minikube]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: minikube

** /stderr **
I0420 12:06:16.915591   79734 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0420 12:06:16.933968   79734 network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}
I0420 12:06:16.934597   79734 network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}
I0420 12:06:16.935201   79734 network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}
I0420 12:06:16.935799   79734 network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}
I0420 12:06:16.936396   79734 network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}
I0420 12:06:16.936988   79734 network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}
I0420 12:06:16.937570   79734 network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}
I0420 12:06:16.938160   79734 network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}
I0420 12:06:16.938750   79734 network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}
I0420 12:06:16.939349   79734 network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}
I0420 12:06:16.939965   79734 network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}
I0420 12:06:16.940548   79734 network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}
I0420 12:06:16.941132   79734 network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}
I0420 12:06:16.941785   79734 network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}
I0420 12:06:16.942387   79734 network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}
I0420 12:06:16.942987   79734 network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}
I0420 12:06:16.943574   79734 network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}
I0420 12:06:16.944158   79734 network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}
I0420 12:06:16.944742   79734 network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}
I0420 12:06:16.945323   79734 network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}
E0420 12:06:16.945339   79734 network_create.go:79] failed to find free subnet for docker network minikube after 20 attempts: no free private network subnets found with given parameters (start: "192.168.9.0", step: 9, tries: 20)
W0420 12:06:16.945422   79734 out.go:222] ❗  Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: no free private network subnets found with given parameters (start: "192.168.9.0", step: 9, tries: 20)

❗  Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: no free private network subnets found with given parameters (start: "192.168.9.0", step: 9, tries: 20)
I0420 12:06:16.945502   79734 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
I0420 12:06:16.964721   79734 cli_runner.go:115] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0420 12:06:16.982861   79734 oci.go:102] Successfully created a docker volume minikube
I0420 12:06:16.982917   79734 cli_runner.go:115] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 -d /var/lib
I0420 12:06:17.472390   79734 oci.go:106] Successfully prepared a docker volume minikube
W0420 12:06:17.472436   79734 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0420 12:06:17.472440   79734 preload.go:97] Checking if preload exists for k8s version v1.18.18 and runtime docker
W0420 12:06:17.472448   79734 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
W0420 12:06:17.472466   79734 oci.go:233] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
I0420 12:06:17.472472   79734 preload.go:105] Found local preload: /home/etix/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.18.18-docker-overlay2-amd64.tar.lz4
I0420 12:06:17.472477   79734 kic.go:175] Starting extracting preloaded images to volume ...
I0420 12:06:17.472513   79734 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/etix/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.18.18-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 -I lz4 -xf /preloaded.tar -C /extractDir
I0420 12:06:17.472515   79734 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
I0420 12:06:17.522444   79734 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --volume minikube:/var --security-opt apparmor=unconfined -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6
I0420 12:06:17.903815   79734 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Running}}
I0420 12:06:17.922833   79734 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0420 12:06:17.942028   79734 cli_runner.go:115] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
I0420 12:06:17.994764   79734 oci.go:278] the created container "minikube" has a running status.
I0420 12:06:17.994797   79734 kic.go:206] Creating ssh key for kic: /home/etix/.minikube/machines/minikube/id_rsa...
I0420 12:06:18.081951   79734 kic_runner.go:188] docker (temp): /home/etix/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0420 12:06:18.124128   79734 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0420 12:06:18.142321   79734 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0420 12:06:18.142335   79734 kic_runner.go:115] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0420 12:06:19.597442   79734 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/etix/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.18.18-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 -I lz4 -xf /preloaded.tar -C /extractDir: (2.12489654s)
I0420 12:06:19.597481   79734 kic.go:184] duration metric: took 2.125000 seconds to extract preloaded images to volume
I0420 12:06:19.597573   79734 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0420 12:06:19.615906   79734 machine.go:88] provisioning docker machine ...
I0420 12:06:19.615930   79734 ubuntu.go:169] provisioning hostname "minikube"
I0420 12:06:19.615981   79734 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0420 12:06:19.635683   79734 main.go:126] libmachine: Using SSH client type: native
I0420 12:06:19.635801   79734 main.go:126] libmachine: &{{{<nil> 0 [] [] []} docker [0x5637c26c90e0] 0x5637c26c90a0 <nil>  [] 0s} 127.0.0.1 49184 <nil> <nil>}
I0420 12:06:19.635810   79734 main.go:126] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0420 12:06:19.747636   79734 main.go:126] libmachine: SSH cmd err, output: <nil>: minikube

I0420 12:06:19.747687   79734 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0420 12:06:19.766794   79734 main.go:126] libmachine: Using SSH client type: native
I0420 12:06:19.766946   79734 main.go:126] libmachine: &{{{<nil> 0 [] [] []} docker [0x5637c26c90e0] 0x5637c26c90a0 <nil>  [] 0s} 127.0.0.1 49184 <nil> <nil>}
I0420 12:06:19.766969   79734 main.go:126] libmachine: About to run SSH command:

                if ! grep -xq '.*\sminikube' /etc/hosts; then
                        if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
                        else 
                                echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
                        fi
                fi
I0420 12:06:19.871231   79734 main.go:126] libmachine: SSH cmd err, output: <nil>: 
I0420 12:06:19.871263   79734 ubuntu.go:175] set auth options {CertDir:/home/etix/.minikube CaCertPath:/home/etix/.minikube/certs/ca.pem CaPrivateKeyPath:/home/etix/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/etix/.minikube/machines/server.pem ServerKeyPath:/home/etix/.minikube/machines/server-key.pem ClientKeyPath:/home/etix/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/etix/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/etix/.minikube}
I0420 12:06:19.871284   79734 ubuntu.go:177] setting up certificates
I0420 12:06:19.871295   79734 provision.go:83] configureAuth start
I0420 12:06:19.871352   79734 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0420 12:06:19.890933   79734 provision.go:137] copyHostCerts
I0420 12:06:19.890987   79734 exec_runner.go:152] cp: /home/etix/.minikube/certs/cert.pem --> /home/etix/.minikube/cert.pem (1115 bytes)
I0420 12:06:19.891070   79734 exec_runner.go:152] cp: /home/etix/.minikube/certs/key.pem --> /home/etix/.minikube/key.pem (1675 bytes)
I0420 12:06:19.891120   79734 exec_runner.go:152] cp: /home/etix/.minikube/certs/ca.pem --> /home/etix/.minikube/ca.pem (1074 bytes)
I0420 12:06:19.891158   79734 provision.go:111] generating server cert: /home/etix/.minikube/machines/server.pem ca-key=/home/etix/.minikube/certs/ca.pem private-key=/home/etix/.minikube/certs/ca-key.pem org=etix.minikube san=[172.17.0.3 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0420 12:06:19.967360   79734 provision.go:165] copyRemoteCerts
I0420 12:06:19.967403   79734 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0420 12:06:19.967431   79734 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0420 12:06:19.986630   79734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49184 SSHKeyPath:/home/etix/.minikube/machines/minikube/id_rsa Username:docker}
I0420 12:06:20.059624   79734 ssh_runner.go:316] scp /home/etix/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1074 bytes)
I0420 12:06:20.069203   79734 ssh_runner.go:316] scp /home/etix/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
I0420 12:06:20.080478   79734 ssh_runner.go:316] scp /home/etix/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0420 12:06:20.090738   79734 provision.go:86] duration metric: configureAuth took 219.43089ms
I0420 12:06:20.090759   79734 ubuntu.go:193] setting minikube options for container-runtime
I0420 12:06:20.090950   79734 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0420 12:06:20.110124   79734 main.go:126] libmachine: Using SSH client type: native
I0420 12:06:20.110233   79734 main.go:126] libmachine: &{{{<nil> 0 [] [] []} docker [0x5637c26c90e0] 0x5637c26c90a0 <nil>  [] 0s} 127.0.0.1 49184 <nil> <nil>}
I0420 12:06:20.110241   79734 main.go:126] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0420 12:06:20.215036   79734 main.go:126] libmachine: SSH cmd err, output: <nil>: overlay

I0420 12:06:20.215058   79734 ubuntu.go:71] root file system type: overlay
I0420 12:06:20.215219   79734 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
I0420 12:06:20.215280   79734 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0420 12:06:20.233724   79734 main.go:126] libmachine: Using SSH client type: native
I0420 12:06:20.233835   79734 main.go:126] libmachine: &{{{<nil> 0 [] [] []} docker [0x5637c26c90e0] 0x5637c26c90a0 <nil>  [] 0s} 127.0.0.1 49184 <nil> <nil>}
I0420 12:06:20.233891   79734 main.go:126] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0420 12:06:20.341595   79734 main.go:126] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0420 12:06:20.341705   79734 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0420 12:06:20.363081   79734 main.go:126] libmachine: Using SSH client type: native
I0420 12:06:20.363176   79734 main.go:126] libmachine: &{{{<nil> 0 [] [] []} docker [0x5637c26c90e0] 0x5637c26c90a0 <nil>  [] 0s} 127.0.0.1 49184 <nil> <nil>}
I0420 12:06:20.363189   79734 main.go:126] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0420 12:06:20.900375   79734 main.go:126] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service       2021-03-02 20:16:15.000000000 +0000
+++ /lib/systemd/system/docker.service.new      2021-04-20 10:06:20.338336573 +0000
@@ -1,30 +1,32 @@
 [Unit]
 Description=Docker Application Container Engine
 Documentation=https://docs.docker.com
+BindsTo=containerd.service
 After=network-online.target firewalld.service containerd.service
 Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60

 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure

-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
+ExecReload=/bin/kill -s HUP $MAINPID

 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
 LimitNPROC=infinity
 LimitCORE=infinity

-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0

 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes

 # kill only the docker process, not all processes in the cgroup
 KillMode=process
-OOMScoreAdjust=-500

 [Install]
 WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker

I0420 12:06:20.900446   79734 machine.go:91] provisioned docker machine in 1.284524413s
I0420 12:06:20.900456   79734 client.go:171] LocalClient.Create took 4.288145733s
I0420 12:06:20.900466   79734 start.go:168] duration metric: libmachine.API.Create for "minikube" took 4.288192613s
I0420 12:06:20.900475   79734 start.go:267] post-start starting for "minikube" (driver="docker")
I0420 12:06:20.900480   79734 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0420 12:06:20.900521   79734 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0420 12:06:20.900555   79734 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0420 12:06:20.918676   79734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49184 SSHKeyPath:/home/etix/.minikube/machines/minikube/id_rsa Username:docker}
I0420 12:06:20.993323   79734 ssh_runner.go:149] Run: cat /etc/os-release
I0420 12:06:20.994706   79734 main.go:126] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0420 12:06:20.994731   79734 main.go:126] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0420 12:06:20.994753   79734 main.go:126] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0420 12:06:20.994768   79734 info.go:137] Remote host: Ubuntu 20.04.1 LTS
I0420 12:06:20.994781   79734 filesync.go:118] Scanning /home/etix/.minikube/addons for local assets ...
I0420 12:06:20.994832   79734 filesync.go:118] Scanning /home/etix/.minikube/files for local assets ...
I0420 12:06:20.994866   79734 start.go:270] post-start completed in 94.385318ms
I0420 12:06:20.995123   79734 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0420 12:06:21.015075   79734 profile.go:148] Saving config to /home/etix/.minikube/profiles/minikube/config.json ...
I0420 12:06:21.015313   79734 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0420 12:06:21.015359   79734 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0420 12:06:21.034752   79734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49184 SSHKeyPath:/home/etix/.minikube/machines/minikube/id_rsa Username:docker}
I0420 12:06:21.107883   79734 start.go:129] duration metric: createHost completed in 4.496967563s
I0420 12:06:21.107908   79734 start.go:80] releasing machines lock for "minikube", held for 4.497105906s
I0420 12:06:21.107980   79734 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0420 12:06:21.127413   79734 ssh_runner.go:149] Run: systemctl --version
I0420 12:06:21.127450   79734 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0420 12:06:21.127462   79734 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
I0420 12:06:21.127511   79734 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0420 12:06:21.146501   79734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49184 SSHKeyPath:/home/etix/.minikube/machines/minikube/id_rsa Username:docker}
I0420 12:06:21.146581   79734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49184 SSHKeyPath:/home/etix/.minikube/machines/minikube/id_rsa Username:docker}
I0420 12:06:21.267449   79734 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
I0420 12:06:21.274036   79734 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0420 12:06:21.279751   79734 cruntime.go:219] skipping containerd shutdown because we are bound to it
I0420 12:06:21.279778   79734 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
I0420 12:06:21.284706   79734 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0420 12:06:21.291401   79734 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0420 12:06:21.296139   79734 ssh_runner.go:149] Run: sudo systemctl daemon-reload
I0420 12:06:21.369815   79734 ssh_runner.go:149] Run: sudo systemctl start docker
I0420 12:06:21.377093   79734 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
I0420 12:06:21.404967   79734 out.go:184] 🐳  Preparing Kubernetes v1.18.18 on Docker 20.10.5 ...
🐳  Preparing Kubernetes v1.18.18 on Docker 20.10.5 ...I0420 12:06:21.405021   79734 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
| W0420 12:06:21.422202   79734 cli_runner.go:162] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0420 12:06:21.422249   79734 network_create.go:249] running [docker network inspect minikube] to gather additional debugging logs...
I0420 12:06:21.422260   79734 cli_runner.go:115] Run: docker network inspect minikube
W0420 12:06:21.441470   79734 cli_runner.go:162] docker network inspect minikube returned with exit code 1
I0420 12:06:21.441488   79734 network_create.go:252] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
stdout:
[]

stderr:
Error: No such network: minikube
I0420 12:06:21.441498   79734 network_create.go:254] output of [docker network inspect minikube]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: minikube

** /stderr **
I0420 12:06:21.441505   79734 network.go:41] The container minikube is not attached to a network, this could be because the cluster was created by minikube <v1.14, will try to get the IP using container gatway
I0420 12:06:21.441535   79734 cli_runner.go:115] Run: docker container inspect --format {{.NetworkSettings.Gateway}} minikube
I0420 12:06:21.459777   79734 ssh_runner.go:149] Run: grep 172.17.0.1   host.minikube.internal$ /etc/hosts
I0420 12:06:21.461922   79734 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "172.17.0.1  host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0420 12:06:21.467805   79734 preload.go:97] Checking if preload exists for k8s version v1.18.18 and runtime docker
I0420 12:06:21.467836   79734 preload.go:105] Found local preload: /home/etix/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.18.18-docker-overlay2-amd64.tar.lz4
I0420 12:06:21.467901   79734 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0420 12:06:21.489463   79734 docker.go:455] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.18
k8s.gcr.io/kube-apiserver:v1.18.18
k8s.gcr.io/kube-controller-manager:v1.18.18
k8s.gcr.io/kube-scheduler:v1.18.18
gcr.io/k8s-minikube/storage-provisioner:v5
kubernetesui/dashboard:v2.1.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0

-- /stdout --
I0420 12:06:21.489481   79734 docker.go:392] Images already preloaded, skipping extraction
I0420 12:06:21.489529   79734 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
/ I0420 12:06:21.510853   79734 docker.go:455] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.18
k8s.gcr.io/kube-controller-manager:v1.18.18
k8s.gcr.io/kube-apiserver:v1.18.18
k8s.gcr.io/kube-scheduler:v1.18.18
gcr.io/k8s-minikube/storage-provisioner:v5
kubernetesui/dashboard:v2.1.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0

-- /stdout --
I0420 12:06:21.510881   79734 cache_images.go:74] Images are preloaded, skipping loading
I0420 12:06:21.510950   79734 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
I0420 12:06:21.553466   79734 cni.go:81] Creating CNI manager for ""
I0420 12:06:21.553480   79734 cni.go:153] CNI unnecessary in this configuration, recommending no CNI
I0420 12:06:21.553485   79734 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0420 12:06:21.553494   79734 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.3 APIServerPort:8443 KubernetesVersion:v1.18.18 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.3 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0420 12:06:21.553570   79734 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.3
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 172.17.0.3
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "172.17.0.3"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.18.18
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249

I0420 12:06:21.553666   79734 kubeadm.go:897] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.18/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.3

[Install]
 config:
{KubernetesVersion:v1.18.18 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0420 12:06:21.553711   79734 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.18.18
I0420 12:06:21.557351   79734 binaries.go:44] Found k8s binaries, skipping transfer
I0420 12:06:21.557376   79734 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0420 12:06:21.560675   79734 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (333 bytes)
I0420 12:06:21.567239   79734 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
I0420 12:06:21.573921   79734 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1834 bytes)
I0420 12:06:21.580653   79734 ssh_runner.go:149] Run: grep 172.17.0.3   control-plane.minikube.internal$ /etc/hosts
I0420 12:06:21.581927   79734 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "172.17.0.3 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0420 12:06:21.586804   79734 certs.go:52] Setting up /home/etix/.minikube/profiles/minikube for IP: 172.17.0.3
I0420 12:06:21.586836   79734 certs.go:175] generating minikubeCA CA: /home/etix/.minikube/ca.key
I0420 12:06:21.655500   79734 crypto.go:157] Writing cert to /home/etix/.minikube/ca.crt ...
I0420 12:06:21.655510   79734 lock.go:36] WriteFile acquiring /home/etix/.minikube/ca.crt: {Name:mk19f01d9d1b84bc5a2f8b23a06a321884b9b06e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0420 12:06:21.655600   79734 crypto.go:165] Writing key to /home/etix/.minikube/ca.key ...
I0420 12:06:21.655607   79734 lock.go:36] WriteFile acquiring /home/etix/.minikube/ca.key: {Name:mkc9cfa531f9053fc15cfe8c89396a60891fa9ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0420 12:06:21.655648   79734 certs.go:175] generating proxyClientCA CA: /home/etix/.minikube/proxy-client-ca.key
I0420 12:06:21.747282   79734 crypto.go:157] Writing cert to /home/etix/.minikube/proxy-client-ca.crt ...
I0420 12:06:21.747291   79734 lock.go:36] WriteFile acquiring /home/etix/.minikube/proxy-client-ca.crt: {Name:mk6899c8a9907c174bd30aeb1e0b37a180940ae1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0420 12:06:21.747340   79734 crypto.go:165] Writing key to /home/etix/.minikube/proxy-client-ca.key ...
I0420 12:06:21.747345   79734 lock.go:36] WriteFile acquiring /home/etix/.minikube/proxy-client-ca.key: {Name:mk875f49a11418af47f681beeeeff30522c2bc65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0420 12:06:21.747402   79734 certs.go:286] generating minikube-user signed cert: /home/etix/.minikube/profiles/minikube/client.key
I0420 12:06:21.747412   79734 crypto.go:69] Generating cert /home/etix/.minikube/profiles/minikube/client.crt with IP's: []
I0420 12:06:22.137675   79734 crypto.go:157] Writing cert to /home/etix/.minikube/profiles/minikube/client.crt ...
I0420 12:06:22.137688   79734 lock.go:36] WriteFile acquiring /home/etix/.minikube/profiles/minikube/client.crt: {Name:mk8a56ac49c4d3f826ce794b4c6b85f719e8e7ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0420 12:06:22.137785   79734 crypto.go:165] Writing key to /home/etix/.minikube/profiles/minikube/client.key ...
I0420 12:06:22.137793   79734 lock.go:36] WriteFile acquiring /home/etix/.minikube/profiles/minikube/client.key: {Name:mk7ccaf0a0cb10dc93e651045dd211b1aa1ba34c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0420 12:06:22.137849   79734 certs.go:286] generating minikube signed cert: /home/etix/.minikube/profiles/minikube/apiserver.key.0f3e66d0
I0420 12:06:22.137854   79734 crypto.go:69] Generating cert /home/etix/.minikube/profiles/minikube/apiserver.crt.0f3e66d0 with IP's: [172.17.0.3 10.96.0.1 127.0.0.1 10.0.0.1]
I0420 12:06:22.288196   79734 crypto.go:157] Writing cert to /home/etix/.minikube/profiles/minikube/apiserver.crt.0f3e66d0 ...
I0420 12:06:22.288207   79734 lock.go:36] WriteFile acquiring /home/etix/.minikube/profiles/minikube/apiserver.crt.0f3e66d0: {Name:mk1059d89243c772cbe2f7ba7741d93c779fb5a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0420 12:06:22.288272   79734 crypto.go:165] Writing key to /home/etix/.minikube/profiles/minikube/apiserver.key.0f3e66d0 ...
I0420 12:06:22.288278   79734 lock.go:36] WriteFile acquiring /home/etix/.minikube/profiles/minikube/apiserver.key.0f3e66d0: {Name:mka0ed89815e026493edf2318dfe2d04d40d978d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0420 12:06:22.288328   79734 certs.go:297] copying /home/etix/.minikube/profiles/minikube/apiserver.crt.0f3e66d0 -> /home/etix/.minikube/profiles/minikube/apiserver.crt
I0420 12:06:22.288369   79734 certs.go:301] copying /home/etix/.minikube/profiles/minikube/apiserver.key.0f3e66d0 -> /home/etix/.minikube/profiles/minikube/apiserver.key
I0420 12:06:22.288404   79734 certs.go:286] generating aggregator signed cert: /home/etix/.minikube/profiles/minikube/proxy-client.key
I0420 12:06:22.288410   79734 crypto.go:69] Generating cert /home/etix/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0420 12:06:22.332608   79734 crypto.go:157] Writing cert to /home/etix/.minikube/profiles/minikube/proxy-client.crt ...
I0420 12:06:22.332617   79734 lock.go:36] WriteFile acquiring /home/etix/.minikube/profiles/minikube/proxy-client.crt: {Name:mk110a581e33a62ef288b8a64798c00f43938f23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0420 12:06:22.332680   79734 crypto.go:165] Writing key to /home/etix/.minikube/profiles/minikube/proxy-client.key ...
I0420 12:06:22.332686   79734 lock.go:36] WriteFile acquiring /home/etix/.minikube/profiles/minikube/proxy-client.key: {Name:mkc1a3139dbcbe25b02a664ae0922782da742815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0420 12:06:22.332791   79734 certs.go:361] found cert: /home/etix/.minikube/certs/home/etix/.minikube/certs/ca-key.pem (1679 bytes)
I0420 12:06:22.332813   79734 certs.go:361] found cert: /home/etix/.minikube/certs/home/etix/.minikube/certs/ca.pem (1074 bytes)
I0420 12:06:22.332836   79734 certs.go:361] found cert: /home/etix/.minikube/certs/home/etix/.minikube/certs/cert.pem (1115 bytes)
I0420 12:06:22.332856   79734 certs.go:361] found cert: /home/etix/.minikube/certs/home/etix/.minikube/certs/key.pem (1675 bytes)
I0420 12:06:22.333484   79734 ssh_runner.go:316] scp /home/etix/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0420 12:06:22.344897   79734 ssh_runner.go:316] scp /home/etix/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0420 12:06:22.353355   79734 ssh_runner.go:316] scp /home/etix/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0420 12:06:22.361881   79734 ssh_runner.go:316] scp /home/etix/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0420 12:06:22.370841   79734 ssh_runner.go:316] scp /home/etix/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0420 12:06:22.379395   79734 ssh_runner.go:316] scp /home/etix/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0420 12:06:22.389956   79734 ssh_runner.go:316] scp /home/etix/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0420 12:06:22.400748   79734 ssh_runner.go:316] scp /home/etix/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0420 12:06:22.409734   79734 ssh_runner.go:316] scp /home/etix/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0420 12:06:22.418616   79734 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (740 bytes)
I0420 12:06:22.424644   79734 ssh_runner.go:149] Run: openssl version
I0420 12:06:22.426999   79734 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0420 12:06:22.430542   79734 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0420 12:06:22.431980   79734 certs.go:402] hashing: -rw-r--r-- 1 root root 1111 Apr 20 10:06 /usr/share/ca-certificates/minikubeCA.pem
I0420 12:06:22.432024   79734 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0420 12:06:22.434623   79734 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0420 12:06:22.438643   79734 kubeadm.go:386] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 Memory:12288 CPUs:6 DiskSize:40000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.18 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.18 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0420 12:06:22.438742   79734 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0420 12:06:22.458084   79734 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0420 12:06:22.461662   79734 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0420 12:06:22.464892   79734 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
I0420 12:06:22.464916   79734 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0420 12:06:22.468024   79734 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0420 12:06:22.468051   79734 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.18:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0420 12:08:19.908607   79734 out.go:184]     ▪ Generating certificates and keys ...
I0420 12:08:19.911027   79734 out.go:184]     ▪ Booting up control plane ...
W0420 12:08:19.914522   79734 out.go:222] 💢  initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.18:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.18
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [172.17.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [172.17.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in docker:
                - 'docker ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'docker logs CONTAINERID'

stderr:
W0420 10:06:22.493172     809 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.5. Latest validated version: 19.03
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0420 10:06:24.897972     809 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0420 10:06:24.898641     809 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

💢  initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.18:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.18
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [172.17.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [172.17.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in docker:
                - 'docker ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'docker logs CONTAINERID'

stderr:
W0420 10:06:22.493172     809 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.5. Latest validated version: 19.03
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0420 10:06:24.897972     809 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0420 10:06:24.898641     809 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

I0420 12:08:19.914798   79734 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.18:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0420 12:08:20.303140   79734 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
I0420 12:08:20.308903   79734 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0420 12:08:20.331394   79734 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
I0420 12:08:20.331431   79734 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0420 12:08:20.335010   79734 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0420 12:08:20.335035   79734 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.18:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0420 12:08:20.864097   79734 out.go:184]     ▪ Generating certificates and keys ...
I0420 12:08:21.505058   79734 out.go:184]     ▪ Booting up control plane ...
I0420 12:10:16.512520   79734 kubeadm.go:388] StartCluster complete in 3m54.073885021s
I0420 12:10:16.512584   79734 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0420 12:10:16.533435   79734 logs.go:256] 0 containers: []
W0420 12:10:16.533449   79734 logs.go:258] No container was found matching "kube-apiserver"
I0420 12:10:16.533494   79734 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0420 12:10:16.553861   79734 logs.go:256] 0 containers: []
W0420 12:10:16.553875   79734 logs.go:258] No container was found matching "etcd"
I0420 12:10:16.553917   79734 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0420 12:10:16.573148   79734 logs.go:256] 0 containers: []
W0420 12:10:16.573159   79734 logs.go:258] No container was found matching "coredns"
I0420 12:10:16.573205   79734 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0420 12:10:16.592141   79734 logs.go:256] 0 containers: []
W0420 12:10:16.592157   79734 logs.go:258] No container was found matching "kube-scheduler"
I0420 12:10:16.592199   79734 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0420 12:10:16.611583   79734 logs.go:256] 0 containers: []
W0420 12:10:16.611600   79734 logs.go:258] No container was found matching "kube-proxy"
I0420 12:10:16.611638   79734 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0420 12:10:16.634149   79734 logs.go:256] 0 containers: []
W0420 12:10:16.634172   79734 logs.go:258] No container was found matching "kubernetes-dashboard"
I0420 12:10:16.634241   79734 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0420 12:10:16.655741   79734 logs.go:256] 0 containers: []
W0420 12:10:16.655760   79734 logs.go:258] No container was found matching "storage-provisioner"
I0420 12:10:16.655828   79734 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0420 12:10:16.675612   79734 logs.go:256] 0 containers: []
W0420 12:10:16.675623   79734 logs.go:258] No container was found matching "kube-controller-manager"
I0420 12:10:16.675632   79734 logs.go:122] Gathering logs for Docker ...
I0420 12:10:16.675640   79734 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0420 12:10:16.685731   79734 logs.go:122] Gathering logs for container status ...
I0420 12:10:16.685752   79734 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0420 12:10:18.716570   79734 ssh_runner.go:189] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.030803215s)
I0420 12:10:18.716669   79734 logs.go:122] Gathering logs for kubelet ...
I0420 12:10:18.716677   79734 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0420 12:10:18.754428   79734 logs.go:122] Gathering logs for dmesg ...
I0420 12:10:18.754447   79734 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0420 12:10:18.766045   79734 logs.go:122] Gathering logs for describe nodes ...
I0420 12:10:18.766055   79734 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.18/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0420 12:10:18.803897   79734 logs.go:129] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.18/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.18/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:

stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
 output: 
** stderr ** 
The connection to the server localhost:8443 was refused - did you specify the right host or port?

** /stderr **
W0420 12:10:18.803931   79734 out.go:351] Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.18:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.18
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in docker:
                - 'docker ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'docker logs CONTAINERID'

stderr:
W0420 10:08:20.360323    4847 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.5. Latest validated version: 19.03
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0420 10:08:21.504995    4847 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0420 10:08:21.506056    4847 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0420 12:10:18.804020   79734 out.go:222] 

W0420 12:10:18.804132   79734 out.go:222] 💣  Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.18:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.18
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in docker:
                - 'docker ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'docker logs CONTAINERID'

stderr:
W0420 10:08:20.360323    4847 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.5. Latest validated version: 19.03
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0420 10:08:21.504995    4847 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0420 10:08:21.506056    4847 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

💣  Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.18:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.18
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in docker:
                - 'docker ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'docker logs CONTAINERID'

stderr:
W0420 10:08:20.360323    4847 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.5. Latest validated version: 19.03
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0420 10:08:21.504995    4847 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0420 10:08:21.506056    4847 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

W0420 12:10:18.804254   79734 out.go:222] 

W0420 12:10:18.804293   79734 out.go:222] 😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
W0420 12:10:18.804313   79734 out.go:222] 👉  https://github.com/kubernetes/minikube/issues/new/choose
👉  https://github.com/kubernetes/minikube/issues/new/choose
I0420 12:10:18.808156   79734 out.go:157] 

W0420 12:10:18.808248   79734 out.go:222] ❌  Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.18:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.18
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in docker:
                - 'docker ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'docker logs CONTAINERID'

stderr:
W0420 10:08:20.360323    4847 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.5. Latest validated version: 19.03
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0420 10:08:21.504995    4847 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0420 10:08:21.506056    4847 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

โŒ  Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.18:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.18
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in docker:
                - 'docker ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'docker logs CONTAINERID'

stderr:
W0420 10:08:20.360323    4847 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.5. Latest validated version: 19.03
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0420 10:08:21.504995    4847 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0420 10:08:21.506056    4847 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

W0420 12:10:18.808394   79734 out.go:222] ๐Ÿ’ก  Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
๐Ÿ’ก  Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0420 12:10:18.808425   79734 out.go:222] ๐Ÿฟ  Related issue: https://github.com/kubernetes/minikube/issues/4172
๐Ÿฟ  Related issue: https://github.com/kubernetes/minikube/issues/4172
I0420 12:10:18.809928   79734 out.go:157]
etix commented 3 years ago

Since the docker and kvm2 drivers don't work, the only workaround I've found so far is using --driver=virtualbox.
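
In other words, the workaround boils down to something like this (illustrative only; all other start flags left as before):

```
minikube delete
minikube start --driver=virtualbox
```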

afbjorklund commented 3 years ago

@prezha please take a look

prezha commented 3 years ago

@etix thanks for reporting the issue and providing the full log!

This is a strange issue that I haven't seen before. It's quite unexpected: the lookup for a free network starts from 192.168.49.0 for the docker driver and from 192.168.39.0 for kvm, and I'd expect to see those as the first try (they are then incremented in steps of 9 and 11 respectively), whereas you get all 20 retries starting with, and staying at, 192.168.0.0.
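
Purely to illustrate that description (this is not minikube's actual code, and it assumes the step is applied to the third octet on each attempt), a quick shell loop printing the expected candidate subnets would be:

```
# Illustrative only: print the candidate subnets described above
# (start octet plus step per attempt, 20 attempts) for docker and kvm.
for start_step in "49 9" "39 11"; do
  set -- $start_step
  start=$1; step=$2
  for i in $(seq 0 19); do
    printf '192.168.%d.0/24 ' $((start + i * step))
  done
  echo
done
```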

Are you using the precompiled minikube binary (and docker-machine-driver-kvm2, for kvm), or are you building from source?

Does it report the same 20 messages for kvm as well? I.e.:

network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}

Can you also please share the output of the following:

- ip a s
- ip r s
- minikube version
- ~/.minikube/bin/docker-machine-driver-kvm2 version

etix commented 3 years ago

Hi @prezha,

I should mention that my setup had been working fine for months without any major issue. I think I built my last successful k8s env two or three weeks ago. This morning I did my usual weekly minikube delete and minikube start, and that's when the problem occurred. Nothing really changed besides the usual Arch Linux rolling updates and a BIOS upgrade I did around a week ago.

I'm using the pre-built Arch Linux package for minikube, and the same for Docker. The docker-machine-driver-kvm2 plugin seems to be installed by minikube itself at ~/.minikube/bin/docker-machine-driver-kvm2.

I also noticed that the loop was not actually incrementing the IP addresses; I even looked at the code itself before deciding to open the issue. Previously my minikube was always on the 192.168.49.x range, and I still have my /etc/hosts populated with mappings to 192.168.49.2.

Of course, the minikube configuration is not part of the equation: I ran minikube delete --all --purge and rm -fr ~/.minikube/ multiple times along the way.
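
(For completeness, that reset amounts to the following, in case anyone wants to reproduce from a clean slate:)

```
minikube delete --all --purge
rm -fr ~/.minikube/
```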

KVM also reports the same 20 messages:

I0420 19:50:42.346217  192153 main.go:126] libmachine: (minikube) Creating KVM machine...
I0420 19:50:42.348820  192153 main.go:126] libmachine: (minikube) DBG | found existing default KVM network
I0420 19:50:42.349941  192153 main.go:126] libmachine: (minikube) DBG | I0420 19:50:42.349835  192425 network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}
I0420 19:50:42.350709  192153 main.go:126] libmachine: (minikube) DBG | I0420 19:50:42.350661  192425 network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}
I0420 19:50:42.351470  192153 main.go:126] libmachine: (minikube) DBG | I0420 19:50:42.351436  192425 network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}
I0420 19:50:42.352215  192153 main.go:126] libmachine: (minikube) DBG | I0420 19:50:42.352171  192425 network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}
I0420 19:50:42.352965  192153 main.go:126] libmachine: (minikube) DBG | I0420 19:50:42.352928  192425 network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}
I0420 19:50:42.353715  192153 main.go:126] libmachine: (minikube) DBG | I0420 19:50:42.353670  192425 network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}
I0420 19:50:42.354449  192153 main.go:126] libmachine: (minikube) DBG | I0420 19:50:42.354410  192425 network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}
I0420 19:50:42.355180  192153 main.go:126] libmachine: (minikube) DBG | I0420 19:50:42.355133  192425 network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}
I0420 19:50:42.355912  192153 main.go:126] libmachine: (minikube) DBG | I0420 19:50:42.355877  192425 network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}
I0420 19:50:42.356646  192153 main.go:126] libmachine: (minikube) DBG | I0420 19:50:42.356598  192425 network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}
I0420 19:50:42.357370  192153 main.go:126] libmachine: (minikube) DBG | I0420 19:50:42.357338  192425 network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}
I0420 19:50:42.358121  192153 main.go:126] libmachine: (minikube) DBG | I0420 19:50:42.358077  192425 network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}
I0420 19:50:42.358870  192153 main.go:126] libmachine: (minikube) DBG | I0420 19:50:42.358835  192425 network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}
I0420 19:50:42.359640  192153 main.go:126] libmachine: (minikube) DBG | I0420 19:50:42.359578  192425 network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}
I0420 19:50:42.360364  192153 main.go:126] libmachine: (minikube) DBG | I0420 19:50:42.360332  192425 network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}
I0420 19:50:42.361115  192153 main.go:126] libmachine: (minikube) DBG | I0420 19:50:42.361070  192425 network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}
I0420 19:50:42.361853  192153 main.go:126] libmachine: (minikube) DBG | I0420 19:50:42.361818  192425 network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}
I0420 19:50:42.363421  192153 main.go:126] libmachine: (minikube) DBG | I0420 19:50:42.363359  192425 network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}
I0420 19:50:42.364164  192153 main.go:126] libmachine: (minikube) DBG | I0420 19:50:42.364138  192425 network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}
I0420 19:50:42.364732  192153 main.go:126] libmachine: (minikube) DBG | I0420 19:50:42.364688  192425 network.go:215] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.4 ClientMin:192.168.0.5 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:enp38s0 IfaceIPv4:192.168.0.4 IfaceMTU:1500 IfaceMAC:00:d8:61:79:a1:21}}
I0420 19:50:42.364755  192153 main.go:126] libmachine: (minikube) DBG | failed to find free subnet for private KVM network mk-minikube after 20 attempts: no free private network subnets found with given parameters (start: "192.168.11.0", step: 11, tries: 20)

Command outputs

ip a s:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp38s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:d8:61:79:a1:21 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.4/16 brd 192.168.255.255 scope global dynamic enp38s0
       valid_lft 66533sec preferred_lft 66533sec
    inet6 2a01:e0a:2f2:1090:2d8:61ff:fe79:a121/64 scope global dynamic mngtmpaddr noprefixroute 
       valid_lft 2591984sec preferred_lft 604784sec
    inet6 fe80::2d8:61ff:fe79:a121/64 scope link 
       valid_lft forever preferred_lft forever
3: wlo1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether d0:ab:d5:1b:5c:97 brd ff:ff:ff:ff:ff:ff
    altname wlp40s0
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:c6:2a:4b brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:52:f9:32:08 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:52ff:fef9:3208/64 scope link 
       valid_lft forever preferred_lft forever
6: vboxnet0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
    link/ether 0a:00:27:00:00:00 brd ff:ff:ff:ff:ff:ff
    inet 192.168.99.1/24 brd 192.168.99.255 scope global vboxnet0
       valid_lft forever preferred_lft forever
    inet6 fe80::800:27ff:fe00:0/64 scope link 
       valid_lft forever preferred_lft forever

ip r s:

default via 192.168.0.1 dev enp38s0 proto dhcp src 192.168.0.4 metric 1024 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 
192.168.0.0/16 dev enp38s0 proto kernel scope link src 192.168.0.4 
192.168.0.1 dev enp38s0 proto dhcp scope link src 192.168.0.4 metric 1024 
192.168.99.0/24 dev vboxnet0 proto kernel scope link src 192.168.99.1 linkdown 
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown

minikube version:

minikube version: v1.19.0
commit: 15cede53bdc5fe242228853e737333b09d4336b5-dirty

~/.minikube/bin/docker-machine-driver-kvm2 version:

version: v1.19.0
commit: 15cede53bdc5fe242228853e737333b09d4336b5
etix commented 3 years ago

I tried downgrading minikube to 1.17.1, which was known to be working; it seems the IP 192.168.49.2 is correctly selected, but the control plane isn't able to start. So my guess is that something external to minikube broke.
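
(Aside: one way to pin an older minikube release for testing, assuming the standard GitHub release asset name, is shown below; the distribution package manager's cache is another option.)

```
curl -LO https://github.com/kubernetes/minikube/releases/download/v1.17.1/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
```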

``` I0420 20:07:33.274370 201038 out.go:229] Setting OutFile to fd 1 ... I0420 20:07:33.274436 201038 out.go:281] isatty.IsTerminal(1) = true I0420 20:07:33.274442 201038 out.go:242] Setting ErrFile to fd 2... I0420 20:07:33.274445 201038 out.go:281] isatty.IsTerminal(2) = true I0420 20:07:33.274511 201038 root.go:291] Updating PATH: /home/etix/.minikube/bin W0420 20:07:33.274582 201038 root.go:266] Error reading config file at /home/etix/.minikube/config/config.json: open /home/etix/.minikube/config/config.json: no such file or directory I0420 20:07:33.274803 201038 out.go:236] Setting JSON to false I0420 20:07:33.289894 201038 start.go:106] hostinfo: {"hostname":"ryzen","uptime":20664,"bootTime":1618921389,"procs":410,"os":"linux","platform":"arch","platformFamily":"arch","platformVersion":"rolling","kernelVersion":"5.11.15-arch1-2","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostId":"5a508ea3-ff7e-4054-8ca3-624ea1702805"} I0420 20:07:33.289933 201038 start.go:116] virtualization: kvm host I0420 20:07:33.290092 201038 out.go:119] ๐Ÿ˜„ minikube v1.17.1 on Arch rolling ๐Ÿ˜„ minikube v1.17.1 on Arch rolling I0420 20:07:33.290154 201038 driver.go:315] Setting default libvirt URI to qemu:///system I0420 20:07:33.290166 201038 global.go:102] Querying for installed drivers using PATH=/home/etix/.minikube/bin:/home/etix/go/bin:/var/lib/snapd/snap/bin:/home/etix/Android/Sdk/platform-tools:/home/etix/bin/:/opt/google-cloud-sdk/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/opt/cuda/bin:/opt/cuda/nsight_compute:/opt/cuda/nsight_systems/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl I0420 20:07:33.290187 201038 notify.go:126] Checking for updates... I0420 20:07:33.367826 201038 lock.go:36] WriteFile acquiring /home/etix/.minikube/last_update_check: {Name:mkef13beeafdb5d8fdd965b38761226202971944 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0420 20:07:33.368042 201038 out.go:119] ๐ŸŽ‰ minikube 1.19.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.19.0 ๐ŸŽ‰ minikube 1.19.0 is available! 
Download it: https://github.com/kubernetes/minikube/releases/tag/v1.19.0 I0420 20:07:33.368072 201038 out.go:119] ๐Ÿ’ก To disable this notice, run: 'minikube config set WantUpdateNotification false' ๐Ÿ’ก To disable this notice, run: 'minikube config set WantUpdateNotification false' I0420 20:07:33.370307 201038 global.go:110] kvm2 priority: 8, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:} I0420 20:07:33.374969 201038 global.go:110] none priority: 4, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:running the 'none' driver as a regular user requires sudo permissions Reason: Fix: Doc:} I0420 20:07:33.375022 201038 global.go:110] podman priority: 3, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "podman": executable file not found in $PATH Reason: Fix:Install Podman Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/} I0420 20:07:33.375039 201038 global.go:110] ssh priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:} I0420 20:07:33.553147 201038 global.go:110] virtualbox priority: 6, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:} I0420 20:07:33.553232 201038 global.go:110] vmware priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "docker-machine-driver-vmware": executable file not found in $PATH Reason: Fix:Install docker-machine-driver-vmware Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/} I0420 20:07:33.601055 201038 docker.go:115] docker version: linux-20.10.6 I0420 20:07:33.601106 201038 cli_runner.go:111] Run: docker system info --format "{{json .}}" I0420 20:07:33.796769 201038 info.go:253] docker info: {ID:W6YK:5ARX:XVU6:6MNS:LQWS:SEC3:2TN2:3RGQ:5AT4:AS5M:ZDZ5:JJIX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff false] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:false NGoroutines:40 SystemTime:2021-04-20 20:07:33.616954985 +0200 CEST LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:5.11.15-arch1-2 OperatingSystem:Arch Linux OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33648889856 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ryzen Labels:[] ExperimentalBuild:true ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e.m 
Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e.m} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Experimental:true Name:buildx Path:/usr/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-tp-docker]] Warnings:}} I0420 20:07:33.796842 201038 docker.go:145] overlay module found I0420 20:07:33.796850 201038 global.go:110] docker priority: 9, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:} I0420 20:07:33.796861 201038 driver.go:257] not recommending "ssh" due to priority: 4 I0420 20:07:33.796873 201038 driver.go:286] Picked: docker I0420 20:07:33.796879 201038 driver.go:287] Alternatives: [kvm2 virtualbox ssh] I0420 20:07:33.796883 201038 driver.go:288] Rejects: [none podman vmware] I0420 20:07:33.796955 201038 out.go:119] โœจ Automatically selected the docker driver. Other choices: kvm2, virtualbox, ssh โœจ Automatically selected the docker driver. Other choices: kvm2, virtualbox, ssh I0420 20:07:33.796963 201038 start.go:279] selected driver: docker I0420 20:07:33.796968 201038 start.go:702] validating driver "docker" against I0420 20:07:33.796979 201038 start.go:713] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:} I0420 20:07:33.797681 201038 cli_runner.go:111] Run: docker system info --format "{{json .}}" I0420 20:07:33.847640 201038 info.go:253] docker info: {ID:W6YK:5ARX:XVU6:6MNS:LQWS:SEC3:2TN2:3RGQ:5AT4:AS5M:ZDZ5:JJIX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff false] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:false NGoroutines:40 SystemTime:2021-04-20 20:07:33.813588039 +0200 CEST LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:5.11.15-arch1-2 OperatingSystem:Arch Linux OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33648889856 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ryzen Labels:[] ExperimentalBuild:true ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init 
ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e.m Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e.m} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Experimental:true Name:buildx Path:/usr/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-tp-docker]] Warnings:}} I0420 20:07:33.847726 201038 start_flags.go:249] no existing cluster config was found, will generate one from the flags I0420 20:07:33.847807 201038 start_flags.go:671] Wait components to verify : map[apiserver:true system_pods:true] I0420 20:07:33.847816 201038 cni.go:74] Creating CNI manager for "" I0420 20:07:33.847822 201038 cni.go:139] CNI unnecessary in this configuration, recommending no CNI I0420 20:07:33.847827 201038 start_flags.go:390] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:12288 CPUs:6 DiskSize:40000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.18 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] Network: MultiNodeRequested:false} I0420 20:07:33.847938 201038 out.go:119] ๐Ÿ‘ Starting control plane node minikube in cluster minikube ๐Ÿ‘ Starting control plane node minikube in cluster minikube I0420 20:07:33.867004 201038 cache.go:120] Beginning downloading kic base image for docker with docker I0420 20:07:33.867061 201038 out.go:119] ๐Ÿšœ Pulling base image ... ๐Ÿšœ Pulling base image ... 
I0420 20:07:33.867069 201038 preload.go:97] Checking if preload exists for k8s version v1.18.18 and runtime docker I0420 20:07:33.867175 201038 cache.go:145] Downloading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local daemon I0420 20:07:33.867182 201038 image.go:140] Writing gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local daemon W0420 20:07:33.984925 201038 preload.go:118] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v8-v1.18.18-docker-overlay2-amd64.tar.lz4 status code: 404 I0420 20:07:33.985067 201038 cache.go:93] acquiring lock: {Name:mkf038541f3389d906e61d0606a16c2b8258501f Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0420 20:07:33.985084 201038 cache.go:93] acquiring lock: {Name:mk544c599757e918de8554d3d6f4ea723ecfbcc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0420 20:07:33.985093 201038 cache.go:93] acquiring lock: {Name:mk56528c0eab45e8d9c55658e920996bb8e3541a Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0420 20:07:33.985127 201038 cache.go:93] acquiring lock: {Name:mk29f8fc52cc1f8bf42ccfe71f934d87956165b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0420 20:07:33.985191 201038 cache.go:93] acquiring lock: {Name:mk5d28982b32d852fa6697b7a60ca83a61da38d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0420 20:07:33.985222 201038 image.go:168] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.4 I0420 20:07:33.985219 201038 cache.go:93] acquiring lock: {Name:mkcbfa9468ff1de6a7a1bf31e75d6301dcf3fee4 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0420 20:07:33.985275 201038 image.go:168] retrieving image: k8s.gcr.io/coredns:1.6.7 I0420 20:07:33.985275 201038 image.go:168] retrieving image: k8s.gcr.io/kube-scheduler:v1.18.18 I0420 20:07:33.985230 201038 cache.go:93] acquiring lock: {Name:mk5b1e5019d9fd03777606b7d7e9297ad96faf6d Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0420 20:07:33.985256 201038 cache.go:93] acquiring lock: {Name:mk64a9087a9c1e63bd143580934518fd0de45c6a Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0420 20:07:33.985319 201038 image.go:168] retrieving image: k8s.gcr.io/kube-proxy:v1.18.18 I0420 20:07:33.985350 201038 profile.go:148] Saving config to /home/etix/.minikube/profiles/minikube/config.json ... 
I0420 20:07:33.985303 201038 image.go:168] retrieving image: k8s.gcr.io/kube-controller-manager:v1.18.18 I0420 20:07:33.985378 201038 lock.go:36] WriteFile acquiring /home/etix/.minikube/profiles/minikube/config.json: {Name:mk211bd84d854fea1d6fff2b3065e355dfb157b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0420 20:07:33.985444 201038 image.go:168] retrieving image: docker.io/kubernetesui/dashboard:v2.1.0 I0420 20:07:33.985430 201038 cache.go:93] acquiring lock: {Name:mk05d1a5d1c1c7b6f2f8091128685025258cd33e Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0420 20:07:33.985429 201038 cache.go:93] acquiring lock: {Name:mk2b6a30bc10d9685444123bd80ffd0e919d1523 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0420 20:07:33.985485 201038 image.go:168] retrieving image: k8s.gcr.io/etcd:3.4.3-0 I0420 20:07:33.985552 201038 image.go:168] retrieving image: k8s.gcr.io/pause:3.2 I0420 20:07:33.985575 201038 image.go:168] retrieving image: k8s.gcr.io/kube-apiserver:v1.18.18 I0420 20:07:33.985574 201038 image.go:168] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v4 I0420 20:07:33.986504 201038 image.go:176] daemon lookup for docker.io/kubernetesui/metrics-scraper:v1.0.4: Error response from daemon: reference does not exist I0420 20:07:33.986556 201038 image.go:176] daemon lookup for k8s.gcr.io/kube-scheduler:v1.18.18: Error response from daemon: reference does not exist I0420 20:07:33.986766 201038 image.go:176] daemon lookup for k8s.gcr.io/kube-apiserver:v1.18.18: Error response from daemon: reference does not exist I0420 20:07:33.986790 201038 image.go:176] daemon lookup for docker.io/kubernetesui/dashboard:v2.1.0: Error response from daemon: reference does not exist I0420 20:07:33.986805 201038 image.go:176] daemon lookup for k8s.gcr.io/etcd:3.4.3-0: Error response from daemon: reference does not exist I0420 20:07:33.986812 201038 image.go:176] daemon lookup for k8s.gcr.io/coredns:1.6.7: Error response from daemon: reference does not exist I0420 20:07:33.987039 201038 image.go:176] daemon lookup for k8s.gcr.io/pause:3.2: Error response from daemon: reference does not exist I0420 20:07:33.986796 201038 image.go:176] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.18.18: Error response from daemon: reference does not exist I0420 20:07:33.987101 201038 image.go:176] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v4: Error response from daemon: reference does not exist I0420 20:07:33.986795 201038 image.go:176] daemon lookup for k8s.gcr.io/kube-proxy:v1.18.18: Error response from daemon: reference does not exist I0420 20:07:34.380404 201038 cache.go:135] opening: /home/etix/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.18 I0420 20:07:34.433637 201038 cache.go:135] opening: /home/etix/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.18 I0420 20:07:34.474087 201038 cache.go:135] opening: /home/etix/.minikube/cache/images/k8s.gcr.io/pause_3.2 I0420 20:07:34.474341 201038 cache.go:135] opening: /home/etix/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7 I0420 20:07:34.474551 201038 cache.go:135] opening: /home/etix/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.18 I0420 20:07:34.474785 201038 cache.go:135] opening: /home/etix/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.18 I0420 20:07:34.474791 201038 cache.go:135] opening: /home/etix/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 I0420 20:07:34.545097 201038 cache.go:130] /home/etix/.minikube/cache/images/k8s.gcr.io/pause_3.2 exists I0420 20:07:34.545136 201038 cache.go:82] cache image 
"k8s.gcr.io/pause:3.2" -> "/home/etix/.minikube/cache/images/k8s.gcr.io/pause_3.2" took 559.734705ms I0420 20:07:34.545152 201038 cache.go:66] save to tar file k8s.gcr.io/pause:3.2 -> /home/etix/.minikube/cache/images/k8s.gcr.io/pause_3.2 succeeded I0420 20:07:35.007046 201038 cache.go:135] opening: /home/etix/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 I0420 20:07:35.009307 201038 cache.go:135] opening: /home/etix/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 I0420 20:07:35.443934 201038 cache.go:135] opening: /home/etix/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v4 I0420 20:07:35.585454 201038 cache.go:130] /home/etix/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7 exists I0420 20:07:35.585492 201038 cache.go:82] cache image "k8s.gcr.io/coredns:1.6.7" -> "/home/etix/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7" took 1.600413248s I0420 20:07:35.585509 201038 cache.go:66] save to tar file k8s.gcr.io/coredns:1.6.7 -> /home/etix/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7 succeeded I0420 20:07:36.851746 201038 cache.go:130] /home/etix/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists I0420 20:07:36.851784 201038 cache.go:82] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/etix/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 2.86672183s I0420 20:07:36.851801 201038 cache.go:66] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/etix/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded I0420 20:07:36.852109 201038 cache.go:130] /home/etix/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.18 exists I0420 20:07:36.852130 201038 cache.go:82] cache image "k8s.gcr.io/kube-scheduler:v1.18.18" -> "/home/etix/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.18" took 2.867044069s I0420 20:07:36.852145 201038 cache.go:66] save to tar file k8s.gcr.io/kube-scheduler:v1.18.18 -> /home/etix/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.18 succeeded I0420 20:07:37.291169 201038 cache.go:130] /home/etix/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v4 exists I0420 20:07:37.291212 201038 cache.go:82] cache image "gcr.io/k8s-minikube/storage-provisioner:v4" -> "/home/etix/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v4" took 3.306010247s I0420 20:07:37.291233 201038 cache.go:66] save to tar file gcr.io/k8s-minikube/storage-provisioner:v4 -> /home/etix/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v4 succeeded I0420 20:07:37.297528 201038 cache.go:130] /home/etix/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.18 exists I0420 20:07:37.297558 201038 cache.go:82] cache image "k8s.gcr.io/kube-controller-manager:v1.18.18" -> "/home/etix/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.18" took 3.312445098s I0420 20:07:37.297574 201038 cache.go:66] save to tar file k8s.gcr.io/kube-controller-manager:v1.18.18 -> /home/etix/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.18 succeeded I0420 20:07:37.504291 201038 cache.go:130] /home/etix/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.18 exists I0420 20:07:37.504325 201038 cache.go:82] cache image "k8s.gcr.io/kube-apiserver:v1.18.18" -> "/home/etix/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.18" took 3.51895056s I0420 20:07:37.504343 201038 cache.go:66] save to tar file k8s.gcr.io/kube-apiserver:v1.18.18 -> 
/home/etix/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.18 succeeded I0420 20:07:37.931650 201038 cache.go:130] /home/etix/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.18 exists I0420 20:07:37.931673 201038 cache.go:82] cache image "k8s.gcr.io/kube-proxy:v1.18.18" -> "/home/etix/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.18" took 3.946508435s I0420 20:07:37.931685 201038 cache.go:66] save to tar file k8s.gcr.io/kube-proxy:v1.18.18 -> /home/etix/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.18 succeeded I0420 20:07:38.158910 201038 cache.go:130] /home/etix/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 exists I0420 20:07:38.158942 201038 cache.go:82] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/home/etix/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0" took 4.173760609s I0420 20:07:38.158956 201038 cache.go:66] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> /home/etix/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 succeeded I0420 20:07:38.932728 201038 cache.go:130] /home/etix/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists I0420 20:07:38.932756 201038 cache.go:82] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/etix/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 4.947539653s I0420 20:07:38.932768 201038 cache.go:66] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/etix/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded I0420 20:07:38.932779 201038 cache.go:73] Successfully saved all images to host disk. I0420 20:08:10.510538 201038 cache.go:148] successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e I0420 20:08:10.510566 201038 cache.go:185] Successfully downloaded all kic artifacts I0420 20:08:10.510599 201038 start.go:313] acquiring machines lock for minikube: {Name:mk433c3bfb4f2189afb70058cf0ca504910f2a6b Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0420 20:08:10.510690 201038 start.go:317] acquired machines lock for "minikube" in 72.09ยตs I0420 20:08:10.510714 201038 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:12288 CPUs:6 DiskSize:40000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.18 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.18 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: 
ExposedPorts:[] Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.18.18 ControlPlane:true Worker:true} I0420 20:08:10.510809 201038 start.go:126] createHost starting for "" (driver="docker") I0420 20:08:10.510941 201038 out.go:140] ๐Ÿ”ฅ Creating docker container (CPUs=6, Memory=12288MB) ... ๐Ÿ”ฅ Creating docker container (CPUs=6, Memory=12288MB) ...| I0420 20:08:10.511141 201038 start.go:160] libmachine.API.Create for "minikube" (driver="docker") I0420 20:08:10.511166 201038 client.go:168] LocalClient.Create starting I0420 20:08:10.511290 201038 main.go:119] libmachine: Creating CA: /home/etix/.minikube/certs/ca.pem / I0420 20:08:10.627388 201038 main.go:119] libmachine: Creating client certificate: /home/etix/.minikube/certs/cert.pem I0420 20:08:10.698547 201038 cli_runner.go:111] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}},{{$first := true}} "ContainerIPs": [{{range $k,$v := .Containers }}{{if $first}}{{$first = false}}{{else}}, {{end}}"{{$v.IPv4Address}}"{{end}}]}" - W0420 20:08:10.715750 201038 cli_runner.go:149] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}},{{$first := true}} "ContainerIPs": [{{range $k,$v := .Containers }}{{if $first}}{{$first = false}}{{else}}, {{end}}"{{$v.IPv4Address}}"{{end}}]}" returned with exit code 1 I0420 20:08:10.715802 201038 network_create.go:249] running [docker network inspect minikube] to gather additional debugging logs... I0420 20:08:10.715814 201038 cli_runner.go:111] Run: docker network inspect minikube W0420 20:08:10.733152 201038 cli_runner.go:149] docker network inspect minikube returned with exit code 1 I0420 20:08:10.733174 201038 network_create.go:252] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1 stdout: [] stderr: Error: No such network: minikube I0420 20:08:10.733185 201038 network_create.go:254] output of [docker network inspect minikube]: -- stdout -- [] -- /stdout -- ** stderr ** Error: No such network: minikube ** /stderr ** I0420 20:08:10.733223 201038 cli_runner.go:111] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}},{{$first := true}} "ContainerIPs": [{{range $k,$v := .Containers }}{{if $first}}{{$first = false}}{{else}}, {{end}}"{{$v.IPv4Address}}"{{end}}]}" I0420 20:08:10.752208 201038 network_create.go:104] attempt to create network 192.168.49.0/24 with subnet: minikube and gateway 192.168.49.1 and MTU of 1500 ... 
I0420 20:08:10.752265 201038 cli_runner.go:111] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true minikube I0420 20:08:10.790788 201038 kic.go:100] calculated static IP "192.168.49.2" for the "minikube" container I0420 20:08:10.790847 201038 cli_runner.go:111] Run: docker ps -a --format {{.Names}} I0420 20:08:10.808275 201038 cli_runner.go:111] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true \ I0420 20:08:10.826638 201038 oci.go:102] Successfully created a docker volume minikube I0420 20:08:10.826710 201038 cli_runner.go:111] Run: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib - I0420 20:08:12.809726 201038 cli_runner.go:155] Completed: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib: (1.98298797s) I0420 20:08:12.809749 201038 oci.go:106] Successfully prepared a docker volume minikube W0420 20:08:12.809777 201038 oci.go:159] Your kernel does not support swap limit capabilities or the cgroup is not mounted. I0420 20:08:12.809778 201038 preload.go:97] Checking if preload exists for k8s version v1.18.18 and runtime docker W0420 20:08:12.809794 201038 oci.go:201] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted. I0420 20:08:12.809839 201038 cli_runner.go:111] Run: docker info --format "'{{json .SecurityOptions}}'" \ I0420 20:08:12.858828 201038 cli_runner.go:111] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=12288mb --memory-swap=12288mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e / W0420 20:08:13.018958 201038 preload.go:118] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v8-v1.18.18-docker-overlay2-amd64.tar.lz4 status code: 404 - I0420 20:08:13.152147 201038 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Running}} I0420 20:08:13.169979 201038 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}} I0420 20:08:13.187612 201038 cli_runner.go:111] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables \ I0420 20:08:13.248066 201038 oci.go:246] the created container "minikube" has a running status. I0420 20:08:13.248083 201038 kic.go:194] Creating ssh key for kic: /home/etix/.minikube/machines/minikube/id_rsa... 
| I0420 20:08:13.359739 201038 kic_runner.go:188] docker (temp): /home/etix/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes) I0420 20:08:13.404100 201038 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}} / I0420 20:08:13.422481 201038 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys I0420 20:08:13.422494 201038 kic_runner.go:115] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys] I0420 20:08:13.456729 201038 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}} I0420 20:08:13.474771 201038 machine.go:88] provisioning docker machine ... I0420 20:08:13.474802 201038 ubuntu.go:169] provisioning hostname "minikube" I0420 20:08:13.474858 201038 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0420 20:08:13.492752 201038 main.go:119] libmachine: Using SSH client type: native I0420 20:08:13.492872 201038 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x55b4d6ac52e0] 0x55b4d6ac52a0 [] 0s} 127.0.0.1 49156 } I0420 20:08:13.492883 201038 main.go:119] libmachine: About to run SSH command: sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname - I0420 20:08:13.599239 201038 main.go:119] libmachine: SSH cmd err, output: : minikube I0420 20:08:13.599313 201038 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube \ I0420 20:08:13.617963 201038 main.go:119] libmachine: Using SSH client type: native I0420 20:08:13.618055 201038 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x55b4d6ac52e0] 0x55b4d6ac52a0 [] 0s} 127.0.0.1 49156 } I0420 20:08:13.618068 201038 main.go:119] libmachine: About to run SSH command: if ! 
grep -xq '.*\sminikube' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts; else echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; fi fi | I0420 20:08:13.721575 201038 main.go:119] libmachine: SSH cmd err, output: : I0420 20:08:13.721599 201038 ubuntu.go:175] set auth options {CertDir:/home/etix/.minikube CaCertPath:/home/etix/.minikube/certs/ca.pem CaPrivateKeyPath:/home/etix/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/etix/.minikube/machines/server.pem ServerKeyPath:/home/etix/.minikube/machines/server-key.pem ClientKeyPath:/home/etix/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/etix/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/etix/.minikube} I0420 20:08:13.721618 201038 ubuntu.go:177] setting up certificates I0420 20:08:13.721628 201038 provision.go:83] configureAuth start I0420 20:08:13.721684 201038 cli_runner.go:111] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I0420 20:08:13.739663 201038 provision.go:137] copyHostCerts I0420 20:08:13.739721 201038 exec_runner.go:152] cp: /home/etix/.minikube/certs/key.pem --> /home/etix/.minikube/key.pem (1675 bytes) I0420 20:08:13.739808 201038 exec_runner.go:152] cp: /home/etix/.minikube/certs/ca.pem --> /home/etix/.minikube/ca.pem (1074 bytes) I0420 20:08:13.739860 201038 exec_runner.go:152] cp: /home/etix/.minikube/certs/cert.pem --> /home/etix/.minikube/cert.pem (1115 bytes) I0420 20:08:13.739900 201038 provision.go:111] generating server cert: /home/etix/.minikube/machines/server.pem ca-key=/home/etix/.minikube/certs/ca.pem private-key=/home/etix/.minikube/certs/ca-key.pem org=etix.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube] / I0420 20:08:13.853272 201038 provision.go:165] copyRemoteCerts I0420 20:08:13.853306 201038 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0420 20:08:13.853333 201038 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0420 20:08:13.872144 201038 sshutil.go:48] new ssh client: &{IP:127.0.0.1 Port:49156 SSHKeyPath:/home/etix/.minikube/machines/minikube/id_rsa Username:docker} - I0420 20:08:13.946262 201038 ssh_runner.go:310] scp /home/etix/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1074 bytes) I0420 20:08:13.957200 201038 ssh_runner.go:310] scp /home/etix/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes) I0420 20:08:13.965721 201038 ssh_runner.go:310] scp /home/etix/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes) I0420 20:08:13.976125 201038 provision.go:86] duration metric: configureAuth took 254.48911ms I0420 20:08:13.976138 201038 ubuntu.go:193] setting minikube options for container-runtime I0420 20:08:13.976254 201038 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0420 20:08:13.994259 201038 main.go:119] libmachine: Using SSH client type: native I0420 20:08:13.994383 201038 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x55b4d6ac52e0] 0x55b4d6ac52a0 [] 0s} 127.0.0.1 49156 } I0420 20:08:13.994395 201038 main.go:119] libmachine: About to run SSH command: df --output=fstype / | tail -n 1 \ I0420 20:08:14.101866 201038 main.go:119] libmachine: SSH cmd 
err, output: : overlay I0420 20:08:14.101889 201038 ubuntu.go:71] root file system type: overlay I0420 20:08:14.102056 201038 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ... I0420 20:08:14.102117 201038 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube | I0420 20:08:14.120528 201038 main.go:119] libmachine: Using SSH client type: native I0420 20:08:14.120643 201038 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x55b4d6ac52e0] 0x55b4d6ac52a0 [] 0s} 127.0.0.1 49156 } I0420 20:08:14.120697 201038 main.go:119] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %s "[Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket [Service] Type=notify Restart=on-failure StartLimitBurst=3 StartLimitIntervalSec=60 # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP \$MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target " | sudo tee /lib/systemd/system/docker.service.new / I0420 20:08:14.228097 201038 main.go:119] libmachine: SSH cmd err, output: : [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket [Service] Type=notify Restart=on-failure StartLimitBurst=3 StartLimitIntervalSec=60 # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. 
Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target I0420 20:08:14.228208 201038 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0420 20:08:14.245873 201038 main.go:119] libmachine: Using SSH client type: native I0420 20:08:14.245985 201038 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x55b4d6ac52e0] 0x55b4d6ac52a0 [] 0s} 127.0.0.1 49156 } I0420 20:08:14.246001 201038 main.go:119] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; } - I0420 20:08:14.769544 201038 main.go:119] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2020-12-28 16:15:19.000000000 +0000 +++ /lib/systemd/system/docker.service.new 2021-04-20 18:08:14.225192245 +0000 @@ -1,30 +1,32 @@ [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com +BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target -Requires=docker.socket containerd.service +Requires=docker.socket [Service] Type=notify -# the default is not to use systemd for cgroups because the delegate issues still -# exists and systemd currently does not support the cgroup feature set required -# for containers run by docker -ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -ExecReload=/bin/kill -s HUP $MAINPID -TimeoutSec=0 -RestartSec=2 -Restart=always - -# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229. -# Both the old, and new location are accepted by systemd 229 and up, so using the old location -# to make them work for either version of systemd. +Restart=on-failure StartLimitBurst=3 +StartLimitIntervalSec=60 + -# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230. 
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make -# this option work for either version of systemd. -StartLimitInterval=60s + +# This file is a systemd drop-in unit that inherits from the base dockerd configuration. +# The base configuration already specifies an 'ExecStart=...' command. The first directive +# here is to clear out that command inherited from the base configuration. Without this, +# the command from the base configuration and the command specified here are treated as +# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd +# will catch this invalid input and refuse to start the service with an error like: +# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. + +# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other +# container runtimes. If left unlimited, it may result in OOM issues with MySQL. +ExecStart= +ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 +ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. @@ -32,16 +34,16 @@ LimitNPROC=infinity LimitCORE=infinity -# Comment TasksMax if your systemd version does not support it. -# Only systemd 226 and above support this option. +# Uncomment TasksMax if your systemd version supports it. +# Only systemd 226 and above support this version. TasksMax=infinity +TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process -OOMScoreAdjust=-500 [Install] WantedBy=multi-user.target Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install. 
Executing: /lib/systemd/systemd-sysv-install enable docker I0420 20:08:14.769646 201038 machine.go:91] provisioned docker machine in 1.294856533s I0420 20:08:14.769660 201038 client.go:171] LocalClient.Create took 4.25848532s I0420 20:08:14.769674 201038 start.go:168] duration metric: libmachine.API.Create for "minikube" took 4.25853227s I0420 20:08:14.769687 201038 start.go:267] post-start starting for "minikube" (driver="docker") I0420 20:08:14.769698 201038 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0420 20:08:14.769762 201038 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0420 20:08:14.769807 201038 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0420 20:08:14.787333 201038 sshutil.go:48] new ssh client: &{IP:127.0.0.1 Port:49156 SSHKeyPath:/home/etix/.minikube/machines/minikube/id_rsa Username:docker} \ I0420 20:08:14.863250 201038 ssh_runner.go:149] Run: cat /etc/os-release I0420 20:08:14.865001 201038 main.go:119] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found I0420 20:08:14.865022 201038 main.go:119] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found I0420 20:08:14.865042 201038 main.go:119] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found I0420 20:08:14.865051 201038 info.go:137] Remote host: Ubuntu 20.04.1 LTS I0420 20:08:14.865061 201038 filesync.go:118] Scanning /home/etix/.minikube/addons for local assets ... I0420 20:08:14.865104 201038 filesync.go:118] Scanning /home/etix/.minikube/files for local assets ... I0420 20:08:14.865129 201038 start.go:270] post-start completed in 95.431617ms I0420 20:08:14.865406 201038 cli_runner.go:111] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I0420 20:08:14.884745 201038 profile.go:148] Saving config to /home/etix/.minikube/profiles/minikube/config.json ... 
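# (editor's note, not part of the original output) The entries above show libmachine provisioning the
# "minikube" node container over SSH on the host port Docker mapped to container port 22 (49156 in this
# run). A minimal way to poke at the provisioned node by hand, assuming the key path and port from this
# log, would be:
#   docker container inspect minikube --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
#   ssh -i ~/.minikube/machines/minikube/id_rsa -p 49156 docker@127.0.0.1 'hostname && cat /etc/hosts'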
I0420 20:08:14.884911 201038 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0420 20:08:14.884941 201038 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0420 20:08:14.902511 201038 sshutil.go:48] new ssh client: &{IP:127.0.0.1 Port:49156 SSHKeyPath:/home/etix/.minikube/machines/minikube/id_rsa Username:docker} | I0420 20:08:14.975112 201038 start.go:129] duration metric: createHost completed in 4.464290095s I0420 20:08:14.975132 201038 start.go:80] releasing machines lock for "minikube", held for 4.464431674s I0420 20:08:14.975204 201038 cli_runner.go:111] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I0420 20:08:14.995794 201038 ssh_runner.go:149] Run: systemctl --version I0420 20:08:14.995833 201038 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0420 20:08:14.995847 201038 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/ I0420 20:08:14.995878 201038 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0420 20:08:15.013944 201038 sshutil.go:48] new ssh client: &{IP:127.0.0.1 Port:49156 SSHKeyPath:/home/etix/.minikube/machines/minikube/id_rsa Username:docker} I0420 20:08:15.014339 201038 sshutil.go:48] new ssh client: &{IP:127.0.0.1 Port:49156 SSHKeyPath:/home/etix/.minikube/machines/minikube/id_rsa Username:docker} - I0420 20:08:15.135240 201038 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd I0420 20:08:15.141455 201038 ssh_runner.go:149] Run: sudo systemctl cat docker.service I0420 20:08:15.147244 201038 cruntime.go:200] skipping containerd shutdown because we are bound to it I0420 20:08:15.147274 201038 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio I0420 20:08:15.153167 201038 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock image-endpoint: unix:///var/run/dockershim.sock " | sudo tee /etc/crictl.yaml" I0420 20:08:15.161576 201038 ssh_runner.go:149] Run: sudo systemctl cat docker.service I0420 20:08:15.166329 201038 ssh_runner.go:149] Run: sudo systemctl daemon-reload \ I0420 20:08:15.242442 201038 ssh_runner.go:149] Run: sudo systemctl start docker I0420 20:08:15.248534 201038 ssh_runner.go:149] Run: docker version --format {{.Server.Version}} I0420 20:08:15.289983 201038 out.go:140] ๐Ÿณ Preparing Kubernetes v1.18.18 on Docker 20.10.2 ... 
๐Ÿณ Preparing Kubernetes v1.18.18 on Docker 20.10.2 ...I0420 20:08:15.290056 201038 cli_runner.go:111] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}},{{$first := true}} "ContainerIPs": [{{range $k,$v := .Containers }}{{if $first}}{{$first = false}}{{else}}, {{end}}"{{$v.IPv4Address}}"{{end}}]}" I0420 20:08:15.308251 201038 ssh_runner.go:149] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts I0420 20:08:15.310097 201038 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts" I0420 20:08:15.314738 201038 preload.go:97] Checking if preload exists for k8s version v1.18.18 and runtime docker - W0420 20:08:15.532623 201038 preload.go:118] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v8-v1.18.18-docker-overlay2-amd64.tar.lz4 status code: 404 I0420 20:08:15.532694 201038 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}} I0420 20:08:15.554847 201038 docker.go:389] Got preloaded images: I0420 20:08:15.554869 201038 docker.go:395] k8s.gcr.io/kube-proxy:v1.18.18 wasn't preloaded I0420 20:08:15.554878 201038 cache_images.go:76] LoadImages start: [k8s.gcr.io/kube-proxy:v1.18.18 k8s.gcr.io/kube-scheduler:v1.18.18 k8s.gcr.io/kube-controller-manager:v1.18.18 k8s.gcr.io/kube-apiserver:v1.18.18 k8s.gcr.io/coredns:1.6.7 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/pause:3.2 gcr.io/k8s-minikube/storage-provisioner:v4 docker.io/kubernetesui/dashboard:v2.1.0 docker.io/kubernetesui/metrics-scraper:v1.0.4] I0420 20:08:15.555532 201038 image.go:168] retrieving image: k8s.gcr.io/kube-controller-manager:v1.18.18 I0420 20:08:15.555546 201038 image.go:168] retrieving image: k8s.gcr.io/kube-apiserver:v1.18.18 I0420 20:08:15.555552 201038 image.go:168] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.4 I0420 20:08:15.555564 201038 image.go:168] retrieving image: docker.io/kubernetesui/dashboard:v2.1.0 I0420 20:08:15.555578 201038 image.go:168] retrieving image: k8s.gcr.io/pause:3.2 I0420 20:08:15.555578 201038 image.go:168] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v4 I0420 20:08:15.555535 201038 image.go:168] retrieving image: k8s.gcr.io/kube-proxy:v1.18.18 I0420 20:08:15.555634 201038 image.go:168] retrieving image: k8s.gcr.io/coredns:1.6.7 I0420 20:08:15.555638 201038 image.go:168] retrieving image: k8s.gcr.io/etcd:3.4.3-0 I0420 20:08:15.555644 201038 image.go:168] retrieving image: k8s.gcr.io/kube-scheduler:v1.18.18 I0420 20:08:15.555806 201038 image.go:176] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.18.18: Error response from daemon: reference does not exist I0420 20:08:15.555848 201038 image.go:176] daemon lookup for docker.io/kubernetesui/dashboard:v2.1.0: Error response from daemon: reference does not exist I0420 20:08:15.555867 201038 image.go:176] daemon lookup for docker.io/kubernetesui/metrics-scraper:v1.0.4: Error response from daemon: reference does not exist I0420 20:08:15.555885 201038 image.go:176] daemon lookup for k8s.gcr.io/pause:3.2: Error response from daemon: reference does not exist I0420 20:08:15.555896 201038 image.go:176] daemon lookup for k8s.gcr.io/kube-proxy:v1.18.18: Error response from 
daemon: reference does not exist I0420 20:08:15.555872 201038 image.go:176] daemon lookup for k8s.gcr.io/kube-apiserver:v1.18.18: Error response from daemon: reference does not exist I0420 20:08:15.555951 201038 image.go:176] daemon lookup for k8s.gcr.io/etcd:3.4.3-0: Error response from daemon: reference does not exist I0420 20:08:15.555965 201038 image.go:176] daemon lookup for k8s.gcr.io/kube-scheduler:v1.18.18: Error response from daemon: reference does not exist I0420 20:08:15.555969 201038 image.go:176] daemon lookup for k8s.gcr.io/coredns:1.6.7: Error response from daemon: reference does not exist I0420 20:08:15.555951 201038 image.go:176] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v4: Error response from daemon: reference does not exist - I0420 20:08:15.964902 201038 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.18.18 I0420 20:08:15.966470 201038 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.18.18 I0420 20:08:15.971674 201038 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.3-0 I0420 20:08:15.979552 201038 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} k8s.gcr.io/pause:3.2 I0420 20:08:15.981065 201038 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} k8s.gcr.io/coredns:1.6.7 I0420 20:08:15.982274 201038 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.18.18 I0420 20:08:15.992531 201038 cache_images.go:104] "k8s.gcr.io/kube-apiserver:v1.18.18" needs transfer: "k8s.gcr.io/kube-apiserver:v1.18.18" does not exist at hash "5745154baa89dc1c56895ee4cb64cf6c1500fa8f9ad35cc02cbcb642c9c90abb" in container runtime I0420 20:08:15.992565 201038 cache_images.go:241] Loading image from cache: /home/etix/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.18 I0420 20:08:15.992697 201038 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.18.18 I0420 20:08:15.996415 201038 cache_images.go:104] "k8s.gcr.io/kube-proxy:v1.18.18" needs transfer: "k8s.gcr.io/kube-proxy:v1.18.18" does not exist at hash "8bd0db6f4d0abbdc28efd8d23b7ad60b4d82f21c31126d599c7be50d61ca82ce" in container runtime I0420 20:08:15.996440 201038 cache_images.go:241] Loading image from cache: /home/etix/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.18 I0420 20:08:15.996504 201038 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.18.18 I0420 20:08:15.998427 201038 cache_images.go:104] "k8s.gcr.io/etcd:3.4.3-0" needs transfer: "k8s.gcr.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime I0420 20:08:15.998449 201038 cache_images.go:241] Loading image from cache: /home/etix/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 I0420 20:08:15.998506 201038 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.3-0 I0420 20:08:16.007475 201038 cache_images.go:104] "k8s.gcr.io/coredns:1.6.7" needs transfer: "k8s.gcr.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime I0420 20:08:16.007492 201038 cache_images.go:241] Loading image from cache: /home/etix/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7 I0420 20:08:16.007475 201038 cache_images.go:104] "k8s.gcr.io/pause:3.2" needs transfer: "k8s.gcr.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container 
runtime I0420 20:08:16.007508 201038 cache_images.go:241] Loading image from cache: /home/etix/.minikube/cache/images/k8s.gcr.io/pause_3.2 I0420 20:08:16.007557 201038 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_1.6.7 I0420 20:08:16.007557 201038 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.2 I0420 20:08:16.008691 201038 cache_images.go:104] "k8s.gcr.io/kube-scheduler:v1.18.18" needs transfer: "k8s.gcr.io/kube-scheduler:v1.18.18" does not exist at hash "fe100f0c69843a2437338a0f61af227be8b69d033b05ed9b3f438cae7f668d5e" in container runtime I0420 20:08:16.008704 201038 cache_images.go:241] Loading image from cache: /home/etix/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.18 I0420 20:08:16.008740 201038 ssh_runner.go:300] existence check for /var/lib/minikube/images/kube-apiserver_v1.18.18: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.18.18: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.18.18': No such file or directory I0420 20:08:16.008752 201038 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.18.18 I0420 20:08:16.008754 201038 ssh_runner.go:310] scp /home/etix/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.18 --> /var/lib/minikube/images/kube-apiserver_v1.18.18 (51118080 bytes) I0420 20:08:16.008773 201038 ssh_runner.go:300] existence check for /var/lib/minikube/images/kube-proxy_v1.18.18: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.18.18: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.18.18': No such file or directory I0420 20:08:16.008784 201038 ssh_runner.go:310] scp /home/etix/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.18 --> /var/lib/minikube/images/kube-proxy_v1.18.18 (49400832 bytes) I0420 20:08:16.008792 201038 ssh_runner.go:300] existence check for /var/lib/minikube/images/etcd_3.4.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.3-0: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/etcd_3.4.3-0': No such file or directory I0420 20:08:16.008800 201038 ssh_runner.go:310] scp /home/etix/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 --> /var/lib/minikube/images/etcd_3.4.3-0 (100950016 bytes) I0420 20:08:16.009927 201038 ssh_runner.go:300] existence check for /var/lib/minikube/images/coredns_1.6.7: stat -c "%s %y" /var/lib/minikube/images/coredns_1.6.7: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/coredns_1.6.7': No such file or directory I0420 20:08:16.009942 201038 ssh_runner.go:310] scp /home/etix/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7 --> /var/lib/minikube/images/coredns_1.6.7 (13600256 bytes) I0420 20:08:16.009944 201038 ssh_runner.go:300] existence check for /var/lib/minikube/images/pause_3.2: stat -c "%s %y" /var/lib/minikube/images/pause_3.2: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/pause_3.2': No such file or directory I0420 20:08:16.009961 201038 ssh_runner.go:310] scp /home/etix/.minikube/cache/images/k8s.gcr.io/pause_3.2 --> /var/lib/minikube/images/pause_3.2 (301056 bytes) I0420 20:08:16.011084 201038 ssh_runner.go:300] existence check for /var/lib/minikube/images/kube-scheduler_v1.18.18: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.18.18: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.18.18': No 
such file or directory I0420 20:08:16.011100 201038 ssh_runner.go:310] scp /home/etix/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.18 --> /var/lib/minikube/images/kube-scheduler_v1.18.18 (34301440 bytes) \ I0420 20:08:16.034304 201038 docker.go:159] Loading image: /var/lib/minikube/images/pause_3.2 I0420 20:08:16.034353 201038 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/pause_3.2 I0420 20:08:16.075394 201038 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.18.18 | I0420 20:08:16.157361 201038 cache_images.go:263] Transferred and loaded /home/etix/.minikube/cache/images/k8s.gcr.io/pause_3.2 from cache I0420 20:08:16.157378 201038 docker.go:159] Loading image: /var/lib/minikube/images/coredns_1.6.7 I0420 20:08:16.157414 201038 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/coredns_1.6.7 I0420 20:08:16.157360 201038 cache_images.go:104] "k8s.gcr.io/kube-controller-manager:v1.18.18" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.18.18" does not exist at hash "9fb627f53264e2c60dff3d11ca9a1d60b3e2fb0e0ed7fb9bf204f200a6dfee97" in container runtime I0420 20:08:16.157454 201038 cache_images.go:241] Loading image from cache: /home/etix/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.18 I0420 20:08:16.157505 201038 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.18.18 \ I0420 20:08:16.487516 201038 cache_images.go:263] Transferred and loaded /home/etix/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7 from cache I0420 20:08:16.487536 201038 docker.go:159] Loading image: /var/lib/minikube/images/kube-scheduler_v1.18.18 I0420 20:08:16.487578 201038 ssh_runner.go:300] existence check for /var/lib/minikube/images/kube-controller-manager_v1.18.18: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.18.18: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.18.18': No such file or directory I0420 20:08:16.487607 201038 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/kube-scheduler_v1.18.18 I0420 20:08:16.487609 201038 ssh_runner.go:310] scp /home/etix/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.18 --> /var/lib/minikube/images/kube-controller-manager_v1.18.18 (49177600 bytes) - I0420 20:08:16.771282 201038 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} docker.io/kubernetesui/metrics-scraper:v1.0.4 \ I0420 20:08:16.853216 201038 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} docker.io/kubernetesui/dashboard:v2.1.0 | I0420 20:08:16.973763 201038 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v4 / I0420 20:08:17.058459 201038 cache_images.go:263] Transferred and loaded /home/etix/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.18 from cache I0420 20:08:17.058471 201038 docker.go:159] Loading image: /var/lib/minikube/images/kube-proxy_v1.18.18 I0420 20:08:17.058507 201038 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/kube-proxy_v1.18.18 I0420 20:08:17.058524 201038 cache_images.go:104] "docker.io/kubernetesui/metrics-scraper:v1.0.4" needs transfer: "docker.io/kubernetesui/metrics-scraper:v1.0.4" does not exist at hash "86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4" in container runtime I0420 20:08:17.058538 201038 cache_images.go:241] Loading image from cache: 
/home/etix/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 I0420 20:08:17.058542 201038 cache_images.go:104] "docker.io/kubernetesui/dashboard:v2.1.0" needs transfer: "docker.io/kubernetesui/dashboard:v2.1.0" does not exist at hash "9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db" in container runtime I0420 20:08:17.058552 201038 cache_images.go:241] Loading image from cache: /home/etix/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 I0420 20:08:17.058559 201038 cache_images.go:104] "gcr.io/k8s-minikube/storage-provisioner:v4" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v4" does not exist at hash "85069258b98ac4e9f9fbd51dfba3b4212d8cd1d79df7d2ecff44b1319ed641cb" in container runtime I0420 20:08:17.058570 201038 cache_images.go:241] Loading image from cache: /home/etix/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v4 I0420 20:08:17.058597 201038 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/dashboard_v2.1.0 I0420 20:08:17.058598 201038 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/metrics-scraper_v1.0.4 I0420 20:08:17.058621 201038 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v4 \ I0420 20:08:17.621681 201038 cache_images.go:263] Transferred and loaded /home/etix/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.18 from cache I0420 20:08:17.621693 201038 docker.go:159] Loading image: /var/lib/minikube/images/kube-apiserver_v1.18.18 I0420 20:08:17.621727 201038 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/kube-apiserver_v1.18.18 I0420 20:08:17.621736 201038 ssh_runner.go:300] existence check for /var/lib/minikube/images/dashboard_v2.1.0: stat -c "%s %y" /var/lib/minikube/images/dashboard_v2.1.0: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/dashboard_v2.1.0': No such file or directory I0420 20:08:17.621752 201038 ssh_runner.go:310] scp /home/etix/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 --> /var/lib/minikube/images/dashboard_v2.1.0 (67993600 bytes) I0420 20:08:17.621807 201038 ssh_runner.go:300] existence check for /var/lib/minikube/images/storage-provisioner_v4: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v4: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v4': No such file or directory I0420 20:08:17.621821 201038 ssh_runner.go:300] existence check for /var/lib/minikube/images/metrics-scraper_v1.0.4: stat -c "%s %y" /var/lib/minikube/images/metrics-scraper_v1.0.4: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/metrics-scraper_v1.0.4': No such file or directory I0420 20:08:17.621835 201038 ssh_runner.go:310] scp /home/etix/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 --> /var/lib/minikube/images/metrics-scraper_v1.0.4 (16022528 bytes) I0420 20:08:17.621836 201038 ssh_runner.go:310] scp /home/etix/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v4 --> /var/lib/minikube/images/storage-provisioner_v4 (8882688 bytes) / I0420 20:08:18.283742 201038 cache_images.go:263] Transferred and loaded /home/etix/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.18 from cache I0420 20:08:18.283756 201038 docker.go:159] Loading image: /var/lib/minikube/images/etcd_3.4.3-0 I0420 20:08:18.283786 201038 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/etcd_3.4.3-0 / I0420 20:08:19.856854 
201038 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/etcd_3.4.3-0: (1.573051574s) I0420 20:08:19.856878 201038 cache_images.go:263] Transferred and loaded /home/etix/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 from cache I0420 20:08:19.856890 201038 docker.go:159] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.18.18 I0420 20:08:19.856942 201038 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/kube-controller-manager_v1.18.18 \ I0420 20:08:20.479934 201038 cache_images.go:263] Transferred and loaded /home/etix/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.18 from cache I0420 20:08:20.479949 201038 docker.go:159] Loading image: /var/lib/minikube/images/storage-provisioner_v4 I0420 20:08:20.479979 201038 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/storage-provisioner_v4 / I0420 20:08:20.662992 201038 cache_images.go:263] Transferred and loaded /home/etix/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v4 from cache I0420 20:08:20.663005 201038 docker.go:159] Loading image: /var/lib/minikube/images/metrics-scraper_v1.0.4 I0420 20:08:20.663039 201038 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/metrics-scraper_v1.0.4 | I0420 20:08:20.947996 201038 cache_images.go:263] Transferred and loaded /home/etix/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 from cache I0420 20:08:20.948012 201038 docker.go:159] Loading image: /var/lib/minikube/images/dashboard_v2.1.0 I0420 20:08:20.948043 201038 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/dashboard_v2.1.0 - I0420 20:08:22.331957 201038 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/dashboard_v2.1.0: (1.383901805s) I0420 20:08:22.331978 201038 cache_images.go:263] Transferred and loaded /home/etix/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 from cache I0420 20:08:22.331992 201038 cache_images.go:111] Successfully loaded all cached images I0420 20:08:22.331999 201038 cache_images.go:80] LoadImages completed in 6.777106609s I0420 20:08:22.332080 201038 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}} I0420 20:08:22.396479 201038 cni.go:74] Creating CNI manager for "" I0420 20:08:22.396499 201038 cni.go:139] CNI unnecessary in this configuration, recommending no CNI I0420 20:08:22.396509 201038 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16 I0420 20:08:22.396527 201038 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.18 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]} 
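# (editor's note, not part of the original output) Just above, minikube detects the container runtime's
# cgroup driver with `docker info --format {{.CgroupDriver}}` and carries `CgroupDriver:systemd` into the
# kubeadm options that follow. A quick sanity check that the driver inside the node matches the host's
# (a sketch using the container name from this log, not something the log itself ran on the host):
#   docker info --format '{{.CgroupDriver}}'
#   docker exec minikube docker info --format '{{.CgroupDriver}}'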
I0420 20:08:22.396652 201038 kubeadm.go:154] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta2 kind: InitConfiguration localAPIEndpoint: advertiseAddress: 192.168.49.2 bindPort: 8443 bootstrapTokens: - groups: - system:bootstrappers:kubeadm:default-node-token ttl: 24h0m0s usages: - signing - authentication nodeRegistration: criSocket: /var/run/dockershim.sock name: "minikube" kubeletExtraArgs: node-ip: 192.168.49.2 taints: [] --- apiVersion: kubeadm.k8s.io/v1beta2 kind: ClusterConfiguration apiServer: certSANs: ["127.0.0.1", "localhost", "192.168.49.2"] extraArgs: enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota" controllerManager: extraArgs: allocate-node-cidrs: "true" leader-elect: "false" scheduler: extraArgs: leader-elect: "false" certificatesDir: /var/lib/minikube/certs clusterName: mk controlPlaneEndpoint: control-plane.minikube.internal:8443 dns: type: CoreDNS etcd: local: dataDir: /var/lib/minikube/etcd extraArgs: proxy-refresh-interval: "70000" kubernetesVersion: v1.18.18 networking: dnsDomain: cluster.local podSubnet: "10.244.0.0/16" serviceSubnet: 10.96.0.0/12 --- apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration authentication: x509: clientCAFile: /var/lib/minikube/certs/ca.crt cgroupDriver: systemd clusterDomain: "cluster.local" # disable disk resource management by default imageGCHighThresholdPercent: 100 evictionHard: nodefs.available: "0%" nodefs.inodesFree: "0%" imagefs.available: "0%" failSwapOn: false staticPodPath: /etc/kubernetes/manifests --- apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration clusterCIDR: "10.244.0.0/16" metricsBindAddress: 192.168.49.2:10249 I0420 20:08:22.396774 201038 kubeadm.go:868] kubelet [Unit] Wants=docker.socket [Service] ExecStart= ExecStart=/var/lib/minikube/binaries/v1.18.18/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2 [Install] config: {KubernetesVersion:v1.18.18 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} I0420 20:08:22.397182 201038 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.18.18 I0420 20:08:22.401985 201038 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.18.18: Process exited with status 2 stdout: stderr: ls: cannot access '/var/lib/minikube/binaries/v1.18.18': No such file or directory Initiating transfer... 
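# (editor's note, not part of the original output) The kubeadm config and kubelet unit dumped above are
# later written into the node container. If a start fails, they can be inspected there after the fact; a
# sketch, assuming the file locations shown in this log:
#   docker exec minikube cat /var/tmp/minikube/kubeadm.yaml
#   docker exec minikube cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
#   docker exec minikube systemctl cat kubelet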
I0420 20:08:22.402024 201038 ssh_runner.go:149] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.18.18 I0420 20:08:22.405382 201038 download.go:78] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.18.18/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.18.18/bin/linux/amd64/kubectl.sha256 -> /home/etix/.minikube/cache/linux/v1.18.18/kubectl I0420 20:08:22.405404 201038 download.go:78] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.18.18/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.18.18/bin/linux/amd64/kubelet.sha256 -> /home/etix/.minikube/cache/linux/v1.18.18/kubelet I0420 20:08:22.405414 201038 download.go:78] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.18.18/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.18.18/bin/linux/amd64/kubeadm.sha256 -> /home/etix/.minikube/cache/linux/v1.18.18/kubeadm > kubelet.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s > kubectl.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s > kubeadm.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s > kubectl: 41.92 MiB / 41.92 MiB [---------------] 100.00% 47.92 MiB p/s 1s > kubelet: 18.55 MiB / 108.03 MiB [->_________] 17.17% 26.00 MiB p/s ETA | I0420 20:08:24.538745 201038 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.18.18/kubectl I0420 20:08:24.541011 201038 ssh_runner.go:300] existence check for /var/lib/minikube/binaries/v1.18.18/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.18.18/kubectl: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/binaries/v1.18.18/kubectl': No such file or directory I0420 20:08:24.541029 201038 ssh_runner.go:310] scp /home/etix/.minikube/cache/linux/v1.18.18/kubectl --> /var/lib/minikube/binaries/v1.18.18/kubectl (43958272 bytes) > kubeadm: 37.91 MiB / 37.91 MiB [---------------] 100.00% 17.50 MiB p/s 2s > kubelet: 65.20 MiB / 108.03 MiB [------>____] 60.36% 25.98 MiB p/s ETA 1sI0420 20:08:25.902543 201038 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.18.18/kubeadm I0420 20:08:25.904980 201038 ssh_runner.go:300] existence check for /var/lib/minikube/binaries/v1.18.18/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.18.18/kubeadm: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/binaries/v1.18.18/kubeadm': No such file or directory I0420 20:08:25.905004 201038 ssh_runner.go:310] scp /home/etix/.minikube/cache/linux/v1.18.18/kubeadm --> /var/lib/minikube/binaries/v1.18.18/kubeadm (39747584 bytes) > kubelet: 108.03 MiB / 108.03 MiB [-------------] 100.00% 34.33 MiB p/s 3s / I0420 20:08:27.053009 201038 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet I0420 20:08:27.060056 201038 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.18.18/kubelet I0420 20:08:27.061979 201038 ssh_runner.go:300] existence check for /var/lib/minikube/binaries/v1.18.18/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.18.18/kubelet: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/binaries/v1.18.18/kubelet': No such file or directory I0420 20:08:27.061992 201038 ssh_runner.go:310] scp /home/etix/.minikube/cache/linux/v1.18.18/kubelet --> /var/lib/minikube/binaries/v1.18.18/kubelet (113279864 bytes) - I0420 20:08:27.202461 
201038 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube I0420 20:08:27.206426 201038 ssh_runner.go:310] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (335 bytes) I0420 20:08:27.213687 201038 ssh_runner.go:310] scp memory --> /lib/systemd/system/kubelet.service (350 bytes) I0420 20:08:27.219626 201038 ssh_runner.go:310] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1845 bytes) I0420 20:08:27.225925 201038 ssh_runner.go:149] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts I0420 20:08:27.227246 201038 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts" \ I0420 20:08:27.231967 201038 certs.go:52] Setting up /home/etix/.minikube/profiles/minikube for IP: 192.168.49.2 I0420 20:08:27.231991 201038 certs.go:175] generating minikubeCA CA: /home/etix/.minikube/ca.key | I0420 20:08:27.405435 201038 crypto.go:157] Writing cert to /home/etix/.minikube/ca.crt ... I0420 20:08:27.405450 201038 lock.go:36] WriteFile acquiring /home/etix/.minikube/ca.crt: {Name:mk19f01d9d1b84bc5a2f8b23a06a321884b9b06e Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0420 20:08:27.405575 201038 crypto.go:165] Writing key to /home/etix/.minikube/ca.key ... I0420 20:08:27.405582 201038 lock.go:36] WriteFile acquiring /home/etix/.minikube/ca.key: {Name:mkc9cfa531f9053fc15cfe8c89396a60891fa9ab Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0420 20:08:27.405641 201038 certs.go:175] generating proxyClientCA CA: /home/etix/.minikube/proxy-client-ca.key - I0420 20:08:27.535419 201038 crypto.go:157] Writing cert to /home/etix/.minikube/proxy-client-ca.crt ... I0420 20:08:27.535432 201038 lock.go:36] WriteFile acquiring /home/etix/.minikube/proxy-client-ca.crt: {Name:mk6899c8a9907c174bd30aeb1e0b37a180940ae1 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0420 20:08:27.535532 201038 crypto.go:165] Writing key to /home/etix/.minikube/proxy-client-ca.key ... I0420 20:08:27.535539 201038 lock.go:36] WriteFile acquiring /home/etix/.minikube/proxy-client-ca.key: {Name:mk875f49a11418af47f681beeeeff30522c2bc65 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0420 20:08:27.535599 201038 certs.go:279] generating minikube-user signed cert: /home/etix/.minikube/profiles/minikube/client.key I0420 20:08:27.535604 201038 crypto.go:69] Generating cert /home/etix/.minikube/profiles/minikube/client.crt with IP's: [] I0420 20:08:27.567864 201038 crypto.go:157] Writing cert to /home/etix/.minikube/profiles/minikube/client.crt ... I0420 20:08:27.567873 201038 lock.go:36] WriteFile acquiring /home/etix/.minikube/profiles/minikube/client.crt: {Name:mk8a56ac49c4d3f826ce794b4c6b85f719e8e7ea Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0420 20:08:27.567937 201038 crypto.go:165] Writing key to /home/etix/.minikube/profiles/minikube/client.key ... 
I0420 20:08:27.567943 201038 lock.go:36] WriteFile acquiring /home/etix/.minikube/profiles/minikube/client.key: {Name:mk7ccaf0a0cb10dc93e651045dd211b1aa1ba34c Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0420 20:08:27.567986 201038 certs.go:279] generating minikube signed cert: /home/etix/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 I0420 20:08:27.567991 201038 crypto.go:69] Generating cert /home/etix/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1] \ I0420 20:08:27.653275 201038 crypto.go:157] Writing cert to /home/etix/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ... I0420 20:08:27.653287 201038 lock.go:36] WriteFile acquiring /home/etix/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mkc95bc8f6617a34a07a520427da86d670b4d33d Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0420 20:08:27.653362 201038 crypto.go:165] Writing key to /home/etix/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ... I0420 20:08:27.653368 201038 lock.go:36] WriteFile acquiring /home/etix/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mk68aff727cc4fc502f10058a5cfb579bd7680be Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0420 20:08:27.653405 201038 certs.go:290] copying /home/etix/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /home/etix/.minikube/profiles/minikube/apiserver.crt I0420 20:08:27.653440 201038 certs.go:294] copying /home/etix/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /home/etix/.minikube/profiles/minikube/apiserver.key I0420 20:08:27.653467 201038 certs.go:279] generating aggregator signed cert: /home/etix/.minikube/profiles/minikube/proxy-client.key I0420 20:08:27.653472 201038 crypto.go:69] Generating cert /home/etix/.minikube/profiles/minikube/proxy-client.crt with IP's: [] | I0420 20:08:27.784525 201038 crypto.go:157] Writing cert to /home/etix/.minikube/profiles/minikube/proxy-client.crt ... I0420 20:08:27.784539 201038 lock.go:36] WriteFile acquiring /home/etix/.minikube/profiles/minikube/proxy-client.crt: {Name:mk110a581e33a62ef288b8a64798c00f43938f23 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0420 20:08:27.784634 201038 crypto.go:165] Writing key to /home/etix/.minikube/profiles/minikube/proxy-client.key ... 
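# (editor's note, not part of the original output) The apiserver certificate generated above is copied to
# /home/etix/.minikube/profiles/minikube/apiserver.crt with the SANs listed in the log
# (192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1). One way to double-check it on the host, assuming openssl
# is installed:
#   openssl x509 -in ~/.minikube/profiles/minikube/apiserver.crt -noout -text | grep -A2 'Subject Alternative Name'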
I0420 20:08:27.784640 201038 lock.go:36] WriteFile acquiring /home/etix/.minikube/profiles/minikube/proxy-client.key: {Name:mkc1a3139dbcbe25b02a664ae0922782da742815 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0420 20:08:27.784727 201038 certs.go:354] found cert: /home/etix/.minikube/certs/home/etix/.minikube/certs/ca-key.pem (1679 bytes) I0420 20:08:27.784749 201038 certs.go:354] found cert: /home/etix/.minikube/certs/home/etix/.minikube/certs/ca.pem (1074 bytes) I0420 20:08:27.784764 201038 certs.go:354] found cert: /home/etix/.minikube/certs/home/etix/.minikube/certs/cert.pem (1115 bytes) I0420 20:08:27.784780 201038 certs.go:354] found cert: /home/etix/.minikube/certs/home/etix/.minikube/certs/key.pem (1675 bytes) I0420 20:08:27.785358 201038 ssh_runner.go:310] scp /home/etix/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I0420 20:08:27.796958 201038 ssh_runner.go:310] scp /home/etix/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes) I0420 20:08:27.805760 201038 ssh_runner.go:310] scp /home/etix/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes) I0420 20:08:27.816600 201038 ssh_runner.go:310] scp /home/etix/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes) I0420 20:08:27.826158 201038 ssh_runner.go:310] scp /home/etix/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) / I0420 20:08:27.835749 201038 ssh_runner.go:310] scp /home/etix/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes) I0420 20:08:27.847038 201038 ssh_runner.go:310] scp /home/etix/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0420 20:08:27.856351 201038 ssh_runner.go:310] scp /home/etix/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes) I0420 20:08:27.867210 201038 ssh_runner.go:310] scp /home/etix/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0420 20:08:27.877172 201038 ssh_runner.go:310] scp memory --> /var/lib/minikube/kubeconfig (740 bytes) I0420 20:08:27.885589 201038 ssh_runner.go:149] Run: openssl version I0420 20:08:27.889549 201038 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0420 20:08:27.893787 201038 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0420 20:08:27.895218 201038 certs.go:395] hashing: -rw-r--r-- 1 root root 1111 Apr 20 18:08 /usr/share/ca-certificates/minikubeCA.pem I0420 20:08:27.895274 201038 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0420 20:08:27.897879 201038 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0420 20:08:27.902144 201038 kubeadm.go:370] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:12288 CPUs:6 DiskSize:40000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default 
KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.18 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.18 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] Network: MultiNodeRequested:false} I0420 20:08:27.902256 201038 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0420 20:08:27.922146 201038 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0420 20:08:27.925769 201038 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0420 20:08:27.929441 201038 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver I0420 20:08:27.929468 201038 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0420 20:08:27.932646 201038 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0420 20:08:27.932681 201038 ssh_runner.go:236] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.18:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables" \ I0420 20:08:28.468374 201038 out.go:140] โ–ช Generating certificates and keys ... โ–ช Generating certificates and keys ...| I0420 20:08:30.559308 201038 out.go:140] โ–ช Booting up control plane ... 
โ–ช Booting up control plane .../ W0420 20:10:25.570742 201038 out.go:181] ๐Ÿ’ข initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.18:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.18.18 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Generating "apiserver-kubelet-client" certificate and key [certs] Generating "front-proxy-ca" certificate and key [certs] Generating "front-proxy-client" certificate and key [certs] Generating "etcd/ca" certificate and key [certs] Generating "etcd/server" certificate and key [certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [192.168.49.2 127.0.0.1 ::1] [certs] Generating "etcd/peer" certificate and key [certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [192.168.49.2 127.0.0.1 ::1] [certs] Generating "etcd/healthcheck-client" certificate and key [certs] Generating "apiserver-etcd-client" certificate and key [certs] Generating "sa" key and public key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. 
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all Kubernetes containers running in docker: - 'docker ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'docker logs CONTAINERID' stderr: ๐Ÿ’ข initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.18:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.18.18 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Generating "apiserver-kubelet-client" certificate and key [certs] Generating "front-proxy-ca" certificate and key [certs] Generating "front-proxy-client" certificate and key [certs] Generating "etcd/ca" certificate and key [certs] Generating "etcd/server" certificate and key [certs] etcd/server serving cert is signed for DNS 
names [minikube localhost] and IPs [192.168.49.2 127.0.0.1 ::1] [certs] Generating "etcd/peer" certificate and key [certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [192.168.49.2 127.0.0.1 ::1] [certs] Generating "etcd/healthcheck-client" certificate and key [certs] Generating "apiserver-etcd-client" certificate and key [certs] Generating "sa" key and public key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. 
Here is one example how you may list all Kubernetes containers running in docker: - 'docker ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'docker logs CONTAINERID' stderr: I0420 20:10:25.570940 201038 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.18:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force" I0420 20:10:25.961701 201038 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet I0420 20:10:25.967541 201038 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0420 20:10:25.990030 201038 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver I0420 20:10:25.990073 201038 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0420 20:10:25.993705 201038 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0420 20:10:25.993728 201038 ssh_runner.go:236] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.18:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables" I0420 20:10:26.554431 201038 out.go:140] โ–ช Generating certificates and keys ... โ–ช Generating certificates and keys ...| I0420 20:10:26.976237 201038 out.go:140] โ–ช Booting up control plane ... 
โ–ช Booting up control plane .../ I0420 20:12:21.987122 201038 kubeadm.go:372] StartCluster complete in 3m54.084978412s I0420 20:12:21.987192 201038 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}} I0420 20:12:22.008219 201038 logs.go:206] 0 containers: [] W0420 20:12:22.008236 201038 logs.go:208] No container was found matching "kube-apiserver" I0420 20:12:22.008274 201038 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}} I0420 20:12:22.029562 201038 logs.go:206] 0 containers: [] W0420 20:12:22.029577 201038 logs.go:208] No container was found matching "etcd" I0420 20:12:22.029614 201038 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}} I0420 20:12:22.050422 201038 logs.go:206] 0 containers: [] W0420 20:12:22.050433 201038 logs.go:208] No container was found matching "coredns" I0420 20:12:22.050470 201038 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}} I0420 20:12:22.070617 201038 logs.go:206] 0 containers: [] W0420 20:12:22.070628 201038 logs.go:208] No container was found matching "kube-scheduler" I0420 20:12:22.070664 201038 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}} - I0420 20:12:22.091057 201038 logs.go:206] 0 containers: [] W0420 20:12:22.091071 201038 logs.go:208] No container was found matching "kube-proxy" I0420 20:12:22.091120 201038 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}} I0420 20:12:22.111869 201038 logs.go:206] 0 containers: [] W0420 20:12:22.111884 201038 logs.go:208] No container was found matching "kubernetes-dashboard" I0420 20:12:22.111919 201038 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}} I0420 20:12:22.132411 201038 logs.go:206] 0 containers: [] W0420 20:12:22.132426 201038 logs.go:208] No container was found matching "storage-provisioner" I0420 20:12:22.132460 201038 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}} I0420 20:12:22.152775 201038 logs.go:206] 0 containers: [] W0420 20:12:22.152788 201038 logs.go:208] No container was found matching "kube-controller-manager" I0420 20:12:22.152797 201038 logs.go:120] Gathering logs for container status ... I0420 20:12:22.152807 201038 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" \ I0420 20:12:24.185321 201038 ssh_runner.go:189] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.032499968s) I0420 20:12:24.185503 201038 logs.go:120] Gathering logs for kubelet ... I0420 20:12:24.185516 201038 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0420 20:12:24.230322 201038 logs.go:120] Gathering logs for dmesg ... I0420 20:12:24.230345 201038 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0420 20:12:24.241550 201038 logs.go:120] Gathering logs for describe nodes ... 
I0420 20:12:24.241572 201038 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.18/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" W0420 20:12:24.276478 201038 logs.go:127] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.18/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.18/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1 stdout: stderr: The connection to the server localhost:8443 was refused - did you specify the right host or port? output: ** stderr ** The connection to the server localhost:8443 was refused - did you specify the right host or port? ** /stderr ** I0420 20:12:24.276493 201038 logs.go:120] Gathering logs for Docker ... I0420 20:12:24.276501 201038 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u docker -n 400" | W0420 20:12:24.284511 201038 out.go:302] Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.18:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.18.18 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Using existing apiserver-kubelet-client certificate and key on disk [certs] Using existing front-proxy-ca certificate authority [certs] Using existing front-proxy-client certificate and key on disk [certs] Using existing etcd/ca certificate authority [certs] Using existing etcd/server certificate and key on disk [certs] Using existing etcd/peer certificate and key on disk [certs] Using existing etcd/healthcheck-client certificate and key on disk [certs] Using existing apiserver-etcd-client certificate and key on disk [certs] Using the existing "sa" key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in 
"/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. 
Here is one example how you may list all Kubernetes containers running in docker: - 'docker ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'docker logs CONTAINERID' stderr: W0420 20:12:24.284600 201038 out.go:181] W0420 20:12:24.284692 201038 out.go:181] ๐Ÿ’ฃ Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.18:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.18.18 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Using existing apiserver-kubelet-client certificate and key on disk [certs] Using existing front-proxy-ca certificate authority [certs] Using existing front-proxy-client certificate and key on disk [certs] Using existing etcd/ca certificate authority [certs] Using existing etcd/server certificate and key on disk [certs] Using existing etcd/peer certificate and key on disk [certs] Using existing etcd/healthcheck-client certificate and key on disk [certs] Using existing apiserver-etcd-client certificate and key on disk [certs] Using the existing "sa" key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. 
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all Kubernetes containers running in docker: - 'docker ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'docker logs CONTAINERID' stderr: ๐Ÿ’ฃ Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.18:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.18.18 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Using existing apiserver-kubelet-client certificate and key on disk [certs] Using existing front-proxy-ca certificate authority [certs] Using existing front-proxy-client certificate and key on disk [certs] Using existing etcd/ca certificate authority [certs] Using existing etcd/server certificate and key on disk [certs] Using existing 
etcd/peer certificate and key on disk [certs] Using existing etcd/healthcheck-client certificate and key on disk [certs] Using existing apiserver-etcd-client certificate and key on disk [certs] Using the existing "sa" key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all Kubernetes containers running in docker: - 'docker ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'docker logs CONTAINERID' stderr: W0420 20:12:24.284831 201038 out.go:181] W0420 20:12:24.284854 201038 out.go:181] ๐Ÿ˜ฟ minikube is exiting due to an error. If the above message is not useful, open an issue: ๐Ÿ˜ฟ minikube is exiting due to an error. 
If the above message is not useful, open an issue: W0420 20:12:24.284880 201038 out.go:181] ๐Ÿ‘‰ https://github.com/kubernetes/minikube/issues/new/choose ๐Ÿ‘‰ https://github.com/kubernetes/minikube/issues/new/choose I0420 20:12:24.285858 201038 out.go:119] W0420 20:12:24.285940 201038 out.go:181] โŒ Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.18:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.18.18 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Using existing apiserver-kubelet-client certificate and key on disk [certs] Using existing front-proxy-ca certificate authority [certs] Using existing front-proxy-client certificate and key on disk [certs] Using existing etcd/ca certificate authority [certs] Using existing etcd/server certificate and key on disk [certs] Using existing etcd/peer certificate and key on disk [certs] Using existing etcd/healthcheck-client certificate and key on disk [certs] Using existing apiserver-etcd-client certificate and key on disk [certs] Using the existing "sa" key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. 
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all Kubernetes containers running in docker: - 'docker ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'docker logs CONTAINERID' stderr: โŒ Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.18:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.18.18 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Using existing apiserver-kubelet-client certificate and key on disk [certs] Using existing front-proxy-ca certificate authority [certs] Using existing front-proxy-client certificate and key on disk [certs] Using existing etcd/ca certificate authority [certs] Using existing etcd/server certificate and key on disk [certs] Using 
existing etcd/peer certificate and key on disk [certs] Using existing etcd/healthcheck-client certificate and key on disk [certs] Using existing apiserver-etcd-client certificate and key on disk [certs] Using the existing "sa" key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. 
Here is one example how you may list all Kubernetes containers running in docker: - 'docker ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'docker logs CONTAINERID' stderr: W0420 20:12:24.286072 201038 out.go:181] ๐Ÿ’ก Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start ๐Ÿ’ก Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start W0420 20:12:24.286100 201038 out.go:181] ๐Ÿฟ Related issue: https://github.com/kubernetes/minikube/issues/4172 ๐Ÿฟ Related issue: https://github.com/kubernetes/minikube/issues/4172 I0420 20:12:24.286111 201038 out.go:119] ```
prezha commented 3 years ago

@etix thanks a lot for the input, that was useful - I think I know what the problem might be:

ip a s:

2: enp38s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:d8:61:79:a1:21 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.4/16 brd 192.168.255.255 scope global dynamic enp38s0

So the 192.168.0.0/16 network on enp38s0 overlaps with every 192.168.x.0/24 subnet we try to reserve, which explains the "no free private network subnets found" error.

Would you be able to set/use some 192.168.y.0/24 network for your enp38s0 interface instead?
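
To make the overlap concrete, here is a minimal Go sketch (illustrative only; it is not minikube's actual subnet-selection code, and the candidate list below is just an example): a host interface configured as 192.168.0.4/16 covers the whole 192.168.0.0/16 range, so any 192.168.x.0/24 candidate minikube tries to reserve will collide with it.

```go
package main

import (
	"fmt"
	"net"
)

// overlaps reports whether two CIDR ranges share any addresses.
func overlaps(a, b *net.IPNet) bool {
	return a.Contains(b.IP) || b.Contains(a.IP)
}

func main() {
	// Host interface as reported by `ip a s` above.
	_, hostNet, _ := net.ParseCIDR("192.168.0.4/16")

	// A few example /24 candidates inside 192.168.0.0/16 (hypothetical list,
	// not necessarily the exact order minikube probes).
	for _, cidr := range []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"} {
		_, candidate, _ := net.ParseCIDR(cidr)
		fmt.Printf("%-18s overlaps %s: %v\n", cidr, hostNet, overlaps(hostNet, candidate))
	}
	// Every candidate prints "true", so no free private /24 is ever found,
	// matching the "no free private network subnets found" error.
}
```

Narrowing the host address to a /24 (as asked above) leaves the rest of 192.168.0.0/16 free for minikube's docker network.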

etix commented 3 years ago

This network configuration has been working for more than a year with minikube < 1.19, but I see your point :+1:

# Replace the /16 address with a /24 on the same interface so minikube's
# 192.168.x.0/24 subnets no longer overlap with the host network.
sudo ip address del 192.168.0.4/16 dev enp38s0
sudo ip address add 192.168.0.4/24 dev enp38s0

There's some good progress indeed: the weird subnet reservation loop is now gone (:tada:), but I still end up with a non-functional control plane, just like on 1.17.1. It might be a separate issue, but the coincidence is troubling.

``` I0420 20:44:56.368325 293956 out.go:278] Setting OutFile to fd 1 ... I0420 20:44:56.368394 293956 out.go:330] isatty.IsTerminal(1) = true I0420 20:44:56.368398 293956 out.go:291] Setting ErrFile to fd 2... I0420 20:44:56.368401 293956 out.go:330] isatty.IsTerminal(2) = true I0420 20:44:56.368690 293956 root.go:317] Updating PATH: /home/etix/.minikube/bin W0420 20:44:56.368826 293956 root.go:292] Error reading config file at /home/etix/.minikube/config/config.json: open /home/etix/.minikube/config/config.json: no such file or directory I0420 20:44:56.369137 293956 out.go:285] Setting JSON to false I0420 20:44:56.389845 293956 start.go:108] hostinfo: {"hostname":"ryzen","uptime":22907,"bootTime":1618921389,"procs":446,"os":"linux","platform":"arch","platformFamily":"arch","platformVersion":"rolling","kernelVersion":"5.11.15-arch1-2","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostId":"5a508ea3-ff7e-4054-8ca3-624ea1702805"} I0420 20:44:56.389897 293956 start.go:118] virtualization: kvm host I0420 20:44:56.394080 293956 out.go:157] ๐Ÿ˜„ minikube v1.19.0 on Arch rolling ๐Ÿ˜„ minikube v1.19.0 on Arch rolling I0420 20:44:56.394176 293956 driver.go:322] Setting default libvirt URI to qemu:///system I0420 20:44:56.394182 293956 notify.go:126] Checking for updates... I0420 20:44:56.394189 293956 global.go:103] Querying for installed drivers using PATH=/home/etix/.minikube/bin:/home/etix/go/bin:/var/lib/snapd/snap/bin:/home/etix/Android/Sdk/platform-tools:/home/etix/bin/:/opt/google-cloud-sdk/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/opt/cuda/bin:/opt/cuda/nsight_compute:/opt/cuda/nsight_systems/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl I0420 20:44:56.394227 293956 global.go:111] vmware default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "docker-machine-driver-vmware": executable file not found in $PATH Reason: Fix:Install docker-machine-driver-vmware Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/} I0420 20:44:56.416217 293956 docker.go:119] docker version: linux-20.10.6 I0420 20:44:56.416268 293956 cli_runner.go:115] Run: docker system info --format "{{json .}}" I0420 20:44:56.465755 293956 info.go:261] docker info: {ID:W6YK:5ARX:XVU6:6MNS:LQWS:SEC3:2TN2:3RGQ:5AT4:AS5M:ZDZ5:JJIX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff false] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:false NGoroutines:40 SystemTime:2021-04-20 20:44:56.433167377 +0200 CEST LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:5.11.15-arch1-2 OperatingSystem:Arch Linux OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33648889856 
GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ryzen Labels:[] ExperimentalBuild:true ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e.m Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e.m} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Experimental:true Name:buildx Path:/usr/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-tp-docker]] Warnings:}} I0420 20:44:56.465834 293956 docker.go:225] overlay module found I0420 20:44:56.465841 293956 global.go:111] docker default: true priority: 9, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:} I0420 20:44:56.694059 293956 global.go:111] kvm2 default: true priority: 8, state: {Installed:true Healthy:true Running:true NeedsImprovement:false Error: Reason: Fix: Doc:} I0420 20:44:56.705791 293956 global.go:111] none default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:} I0420 20:44:56.705835 293956 global.go:111] podman default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "podman": executable file not found in $PATH Reason: Fix:Install Podman Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/} I0420 20:44:56.705845 293956 global.go:111] ssh default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:} I0420 20:44:56.880160 293956 global.go:111] virtualbox default: true priority: 6, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:} I0420 20:44:56.880190 293956 driver.go:258] not recommending "none" due to default: false I0420 20:44:56.880195 293956 driver.go:258] not recommending "ssh" due to default: false I0420 20:44:56.880208 293956 driver.go:292] Picked: docker I0420 20:44:56.880214 293956 driver.go:293] Alternatives: [kvm2 virtualbox none ssh] I0420 20:44:56.880221 293956 driver.go:294] Rejects: [podman vmware] I0420 20:44:56.881405 293956 out.go:157] โœจ Automatically selected the docker driver. Other choices: kvm2, virtualbox, none, ssh โœจ Automatically selected the docker driver. Other choices: kvm2, virtualbox, none, ssh I0420 20:44:56.881415 293956 start.go:276] selected driver: docker I0420 20:44:56.881420 293956 start.go:718] validating driver "docker" against I0420 20:44:56.881431 293956 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:} W0420 20:44:56.881463 293956 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. W0420 20:44:56.881486 293956 out.go:222] โ— Your cgroup does not allow setting memory. โ— Your cgroup does not allow setting memory. 
I0420 20:44:56.883003 293956 out.go:157] โ–ช More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities โ–ช More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities I0420 20:44:56.899543 293956 cli_runner.go:115] Run: docker system info --format "{{json .}}" I0420 20:44:56.947240 293956 info.go:261] docker info: {ID:W6YK:5ARX:XVU6:6MNS:LQWS:SEC3:2TN2:3RGQ:5AT4:AS5M:ZDZ5:JJIX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff false] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:false NGoroutines:40 SystemTime:2021-04-20 20:44:56.914689947 +0200 CEST LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:5.11.15-arch1-2 OperatingSystem:Arch Linux OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33648889856 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ryzen Labels:[] ExperimentalBuild:true ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e.m Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e.m} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Experimental:true Name:buildx Path:/usr/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. 
Version:v0.5.1-tp-docker]] Warnings:}} I0420 20:44:56.947379 293956 start_flags.go:253] no existing cluster config was found, will generate one from the flags I0420 20:44:56.947523 293956 start_flags.go:730] Wait components to verify : map[apiserver:true system_pods:true] I0420 20:44:56.947547 293956 cni.go:81] Creating CNI manager for "" I0420 20:44:56.947560 293956 cni.go:153] CNI unnecessary in this configuration, recommending no CNI I0420 20:44:56.947569 293956 start_flags.go:270] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 Memory:12288 CPUs:6 DiskSize:40000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.18 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} I0420 20:44:56.948429 293956 out.go:157] ๐Ÿ‘ Starting control plane node minikube in cluster minikube ๐Ÿ‘ Starting control plane node minikube in cluster minikube I0420 20:44:56.948461 293956 image.go:107] Checking for gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 in local docker daemon I0420 20:44:56.966110 293956 image.go:111] Found gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 in local docker daemon, skipping pull I0420 20:44:56.966125 293956 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 exists in daemon, skipping pull I0420 20:44:56.966133 293956 preload.go:97] Checking if preload exists for k8s version v1.18.18 and runtime docker I0420 20:44:57.080399 293956 preload.go:122] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v10-v1.18.18-docker-overlay2-amd64.tar.lz4 I0420 20:44:57.080421 293956 cache.go:54] Caching tarball of preloaded images I0420 20:44:57.080446 293956 preload.go:97] Checking if preload exists for k8s version v1.18.18 and runtime docker I0420 20:44:57.193973 293956 preload.go:122] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v10-v1.18.18-docker-overlay2-amd64.tar.lz4 I0420 20:44:57.197901 293956 out.go:157] ๐Ÿ’พ Downloading Kubernetes v1.18.18 preload ... ๐Ÿ’พ Downloading Kubernetes v1.18.18 preload ... 
I0420 20:44:57.197984 293956 download.go:78] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v10-v1.18.18-docker-overlay2-amd64.tar.lz4 -> /home/etix/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.18.18-docker-overlay2-amd64.tar.lz4 > preloaded-images-k8s-v10-v1...: 512.86 MiB / 512.86 MiB 100.00% 86.04 Mi I0420 20:45:04.517237 293956 preload.go:160] saving checksum for preloaded-images-k8s-v10-v1.18.18-docker-overlay2-amd64.tar.lz4 ... I0420 20:45:04.645593 293956 preload.go:177] verifying checksumm of /home/etix/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.18.18-docker-overlay2-amd64.tar.lz4 ... I0420 20:45:05.371263 293956 cache.go:57] Finished verifying existence of preloaded tar for v1.18.18 on docker I0420 20:45:05.371465 293956 profile.go:148] Saving config to /home/etix/.minikube/profiles/minikube/config.json ... I0420 20:45:05.371485 293956 lock.go:36] WriteFile acquiring /home/etix/.minikube/profiles/minikube/config.json: {Name:mk211bd84d854fea1d6fff2b3065e355dfb157b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0420 20:45:05.371657 293956 cache.go:185] Successfully downloaded all kic artifacts I0420 20:45:05.371670 293956 start.go:313] acquiring machines lock for minikube: {Name:mk433c3bfb4f2189afb70058cf0ca504910f2a6b Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0420 20:45:05.371696 293956 start.go:317] acquired machines lock for "minikube" in 18.47ยตs I0420 20:45:05.371709 293956 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 Memory:12288 CPUs:6 DiskSize:40000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.18 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.18 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.18.18 ControlPlane:true Worker:true} I0420 20:45:05.371747 293956 start.go:126] createHost starting for "" (driver="docker") I0420 20:45:05.376430 293956 out.go:184] ๐Ÿ”ฅ Creating docker container (CPUs=6, Memory=12288MB) ... 
๐Ÿ”ฅ Creating docker container (CPUs=6, Memory=12288MB) ...| I0420 20:45:05.376548 293956 start.go:160] libmachine.API.Create for "minikube" (driver="docker") I0420 20:45:05.376564 293956 client.go:168] LocalClient.Create starting I0420 20:45:05.376612 293956 main.go:126] libmachine: Creating CA: /home/etix/.minikube/certs/ca.pem / I0420 20:45:05.510413 293956 main.go:126] libmachine: Creating client certificate: /home/etix/.minikube/certs/cert.pem \ I0420 20:45:05.723298 293956 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" W0420 20:45:05.740824 293956 cli_runner.go:162] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1 I0420 20:45:05.740872 293956 network_create.go:249] running [docker network inspect minikube] to gather additional debugging logs... I0420 20:45:05.740884 293956 cli_runner.go:115] Run: docker network inspect minikube W0420 20:45:05.759589 293956 cli_runner.go:162] docker network inspect minikube returned with exit code 1 I0420 20:45:05.759617 293956 network_create.go:252] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1 stdout: [] stderr: Error: No such network: minikube I0420 20:45:05.759633 293956 network_create.go:254] output of [docker network inspect minikube]: -- stdout -- [] -- /stdout -- ** stderr ** Error: No such network: minikube ** /stderr ** I0420 20:45:05.759688 293956 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0420 20:45:05.777245 293956 network.go:263] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000114008] misses:0} I0420 20:45:05.777276 293956 network.go:210] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}} I0420 20:45:05.777291 293956 network_create.go:100] attempt to create docker network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ... 
I0420 20:45:05.777328 293956 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true minikube
I0420 20:45:05.815307 293956 network_create.go:84] docker network minikube 192.168.49.0/24 created
I0420 20:45:05.815326 293956 kic.go:102] calculated static IP "192.168.49.2" for the "minikube" container
I0420 20:45:05.815376 293956 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
I0420 20:45:05.831658 293956 cli_runner.go:115] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0420 20:45:05.849230 293956 oci.go:102] Successfully created a docker volume minikube
I0420 20:45:05.849282 293956 cli_runner.go:115] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 -d /var/lib
I0420 20:45:06.545423 293956 oci.go:106] Successfully prepared a docker volume minikube
W0420 20:45:06.545472 293956 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0420 20:45:06.545487 293956 preload.go:97] Checking if preload exists for k8s version v1.18.18 and runtime docker
W0420 20:45:06.545493 293956 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
W0420 20:45:06.545530 293956 oci.go:233] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
I0420 20:45:06.545534 293956 preload.go:105] Found local preload: /home/etix/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.18.18-docker-overlay2-amd64.tar.lz4
I0420 20:45:06.545548 293956 kic.go:175] Starting extracting preloaded images to volume ...
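# --- Aside (not part of the minikube log above): the subnet-selection step just
# shown can also be checked by hand with standard Docker/iproute2 commands.
# Overlapping 192.168.x.0/24 or 10.x.x.x networks already present on the host
# are what keeps minikube from finding a free private subnet.
docker network ls
docker network inspect minikube --format '{{range .IPAM.Config}}{{.Subnet}} (gateway {{.Gateway}}){{end}}'
ip route show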
I0420 20:45:06.545586 293956 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'" I0420 20:45:06.545605 293956 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/etix/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.18.18-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 -I lz4 -xf /preloaded.tar -C /extractDir | I0420 20:45:06.595617 293956 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 \ I0420 20:45:06.936623 293956 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Running}} I0420 20:45:06.954668 293956 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}} I0420 20:45:06.973905 293956 cli_runner.go:115] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables | I0420 20:45:07.052401 293956 oci.go:278] the created container "minikube" has a running status. I0420 20:45:07.052423 293956 kic.go:206] Creating ssh key for kic: /home/etix/.minikube/machines/minikube/id_rsa... / I0420 20:45:07.095912 293956 kic_runner.go:188] docker (temp): /home/etix/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes) I0420 20:45:07.146844 293956 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}} I0420 20:45:07.165785 293956 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys I0420 20:45:07.165801 293956 kic_runner.go:115] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys] - I0420 20:45:08.470500 293956 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/etix/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.18.18-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 -I lz4 -xf /preloaded.tar -C /extractDir: (1.924860149s) I0420 20:45:08.470528 293956 kic.go:184] duration metric: took 1.924978 seconds to extract preloaded images to volume I0420 20:45:08.470612 293956 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}} \ I0420 20:45:08.488415 293956 machine.go:88] provisioning docker machine ... 
I0420 20:45:08.488437 293956 ubuntu.go:169] provisioning hostname "minikube" I0420 20:45:08.488484 293956 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0420 20:45:08.514330 293956 main.go:126] libmachine: Using SSH client type: native I0420 20:45:08.514453 293956 main.go:126] libmachine: &{{{ 0 [] [] []} docker [0x560e5a03a0e0] 0x560e5a03a0a0 [] 0s} 127.0.0.1 49169 } I0420 20:45:08.514463 293956 main.go:126] libmachine: About to run SSH command: sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname | I0420 20:45:08.631771 293956 main.go:126] libmachine: SSH cmd err, output: : minikube I0420 20:45:08.631822 293956 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0420 20:45:08.649751 293956 main.go:126] libmachine: Using SSH client type: native I0420 20:45:08.649849 293956 main.go:126] libmachine: &{{{ 0 [] [] []} docker [0x560e5a03a0e0] 0x560e5a03a0a0 [] 0s} 127.0.0.1 49169 } I0420 20:45:08.649861 293956 main.go:126] libmachine: About to run SSH command: if ! grep -xq '.*\sminikube' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts; else echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; fi fi / I0420 20:45:08.751211 293956 main.go:126] libmachine: SSH cmd err, output: : I0420 20:45:08.751237 293956 ubuntu.go:175] set auth options {CertDir:/home/etix/.minikube CaCertPath:/home/etix/.minikube/certs/ca.pem CaPrivateKeyPath:/home/etix/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/etix/.minikube/machines/server.pem ServerKeyPath:/home/etix/.minikube/machines/server-key.pem ClientKeyPath:/home/etix/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/etix/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/etix/.minikube} I0420 20:45:08.751274 293956 ubuntu.go:177] setting up certificates I0420 20:45:08.751286 293956 provision.go:83] configureAuth start I0420 20:45:08.751363 293956 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I0420 20:45:08.768970 293956 provision.go:137] copyHostCerts I0420 20:45:08.769012 293956 exec_runner.go:152] cp: /home/etix/.minikube/certs/ca.pem --> /home/etix/.minikube/ca.pem (1070 bytes) I0420 20:45:08.769081 293956 exec_runner.go:152] cp: /home/etix/.minikube/certs/cert.pem --> /home/etix/.minikube/cert.pem (1115 bytes) I0420 20:45:08.769117 293956 exec_runner.go:152] cp: /home/etix/.minikube/certs/key.pem --> /home/etix/.minikube/key.pem (1675 bytes) I0420 20:45:08.769145 293956 provision.go:111] generating server cert: /home/etix/.minikube/machines/server.pem ca-key=/home/etix/.minikube/certs/ca.pem private-key=/home/etix/.minikube/certs/ca-key.pem org=etix.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube] \ I0420 20:45:08.916333 293956 provision.go:165] copyRemoteCerts I0420 20:45:08.916373 293956 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0420 20:45:08.916404 293956 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0420 20:45:08.935167 293956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49169 SSHKeyPath:/home/etix/.minikube/machines/minikube/id_rsa 
Username:docker} | I0420 20:45:09.009891 293956 ssh_runner.go:316] scp /home/etix/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes) I0420 20:45:09.023805 293956 ssh_runner.go:316] scp /home/etix/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes) I0420 20:45:09.035058 293956 ssh_runner.go:316] scp /home/etix/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1070 bytes) I0420 20:45:09.045012 293956 provision.go:86] duration metric: configureAuth took 293.714262ms I0420 20:45:09.045032 293956 ubuntu.go:193] setting minikube options for container-runtime I0420 20:45:09.045233 293956 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0420 20:45:09.064119 293956 main.go:126] libmachine: Using SSH client type: native I0420 20:45:09.064268 293956 main.go:126] libmachine: &{{{ 0 [] [] []} docker [0x560e5a03a0e0] 0x560e5a03a0a0 [] 0s} 127.0.0.1 49169 } I0420 20:45:09.064282 293956 main.go:126] libmachine: About to run SSH command: df --output=fstype / | tail -n 1 / I0420 20:45:09.165412 293956 main.go:126] libmachine: SSH cmd err, output: : overlay I0420 20:45:09.165436 293956 ubuntu.go:71] root file system type: overlay I0420 20:45:09.165609 293956 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ... I0420 20:45:09.165672 293956 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0420 20:45:09.184635 293956 main.go:126] libmachine: Using SSH client type: native I0420 20:45:09.184734 293956 main.go:126] libmachine: &{{{ 0 [] [] []} docker [0x560e5a03a0e0] 0x560e5a03a0a0 [] 0s} 127.0.0.1 49169 } I0420 20:45:09.184799 293956 main.go:126] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %s "[Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP \$MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. 
LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target " | sudo tee /lib/systemd/system/docker.service.new \ I0420 20:45:09.294274 293956 main.go:126] libmachine: SSH cmd err, output: : [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. 
TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target I0420 20:45:09.294406 293956 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0420 20:45:09.313997 293956 main.go:126] libmachine: Using SSH client type: native I0420 20:45:09.314107 293956 main.go:126] libmachine: &{{{ 0 [] [] []} docker [0x560e5a03a0e0] 0x560e5a03a0a0 [] 0s} 127.0.0.1 49169 } I0420 20:45:09.314121 293956 main.go:126] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; } / I0420 20:45:09.896826 293956 main.go:126] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2021-03-02 20:16:15.000000000 +0000 +++ /lib/systemd/system/docker.service.new 2021-04-20 18:45:09.288941953 +0000 @@ -1,30 +1,32 @@ [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com +BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target -Requires=docker.socket containerd.service +Requires=docker.socket +StartLimitBurst=3 +StartLimitIntervalSec=60 [Service] Type=notify -# the default is not to use systemd for cgroups because the delegate issues still -# exists and systemd currently does not support the cgroup feature set required -# for containers run by docker -ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -ExecReload=/bin/kill -s HUP $MAINPID -TimeoutSec=0 -RestartSec=2 -Restart=always - -# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229. -# Both the old, and new location are accepted by systemd 229 and up, so using the old location -# to make them work for either version of systemd. -StartLimitBurst=3 +Restart=on-failure -# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230. -# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make -# this option work for either version of systemd. -StartLimitInterval=60s + + +# This file is a systemd drop-in unit that inherits from the base dockerd configuration. +# The base configuration already specifies an 'ExecStart=...' command. The first directive +# here is to clear out that command inherited from the base configuration. Without this, +# the command from the base configuration and the command specified here are treated as +# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd +# will catch this invalid input and refuse to start the service with an error like: +# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. + +# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other +# container runtimes. If left unlimited, it may result in OOM issues with MySQL. 
+ExecStart= +ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 +ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. @@ -32,16 +34,16 @@ LimitNPROC=infinity LimitCORE=infinity -# Comment TasksMax if your systemd version does not support it. -# Only systemd 226 and above support this option. +# Uncomment TasksMax if your systemd version supports it. +# Only systemd 226 and above support this version. TasksMax=infinity +TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process -OOMScoreAdjust=-500 [Install] WantedBy=multi-user.target Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install. Executing: /lib/systemd/systemd-sysv-install enable docker I0420 20:45:09.896897 293956 machine.go:91] provisioned docker machine in 1.408470076s I0420 20:45:09.896907 293956 client.go:171] LocalClient.Create took 4.520338559s I0420 20:45:09.896919 293956 start.go:168] duration metric: libmachine.API.Create for "minikube" took 4.520369839s I0420 20:45:09.896927 293956 start.go:267] post-start starting for "minikube" (driver="docker") I0420 20:45:09.896933 293956 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0420 20:45:09.896983 293956 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0420 20:45:09.897033 293956 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0420 20:45:09.914703 293956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49169 SSHKeyPath:/home/etix/.minikube/machines/minikube/id_rsa Username:docker} I0420 20:45:09.989977 293956 ssh_runner.go:149] Run: cat /etc/os-release I0420 20:45:09.991763 293956 main.go:126] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found I0420 20:45:09.991783 293956 main.go:126] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found I0420 20:45:09.991798 293956 main.go:126] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found I0420 20:45:09.991807 293956 info.go:137] Remote host: Ubuntu 20.04.1 LTS I0420 20:45:09.991817 293956 filesync.go:118] Scanning /home/etix/.minikube/addons for local assets ... I0420 20:45:09.991867 293956 filesync.go:118] Scanning /home/etix/.minikube/files for local assets ... 
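# --- Aside (not part of the minikube log): the docker.service unit minikube just
# rewrote can be inspected directly inside the node container ("minikube" is the
# container name used throughout this log).
docker exec -it minikube sudo systemctl cat docker.service
docker exec -it minikube sudo systemctl is-active docker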
I0420 20:45:09.991891 293956 start.go:270] post-start completed in 94.957217ms I0420 20:45:09.992171 293956 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube - I0420 20:45:10.009722 293956 profile.go:148] Saving config to /home/etix/.minikube/profiles/minikube/config.json ... I0420 20:45:10.009888 293956 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0420 20:45:10.009919 293956 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0420 20:45:10.027199 293956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49169 SSHKeyPath:/home/etix/.minikube/machines/minikube/id_rsa Username:docker} \ I0420 20:45:10.101805 293956 start.go:129] duration metric: createHost completed in 4.730045376s I0420 20:45:10.101821 293956 start.go:80] releasing machines lock for "minikube", held for 4.730116686s I0420 20:45:10.101906 293956 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I0420 20:45:10.120502 293956 ssh_runner.go:149] Run: systemctl --version I0420 20:45:10.120542 293956 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/ I0420 20:45:10.120564 293956 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0420 20:45:10.120570 293956 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0420 20:45:10.139495 293956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49169 SSHKeyPath:/home/etix/.minikube/machines/minikube/id_rsa Username:docker} I0420 20:45:10.139781 293956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49169 SSHKeyPath:/home/etix/.minikube/machines/minikube/id_rsa Username:docker} | I0420 20:45:10.282988 293956 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd I0420 20:45:10.289828 293956 ssh_runner.go:149] Run: sudo systemctl cat docker.service / I0420 20:45:10.296649 293956 cruntime.go:219] skipping containerd shutdown because we are bound to it I0420 20:45:10.296692 293956 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio I0420 20:45:10.302782 293956 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock image-endpoint: unix:///var/run/dockershim.sock " | sudo tee /etc/crictl.yaml" I0420 20:45:10.311363 293956 ssh_runner.go:149] Run: sudo systemctl cat docker.service I0420 20:45:10.316499 293956 ssh_runner.go:149] Run: sudo systemctl daemon-reload - I0420 20:45:10.403464 293956 ssh_runner.go:149] Run: sudo systemctl start docker I0420 20:45:10.409754 293956 ssh_runner.go:149] Run: docker version --format {{.Server.Version}} \ I0420 20:45:10.525214 293956 out.go:184] ๐Ÿณ Preparing Kubernetes v1.18.18 on Docker 20.10.5 ... 
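# --- Aside (not part of the minikube log): the kubeadm/kubelet config rendered
# below pins cgroupDriver: systemd, and the earlier kernel warnings ("... or the
# cgroup is not mounted") hint at a cgroup setup this minikube/kicbase combination
# dislikes. A quick way to see which cgroup driver and hierarchy are actually in
# use; kubelet v1.18 predates cgroup v2 support, so a unified (cgroup2fs) host is
# one plausible cause of the kubelet failure further down.
docker info --format '{{.CgroupDriver}}'                        # on the host
docker exec minikube docker info --format '{{.CgroupDriver}}'   # inside the node
stat -fc %T /sys/fs/cgroup/                                     # cgroup2fs = v2, tmpfs = v1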
๐Ÿณ Preparing Kubernetes v1.18.18 on Docker 20.10.5 ...I0420 20:45:10.525270 293956 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0420 20:45:10.543366 293956 ssh_runner.go:149] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts I0420 20:45:10.545304 293956 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts" I0420 20:45:10.550312 293956 preload.go:97] Checking if preload exists for k8s version v1.18.18 and runtime docker I0420 20:45:10.550337 293956 preload.go:105] Found local preload: /home/etix/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.18.18-docker-overlay2-amd64.tar.lz4 I0420 20:45:10.550392 293956 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}} I0420 20:45:10.573821 293956 docker.go:455] Got preloaded images: -- stdout -- k8s.gcr.io/kube-proxy:v1.18.18 k8s.gcr.io/kube-scheduler:v1.18.18 k8s.gcr.io/kube-apiserver:v1.18.18 k8s.gcr.io/kube-controller-manager:v1.18.18 gcr.io/k8s-minikube/storage-provisioner:v5 kubernetesui/dashboard:v2.1.0 kubernetesui/metrics-scraper:v1.0.4 k8s.gcr.io/pause:3.2 k8s.gcr.io/coredns:1.6.7 k8s.gcr.io/etcd:3.4.3-0 -- /stdout -- I0420 20:45:10.573846 293956 docker.go:392] Images already preloaded, skipping extraction I0420 20:45:10.573893 293956 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}} I0420 20:45:10.594433 293956 docker.go:455] Got preloaded images: -- stdout -- k8s.gcr.io/kube-proxy:v1.18.18 k8s.gcr.io/kube-apiserver:v1.18.18 k8s.gcr.io/kube-controller-manager:v1.18.18 k8s.gcr.io/kube-scheduler:v1.18.18 gcr.io/k8s-minikube/storage-provisioner:v5 kubernetesui/dashboard:v2.1.0 kubernetesui/metrics-scraper:v1.0.4 k8s.gcr.io/pause:3.2 k8s.gcr.io/coredns:1.6.7 k8s.gcr.io/etcd:3.4.3-0 -- /stdout -- I0420 20:45:10.594460 293956 cache_images.go:74] Images are preloaded, skipping loading I0420 20:45:10.594513 293956 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}} / I0420 20:45:10.782099 293956 cni.go:81] Creating CNI manager for "" I0420 20:45:10.782119 293956 cni.go:153] CNI unnecessary in this configuration, recommending no CNI I0420 20:45:10.782128 293956 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16 I0420 20:45:10.782156 293956 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.18 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] 
FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]} I0420 20:45:10.782279 293956 kubeadm.go:157] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta2 kind: InitConfiguration localAPIEndpoint: advertiseAddress: 192.168.49.2 bindPort: 8443 bootstrapTokens: - groups: - system:bootstrappers:kubeadm:default-node-token ttl: 24h0m0s usages: - signing - authentication nodeRegistration: criSocket: /var/run/dockershim.sock name: "minikube" kubeletExtraArgs: node-ip: 192.168.49.2 taints: [] --- apiVersion: kubeadm.k8s.io/v1beta2 kind: ClusterConfiguration apiServer: certSANs: ["127.0.0.1", "localhost", "192.168.49.2"] extraArgs: enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota" controllerManager: extraArgs: allocate-node-cidrs: "true" leader-elect: "false" scheduler: extraArgs: leader-elect: "false" certificatesDir: /var/lib/minikube/certs clusterName: mk controlPlaneEndpoint: control-plane.minikube.internal:8443 dns: type: CoreDNS etcd: local: dataDir: /var/lib/minikube/etcd extraArgs: proxy-refresh-interval: "70000" kubernetesVersion: v1.18.18 networking: dnsDomain: cluster.local podSubnet: "10.244.0.0/16" serviceSubnet: 10.96.0.0/12 --- apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration authentication: x509: clientCAFile: /var/lib/minikube/certs/ca.crt cgroupDriver: systemd clusterDomain: "cluster.local" # disable disk resource management by default imageGCHighThresholdPercent: 100 evictionHard: nodefs.available: "0%" nodefs.inodesFree: "0%" imagefs.available: "0%" failSwapOn: false staticPodPath: /etc/kubernetes/manifests --- apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration clusterCIDR: "10.244.0.0/16" metricsBindAddress: 0.0.0.0:10249 I0420 20:45:10.782397 293956 kubeadm.go:897] kubelet [Unit] Wants=docker.socket [Service] ExecStart= ExecStart=/var/lib/minikube/binaries/v1.18.18/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2 [Install] config: {KubernetesVersion:v1.18.18 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} I0420 20:45:10.782472 293956 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.18.18 I0420 20:45:10.786927 293956 binaries.go:44] Found k8s binaries, skipping transfer I0420 20:45:10.786984 293956 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube I0420 20:45:10.790752 293956 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (335 bytes) I0420 20:45:10.796781 293956 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (353 bytes) - I0420 20:45:10.804310 293956 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1840 bytes) I0420 20:45:10.810540 293956 
ssh_runner.go:149] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts I0420 20:45:10.811961 293956 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts" I0420 20:45:10.818223 293956 certs.go:52] Setting up /home/etix/.minikube/profiles/minikube for IP: 192.168.49.2 I0420 20:45:10.818259 293956 certs.go:175] generating minikubeCA CA: /home/etix/.minikube/ca.key \ I0420 20:45:10.936756 293956 crypto.go:157] Writing cert to /home/etix/.minikube/ca.crt ... I0420 20:45:10.936772 293956 lock.go:36] WriteFile acquiring /home/etix/.minikube/ca.crt: {Name:mk19f01d9d1b84bc5a2f8b23a06a321884b9b06e Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0420 20:45:10.936870 293956 crypto.go:165] Writing key to /home/etix/.minikube/ca.key ... I0420 20:45:10.936877 293956 lock.go:36] WriteFile acquiring /home/etix/.minikube/ca.key: {Name:mkc9cfa531f9053fc15cfe8c89396a60891fa9ab Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0420 20:45:10.936936 293956 certs.go:175] generating proxyClientCA CA: /home/etix/.minikube/proxy-client-ca.key | I0420 20:45:11.005424 293956 crypto.go:157] Writing cert to /home/etix/.minikube/proxy-client-ca.crt ... I0420 20:45:11.005436 293956 lock.go:36] WriteFile acquiring /home/etix/.minikube/proxy-client-ca.crt: {Name:mk6899c8a9907c174bd30aeb1e0b37a180940ae1 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0420 20:45:11.005538 293956 crypto.go:165] Writing key to /home/etix/.minikube/proxy-client-ca.key ... I0420 20:45:11.005545 293956 lock.go:36] WriteFile acquiring /home/etix/.minikube/proxy-client-ca.key: {Name:mk875f49a11418af47f681beeeeff30522c2bc65 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0420 20:45:11.005609 293956 certs.go:286] generating minikube-user signed cert: /home/etix/.minikube/profiles/minikube/client.key I0420 20:45:11.005615 293956 crypto.go:69] Generating cert /home/etix/.minikube/profiles/minikube/client.crt with IP's: [] / I0420 20:45:11.192407 293956 crypto.go:157] Writing cert to /home/etix/.minikube/profiles/minikube/client.crt ... I0420 20:45:11.192429 293956 lock.go:36] WriteFile acquiring /home/etix/.minikube/profiles/minikube/client.crt: {Name:mk8a56ac49c4d3f826ce794b4c6b85f719e8e7ea Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0420 20:45:11.192549 293956 crypto.go:165] Writing key to /home/etix/.minikube/profiles/minikube/client.key ... I0420 20:45:11.192558 293956 lock.go:36] WriteFile acquiring /home/etix/.minikube/profiles/minikube/client.key: {Name:mk7ccaf0a0cb10dc93e651045dd211b1aa1ba34c Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0420 20:45:11.192626 293956 certs.go:286] generating minikube signed cert: /home/etix/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 I0420 20:45:11.192633 293956 crypto.go:69] Generating cert /home/etix/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1] - I0420 20:45:11.292810 293956 crypto.go:157] Writing cert to /home/etix/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ... I0420 20:45:11.292824 293956 lock.go:36] WriteFile acquiring /home/etix/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mkc95bc8f6617a34a07a520427da86d670b4d33d Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0420 20:45:11.292899 293956 crypto.go:165] Writing key to /home/etix/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ... 
I0420 20:45:11.292906 293956 lock.go:36] WriteFile acquiring /home/etix/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mk68aff727cc4fc502f10058a5cfb579bd7680be Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0420 20:45:11.292943 293956 certs.go:297] copying /home/etix/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /home/etix/.minikube/profiles/minikube/apiserver.crt I0420 20:45:11.292975 293956 certs.go:301] copying /home/etix/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /home/etix/.minikube/profiles/minikube/apiserver.key I0420 20:45:11.293003 293956 certs.go:286] generating aggregator signed cert: /home/etix/.minikube/profiles/minikube/proxy-client.key I0420 20:45:11.293008 293956 crypto.go:69] Generating cert /home/etix/.minikube/profiles/minikube/proxy-client.crt with IP's: [] \ I0420 20:45:11.374610 293956 crypto.go:157] Writing cert to /home/etix/.minikube/profiles/minikube/proxy-client.crt ... I0420 20:45:11.374624 293956 lock.go:36] WriteFile acquiring /home/etix/.minikube/profiles/minikube/proxy-client.crt: {Name:mk110a581e33a62ef288b8a64798c00f43938f23 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0420 20:45:11.374694 293956 crypto.go:165] Writing key to /home/etix/.minikube/profiles/minikube/proxy-client.key ... I0420 20:45:11.374700 293956 lock.go:36] WriteFile acquiring /home/etix/.minikube/profiles/minikube/proxy-client.key: {Name:mkc1a3139dbcbe25b02a664ae0922782da742815 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0420 20:45:11.374786 293956 certs.go:361] found cert: /home/etix/.minikube/certs/home/etix/.minikube/certs/ca-key.pem (1675 bytes) I0420 20:45:11.374808 293956 certs.go:361] found cert: /home/etix/.minikube/certs/home/etix/.minikube/certs/ca.pem (1070 bytes) I0420 20:45:11.374823 293956 certs.go:361] found cert: /home/etix/.minikube/certs/home/etix/.minikube/certs/cert.pem (1115 bytes) I0420 20:45:11.374836 293956 certs.go:361] found cert: /home/etix/.minikube/certs/home/etix/.minikube/certs/key.pem (1675 bytes) I0420 20:45:11.375500 293956 ssh_runner.go:316] scp /home/etix/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I0420 20:45:11.386157 293956 ssh_runner.go:316] scp /home/etix/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes) I0420 20:45:11.398189 293956 ssh_runner.go:316] scp /home/etix/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes) | I0420 20:45:11.407849 293956 ssh_runner.go:316] scp /home/etix/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes) I0420 20:45:11.417353 293956 ssh_runner.go:316] scp /home/etix/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0420 20:45:11.427688 293956 ssh_runner.go:316] scp /home/etix/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes) I0420 20:45:11.436591 293956 ssh_runner.go:316] scp /home/etix/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0420 20:45:11.447608 293956 ssh_runner.go:316] scp /home/etix/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes) I0420 20:45:11.457356 293956 ssh_runner.go:316] scp /home/etix/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0420 20:45:11.467014 293956 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (740 bytes) I0420 20:45:11.473432 293956 ssh_runner.go:149] Run: openssl version I0420 20:45:11.477566 293956 
ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0420 20:45:11.482107 293956 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0420 20:45:11.484048 293956 certs.go:402] hashing: -rw-r--r-- 1 root root 1111 Apr 20 18:45 /usr/share/ca-certificates/minikubeCA.pem I0420 20:45:11.484088 293956 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0420 20:45:11.486395 293956 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0420 20:45:11.490161 293956 kubeadm.go:386] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 Memory:12288 CPUs:6 DiskSize:40000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.18 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.18 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} I0420 20:45:11.490289 293956 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} / I0420 20:45:11.509928 293956 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0420 20:45:11.513496 293956 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0420 20:45:11.517353 293956 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver I0420 20:45:11.517379 293956 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0420 20:45:11.520559 293956 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file 
or directory I0420 20:45:11.520587 293956 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.18:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables" - I0420 20:45:12.043892 293956 out.go:184] โ–ช Generating certificates and keys ... โ–ช Generating certificates and keys ...\ I0420 20:45:13.794017 293956 out.go:184] โ–ช Booting up control plane ... โ–ช Booting up control plane ...\ W0420 20:47:08.806947 293956 out.go:222] ๐Ÿ’ข initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.18:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.18.18 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Generating "apiserver-kubelet-client" certificate and key [certs] Generating "front-proxy-ca" certificate and key [certs] Generating "front-proxy-client" certificate and key [certs] Generating "etcd/ca" certificate and key [certs] Generating "etcd/server" certificate and key [certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [192.168.49.2 127.0.0.1 ::1] [certs] Generating "etcd/peer" certificate and key [certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [192.168.49.2 127.0.0.1 ::1] [certs] Generating "etcd/healthcheck-client" certificate and key [certs] Generating "apiserver-etcd-client" certificate and key [certs] Generating "sa" key and public key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for 
"kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all Kubernetes containers running in docker: - 'docker ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'docker logs CONTAINERID' stderr: W0420 18:45:11.546016 807 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.5. 
Latest validated version: 19.03 [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' W0420 18:45:13.797677 807 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC" W0420 18:45:13.798890 807 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC" error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher ๐Ÿ’ข initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.18:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.18.18 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Generating "apiserver-kubelet-client" certificate and key [certs] Generating "front-proxy-ca" certificate and key [certs] Generating "front-proxy-client" certificate and key [certs] Generating "etcd/ca" certificate and key [certs] Generating "etcd/server" certificate and key [certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [192.168.49.2 127.0.0.1 ::1] [certs] Generating "etcd/peer" certificate and key [certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [192.168.49.2 127.0.0.1 ::1] [certs] Generating "etcd/healthcheck-client" certificate and key [certs] Generating "apiserver-etcd-client" certificate and key [certs] Generating "sa" key and public key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. 
[kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all Kubernetes containers running in docker: - 'docker ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'docker logs CONTAINERID' stderr: W0420 18:45:11.546016 807 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.5. 
Latest validated version: 19.03 [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' W0420 18:45:13.797677 807 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC" W0420 18:45:13.798890 807 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC" error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher I0420 20:47:08.807079 293956 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.18:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force" I0420 20:47:09.198774 293956 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet I0420 20:47:09.204793 293956 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0420 20:47:09.224420 293956 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver I0420 20:47:09.224460 293956 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0420 20:47:09.228275 293956 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0420 20:47:09.228308 293956 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.18:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables" I0420 20:49:05.356202 293956 out.go:184] โ–ช Generating certificates and keys ... โ–ช Generating certificates and keys ...| I0420 20:49:05.357996 293956 out.go:184] โ–ช Booting up control plane ... 
โ–ช Booting up control plane ...I0420 20:49:05.360731 293956 kubeadm.go:388] StartCluster complete in 3m53.870573367s I0420 20:49:05.360812 293956 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}} I0420 20:49:05.381328 293956 logs.go:256] 0 containers: [] W0420 20:49:05.381344 293956 logs.go:258] No container was found matching "kube-apiserver" I0420 20:49:05.381384 293956 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}} I0420 20:49:05.404792 293956 logs.go:256] 0 containers: [] W0420 20:49:05.404807 293956 logs.go:258] No container was found matching "etcd" I0420 20:49:05.404845 293956 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}} I0420 20:49:05.425559 293956 logs.go:256] 0 containers: [] W0420 20:49:05.425573 293956 logs.go:258] No container was found matching "coredns" I0420 20:49:05.425609 293956 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}} I0420 20:49:05.445863 293956 logs.go:256] 0 containers: [] W0420 20:49:05.445877 293956 logs.go:258] No container was found matching "kube-scheduler" I0420 20:49:05.445914 293956 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}} / I0420 20:49:05.473418 293956 logs.go:256] 0 containers: [] W0420 20:49:05.473434 293956 logs.go:258] No container was found matching "kube-proxy" I0420 20:49:05.473471 293956 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}} I0420 20:49:05.493740 293956 logs.go:256] 0 containers: [] W0420 20:49:05.493755 293956 logs.go:258] No container was found matching "kubernetes-dashboard" I0420 20:49:05.493799 293956 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}} I0420 20:49:05.513743 293956 logs.go:256] 0 containers: [] W0420 20:49:05.513758 293956 logs.go:258] No container was found matching "storage-provisioner" I0420 20:49:05.513797 293956 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}} I0420 20:49:05.533348 293956 logs.go:256] 0 containers: [] W0420 20:49:05.533363 293956 logs.go:258] No container was found matching "kube-controller-manager" I0420 20:49:05.533372 293956 logs.go:122] Gathering logs for kubelet ... I0420 20:49:05.533382 293956 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" - I0420 20:49:05.560194 293956 logs.go:122] Gathering logs for dmesg ... I0420 20:49:05.560207 293956 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0420 20:49:05.577005 293956 logs.go:122] Gathering logs for describe nodes ... I0420 20:49:05.577039 293956 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.18/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" W0420 20:49:05.612311 293956 logs.go:129] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.18/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.18/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1 stdout: stderr: The connection to the server localhost:8443 was refused - did you specify the right host or port? output: ** stderr ** The connection to the server localhost:8443 was refused - did you specify the right host or port? ** /stderr ** I0420 20:49:05.612334 293956 logs.go:122] Gathering logs for Docker ... 
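# --- Aside (not part of the minikube log): the checks kubeadm recommends above
# can be run against the node directly, e.g. via `minikube ssh`:
minikube ssh -- sudo systemctl status kubelet --no-pager
minikube ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 100
minikube ssh -- "docker ps -a | grep kube | grep -v pause"
# minikube ssh -- docker logs <CONTAINERID>   # for whichever container failed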
I0420 20:49:05.612345 293956 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u docker -n 400" I0420 20:49:05.621179 293956 logs.go:122] Gathering logs for container status ... I0420 20:49:05.621197 293956 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" \ I0420 20:49:07.686734 293956 ssh_runner.go:189] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.0655249s) W0420 20:49:07.686871 293956 out.go:351] Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.18:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.18.18 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Using existing apiserver-kubelet-client certificate and key on disk [certs] Using existing front-proxy-ca certificate authority [certs] Using existing front-proxy-client certificate and key on disk [certs] Using existing etcd/ca certificate authority [certs] Using existing etcd/server certificate and key on disk [certs] Using existing etcd/peer certificate and key on disk [certs] Using existing etcd/healthcheck-client certificate and key on disk [certs] Using existing apiserver-etcd-client certificate and key on disk [certs] Using the existing "sa" key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. [kubelet-check] It seems like the kubelet isn't running or healthy. 
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all Kubernetes containers running in docker: - 'docker ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'docker logs CONTAINERID' stderr: W0420 18:47:09.253331 4855 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.5. 
Latest validated version: 19.03 [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' W0420 18:47:10.342242 4855 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC" W0420 18:47:10.343755 4855 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC" error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher W0420 20:49:07.686977 293956 out.go:222] W0420 20:49:07.687083 293956 out.go:222] ๐Ÿ’ฃ Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.18:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.18.18 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Using existing apiserver-kubelet-client certificate and key on disk [certs] Using existing front-proxy-ca certificate authority [certs] Using existing front-proxy-client certificate and key on disk [certs] Using existing etcd/ca certificate authority [certs] Using existing etcd/server certificate and key on disk [certs] Using existing etcd/peer certificate and key on disk [certs] Using existing etcd/healthcheck-client certificate and key on disk [certs] Using existing apiserver-etcd-client certificate and key on disk [certs] Using the existing "sa" key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. [kubelet-check] It seems like the kubelet isn't running or healthy. 
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all Kubernetes containers running in docker: - 'docker ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'docker logs CONTAINERID' stderr: W0420 18:47:09.253331 4855 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.5. 
Latest validated version: 19.03 [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' W0420 18:47:10.342242 4855 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC" W0420 18:47:10.343755 4855 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC" error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher ๐Ÿ’ฃ Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.18:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.18.18 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Using existing apiserver-kubelet-client certificate and key on disk [certs] Using existing front-proxy-ca certificate authority [certs] Using existing front-proxy-client certificate and key on disk [certs] Using existing etcd/ca certificate authority [certs] Using existing etcd/server certificate and key on disk [certs] Using existing etcd/peer certificate and key on disk [certs] Using existing etcd/healthcheck-client certificate and key on disk [certs] Using existing apiserver-etcd-client certificate and key on disk [certs] Using the existing "sa" key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. [kubelet-check] It seems like the kubelet isn't running or healthy. 
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all Kubernetes containers running in docker: - 'docker ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'docker logs CONTAINERID' stderr: W0420 18:47:09.253331 4855 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.5. Latest validated version: 19.03 [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' W0420 18:47:10.342242 4855 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC" W0420 18:47:10.343755 4855 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC" error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher W0420 20:49:07.687247 293956 out.go:222] W0420 20:49:07.687269 293956 out.go:222] ๐Ÿ˜ฟ minikube is exiting due to an error. If the above message is not useful, open an issue: ๐Ÿ˜ฟ minikube is exiting due to an error. 
If the above message is not useful, open an issue: W0420 20:49:07.687290 293956 out.go:222] ๐Ÿ‘‰ https://github.com/kubernetes/minikube/issues/new/choose ๐Ÿ‘‰ https://github.com/kubernetes/minikube/issues/new/choose I0420 20:49:07.689791 293956 out.go:157] W0420 20:49:07.689905 293956 out.go:222] โŒ Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.18:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.18.18 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Using existing apiserver-kubelet-client certificate and key on disk [certs] Using existing front-proxy-ca certificate authority [certs] Using existing front-proxy-client certificate and key on disk [certs] Using existing etcd/ca certificate authority [certs] Using existing etcd/server certificate and key on disk [certs] Using existing etcd/peer certificate and key on disk [certs] Using existing etcd/healthcheck-client certificate and key on disk [certs] Using existing apiserver-etcd-client certificate and key on disk [certs] Using the existing "sa" key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. 
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all Kubernetes containers running in docker: - 'docker ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'docker logs CONTAINERID' stderr: W0420 18:47:09.253331 4855 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.5. 
Latest validated version: 19.03 [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' W0420 18:47:10.342242 4855 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC" W0420 18:47:10.343755 4855 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC" error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher โŒ Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.18:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.18.18 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Using existing apiserver-kubelet-client certificate and key on disk [certs] Using existing front-proxy-ca certificate authority [certs] Using existing front-proxy-client certificate and key on disk [certs] Using existing etcd/ca certificate authority [certs] Using existing etcd/server certificate and key on disk [certs] Using existing etcd/peer certificate and key on disk [certs] Using existing etcd/healthcheck-client certificate and key on disk [certs] Using existing apiserver-etcd-client certificate and key on disk [certs] Using the existing "sa" key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. [kubelet-check] It seems like the kubelet isn't running or healthy. 
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all Kubernetes containers running in docker: - 'docker ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'docker logs CONTAINERID' stderr: W0420 18:47:09.253331 4855 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.5. Latest validated version: 19.03 [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' W0420 18:47:10.342242 4855 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC" W0420 18:47:10.343755 4855 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC" error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher W0420 20:49:07.690123 293956 out.go:222] ๐Ÿ’ก Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start ๐Ÿ’ก Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start W0420 20:49:07.690162 293956 out.go:222] ๐Ÿฟ Related issue: https://github.com/kubernetes/minikube/issues/4172 ๐Ÿฟ Related issue: https://github.com/kubernetes/minikube/issues/4172 I0420 20:49:07.691475 293956 out.go:157] ```
etix commented 3 years ago

After further investigation into why the control plane is "not starting", I see this error looping:

The connection to the server localhost:8443 was refused - did you specify the right host or port?

If I'm guessing correctly, docker-proxy should be the one listening, and the process is indeed running:

root 341573 0.0 0.0 1148956 4084 ? Sl 21:04 0:00 /usr/bin/docker-proxy -proto tcp -host-ip 127.0.0.1 -host-port 49171 -container-ip 192.168.49.2 -container-port 8443

BUT ss doesn't list any application listening on that port (I could well be wrong here, I was more used to netstat back in the day), and telnet confirms that the connection is indeed refused.
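For the record, these are roughly the checks I'm running (a sketch: the host port 49171 is taken from the ps output above, the container name minikube is the docker driver's default, and curl being available inside the container is an assumption):

```
# does anything actually listen on the forwarded host port?
sudo ss -ltnp | grep 49171

# probe the port from the host; connection refused means nothing accepted it
telnet 127.0.0.1 49171

# probe the apiserver directly inside the minikube container (if curl is present there)
docker exec minikube curl -sk https://localhost:8443/healthz
```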

prezha commented 3 years ago

great progress, glad it helped!

now, it could be that the routes/firewall/... are left in a somewhat confused, mixed state after just changing the iface address with ip del/add

if you'd rather not play with it manually, you could try setting the iface ip with a /24 netmask using higher-level tools (not sure what Arch is using) that will make all the necessary changes for you, then maybe even reboot and try again — something like the sketch below
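for example (just a sketch — the interface name eno1, the address 192.168.0.10, the gateway and the connection name are all placeholders, and nmcli only applies if NetworkManager manages the interface):

```
# iproute2: swap the /16 address for a /24 one
sudo ip addr del 192.168.0.10/16 dev eno1
sudo ip addr add 192.168.0.10/24 dev eno1

# or, if NetworkManager is in use, let it also fix up routes for you
nmcli connection modify "Wired connection 1" ipv4.method manual \
    ipv4.addresses 192.168.0.10/24 ipv4.gateway 192.168.0.1
nmcli connection up "Wired connection 1"
```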

prezha commented 3 years ago

if that doesn't help, please close this one and open another issue, again with complete logs (i.e., with --alsologtostderr -v=9), so that we leave this one with a solution to the original problem for future reference

etix commented 3 years ago

Splitting my local network from one large 192.168.0.0/16 down to a smaller 192.168.0.0/24 prevents the "no free private network subnets found" problem that appears in minikube 1.19.
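For anyone else who runs into this, here is roughly how I checked which subnets were already taken (a sketch; the exact output will differ per machine). A 192.168.0.0/16 route presumably overlaps every 192.168.x.0/24 candidate minikube tries, which is why none of them counted as free:

```
# subnets already claimed by existing Docker networks
docker network ls -q | xargs docker network inspect \
    --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'

# subnets already reachable via local interfaces/routes
ip route show
```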

I'm closing this issue even though I'm still unable to start the control plane successfully. I'll reference this issue from a new one if the two problems turn out to be related.