kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

docker: K8S_KUBELET_NOT_RUNNING on v1.15.1: timed out waiting for the condition #9826

Closed · bri-pug closed this issue 3 years ago

bri-pug commented 3 years ago

I was working on a Windows 10 PC running Ubuntu 20.04.1 in VirtualBox (version 6.1.14) and was using minikube in the VM with no issues. After having [completely unrelated] computer problems, I got a new Windows 10 PC running Ubuntu 20.04.1 in VirtualBox (version 6.1.16), but I cannot get minikube to start on the new machine. I even saved my VM and imported it into VirtualBox on the new computer, and had the exact same issues. Previously, I was running minikube v1.12.3, and I tried that version on the new computer as well as various newer ones. With older versions of minikube (v1.13.x and older), minikube will start but is incredibly slow and drags the whole VM down to the point of being unusable. With newer versions (v1.14.x and newer), I cannot get minikube to start at all.
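For anyone retracing the version testing above, here is a minimal sketch of pinning a specific minikube release, assuming the standard GitHub release-asset URL pattern; v1.12.3 is simply the version that previously worked:

```
# Sketch: install a specific minikube release for side-by-side testing.
# Assumes the standard release-asset URL pattern; swap in any version tag.
curl -LO https://github.com/kubernetes/minikube/releases/download/v1.12.3/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
minikube version   # should now report v1.12.3
```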

Steps to reproduce the issue: Since this is only happening on one computer but not the other, I'm not sure how to reproduce it for another person.

When running just `minikube start`, I get various errors; one of them suggested adding `--extra-config=kubelet.cgroup-driver=systemd`, so I tried that.
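For reference, the first command below is the exact invocation whose output follows; the diagnostic lines after it are only a suggested sketch (not from the original report) for checking whether the `systemd` value handed to kubelet matches the cgroup driver Docker reports. In the log further down, `docker info` returns `CgroupDriver:cgroupfs` while kubelet is launched with `--cgroup-driver=systemd`, which is the kind of mismatch these checks surface:

```
# The start command whose full output is pasted below
minikube start -v=5 --alsologtostderr --extra-config=kubelet.cgroup-driver=systemd

# One way to compare cgroup drivers: what Docker reports on the host, then
# inside the "minikube" container created by the docker driver, versus the
# "systemd" value the extra-config flag passes to kubelet
docker info --format '{{.CgroupDriver}}'
docker exec minikube docker info --format '{{.CgroupDriver}}'

# Kubelet's own logs inside the container, for the health-check timeout below
docker exec minikube journalctl -u kubelet --no-pager | tail -n 50
```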

**Full output of `minikube start -v=5 --alsologtostderr --extra-config=kubelet.cgroup-driver=systemd`:**

```
I1202 12:54:56.235158 33981 out.go:185] Setting OutFile to fd 1 ... I1202 12:54:56.235327 33981 out.go:237] isatty.IsTerminal(1) = true I1202 12:54:56.235333 33981 out.go:198] Setting ErrFile to fd 2... I1202 12:54:56.235336 33981 out.go:237] isatty.IsTerminal(2) = true I1202 12:54:56.238023 33981 root.go:279] Updating PATH: /home/user/.minikube/bin I1202 12:54:56.238197 33981 oci.go:526] shell is pointing to dockerd inside minikube. will unset to use host I1202 12:54:56.238454 33981 out.go:192] Setting JSON to false I1202 12:54:56.240120 33981 start.go:103] hostinfo: {"hostname":"ubuntu200","uptime":11893,"bootTime":1606919803,"procs":307,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.4.0-54-generic","virtualizationSystem":"vbox","virtualizationRole":"guest","hostid":"24fe8349-5ca1-4ce9-8edd-8dc762af16d0"} I1202 12:54:56.241189 33981 start.go:113] virtualization: vbox guest I1202 12:54:56.242173 33981 out.go:110] πŸ˜„ minikube v1.15.1 on Ubuntu 20.04 (vbox/amd64) πŸ˜„ minikube v1.15.1 on Ubuntu 20.04 (vbox/amd64) I1202 12:54:56.245948 33981 out.go:110] β–ͺ MINIKUBE_ACTIVE_DOCKERD=minikube β–ͺ MINIKUBE_ACTIVE_DOCKERD=minikube I1202 12:54:56.248595 33981 notify.go:126] Checking for updates... I1202 12:54:56.255706 33981 driver.go:302] Setting default libvirt URI to qemu:///system I1202 12:54:56.255803 33981 global.go:102] Querying for installed drivers using PATH=/home/user/.minikube/bin:/home/user/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/user/.local/bin:/home/user/Share/Scripts:/home/user/Software/anaconda3/bin I1202 12:54:56.477435 33981 docker.go:117] docker version: linux-19.03.13 I1202 12:54:56.480696 33981 cli_runner.go:110] Run: docker system info --format "{{json .}}" I1202 12:54:56.934536 33981 info.go:253] docker info: {ID:GLTS:M5YB:MRS6:IHBQ:VNRR:2AGG:K2YU:XUCV:GKWF:GX6G:U26Z:NQOD Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:82 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:35 SystemTime:2020-12-02 12:54:56.700932789 -0500 EST LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.4.0-54-generic OperatingSystem:Ubuntu 20.04.1 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:24456843264 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu200 Labels:[] ExperimentalBuild:false ServerVersion:19.03.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init 
ContainerdCommit:{ID:8fba4e9a7d01810a393d5d25a3621dc101981175 Expected:8fba4e9a7d01810a393d5d25a3621dc101981175} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}} I1202 12:54:56.934665 33981 docker.go:147] overlay module found I1202 12:54:56.934708 33981 global.go:110] docker priority: 8, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Fix: Doc:} I1202 12:54:56.934879 33981 global.go:110] kvm2 priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "virsh": executable file not found in $PATH Fix:Install libvirt Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/} I1202 12:54:56.934962 33981 global.go:110] none priority: 3, state: {Installed:true Healthy:false Running:false NeedsImprovement:false Error:the 'none' driver must be run as the root user Fix:For non-root usage, try the newer 'docker' driver Doc:} I1202 12:54:56.934989 33981 global.go:110] podman priority: 2, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "podman": executable file not found in $PATH Fix:Install Podman Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/} I1202 12:54:56.935103 33981 global.go:110] virtualbox priority: 5, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:unable to find VBoxManage in $PATH Fix:Install VirtualBox Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/virtualbox/} I1202 12:54:56.935131 33981 global.go:110] vmware priority: 6, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "docker-machine-driver-vmware": executable file not found in $PATH Fix:Install docker-machine-driver-vmware Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/} I1202 12:54:56.935188 33981 driver.go:248] "docker" has a higher priority (8) than "" (0) I1202 12:54:56.935196 33981 driver.go:239] not recommending "none" due to health: the 'none' driver must be run as the root user I1202 12:54:56.935204 33981 driver.go:273] Picked: docker I1202 12:54:56.935208 33981 driver.go:274] Alternatives: [] I1202 12:54:56.935211 33981 driver.go:275] Rejects: [kvm2 none podman virtualbox vmware] I1202 12:54:56.935326 33981 out.go:110] ✨ Automatically selected the docker driver ✨ Automatically selected the docker driver I1202 12:54:56.935381 33981 start.go:272] selected driver: docker I1202 12:54:56.935385 33981 start.go:686] validating driver "docker" against I1202 12:54:56.935397 33981 start.go:697] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Fix: Doc:} I1202 12:54:56.935436 33981 cli_runner.go:110] Run: docker system info --format "{{json .}}" I1202 12:54:57.259806 33981 info.go:253] docker info: {ID:GLTS:M5YB:MRS6:IHBQ:VNRR:2AGG:K2YU:XUCV:GKWF:GX6G:U26Z:NQOD Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:82 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true 
CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:35 SystemTime:2020-12-02 12:54:57.138370029 -0500 EST LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.4.0-54-generic OperatingSystem:Ubuntu 20.04.1 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:24456843264 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu200 Labels:[] ExperimentalBuild:false ServerVersion:19.03.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fba4e9a7d01810a393d5d25a3621dc101981175 Expected:8fba4e9a7d01810a393d5d25a3621dc101981175} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}} I1202 12:54:57.259992 33981 start_flags.go:233] no existing cluster config was found, will generate one from the flags I1202 12:54:57.260489 33981 start_flags.go:251] Using suggested 5800MB memory alloc based on sys=23323MB, container=23323MB I1202 12:54:57.260662 33981 start_flags.go:641] Wait components to verify : map[apiserver:true system_pods:true] I1202 12:54:57.260777 33981 cni.go:74] Creating CNI manager for "" I1202 12:54:57.260832 33981 cni.go:117] CNI unnecessary in this configuration, recommending no CNI I1202 12:54:57.260839 33981 start_flags.go:364] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e Memory:5800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.4 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubelet Key:cgroup-driver Value:systemd}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[]} I1202 12:54:57.261061 33981 out.go:110] πŸ‘ Starting control plane node minikube in cluster minikube πŸ‘ Starting control 
plane node minikube in cluster minikube I1202 12:54:57.464205 33981 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e in local docker daemon, skipping pull I1202 12:54:57.464282 33981 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e exists in daemon, skipping pull I1202 12:54:57.464301 33981 preload.go:97] Checking if preload exists for k8s version v1.19.4 and runtime docker I1202 12:54:57.464333 33981 preload.go:105] Found local preload: /home/user/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.4-docker-overlay2-amd64.tar.lz4 I1202 12:54:57.464380 33981 cache.go:54] Caching tarball of preloaded images I1202 12:54:57.464396 33981 preload.go:131] Found /home/user/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download I1202 12:54:57.464400 33981 cache.go:57] Finished verifying existence of preloaded tar for v1.19.4 on docker I1202 12:54:57.464603 33981 profile.go:150] Saving config to /home/user/.minikube/profiles/minikube/config.json ... I1202 12:54:57.464705 33981 lock.go:36] WriteFile acquiring /home/user/.minikube/profiles/minikube/config.json: {Name:mkbf001bd079b04059d4c5f7f8058a2bed13ed21 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I1202 12:54:57.464989 33981 cache.go:184] Successfully downloaded all kic artifacts I1202 12:54:57.465088 33981 start.go:314] acquiring machines lock for minikube: {Name:mk6738e1369e8dfc0288a81ee57fcb38cc2f557c Clock:{} Delay:500ms Timeout:10m0s Cancel:} I1202 12:54:57.465227 33981 start.go:318] acquired machines lock for "minikube" in 81.93Β΅s I1202 12:54:57.465288 33981 start.go:90] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e Memory:5800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.4 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubelet Key:cgroup-driver Value:systemd}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.19.4 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[]} &{Name: IP: Port:8443 KubernetesVersion:v1.19.4 ControlPlane:true Worker:true} I1202 12:54:57.465499 33981 start.go:127] createHost starting for "" (driver="docker") I1202 12:54:57.465627 33981 out.go:110] πŸ”₯ Creating docker container (CPUs=2, Memory=5800MB) ... πŸ”₯ Creating docker container (CPUs=2, Memory=5800MB) ... 
I1202 12:54:57.465778 33981 start.go:164] libmachine.API.Create for "minikube" (driver="docker") I1202 12:54:57.465792 33981 client.go:165] LocalClient.Create starting I1202 12:54:57.465817 33981 main.go:119] libmachine: Reading certificate data from /home/user/.minikube/certs/ca.pem I1202 12:54:57.465837 33981 main.go:119] libmachine: Decoding PEM data... I1202 12:54:57.465849 33981 main.go:119] libmachine: Parsing certificate... I1202 12:54:57.465924 33981 main.go:119] libmachine: Reading certificate data from /home/user/.minikube/certs/cert.pem I1202 12:54:57.465996 33981 main.go:119] libmachine: Decoding PEM data... I1202 12:54:57.466006 33981 main.go:119] libmachine: Parsing certificate... I1202 12:54:57.466247 33981 cli_runner.go:110] Run: docker network inspect minikube --format "{{(index .IPAM.Config 0).Subnet}},{{(index .IPAM.Config 0).Gateway}},{{(index .Options "com.docker.network.driver.mtu")}}" W1202 12:54:57.646484 33981 cli_runner.go:148] docker network inspect minikube --format "{{(index .IPAM.Config 0).Subnet}},{{(index .IPAM.Config 0).Gateway}},{{(index .Options "com.docker.network.driver.mtu")}}" returned with exit code 1 I1202 12:54:57.651016 33981 network_create.go:178] running [docker network inspect minikube] to gather additional debugging logs... I1202 12:54:57.651122 33981 cli_runner.go:110] Run: docker network inspect minikube W1202 12:54:57.830456 33981 cli_runner.go:148] docker network inspect minikube returned with exit code 1 I1202 12:54:57.830582 33981 network_create.go:181] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1 stdout: [] stderr: Error: No such network: minikube I1202 12:54:57.830594 33981 network_create.go:183] output of [docker network inspect minikube]: -- stdout -- [] -- /stdout -- ** stderr ** Error: No such network: minikube ** /stderr ** I1202 12:54:57.830637 33981 cli_runner.go:110] Run: docker network inspect bridge --format "{{(index .IPAM.Config 0).Subnet}},{{(index .IPAM.Config 0).Gateway}},{{(index .Options "com.docker.network.driver.mtu")}}" I1202 12:54:58.044711 33981 network_create.go:96] attempt to create network 192.168.49.0/24 with subnet: minikube and gateway 192.168.49.1 and MTU of 1500 ... 
I1202 12:54:58.047717 33981 cli_runner.go:110] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true minikube -o com.docker.network.driver.mtu=1500 I1202 12:54:58.537657 33981 kic.go:93] calculated static IP "192.168.49.2" for the "minikube" container I1202 12:54:58.549596 33981 cli_runner.go:110] Run: docker ps -a --format {{.Names}} I1202 12:54:58.820613 33981 cli_runner.go:110] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true I1202 12:54:59.062090 33981 oci.go:102] Successfully created a docker volume minikube I1202 12:54:59.062380 33981 cli_runner.go:110] Run: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e -d /var/lib I1202 12:55:00.767669 33981 cli_runner.go:154] Completed: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e -d /var/lib: (1.70526312s) I1202 12:55:00.767764 33981 oci.go:106] Successfully prepared a docker volume minikube W1202 12:55:00.767831 33981 oci.go:153] Your kernel does not support swap limit capabilities or the cgroup is not mounted. I1202 12:55:00.767882 33981 cli_runner.go:110] Run: docker info --format "'{{json .SecurityOptions}}'" I1202 12:55:00.768499 33981 preload.go:97] Checking if preload exists for k8s version v1.19.4 and runtime docker I1202 12:55:00.769622 33981 preload.go:105] Found local preload: /home/user/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.4-docker-overlay2-amd64.tar.lz4 I1202 12:55:00.769628 33981 kic.go:148] Starting extracting preloaded images to volume ... 
I1202 12:55:00.769654 33981 cli_runner.go:110] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/user/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e -I lz4 -xvf /preloaded.tar -C /extractDir I1202 12:55:01.656648 33981 cli_runner.go:110] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=5800mb --memory-swap=5800mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e I1202 12:55:04.369851 33981 cli_runner.go:154] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=5800mb --memory-swap=5800mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e: (2.713105541s) I1202 12:55:04.372893 33981 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Running}} I1202 12:55:04.965838 33981 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}} I1202 12:55:05.494275 33981 cli_runner.go:110] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables I1202 12:55:07.394547 33981 cli_runner.go:154] Completed: docker exec minikube stat /var/lib/dpkg/alternatives/iptables: (1.900240047s) I1202 12:55:07.400010 33981 oci.go:245] the created container "minikube" has a running status. I1202 12:55:07.404511 33981 kic.go:179] Creating ssh key for kic: /home/user/.minikube/machines/minikube/id_rsa... 
I1202 12:55:08.152091 33981 vm_assets.go:96] NewFileAsset: /home/user/.minikube/machines/minikube/id_rsa.pub -> /home/docker/.ssh/authorized_keys I1202 12:55:08.161296 33981 kic_runner.go:179] docker (temp): /home/user/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes) I1202 12:55:10.198468 33981 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}} I1202 12:55:11.047009 33981 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys I1202 12:55:11.050896 33981 kic_runner.go:114] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys] I1202 12:55:12.368658 33981 kic_runner.go:123] Done: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]: (1.3176996s) I1202 12:55:22.972830 33981 cli_runner.go:154] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/user/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e -I lz4 -xvf /preloaded.tar -C /extractDir: (22.203140847s) I1202 12:55:22.972957 33981 kic.go:157] duration metric: took 22.203326 seconds to extract preloaded images to volume I1202 12:55:22.973100 33981 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}} I1202 12:55:23.204430 33981 machine.go:88] provisioning docker machine ... I1202 12:55:23.204601 33981 ubuntu.go:166] provisioning hostname "minikube" I1202 12:55:23.204715 33981 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I1202 12:55:23.445363 33981 main.go:119] libmachine: Using SSH client type: native I1202 12:55:23.445695 33981 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x808c20] 0x808be0 [] 0s} 127.0.0.1 32783 } I1202 12:55:23.445783 33981 main.go:119] libmachine: About to run SSH command: sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname I1202 12:55:23.723660 33981 main.go:119] libmachine: SSH cmd err, output: : minikube I1202 12:55:23.723779 33981 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I1202 12:55:23.926856 33981 main.go:119] libmachine: Using SSH client type: native I1202 12:55:23.927153 33981 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x808c20] 0x808be0 [] 0s} 127.0.0.1 32783 } I1202 12:55:23.927216 33981 main.go:119] libmachine: About to run SSH command: if ! 
grep -xq '.*\sminikube' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts; else echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; fi fi I1202 12:55:24.169680 33981 main.go:119] libmachine: SSH cmd err, output: : I1202 12:55:24.169782 33981 ubuntu.go:172] set auth options {CertDir:/home/user/.minikube CaCertPath:/home/user/.minikube/certs/ca.pem CaPrivateKeyPath:/home/user/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/user/.minikube/machines/server.pem ServerKeyPath:/home/user/.minikube/machines/server-key.pem ClientKeyPath:/home/user/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/user/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/user/.minikube} I1202 12:55:24.169801 33981 ubuntu.go:174] setting up certificates I1202 12:55:24.169809 33981 provision.go:82] configureAuth start I1202 12:55:24.169850 33981 cli_runner.go:110] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I1202 12:55:24.355800 33981 provision.go:131] copyHostCerts I1202 12:55:24.355978 33981 vm_assets.go:96] NewFileAsset: /home/user/.minikube/certs/ca.pem -> /home/user/.minikube/ca.pem I1202 12:55:24.356308 33981 exec_runner.go:91] found /home/user/.minikube/ca.pem, removing ... I1202 12:55:24.356418 33981 exec_runner.go:98] cp: /home/user/.minikube/certs/ca.pem --> /home/user/.minikube/ca.pem (1070 bytes) I1202 12:55:24.356591 33981 vm_assets.go:96] NewFileAsset: /home/user/.minikube/certs/cert.pem -> /home/user/.minikube/cert.pem I1202 12:55:24.356690 33981 exec_runner.go:91] found /home/user/.minikube/cert.pem, removing ... I1202 12:55:24.356715 33981 exec_runner.go:98] cp: /home/user/.minikube/certs/cert.pem --> /home/user/.minikube/cert.pem (1115 bytes) I1202 12:55:24.356754 33981 vm_assets.go:96] NewFileAsset: /home/user/.minikube/certs/key.pem -> /home/user/.minikube/key.pem I1202 12:55:24.356814 33981 exec_runner.go:91] found /home/user/.minikube/key.pem, removing ... 
I1202 12:55:24.356937 33981 exec_runner.go:98] cp: /home/user/.minikube/certs/key.pem --> /home/user/.minikube/key.pem (1675 bytes) I1202 12:55:24.357145 33981 provision.go:105] generating server cert: /home/user/.minikube/machines/server.pem ca-key=/home/user/.minikube/certs/ca.pem private-key=/home/user/.minikube/certs/ca-key.pem org=bri.minikube san=[192.168.49.2 localhost 127.0.0.1 minikube minikube] I1202 12:55:24.572052 33981 provision.go:159] copyRemoteCerts I1202 12:55:24.572306 33981 ssh_runner.go:148] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I1202 12:55:24.572452 33981 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I1202 12:55:24.746672 33981 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/user/.minikube/machines/minikube/id_rsa Username:docker} I1202 12:55:24.949497 33981 vm_assets.go:96] NewFileAsset: /home/user/.minikube/certs/ca.pem -> /etc/docker/ca.pem I1202 12:55:24.950985 33981 ssh_runner.go:215] scp /home/user/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1070 bytes) I1202 12:55:24.990552 33981 vm_assets.go:96] NewFileAsset: /home/user/.minikube/machines/server.pem -> /etc/docker/server.pem I1202 12:55:24.990715 33981 ssh_runner.go:215] scp /home/user/.minikube/machines/server.pem --> /etc/docker/server.pem (1184 bytes) I1202 12:55:25.056093 33981 vm_assets.go:96] NewFileAsset: /home/user/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem I1202 12:55:25.056770 33981 ssh_runner.go:215] scp /home/user/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes) I1202 12:55:25.116425 33981 provision.go:85] duration metric: configureAuth took 946.587025ms I1202 12:55:25.116595 33981 ubuntu.go:190] setting minikube options for container-runtime I1202 12:55:25.117177 33981 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I1202 12:55:25.298718 33981 main.go:119] libmachine: Using SSH client type: native I1202 12:55:25.299318 33981 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x808c20] 0x808be0 [] 0s} 127.0.0.1 32783 } I1202 12:55:25.299517 33981 main.go:119] libmachine: About to run SSH command: df --output=fstype / | tail -n 1 I1202 12:55:25.526546 33981 main.go:119] libmachine: SSH cmd err, output: : overlay I1202 12:55:25.526562 33981 ubuntu.go:71] root file system type: overlay I1202 12:55:25.526661 33981 provision.go:290] Updating docker unit: /lib/systemd/system/docker.service ... I1202 12:55:25.526771 33981 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I1202 12:55:25.711759 33981 main.go:119] libmachine: Using SSH client type: native I1202 12:55:25.712172 33981 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x808c20] 0x808be0 [] 0s} 127.0.0.1 32783 } I1202 12:55:25.712241 33981 main.go:119] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %s "[Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket [Service] Type=notify # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. 
The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target " | sudo tee /lib/systemd/system/docker.service.new I1202 12:55:25.969634 33981 main.go:119] libmachine: SSH cmd err, output: : [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket [Service] Type=notify # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. 
TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target I1202 12:55:25.979403 33981 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I1202 12:55:26.248636 33981 main.go:119] libmachine: Using SSH client type: native I1202 12:55:26.248917 33981 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x808c20] 0x808be0 [] 0s} 127.0.0.1 32783 } I1202 12:55:26.249048 33981 main.go:119] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; } I1202 12:55:27.975129 33981 main.go:119] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2020-09-16 17:01:20.000000000 +0000 +++ /lib/systemd/system/docker.service.new 2020-12-02 17:55:25.964810104 +0000 @@ -8,24 +8,22 @@ [Service] Type=notify -# the default is not to use systemd for cgroups because the delegate issues still -# exists and systemd currently does not support the cgroup feature set required -# for containers run by docker -ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -ExecReload=/bin/kill -s HUP $MAINPID -TimeoutSec=0 -RestartSec=2 -Restart=always - -# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229. -# Both the old, and new location are accepted by systemd 229 and up, so using the old location -# to make them work for either version of systemd. -StartLimitBurst=3 - -# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230. -# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make -# this option work for either version of systemd. -StartLimitInterval=60s + + + +# This file is a systemd drop-in unit that inherits from the base dockerd configuration. +# The base configuration already specifies an 'ExecStart=...' command. The first directive +# here is to clear out that command inherited from the base configuration. Without this, +# the command from the base configuration and the command specified here are treated as +# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd +# will catch this invalid input and refuse to start the service with an error like: +# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. + +# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other +# container runtimes. If left unlimited, it may result in OOM issues with MySQL. +ExecStart= +ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 +ExecReload=/bin/kill -s HUP # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. @@ -33,9 +31,10 @@ LimitNPROC=infinity LimitCORE=infinity -# Comment TasksMax if your systemd version does not support it. 
-# Only systemd 226 and above support this option. +# Uncomment TasksMax if your systemd version supports it. +# Only systemd 226 and above support this version. TasksMax=infinity +TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install. Executing: /lib/systemd/systemd-sysv-install enable docker I1202 12:55:27.975220 33981 machine.go:91] provisioned docker machine in 4.770639984s I1202 12:55:27.975229 33981 client.go:168] LocalClient.Create took 30.509434485s I1202 12:55:27.975236 33981 start.go:172] duration metric: libmachine.API.Create for "minikube" took 30.509459714s I1202 12:55:27.975241 33981 start.go:268] post-start starting for "minikube" (driver="docker") I1202 12:55:27.975245 33981 start.go:278] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I1202 12:55:27.975287 33981 ssh_runner.go:148] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I1202 12:55:27.975395 33981 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I1202 12:55:28.227888 33981 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/user/.minikube/machines/minikube/id_rsa Username:docker} I1202 12:55:28.374168 33981 ssh_runner.go:148] Run: cat /etc/os-release I1202 12:55:28.379543 33981 main.go:119] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found I1202 12:55:28.379651 33981 main.go:119] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found I1202 12:55:28.379750 33981 main.go:119] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found I1202 12:55:28.379805 33981 info.go:97] Remote host: Ubuntu 20.04.1 LTS I1202 12:55:28.379818 33981 filesync.go:118] Scanning /home/user/.minikube/addons for local assets ... I1202 12:55:28.379862 33981 filesync.go:118] Scanning /home/user/.minikube/files for local assets ... I1202 12:55:28.379942 33981 start.go:271] post-start completed in 404.695145ms I1202 12:55:28.380497 33981 cli_runner.go:110] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I1202 12:55:28.590725 33981 profile.go:150] Saving config to /home/user/.minikube/profiles/minikube/config.json ... 
I1202 12:55:28.591374 33981 ssh_runner.go:148] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I1202 12:55:28.591502 33981 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I1202 12:55:28.765187 33981 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/user/.minikube/machines/minikube/id_rsa Username:docker} I1202 12:55:28.908022 33981 start.go:130] duration metric: createHost completed in 31.442510434s I1202 12:55:28.919690 33981 start.go:81] releasing machines lock for "minikube", held for 31.454398076s I1202 12:55:28.919846 33981 cli_runner.go:110] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I1202 12:55:29.108957 33981 ssh_runner.go:148] Run: systemctl --version I1202 12:55:29.109177 33981 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I1202 12:55:29.109520 33981 ssh_runner.go:148] Run: curl -sS -m 2 https://k8s.gcr.io/ I1202 12:55:29.109610 33981 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I1202 12:55:29.465632 33981 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/user/.minikube/machines/minikube/id_rsa Username:docker} I1202 12:55:29.609958 33981 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/user/.minikube/machines/minikube/id_rsa Username:docker} I1202 12:55:29.920768 33981 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service containerd I1202 12:55:29.933802 33981 ssh_runner.go:148] Run: sudo systemctl cat docker.service I1202 12:55:29.958024 33981 cruntime.go:193] skipping containerd shutdown because we are bound to it I1202 12:55:29.958572 33981 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service crio I1202 12:55:29.986951 33981 ssh_runner.go:148] Run: sudo systemctl cat docker.service I1202 12:55:30.039976 33981 ssh_runner.go:148] Run: sudo systemctl daemon-reload I1202 12:55:30.200452 33981 ssh_runner.go:148] Run: sudo systemctl start docker I1202 12:55:30.211218 33981 ssh_runner.go:148] Run: docker version --format {{.Server.Version}} I1202 12:55:30.423895 33981 out.go:110] 🐳 Preparing Kubernetes v1.19.4 on Docker 19.03.13 ... 🐳 Preparing Kubernetes v1.19.4 on Docker 19.03.13 ... 
I1202 12:55:30.425491 33981 cli_runner.go:110] Run: docker network inspect minikube --format "{{(index .IPAM.Config 0).Subnet}},{{(index .IPAM.Config 0).Gateway}},{{(index .Options "com.docker.network.driver.mtu")}}" I1202 12:55:30.664824 33981 ssh_runner.go:148] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts I1202 12:55:30.670117 33981 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts" I1202 12:55:30.681388 33981 out.go:110] β–ͺ kubelet.cgroup-driver=systemd β–ͺ kubelet.cgroup-driver=systemd I1202 12:55:30.681484 33981 preload.go:97] Checking if preload exists for k8s version v1.19.4 and runtime docker I1202 12:55:30.681503 33981 preload.go:105] Found local preload: /home/user/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.4-docker-overlay2-amd64.tar.lz4 I1202 12:55:30.681540 33981 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}} I1202 12:55:30.838954 33981 docker.go:382] Got preloaded images: -- stdout -- k8s.gcr.io/kube-proxy:v1.19.4 k8s.gcr.io/kube-controller-manager:v1.19.4 k8s.gcr.io/kube-apiserver:v1.19.4 k8s.gcr.io/kube-scheduler:v1.19.4 gcr.io/k8s-minikube/storage-provisioner:v3 k8s.gcr.io/etcd:3.4.13-0 kubernetesui/dashboard:v2.0.3 k8s.gcr.io/coredns:1.7.0 kubernetesui/metrics-scraper:v1.0.4 k8s.gcr.io/pause:3.2 -- /stdout -- I1202 12:55:30.839065 33981 docker.go:319] Images already preloaded, skipping extraction I1202 12:55:30.839111 33981 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}} I1202 12:55:31.065779 33981 docker.go:382] Got preloaded images: -- stdout -- k8s.gcr.io/kube-proxy:v1.19.4 k8s.gcr.io/kube-apiserver:v1.19.4 k8s.gcr.io/kube-controller-manager:v1.19.4 k8s.gcr.io/kube-scheduler:v1.19.4 gcr.io/k8s-minikube/storage-provisioner:v3 k8s.gcr.io/etcd:3.4.13-0 kubernetesui/dashboard:v2.0.3 k8s.gcr.io/coredns:1.7.0 kubernetesui/metrics-scraper:v1.0.4 k8s.gcr.io/pause:3.2 -- /stdout -- I1202 12:55:31.065858 33981 cache_images.go:74] Images are preloaded, skipping loading I1202 12:55:31.065907 33981 ssh_runner.go:148] Run: docker info --format {{.CgroupDriver}} I1202 12:55:31.377478 33981 cni.go:74] Creating CNI manager for "" I1202 12:55:31.377721 33981 cni.go:117] CNI unnecessary in this configuration, recommending no CNI I1202 12:55:31.377770 33981 kubeadm.go:84] Using pod CIDR: I1202 12:55:31.377785 33981 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet: AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.19.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]} I1202 12:55:31.377927 33981 kubeadm.go:154] kubeadm config: 
apiVersion: kubeadm.k8s.io/v1beta2 kind: InitConfiguration localAPIEndpoint: advertiseAddress: 192.168.49.2 bindPort: 8443 bootstrapTokens: - groups: - system:bootstrappers:kubeadm:default-node-token ttl: 24h0m0s usages: - signing - authentication nodeRegistration: criSocket: /var/run/dockershim.sock name: "minikube" kubeletExtraArgs: node-ip: 192.168.49.2 taints: [] --- apiVersion: kubeadm.k8s.io/v1beta2 kind: ClusterConfiguration apiServer: certSANs: ["127.0.0.1", "localhost", "192.168.49.2"] extraArgs: enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota" controllerManager: extraArgs: leader-elect: "false" scheduler: extraArgs: leader-elect: "false" certificatesDir: /var/lib/minikube/certs clusterName: mk controlPlaneEndpoint: control-plane.minikube.internal:8443 dns: type: CoreDNS etcd: local: dataDir: /var/lib/minikube/etcd extraArgs: proxy-refresh-interval: "70000" kubernetesVersion: v1.19.4 networking: dnsDomain: cluster.local podSubnet: "" serviceSubnet: 10.96.0.0/12 --- apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration authentication: x509: clientCAFile: /var/lib/minikube/certs/ca.crt cgroupDriver: cgroupfs clusterDomain: "cluster.local" # disable disk resource management by default imageGCHighThresholdPercent: 100 evictionHard: nodefs.available: "0%" nodefs.inodesFree: "0%" imagefs.available: "0%" failSwapOn: false staticPodPath: /etc/kubernetes/manifests --- apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration clusterCIDR: "" metricsBindAddress: 192.168.49.2:10249 I1202 12:55:31.378277 33981 kubeadm.go:822] kubelet [Unit] Wants=docker.socket [Service] ExecStart= ExecStart=/var/lib/minikube/binaries/v1.19.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=systemd --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2 [Install] config: {KubernetesVersion:v1.19.4 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubelet Key:cgroup-driver Value:systemd}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} I1202 12:55:31.378523 33981 ssh_runner.go:148] Run: sudo ls /var/lib/minikube/binaries/v1.19.4 I1202 12:55:31.395094 33981 binaries.go:44] Found k8s binaries, skipping transfer I1202 12:55:31.395233 33981 ssh_runner.go:148] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube I1202 12:55:31.416757 33981 ssh_runner.go:215] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (358 bytes) I1202 12:55:31.477735 33981 ssh_runner.go:215] scp memory --> /lib/systemd/system/kubelet.service (349 bytes) I1202 12:55:31.529536 33981 ssh_runner.go:215] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1787 bytes) I1202 12:55:31.574961 33981 ssh_runner.go:148] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts I1202 12:55:31.604420 33981 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ 
/etc/hosts" I1202 12:55:31.667076 33981 certs.go:52] Setting up /home/user/.minikube/profiles/minikube for IP: 192.168.49.2 I1202 12:55:31.676765 33981 certs.go:169] skipping minikubeCA CA generation: /home/user/.minikube/ca.key I1202 12:55:31.684067 33981 certs.go:169] skipping proxyClientCA CA generation: /home/user/.minikube/proxy-client-ca.key I1202 12:55:31.688423 33981 certs.go:273] generating minikube-user signed cert: /home/user/.minikube/profiles/minikube/client.key I1202 12:55:31.692771 33981 crypto.go:69] Generating cert /home/user/.minikube/profiles/minikube/client.crt with IP's: [] I1202 12:55:31.888728 33981 crypto.go:157] Writing cert to /home/user/.minikube/profiles/minikube/client.crt ... I1202 12:55:31.889058 33981 lock.go:36] WriteFile acquiring /home/user/.minikube/profiles/minikube/client.crt: {Name:mka0ef7eddbe7d9ec6c179d29f5dfe60acbc8eb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I1202 12:55:31.889670 33981 crypto.go:165] Writing key to /home/user/.minikube/profiles/minikube/client.key ... I1202 12:55:31.889858 33981 lock.go:36] WriteFile acquiring /home/user/.minikube/profiles/minikube/client.key: {Name:mk6e5bd751ae74838233ee1d10abd7d6a1eed187 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I1202 12:55:31.890095 33981 certs.go:273] generating minikube signed cert: /home/user/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 I1202 12:55:31.890152 33981 crypto.go:69] Generating cert /home/user/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1] I1202 12:55:32.183136 33981 crypto.go:157] Writing cert to /home/user/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ... I1202 12:55:32.183313 33981 lock.go:36] WriteFile acquiring /home/user/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mke5e1b18ed91420b0ede56ca4b3c3476dc54e5f Clock:{} Delay:500ms Timeout:1m0s Cancel:} I1202 12:55:32.183463 33981 crypto.go:165] Writing key to /home/user/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ... I1202 12:55:32.183542 33981 lock.go:36] WriteFile acquiring /home/user/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mk3c646e13cb54a71aefb2a32814eae2a5e3133c Clock:{} Delay:500ms Timeout:1m0s Cancel:} I1202 12:55:32.183781 33981 certs.go:284] copying /home/user/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /home/user/.minikube/profiles/minikube/apiserver.crt I1202 12:55:32.183917 33981 certs.go:288] copying /home/user/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /home/user/.minikube/profiles/minikube/apiserver.key I1202 12:55:32.184011 33981 certs.go:273] generating aggregator signed cert: /home/user/.minikube/profiles/minikube/proxy-client.key I1202 12:55:32.184018 33981 crypto.go:69] Generating cert /home/user/.minikube/profiles/minikube/proxy-client.crt with IP's: [] I1202 12:55:32.378774 33981 crypto.go:157] Writing cert to /home/user/.minikube/profiles/minikube/proxy-client.crt ... I1202 12:55:32.378910 33981 lock.go:36] WriteFile acquiring /home/user/.minikube/profiles/minikube/proxy-client.crt: {Name:mke824afe02447ff146d8328146ddcc4476c5cec Clock:{} Delay:500ms Timeout:1m0s Cancel:} I1202 12:55:32.379346 33981 crypto.go:165] Writing key to /home/user/.minikube/profiles/minikube/proxy-client.key ... 
I1202 12:55:32.379366 33981 lock.go:36] WriteFile acquiring /home/user/.minikube/profiles/minikube/proxy-client.key: {Name:mk03c609c274e53483e2f97ae723d15eeea4746e Clock:{} Delay:500ms Timeout:1m0s Cancel:} I1202 12:55:32.379570 33981 vm_assets.go:96] NewFileAsset: /home/user/.minikube/profiles/minikube/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt I1202 12:55:32.379587 33981 vm_assets.go:96] NewFileAsset: /home/user/.minikube/profiles/minikube/apiserver.key -> /var/lib/minikube/certs/apiserver.key I1202 12:55:32.379598 33981 vm_assets.go:96] NewFileAsset: /home/user/.minikube/profiles/minikube/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt I1202 12:55:32.379607 33981 vm_assets.go:96] NewFileAsset: /home/user/.minikube/profiles/minikube/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key I1202 12:55:32.379616 33981 vm_assets.go:96] NewFileAsset: /home/user/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt I1202 12:55:32.379626 33981 vm_assets.go:96] NewFileAsset: /home/user/.minikube/ca.key -> /var/lib/minikube/certs/ca.key I1202 12:55:32.379635 33981 vm_assets.go:96] NewFileAsset: /home/user/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt I1202 12:55:32.379644 33981 vm_assets.go:96] NewFileAsset: /home/user/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key I1202 12:55:32.379694 33981 certs.go:348] found cert: /home/user/.minikube/certs/home/user/.minikube/certs/ca-key.pem (1675 bytes) I1202 12:55:32.379734 33981 certs.go:348] found cert: /home/user/.minikube/certs/home/user/.minikube/certs/ca.pem (1070 bytes) I1202 12:55:32.379758 33981 certs.go:348] found cert: /home/user/.minikube/certs/home/user/.minikube/certs/cert.pem (1115 bytes) I1202 12:55:32.379777 33981 certs.go:348] found cert: /home/user/.minikube/certs/home/user/.minikube/certs/key.pem (1675 bytes) I1202 12:55:32.379799 33981 vm_assets.go:96] NewFileAsset: /home/user/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem I1202 12:55:32.380516 33981 ssh_runner.go:215] scp /home/user/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I1202 12:55:32.447896 33981 ssh_runner.go:215] scp /home/user/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes) I1202 12:55:32.528216 33981 ssh_runner.go:215] scp /home/user/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes) I1202 12:55:32.595623 33981 ssh_runner.go:215] scp /home/user/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes) I1202 12:55:32.655181 33981 ssh_runner.go:215] scp /home/user/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I1202 12:55:32.741637 33981 ssh_runner.go:215] scp /home/user/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes) I1202 12:55:32.825772 33981 ssh_runner.go:215] scp /home/user/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I1202 12:55:32.890268 33981 ssh_runner.go:215] scp /home/user/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes) I1202 12:55:32.951814 33981 ssh_runner.go:215] scp /home/user/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I1202 12:55:33.039121 33981 ssh_runner.go:215] scp memory --> /var/lib/minikube/kubeconfig (392 bytes) I1202 12:55:33.148232 33981 ssh_runner.go:148] Run: openssl version I1202 12:55:33.161988 
33981 ssh_runner.go:148] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I1202 12:55:33.211014 33981 ssh_runner.go:148] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I1202 12:55:33.253834 33981 certs.go:389] hashing: -rw-r--r-- 1 root root 1111 Dec 2 17:30 /usr/share/ca-certificates/minikubeCA.pem I1202 12:55:33.257935 33981 ssh_runner.go:148] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I1202 12:55:33.268300 33981 ssh_runner.go:148] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I1202 12:55:33.327281 33981 kubeadm.go:324] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e Memory:5800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.4 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubelet Key:cgroup-driver Value:systemd}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.19.4 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[]} I1202 12:55:33.328694 33981 ssh_runner.go:148] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I1202 12:55:33.567359 33981 ssh_runner.go:148] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I1202 12:55:33.585374 33981 ssh_runner.go:148] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I1202 12:55:33.608778 33981 kubeadm.go:211] ignoring SystemVerification for kubeadm because of docker driver I1202 12:55:33.613618 33981 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I1202 12:55:33.668778 33981 kubeadm.go:147] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I1202 12:55:33.679267 33981 ssh_runner.go:148] Run: /bin/bash -c "sudo env 
PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables" I1202 12:57:34.181203 33981 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": (2m0.501163506s) W1202 12:57:34.208566 33981 out.go:146] πŸ’’ initialization failed, will try again: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.19.4 [preflight] Running pre-flight checks [preflight] The system verification failed. 
Printing the output from the verification: KERNEL_VERSION: 5.4.0-54-generic DOCKER_VERSION: 19.03.13 DOCKER_GRAPH_DRIVER: overlay2 OS: Linux CGROUPS_CPU: enabled CGROUPS_CPUACCT: enabled CGROUPS_CPUSET: enabled CGROUPS_DEVICES: enabled CGROUPS_FREEZER: enabled CGROUPS_MEMORY: enabled CGROUPS_HUGETLB: enabled CGROUPS_PIDS: enabled [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Generating "apiserver-kubelet-client" certificate and key [certs] Generating "front-proxy-ca" certificate and key [certs] Generating "front-proxy-client" certificate and key [certs] Generating "etcd/ca" certificate and key [certs] Generating "etcd/server" certificate and key [certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1] [certs] Generating "etcd/peer" certificate and key [certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1] [certs] Generating "etcd/healthcheck-client" certificate and key [certs] Generating "apiserver-etcd-client" certificate and key [certs] Generating "sa" key and public key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. 
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all Kubernetes containers running in docker: - 'docker ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'docker logs CONTAINERID' stderr: W1202 17:55:34.140200 782 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ [WARNING Swap]: running with swap on is not supported. Please disable swap [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.4.0-54-generic\n", err: exit status 1 [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher πŸ’’ initialization failed, will try again: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.19.4 [preflight] Running pre-flight checks [preflight] The system verification failed. 
Printing the output from the verification: KERNEL_VERSION: 5.4.0-54-generic DOCKER_VERSION: 19.03.13 DOCKER_GRAPH_DRIVER: overlay2 OS: Linux CGROUPS_CPU: enabled CGROUPS_CPUACCT: enabled CGROUPS_CPUSET: enabled CGROUPS_DEVICES: enabled CGROUPS_FREEZER: enabled CGROUPS_MEMORY: enabled CGROUPS_HUGETLB: enabled CGROUPS_PIDS: enabled [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Generating "apiserver-kubelet-client" certificate and key [certs] Generating "front-proxy-ca" certificate and key [certs] Generating "front-proxy-client" certificate and key [certs] Generating "etcd/ca" certificate and key [certs] Generating "etcd/server" certificate and key [certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1] [certs] Generating "etcd/peer" certificate and key [certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1] [certs] Generating "etcd/healthcheck-client" certificate and key [certs] Generating "apiserver-etcd-client" certificate and key [certs] Generating "sa" key and public key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. 
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all Kubernetes containers running in docker: - 'docker ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'docker logs CONTAINERID' stderr: W1202 17:55:34.140200 782 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ [WARNING Swap]: running with swap on is not supported. Please disable swap [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.4.0-54-generic\n", err: exit status 1 [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher I1202 12:57:34.236265 33981 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force" I1202 12:57:35.608666 33981 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force": (1.372374084s) I1202 12:57:35.608760 33981 ssh_runner.go:148] Run: sudo systemctl stop -f kubelet I1202 12:57:35.636767 33981 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I1202 12:57:35.825963 33981 kubeadm.go:211] ignoring SystemVerification for kubeadm because of docker driver I1202 12:57:35.826332 33981 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I1202 12:57:35.846327 33981 kubeadm.go:147] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': 
No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I1202 12:57:35.846611 33981 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables" I1202 12:59:34.143660 33981 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": (1m58.296977009s) I1202 12:59:34.143769 33981 kubeadm.go:326] StartCluster complete in 4m0.816494539s I1202 12:59:34.143870 33981 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}} I1202 12:59:34.359037 33981 logs.go:206] 0 containers: [] W1202 12:59:34.359138 33981 logs.go:208] No container was found matching "kube-apiserver" I1202 12:59:34.359308 33981 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}} I1202 12:59:34.493022 33981 logs.go:206] 0 containers: [] W1202 12:59:34.493046 33981 logs.go:208] No container was found matching "etcd" I1202 12:59:34.493232 33981 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}} I1202 12:59:34.656493 33981 logs.go:206] 0 containers: [] W1202 12:59:34.656750 33981 logs.go:208] No container was found matching "coredns" I1202 12:59:34.656794 33981 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}} I1202 12:59:34.793347 33981 logs.go:206] 0 containers: [] W1202 12:59:34.793414 33981 logs.go:208] No container was found matching "kube-scheduler" I1202 12:59:34.793453 33981 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}} I1202 12:59:34.986269 33981 logs.go:206] 0 containers: [] W1202 12:59:34.986480 33981 logs.go:208] No container was found matching "kube-proxy" I1202 12:59:34.986517 33981 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}} I1202 12:59:35.578153 33981 logs.go:206] 0 containers: [] W1202 12:59:35.578309 33981 logs.go:208] No container was found matching "kubernetes-dashboard" I1202 12:59:35.578385 33981 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}} I1202 12:59:36.001932 33981 logs.go:206] 0 containers: [] W1202 12:59:36.001951 33981 logs.go:208] No container was found matching "storage-provisioner" I1202 12:59:36.005112 33981 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}} I1202 12:59:36.492830 33981 logs.go:206] 0 containers: [] W1202 12:59:36.492847 33981 logs.go:208] No container was 
found matching "kube-controller-manager" I1202 12:59:36.492857 33981 logs.go:120] Gathering logs for kubelet ... I1202 12:59:36.492905 33981 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I1202 12:59:36.655299 33981 logs.go:120] Gathering logs for dmesg ... I1202 12:59:36.659894 33981 ssh_runner.go:148] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I1202 12:59:36.714053 33981 logs.go:120] Gathering logs for describe nodes ... I1202 12:59:36.721258 33981 ssh_runner.go:148] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.19.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" W1202 12:59:37.705165 33981 logs.go:127] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.19.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.19.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1 stdout: stderr: The connection to the server localhost:8443 was refused - did you specify the right host or port? output: ** stderr ** The connection to the server localhost:8443 was refused - did you specify the right host or port? ** /stderr ** I1202 12:59:37.705410 33981 logs.go:120] Gathering logs for Docker ... I1202 12:59:37.705424 33981 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u docker -n 400" I1202 12:59:37.817694 33981 logs.go:120] Gathering logs for container status ... I1202 12:59:37.829107 33981 ssh_runner.go:148] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I1202 12:59:40.178901 33981 ssh_runner.go:188] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.342287713s) W1202 12:59:40.179112 33981 out.go:258] Error starting cluster: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.19.4 [preflight] Running pre-flight checks [preflight] The system verification failed. 
Printing the output from the verification: KERNEL_VERSION: 5.4.0-54-generic DOCKER_VERSION: 19.03.13 DOCKER_GRAPH_DRIVER: overlay2 OS: Linux CGROUPS_CPU: enabled CGROUPS_CPUACCT: enabled CGROUPS_CPUSET: enabled CGROUPS_DEVICES: enabled CGROUPS_FREEZER: enabled CGROUPS_MEMORY: enabled CGROUPS_HUGETLB: enabled CGROUPS_PIDS: enabled [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Using existing apiserver-kubelet-client certificate and key on disk [certs] Using existing front-proxy-ca certificate authority [certs] Using existing front-proxy-client certificate and key on disk [certs] Using existing etcd/ca certificate authority [certs] Using existing etcd/server certificate and key on disk [certs] Using existing etcd/peer certificate and key on disk [certs] Using existing etcd/healthcheck-client certificate and key on disk [certs] Using existing apiserver-etcd-client certificate and key on disk [certs] Using the existing "sa" key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. 
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all Kubernetes containers running in docker: - 'docker ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'docker logs CONTAINERID' stderr: W1202 17:57:36.343561 8514 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ [WARNING Swap]: running with swap on is not supported. Please disable swap [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.4.0-54-generic\n", err: exit status 1 [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher W1202 12:59:40.183751 33981 out.go:146] W1202 12:59:40.183937 33981 out.go:146] πŸ’£ Error starting cluster: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.19.4 [preflight] Running pre-flight checks [preflight] The system verification failed. 
Printing the output from the verification: KERNEL_VERSION: 5.4.0-54-generic DOCKER_VERSION: 19.03.13 DOCKER_GRAPH_DRIVER: overlay2 OS: Linux CGROUPS_CPU: enabled CGROUPS_CPUACCT: enabled CGROUPS_CPUSET: enabled CGROUPS_DEVICES: enabled CGROUPS_FREEZER: enabled CGROUPS_MEMORY: enabled CGROUPS_HUGETLB: enabled CGROUPS_PIDS: enabled [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Using existing apiserver-kubelet-client certificate and key on disk [certs] Using existing front-proxy-ca certificate authority [certs] Using existing front-proxy-client certificate and key on disk [certs] Using existing etcd/ca certificate authority [certs] Using existing etcd/server certificate and key on disk [certs] Using existing etcd/peer certificate and key on disk [certs] Using existing etcd/healthcheck-client certificate and key on disk [certs] Using existing apiserver-etcd-client certificate and key on disk [certs] Using the existing "sa" key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. 
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all Kubernetes containers running in docker: - 'docker ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'docker logs CONTAINERID' stderr: W1202 17:57:36.343561 8514 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ [WARNING Swap]: running with swap on is not supported. Please disable swap [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.4.0-54-generic\n", err: exit status 1 [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher πŸ’£ Error starting cluster: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.19.4 [preflight] Running pre-flight checks [preflight] The system verification failed. 
Printing the output from the verification: KERNEL_VERSION: 5.4.0-54-generic DOCKER_VERSION: 19.03.13 DOCKER_GRAPH_DRIVER: overlay2 OS: Linux CGROUPS_CPU: enabled CGROUPS_CPUACCT: enabled CGROUPS_CPUSET: enabled CGROUPS_DEVICES: enabled CGROUPS_FREEZER: enabled CGROUPS_MEMORY: enabled CGROUPS_HUGETLB: enabled CGROUPS_PIDS: enabled [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Using existing apiserver-kubelet-client certificate and key on disk [certs] Using existing front-proxy-ca certificate authority [certs] Using existing front-proxy-client certificate and key on disk [certs] Using existing etcd/ca certificate authority [certs] Using existing etcd/server certificate and key on disk [certs] Using existing etcd/peer certificate and key on disk [certs] Using existing etcd/healthcheck-client certificate and key on disk [certs] Using existing apiserver-etcd-client certificate and key on disk [certs] Using the existing "sa" key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. 
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all Kubernetes containers running in docker: - 'docker ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'docker logs CONTAINERID' stderr: W1202 17:57:36.343561 8514 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ [WARNING Swap]: running with swap on is not supported. Please disable swap [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.4.0-54-generic\n", err: exit status 1 [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher W1202 12:59:40.184147 33981 out.go:146] W1202 12:59:40.184223 33981 out.go:146] 😿 minikube is exiting due to an error. If the above message is not useful, open an issue: 😿 minikube is exiting due to an error. If the above message is not useful, open an issue: W1202 12:59:40.184352 33981 out.go:146] πŸ‘‰ https://github.com/kubernetes/minikube/issues/new/choose πŸ‘‰ https://github.com/kubernetes/minikube/issues/new/choose I1202 12:59:40.188552 33981 out.go:110] W1202 12:59:40.188651 33981 out.go:146] ❌ Exiting due to K8S_KUBELET_NOT_RUNNING: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.19.4 [preflight] Running pre-flight checks [preflight] The system verification failed. 
Printing the output from the verification: KERNEL_VERSION: 5.4.0-54-generic DOCKER_VERSION: 19.03.13 DOCKER_GRAPH_DRIVER: overlay2 OS: Linux CGROUPS_CPU: enabled CGROUPS_CPUACCT: enabled CGROUPS_CPUSET: enabled CGROUPS_DEVICES: enabled CGROUPS_FREEZER: enabled CGROUPS_MEMORY: enabled CGROUPS_HUGETLB: enabled CGROUPS_PIDS: enabled [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Using existing apiserver-kubelet-client certificate and key on disk [certs] Using existing front-proxy-ca certificate authority [certs] Using existing front-proxy-client certificate and key on disk [certs] Using existing etcd/ca certificate authority [certs] Using existing etcd/server certificate and key on disk [certs] Using existing etcd/peer certificate and key on disk [certs] Using existing etcd/healthcheck-client certificate and key on disk [certs] Using existing apiserver-etcd-client certificate and key on disk [certs] Using the existing "sa" key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. 
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all Kubernetes containers running in docker: - 'docker ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'docker logs CONTAINERID' stderr: W1202 17:57:36.343561 8514 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ [WARNING Swap]: running with swap on is not supported. Please disable swap [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.4.0-54-generic\n", err: exit status 1 [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher ❌ Exiting due to K8S_KUBELET_NOT_RUNNING: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.19.4 [preflight] Running pre-flight checks [preflight] The system verification failed. 
Printing the output from the verification: KERNEL_VERSION: 5.4.0-54-generic DOCKER_VERSION: 19.03.13 DOCKER_GRAPH_DRIVER: overlay2 OS: Linux CGROUPS_CPU: enabled CGROUPS_CPUACCT: enabled CGROUPS_CPUSET: enabled CGROUPS_DEVICES: enabled CGROUPS_FREEZER: enabled CGROUPS_MEMORY: enabled CGROUPS_HUGETLB: enabled CGROUPS_PIDS: enabled [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Using existing apiserver-kubelet-client certificate and key on disk [certs] Using existing front-proxy-ca certificate authority [certs] Using existing front-proxy-client certificate and key on disk [certs] Using existing etcd/ca certificate authority [certs] Using existing etcd/server certificate and key on disk [certs] Using existing etcd/peer certificate and key on disk [certs] Using existing etcd/healthcheck-client certificate and key on disk [certs] Using existing apiserver-etcd-client certificate and key on disk [certs] Using the existing "sa" key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. 
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all Kubernetes containers running in docker: - 'docker ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'docker logs CONTAINERID' stderr: W1202 17:57:36.343561 8514 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ [WARNING Swap]: running with swap on is not supported. Please disable swap [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.4.0-54-generic\n", err: exit status 1 [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher W1202 12:59:40.191680 33981 out.go:146] πŸ’‘ Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start πŸ’‘ Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start W1202 12:59:40.192036 33981 out.go:146] 🍿 Related issue: https://github.com/kubernetes/minikube/issues/4172 🍿 Related issue: https://github.com/kubernetes/minikube/issues/4172 I1202 12:59:40.192042 33981 out.go:110] ```
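The kubeadm failure above prints its own troubleshooting checklist; with the docker driver those commands have to run inside the minikube node container rather than on the host. A minimal sketch of running them via minikube ssh (CONTAINERID is a placeholder, and this assumes the default profile name):

```
# kubelet status and recent logs, inside the minikube node container
minikube ssh "sudo systemctl status kubelet"
minikube ssh "sudo journalctl -u kubelet --no-pager | tail -n 100"

# list Kubernetes containers that may have crashed on startup
minikube ssh "docker ps -a | grep kube | grep -v pause"

# inspect a failing container's logs (CONTAINERID is a placeholder)
minikube ssh "docker logs CONTAINERID"
```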

Various other things I've tried

lingsamuel commented 3 years ago

/kind support

tstromberg commented 3 years ago

Can you please include the output of minikube logs? We can then figure out why the kubelet is failing.
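A minimal way to capture that output as a file that can be attached to the issue (plain shell redirection; nothing minikube-specific beyond the logs command):

```
# dump cluster and kubelet logs to a file for attaching to the issue
minikube logs > minikube-logs.txt 2>&1
```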

priyawadhwa commented 3 years ago

Hey @bri-pug, it looks like you tried to configure systemd as the cgroup manager by running minikube start -v=5 --alsologtostderr --extra-config=kubelet.cgroup-driver=systemd, but Docker is still using cgroupfs, as shown in the logs:

[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
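That warning comes from kubeadm and refers to the cgroup driver of the Docker daemon itself, not the kubelet flag alone. The guide it links to recommends switching Docker to the systemd driver via its daemon config; a minimal sketch of that change (standard Docker paths, shown as an illustration rather than a verified fix for this machine):

```
# configure Docker to use the systemd cgroup driver, per the kubernetes.io CRI guide
sudo mkdir -p /etc/docker
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
```

As I understand it, this is essentially what minikube's --force-systemd flag automates inside the node container.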

Could you try running:

minikube delete
minikube start --force-systemd

and see if that resolves the issue?
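For what it's worth, a quick way to confirm the driver actually changed after the restart (docker info exposes the active driver; the minikube ssh form assumes the docker driver used here):

```
# cgroup driver of the Docker daemon inside the minikube node; expect: systemd
minikube ssh "docker info --format '{{.CgroupDriver}}'"

# and on the host, for comparison
docker info --format '{{.CgroupDriver}}'
```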

medyagh commented 3 years ago

Hi @bri-pug, I haven't heard back from you, and I wonder if you still have this issue. Regrettably, there isn't enough information in this issue to make it actionable, and enough time has passed that the issue is likely difficult to replicate.

I will close this issue for now, but please feel free to reopen it whenever you are ready to provide more information.