kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Ubuntu 20.04 with Docker driver: [kubelet-check] Initial timeout of 40s passed. #9110

Closed: mramos-dev closed this issue 4 years ago

mramos-dev commented 4 years ago

Steps to reproduce the issue:

  1. Running `minikube start` with both v1.12.2 and v1.12.3 results in the following error.

I've tried the following with similar results (I can include the output from that attempt if you think it will help): `minikube start --extra-config=kubelet.cgroup-driver=systemd`
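For what it's worth, here is a minimal sketch of how to check whether the host Docker daemon is using the `cgroupfs` driver and switch it to `systemd`, which is what the kubeadm preflight warning in the log below recommends. The `daemon.json` edit is an assumption drawn from that warning, not a confirmed fix; it overwrites any existing daemon configuration, and restarting Docker stops running containers:

```bash
# Report which cgroup driver the host Docker daemon is using;
# the kubeadm preflight warning in the log below flags "cgroupfs".
docker info --format '{{.CgroupDriver}}'

# Sketch of one commonly suggested fix: switch the host daemon to the
# systemd cgroup driver. This replaces /etc/docker/daemon.json (merge
# by hand if you already have one).
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker

# Recreate the cluster from scratch so minikube re-provisions the node.
minikube delete
minikube start
```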

Full output of failed command:

```
mramos@mramos-l-ubuntu:~$ minikube --alsologtostderr --v=7 start W0828 09:16:00.098438 361583 root.go:246] Error reading config file at /home/mramos/.minikube/config/config.json: open /home/mramos/.minikube/config/config.json: no such file or directory I0828 09:16:00.098813 361583 out.go:197] Setting JSON to false I0828 09:16:00.122243 361583 start.go:100] hostinfo: {"hostname":"mramos-l-ubuntu","uptime":50402,"bootTime":1598570158,"procs":450,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.4.0-42-generic","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"51ebc25a-b39b-4c28-8368-2bc9fea5f62d"} I0828 09:16:00.122739 361583 start.go:110] virtualization: kvm host I0828 09:16:00.132701 361583 out.go:105] 😄 minikube v1.12.3 on Ubuntu 20.04 😄 minikube v1.12.3 on Ubuntu 20.04 I0828 09:16:00.132857 361583 notify.go:125] Checking for updates... I0828 09:16:00.132860 361583 driver.go:287] Setting default libvirt URI to qemu:///system I0828 09:16:00.132885 361583 global.go:102] Querying for installed drivers using PATH=/home/mramos/.minikube/bin:/home/mramos/.nvm/versions/node/v12.14.1/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin I0828 09:16:00.172295 361583 docker.go:87] docker version: linux-19.03.12 I0828 09:16:00.173754 361583 global.go:110] docker priority: 8, state: {Installed:true Healthy:true NeedsImprovement:false Error: Fix: Doc:} I0828 09:16:00.173817 361583 global.go:110] kvm2 priority: 7, state: {Installed:false Healthy:false NeedsImprovement:false Error:exec: "virsh": executable file not found in $PATH Fix:Install libvirt Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/} I0828 09:16:00.173865 361583 global.go:110] none priority: 3, state: {Installed:true Healthy:false NeedsImprovement:false Error:the 'none' driver must be run as the root user Fix:For non-root usage, try the newer 'docker' driver Doc:} I0828 09:16:00.173905 361583 global.go:110] podman priority: 2, state: {Installed:false Healthy:false NeedsImprovement:false Error:exec: "podman": executable file not found in $PATH Fix:Install Podman Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/} I0828 09:16:00.173952 361583 global.go:110] virtualbox priority: 5, state: {Installed:false Healthy:false NeedsImprovement:false Error:unable to find VBoxManage in $PATH Fix:Install VirtualBox Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/virtualbox/} I0828 09:16:00.173992 361583 global.go:110] vmware priority: 6, state: {Installed:false Healthy:false NeedsImprovement:false Error:exec: "docker-machine-driver-vmware": executable file not found in $PATH Fix:Install docker-machine-driver-vmware Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/} I0828 09:16:00.174012 361583 driver.go:244] "docker" has a higher priority (8) than "" (0) I0828 09:16:00.174028 361583 driver.go:235] not recommending "none" due to health: the 'none' driver must be run as the root user I0828 09:16:00.174041 361583 driver.go:269] Picked: docker I0828 09:16:00.174048 361583 driver.go:270] Alternatives: [] I0828 09:16:00.174054 361583 driver.go:271] Rejects: [kvm2 none podman virtualbox vmware] I0828 09:16:00.177116 361583 out.go:105] ✨ Automatically selected the docker driver ✨ Automatically selected the docker driver I0828 09:16:00.177133 361583 start.go:232] selected driver: docker I0828 09:16:00.177141 361583 start.go:638] validating driver "docker" against I0828 09:16:00.177156 361583 
start.go:649] status for docker: {Installed:true Healthy:true NeedsImprovement:false Error: Fix: Doc:} I0828 09:16:00.177202 361583 cli_runner.go:109] Run: docker system info --format "{{json .}}" I0828 09:16:00.244322 361583 start_flags.go:222] no existing cluster config was found, will generate one from the flags I0828 09:16:00.244354 361583 start_flags.go:240] Using suggested 3900MB memory alloc based on sys=15906MB, container=15906MB I0828 09:16:00.244593 361583 start_flags.go:613] Wait components to verify : map[apiserver:true system_pods:true] I0828 09:16:00.244622 361583 cni.go:74] Creating CNI manager for "" I0828 09:16:00.244633 361583 cni.go:117] CNI unnecessary in this configuration, recommending no CNI I0828 09:16:00.244648 361583 start_flags.go:344] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.11@sha256:6fee59db7d67ed8ae6835e4bcb02f32056dc95f11cb369c51e352b62dd198aa0 Memory:3900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.3 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s} I0828 09:16:00.247896 361583 out.go:105] 👍 Starting control plane node minikube in cluster minikube 👍 Starting control plane node minikube in cluster minikube I0828 09:16:00.280858 361583 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.11@sha256:6fee59db7d67ed8ae6835e4bcb02f32056dc95f11cb369c51e352b62dd198aa0 in local docker daemon, skipping pull I0828 09:16:00.280883 361583 cache.go:113] gcr.io/k8s-minikube/kicbase:v0.0.11@sha256:6fee59db7d67ed8ae6835e4bcb02f32056dc95f11cb369c51e352b62dd198aa0 exists in daemon, skipping pull I0828 09:16:00.280889 361583 preload.go:97] Checking if preload exists for k8s version v1.18.3 and runtime docker I0828 09:16:00.337363 361583 preload.go:122] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v5-v1.18.3-docker-overlay2-amd64.tar.lz4 I0828 09:16:00.337412 361583 cache.go:51] Caching tarball of preloaded images I0828 09:16:00.337471 361583 preload.go:97] Checking if preload exists for k8s version v1.18.3 and runtime docker I0828 09:16:00.389594 361583 preload.go:122] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v5-v1.18.3-docker-overlay2-amd64.tar.lz4 I0828 09:16:00.393234 361583 out.go:105] 💾 Downloading Kubernetes v1.18.3 preload ... 💾 Downloading Kubernetes v1.18.3 preload ... 
I0828 09:16:00.393465 361583 download.go:78] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v5-v1.18.3-docker-overlay2-amd64.tar.lz4 -> /home/mramos/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v5-v1.18.3-docker-overlay2-amd64.tar.lz4 > preloaded-images-k8s-v5-v1.18.3-docker-overlay2-amd64.tar.lz4: 510.91 MiB I0828 09:17:18.906999 361583 preload.go:160] saving checksum for preloaded-images-k8s-v5-v1.18.3-docker-overlay2-amd64.tar.lz4 ... I0828 09:17:18.983626 361583 preload.go:177] verifying checksumm of /home/mramos/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v5-v1.18.3-docker-overlay2-amd64.tar.lz4 ... I0828 09:17:19.992513 361583 cache.go:54] Finished verifying existence of preloaded tar for v1.18.3 on docker I0828 09:17:19.992747 361583 profile.go:150] Saving config to /home/mramos/.minikube/profiles/minikube/config.json ... I0828 09:17:19.992772 361583 lock.go:35] WriteFile acquiring /home/mramos/.minikube/profiles/minikube/config.json: {Name:mk09dc26ba42e51efd1453654d66e423dd162f3c Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0828 09:17:19.992940 361583 cache.go:181] Successfully downloaded all kic artifacts I0828 09:17:19.992958 361583 start.go:244] acquiring machines lock for minikube: {Name:mk55cdf3d7f197305fa799901f990227a21e6096 Clock:{} Delay:500ms Timeout:15m0s Cancel:} I0828 09:17:19.992996 361583 start.go:248] acquired machines lock for "minikube" in 28.258µs I0828 09:17:19.993021 361583 start.go:85] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.11@sha256:6fee59db7d67ed8ae6835e4bcb02f32056dc95f11cb369c51e352b62dd198aa0 Memory:3900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.3 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.3 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.3 ControlPlane:true Worker:true} I0828 09:17:19.993062 361583 start.go:122] createHost starting for "" (driver="docker") I0828 09:17:19.998554 361583 out.go:105] 🔥 Creating docker container (CPUs=2, Memory=3900MB) ... 🔥 Creating docker container (CPUs=2, Memory=3900MB) ... 
I0828 09:17:19.998682 361583 start.go:158] libmachine.API.Create for "minikube" (driver="docker") I0828 09:17:19.998714 361583 client.go:164] LocalClient.Create starting I0828 09:17:19.998749 361583 main.go:115] libmachine: Creating CA: /home/mramos/.minikube/certs/ca.pem I0828 09:17:20.234610 361583 main.go:115] libmachine: Creating client certificate: /home/mramos/.minikube/certs/cert.pem I0828 09:17:20.351035 361583 cli_runner.go:109] Run: docker ps -a --format {{.Names}} I0828 09:17:20.382135 361583 cli_runner.go:109] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true I0828 09:17:20.418339 361583 oci.go:101] Successfully created a docker volume minikube I0828 09:17:20.418400 361583 cli_runner.go:109] Run: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.11@sha256:6fee59db7d67ed8ae6835e4bcb02f32056dc95f11cb369c51e352b62dd198aa0 -d /var/lib I0828 09:17:21.496343 361583 cli_runner.go:151] Completed: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.11@sha256:6fee59db7d67ed8ae6835e4bcb02f32056dc95f11cb369c51e352b62dd198aa0 -d /var/lib: (1.077856554s) I0828 09:17:21.496416 361583 oci.go:105] Successfully prepared a docker volume minikube W0828 09:17:21.496527 361583 oci.go:165] Your kernel does not support swap limit capabilities or the cgroup is not mounted. I0828 09:17:21.496551 361583 preload.go:97] Checking if preload exists for k8s version v1.18.3 and runtime docker I0828 09:17:21.496706 361583 cli_runner.go:109] Run: docker info --format "'{{json .SecurityOptions}}'" I0828 09:17:21.496772 361583 preload.go:105] Found local preload: /home/mramos/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v5-v1.18.3-docker-overlay2-amd64.tar.lz4 I0828 09:17:21.496878 361583 kic.go:133] Starting extracting preloaded images to volume ... I0828 09:17:21.497151 361583 cli_runner.go:109] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/mramos/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v5-v1.18.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.11@sha256:6fee59db7d67ed8ae6835e4bcb02f32056dc95f11cb369c51e352b62dd198aa0 -I lz4 -xvf /preloaded.tar -C /extractDir I0828 09:17:21.632409 361583 cli_runner.go:109] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --volume minikube:/var --security-opt apparmor=unconfined --cpus=2 --memory=3900mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.11@sha256:6fee59db7d67ed8ae6835e4bcb02f32056dc95f11cb369c51e352b62dd198aa0 I0828 09:17:22.133467 361583 cli_runner.go:109] Run: docker container inspect minikube --format={{.State.Running}} I0828 09:17:22.189698 361583 cli_runner.go:109] Run: docker container inspect minikube --format={{.State.Status}} I0828 09:17:22.241701 361583 cli_runner.go:109] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables I0828 09:17:22.406004 361583 oci.go:222] the created container "minikube" has a running status. 
I0828 09:17:22.406037 361583 kic.go:157] Creating ssh key for kic: /home/mramos/.minikube/machines/minikube/id_rsa... I0828 09:17:22.711470 361583 vm_assets.go:95] NewFileAsset: /home/mramos/.minikube/machines/minikube/id_rsa.pub -> /home/docker/.ssh/authorized_keys I0828 09:17:22.711520 361583 kic_runner.go:179] docker (temp): /home/mramos/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes) I0828 09:17:22.854439 361583 cli_runner.go:109] Run: docker container inspect minikube --format={{.State.Status}} I0828 09:17:22.896492 361583 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys I0828 09:17:22.896515 361583 kic_runner.go:114] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys] I0828 09:17:26.285908 361583 cli_runner.go:151] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/mramos/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v5-v1.18.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.11@sha256:6fee59db7d67ed8ae6835e4bcb02f32056dc95f11cb369c51e352b62dd198aa0 -I lz4 -xvf /preloaded.tar -C /extractDir: (4.788686051s) I0828 09:17:26.285939 361583 kic.go:138] duration metric: took 4.789070 seconds to extract preloaded images to volume I0828 09:17:26.286024 361583 cli_runner.go:109] Run: docker container inspect minikube --format={{.State.Status}} I0828 09:17:26.332436 361583 machine.go:88] provisioning docker machine ... I0828 09:17:26.332472 361583 ubuntu.go:166] provisioning hostname "minikube" I0828 09:17:26.332527 361583 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0828 09:17:26.370796 361583 main.go:115] libmachine: Using SSH client type: native I0828 09:17:26.370980 361583 main.go:115] libmachine: &{{{ 0 [] [] []} docker [0x7b9850] 0x7b9820 [] 0s} 127.0.0.1 32867 } I0828 09:17:26.370994 361583 main.go:115] libmachine: About to run SSH command: sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname I0828 09:17:26.605555 361583 main.go:115] libmachine: SSH cmd err, output: : minikube I0828 09:17:26.605751 361583 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0828 09:17:26.665862 361583 main.go:115] libmachine: Using SSH client type: native I0828 09:17:26.666005 361583 main.go:115] libmachine: &{{{ 0 [] [] []} docker [0x7b9850] 0x7b9820 [] 0s} 127.0.0.1 32867 } I0828 09:17:26.666027 361583 main.go:115] libmachine: About to run SSH command: if ! 
grep -xq '.*\sminikube' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts; else echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; fi fi I0828 09:17:26.809289 361583 main.go:115] libmachine: SSH cmd err, output: : I0828 09:17:26.809322 361583 ubuntu.go:172] set auth options {CertDir:/home/mramos/.minikube CaCertPath:/home/mramos/.minikube/certs/ca.pem CaPrivateKeyPath:/home/mramos/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/mramos/.minikube/machines/server.pem ServerKeyPath:/home/mramos/.minikube/machines/server-key.pem ClientKeyPath:/home/mramos/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/mramos/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/mramos/.minikube} I0828 09:17:26.809350 361583 ubuntu.go:174] setting up certificates I0828 09:17:26.809364 361583 provision.go:82] configureAuth start I0828 09:17:26.809416 361583 cli_runner.go:109] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I0828 09:17:26.857496 361583 provision.go:131] copyHostCerts I0828 09:17:26.857529 361583 vm_assets.go:95] NewFileAsset: /home/mramos/.minikube/certs/ca.pem -> /home/mramos/.minikube/ca.pem I0828 09:17:26.857568 361583 exec_runner.go:98] cp: /home/mramos/.minikube/certs/ca.pem --> /home/mramos/.minikube/ca.pem (1034 bytes) I0828 09:17:26.857656 361583 vm_assets.go:95] NewFileAsset: /home/mramos/.minikube/certs/cert.pem -> /home/mramos/.minikube/cert.pem I0828 09:17:26.857681 361583 exec_runner.go:98] cp: /home/mramos/.minikube/certs/cert.pem --> /home/mramos/.minikube/cert.pem (1078 bytes) I0828 09:17:26.857746 361583 vm_assets.go:95] NewFileAsset: /home/mramos/.minikube/certs/key.pem -> /home/mramos/.minikube/key.pem I0828 09:17:26.857777 361583 exec_runner.go:98] cp: /home/mramos/.minikube/certs/key.pem --> /home/mramos/.minikube/key.pem (1675 bytes) I0828 09:17:26.857848 361583 provision.go:105] generating server cert: /home/mramos/.minikube/machines/server.pem ca-key=/home/mramos/.minikube/certs/ca.pem private-key=/home/mramos/.minikube/certs/ca-key.pem org=mramos.minikube san=[172.17.0.3 localhost 127.0.0.1 minikube minikube] I0828 09:17:27.228626 361583 provision.go:159] copyRemoteCerts I0828 09:17:27.228673 361583 ssh_runner.go:148] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0828 09:17:27.228703 361583 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0828 09:17:27.260958 361583 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/mramos/.minikube/machines/minikube/id_rsa Username:docker} I0828 09:17:27.387336 361583 vm_assets.go:95] NewFileAsset: /home/mramos/.minikube/certs/ca.pem -> /etc/docker/ca.pem I0828 09:17:27.387460 361583 ssh_runner.go:215] scp /home/mramos/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1034 bytes) I0828 09:17:27.480310 361583 vm_assets.go:95] NewFileAsset: /home/mramos/.minikube/machines/server.pem -> /etc/docker/server.pem I0828 09:17:27.480441 361583 ssh_runner.go:215] scp /home/mramos/.minikube/machines/server.pem --> /etc/docker/server.pem (1147 bytes) I0828 09:17:27.601203 361583 vm_assets.go:95] NewFileAsset: /home/mramos/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem I0828 09:17:27.601350 361583 ssh_runner.go:215] scp 
/home/mramos/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes) I0828 09:17:27.723994 361583 provision.go:85] duration metric: configureAuth took 914.613876ms I0828 09:17:27.724030 361583 ubuntu.go:190] setting minikube options for container-runtime I0828 09:17:27.724305 361583 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0828 09:17:27.771460 361583 main.go:115] libmachine: Using SSH client type: native I0828 09:17:27.771622 361583 main.go:115] libmachine: &{{{ 0 [] [] []} docker [0x7b9850] 0x7b9820 [] 0s} 127.0.0.1 32867 } I0828 09:17:27.771637 361583 main.go:115] libmachine: About to run SSH command: df --output=fstype / | tail -n 1 I0828 09:17:27.904496 361583 main.go:115] libmachine: SSH cmd err, output: : overlay I0828 09:17:27.904570 361583 ubuntu.go:71] root file system type: overlay I0828 09:17:27.905117 361583 provision.go:290] Updating docker unit: /lib/systemd/system/docker.service ... I0828 09:17:27.905271 361583 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0828 09:17:27.965967 361583 main.go:115] libmachine: Using SSH client type: native I0828 09:17:27.966121 361583 main.go:115] libmachine: &{{{ 0 [] [] []} docker [0x7b9850] 0x7b9820 [] 0s} 127.0.0.1 32867 } I0828 09:17:27.966215 361583 main.go:115] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %s "[Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket [Service] Type=notify # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. 
TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target " | sudo tee /lib/systemd/system/docker.service.new I0828 09:17:28.187389 361583 main.go:115] libmachine: SSH cmd err, output: : [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket [Service] Type=notify # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. 
TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target I0828 09:17:28.187721 361583 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0828 09:17:28.243952 361583 main.go:115] libmachine: Using SSH client type: native I0828 09:17:28.244115 361583 main.go:115] libmachine: &{{{ 0 [] [] []} docker [0x7b9850] 0x7b9820 [] 0s} 127.0.0.1 32867 } I0828 09:17:28.244140 361583 main.go:115] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; } I0828 09:17:29.342661 361583 main.go:115] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2020-03-10 19:42:48.000000000 +0000 +++ /lib/systemd/system/docker.service.new 2020-08-28 13:17:28.177989720 +0000 @@ -8,24 +8,22 @@ [Service] Type=notify -# the default is not to use systemd for cgroups because the delegate issues still -# exists and systemd currently does not support the cgroup feature set required -# for containers run by docker -ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -ExecReload=/bin/kill -s HUP $MAINPID -TimeoutSec=0 -RestartSec=2 -Restart=always - -# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229. -# Both the old, and new location are accepted by systemd 229 and up, so using the old location -# to make them work for either version of systemd. -StartLimitBurst=3 - -# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230. -# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make -# this option work for either version of systemd. -StartLimitInterval=60s + + + +# This file is a systemd drop-in unit that inherits from the base dockerd configuration. +# The base configuration already specifies an 'ExecStart=...' command. The first directive +# here is to clear out that command inherited from the base configuration. Without this, +# the command from the base configuration and the command specified here are treated as +# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd +# will catch this invalid input and refuse to start the service with an error like: +# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. + +# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other +# container runtimes. If left unlimited, it may result in OOM issues with MySQL. +ExecStart= +ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 +ExecReload=/bin/kill -s HUP # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. @@ -33,9 +31,10 @@ LimitNPROC=infinity LimitCORE=infinity -# Comment TasksMax if your systemd version does not support it. 
-# Only systemd 226 and above support this option. +# Uncomment TasksMax if your systemd version supports it. +# Only systemd 226 and above support this version. TasksMax=infinity +TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes I0828 09:17:29.342787 361583 machine.go:91] provisioned docker machine in 3.010329223s I0828 09:17:29.342803 361583 client.go:167] LocalClient.Create took 9.344081517s I0828 09:17:29.342827 361583 start.go:166] duration metric: libmachine.API.Create for "minikube" took 9.344141322s I0828 09:17:29.342840 361583 start.go:207] post-start starting for "minikube" (driver="docker") I0828 09:17:29.342849 361583 start.go:217] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0828 09:17:29.342928 361583 ssh_runner.go:148] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0828 09:17:29.342976 361583 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0828 09:17:29.381511 361583 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/mramos/.minikube/machines/minikube/id_rsa Username:docker} I0828 09:17:29.727668 361583 ssh_runner.go:148] Run: cat /etc/os-release I0828 09:17:29.737426 361583 main.go:115] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found I0828 09:17:29.737511 361583 main.go:115] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found I0828 09:17:29.737553 361583 main.go:115] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found I0828 09:17:29.737593 361583 info.go:99] Remote host: Ubuntu 20.04 LTS I0828 09:17:29.737630 361583 filesync.go:118] Scanning /home/mramos/.minikube/addons for local assets ... I0828 09:17:29.737792 361583 filesync.go:118] Scanning /home/mramos/.minikube/files for local assets ... I0828 09:17:29.737971 361583 start.go:210] post-start completed in 395.107592ms I0828 09:17:29.738746 361583 cli_runner.go:109] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I0828 09:17:29.804165 361583 profile.go:150] Saving config to /home/mramos/.minikube/profiles/minikube/config.json ... 
I0828 09:17:29.804400 361583 start.go:125] duration metric: createHost completed in 9.81132823s I0828 09:17:29.804419 361583 start.go:76] releasing machines lock for "minikube", held for 9.811410132s I0828 09:17:29.804499 361583 cli_runner.go:109] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I0828 09:17:29.855584 361583 ssh_runner.go:148] Run: curl -sS -m 2 https://k8s.gcr.io/ I0828 09:17:29.855623 361583 ssh_runner.go:148] Run: systemctl --version I0828 09:17:29.855665 361583 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0828 09:17:29.855699 361583 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0828 09:17:29.905975 361583 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/mramos/.minikube/machines/minikube/id_rsa Username:docker} I0828 09:17:29.906023 361583 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/mramos/.minikube/machines/minikube/id_rsa Username:docker} I0828 09:17:29.991940 361583 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service containerd I0828 09:17:30.162884 361583 ssh_runner.go:148] Run: sudo systemctl cat docker.service I0828 09:17:30.217840 361583 cruntime.go:194] skipping containerd shutdown because we are bound to it I0828 09:17:30.218003 361583 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service crio I0828 09:17:30.272677 361583 ssh_runner.go:148] Run: sudo systemctl cat docker.service I0828 09:17:30.325629 361583 ssh_runner.go:148] Run: sudo systemctl daemon-reload I0828 09:17:30.465583 361583 ssh_runner.go:148] Run: sudo systemctl start docker I0828 09:17:30.490140 361583 ssh_runner.go:148] Run: docker version --format {{.Server.Version}} I0828 09:17:30.568461 361583 out.go:105] 🐳 Preparing Kubernetes v1.18.3 on Docker 19.03.8 ... 🐳 Preparing Kubernetes v1.18.3 on Docker 19.03.8 ... 
I0828 09:17:30.568751 361583 cli_runner.go:109] Run: docker network ls --filter name=bridge --format {{.ID}} I0828 09:17:30.624421 361583 cli_runner.go:109] Run: docker network inspect --format "{{(index .IPAM.Config 0).Gateway}}" 0d0cf33b9533 I0828 09:17:30.659461 361583 network.go:77] got host ip for mount in container by inspect docker network: 172.17.0.1 I0828 09:17:30.659520 361583 ssh_runner.go:148] Run: grep 172.17.0.1 host.minikube.internal$ /etc/hosts I0828 09:17:30.662329 361583 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "172.17.0.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts" I0828 09:17:30.700931 361583 preload.go:97] Checking if preload exists for k8s version v1.18.3 and runtime docker I0828 09:17:30.701036 361583 preload.go:105] Found local preload: /home/mramos/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v5-v1.18.3-docker-overlay2-amd64.tar.lz4 I0828 09:17:30.701222 361583 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}} I0828 09:17:30.759811 361583 docker.go:381] Got preloaded images: -- stdout -- gcr.io/k8s-minikube/storage-provisioner:v2 kubernetesui/dashboard:v2.0.1 k8s.gcr.io/kube-proxy:v1.18.3 k8s.gcr.io/kube-scheduler:v1.18.3 k8s.gcr.io/kube-controller-manager:v1.18.3 k8s.gcr.io/kube-apiserver:v1.18.3 kubernetesui/metrics-scraper:v1.0.4 k8s.gcr.io/pause:3.2 k8s.gcr.io/coredns:1.6.7 k8s.gcr.io/etcd:3.4.3-0 -- /stdout -- I0828 09:17:30.759841 361583 docker.go:319] Images already preloaded, skipping extraction I0828 09:17:30.759883 361583 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}} I0828 09:17:30.815172 361583 docker.go:381] Got preloaded images: -- stdout -- gcr.io/k8s-minikube/storage-provisioner:v2 kubernetesui/dashboard:v2.0.1 k8s.gcr.io/kube-proxy:v1.18.3 k8s.gcr.io/kube-controller-manager:v1.18.3 k8s.gcr.io/kube-scheduler:v1.18.3 k8s.gcr.io/kube-apiserver:v1.18.3 kubernetesui/metrics-scraper:v1.0.4 k8s.gcr.io/pause:3.2 k8s.gcr.io/coredns:1.6.7 k8s.gcr.io/etcd:3.4.3-0 -- /stdout -- I0828 09:17:30.815204 361583 cache_images.go:74] Images are preloaded, skipping loading I0828 09:17:30.815266 361583 ssh_runner.go:148] Run: docker info --format {{.CgroupDriver}} I0828 09:17:30.860016 361583 cni.go:74] Creating CNI manager for "" I0828 09:17:30.860036 361583 cni.go:117] CNI unnecessary in this configuration, recommending no CNI I0828 09:17:30.860045 361583 kubeadm.go:84] Using pod CIDR: I0828 09:17:30.860058 361583 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet: AdvertiseAddress:172.17.0.3 APIServerPort:8443 KubernetesVersion:v1.18.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.3"]]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]} I0828 09:17:30.860160 361583 kubeadm.go:154] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta2 kind: InitConfiguration localAPIEndpoint: advertiseAddress: 172.17.0.3 
bindPort: 8443 bootstrapTokens: - groups: - system:bootstrappers:kubeadm:default-node-token ttl: 24h0m0s usages: - signing - authentication nodeRegistration: criSocket: /var/run/dockershim.sock name: "minikube" kubeletExtraArgs: node-ip: 172.17.0.3 taints: [] --- apiVersion: kubeadm.k8s.io/v1beta2 kind: ClusterConfiguration apiServer: certSANs: ["127.0.0.1", "localhost", "172.17.0.3"] extraArgs: enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota" certificatesDir: /var/lib/minikube/certs clusterName: mk controlPlaneEndpoint: control-plane.minikube.internal:8443 dns: type: CoreDNS etcd: local: dataDir: /var/lib/minikube/etcd controllerManager: extraArgs: "leader-elect": "false" scheduler: extraArgs: "leader-elect": "false" kubernetesVersion: v1.18.3 networking: dnsDomain: cluster.local podSubnet: "" serviceSubnet: 10.96.0.0/12 --- apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration authentication: x509: clientCAFile: /var/lib/minikube/certs/ca.crt cgroupDriver: cgroupfs clusterDomain: "cluster.local" # disable disk resource management by default imageGCHighThresholdPercent: 100 evictionHard: nodefs.available: "0%" nodefs.inodesFree: "0%" imagefs.available: "0%" failSwapOn: false staticPodPath: /etc/kubernetes/manifests --- apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration clusterCIDR: "" metricsBindAddress: 172.17.0.3:10249 I0828 09:17:30.860254 361583 kubeadm.go:796] kubelet [Unit] Wants=docker.socket [Service] ExecStart= ExecStart=/var/lib/minikube/binaries/v1.18.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.3 [Install] config: {KubernetesVersion:v1.18.3 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} I0828 09:17:30.860308 361583 ssh_runner.go:148] Run: sudo ls /var/lib/minikube/binaries/v1.18.3 I0828 09:17:30.896935 361583 binaries.go:43] Found k8s binaries, skipping transfer I0828 09:17:30.897173 361583 ssh_runner.go:148] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube I0828 09:17:30.933545 361583 ssh_runner.go:215] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes) I0828 09:17:31.043826 361583 ssh_runner.go:215] scp memory --> /lib/systemd/system/kubelet.service (349 bytes) I0828 09:17:31.163082 361583 ssh_runner.go:215] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1730 bytes) I0828 09:17:31.319048 361583 ssh_runner.go:148] Run: grep 172.17.0.3 control-plane.minikube.internal$ /etc/hosts I0828 09:17:31.328593 361583 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "172.17.0.3 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts" I0828 09:17:31.414622 361583 ssh_runner.go:148] Run: sudo systemctl daemon-reload I0828 09:17:31.566590 361583 ssh_runner.go:148] Run: sudo systemctl start kubelet I0828 09:17:31.617581 361583 certs.go:52] Setting up 
/home/mramos/.minikube/profiles/minikube for IP: 172.17.0.3 I0828 09:17:31.617647 361583 certs.go:173] generating minikubeCA CA: /home/mramos/.minikube/ca.key I0828 09:17:32.077688 361583 crypto.go:157] Writing cert to /home/mramos/.minikube/ca.crt ... I0828 09:17:32.077709 361583 lock.go:35] WriteFile acquiring /home/mramos/.minikube/ca.crt: {Name:mkeaac34fb89ac08fdc45cc31bd04276d557e45a Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0828 09:17:32.077827 361583 crypto.go:165] Writing key to /home/mramos/.minikube/ca.key ... I0828 09:17:32.077838 361583 lock.go:35] WriteFile acquiring /home/mramos/.minikube/ca.key: {Name:mk93c574ae031ed0a8ac27175dfb1c5bbe041d75 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0828 09:17:32.077902 361583 certs.go:173] generating proxyClientCA CA: /home/mramos/.minikube/proxy-client-ca.key I0828 09:17:32.215368 361583 crypto.go:157] Writing cert to /home/mramos/.minikube/proxy-client-ca.crt ... I0828 09:17:32.215388 361583 lock.go:35] WriteFile acquiring /home/mramos/.minikube/proxy-client-ca.crt: {Name:mkda2889459eb2a84251a223568b70a8d1a3c123 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0828 09:17:32.215501 361583 crypto.go:165] Writing key to /home/mramos/.minikube/proxy-client-ca.key ... I0828 09:17:32.215510 361583 lock.go:35] WriteFile acquiring /home/mramos/.minikube/proxy-client-ca.key: {Name:mk851041ebb43cd3379c6d0eeb4ec42fad5bb11d Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0828 09:17:32.215592 361583 certs.go:273] generating minikube-user signed cert: /home/mramos/.minikube/profiles/minikube/client.key I0828 09:17:32.215598 361583 crypto.go:69] Generating cert /home/mramos/.minikube/profiles/minikube/client.crt with IP's: [] I0828 09:17:32.360809 361583 crypto.go:157] Writing cert to /home/mramos/.minikube/profiles/minikube/client.crt ... I0828 09:17:32.360828 361583 lock.go:35] WriteFile acquiring /home/mramos/.minikube/profiles/minikube/client.crt: {Name:mkfeac219438ed82cd3c5d73b38be90712415d91 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0828 09:17:32.360937 361583 crypto.go:165] Writing key to /home/mramos/.minikube/profiles/minikube/client.key ... I0828 09:17:32.360950 361583 lock.go:35] WriteFile acquiring /home/mramos/.minikube/profiles/minikube/client.key: {Name:mkbba0dd8e345f848dd11121fce8dcb7b0dc90fc Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0828 09:17:32.361015 361583 certs.go:273] generating minikube signed cert: /home/mramos/.minikube/profiles/minikube/apiserver.key.0f3e66d0 I0828 09:17:32.361021 361583 crypto.go:69] Generating cert /home/mramos/.minikube/profiles/minikube/apiserver.crt.0f3e66d0 with IP's: [172.17.0.3 10.96.0.1 127.0.0.1 10.0.0.1] I0828 09:17:32.414219 361583 crypto.go:157] Writing cert to /home/mramos/.minikube/profiles/minikube/apiserver.crt.0f3e66d0 ... I0828 09:17:32.414240 361583 lock.go:35] WriteFile acquiring /home/mramos/.minikube/profiles/minikube/apiserver.crt.0f3e66d0: {Name:mka1125e28bef1f930aa7a1b152a70e3cd9a5fd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0828 09:17:32.414356 361583 crypto.go:165] Writing key to /home/mramos/.minikube/profiles/minikube/apiserver.key.0f3e66d0 ... 
I0828 09:17:32.414364 361583 lock.go:35] WriteFile acquiring /home/mramos/.minikube/profiles/minikube/apiserver.key.0f3e66d0: {Name:mk71223b7e881985ba849fa372b0f614d4d4f53f Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0828 09:17:32.414421 361583 certs.go:284] copying /home/mramos/.minikube/profiles/minikube/apiserver.crt.0f3e66d0 -> /home/mramos/.minikube/profiles/minikube/apiserver.crt I0828 09:17:32.414468 361583 certs.go:288] copying /home/mramos/.minikube/profiles/minikube/apiserver.key.0f3e66d0 -> /home/mramos/.minikube/profiles/minikube/apiserver.key I0828 09:17:32.414516 361583 certs.go:273] generating aggregator signed cert: /home/mramos/.minikube/profiles/minikube/proxy-client.key I0828 09:17:32.414522 361583 crypto.go:69] Generating cert /home/mramos/.minikube/profiles/minikube/proxy-client.crt with IP's: [] I0828 09:17:32.581491 361583 crypto.go:157] Writing cert to /home/mramos/.minikube/profiles/minikube/proxy-client.crt ... I0828 09:17:32.581511 361583 lock.go:35] WriteFile acquiring /home/mramos/.minikube/profiles/minikube/proxy-client.crt: {Name:mk5fccbfc85799bb58cf4023c4947d690d72cdc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0828 09:17:32.581627 361583 crypto.go:165] Writing key to /home/mramos/.minikube/profiles/minikube/proxy-client.key ... I0828 09:17:32.581637 361583 lock.go:35] WriteFile acquiring /home/mramos/.minikube/profiles/minikube/proxy-client.key: {Name:mkbeda87a86af8cd7752fc18d2ba78b4f85bdd48 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0828 09:17:32.581712 361583 vm_assets.go:95] NewFileAsset: /home/mramos/.minikube/profiles/minikube/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt I0828 09:17:32.581727 361583 vm_assets.go:95] NewFileAsset: /home/mramos/.minikube/profiles/minikube/apiserver.key -> /var/lib/minikube/certs/apiserver.key I0828 09:17:32.581736 361583 vm_assets.go:95] NewFileAsset: /home/mramos/.minikube/profiles/minikube/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt I0828 09:17:32.581744 361583 vm_assets.go:95] NewFileAsset: /home/mramos/.minikube/profiles/minikube/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key I0828 09:17:32.581753 361583 vm_assets.go:95] NewFileAsset: /home/mramos/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt I0828 09:17:32.581763 361583 vm_assets.go:95] NewFileAsset: /home/mramos/.minikube/ca.key -> /var/lib/minikube/certs/ca.key I0828 09:17:32.581776 361583 vm_assets.go:95] NewFileAsset: /home/mramos/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt I0828 09:17:32.581786 361583 vm_assets.go:95] NewFileAsset: /home/mramos/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key I0828 09:17:32.581826 361583 certs.go:348] found cert: /home/mramos/.minikube/certs/home/mramos/.minikube/certs/ca-key.pem (1679 bytes) I0828 09:17:32.581858 361583 certs.go:348] found cert: /home/mramos/.minikube/certs/home/mramos/.minikube/certs/ca.pem (1034 bytes) I0828 09:17:32.581879 361583 certs.go:348] found cert: /home/mramos/.minikube/certs/home/mramos/.minikube/certs/cert.pem (1078 bytes) I0828 09:17:32.581899 361583 certs.go:348] found cert: /home/mramos/.minikube/certs/home/mramos/.minikube/certs/key.pem (1675 bytes) I0828 09:17:32.581921 361583 vm_assets.go:95] NewFileAsset: /home/mramos/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem I0828 09:17:32.582528 361583 ssh_runner.go:215] scp /home/mramos/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1350 bytes) I0828 09:17:32.691892 361583 
ssh_runner.go:215] scp /home/mramos/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes) I0828 09:17:32.820337 361583 ssh_runner.go:215] scp /home/mramos/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1103 bytes) I0828 09:17:32.903214 361583 ssh_runner.go:215] scp /home/mramos/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes) I0828 09:17:33.031407 361583 ssh_runner.go:215] scp /home/mramos/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1066 bytes) I0828 09:17:33.150458 361583 ssh_runner.go:215] scp /home/mramos/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes) I0828 09:17:33.283691 361583 ssh_runner.go:215] scp /home/mramos/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1074 bytes) I0828 09:17:33.380002 361583 ssh_runner.go:215] scp /home/mramos/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes) I0828 09:17:33.474359 361583 ssh_runner.go:215] scp /home/mramos/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1066 bytes) I0828 09:17:33.578612 361583 ssh_runner.go:215] scp memory --> /var/lib/minikube/kubeconfig (392 bytes) I0828 09:17:33.700842 361583 ssh_runner.go:148] Run: openssl version I0828 09:17:33.711970 361583 ssh_runner.go:148] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0828 09:17:33.744345 361583 ssh_runner.go:148] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0828 09:17:33.749691 361583 certs.go:389] hashing: -rw-r--r-- 1 root root 1066 Aug 28 13:17 /usr/share/ca-certificates/minikubeCA.pem I0828 09:17:33.749786 361583 ssh_runner.go:148] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0828 09:17:33.758621 361583 ssh_runner.go:148] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0828 09:17:33.804191 361583 kubeadm.go:327] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.11@sha256:6fee59db7d67ed8ae6835e4bcb02f32056dc95f11cb369c51e352b62dd198aa0 Memory:3900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.3 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.3 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s} I0828 09:17:33.804405 361583 ssh_runner.go:148] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ 
--format={{.ID}}
I0828 09:17:33.853991 361583 ssh_runner.go:148] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0828 09:17:33.882910 361583 ssh_runner.go:148] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0828 09:17:33.914643 361583 kubeadm.go:211] ignoring SystemVerification for kubeadm because of docker driver
I0828 09:17:33.914713 361583 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0828 09:17:33.951182 361583 kubeadm.go:147] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0828 09:17:33.951323 361583 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0828 09:21:58.971732 361583 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": (4m25.020327048s)
W0828 09:21:58.972061 361583 out.go:151] 💥 initialization failed, will try again: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.4.0-42-generic
DOCKER_VERSION: 19.03.8
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [172.17.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [172.17.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
    - 'docker ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'docker logs CONTAINERID'

stderr:
W0828 13:17:34.062656 792 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING Swap]: running with swap on is not supported. Please disable swap
    [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.4.0-42-generic\n", err: exit status 1
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0828 13:17:37.458779 792 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0828 13:17:37.459648 792 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

💥 initialization failed, will try again: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.4.0-42-generic
DOCKER_VERSION: 19.03.8
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [172.17.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [172.17.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
    - 'docker ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'docker logs CONTAINERID'

stderr:
W0828 13:17:34.062656 792 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING Swap]: running with swap on is not supported. Please disable swap
    [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.4.0-42-generic\n", err: exit status 1
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0828 13:17:37.458779 792 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0828 13:17:37.459648 792 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

I0828 09:21:58.973048 361583 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0828 09:22:41.049292 361583 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force": (42.076192028s)
I0828 09:22:41.049368 361583 ssh_runner.go:148] Run: sudo systemctl stop -f kubelet
I0828 09:22:41.222544 361583 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0828 09:22:41.283357 361583 kubeadm.go:211] ignoring SystemVerification for kubeadm because of docker driver
I0828 09:22:41.283406 361583 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0828 09:22:41.303557 361583 kubeadm.go:147] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0828 09:22:41.303599 361583 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0828 09:27:04.190640 361583 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": (4m22.886992636s)
I0828 09:27:04.190802 361583 kubeadm.go:329] StartCluster complete in 9m30.386633734s
I0828 09:27:04.190963 361583 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0828 09:27:04.254512 361583 logs.go:203] 1 containers: [de78efeeea43]
I0828 09:27:04.254589 361583 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0828 09:27:04.291061 361583 logs.go:203] 1 containers: [564c5ed4e16b]
I0828 09:27:04.291137 361583 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0828 09:27:04.326614 361583 logs.go:203] 0 containers: []
W0828 09:27:04.326631 361583 logs.go:205] No container was found matching "coredns"
I0828 09:27:04.326670 361583 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0828 09:27:04.361855 361583 logs.go:203] 1 containers: [dd2afdf19621]
I0828 09:27:04.361905 361583 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0828 09:27:04.396677 361583 logs.go:203] 0 containers: []
W0828 09:27:04.396698 361583 logs.go:205] No container was found matching "kube-proxy"
I0828 09:27:04.396741 361583 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0828 09:27:04.431462 361583 logs.go:203] 0 containers: []
W0828 09:27:04.431485 361583 logs.go:205] No container was found matching "kubernetes-dashboard"
I0828 09:27:04.431527 361583 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0828 09:27:04.464884 361583 logs.go:203] 0 containers: []
W0828 09:27:04.464906 361583 logs.go:205] No container was found matching "storage-provisioner"
I0828 09:27:04.464955 361583 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0828 09:27:04.498952 361583 logs.go:203] 1 containers: [22cff6315e7a]
I0828 09:27:04.498985 361583 logs.go:117] Gathering logs for dmesg ...
I0828 09:27:04.499006 361583 ssh_runner.go:148] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0828 09:27:04.525111 361583 logs.go:117] Gathering logs for kube-scheduler [dd2afdf19621] ...
I0828 09:27:04.525133 361583 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 dd2afdf19621"
I0828 09:27:04.565931 361583 logs.go:117] Gathering logs for kube-controller-manager [22cff6315e7a] ...
I0828 09:27:04.565951 361583 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 22cff6315e7a"
I0828 09:27:04.611894 361583 logs.go:117] Gathering logs for Docker ...
I0828 09:27:04.611920 361583 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0828 09:27:04.643068 361583 logs.go:117] Gathering logs for container status ...
I0828 09:27:04.643092 361583 ssh_runner.go:148] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0828 09:27:04.673316 361583 logs.go:117] Gathering logs for kubelet ...
I0828 09:27:04.673354 361583 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0828 09:27:04.752351 361583 logs.go:117] Gathering logs for describe nodes ...
I0828 09:27:04.752375 361583 ssh_runner.go:148] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0828 09:27:05.143668 361583 logs.go:117] Gathering logs for kube-apiserver [de78efeeea43] ...
I0828 09:27:05.143696 361583 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 de78efeeea43"
I0828 09:27:05.215694 361583 logs.go:117] Gathering logs for etcd [564c5ed4e16b] ...
I0828 09:27:05.215727 361583 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 564c5ed4e16b"
W0828 09:27:05.266148 361583 out.go:258] Error starting cluster: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.4.0-42-generic
DOCKER_VERSION: 19.03.8
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
    - 'docker ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'docker logs CONTAINERID'

stderr:
W0828 13:22:41.402830 3301 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING Swap]: running with swap on is not supported. Please disable swap
    [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.4.0-42-generic\n", err: exit status 1
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0828 13:22:42.676631 3301 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0828 13:22:42.677992 3301 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

W0828 09:27:05.266255 361583 out.go:151] 
W0828 09:27:05.266309 361583 out.go:151] 💣 Error starting cluster: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.4.0-42-generic
DOCKER_VERSION: 19.03.8
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
    - 'docker ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'docker logs CONTAINERID'

stderr:
W0828 13:22:41.402830 3301 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING Swap]: running with swap on is not supported. Please disable swap
    [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.4.0-42-generic\n", err: exit status 1
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0828 13:22:42.676631 3301 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0828 13:22:42.677992 3301 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

💣 Error starting cluster: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.4.0-42-generic
DOCKER_VERSION: 19.03.8
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
    - 'docker ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'docker logs CONTAINERID'

stderr:
W0828 13:22:41.402830 3301 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING Swap]: running with swap on is not supported. Please disable swap
    [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.4.0-42-generic\n", err: exit status 1
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0828 13:22:42.676631 3301 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0828 13:22:42.677992 3301 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

W0828 09:27:05.266641 361583 out.go:151] 
W0828 09:27:05.266657 361583 out.go:151] 😿 minikube is exiting due to an error. If the above message is not useful, open an issue:
😿 minikube is exiting due to an error.
If the above message is not useful, open an issue:
W0828 09:27:05.266673 361583 out.go:151] 👉 https://github.com/kubernetes/minikube/issues/new/choose
👉 https://github.com/kubernetes/minikube/issues/new/choose
I0828 09:27:05.266724 361583 exit.go:58] WithError(failed to start node)=startup failed: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.4.0-42-generic
DOCKER_VERSION: 19.03.8
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
    - 'docker ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'docker logs CONTAINERID'

stderr:
W0828 13:22:41.402830 3301 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING Swap]: running with swap on is not supported. Please disable swap
    [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.4.0-42-generic\n", err: exit status 1
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0828 13:22:42.676631 3301 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0828 13:22:42.677992 3301 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

called from:
goroutine 1 [running]:
runtime/debug.Stack(0x0, 0x0, 0x100000000000000)
    /usr/local/go/src/runtime/debug/stack.go:24 +0x9d
k8s.io/minikube/pkg/minikube/exit.WithError(0x1bb103e, 0x14, 0x1ed11c0, 0xc00030aba0)
    /app/pkg/minikube/exit/exit.go:58 +0x34
k8s.io/minikube/cmd/minikube/cmd.runStart(0x2cd78c0, 0xc000203900, 0x0, 0x2)
    /app/cmd/minikube/cmd/start.go:221 +0x6f9
github.com/spf13/cobra.(*Command).execute(0x2cd78c0, 0xc0002038e0, 0x2, 0x2, 0x2cd78c0, 0xc0002038e0)
    /go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:846 +0x29d
github.com/spf13/cobra.(*Command).ExecuteC(0x2cd6900, 0x0, 0x1, 0xc000659cc0)
    /go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:950 +0x349
github.com/spf13/cobra.(*Command).Execute(...)
    /go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:887
k8s.io/minikube/cmd/minikube/cmd.Execute()
    /app/cmd/minikube/cmd/root.go:106 +0x72c
main.main()
    /app/cmd/minikube/main.go:71 +0x11f
W0828 09:27:05.268625 361583 out.go:151] 
E0828 09:27:05.268649 361583 exit.go:76] &{ID:NONE_KUBELET Err:/bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.4.0-42-generic
DOCKER_VERSION: 19.03.8
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
    - 'docker ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'docker logs CONTAINERID'

stderr:
W0828 13:22:41.402830 3301 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING Swap]: running with swap on is not supported. Please disable swap
    [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.4.0-42-generic\n", err: exit status 1
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0828 13:22:42.676631 3301 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0828 13:22:42.677992 3301 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

run
k8s.io/minikube/pkg/minikube/bootstrapper/kubeadm.(*Bootstrapper).init
    /app/pkg/minikube/bootstrapper/kubeadm/kubeadm.go:240
k8s.io/minikube/pkg/minikube/bootstrapper/kubeadm.(*Bootstrapper).StartCluster
    /app/pkg/minikube/bootstrapper/kubeadm/kubeadm.go:367
k8s.io/minikube/pkg/minikube/node.Start
    /app/pkg/minikube/node/start.go:113
k8s.io/minikube/cmd/minikube/cmd.startWithDriver
    /app/cmd/minikube/cmd/start.go:298
k8s.io/minikube/cmd/minikube/cmd.runStart
    /app/cmd/minikube/cmd/start.go:218
github.com/spf13/cobra.(*Command).execute
    /go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:846
github.com/spf13/cobra.(*Command).ExecuteC
    /go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:950
github.com/spf13/cobra.(*Command).Execute
    /go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:887
k8s.io/minikube/cmd/minikube/cmd.Execute
    /app/cmd/minikube/cmd/root.go:106
main.main
    /app/cmd/minikube/main.go:71
runtime.main
    /usr/local/go/src/runtime/proc.go:203
runtime.goexit
    /usr/local/go/src/runtime/asm_amd64.s:1373
startup failed
k8s.io/minikube/cmd/minikube/cmd.maybeDeleteAndRetry
    /app/cmd/minikube/cmd/start.go:474
k8s.io/minikube/cmd/minikube/cmd.startWithDriver
    /app/cmd/minikube/cmd/start.go:300
k8s.io/minikube/cmd/minikube/cmd.runStart
    /app/cmd/minikube/cmd/start.go:218
github.com/spf13/cobra.(*Command).execute
    /go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:846
github.com/spf13/cobra.(*Command).ExecuteC
    /go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:950
github.com/spf13/cobra.(*Command).Execute
    /go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:887
k8s.io/minikube/cmd/minikube/cmd.Execute
    /app/cmd/minikube/cmd/root.go:106
main.main
    /app/cmd/minikube/main.go:71
runtime.main
    /usr/local/go/src/runtime/proc.go:203
runtime.goexit
    /usr/local/go/src/runtime/asm_amd64.s:1373
Advice:Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
URL:
Issues:[4172]
ShowIssueLink:false}
W0828 09:27:05.269202 361583 out.go:151] ❌ [NONE_KUBELET] failed to start node
startup failed: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.4.0-42-generic
DOCKER_VERSION: 19.03.8
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
    - 'docker ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'docker logs CONTAINERID'

stderr:
W0828 13:22:41.402830 3301 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING Swap]: running with swap on is not supported. Please disable swap
    [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.4.0-42-generic\n", err: exit status 1
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0828 13:22:42.676631 3301 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0828 13:22:42.677992 3301 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

❌ [NONE_KUBELET] failed to start node
startup failed: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.4.0-42-generic
DOCKER_VERSION: 19.03.8
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
    - 'docker ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'docker logs CONTAINERID'

stderr:
W0828 13:22:41.402830 3301 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING Swap]: running with swap on is not supported. Please disable swap
    [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.4.0-42-generic\n", err: exit status 1
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0828 13:22:42.676631 3301 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0828 13:22:42.677992 3301 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

W0828 09:27:05.269556 361583 out.go:151] 💡 Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
💡 Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0828 09:27:05.269602 361583 out.go:151] ⁉️ Related issue: https://github.com/kubernetes/minikube/issues/4172
⁉️ Related issue: https://github.com/kubernetes/minikube/issues/4172
```
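The kubeadm stderr above flags two host-side conditions, and minikube's own suggestion line repeats one of them: Docker is on the `cgroupfs` cgroup driver while `systemd` is recommended, and swap is still enabled. A minimal pre-flight sketch for the host, assuming a stock Docker install with its config at /etc/docker/daemon.json; these steps are not confirmed as the root cause of this report, and with the docker driver the daemon inside the minikube container is also in play, so they may not be sufficient on their own:

```
# Which cgroup driver is the host Docker daemon using? ("cgroupfs" matches the warning above)
docker info --format '{{.CgroupDriver}}'

# Switch Docker to the systemd cgroup driver that kubeadm recommends.
# Caution: tee overwrites an existing daemon.json; merge the key by hand if you already have one.
echo '{ "exec-opts": ["native.cgroupdriver=systemd"] }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker

# Disable swap for the current boot to silence the [WARNING Swap] preflight check
sudo swapoff -a

# Retry from a clean slate with the kubelet flag minikube itself suggests
minikube delete
minikube start --extra-config=kubelet.cgroup-driver=systemd
```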

Optional: Full output of `minikube logs` command:

```
mramos@mramos-l-ubuntu:~$ minikube logs
==> Docker <==
-- Logs begin at Fri 2020-08-28 13:17:22 UTC, end at Fri 2020-08-28 13:31:50 UTC. --
Aug 28 13:17:23 minikube dockerd[153]: time="2020-08-28T13:17:23.049953980Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Aug 28 13:17:23 minikube dockerd[153]: time="2020-08-28T13:17:23.049984640Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Aug 28 13:17:23 minikube dockerd[153]: time="2020-08-28T13:17:23.050008621Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
Aug 28 13:17:23 minikube dockerd[153]: time="2020-08-28T13:17:23.050020036Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Aug 28 13:17:23 minikube dockerd[153]: time="2020-08-28T13:17:23.063343683Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Aug 28 13:17:23 minikube dockerd[153]: time="2020-08-28T13:17:23.063371433Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Aug 28 13:17:23 minikube dockerd[153]: time="2020-08-28T13:17:23.063386859Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
Aug 28 13:17:23 minikube dockerd[153]: time="2020-08-28T13:17:23.063400951Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Aug 28 13:17:25 minikube dockerd[153]: time="2020-08-28T13:17:25.023652917Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Aug 28 13:17:25 minikube dockerd[153]: time="2020-08-28T13:17:25.070703976Z" level=warning msg="Your kernel does not support swap memory limit"
Aug 28 13:17:25 minikube dockerd[153]: time="2020-08-28T13:17:25.070732406Z" level=warning msg="Your kernel does not support cgroup rt period"
Aug 28 13:17:25 minikube dockerd[153]: time="2020-08-28T13:17:25.070739582Z" level=warning msg="Your kernel does not support cgroup rt runtime"
Aug 28 13:17:25 minikube dockerd[153]: time="2020-08-28T13:17:25.070745370Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Aug 28 13:17:25 minikube dockerd[153]: time="2020-08-28T13:17:25.070751085Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Aug 28 13:17:25 minikube dockerd[153]: time="2020-08-28T13:17:25.070887413Z" level=info msg="Loading containers: start."
Aug 28 13:17:25 minikube dockerd[153]: time="2020-08-28T13:17:25.184284373Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Aug 28 13:17:25 minikube dockerd[153]: time="2020-08-28T13:17:25.421345263Z" level=info msg="Loading containers: done."
Aug 28 13:17:26 minikube dockerd[153]: time="2020-08-28T13:17:26.243605697Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8
Aug 28 13:17:26 minikube dockerd[153]: time="2020-08-28T13:17:26.243925969Z" level=info msg="Daemon has completed initialization"
Aug 28 13:17:26 minikube dockerd[153]: time="2020-08-28T13:17:26.304024015Z" level=info msg="API listen on /run/docker.sock"
Aug 28 13:17:26 minikube systemd[1]: Started Docker Application Container Engine.
Aug 28 13:17:28 minikube systemd[1]: docker.service: Current command vanished from the unit file, execution of the command list won't be resumed.
Aug 28 13:17:28 minikube systemd[1]: Stopping Docker Application Container Engine...
Aug 28 13:17:28 minikube dockerd[153]: time="2020-08-28T13:17:28.759766770Z" level=info msg="Processing signal 'terminated'"
Aug 28 13:17:28 minikube dockerd[153]: time="2020-08-28T13:17:28.761831267Z" level=info msg="stopping event stream following graceful shutdown" error="" module=libcontainerd namespace=moby
Aug 28 13:17:28 minikube dockerd[153]: time="2020-08-28T13:17:28.764011233Z" level=info msg="Daemon shutdown complete"
Aug 28 13:17:28 minikube systemd[1]: docker.service: Succeeded.
Aug 28 13:17:28 minikube systemd[1]: Stopped Docker Application Container Engine.
Aug 28 13:17:28 minikube systemd[1]: Starting Docker Application Container Engine...
Aug 28 13:17:28 minikube dockerd[381]: time="2020-08-28T13:17:28.836999562Z" level=info msg="Starting up"
Aug 28 13:17:28 minikube dockerd[381]: time="2020-08-28T13:17:28.839653138Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Aug 28 13:17:28 minikube dockerd[381]: time="2020-08-28T13:17:28.839693901Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Aug 28 13:17:28 minikube dockerd[381]: time="2020-08-28T13:17:28.839723195Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
Aug 28 13:17:28 minikube dockerd[381]: time="2020-08-28T13:17:28.839747328Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Aug 28 13:17:28 minikube dockerd[381]: time="2020-08-28T13:17:28.840706821Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Aug 28 13:17:28 minikube dockerd[381]: time="2020-08-28T13:17:28.840726618Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Aug 28 13:17:28 minikube dockerd[381]: time="2020-08-28T13:17:28.840741214Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
Aug 28 13:17:28 minikube dockerd[381]: time="2020-08-28T13:17:28.840756182Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Aug 28 13:17:28 minikube dockerd[381]: time="2020-08-28T13:17:28.852621878Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Aug 28 13:17:28 minikube dockerd[381]: time="2020-08-28T13:17:28.868877465Z" level=warning msg="Your kernel does not support swap memory limit"
Aug 28 13:17:28 minikube dockerd[381]: time="2020-08-28T13:17:28.868962865Z" level=warning msg="Your kernel does not support cgroup rt period"
Aug 28 13:17:28 minikube dockerd[381]: time="2020-08-28T13:17:28.868985679Z" level=warning msg="Your kernel does not support cgroup rt runtime"
Aug 28 13:17:28 minikube dockerd[381]: time="2020-08-28T13:17:28.869003679Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Aug 28 13:17:28 minikube dockerd[381]: time="2020-08-28T13:17:28.869022059Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Aug 28 13:17:28 minikube dockerd[381]: time="2020-08-28T13:17:28.869447582Z" level=info msg="Loading containers: start."
Aug 28 13:17:29 minikube dockerd[381]: time="2020-08-28T13:17:29.174823343Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Aug 28 13:17:29 minikube dockerd[381]: time="2020-08-28T13:17:29.249763997Z" level=info msg="Loading containers: done."
Aug 28 13:17:29 minikube dockerd[381]: time="2020-08-28T13:17:29.286404415Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8
Aug 28 13:17:29 minikube dockerd[381]: time="2020-08-28T13:17:29.286649265Z" level=info msg="Daemon has completed initialization"
Aug 28 13:17:29 minikube dockerd[381]: time="2020-08-28T13:17:29.340619615Z" level=info msg="API listen on /var/run/docker.sock"
Aug 28 13:17:29 minikube dockerd[381]: time="2020-08-28T13:17:29.340627634Z" level=info msg="API listen on [::]:2376"
Aug 28 13:17:29 minikube systemd[1]: Started Docker Application Container Engine.
Aug 28 13:22:39 minikube dockerd[381]: time="2020-08-28T13:22:39.693208402Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 28 13:22:39 minikube dockerd[381]: time="2020-08-28T13:22:39.832320778Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 28 13:22:39 minikube dockerd[381]: time="2020-08-28T13:22:39.996496849Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 28 13:22:40 minikube dockerd[381]: time="2020-08-28T13:22:40.123211086Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 28 13:22:40 minikube dockerd[381]: time="2020-08-28T13:22:40.323406241Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 28 13:22:40 minikube dockerd[381]: time="2020-08-28T13:22:40.556231788Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 28 13:22:40 minikube dockerd[381]: time="2020-08-28T13:22:40.771591482Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 28 13:22:40 minikube dockerd[381]: time="2020-08-28T13:22:40.964918926Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"

==> container status <==
CONTAINER       IMAGE           CREATED         STATE    NAME                      ATTEMPT  POD ID
564c5ed4e16b1   303ce5db0e90d   9 minutes ago   Running  etcd                      0        be4f264152828
dd2afdf196216   76216c34ed0c7   9 minutes ago   Running  kube-scheduler            0        fb68d1f1cc944
22cff6315e7a9   da26705ccb4b5   9 minutes ago   Running  kube-controller-manager   0        b3b85ec414454
de78efeeea438   7e28efa976bd1   9 minutes ago   Running  kube-apiserver            0        94efda2ee85c4

==> describe nodes <==
No resources found in default namespace.

==> dmesg <==
[Aug28 00:22] show_signal: 4724 callbacks suppressed
[Aug28 00:32] atkbd serio0: Unknown key pressed (translated set 2, code 0x85 on isa0060/serio0).
[ +0.000006] atkbd serio0: Use 'setkeycodes e005 <keycode>' to make it known.
[ +5.057920] IRQ 142: no longer affine to CPU3
[ +0.016418] IRQ 122: no longer affine to CPU5
[ +0.008422] IRQ 124: no longer affine to CPU6
[ +0.008048] IRQ 125: no longer affine to CPU7
[ +0.000009] IRQ 126: no longer affine to CPU7
[ +0.000008] IRQ 127: no longer affine to CPU7
[ +0.000010] IRQ 141: no longer affine to CPU7
[ +1.424087] iwlwifi 0000:02:00.0: FW already configured (0) - re-configuring
[ +0.035808] iwlwifi 0000:02:00.0: BIOS contains WGDS but no WRDS
[ +12.258643] usb 1-1-port3: Cannot enable. Maybe the USB cable is bad?
[Aug28 00:59] kauditd_printk_skb: 22 callbacks suppressed
[Aug28 01:00] kauditd_printk_skb: 21 callbacks suppressed
[Aug28 01:01] kauditd_printk_skb: 20 callbacks suppressed
[Aug28 01:10] kauditd_printk_skb: 30 callbacks suppressed
[Aug28 01:11] atkbd serio0: Unknown key pressed (translated set 2, code 0x85 on isa0060/serio0).
[ +0.000005] atkbd serio0: Use 'setkeycodes e005 <keycode>' to make it known.
[Aug28 01:12] atkbd serio0: Unknown key pressed (translated set 2, code 0x85 on isa0060/serio0).
[ +0.000005] atkbd serio0: Use 'setkeycodes e005 <keycode>' to make it known.
[ +10.651870] IRQ 127: no longer affine to CPU4
[ +0.010776] IRQ 122: no longer affine to CPU5
[ +0.000021] IRQ 141: no longer affine to CPU5
[ +0.011216] IRQ 124: no longer affine to CPU6
[ +0.000021] IRQ 142: no longer affine to CPU6
[ +0.012375] IRQ 125: no longer affine to CPU7
[ +0.000012] IRQ 126: no longer affine to CPU7
[ +1.425498] iwlwifi 0000:02:00.0: FW already configured (0) - re-configuring
[ +0.022232] iwlwifi 0000:02:00.0: BIOS contains WGDS but no WRDS
[ +3.814891] iwlwifi 0000:02:00.0: FW already configured (0) - re-configuring
[ +0.016383] iwlwifi 0000:02:00.0: BIOS contains WGDS but no WRDS

==> etcd [564c5ed4e16b] <==
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-08-28 13:22:49.991437 I | etcdmain: etcd Version: 3.4.3
2020-08-28 13:22:49.991475 I | etcdmain: Git SHA: 3cf2f69b5
2020-08-28 13:22:49.991479 I | etcdmain: Go Version: go1.12.12
2020-08-28 13:22:49.991483 I | etcdmain: Go OS/Arch: linux/amd64
2020-08-28 13:22:49.991487 I | etcdmain: setting maximum number of CPUs to 8, total number of available CPUs is 8
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-08-28 13:22:49.991559 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-08-28 13:22:49.992143 I | embed: name = minikube
2020-08-28 13:22:49.992155 I | embed: data dir = /var/lib/minikube/etcd
2020-08-28 13:22:49.992160 I | embed: member dir = /var/lib/minikube/etcd/member
2020-08-28 13:22:49.992164 I | embed: heartbeat = 100ms
2020-08-28 13:22:49.992167 I | embed: election = 1000ms
2020-08-28 13:22:49.992171 I | embed: snapshot count = 10000
2020-08-28 13:22:49.992179 I | embed: advertise client URLs = https://172.17.0.3:2379
2020-08-28 13:22:50.016000 I | etcdserver: starting member b273bc7741bcb020 in cluster 86482fea2286a1d2
raft2020/08/28 13:22:50 INFO: b273bc7741bcb020 switched to configuration voters=()
raft2020/08/28 13:22:50 INFO: b273bc7741bcb020 became follower at term 0
raft2020/08/28 13:22:50 INFO: newRaft b273bc7741bcb020 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
raft2020/08/28 13:22:50 INFO: b273bc7741bcb020 became follower at term 1
raft2020/08/28 13:22:50 INFO: b273bc7741bcb020 switched to configuration voters=(12858828581462913056)
2020-08-28 13:22:50.086732 W | auth: simple token is not cryptographically signed
2020-08-28 13:22:50.097201 I | etcdserver: starting server...
[version: 3.4.3, cluster version: to_be_decided] raft2020/08/28 13:22:50 INFO: b273bc7741bcb020 switched to configuration voters=(12858828581462913056) 2020-08-28 13:22:50.099098 I | etcdserver/membership: added member b273bc7741bcb020 [https://172.17.0.3:2380] to cluster 86482fea2286a1d2 2020-08-28 13:22:50.100452 I | etcdserver: b273bc7741bcb020 as single-node; fast-forwarding 9 ticks (election ticks 10) 2020-08-28 13:22:50.106769 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 2020-08-28 13:22:50.106992 I | embed: listening for peers on 172.17.0.3:2380 2020-08-28 13:22:50.109181 I | embed: listening for metrics on http://127.0.0.1:2381 raft2020/08/28 13:22:50 INFO: b273bc7741bcb020 is starting a new election at term 1 raft2020/08/28 13:22:50 INFO: b273bc7741bcb020 became candidate at term 2 raft2020/08/28 13:22:50 INFO: b273bc7741bcb020 received MsgVoteResp from b273bc7741bcb020 at term 2 raft2020/08/28 13:22:50 INFO: b273bc7741bcb020 became leader at term 2 raft2020/08/28 13:22:50 INFO: raft.node: b273bc7741bcb020 elected leader b273bc7741bcb020 at term 2 2020-08-28 13:22:50.517316 I | etcdserver: published {Name:minikube ClientURLs:[https://172.17.0.3:2379]} to cluster 86482fea2286a1d2 2020-08-28 13:22:50.517553 I | embed: ready to serve client requests 2020-08-28 13:22:50.517721 I | etcdserver: setting up the initial cluster version to 3.4 2020-08-28 13:22:50.517952 I | embed: ready to serve client requests 2020-08-28 13:22:50.520877 N | etcdserver/membership: set the initial cluster version to 3.4 2020-08-28 13:22:50.521044 I | etcdserver/api: enabled capabilities for version 3.4 2020-08-28 13:22:50.522218 I | embed: serving client requests on 127.0.0.1:2379 2020-08-28 13:22:50.523018 I | embed: serving client requests on 172.17.0.3:2379 ==> kernel <== 13:31:51 up 14:15, 0 users, load average: 1.75, 1.59, 2.03 Linux minikube 5.4.0-42-generic #46-Ubuntu SMP Fri Jul 10 00:24:02 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 20.04 LTS" ==> kube-apiserver [de78efeeea43] <== I0828 13:22:55.711791 1 client.go:361] parsed scheme: "endpoint" I0828 13:22:55.711827 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] I0828 13:22:55.720383 1 client.go:361] parsed scheme: "endpoint" I0828 13:22:55.720419 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] W0828 13:22:55.831978 1 genericapiserver.go:409] Skipping API batch/v2alpha1 because it has no resources. W0828 13:22:55.840452 1 genericapiserver.go:409] Skipping API discovery.k8s.io/v1alpha1 because it has no resources. W0828 13:22:55.851037 1 genericapiserver.go:409] Skipping API node.k8s.io/v1alpha1 because it has no resources. W0828 13:22:55.866202 1 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources. W0828 13:22:55.869030 1 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources. W0828 13:22:55.887288 1 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources. W0828 13:22:55.899990 1 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources. W0828 13:22:55.900006 1 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources. 
I0828 13:22:55.906279 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. I0828 13:22:55.906294 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. I0828 13:22:55.907504 1 client.go:361] parsed scheme: "endpoint" I0828 13:22:55.907522 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] I0828 13:22:55.912993 1 client.go:361] parsed scheme: "endpoint" I0828 13:22:55.913013 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] I0828 13:22:57.392294 1 secure_serving.go:178] Serving securely on [::]:8443 I0828 13:22:57.392403 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt I0828 13:22:57.392429 1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key I0828 13:22:57.392452 1 tlsconfig.go:240] Starting DynamicServingCertificateController I0828 13:22:57.393370 1 autoregister_controller.go:141] Starting autoregister controller I0828 13:22:57.393383 1 cache.go:32] Waiting for caches to sync for autoregister controller I0828 13:22:57.393462 1 crd_finalizer.go:266] Starting CRDFinalizer I0828 13:22:57.393482 1 controller.go:86] Starting OpenAPI controller I0828 13:22:57.393496 1 customresource_discovery_controller.go:209] Starting DiscoveryController I0828 13:22:57.393510 1 naming_controller.go:291] Starting NamingConditionController I0828 13:22:57.393522 1 establishing_controller.go:76] Starting EstablishingController I0828 13:22:57.393534 1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController I0828 13:22:57.393546 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController I0828 13:22:57.393635 1 available_controller.go:387] Starting AvailableConditionController I0828 13:22:57.393640 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller I0828 13:22:57.393656 1 controller.go:81] Starting OpenAPI AggregationController I0828 13:22:57.393773 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt I0828 13:22:57.394400 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller I0828 13:22:57.394408 1 shared_informer.go:223] Waiting for caches to sync for cluster_authentication_trust_controller I0828 13:22:57.394922 1 apiservice_controller.go:94] Starting APIServiceRegistrationController I0828 13:22:57.394931 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller I0828 13:22:57.394947 1 crdregistration_controller.go:111] Starting crd-autoregister controller I0828 13:22:57.394952 1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister I0828 13:22:57.395715 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt I0828 13:22:57.395745 1 dynamic_cafile_content.go:167] Starting 
request-header::/var/lib/minikube/certs/front-proxy-ca.crt E0828 13:22:57.402094 1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.17.0.3, ResourceVersion: 0, AdditionalErrorMsg: I0828 13:22:57.493741 1 cache.go:39] Caches are synced for AvailableConditionController controller I0828 13:22:57.494032 1 cache.go:39] Caches are synced for autoregister controller I0828 13:22:57.494514 1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller I0828 13:22:57.495044 1 shared_informer.go:230] Caches are synced for crd-autoregister I0828 13:22:57.495079 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller I0828 13:22:58.391864 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue). I0828 13:22:58.391945 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). I0828 13:22:58.403506 1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000 I0828 13:22:58.412622 1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000 I0828 13:22:58.412731 1 storage_scheduling.go:143] all system priority classes are created successfully or already exist. I0828 13:22:58.853528 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io I0828 13:22:58.944432 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io W0828 13:22:59.169419 1 lease.go:224] Resetting endpoints for master service "kubernetes" to [172.17.0.3] I0828 13:22:59.172265 1 controller.go:606] quota admission added evaluator for: endpoints I0828 13:22:59.190459 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io I0828 13:22:59.678182 1 controller.go:606] quota admission added evaluator for: serviceaccounts ==> kube-controller-manager [22cff6315e7a] <== I0828 13:23:04.178615 1 stateful_set.go:146] Starting stateful set controller I0828 13:23:04.178678 1 shared_informer.go:223] Waiting for caches to sync for stateful set I0828 13:23:04.874582 1 controllermanager.go:533] Started "horizontalpodautoscaling" I0828 13:23:04.874653 1 horizontal.go:169] Starting HPA controller I0828 13:23:04.874662 1 shared_informer.go:223] Waiting for caches to sync for HPA E0828 13:23:05.127785 1 core.go:89] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail W0828 13:23:05.127859 1 controllermanager.go:525] Skipping "service" I0828 13:23:05.127922 1 core.go:239] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true. 
W0828 13:23:05.127950 1 controllermanager.go:525] Skipping "route" W0828 13:23:05.127975 1 controllermanager.go:525] Skipping "ttl-after-finished" W0828 13:23:05.127995 1 controllermanager.go:525] Skipping "root-ca-cert-publisher" I0828 13:23:05.378581 1 controllermanager.go:533] Started "serviceaccount" I0828 13:23:05.378747 1 serviceaccounts_controller.go:117] Starting service account controller I0828 13:23:05.378783 1 shared_informer.go:223] Waiting for caches to sync for service account I0828 13:23:05.628189 1 controllermanager.go:533] Started "deployment" I0828 13:23:05.628272 1 deployment_controller.go:153] Starting deployment controller I0828 13:23:05.628306 1 shared_informer.go:223] Waiting for caches to sync for deployment I0828 13:23:05.874176 1 controllermanager.go:533] Started "ttl" W0828 13:23:05.874191 1 controllermanager.go:525] Skipping "nodeipam" I0828 13:23:05.874241 1 ttl_controller.go:118] Starting TTL controller I0828 13:23:05.874247 1 shared_informer.go:223] Waiting for caches to sync for TTL I0828 13:23:06.128446 1 controllermanager.go:533] Started "persistentvolume-binder" I0828 13:23:06.128576 1 pv_controller_base.go:295] Starting persistent volume controller I0828 13:23:06.128606 1 shared_informer.go:223] Waiting for caches to sync for persistent volume I0828 13:23:06.376050 1 controllermanager.go:533] Started "pvc-protection" I0828 13:23:06.377267 1 pvc_protection_controller.go:101] Starting PVC protection controller I0828 13:23:06.377305 1 shared_informer.go:223] Waiting for caches to sync for PVC protection I0828 13:23:06.383607 1 shared_informer.go:223] Waiting for caches to sync for resource quota I0828 13:23:06.384606 1 shared_informer.go:223] Waiting for caches to sync for garbage collector I0828 13:23:06.474614 1 shared_informer.go:230] Caches are synced for TTL I0828 13:23:06.478765 1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator I0828 13:23:06.479202 1 shared_informer.go:230] Caches are synced for service account I0828 13:23:06.485897 1 shared_informer.go:230] Caches are synced for namespace I0828 13:23:06.726796 1 shared_informer.go:230] Caches are synced for PV protection I0828 13:23:06.728915 1 shared_informer.go:230] Caches are synced for expand I0828 13:23:06.805891 1 shared_informer.go:230] Caches are synced for bootstrap_signer I0828 13:23:06.928830 1 shared_informer.go:230] Caches are synced for certificate-csrsigning I0828 13:23:06.935228 1 shared_informer.go:230] Caches are synced for certificate-csrapproving I0828 13:23:06.983939 1 shared_informer.go:230] Caches are synced for resource quota I0828 13:23:06.985013 1 shared_informer.go:230] Caches are synced for garbage collector I0828 13:23:06.987903 1 shared_informer.go:230] Caches are synced for job I0828 13:23:06.994892 1 shared_informer.go:230] Caches are synced for garbage collector I0828 13:23:06.994960 1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage I0828 13:23:06.999541 1 shared_informer.go:230] Caches are synced for resource quota I0828 13:23:07.017049 1 shared_informer.go:230] Caches are synced for endpoint I0828 13:23:07.027311 1 shared_informer.go:230] Caches are synced for disruption I0828 13:23:07.027368 1 disruption.go:339] Sending events to api server. 
I0828 13:23:07.028599 1 shared_informer.go:230] Caches are synced for deployment I0828 13:23:07.028616 1 shared_informer.go:230] Caches are synced for ReplicaSet I0828 13:23:07.028987 1 shared_informer.go:230] Caches are synced for GC I0828 13:23:07.029006 1 shared_informer.go:230] Caches are synced for persistent volume I0828 13:23:07.035796 1 shared_informer.go:230] Caches are synced for daemon sets I0828 13:23:07.036043 1 shared_informer.go:230] Caches are synced for taint I0828 13:23:07.036300 1 taint_manager.go:187] Starting NoExecuteTaintManager I0828 13:23:07.054758 1 shared_informer.go:230] Caches are synced for attach detach I0828 13:23:07.060147 1 shared_informer.go:230] Caches are synced for endpoint_slice I0828 13:23:07.075171 1 shared_informer.go:230] Caches are synced for HPA I0828 13:23:07.077633 1 shared_informer.go:230] Caches are synced for PVC protection I0828 13:23:07.079188 1 shared_informer.go:230] Caches are synced for stateful set I0828 13:23:07.079201 1 shared_informer.go:230] Caches are synced for ReplicationController ==> kube-scheduler [dd2afdf19621] <== I0828 13:22:50.190197 1 registry.go:150] Registering EvenPodsSpread predicate and priority function I0828 13:22:50.190357 1 registry.go:150] Registering EvenPodsSpread predicate and priority function I0828 13:22:51.915556 1 serving.go:313] Generated self-signed cert in-memory W0828 13:22:57.411073 1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' W0828 13:22:57.411098 1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system" W0828 13:22:57.411108 1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous. 
W0828 13:22:57.411115 1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false I0828 13:22:57.420128 1 registry.go:150] Registering EvenPodsSpread predicate and priority function I0828 13:22:57.420151 1 registry.go:150] Registering EvenPodsSpread predicate and priority function W0828 13:22:57.421688 1 authorization.go:47] Authorization is disabled W0828 13:22:57.421708 1 authentication.go:40] Authentication is disabled I0828 13:22:57.421717 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251 I0828 13:22:57.423382 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259 I0828 13:22:57.423580 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0828 13:22:57.423594 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0828 13:22:57.423612 1 tlsconfig.go:240] Starting DynamicServingCertificateController E0828 13:22:57.427258 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0828 13:22:57.427269 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0828 13:22:57.427352 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0828 13:22:57.427392 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0828 13:22:57.427420 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0828 13:22:57.427478 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0828 13:22:57.427498 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0828 13:22:57.427564 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0828 13:22:57.427716 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0828 13:22:58.363901 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: 
Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0828 13:22:58.519390 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0828 13:22:58.700735 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0828 13:22:58.701464 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope I0828 13:23:00.723941 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file ==> kubelet <== -- Logs begin at Fri 2020-08-28 13:17:22 UTC, end at Fri 2020-08-28 13:31:52 UTC. -- Aug 28 13:31:46 minikube kubelet[3528]: E0828 13:31:46.542005 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:46 minikube kubelet[3528]: E0828 13:31:46.642512 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:46 minikube kubelet[3528]: E0828 13:31:46.742806 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:46 minikube kubelet[3528]: E0828 13:31:46.843822 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:46 minikube kubelet[3528]: E0828 13:31:46.944213 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:47 minikube kubelet[3528]: E0828 13:31:47.044745 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:47 minikube kubelet[3528]: E0828 13:31:47.145071 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:47 minikube kubelet[3528]: E0828 13:31:47.245394 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:47 minikube kubelet[3528]: E0828 13:31:47.345763 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:47 minikube kubelet[3528]: E0828 13:31:47.446700 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:47 minikube kubelet[3528]: E0828 13:31:47.546958 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:47 minikube kubelet[3528]: E0828 13:31:47.647198 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:47 minikube kubelet[3528]: E0828 13:31:47.747539 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:47 minikube kubelet[3528]: E0828 13:31:47.847879 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:47 minikube kubelet[3528]: E0828 13:31:47.948245 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:48 minikube kubelet[3528]: E0828 13:31:48.048635 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:48 minikube kubelet[3528]: E0828 13:31:48.149021 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:48 minikube kubelet[3528]: E0828 13:31:48.249361 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:48 minikube kubelet[3528]: E0828 13:31:48.349747 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:48 minikube kubelet[3528]: E0828 13:31:48.450140 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:48 minikube kubelet[3528]: E0828 13:31:48.550521 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:48 minikube 
kubelet[3528]: E0828 13:31:48.650858 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:48 minikube kubelet[3528]: E0828 13:31:48.751289 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:48 minikube kubelet[3528]: E0828 13:31:48.805932 3528 eviction_manager.go:255] eviction manager: failed to get summary stats: failed to get node info: node "minikube" not found Aug 28 13:31:48 minikube kubelet[3528]: E0828 13:31:48.851620 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:48 minikube kubelet[3528]: E0828 13:31:48.952105 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:49 minikube kubelet[3528]: E0828 13:31:49.052518 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:49 minikube kubelet[3528]: E0828 13:31:49.152786 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:49 minikube kubelet[3528]: E0828 13:31:49.253053 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:49 minikube kubelet[3528]: E0828 13:31:49.353426 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:49 minikube kubelet[3528]: E0828 13:31:49.453846 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:49 minikube kubelet[3528]: E0828 13:31:49.554256 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:49 minikube kubelet[3528]: E0828 13:31:49.654519 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:49 minikube kubelet[3528]: E0828 13:31:49.754900 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:49 minikube kubelet[3528]: E0828 13:31:49.855149 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:49 minikube kubelet[3528]: E0828 13:31:49.955442 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:50 minikube kubelet[3528]: E0828 13:31:50.055697 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:50 minikube kubelet[3528]: E0828 13:31:50.155817 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:50 minikube kubelet[3528]: E0828 13:31:50.256152 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:50 minikube kubelet[3528]: E0828 13:31:50.306812 3528 controller.go:136] failed to ensure node lease exists, will retry in 7s, error: Get https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube?timeout=10s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Aug 28 13:31:50 minikube kubelet[3528]: E0828 13:31:50.356328 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:50 minikube kubelet[3528]: E0828 13:31:50.456462 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:50 minikube kubelet[3528]: E0828 13:31:50.556802 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:50 minikube kubelet[3528]: E0828 13:31:50.657135 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:50 minikube kubelet[3528]: E0828 13:31:50.757484 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:50 minikube kubelet[3528]: E0828 13:31:50.857689 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:50 minikube kubelet[3528]: E0828 13:31:50.957913 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:51 minikube kubelet[3528]: E0828 13:31:51.058050 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:51 minikube kubelet[3528]: E0828 13:31:51.158338 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:51 minikube kubelet[3528]: E0828 13:31:51.258674 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:51 minikube 
kubelet[3528]: E0828 13:31:51.359043 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:51 minikube kubelet[3528]: E0828 13:31:51.459372 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:51 minikube kubelet[3528]: E0828 13:31:51.559644 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:51 minikube kubelet[3528]: E0828 13:31:51.659979 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:51 minikube kubelet[3528]: E0828 13:31:51.760302 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:51 minikube kubelet[3528]: E0828 13:31:51.860466 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:51 minikube kubelet[3528]: E0828 13:31:51.960750 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:52 minikube kubelet[3528]: E0828 13:31:52.061045 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:52 minikube kubelet[3528]: E0828 13:31:52.161431 3528 kubelet.go:2267] node "minikube" not found Aug 28 13:31:52 minikube kubelet[3528]: E0828 13:31:52.261719 3528 kubelet.go:2267] node "minikube" not found ```

Environment

```
**OS Version**
mramos@mramos-l-ubuntu:~$ cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.1 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.1 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal

**Minikube Version**
mramos@mramos-l-ubuntu:~$ minikube version
minikube version: v1.12.3
commit: 2243b4b97c131e3244c5f014faedca0d846599f5-dirty

**Docker Version**
mramos@mramos-l-ubuntu:~$ docker version
Client: Docker Engine - Community
 Version:           19.03.12
 API version:       1.40
 Go version:        go1.13.10
 Git commit:        48a66213fe
 Built:             Mon Jun 22 15:45:44 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.12
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.10
  Git commit:       48a66213fe
  Built:            Mon Jun 22 15:44:15 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

**Kubectl Version**
mramos@mramos-l-ubuntu:~$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"e19964183377d0ec2052d1f1fa930c4d7575bd50", GitTreeState:"clean", BuildDate:"2020-08-26T14:30:33Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
```
mramos-dev commented 4 years ago

Output from `docker logs CONTAINERID`:

```
mramos@mramos-l-ubuntu:~$ docker logs fcfe334cbfad + select_iptables + local mode=nft ++ grep '^-' ++ wc -l ++ true + num_legacy_lines=0 + '[' 0 -ge 10 ']' ++ grep '^-' ++ wc -l ++ true + num_nft_lines=0 + '[' 0 -ge 0 ']' + mode=legacy + echo 'INFO: setting iptables to detected mode: legacy' INFO: setting iptables to detected mode: legacy + update-alternatives --set iptables /usr/sbin/iptables-legacy + echo 'retryable update-alternatives: --set iptables /usr/sbin/iptables-legacy' + local 'args=--set iptables /usr/sbin/iptables-legacy' ++ seq 0 15 + for i in $(seq 0 15) + /usr/bin/update-alternatives --set iptables /usr/sbin/iptables-legacy + return + update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy + echo 'retryable update-alternatives: --set ip6tables /usr/sbin/ip6tables-legacy' + local 'args=--set ip6tables /usr/sbin/ip6tables-legacy' ++ seq 0 15 + for i in $(seq 0 15) + /usr/bin/update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy + return + fix_kmsg + [[ ! -e /dev/kmsg ]] + fix_mount + echo 'INFO: ensuring we can execute mount/umount even with userns-remap' INFO: ensuring we can execute mount/umount even with userns-remap ++ which mount ++ which umount + chown root:root /usr/bin/mount /usr/bin/umount ++ which mount ++ which umount + chmod -s /usr/bin/mount /usr/bin/umount ++ stat -f -c %T /bin/mount + [[ overlayfs == \a\u\f\s ]] + echo 'INFO: remounting /sys read-only' INFO: remounting /sys read-only + mount -o remount,ro /sys + echo 'INFO: making mounts shared' INFO: making mounts shared + mount --make-rshared / + fix_cgroup + echo 'INFO: fix cgroup mounts for all subsystems' INFO: fix cgroup mounts for all subsystems + local docker_cgroup_mounts ++ grep /sys/fs/cgroup /proc/self/mountinfo ++ grep docker + docker_cgroup_mounts='1243 1242 0:29 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:591 master:11 - cgroup cgroup rw,xattr,name=systemd 1245 1242 0:33 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime shared:592 master:16 - cgroup cgroup rw,cpu,cpuacct 1246 1242 0:34 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:593 master:17 - cgroup cgroup rw,pids 1247 1242 0:35 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:594 master:18 - cgroup cgroup rw,cpuset 1248 1242 0:36 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:595 master:19 - cgroup cgroup rw,memory 1249 1242 0:37 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:596 master:20 - cgroup cgroup rw,freezer 1250 1242 0:38 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:597 master:21 - cgroup cgroup rw,perf_event 1251 1242 0:39 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:598 master:22 - cgroup cgroup rw,devices 1252 1242 0:40 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:599 master:23 - cgroup cgroup rw,blkio 1254 1242 0:42 
/docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime shared:601 master:25 - cgroup cgroup rw,net_cls,net_prio 1255 1242 0:43 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:602 master:26 - cgroup cgroup rw,hugetlb' + [[ -n 1243 1242 0:29 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:591 master:11 - cgroup cgroup rw,xattr,name=systemd 1245 1242 0:33 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime shared:592 master:16 - cgroup cgroup rw,cpu,cpuacct 1246 1242 0:34 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:593 master:17 - cgroup cgroup rw,pids 1247 1242 0:35 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:594 master:18 - cgroup cgroup rw,cpuset 1248 1242 0:36 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:595 master:19 - cgroup cgroup rw,memory 1249 1242 0:37 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:596 master:20 - cgroup cgroup rw,freezer 1250 1242 0:38 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:597 master:21 - cgroup cgroup rw,perf_event 1251 1242 0:39 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:598 master:22 - cgroup cgroup rw,devices 1252 1242 0:40 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:599 master:23 - cgroup cgroup rw,blkio 1254 1242 0:42 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime shared:601 master:25 - cgroup cgroup rw,net_cls,net_prio 1255 1242 0:43 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:602 master:26 - cgroup cgroup rw,hugetlb ]] + local docker_cgroup cgroup_subsystems subsystem ++ echo '1243 1242 0:29 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:591 master:11 - cgroup cgroup rw,xattr,name=systemd 1245 1242 0:33 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime shared:592 master:16 - cgroup cgroup rw,cpu,cpuacct 1246 1242 0:34 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:593 master:17 - cgroup cgroup rw,pids 1247 1242 0:35 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:594 master:18 - cgroup cgroup rw,cpuset 1248 1242 0:36 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:595 master:19 - cgroup cgroup rw,memory 1249 
1242 0:37 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:596 master:20 - cgroup cgroup rw,freezer 1250 1242 0:38 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:597 master:21 - cgroup cgroup rw,perf_event 1251 1242 0:39 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:598 master:22 - cgroup cgroup rw,devices 1252 1242 0:40 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:599 master:23 - cgroup cgroup rw,blkio 1254 1242 0:42 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime shared:601 master:25 - cgroup cgroup rw,net_cls,net_prio 1255 1242 0:43 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:602 master:26 - cgroup cgroup rw,hugetlb' ++ head -n 1 ++ cut '-d ' -f 4 + docker_cgroup=/docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c ++ echo '1243 1242 0:29 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:591 master:11 - cgroup cgroup rw,xattr,name=systemd 1245 1242 0:33 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime shared:592 master:16 - cgroup cgroup rw,cpu,cpuacct 1246 1242 0:34 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:593 master:17 - cgroup cgroup rw,pids 1247 1242 0:35 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:594 master:18 - cgroup cgroup rw,cpuset 1248 1242 0:36 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:595 master:19 - cgroup cgroup rw,memory 1249 1242 0:37 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:596 master:20 - cgroup cgroup rw,freezer 1250 1242 0:38 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:597 master:21 - cgroup cgroup rw,perf_event 1251 1242 0:39 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:598 master:22 - cgroup cgroup rw,devices 1252 1242 0:40 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:599 master:23 - cgroup cgroup rw,blkio 1254 1242 0:42 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime shared:601 master:25 - cgroup cgroup rw,net_cls,net_prio 1255 1242 0:43 /docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:602 master:26 - cgroup cgroup rw,hugetlb' ++ cut '-d ' -f 5 + cgroup_subsystems='/sys/fs/cgroup/systemd /sys/fs/cgroup/cpu,cpuacct /sys/fs/cgroup/pids /sys/fs/cgroup/cpuset 
/sys/fs/cgroup/memory /sys/fs/cgroup/freezer /sys/fs/cgroup/perf_event /sys/fs/cgroup/devices /sys/fs/cgroup/blkio /sys/fs/cgroup/net_cls,net_prio /sys/fs/cgroup/hugetlb' + IFS= + read -r subsystem + echo '/sys/fs/cgroup/systemd /sys/fs/cgroup/cpu,cpuacct /sys/fs/cgroup/pids /sys/fs/cgroup/cpuset /sys/fs/cgroup/memory /sys/fs/cgroup/freezer /sys/fs/cgroup/perf_event /sys/fs/cgroup/devices /sys/fs/cgroup/blkio /sys/fs/cgroup/net_cls,net_prio /sys/fs/cgroup/hugetlb' + mkdir -p /sys/fs/cgroup/systemd/docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c + mount --bind /sys/fs/cgroup/systemd /sys/fs/cgroup/systemd/docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c + IFS= + read -r subsystem + mkdir -p /sys/fs/cgroup/cpu,cpuacct/docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c + mount --bind /sys/fs/cgroup/cpu,cpuacct /sys/fs/cgroup/cpu,cpuacct/docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c + IFS= + read -r subsystem + mkdir -p /sys/fs/cgroup/pids/docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c + mount --bind /sys/fs/cgroup/pids /sys/fs/cgroup/pids/docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c + IFS= + read -r subsystem + mkdir -p /sys/fs/cgroup/cpuset/docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c + mount --bind /sys/fs/cgroup/cpuset /sys/fs/cgroup/cpuset/docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c + IFS= + read -r subsystem + mkdir -p /sys/fs/cgroup/memory/docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c + mount --bind /sys/fs/cgroup/memory /sys/fs/cgroup/memory/docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c + IFS= + read -r subsystem + mkdir -p /sys/fs/cgroup/freezer/docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c + mount --bind /sys/fs/cgroup/freezer /sys/fs/cgroup/freezer/docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c + IFS= + read -r subsystem + mkdir -p /sys/fs/cgroup/perf_event/docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c + mount --bind /sys/fs/cgroup/perf_event /sys/fs/cgroup/perf_event/docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c + IFS= + read -r subsystem + mkdir -p /sys/fs/cgroup/devices/docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c + mount --bind /sys/fs/cgroup/devices /sys/fs/cgroup/devices/docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c + IFS= + read -r subsystem + mkdir -p /sys/fs/cgroup/blkio/docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c + mount --bind /sys/fs/cgroup/blkio /sys/fs/cgroup/blkio/docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c + IFS= + read -r subsystem + mkdir -p /sys/fs/cgroup/net_cls,net_prio/docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c + mount --bind /sys/fs/cgroup/net_cls,net_prio /sys/fs/cgroup/net_cls,net_prio/docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c + IFS= + read -r subsystem + mkdir -p /sys/fs/cgroup/hugetlb/docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c + mount --bind /sys/fs/cgroup/hugetlb /sys/fs/cgroup/hugetlb/docker/fcfe334cbfada2192202539408d206e41e5cff01bbeea2fe4be819bfefe9523c + IFS= + read -r subsystem + local podman_cgroup_mounts ++ grep /sys/fs/cgroup /proc/self/mountinfo ++ grep libpod_parent ++ true + podman_cgroup_mounts= + [[ 
-n '' ]] + fix_machine_id + echo 'INFO: clearing and regenerating /etc/machine-id' INFO: clearing and regenerating /etc/machine-id + rm -f /etc/machine-id + systemd-machine-id-setup Initializing machine ID from random generator. + fix_product_name + [[ -f /sys/class/dmi/id/product_name ]] + echo 'INFO: faking /sys/class/dmi/id/product_name to be "kind"' INFO: faking /sys/class/dmi/id/product_name to be "kind" + echo kind + mount -o ro,bind /kind/product_name /sys/class/dmi/id/product_name + fix_product_uuid + [[ ! -f /kind/product_uuid ]] + cat /proc/sys/kernel/random/uuid + [[ -f /sys/class/dmi/id/product_uuid ]] + echo 'INFO: faking /sys/class/dmi/id/product_uuid to be random' INFO: faking /sys/class/dmi/id/product_uuid to be random + mount -o ro,bind /kind/product_uuid /sys/class/dmi/id/product_uuid + [[ -f /sys/devices/virtual/dmi/id/product_uuid ]] + echo 'INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well' INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well + mount -o ro,bind /kind/product_uuid /sys/devices/virtual/dmi/id/product_uuid + configure_proxy + mkdir -p /etc/systemd/system.conf.d/ + cat + enable_network_magic + local docker_embedded_dns_ip=127.0.0.11 + local docker_host_ip ++ getent ahostsv4 host.docker.internal ++ cut '-d ' -f1 ++ head -n1 + docker_host_ip=92.242.140.21 + [[ -z 92.242.140.21 ]] + iptables-save + iptables-restore + sed -e 's/-d 127.0.0.11/-d 92.242.140.21/g' -e 's/-A OUTPUT \(.*\) -j DOCKER_OUTPUT/\0\n-A PREROUTING \1 -j DOCKER_OUTPUT/' -e 's/--to-source :53/--to-source 92.242.140.21:53/g' + cp /etc/resolv.conf /etc/resolv.conf.original + sed -e s/127.0.0.11/92.242.140.21/g /etc/resolv.conf.original ++ head -n1 +++ hostname ++ getent ahostsv4 minikube ++ cut '-d ' -f1 + curr_ipv4=92.242.140.21 + echo 'INFO: Detected IPv4 address: 92.242.140.21' INFO: Detected IPv4 address: 92.242.140.21 + '[' -f /kind/old-ipv4 ']' + [[ -n 92.242.140.21 ]] + echo -n 92.242.140.21 ++ head -n1 ++ cut '-d ' -f1 +++ hostname ++ getent ahostsv6 minikube ++ true + curr_ipv6= + echo 'INFO: Detected IPv6 address: ' INFO: Detected IPv6 address: + '[' -f /kind/old-ipv6 ']' + [[ -n '' ]] + exec /sbin/init systemd 245.4-4ubuntu3 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid) Detected virtualization docker. Detected architecture x86-64. Welcome to Ubuntu 20.04 LTS! Set hostname to . /lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. [ OK ] Started Dispatch Password …ts to Console Directory Watch. [ OK ] Set up automount Arbitrary…s File System Automount Point. [ OK ] Reached target Local Encrypted Volumes. [ OK ] Reached target Network is Online. [ OK ] Reached target Paths. [ OK ] Reached target Slices. [ OK ] Reached target Swap. [ OK ] Listening on Journal Audit Socket. [ OK ] Listening on Journal Socket (/dev/log). [ OK ] Listening on Journal Socket. Mounting Huge Pages File System... Mounting Kernel Debug File System... Mounting Kernel Trace File System... Starting Journal Service... Starting Create list of st…odes for the current kernel... Mounting FUSE Control File System... Starting Remount Root and Kernel File Systems... Starting Apply Kernel Variables... [ OK ] Mounted Huge Pages File System. [ OK ] Mounted Kernel Debug File System. 
[ OK ] Mounted Kernel Trace File System. [ OK ] Finished Create list of st… nodes for the current kernel. [ OK ] Finished Apply Kernel Variables. [ OK ] Mounted FUSE Control File System. [ OK ] Finished Remount Root and Kernel File Systems. Starting Create System Users... Starting Update UTMP about System Boot/Shutdown... [ OK ] Finished Update UTMP about System Boot/Shutdown. [ OK ] Finished Create System Users. Starting Create Static Device Nodes in /dev... [ OK ] Finished Create Static Device Nodes in /dev. [ OK ] Reached target Local File Systems (Pre). [ OK ] Reached target Local File Systems. [ OK ] Started Journal Service. [ OK ] Reached target System Initialization. [ OK ] Started Daily Cleanup of Temporary Directories. [ OK ] Reached target Timers. Starting Docker Socket for the API. Starting Flush Journal to Persistent Storage... [ OK ] Listening on Docker Socket for the API. [ OK ] Reached target Sockets. [ OK ] Reached target Basic System. Starting containerd container runtime... Starting minikube automount... Starting OpenBSD Secure Shell server... [ OK ] Finished Flush Journal to Persistent Storage. [ OK ] Started containerd container runtime. [ OK ] Finished minikube automount. Starting Docker Application Container Engine... [ OK ] Started OpenBSD Secure Shell server. [ OK ] Started Docker Application Container Engine. [ OK ] Reached target Multi-User System. [ OK ] Reached target Graphical Interface. Starting Update UTMP about System Runlevel Changes... [ OK ] Finished Update UTMP about System Runlevel Changes.
```
mramos-dev commented 4 years ago

Output from `systemctl status kubelet`:

```
mramos@mramos-l-ubuntu:~$ systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/lib/systemd/system/kubelet.service; disabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: inactive (dead)
       Docs: http://kubernetes.io/docs/
```
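Note that with the Docker driver, kubelet runs inside the minikube container rather than as a host service, so the host unit showing `inactive (dead)` is expected. A minimal sketch for inspecting the kubelet that kubeadm is actually waiting on (assuming the default container name `minikube`; the kicbase image runs systemd, so systemctl/journalctl work via `docker exec`):

```sh
# Status of the kubelet inside the minikube container
docker exec minikube systemctl status kubelet --no-pager

# Tail its journal for the actual failure messages
docker exec minikube journalctl -u kubelet --no-pager | tail -n 50
```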
pappasam commented 4 years ago

I also have this issue on Linux Mint 20 (which uses Ubuntu 20.04 as its base).

mramos-dev commented 4 years ago

@pappasam My current workaround is to use v1.12.1, which unfortunately has a bug where persistent storage doesn't persist. If you can work around that issue, that's what I would suggest.
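A minimal sketch of that downgrade, assuming the standard minikube release asset layout on GitHub (not an officially documented procedure):

```sh
# Pin minikube to v1.12.1 as a temporary workaround
curl -LO https://github.com/kubernetes/minikube/releases/download/v1.12.1/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube

minikube delete   # discard cluster state created by the newer version
minikube start
```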

I've reviewed the changelog for v1.12.2, but I'm out of my element with respect to debugging this issue: https://github.com/kubernetes/minikube/blob/v1.12.2/CHANGELOG.md#version-1122---2020-08-03

I wonder if one of the contributors to that release has an idea of what could be causing this issue. I don't know the protocol for triaging issues in this repo, so I won't mention them all here.

tstromberg commented 4 years ago

So, kubelet is dying for unknown reasons. The biggest hint here is the output of `minikube logs`, if folks have further logs to contribute.
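For example, capturing the full output to a file makes it easier to attach here (plain shell redirection; nothing minikube-specific assumed):

```sh
# Save the complete minikube logs, including stderr, for attaching to the issue
minikube logs > minikube-logs.txt 2>&1
```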

I am curious whether a more recent binary helps: https://storage.googleapis.com/minikube-builds/9151/minikube-linux-amd64 (you will need to run `minikube delete` to get the latest image). It'll complain about storage-provisioner, but should otherwise work.
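Spelled out, that test would look something like the following, using the build linked above without touching the installed minikube:

```sh
# Download and run the test build directly
curl -LO https://storage.googleapis.com/minikube-builds/9151/minikube-linux-amd64
chmod +x minikube-linux-amd64

./minikube-linux-amd64 delete   # required so the newer base image gets pulled
./minikube-linux-amd64 start
```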

In the meantime, I am downloading Ubuntu 20.04 to see if I can replicate the issue.

tstromberg commented 4 years ago

FWIW, I wasn't able to replicate this issue using minikube start with Ubuntu 20.04.1 from WSL2 or from inside Hyper-V.

(I did not try `minikube start --extra-config=kubelet.cgroup-driver=systemd`, because it's an unnecessary setting.)

mramos-dev commented 4 years ago

@tstromberg I included the output of `minikube logs` as part of my initial submission, after running `minikube --alsologtostderr --v=7 start`. Is the log level not high enough? If not, what level should I pass to `minikube start`?

mramos-dev commented 4 years ago

I've updated to v1.13 and this appears to have been resolved for me.
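For completeness, a sketch of that upgrade path under the same assumptions as the downgrade above (assuming v1.13.0 is the release meant by v1.13):

```sh
curl -LO https://github.com/kubernetes/minikube/releases/download/v1.13.0/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
minikube delete && minikube start
```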