kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

minikube 1.15.1 ubuntu start with docker driver fails #9962

Closed: zmykevin closed this issue 3 years ago

zmykevin commented 3 years ago

Steps to reproduce the issue:

1. minikube start --mount=true --mount-string="/home/zmykevin/semafor/hackathon/week_2:/home/zmykevin/semafor/hackathon/week_2" -v=1 --alsologtostderr
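The `--mount-string` flag takes a single `host-path:container-path` pair. As a side note (a sketch with a hypothetical helper, not part of minikube or the original report), building and sanity-checking that value separately can rule out a malformed mount string before `minikube start` is even run:

```shell
#!/bin/sh
# Hypothetical helper (not a minikube command): assembles the
# host:container pair that --mount-string expects.
make_mount_string() {
  host_dir="$1"
  container_dir="$2"
  printf '%s:%s' "$host_dir" "$container_dir"
}

# Example with the paths from the reproduce step above;
# prints /home/zmykevin/semafor/hackathon/week_2:/home/zmykevin/semafor/hackathon/week_2
make_mount_string "/home/zmykevin/semafor/hackathon/week_2" \
                  "/home/zmykevin/semafor/hackathon/week_2"
```

Verifying that the host half actually exists (`[ -d "$host_dir" ]`) before starting is another cheap check that rules out host-side mount failures.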


**Full output of failed command:** 
I1214 11:46:10.827299 3370728 out.go:185] Setting OutFile to fd 1 ...
I1214 11:46:10.827630 3370728 out.go:237] isatty.IsTerminal(1) = true
I1214 11:46:10.827643 3370728 out.go:198] Setting ErrFile to fd 2...
I1214 11:46:10.827655 3370728 out.go:237] isatty.IsTerminal(2) = true
I1214 11:46:10.827792 3370728 root.go:279] Updating PATH: /home/zmykevin/.minikube/bin
I1214 11:46:10.828150 3370728 out.go:192] Setting JSON to false
I1214 11:46:10.864462 3370728 start.go:103] hostinfo: {"hostname":"nlp","uptime":2332848,"bootTime":1605642322,"procs":825,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"18.04","kernelVersion":"4.15.0-124-generic","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"20e670d4-fa81-4c37-918b-f8c1aac651bc"}
I1214 11:46:10.869950 3370728 start.go:113] virtualization: kvm host
I1214 11:46:10.945609 3370728 out.go:110] 😄  minikube v1.15.1 on Ubuntu 18.04
😄  minikube v1.15.1 on Ubuntu 18.04
I1214 11:46:10.945843 3370728 notify.go:126] Checking for updates...
I1214 11:46:10.946150 3370728 driver.go:302] Setting default libvirt URI to qemu:///system
I1214 11:46:10.946241 3370728 global.go:102] Querying for installed drivers using PATH=/home/zmykevin/.minikube/bin:/home/zmykevin/miniconda3/envs/semafor/bin:/home/zmykevin/miniconda3/condabin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
I1214 11:46:10.946412 3370728 global.go:110] virtualbox priority: 5, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:unable to find VBoxManage in $PATH Fix:Install VirtualBox Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/virtualbox/}
I1214 11:46:10.946583 3370728 global.go:110] vmware priority: 6, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "docker-machine-driver-vmware": executable file not found in $PATH Fix:Install docker-machine-driver-vmware Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/}
I1214 11:46:11.029519 3370728 docker.go:117] docker version: linux-19.03.13
I1214 11:46:11.029752 3370728 cli_runner.go:110] Run: docker system info --format "{{json .}}"
I1214 11:46:11.183083 3370728 info.go:253] docker info: {ID:ABZD:XAYF:PSAW:ZQ6L:NQKF:FYOS:3L7A:AQ3E:XV5C:37PF:5NY4:G4LK Containers:50 ContainersRunning:0 ContainersPaused:0 ContainersStopped:50 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2020-12-14 11:46:11.092019619 -0800 PST LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.15.0-124-generic OperatingSystem:Ubuntu 18.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:64 MemTotal:404337303552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:nlp Labels:[] ExperimentalBuild:false ServerVersion:19.03.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fba4e9a7d01810a393d5d25a3621dc101981175 Expected:8fba4e9a7d01810a393d5d25a3621dc101981175} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=apparmor name=seccomp,profile=default] 
ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}}
I1214 11:46:11.183344 3370728 docker.go:147] overlay module found
I1214 11:46:11.183367 3370728 global.go:110] docker priority: 8, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Fix: Doc:}
I1214 11:46:11.183484 3370728 global.go:110] kvm2 priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "virsh": executable file not found in $PATH Fix:Install libvirt Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/}
I1214 11:46:11.183608 3370728 global.go:110] none priority: 3, state: {Installed:true Healthy:false Running:false NeedsImprovement:false Error:the 'none' driver must be run as the root user Fix:For non-root usage, try the newer 'docker' driver Doc:}
I1214 11:46:11.183698 3370728 global.go:110] podman priority: 2, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "podman": executable file not found in $PATH Fix:Install Podman Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/}
I1214 11:46:11.183738 3370728 driver.go:248] "docker" has a higher priority (8) than "" (0)
I1214 11:46:11.183767 3370728 driver.go:239] not recommending "none" due to health: the 'none' driver must be run as the root user
I1214 11:46:11.183805 3370728 driver.go:273] Picked: docker
I1214 11:46:11.183826 3370728 driver.go:274] Alternatives: []
I1214 11:46:11.183845 3370728 driver.go:275] Rejects: [virtualbox vmware kvm2 none podman]
I1214 11:46:11.188021 3370728 out.go:110] ✨  Automatically selected the docker driver
✨  Automatically selected the docker driver
I1214 11:46:11.188078 3370728 start.go:272] selected driver: docker
I1214 11:46:11.188100 3370728 start.go:686] validating driver "docker" against 
I1214 11:46:11.188149 3370728 start.go:697] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Fix: Doc:}
I1214 11:46:11.188337 3370728 cli_runner.go:110] Run: docker system info --format "{{json .}}"
I1214 11:46:11.360036 3370728 info.go:253] docker info: {ID:ABZD:XAYF:PSAW:ZQ6L:NQKF:FYOS:3L7A:AQ3E:XV5C:37PF:5NY4:G4LK Containers:50 ContainersRunning:0 ContainersPaused:0 ContainersStopped:50 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2020-12-14 11:46:11.257036905 -0800 PST LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.15.0-124-generic OperatingSystem:Ubuntu 18.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:64 MemTotal:404337303552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:nlp Labels:[] ExperimentalBuild:false ServerVersion:19.03.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fba4e9a7d01810a393d5d25a3621dc101981175 Expected:8fba4e9a7d01810a393d5d25a3621dc101981175} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=apparmor name=seccomp,profile=default] 
ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}}
I1214 11:46:11.360301 3370728 start_flags.go:233] no existing cluster config was found, will generate one from the flags 
I1214 11:46:11.371850 3370728 start_flags.go:251] Using suggested 96400MB memory alloc based on sys=385606MB, container=385606MB
I1214 11:46:11.372050 3370728 start_flags.go:641] Wait components to verify : map[apiserver:true system_pods:true]
I1214 11:46:11.372084 3370728 cni.go:74] Creating CNI manager for ""
I1214 11:46:11.372099 3370728 cni.go:117] CNI unnecessary in this configuration, recommending no CNI
I1214 11:46:11.372116 3370728 start_flags.go:364] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e Memory:96400 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[/home/zmykevin/semafor/hackathon/week_2:/home/zmykevin/semafor/hackathon/week_2] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.4 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[]}
I1214 11:46:11.381336 3370728 out.go:110] 👍  Starting control plane node minikube in cluster minikube
👍  Starting control plane node minikube in cluster minikube
I1214 11:46:11.455336 3370728 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e in local docker daemon, skipping pull
I1214 11:46:11.455389 3370728 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e exists in daemon, skipping pull
I1214 11:46:11.455421 3370728 preload.go:97] Checking if preload exists for k8s version v1.19.4 and runtime docker
I1214 11:46:11.455494 3370728 preload.go:105] Found local preload: /home/zmykevin/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.4-docker-overlay2-amd64.tar.lz4
I1214 11:46:11.455513 3370728 cache.go:54] Caching tarball of preloaded images
I1214 11:46:11.455551 3370728 preload.go:131] Found /home/zmykevin/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1214 11:46:11.455570 3370728 cache.go:57] Finished verifying existence of preloaded tar for  v1.19.4 on docker
I1214 11:46:11.456495 3370728 profile.go:150] Saving config to /home/zmykevin/.minikube/profiles/minikube/config.json ...
I1214 11:46:11.456559 3370728 lock.go:36] WriteFile acquiring /home/zmykevin/.minikube/profiles/minikube/config.json: {Name:mk4c2db1ae045754192d3b3ee8baa4a9c60dff3c Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I1214 11:46:11.456987 3370728 cache.go:184] Successfully downloaded all kic artifacts
I1214 11:46:11.457036 3370728 start.go:314] acquiring machines lock for minikube: {Name:mk354a581559b8b88ccbfa41d24785d64bbe8d2b Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1214 11:46:11.457137 3370728 start.go:318] acquired machines lock for "minikube" in 68.976µs
I1214 11:46:11.457174 3370728 start.go:90] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e Memory:96400 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[/home/zmykevin/semafor/hackathon/week_2:/home/zmykevin/semafor/hackathon/week_2] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.4 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.19.4 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[]} &{Name: IP: Port:8443 KubernetesVersion:v1.19.4 ControlPlane:true Worker:true}
I1214 11:46:11.457346 3370728 start.go:127] createHost starting for "" (driver="docker")
I1214 11:46:11.459747 3370728 out.go:110] 🔥  Creating docker container (CPUs=2, Memory=96400MB) ...
🔥  Creating docker container (CPUs=2, Memory=96400MB) ...
I1214 11:46:11.460364 3370728 start.go:164] libmachine.API.Create for "minikube" (driver="docker")
I1214 11:46:11.460441 3370728 client.go:165] LocalClient.Create starting
I1214 11:46:11.460503 3370728 main.go:119] libmachine: Reading certificate data from /home/zmykevin/.minikube/certs/ca.pem
I1214 11:46:11.460578 3370728 main.go:119] libmachine: Decoding PEM data...
I1214 11:46:11.460626 3370728 main.go:119] libmachine: Parsing certificate...
I1214 11:46:11.460919 3370728 main.go:119] libmachine: Reading certificate data from /home/zmykevin/.minikube/certs/cert.pem
I1214 11:46:11.460969 3370728 main.go:119] libmachine: Decoding PEM data...
I1214 11:46:11.461020 3370728 main.go:119] libmachine: Parsing certificate...
I1214 11:46:11.461852 3370728 cli_runner.go:110] Run: docker network inspect minikube --format "{{(index .IPAM.Config 0).Subnet}},{{(index .IPAM.Config 0).Gateway}},{{(index .Options "com.docker.network.driver.mtu")}}"
W1214 11:46:11.529748 3370728 cli_runner.go:148] docker network inspect minikube --format "{{(index .IPAM.Config 0).Subnet}},{{(index .IPAM.Config 0).Gateway}},{{(index .Options "com.docker.network.driver.mtu")}}" returned with exit code 1
I1214 11:46:11.530251 3370728 network_create.go:178] running [docker network inspect minikube] to gather additional debugging logs...
I1214 11:46:11.530296 3370728 cli_runner.go:110] Run: docker network inspect minikube
W1214 11:46:11.588077 3370728 cli_runner.go:148] docker network inspect minikube returned with exit code 1
I1214 11:46:11.588133 3370728 network_create.go:181] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
stdout:
[]

stderr:
Error: No such network: minikube
I1214 11:46:11.588167 3370728 network_create.go:183] output of [docker network inspect minikube]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: minikube

** /stderr **
I1214 11:46:11.588326 3370728 cli_runner.go:110] Run: docker network inspect bridge --format "{{(index .IPAM.Config 0).Subnet}},{{(index .IPAM.Config 0).Gateway}},{{(index .Options "com.docker.network.driver.mtu")}}"
I1214 11:46:11.660494 3370728 network_create.go:96] attempt to create network 192.168.49.0/24 with subnet: minikube and gateway 192.168.49.1 and MTU of 1500 ...
I1214 11:46:11.660703 3370728 cli_runner.go:110] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true minikube -o com.docker.network.driver.mtu=1500
I1214 11:46:11.808339 3370728 kic.go:93] calculated static IP "192.168.49.2" for the "minikube" container
I1214 11:46:11.808569 3370728 cli_runner.go:110] Run: docker ps -a --format {{.Names}}
I1214 11:46:11.949344 3370728 cli_runner.go:110] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I1214 11:46:12.014978 3370728 oci.go:102] Successfully created a docker volume minikube
I1214 11:46:12.015174 3370728 cli_runner.go:110] Run: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e -d /var/lib
I1214 11:46:14.288874 3370728 cli_runner.go:154] Completed: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e -d /var/lib: (2.273622584s)
I1214 11:46:14.288969 3370728 oci.go:106] Successfully prepared a docker volume minikube
W1214 11:46:14.289051 3370728 oci.go:153] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1214 11:46:14.289059 3370728 preload.go:97] Checking if preload exists for k8s version v1.19.4 and runtime docker
I1214 11:46:14.289162 3370728 preload.go:105] Found local preload: /home/zmykevin/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.4-docker-overlay2-amd64.tar.lz4
I1214 11:46:14.289186 3370728 kic.go:148] Starting extracting preloaded images to volume ...
I1214 11:46:14.289196 3370728 cli_runner.go:110] Run: docker info --format "'{{json .SecurityOptions}}'"
I1214 11:46:14.289329 3370728 cli_runner.go:110] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/zmykevin/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e -I lz4 -xvf /preloaded.tar -C /extractDir
I1214 11:46:14.434789 3370728 cli_runner.go:110] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=96400mb --memory-swap=96400mb --cpus=2 -e container=docker --expose 8443 --volume=/home/zmykevin/semafor/hackathon/week_2:/home/zmykevin/semafor/hackathon/week_2 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e
I1214 11:46:16.033058 3370728 cli_runner.go:154] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=96400mb --memory-swap=96400mb --cpus=2 -e container=docker --expose 8443 --volume=/home/zmykevin/semafor/hackathon/week_2:/home/zmykevin/semafor/hackathon/week_2 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e: (1.598112403s)
I1214 11:46:16.033241 3370728 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Running}}
I1214 11:46:16.114643 3370728 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I1214 11:46:16.178618 3370728 cli_runner.go:110] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
I1214 11:46:16.385010 3370728 oci.go:245] the created container "minikube" has a running status.
I1214 11:46:16.385081 3370728 kic.go:179] Creating ssh key for kic: /home/zmykevin/.minikube/machines/minikube/id_rsa...
I1214 11:46:16.606462 3370728 kic_runner.go:179] docker (temp): /home/zmykevin/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1214 11:46:17.428429 3370728 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I1214 11:46:17.499745 3370728 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1214 11:46:17.499801 3370728 kic_runner.go:114] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I1214 11:46:21.631091 3370728 cli_runner.go:154] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/zmykevin/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e -I lz4 -xvf /preloaded.tar -C /extractDir: (7.341619737s)
I1214 11:46:21.631154 3370728 kic.go:157] duration metric: took 7.341962 seconds to extract preloaded images to volume
I1214 11:46:21.631325 3370728 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I1214 11:46:21.697515 3370728 machine.go:88] provisioning docker machine ...
I1214 11:46:21.697613 3370728 ubuntu.go:166] provisioning hostname "minikube"
I1214 11:46:21.697740 3370728 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1214 11:46:21.765684 3370728 main.go:119] libmachine: Using SSH client type: native
I1214 11:46:21.766257 3370728 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x808c20] 0x808be0   [] 0s} 127.0.0.1 32791  }
I1214 11:46:21.766308 3370728 main.go:119] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I1214 11:46:21.965001 3370728 main.go:119] libmachine: SSH cmd err, output: : minikube

I1214 11:46:21.965201 3370728 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1214 11:46:22.040739 3370728 main.go:119] libmachine: Using SSH client type: native
I1214 11:46:22.041235 3370728 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x808c20] 0x808be0   [] 0s} 127.0.0.1 32791  }
I1214 11:46:22.041306 3370728 main.go:119] libmachine: About to run SSH command:

        if ! grep -xq '.*\sminikube' /etc/hosts; then
            if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
            else 
                echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
            fi
        fi
I1214 11:46:22.222789 3370728 main.go:119] libmachine: SSH cmd err, output: : 
I1214 11:46:22.222849 3370728 ubuntu.go:172] set auth options {CertDir:/home/zmykevin/.minikube CaCertPath:/home/zmykevin/.minikube/certs/ca.pem CaPrivateKeyPath:/home/zmykevin/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/zmykevin/.minikube/machines/server.pem ServerKeyPath:/home/zmykevin/.minikube/machines/server-key.pem ClientKeyPath:/home/zmykevin/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/zmykevin/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/zmykevin/.minikube}
I1214 11:46:22.222976 3370728 ubuntu.go:174] setting up certificates
I1214 11:46:22.223009 3370728 provision.go:82] configureAuth start
I1214 11:46:22.223159 3370728 cli_runner.go:110] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I1214 11:46:22.291791 3370728 provision.go:131] copyHostCerts
I1214 11:46:22.291908 3370728 exec_runner.go:91] found /home/zmykevin/.minikube/cert.pem, removing ...
I1214 11:46:22.292048 3370728 exec_runner.go:98] cp: /home/zmykevin/.minikube/certs/cert.pem --> /home/zmykevin/.minikube/cert.pem (1127 bytes)
I1214 11:46:22.292283 3370728 exec_runner.go:91] found /home/zmykevin/.minikube/key.pem, removing ...
I1214 11:46:22.292358 3370728 exec_runner.go:98] cp: /home/zmykevin/.minikube/certs/key.pem --> /home/zmykevin/.minikube/key.pem (1675 bytes)
I1214 11:46:22.292493 3370728 exec_runner.go:91] found /home/zmykevin/.minikube/ca.pem, removing ...
I1214 11:46:22.292554 3370728 exec_runner.go:98] cp: /home/zmykevin/.minikube/certs/ca.pem --> /home/zmykevin/.minikube/ca.pem (1082 bytes)
I1214 11:46:22.292666 3370728 provision.go:105] generating server cert: /home/zmykevin/.minikube/machines/server.pem ca-key=/home/zmykevin/.minikube/certs/ca.pem private-key=/home/zmykevin/.minikube/certs/ca-key.pem org=zmykevin.minikube san=[192.168.49.2 localhost 127.0.0.1 minikube minikube]
I1214 11:46:22.459165 3370728 provision.go:159] copyRemoteCerts
I1214 11:46:22.459228 3370728 ssh_runner.go:148] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1214 11:46:22.459280 3370728 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1214 11:46:22.527461 3370728 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32791 SSHKeyPath:/home/zmykevin/.minikube/machines/minikube/id_rsa Username:docker}
I1214 11:46:22.651515 3370728 ssh_runner.go:215] scp /home/zmykevin/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1214 11:46:22.695198 3370728 ssh_runner.go:215] scp /home/zmykevin/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
I1214 11:46:22.735271 3370728 ssh_runner.go:215] scp /home/zmykevin/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1214 11:46:22.777314 3370728 provision.go:85] duration metric: configureAuth took 554.261377ms
I1214 11:46:22.777358 3370728 ubuntu.go:190] setting minikube options for container-runtime
I1214 11:46:22.777796 3370728 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1214 11:46:22.853954 3370728 main.go:119] libmachine: Using SSH client type: native
I1214 11:46:22.854410 3370728 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x808c20] 0x808be0   [] 0s} 127.0.0.1 32791  }
I1214 11:46:22.854454 3370728 main.go:119] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1214 11:46:23.035356 3370728 main.go:119] libmachine: SSH cmd err, output: : overlay

I1214 11:46:23.035413 3370728 ubuntu.go:71] root file system type: overlay
I1214 11:46:23.035814 3370728 provision.go:290] Updating docker unit: /lib/systemd/system/docker.service ...
I1214 11:46:23.035944 3370728 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1214 11:46:23.103172 3370728 main.go:119] libmachine: Using SSH client type: native
I1214 11:46:23.103582 3370728 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x808c20] 0x808be0   [] 0s} 127.0.0.1 32791  }
I1214 11:46:23.103827 3370728 main.go:119] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1214 11:46:23.302735 3370728 main.go:119] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP 

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I1214 11:46:23.302929 3370728 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1214 11:46:23.369408 3370728 main.go:119] libmachine: Using SSH client type: native
I1214 11:46:23.369778 3370728 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x808c20] 0x808be0   [] 0s} 127.0.0.1 32791  }
I1214 11:46:23.369834 3370728 main.go:119] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1214 11:46:24.629113 3370728 main.go:119] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service   2020-09-16 17:01:20.000000000 +0000
+++ /lib/systemd/system/docker.service.new  2020-12-14 19:46:23.296589340 +0000
@@ -8,24 +8,22 @@

 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
+ExecReload=/bin/kill -s HUP 

 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -33,9 +31,10 @@
 LimitNPROC=infinity
 LimitCORE=infinity

-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0

 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker

I1214 11:46:24.629298 3370728 machine.go:91] provisioned docker machine in 2.931712026s
I1214 11:46:24.629327 3370728 client.go:168] LocalClient.Create took 13.168873411s
I1214 11:46:24.629397 3370728 start.go:172] duration metric: libmachine.API.Create for "minikube" took 13.169033363s
I1214 11:46:24.629422 3370728 start.go:268] post-start starting for "minikube" (driver="docker")
I1214 11:46:24.629441 3370728 start.go:278] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1214 11:46:24.629589 3370728 ssh_runner.go:148] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1214 11:46:24.629699 3370728 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1214 11:46:24.698365 3370728 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32791 SSHKeyPath:/home/zmykevin/.minikube/machines/minikube/id_rsa Username:docker}
I1214 11:46:24.824100 3370728 ssh_runner.go:148] Run: cat /etc/os-release
I1214 11:46:24.830471 3370728 main.go:119] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1214 11:46:24.830539 3370728 main.go:119] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1214 11:46:24.830577 3370728 main.go:119] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1214 11:46:24.830600 3370728 info.go:97] Remote host: Ubuntu 20.04.1 LTS
I1214 11:46:24.830627 3370728 filesync.go:118] Scanning /home/zmykevin/.minikube/addons for local assets ...
I1214 11:46:24.830726 3370728 filesync.go:118] Scanning /home/zmykevin/.minikube/files for local assets ...
I1214 11:46:24.830786 3370728 start.go:271] post-start completed in 201.344808ms
I1214 11:46:24.831517 3370728 cli_runner.go:110] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I1214 11:46:24.900413 3370728 profile.go:150] Saving config to /home/zmykevin/.minikube/profiles/minikube/config.json ...
I1214 11:46:24.900990 3370728 ssh_runner.go:148] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1214 11:46:24.901107 3370728 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1214 11:46:24.974532 3370728 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32791 SSHKeyPath:/home/zmykevin/.minikube/machines/minikube/id_rsa Username:docker}
I1214 11:46:25.096253 3370728 start.go:130] duration metric: createHost completed in 13.638880022s
I1214 11:46:25.096290 3370728 start.go:81] releasing machines lock for "minikube", held for 13.639128438s
I1214 11:46:25.096423 3370728 cli_runner.go:110] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I1214 11:46:25.170512 3370728 ssh_runner.go:148] Run: systemctl --version
I1214 11:46:25.170547 3370728 ssh_runner.go:148] Run: curl -sS -m 2 https://k8s.gcr.io/
I1214 11:46:25.170631 3370728 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1214 11:46:25.170701 3370728 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1214 11:46:25.240655 3370728 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32791 SSHKeyPath:/home/zmykevin/.minikube/machines/minikube/id_rsa Username:docker}
I1214 11:46:25.254357 3370728 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32791 SSHKeyPath:/home/zmykevin/.minikube/machines/minikube/id_rsa Username:docker}
I1214 11:46:25.499235 3370728 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service containerd
I1214 11:46:25.522227 3370728 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I1214 11:46:25.542551 3370728 cruntime.go:193] skipping containerd shutdown because we are bound to it
I1214 11:46:25.542683 3370728 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service crio
I1214 11:46:25.565318 3370728 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I1214 11:46:25.587431 3370728 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I1214 11:46:25.707441 3370728 ssh_runner.go:148] Run: sudo systemctl start docker
I1214 11:46:25.728590 3370728 ssh_runner.go:148] Run: docker version --format {{.Server.Version}}
I1214 11:46:25.833909 3370728 out.go:110] 🐳  Preparing Kubernetes v1.19.4 on Docker 19.03.13 ...
🐳  Preparing Kubernetes v1.19.4 on Docker 19.03.13 ...
I1214 11:46:25.834044 3370728 cli_runner.go:110] Run: docker network inspect minikube --format "{{(index .IPAM.Config 0).Subnet}},{{(index .IPAM.Config 0).Gateway}},{{(index .Options "com.docker.network.driver.mtu")}}"
I1214 11:46:25.905762 3370728 ssh_runner.go:148] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I1214 11:46:25.913502 3370728 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.49.1    host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I1214 11:46:25.933848 3370728 preload.go:97] Checking if preload exists for k8s version v1.19.4 and runtime docker
I1214 11:46:25.933916 3370728 preload.go:105] Found local preload: /home/zmykevin/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.4-docker-overlay2-amd64.tar.lz4
I1214 11:46:25.934039 3370728 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I1214 11:46:26.001446 3370728 docker.go:382] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.19.4
k8s.gcr.io/kube-apiserver:v1.19.4
k8s.gcr.io/kube-controller-manager:v1.19.4
k8s.gcr.io/kube-scheduler:v1.19.4
gcr.io/k8s-minikube/storage-provisioner:v3
k8s.gcr.io/etcd:3.4.13-0
kubernetesui/dashboard:v2.0.3
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2

-- /stdout --
I1214 11:46:26.001509 3370728 docker.go:319] Images already preloaded, skipping extraction
I1214 11:46:26.001607 3370728 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I1214 11:46:26.070801 3370728 docker.go:382] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.19.4
k8s.gcr.io/kube-controller-manager:v1.19.4
k8s.gcr.io/kube-apiserver:v1.19.4
k8s.gcr.io/kube-scheduler:v1.19.4
gcr.io/k8s-minikube/storage-provisioner:v3
k8s.gcr.io/etcd:3.4.13-0
kubernetesui/dashboard:v2.0.3
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2

-- /stdout --
I1214 11:46:26.070880 3370728 cache_images.go:74] Images are preloaded, skipping loading
I1214 11:46:26.070994 3370728 ssh_runner.go:148] Run: docker info --format {{.CgroupDriver}}
I1214 11:46:26.237839 3370728 cni.go:74] Creating CNI manager for ""
I1214 11:46:26.237883 3370728 cni.go:117] CNI unnecessary in this configuration, recommending no CNI
I1214 11:46:26.237905 3370728 kubeadm.go:84] Using pod CIDR: 
I1214 11:46:26.237954 3370728 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet: AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.19.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I1214 11:46:26.238325 3370728 kubeadm.go:154] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.19.4
networking:
  dnsDomain: cluster.local
  podSubnet: ""
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: ""
metricsBindAddress: 192.168.49.2:10249

I1214 11:46:26.238676 3370728 kubeadm.go:822] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.19.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

[Install]
 config:
{KubernetesVersion:v1.19.4 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I1214 11:46:26.238829 3370728 ssh_runner.go:148] Run: sudo ls /var/lib/minikube/binaries/v1.19.4
I1214 11:46:26.255080 3370728 binaries.go:44] Found k8s binaries, skipping transfer
I1214 11:46:26.255214 3370728 ssh_runner.go:148] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1214 11:46:26.272072 3370728 ssh_runner.go:215] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes)
I1214 11:46:26.301959 3370728 ssh_runner.go:215] scp memory --> /lib/systemd/system/kubelet.service (349 bytes)
I1214 11:46:26.330338 3370728 ssh_runner.go:215] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1787 bytes)
I1214 11:46:26.358280 3370728 ssh_runner.go:148] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I1214 11:46:26.364406 3370728 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "192.168.49.2   control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I1214 11:46:26.386073 3370728 certs.go:52] Setting up /home/zmykevin/.minikube/profiles/minikube for IP: 192.168.49.2
I1214 11:46:26.386195 3370728 certs.go:169] skipping minikubeCA CA generation: /home/zmykevin/.minikube/ca.key
I1214 11:46:26.386245 3370728 certs.go:169] skipping proxyClientCA CA generation: /home/zmykevin/.minikube/proxy-client-ca.key
I1214 11:46:26.386353 3370728 certs.go:273] generating minikube-user signed cert: /home/zmykevin/.minikube/profiles/minikube/client.key
I1214 11:46:26.386389 3370728 crypto.go:69] Generating cert /home/zmykevin/.minikube/profiles/minikube/client.crt with IP's: []
I1214 11:46:26.621081 3370728 crypto.go:157] Writing cert to /home/zmykevin/.minikube/profiles/minikube/client.crt ...
I1214 11:46:26.621102 3370728 lock.go:36] WriteFile acquiring /home/zmykevin/.minikube/profiles/minikube/client.crt: {Name:mk87b99fcb9a51c48179e8135809ebd94995c975 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I1214 11:46:26.621280 3370728 crypto.go:165] Writing key to /home/zmykevin/.minikube/profiles/minikube/client.key ...
I1214 11:46:26.621290 3370728 lock.go:36] WriteFile acquiring /home/zmykevin/.minikube/profiles/minikube/client.key: {Name:mk3bf8029c6362db76e0a139e9e1c44accf78dd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I1214 11:46:26.621356 3370728 certs.go:273] generating minikube signed cert: /home/zmykevin/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I1214 11:46:26.621363 3370728 crypto.go:69] Generating cert /home/zmykevin/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I1214 11:46:26.837957 3370728 crypto.go:157] Writing cert to /home/zmykevin/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ...
I1214 11:46:26.837971 3370728 lock.go:36] WriteFile acquiring /home/zmykevin/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mkfbe8caa856030eff958f092ad107b0ef13541b Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I1214 11:46:26.838059 3370728 crypto.go:165] Writing key to /home/zmykevin/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ...
I1214 11:46:26.838068 3370728 lock.go:36] WriteFile acquiring /home/zmykevin/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mk77b5471a00f027ef7720586f591c767a8d4f0e Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I1214 11:46:26.838116 3370728 certs.go:284] copying /home/zmykevin/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /home/zmykevin/.minikube/profiles/minikube/apiserver.crt
I1214 11:46:26.838156 3370728 certs.go:288] copying /home/zmykevin/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /home/zmykevin/.minikube/profiles/minikube/apiserver.key
I1214 11:46:26.838200 3370728 certs.go:273] generating aggregator signed cert: /home/zmykevin/.minikube/profiles/minikube/proxy-client.key
I1214 11:46:26.838206 3370728 crypto.go:69] Generating cert /home/zmykevin/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I1214 11:46:26.994135 3370728 crypto.go:157] Writing cert to /home/zmykevin/.minikube/profiles/minikube/proxy-client.crt ...
I1214 11:46:26.994167 3370728 lock.go:36] WriteFile acquiring /home/zmykevin/.minikube/profiles/minikube/proxy-client.crt: {Name:mk6e75ec2e4c278ce1852cbaf21c1edd9f8be273 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I1214 11:46:26.994305 3370728 crypto.go:165] Writing key to /home/zmykevin/.minikube/profiles/minikube/proxy-client.key ...
I1214 11:46:26.994319 3370728 lock.go:36] WriteFile acquiring /home/zmykevin/.minikube/profiles/minikube/proxy-client.key: {Name:mkcf333537455640bee96b696cb8ed972920ffc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I1214 11:46:26.994469 3370728 certs.go:348] found cert: /home/zmykevin/.minikube/certs/home/zmykevin/.minikube/certs/ca-key.pem (1675 bytes)
I1214 11:46:26.994506 3370728 certs.go:348] found cert: /home/zmykevin/.minikube/certs/home/zmykevin/.minikube/certs/ca.crt (7035 bytes)
I1214 11:46:26.994543 3370728 certs.go:348] found cert: /home/zmykevin/.minikube/certs/home/zmykevin/.minikube/certs/ca.pem (1082 bytes)
I1214 11:46:26.994570 3370728 certs.go:348] found cert: /home/zmykevin/.minikube/certs/home/zmykevin/.minikube/certs/cert.pem (1127 bytes)
I1214 11:46:26.994597 3370728 certs.go:348] found cert: /home/zmykevin/.minikube/certs/home/zmykevin/.minikube/certs/key.pem (1675 bytes)
I1214 11:46:26.995605 3370728 ssh_runner.go:215] scp /home/zmykevin/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I1214 11:46:27.037910 3370728 ssh_runner.go:215] scp /home/zmykevin/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1214 11:46:27.079613 3370728 ssh_runner.go:215] scp /home/zmykevin/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1214 11:46:27.120518 3370728 ssh_runner.go:215] scp /home/zmykevin/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1214 11:46:27.162397 3370728 ssh_runner.go:215] scp /home/zmykevin/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1214 11:46:27.203269 3370728 ssh_runner.go:215] scp /home/zmykevin/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1214 11:46:27.244604 3370728 ssh_runner.go:215] scp /home/zmykevin/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1214 11:46:27.287531 3370728 ssh_runner.go:215] scp /home/zmykevin/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1214 11:46:27.328367 3370728 ssh_runner.go:148] Run: stat -c "%s %y" /usr/share/ca-certificates/ca.pem
I1214 11:46:27.334091 3370728 ssh_runner.go:205] existence check for /usr/share/ca-certificates/ca.pem: stat -c "%s %y" /usr/share/ca-certificates/ca.pem: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/usr/share/ca-certificates/ca.pem': No such file or directory
I1214 11:46:27.334152 3370728 ssh_runner.go:215] scp /home/zmykevin/.minikube/certs/ca.crt --> /usr/share/ca-certificates/ca.pem (7035 bytes)
I1214 11:46:27.374916 3370728 ssh_runner.go:215] scp /home/zmykevin/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1214 11:46:27.414523 3370728 ssh_runner.go:215] scp memory --> /var/lib/minikube/kubeconfig (392 bytes)
I1214 11:46:27.445142 3370728 ssh_runner.go:148] Run: openssl version
I1214 11:46:27.457892 3370728 ssh_runner.go:148] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/ca.pem && ln -fs /usr/share/ca-certificates/ca.pem /etc/ssl/certs/ca.pem"
I1214 11:46:27.476164 3370728 ssh_runner.go:148] Run: ls -la /usr/share/ca-certificates/ca.pem
I1214 11:46:27.483728 3370728 certs.go:389] hashing: -rw-r--r-- 1 root root 7035 Dec 14 08:51 /usr/share/ca-certificates/ca.pem
I1214 11:46:27.483814 3370728 ssh_runner.go:148] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/ca.pem
I1214 11:46:27.496880 3370728 ssh_runner.go:148] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/5dc5a7ff.0 || ln -fs /etc/ssl/certs/ca.pem /etc/ssl/certs/5dc5a7ff.0"
I1214 11:46:27.511647 3370728 ssh_runner.go:148] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1214 11:46:27.529704 3370728 ssh_runner.go:148] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1214 11:46:27.536808 3370728 certs.go:389] hashing: -rw-r--r-- 1 root root 1111 Dec  4 21:57 /usr/share/ca-certificates/minikubeCA.pem
I1214 11:46:27.536896 3370728 ssh_runner.go:148] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1214 11:46:27.549930 3370728 ssh_runner.go:148] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1214 11:46:27.567790 3370728 kubeadm.go:324] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e Memory:96400 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[/home/zmykevin/semafor/hackathon/week_2:/home/zmykevin/semafor/hackathon/week_2] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.4 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.19.4 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[]}
I1214 11:46:27.568035 3370728 ssh_runner.go:148] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1214 11:46:27.635048 3370728 ssh_runner.go:148] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1214 11:46:27.651765 3370728 ssh_runner.go:148] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1214 11:46:27.668005 3370728 kubeadm.go:211] ignoring SystemVerification for kubeadm because of docker driver
I1214 11:46:27.668121 3370728 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1214 11:46:27.684451 3370728 kubeadm.go:147] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1214 11:46:27.684524 3370728 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1214 11:48:47.914550 3370728 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": (2m20.229950647s)
W1214 11:48:47.915168 3370728 out.go:146] 💢  initialization failed, will try again: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.19.4
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-124-generic
DOCKER_VERSION: 19.03.13
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_PIDS: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.004711 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-check] Initial timeout of 40s passed.

stderr:
W1214 19:46:27.971149     881 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING Swap]: running with swap on is not supported. Please disable swap
    [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-124-generic\n", err: exit status 1
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher

💢  initialization failed, will try again: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.19.4
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-124-generic
DOCKER_VERSION: 19.03.13
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_PIDS: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.004711 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-check] Initial timeout of 40s passed.

stderr:
W1214 19:46:27.971149     881 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING Swap]: running with swap on is not supported. Please disable swap
    [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-124-generic\n", err: exit status 1
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher
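
(Aside: the preflight warnings above — cgroupfs driver, swap enabled, kubelet service not enabled — are each addressable on the host. A hedged sketch, assuming a standard Ubuntu 18.04 host with Docker and kubelet installed; verify paths on your distro before applying:)

```shell
# Sketch of host-side fixes for the kubeadm preflight warnings above.
# These are general remediations, not a confirmed fix for this issue.

# 1. Switch Docker's cgroup driver to systemd (the recommended driver).
#    Written to a local file here for illustration; the real location is
#    /etc/docker/daemon.json, followed by `sudo systemctl restart docker`.
cat > daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# 2. Disable swap, which kubeadm does not support:
#    sudo swapoff -a    # and comment out swap entries in /etc/fstab

# 3. Enable the kubelet service so it starts on boot:
#    sudo systemctl enable kubelet.service
```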

I1214 11:48:47.915716 3370728 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I1214 11:48:51.622336 3370728 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force": (3.706577468s)
I1214 11:48:51.622469 3370728 ssh_runner.go:148] Run: sudo systemctl stop -f kubelet
I1214 11:48:51.647176 3370728 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1214 11:48:51.711959 3370728 kubeadm.go:211] ignoring SystemVerification for kubeadm because of docker driver
I1214 11:48:51.712043 3370728 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1214 11:48:51.725985 3370728 kubeadm.go:147] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1214 11:48:51.726062 3370728 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1214 11:51:09.393587 3370728 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": (2m17.667446169s)
I1214 11:51:09.393738 3370728 kubeadm.go:326] StartCluster complete in 4m41.825955903s
I1214 11:51:09.393911 3370728 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1214 11:51:09.460138 3370728 logs.go:206] 1 containers: [ba376e75c7e4]
I1214 11:51:09.460273 3370728 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1214 11:51:09.526325 3370728 logs.go:206] 1 containers: [3591dd4cec26]
I1214 11:51:09.526511 3370728 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1214 11:51:09.592696 3370728 logs.go:206] 0 containers: []
W1214 11:51:09.592743 3370728 logs.go:208] No container was found matching "coredns"
I1214 11:51:09.592839 3370728 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1214 11:51:09.657238 3370728 logs.go:206] 1 containers: [a9edd27337d8]
I1214 11:51:09.657404 3370728 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1214 11:51:09.721041 3370728 logs.go:206] 0 containers: []
W1214 11:51:09.721085 3370728 logs.go:208] No container was found matching "kube-proxy"
I1214 11:51:09.721182 3370728 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I1214 11:51:09.783331 3370728 logs.go:206] 0 containers: []
W1214 11:51:09.783388 3370728 logs.go:208] No container was found matching "kubernetes-dashboard"
I1214 11:51:09.783506 3370728 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I1214 11:51:09.847245 3370728 logs.go:206] 0 containers: []
W1214 11:51:09.847290 3370728 logs.go:208] No container was found matching "storage-provisioner"
I1214 11:51:09.847394 3370728 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1214 11:51:09.919481 3370728 logs.go:206] 1 containers: [db524df7774a]
I1214 11:51:09.919572 3370728 logs.go:120] Gathering logs for describe nodes ...
I1214 11:51:09.919609 3370728 ssh_runner.go:148] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.19.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1214 11:51:10.194610 3370728 logs.go:120] Gathering logs for kube-apiserver [ba376e75c7e4] ...
I1214 11:51:10.194659 3370728 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 ba376e75c7e4"
I1214 11:51:10.288145 3370728 logs.go:120] Gathering logs for kube-scheduler [a9edd27337d8] ...
I1214 11:51:10.288181 3370728 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 a9edd27337d8"
I1214 11:51:10.375412 3370728 logs.go:120] Gathering logs for kubelet ...
I1214 11:51:10.375453 3370728 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1214 11:51:10.453620 3370728 logs.go:120] Gathering logs for etcd [3591dd4cec26] ...
I1214 11:51:10.453645 3370728 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 3591dd4cec26"
I1214 11:51:10.529244 3370728 logs.go:120] Gathering logs for kube-controller-manager [db524df7774a] ...
I1214 11:51:10.529292 3370728 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 db524df7774a"
I1214 11:51:10.616022 3370728 logs.go:120] Gathering logs for Docker ...
I1214 11:51:10.616084 3370728 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I1214 11:51:10.651181 3370728 logs.go:120] Gathering logs for container status ...
I1214 11:51:10.651212 3370728 ssh_runner.go:148] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1214 11:51:10.692443 3370728 logs.go:120] Gathering logs for dmesg ...
I1214 11:51:10.692483 3370728 ssh_runner.go:148] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
W1214 11:51:10.743338 3370728 out.go:258] Error starting cluster: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.19.4
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-124-generic
DOCKER_VERSION: 19.03.13
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_PIDS: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 15.003838 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-check] Initial timeout of 40s passed.

stderr:
W1214 19:48:51.975856    3651 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING Swap]: running with swap on is not supported. Please disable swap
    [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-124-generic\n", err: exit status 1
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher
W1214 11:51:10.743496 3370728 out.go:146] 

W1214 11:51:10.743765 3370728 out.go:146] 💣  Error starting cluster: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.19.4
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-124-generic
DOCKER_VERSION: 19.03.13
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_PIDS: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 15.003838 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-check] Initial timeout of 40s passed.

stderr:
W1214 19:48:51.975856    3651 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING Swap]: running with swap on is not supported. Please disable swap
    [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-124-generic\n", err: exit status 1
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher

W1214 11:51:10.744035 3370728 out.go:146] 

W1214 11:51:10.744075 3370728 out.go:146] 😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
W1214 11:51:10.744117 3370728 out.go:146] 👉  https://github.com/kubernetes/minikube/issues/new/choose
I1214 11:51:10.749951 3370728 out.go:110] 

W1214 11:51:10.750422 3370728 out.go:146] ❌  Exiting due to GUEST_START: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.19.4
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-124-generic
DOCKER_VERSION: 19.03.13
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_PIDS: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 15.003838 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-check] Initial timeout of 40s passed.

stderr:
W1214 19:48:51.975856    3651 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING Swap]: running with swap on is not supported. Please disable swap
    [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-124-generic\n", err: exit status 1
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher

โŒ  Exiting due to GUEST_START: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.19.4
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-124-generic
DOCKER_VERSION: 19.03.13
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_PIDS: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 15.003838 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-check] Initial timeout of 40s passed.

stderr:
W1214 19:48:51.975856    3651 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING Swap]: running with swap on is not supported. Please disable swap
    [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-124-generic\n", err: exit status 1
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher

W1214 11:51:10.751140 3370728 out.go:146] 

W1214 11:51:10.751246 3370728 out.go:146] 😿  If the above advice does not help, please let us know: 
😿  If the above advice does not help, please let us know: 
W1214 11:51:10.751353 3370728 out.go:146] 👉  https://github.com/kubernetes/minikube/issues/new/choose
👉  https://github.com/kubernetes/minikube/issues/new/choose
I1214 11:51:10.753509 3370728 out.go:110]

**Full output of `minikube start` command used, if not already included:**

**Optional: Full output of `minikube logs` command:**
medyagh commented 3 years ago

Thanks for creating this issue @zmykevin. Do you mind sharing the output of

`docker info`

and also running `minikube delete`, then trying a different cgroup driver?

`minikube start --force-systemd`

I am also curious whether this would happen without the mount.

(By the way, you can do the mount in a separate terminal after minikube has started, using the `minikube mount` command.)
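For reference, the `IsDockerSystemdCheck` warning in the log above can also be addressed on the Docker side rather than via `--force-systemd`, by setting the systemd cgroup driver in `/etc/docker/daemon.json`. A minimal sketch (it only stages the config in `/tmp` so it can be reviewed before copying it into place; the commented commands assume a stock Ubuntu install with systemd):

```shell
# Stage the Docker daemon config that switches the cgroup driver to systemd,
# as recommended by the kubeadm preflight warning.
cat > /tmp/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# Sanity-check that the file is valid JSON before installing it.
python3 -m json.tool /tmp/daemon.json

# Then (not run here) install it, restart Docker, and retry from a clean slate:
#   sudo cp /tmp/daemon.json /etc/docker/daemon.json
#   sudo systemctl restart docker
#   minikube delete
#   minikube start --force-systemd
```

After restarting Docker, `docker info` should report `Cgroup Driver: systemd` instead of `cgroupfs`.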

priyawadhwa commented 3 years ago

Hey @zmykevin friendly ping, are you still seeing this issue, and were you able to give the suggestions above a try?

spowelljr commented 3 years ago

Hi @zmykevin, we haven't heard back from you. Do you still have this issue? There isn't enough information in this issue to make it actionable, and enough time has passed that it is likely difficult to replicate.

I will close this issue for now, but feel free to reopen it when you are ready to provide more information.