
site: Add tutorial for using minikube with "apiserver.token-auth-file" #8762

Closed: agentreno closed this issue 3 years ago

agentreno commented 4 years ago

Steps to reproduce the issue:

  1. Create a ~/.minikube/files/etc/tokens.csv file containing mytoken,myuser,123, in the defined format for static token files.
  2. Run minikube start --extra-config=apiserver.token-auth-file=/etc/tokens.csv
  3. While waiting, verify that the tokens file is present inside the node with minikube ssh and cat /etc/tokens.csv
  4. Eventually, minikube start times out with an error. (The full sequence is sketched below.)
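
For reference, the whole repro as one shell session (a minimal sketch, assuming minikube is already installed; the format comment reflects the Kubernetes static token file convention of token,user,uid optionally followed by quoted group names):

    mkdir -p ~/.minikube/files/etc
    # Static token file format: token,user,uid[,"group1,group2,..."]
    echo 'mytoken,myuser,123' > ~/.minikube/files/etc/tokens.csv
    # minikube syncs ~/.minikube/files/* into the node at the same path,
    # so this file should show up as /etc/tokens.csv inside the node.
    minikube start --extra-config=apiserver.token-auth-file=/etc/tokens.csv --alsologtostderr
    # In a second terminal, while start is still running:
    minikube ssh
    cat /etc/tokens.csv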

Full output of the failed command, including --alsologtostderr:

I0718 17:01:47.259570    5112 start.go:99] hostinfo: {"hostname":"redacted","uptime":1061934,"bootTime":1594026173,"procs":514,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"18.04","kernelVersion":"5.3.0-62-generic","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"c9355790-be10-491f-8470-ccde91481a7f"}
I0718 17:01:47.261012    5112 start.go:109] virtualization: kvm host
😄  minikube v1.12.0 on Ubuntu 18.04
I0718 17:01:47.263239    5112 driver.go:257] Setting default libvirt URI to qemu:///system
I0718 17:01:47.263279    5112 global.go:102] Querying for installed drivers using PATH=redacted
I0718 17:01:47.263475    5112 global.go:110] podman priority: 2, state: {Installed:false Healthy:false NeedsImprovement:false Error:exec: "podman": executable file not found in $PATH Fix:Install Podman Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/}
I0718 17:01:47.566270    5112 global.go:110] virtualbox priority: 5, state: {Installed:true Healthy:true NeedsImprovement:false Error: Fix: Doc:}
I0718 17:01:47.566592    5112 global.go:110] vmware priority: 6, state: {Installed:false Healthy:false NeedsImprovement:false Error:exec: "docker-machine-driver-vmware": executable file not found in $PATH Fix:Install docker-machine-driver-vmware Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/}
I0718 17:01:47.633853    5112 docker.go:87] docker version: linux-19.03.12
I0718 17:01:47.634953    5112 global.go:110] docker priority: 8, state: {Installed:true Healthy:true NeedsImprovement:false Error: Fix: Doc:}
I0718 17:01:47.659493    5112 global.go:110] kvm2 priority: 7, state: {Installed:true Healthy:true NeedsImprovement:false Error: Fix: Doc:}
I0718 17:01:47.659621    5112 global.go:110] none priority: 3, state: {Installed:true Healthy:false NeedsImprovement:false Error:the 'none' driver must be run as the root user Fix:For non-root usage, try the newer 'docker' driver Doc:}
I0718 17:01:47.659637    5112 driver.go:205] not recommending "none" due to health: the 'none' driver must be run as the root user
I0718 17:01:47.659646    5112 driver.go:239] Picked: docker
I0718 17:01:47.659652    5112 driver.go:240] Alternatives: [kvm2 virtualbox]
I0718 17:01:47.659656    5112 driver.go:241] Rejects: [podman vmware none]
✨  Automatically selected the docker driver. Other choices: kvm2, virtualbox
I0718 17:01:47.661322    5112 start.go:209] selected driver: docker
I0718 17:01:47.661328    5112 start.go:608] validating driver "docker" against <nil>
I0718 17:01:47.661341    5112 start.go:619] status for docker: {Installed:true Healthy:true NeedsImprovement:false Error: Fix: Doc:}
I0718 17:01:47.661432    5112 start_flags.go:221] no existing cluster config was found, will generate one from the flags
I0718 17:01:47.661615    5112 cli_runner.go:109] Run: docker system info --format "{{json .}}"
I0718 17:01:47.722598    5112 start_flags.go:235] Using suggested 3900MB memory alloc based on sys=15651MB, container=15651MB
I0718 17:01:47.722690    5112 start_flags.go:565] Wait components to verify : map[apiserver:true system_pods:true]
I0718 17:01:47.722707    5112 cni.go:74] Creating CNI manager for ""
I0718 17:01:47.722715    5112 cni.go:113] CNI unnecessary in this configuration, recommending no CNI
👍  Starting control plane node minikube in cluster minikube
I0718 17:01:47.822126    5112 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 in local docker daemon, skipping pull
I0718 17:01:47.822148    5112 cache.go:113] gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 exists in daemon, skipping pull
I0718 17:01:47.822156    5112 preload.go:95] Checking if preload exists for k8s version v1.18.3 and runtime docker
I0718 17:01:47.822188    5112 preload.go:103] Found local preload: /home/karl/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v4-v1.18.3-docker-overlay2-amd64.tar.lz4
I0718 17:01:47.822199    5112 cache.go:51] Caching tarball of preloaded images
I0718 17:01:47.822210    5112 preload.go:129] Found /home/karl/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v4-v1.18.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0718 17:01:47.822215    5112 cache.go:54] Finished verifying existence of preloaded tar for  v1.18.3 on docker
I0718 17:01:47.822385    5112 profile.go:150] Saving config to /home/karl/.minikube/profiles/minikube/config.json ...
I0718 17:01:47.822494    5112 lock.go:35] WriteFile acquiring /home/karl/.minikube/profiles/minikube/config.json: {Name:mkdbae7b303fa5a1f6740c3404dd2d67ad72c877 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0718 17:01:47.822693    5112 cache.go:178] Successfully downloaded all kic artifacts
I0718 17:01:47.822711    5112 start.go:240] acquiring machines lock for minikube: {Name:mkf9e6ceb01072ff2521a25017908b7a0342a3be Clock:{} Delay:500ms Timeout:15m0s Cancel:}
I0718 17:01:47.822753    5112 start.go:244] acquired machines lock for "minikube" in 31.734µs
I0718 17:01:47.822776    5112 start.go:84] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:3900 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.3 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:apiserver Key:token-auth-file Value:/etc/tokens.csv}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.3 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]} &{Name: IP: Port:8443 KubernetesVersion:v1.18.3 ControlPlane:true Worker:true}
I0718 17:01:47.822830    5112 start.go:121] createHost starting for "" (driver="docker")
🔥  Creating docker container (CPUs=2, Memory=3900MB) ...
I0718 17:01:47.824487    5112 start.go:157] libmachine.API.Create for "minikube" (driver="docker")
I0718 17:01:47.824507    5112 client.go:161] LocalClient.Create starting
I0718 17:01:47.824556    5112 main.go:115] libmachine: Reading certificate data from /home/karl/.minikube/certs/ca.pem
I0718 17:01:47.824611    5112 main.go:115] libmachine: Decoding PEM data...
I0718 17:01:47.824629    5112 main.go:115] libmachine: Parsing certificate...
I0718 17:01:47.824738    5112 main.go:115] libmachine: Reading certificate data from /home/karl/.minikube/certs/cert.pem
I0718 17:01:47.824771    5112 main.go:115] libmachine: Decoding PEM data...
I0718 17:01:47.824784    5112 main.go:115] libmachine: Parsing certificate...
I0718 17:01:47.825126    5112 cli_runner.go:109] Run: docker ps -a --format {{.Names}}
I0718 17:01:47.857938    5112 cli_runner.go:109] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0718 17:01:47.890020    5112 oci.go:101] Successfully created a docker volume minikube
W0718 17:01:47.890063    5112 oci.go:161] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0718 17:01:47.890116    5112 preload.go:95] Checking if preload exists for k8s version v1.18.3 and runtime docker
I0718 17:01:47.890672    5112 cli_runner.go:109] Run: docker info --format "'{{json .SecurityOptions}}'"
I0718 17:01:47.890698    5112 preload.go:103] Found local preload: /home/karl/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v4-v1.18.3-docker-overlay2-amd64.tar.lz4
I0718 17:01:47.890711    5112 kic.go:134] Starting extracting preloaded images to volume ...
I0718 17:01:47.890902    5112 cli_runner.go:109] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/karl/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v4-v1.18.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 -I lz4 -xvf /preloaded.tar -C /extractDir
I0718 17:01:47.975401    5112 cli_runner.go:109] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --volume minikube:/var --security-opt apparmor=unconfined --cpus=2 --memory=3900mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438
I0718 17:01:48.483183    5112 cli_runner.go:109] Run: docker container inspect minikube --format={{.State.Running}}
I0718 17:01:48.517582    5112 cli_runner.go:109] Run: docker container inspect minikube --format={{.State.Status}}
I0718 17:01:48.559755    5112 cli_runner.go:109] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
I0718 17:01:48.706107    5112 oci.go:218] the created container "minikube" has a running status.
I0718 17:01:48.706132    5112 kic.go:162] Creating ssh key for kic: /home/karl/.minikube/machines/minikube/id_rsa...
I0718 17:01:48.798477    5112 kic_runner.go:179] docker (temp): /home/karl/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0718 17:01:48.923363    5112 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0718 17:01:48.923393    5112 kic_runner.go:114] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0718 17:01:51.410981    5112 cli_runner.go:151] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/karl/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v4-v1.18.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 -I lz4 -xvf /preloaded.tar -C /extractDir: (3.520010922s)
I0718 17:01:51.411041    5112 kic.go:139] duration metric: took 3.520324 seconds to extract preloaded images to volume
I0718 17:01:51.411471    5112 cli_runner.go:109] Run: docker container inspect minikube --format={{.State.Status}}
I0718 17:01:51.475887    5112 machine.go:88] provisioning docker machine ...
I0718 17:01:51.475914    5112 ubuntu.go:166] provisioning hostname "minikube"
I0718 17:01:51.475992    5112 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0718 17:01:51.506406    5112 main.go:115] libmachine: Using SSH client type: native
I0718 17:01:51.506589    5112 main.go:115] libmachine: &{{{ 0 [] [] []} docker [0x7bfaf0] 0x7bfac0   [] 0s} 127.0.0.1 32807  }
I0718 17:01:51.506603    5112 main.go:115] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0718 17:01:51.664107    5112 main.go:115] libmachine: SSH cmd err, output: <nil>: minikube

I0718 17:01:51.664446    5112 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0718 17:01:51.723758    5112 main.go:115] libmachine: Using SSH client type: native
I0718 17:01:51.723975    5112 main.go:115] libmachine: &{{{ 0 [] [] []} docker [0x7bfaf0] 0x7bfac0   [] 0s} 127.0.0.1 32807  }
I0718 17:01:51.723995    5112 main.go:115] libmachine: About to run SSH command:

                if ! grep -xq '.*\sminikube' /etc/hosts; then
                        if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
                        else
                                echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
                        fi
                fi
I0718 17:01:51.873236    5112 main.go:115] libmachine: SSH cmd err, output: <nil>:
I0718 17:01:51.873308    5112 ubuntu.go:172] set auth options {CertDir:/home/karl/.minikube CaCertPath:/home/karl/.minikube/certs/ca.pem CaPrivateKeyPath:/home/karl/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/karl/.minikube/machines/server.pem ServerKeyPath:/home/karl/.minikube/machines/server-key.pem ClientKeyPath:/home/karl/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/karl/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/karl/.minikube}
I0718 17:01:51.873417    5112 ubuntu.go:174] setting up certificates
I0718 17:01:51.873444    5112 provision.go:82] configureAuth start
I0718 17:01:51.873700    5112 cli_runner.go:109] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0718 17:01:51.943999    5112 provision.go:131] copyHostCerts
I0718 17:01:51.944081    5112 exec_runner.go:91] found /home/karl/.minikube/ca.pem, removing ...
I0718 17:01:51.944172    5112 exec_runner.go:98] cp: /home/karl/.minikube/certs/ca.pem --> /home/karl/.minikube/ca.pem (1029 bytes)
I0718 17:01:51.944397    5112 exec_runner.go:91] found /home/karl/.minikube/cert.pem, removing ...
I0718 17:01:51.944457    5112 exec_runner.go:98] cp: /home/karl/.minikube/certs/cert.pem --> /home/karl/.minikube/cert.pem (1070 bytes)
I0718 17:01:51.944618    5112 exec_runner.go:91] found /home/karl/.minikube/key.pem, removing ...
I0718 17:01:51.944672    5112 exec_runner.go:98] cp: /home/karl/.minikube/certs/key.pem --> /home/karl/.minikube/key.pem (1675 bytes)
I0718 17:01:51.944805    5112 provision.go:105] generating server cert: /home/karl/.minikube/machines/server.pem ca-key=/home/karl/.minikube/certs/ca.pem private-key=/home/karl/.minikube/certs/ca-key.pem org=karl.minikube san=[172.17.0.3 localhost 127.0.0.1]
I0718 17:01:52.052929    5112 provision.go:159] copyRemoteCerts
I0718 17:01:52.053010    5112 ssh_runner.go:148] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0718 17:01:52.053071    5112 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0718 17:01:52.084719    5112 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/karl/.minikube/machines/minikube/id_rsa Username:docker}
I0718 17:01:52.181568    5112 ssh_runner.go:215] scp /home/karl/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1029 bytes)
I0718 17:01:52.242217    5112 ssh_runner.go:215] scp /home/karl/.minikube/machines/server.pem --> /etc/docker/server.pem (1111 bytes)
I0718 17:01:52.303996    5112 ssh_runner.go:215] scp /home/karl/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0718 17:01:52.366175    5112 provision.go:85] duration metric: configureAuth took 492.692587ms
I0718 17:01:52.366226    5112 ubuntu.go:190] setting minikube options for container-runtime
I0718 17:01:52.367021    5112 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0718 17:01:52.429653    5112 main.go:115] libmachine: Using SSH client type: native
I0718 17:01:52.429869    5112 main.go:115] libmachine: &{{{ 0 [] [] []} docker [0x7bfaf0] 0x7bfac0   [] 0s} 127.0.0.1 32807  }
I0718 17:01:52.429889    5112 main.go:115] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0718 17:01:52.586727    5112 main.go:115] libmachine: SSH cmd err, output: <nil>: overlay

I0718 17:01:52.586798    5112 ubuntu.go:71] root file system type: overlay
I0718 17:01:52.587230    5112 provision.go:290] Updating docker unit: /lib/systemd/system/docker.service ...
I0718 17:01:52.587542    5112 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0718 17:01:52.651102    5112 main.go:115] libmachine: Using SSH client type: native
I0718 17:01:52.651245    5112 main.go:115] libmachine: &{{{ 0 [] [] []} docker [0x7bfaf0] 0x7bfac0   [] 0s} 127.0.0.1 32807  }
I0718 17:01:52.651311    5112 main.go:115] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0718 17:01:52.813566    5112 main.go:115] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0718 17:01:52.814009    5112 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0718 17:01:52.877320    5112 main.go:115] libmachine: Using SSH client type: native
I0718 17:01:52.877474    5112 main.go:115] libmachine: &{{{ 0 [] [] []} docker [0x7bfaf0] 0x7bfac0   [] 0s} 127.0.0.1 32807  }
I0718 17:01:52.877495    5112 main.go:115] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0718 17:01:53.762885    5112 main.go:115] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service       2019-08-29 04:42:14.000000000 +0000
+++ /lib/systemd/system/docker.service.new      2020-07-18 16:01:52.803649475 +0000
@@ -8,24 +8,22 @@

 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP

 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -33,9 +31,10 @@
 LimitNPROC=infinity
 LimitCORE=infinity

-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0

 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes

I0718 17:01:53.763186    5112 machine.go:91] provisioned docker machine in 2.287274705s
I0718 17:01:53.763225    5112 client.go:164] LocalClient.Create took 5.938708738s
I0718 17:01:53.763283    5112 start.go:162] duration metric: libmachine.API.Create for "minikube" took 5.938783984s
I0718 17:01:53.763318    5112 start.go:203] post-start starting for "minikube" (driver="docker")
I0718 17:01:53.763348    5112 start.go:213] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0718 17:01:53.763711    5112 ssh_runner.go:148] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0718 17:01:53.764011    5112 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0718 17:01:53.811757    5112 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/karl/.minikube/machines/minikube/id_rsa Username:docker}
I0718 17:01:53.902222    5112 ssh_runner.go:148] Run: cat /etc/os-release
I0718 17:01:53.910288    5112 main.go:115] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0718 17:01:53.910371    5112 main.go:115] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0718 17:01:53.910429    5112 main.go:115] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0718 17:01:53.910461    5112 info.go:96] Remote host: Ubuntu 19.10
I0718 17:01:53.910501    5112 filesync.go:118] Scanning /home/karl/.minikube/addons for local assets ...
I0718 17:01:53.910677    5112 filesync.go:118] Scanning /home/karl/.minikube/files for local assets ...
I0718 17:01:53.910946    5112 filesync.go:141] local asset: /home/karl/.minikube/files/etc/tokens.csv -> tokens.csv in /etc
I0718 17:01:53.911064    5112 ssh_runner.go:215] scp /home/karl/.minikube/files/etc/tokens.csv --> /etc/tokens.csv (17 bytes)
I0718 17:01:53.971333    5112 start.go:206] post-start completed in 207.979505ms
I0718 17:01:53.972634    5112 cli_runner.go:109] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0718 17:01:54.032850    5112 profile.go:150] Saving config to /home/karl/.minikube/profiles/minikube/config.json ...
I0718 17:01:54.033119    5112 start.go:124] duration metric: createHost completed in 6.210278971s
I0718 17:01:54.033130    5112 start.go:75] releasing machines lock for "minikube", held for 6.210368435s
I0718 17:01:54.033231    5112 cli_runner.go:109] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0718 17:01:54.064617    5112 ssh_runner.go:148] Run: systemctl --version
I0718 17:01:54.064653    5112 ssh_runner.go:148] Run: curl -sS -m 2 https://k8s.gcr.io/
I0718 17:01:54.064695    5112 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0718 17:01:54.064726    5112 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0718 17:01:54.096904    5112 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/karl/.minikube/machines/minikube/id_rsa Username:docker}
I0718 17:01:54.097573    5112 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/karl/.minikube/machines/minikube/id_rsa Username:docker}
I0718 17:01:54.284142    5112 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service containerd
I0718 17:01:54.315217    5112 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I0718 17:01:54.347778    5112 cruntime.go:192] skipping containerd shutdown because we are bound to it
I0718 17:01:54.348046    5112 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service crio
I0718 17:01:54.387251    5112 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I0718 17:01:54.419658    5112 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I0718 17:01:54.624158    5112 ssh_runner.go:148] Run: sudo systemctl start docker
I0718 17:01:54.632096    5112 ssh_runner.go:148] Run: docker version --format {{.Server.Version}}
๐Ÿณ  Preparing Kubernetes v1.18.3 on Docker 19.03.2 ...
I0718 17:01:54.673117    5112 cli_runner.go:109] Run: docker network ls --filter name=bridge --format {{.ID}}
I0718 17:01:54.705301    5112 cli_runner.go:109] Run: docker network inspect --format "{{(index .IPAM.Config 0).Gateway}}" 7339cf0620ec
I0718 17:01:54.736662    5112 network.go:77] got host ip for mount in container by inspect docker network: 172.17.0.1
I0718 17:01:54.736678    5112 start.go:268] checking
I0718 17:01:54.736753    5112 ssh_runner.go:148] Run: grep 172.17.0.1   host.minikube.internal$ /etc/hosts
I0718 17:01:54.739479    5112 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "172.17.0.1  host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
    ▪ apiserver.token-auth-file=/etc/tokens.csv
I0718 17:01:54.750420    5112 preload.go:95] Checking if preload exists for k8s version v1.18.3 and runtime docker
I0718 17:01:54.750473    5112 preload.go:103] Found local preload: /home/karl/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v4-v1.18.3-docker-overlay2-amd64.tar.lz4
I0718 17:01:54.750578    5112 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0718 17:01:54.784421    5112 docker.go:381] Got preloaded images: -- stdout --
kubernetesui/dashboard:v2.0.1
k8s.gcr.io/kube-proxy:v1.18.3
k8s.gcr.io/kube-scheduler:v1.18.3
k8s.gcr.io/kube-apiserver:v1.18.3
k8s.gcr.io/kube-controller-manager:v1.18.3
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
gcr.io/k8s-minikube/storage-provisioner:v1.8.1

-- /stdout --
I0718 17:01:54.784485    5112 docker.go:319] Images already preloaded, skipping extraction
I0718 17:01:54.784575    5112 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0718 17:01:54.817339    5112 docker.go:381] Got preloaded images: -- stdout --
kubernetesui/dashboard:v2.0.1
k8s.gcr.io/kube-proxy:v1.18.3
k8s.gcr.io/kube-controller-manager:v1.18.3
k8s.gcr.io/kube-scheduler:v1.18.3
k8s.gcr.io/kube-apiserver:v1.18.3
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
gcr.io/k8s-minikube/storage-provisioner:v1.8.1

-- /stdout --
I0718 17:01:54.817431    5112 cache_images.go:69] Images are preloaded, skipping loading
I0718 17:01:54.817486    5112 cni.go:74] Creating CNI manager for ""
I0718 17:01:54.817525    5112 cni.go:113] CNI unnecessary in this configuration, recommending no CNI
I0718 17:01:54.817533    5112 kubeadm.go:79] Using pod CIDR:
I0718 17:01:54.817544    5112 kubeadm.go:139] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet: AdvertiseAddress:172.17.0.3 APIServerPort:8443 KubernetesVersion:v1.18.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota token-auth-file:/etc/tokens.csv] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.3"]]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.3 ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0718 17:01:54.817609    5112 kubeadm.go:143] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.3
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 172.17.0.3
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "172.17.0.3"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
    token-auth-file: "/etc/tokens.csv"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
controllerManager:
  extraArgs:
    "leader-elect": "false"
scheduler:
  extraArgs:
    "leader-elect": "false"
kubernetesVersion: v1.18.3
networking:
  dnsDomain: cluster.local
  podSubnet: ""
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: ""
metricsBindAddress: 172.17.0.3:10249

I0718 17:01:54.817801    5112 ssh_runner.go:148] Run: docker info --format {{.CgroupDriver}}
I0718 17:01:54.855527    5112 kubeadm.go:775] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.3/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.3 --pod-manifest-path=/etc/kubernetes/manifests

[Install]
 config:
{KubernetesVersion:v1.18.3 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:apiserver Key:token-auth-file Value:/etc/tokens.csv}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0718 17:01:54.855632    5112 ssh_runner.go:148] Run: sudo ls /var/lib/minikube/binaries/v1.18.3
I0718 17:01:54.861250    5112 binaries.go:43] Found k8s binaries, skipping transfer
I0718 17:01:54.861331    5112 ssh_runner.go:148] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0718 17:01:54.866991    5112 ssh_runner.go:215] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (480 bytes)
I0718 17:01:54.882432    5112 ssh_runner.go:215] scp memory --> /lib/systemd/system/kubelet.service (349 bytes)
I0718 17:01:54.897114    5112 ssh_runner.go:215] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1632 bytes)
I0718 17:01:54.913579    5112 start.go:268] checking
I0718 17:01:54.913681    5112 ssh_runner.go:148] Run: grep 172.17.0.3   control-plane.minikube.internal$ /etc/hosts
I0718 17:01:54.916805    5112 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "172.17.0.3 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0718 17:01:54.925750    5112 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I0718 17:01:55.119014    5112 ssh_runner.go:148] Run: sudo systemctl start kubelet
I0718 17:01:55.130056    5112 certs.go:52] Setting up /home/karl/.minikube/profiles/minikube for IP: 172.17.0.3
I0718 17:01:55.130145    5112 certs.go:169] skipping minikubeCA CA generation: /home/karl/.minikube/ca.key
I0718 17:01:55.130195    5112 certs.go:169] skipping proxyClientCA CA generation: /home/karl/.minikube/proxy-client-ca.key
I0718 17:01:55.130284    5112 certs.go:273] generating minikube-user signed cert: /home/karl/.minikube/profiles/minikube/client.key
I0718 17:01:55.130297    5112 crypto.go:69] Generating cert /home/karl/.minikube/profiles/minikube/client.crt with IP's: []
I0718 17:01:55.437951    5112 crypto.go:157] Writing cert to /home/karl/.minikube/profiles/minikube/client.crt ...
I0718 17:01:55.437973    5112 lock.go:35] WriteFile acquiring /home/karl/.minikube/profiles/minikube/client.crt: {Name:mk8f77ad9f2e6725a41d65325af623e64be39c72 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0718 17:01:55.438144    5112 crypto.go:165] Writing key to /home/karl/.minikube/profiles/minikube/client.key ...
I0718 17:01:55.438154    5112 lock.go:35] WriteFile acquiring /home/karl/.minikube/profiles/minikube/client.key: {Name:mk52a50a39f31c025137fba9e4ba924c0ff035cc Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0718 17:01:55.438311    5112 certs.go:273] generating minikube signed cert: /home/karl/.minikube/profiles/minikube/apiserver.key.0f3e66d0
I0718 17:01:55.438318    5112 crypto.go:69] Generating cert /home/karl/.minikube/profiles/minikube/apiserver.crt.0f3e66d0 with IP's: [172.17.0.3 10.96.0.1 127.0.0.1 10.0.0.1]
I0718 17:01:55.652213    5112 crypto.go:157] Writing cert to /home/karl/.minikube/profiles/minikube/apiserver.crt.0f3e66d0 ...
I0718 17:01:55.652234    5112 lock.go:35] WriteFile acquiring /home/karl/.minikube/profiles/minikube/apiserver.crt.0f3e66d0: {Name:mka9f48f86aa86beddb78a1856179b20b43106bc Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0718 17:01:55.652483    5112 crypto.go:165] Writing key to /home/karl/.minikube/profiles/minikube/apiserver.key.0f3e66d0 ...
I0718 17:01:55.652493    5112 lock.go:35] WriteFile acquiring /home/karl/.minikube/profiles/minikube/apiserver.key.0f3e66d0: {Name:mkc9a93248c407a5afb5a21908f6e1e26ee7eb26 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0718 17:01:55.652708    5112 certs.go:284] copying /home/karl/.minikube/profiles/minikube/apiserver.crt.0f3e66d0 -> /home/karl/.minikube/profiles/minikube/apiserver.crt
I0718 17:01:55.652897    5112 certs.go:288] copying /home/karl/.minikube/profiles/minikube/apiserver.key.0f3e66d0 -> /home/karl/.minikube/profiles/minikube/apiserver.key
I0718 17:01:55.653066    5112 certs.go:273] generating aggregator signed cert: /home/karl/.minikube/profiles/minikube/proxy-client.key
I0718 17:01:55.653074    5112 crypto.go:69] Generating cert /home/karl/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0718 17:01:55.806896    5112 crypto.go:157] Writing cert to /home/karl/.minikube/profiles/minikube/proxy-client.crt ...
I0718 17:01:55.806918    5112 lock.go:35] WriteFile acquiring /home/karl/.minikube/profiles/minikube/proxy-client.crt: {Name:mk8794a2e5a01091de001b9a9fdca55fa1344d2a Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0718 17:01:55.807085    5112 crypto.go:165] Writing key to /home/karl/.minikube/profiles/minikube/proxy-client.key ...
I0718 17:01:55.807095    5112 lock.go:35] WriteFile acquiring /home/karl/.minikube/profiles/minikube/proxy-client.key: {Name:mk1c190df2f8a0fe0f303abefbdf2565e6a1b1a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0718 17:01:55.807368    5112 certs.go:348] found cert: /home/karl/.minikube/certs/home/karl/.minikube/certs/ca-key.pem (1675 bytes)
I0718 17:01:55.807405    5112 certs.go:348] found cert: /home/karl/.minikube/certs/home/karl/.minikube/certs/ca.pem (1029 bytes)
I0718 17:01:55.807457    5112 certs.go:348] found cert: /home/karl/.minikube/certs/home/karl/.minikube/certs/cert.pem (1070 bytes)
I0718 17:01:55.807488    5112 certs.go:348] found cert: /home/karl/.minikube/certs/home/karl/.minikube/certs/key.pem (1675 bytes)
I0718 17:01:55.808072    5112 ssh_runner.go:215] scp /home/karl/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1350 bytes)
I0718 17:01:55.823456    5112 ssh_runner.go:215] scp /home/karl/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0718 17:01:55.839411    5112 ssh_runner.go:215] scp /home/karl/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1103 bytes)
I0718 17:01:55.854899    5112 ssh_runner.go:215] scp /home/karl/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0718 17:01:55.871069    5112 ssh_runner.go:215] scp /home/karl/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1066 bytes)
I0718 17:01:55.887294    5112 ssh_runner.go:215] scp /home/karl/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0718 17:01:55.903860    5112 ssh_runner.go:215] scp /home/karl/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1074 bytes)
I0718 17:01:55.920434    5112 ssh_runner.go:215] scp /home/karl/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0718 17:01:55.937544    5112 ssh_runner.go:215] scp /home/karl/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1066 bytes)
I0718 17:01:55.953681    5112 ssh_runner.go:215] scp memory --> /var/lib/minikube/kubeconfig (392 bytes)
I0718 17:01:55.971297    5112 ssh_runner.go:148] Run: openssl version
I0718 17:01:55.975356    5112 ssh_runner.go:148] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0718 17:01:55.981885    5112 ssh_runner.go:148] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0718 17:01:55.985101    5112 certs.go:389] hashing: -rw-r--r-- 1 root root 1066 Jul 13  2019 /usr/share/ca-certificates/minikubeCA.pem
I0718 17:01:55.985184    5112 ssh_runner.go:148] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0718 17:01:55.990411    5112 ssh_runner.go:148] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0718 17:01:55.999049    5112 kubeadm.go:320] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:3900 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.3 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:apiserver Key:token-auth-file Value:/etc/tokens.csv}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.3 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]}
I0718 17:01:55.999173    5112 ssh_runner.go:148] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0718 17:01:56.035713    5112 ssh_runner.go:148] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0718 17:01:56.041544    5112 ssh_runner.go:148] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0718 17:01:56.047687    5112 kubeadm.go:210] ignoring SystemVerification for kubeadm because of docker driver
I0718 17:01:56.047765    5112 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0718 17:01:56.053485    5112 kubeadm.go:146] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0718 17:01:56.053533    5112 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0718 17:05:58.734091    5112 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": (4m2.680495331s)
💥  initialization failed, will try again: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.3.0-62-generic
DOCKER_VERSION: 19.03.2
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [172.17.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [172.17.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in docker:
                - 'docker ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'docker logs CONTAINERID'

stderr:
W0718 16:01:56.087245     777 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING Swap]: running with swap on is not supported. Please disable swap
        [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.3.0-62-generic\n", err: exit status 1
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0718 16:01:58.722023     777 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0718 16:01:58.723050     777 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

I0718 17:05:58.734742    5112 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0718 17:06:00.262498    5112 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force": (1.527715004s)
I0718 17:06:00.262693    5112 ssh_runner.go:148] Run: sudo systemctl stop -f kubelet
I0718 17:06:00.299475    5112 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0718 17:06:00.339976    5112 kubeadm.go:210] ignoring SystemVerification for kubeadm because of docker driver
I0718 17:06:00.340074    5112 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0718 17:06:00.345963    5112 kubeadm.go:146] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0718 17:06:00.346008    5112 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0718 17:10:01.570154    5112 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": (4m1.224103554s)
I0718 17:10:01.570267    5112 kubeadm.go:322] StartCluster complete in 8m5.571218671s
I0718 17:10:01.570536    5112 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0718 17:10:01.623960    5112 logs.go:203] 1 containers: [66e1c1228705]
I0718 17:10:01.624083    5112 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0718 17:10:01.654326    5112 logs.go:203] 1 containers: [6f736cad80ef]
I0718 17:10:01.654411    5112 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0718 17:10:01.684692    5112 logs.go:203] 0 containers: []
W0718 17:10:01.684728    5112 logs.go:205] No container was found matching "coredns"
I0718 17:10:01.684838    5112 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0718 17:10:01.715141    5112 logs.go:203] 1 containers: [2778978117b3]
I0718 17:10:01.715223    5112 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0718 17:10:01.745318    5112 logs.go:203] 0 containers: []
W0718 17:10:01.745336    5112 logs.go:205] No container was found matching "kube-proxy"
I0718 17:10:01.745456    5112 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0718 17:10:01.776523    5112 logs.go:203] 0 containers: []
W0718 17:10:01.776540    5112 logs.go:205] No container was found matching "kubernetes-dashboard"
I0718 17:10:01.776612    5112 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0718 17:10:01.808069    5112 logs.go:203] 0 containers: []
W0718 17:10:01.808089    5112 logs.go:205] No container was found matching "storage-provisioner"
I0718 17:10:01.808215    5112 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0718 17:10:01.838853    5112 logs.go:203] 1 containers: [6f88f38e741d]
I0718 17:10:01.838878    5112 logs.go:117] Gathering logs for describe nodes ...
I0718 17:10:01.838891    5112 ssh_runner.go:148] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0718 17:10:01.880259    5112 logs.go:124] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:

stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
 output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?

** /stderr **
I0718 17:10:01.880284    5112 logs.go:117] Gathering logs for kube-apiserver [66e1c1228705] ...
I0718 17:10:01.880295    5112 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 66e1c1228705"
I0718 17:10:01.929710    5112 logs.go:117] Gathering logs for kube-scheduler [2778978117b3] ...
I0718 17:10:01.929728    5112 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 2778978117b3"
I0718 17:10:01.972341    5112 logs.go:117] Gathering logs for kubelet ...
I0718 17:10:01.972360    5112 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0718 17:10:01.987905    5112 logs.go:132] Found kubelet problem: Jul 18 16:09:35 minikube kubelet[5844]: E0718 16:09:35.400126    5844 pod_workers.go:191] Error syncing pod 481d964192db3aaa66402a539639c968 ("kube-apiserver-minikube_kube-system(481d964192db3aaa66402a539639c968)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(481d964192db3aaa66402a539639c968)"
W0718 17:10:01.989447    5112 logs.go:132] Found kubelet problem: Jul 18 16:09:38 minikube kubelet[5844]: E0718 16:09:38.437873    5844 pod_workers.go:191] Error syncing pod 8a9925b92c1bf68a9656aa86994b3aca ("kube-controller-manager-minikube_kube-system(8a9925b92c1bf68a9656aa86994b3aca)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(8a9925b92c1bf68a9656aa86994b3aca)"
W0718 17:10:01.997102    5112 logs.go:132] Found kubelet problem: Jul 18 16:09:50 minikube kubelet[5844]: E0718 16:09:50.399820    5844 pod_workers.go:191] Error syncing pod 8a9925b92c1bf68a9656aa86994b3aca ("kube-controller-manager-minikube_kube-system(8a9925b92c1bf68a9656aa86994b3aca)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(8a9925b92c1bf68a9656aa86994b3aca)"
W0718 17:10:01.997209    5112 logs.go:132] Found kubelet problem: Jul 18 16:09:50 minikube kubelet[5844]: E0718 16:09:50.399843    5844 pod_workers.go:191] Error syncing pod 481d964192db3aaa66402a539639c968 ("kube-apiserver-minikube_kube-system(481d964192db3aaa66402a539639c968)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(481d964192db3aaa66402a539639c968)"
W0718 17:10:02.003223    5112 logs.go:132] Found kubelet problem: Jul 18 16:10:01 minikube kubelet[5844]: E0718 16:10:01.432798    5844 pod_workers.go:191] Error syncing pod 481d964192db3aaa66402a539639c968 ("kube-apiserver-minikube_kube-system(481d964192db3aaa66402a539639c968)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(481d964192db3aaa66402a539639c968)"
I0718 17:10:02.003712    5112 logs.go:117] Gathering logs for dmesg ...
I0718 17:10:02.003722    5112 ssh_runner.go:148] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0718 17:10:02.032404    5112 logs.go:117] Gathering logs for Docker ...
I0718 17:10:02.032421    5112 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0718 17:10:02.048726    5112 logs.go:117] Gathering logs for container status ...
I0718 17:10:02.048747    5112 ssh_runner.go:148] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0718 17:10:02.093610    5112 logs.go:117] Gathering logs for etcd [6f736cad80ef] ...
I0718 17:10:02.093637    5112 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 6f736cad80ef"
I0718 17:10:02.149964    5112 logs.go:117] Gathering logs for kube-controller-manager [6f88f38e741d] ...
I0718 17:10:02.149983    5112 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 6f88f38e741d"
W0718 17:10:02.182325    5112 out.go:201] Error starting cluster: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.3.0-62-generic
DOCKER_VERSION: 19.03.2
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in docker:
                - 'docker ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'docker logs CONTAINERID'

stderr:
W0718 16:06:00.378250    5583 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING Swap]: running with swap on is not supported. Please disable swap
        [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.3.0-62-generic\n", err: exit status 1
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0718 16:06:01.558570    5583 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0718 16:06:01.559244    5583 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

๐Ÿ’ฃ  Error starting cluster: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout/stderr: (identical to the kubeadm init output shown above)

๐Ÿ˜ฟ  minikube is exiting due to an error. If the above message is not useful, open an issue:
๐Ÿ‘‰  https://github.com/kubernetes/minikube/issues/new/choose
โŒ  Problems detected in kubelet:
    Jul 18 16:09:35 minikube kubelet[5844]: E0718 16:09:35.400126    5844 pod_workers.go:191] Error syncing pod 481d964192db3aaa66402a539639c968 ("kube-apiserver-minikube_kube-system(481d964192db3aaa66402a539639c968)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(481d964192db3aaa66402a539639c968)"
    Jul 18 16:09:38 minikube kubelet[5844]: E0718 16:09:38.437873    5844 pod_workers.go:191] Error syncing pod 8a9925b92c1bf68a9656aa86994b3aca ("kube-controller-manager-minikube_kube-system(8a9925b92c1bf68a9656aa86994b3aca)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(8a9925b92c1bf68a9656aa86994b3aca)"
    Jul 18 16:09:50 minikube kubelet[5844]: E0718 16:09:50.399820    5844 pod_workers.go:191] Error syncing pod 8a9925b92c1bf68a9656aa86994b3aca ("kube-controller-manager-minikube_kube-system(8a9925b92c1bf68a9656aa86994b3aca)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(8a9925b92c1bf68a9656aa86994b3aca)"
I0718 17:11:56.900871    5112 exit.go:58] WithError(failed to start node)=startup failed: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout/stderr: (identical to the kubeadm init output shown above)
 called from:
goroutine 1 [running]:
runtime/debug.Stack(0x0, 0x0, 0xc00046ca50)
        /usr/local/go/src/runtime/debug/stack.go:24 +0x9d
k8s.io/minikube/pkg/minikube/exit.WithError(0x1baebdd, 0x14, 0x1ea7cc0, 0xc00000e0c0)
        /app/pkg/minikube/exit/exit.go:58 +0x34
k8s.io/minikube/cmd/minikube/cmd.runStart(0x2c85020, 0xc0008104e0, 0x0, 0x2)
        /app/cmd/minikube/cmd/start.go:198 +0x40f
github.com/spf13/cobra.(*Command).execute(0x2c85020, 0xc0008104c0, 0x2, 0x2, 0x2c85020, 0xc0008104c0)
        /go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:846 +0x2aa
github.com/spf13/cobra.(*Command).ExecuteC(0x2c84060, 0x0, 0x1, 0xc0006c9380)
        /go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:950 +0x349
github.com/spf13/cobra.(*Command).Execute(...)
        /go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:887
k8s.io/minikube/cmd/minikube/cmd.Execute()
        /app/cmd/minikube/cmd/root.go:106 +0x747
main.main()
        /app/cmd/minikube/main.go:71 +0x143

โŒ  [NONE_KUBELET] failed to start node startup failed: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout/stderr: (identical to the kubeadm init output shown above)

๐Ÿ’ก  Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
โ‰๏ธ   Related issue: https://github.com/kubernetes/minikube/issues/4172

Full output of minikube start command used, if not already included: Already included above.

Optional: Full output of minikube logs command: Available on request (doesn't look relevant).

tstromberg commented 4 years ago

Can you share the apiserver section of minikube logs?

It seems that the apiserver may need more configuration than just this flag, as it's in a crashloop:

Jul 18 16:09:35 minikube kubelet[5844]: E0718 16:09:35.400126 5844 pod_workers.go:191] Error syncing pod 481d964192db3aaa66402a539639c968 ("kube-apiserver-minikube_kube-system(481d964192db3aaa66402a539639c968)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(481d964192db3aaa66402a539639c968)"
Jul 18 16:09:38 minikube kubelet[5844]: E0718 16:09:38.437873 5844 pod_workers.go:191] Error syncing pod 8a9925b92c1bf68a9656aa86994b3aca ("kube-controller-manager-minikube_kube-system(8a9925b92c1bf68a9656aa86994b3aca)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(8a9925b92c1bf68a9656aa86994b3aca)"
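
If it helps, this is roughly how to grab that section by hand (a sketch, assuming the docker driver from the log above; the container name filter matches what minikube itself runs):

```shell
# Open a shell on the minikube node, then find the (possibly exited)
# apiserver container and dump its recent logs.
minikube ssh
docker ps -a --filter name=k8s_kube-apiserver --format '{{.ID}}'
docker logs --tail 100 <CONTAINER_ID>   # <CONTAINER_ID>: placeholder for the ID printed above
```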
tstromberg commented 4 years ago

FWIW, we should certainly surface a better error message here, but I won't know what that is without the apiserver logs.

agentreno commented 4 years ago

Thanks for taking a look. Is this the right section of the logs? It looks like a usage message from the apiserver, as if it had been given an invalid argument. After this it moves on to kube-controller-manager, which repeatedly fails to contact the apiserver.

kube-apiserver section of minikube logs

```shell
==> kube-apiserver [b7f5615b40f5] <==
api/all=true|false controls all API versions
api/ga=true|false controls all API versions of the form v[0-9]+
api/beta=true|false controls all API versions of the form v[0-9]+beta[0-9]+
api/alpha=true|false controls all API versions of the form v[0-9]+alpha[0-9]+
api/legacy is deprecated, and will be removed in a future version

Egress selector flags:

--egress-selector-config-file string
      File with apiserver egress selector configuration.

Admission flags:

--admission-control strings
      Admission is divided into two phases. In the first phase, only mutating admission plugins run. In the second phase, only validating admission plugins run. The names in the below list may represent a validating plugin, a mutating plugin, or both. The order of plugins in which they are passed to this flag does not matter. Comma-delimited list of: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. (DEPRECATED: Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.)
--admission-control-config-file string
      File with admission control configuration.
--disable-admission-plugins strings
      admission plugins that should be disabled although they are in the default enabled plugins list (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, RuntimeClass, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
--enable-admission-plugins strings
      admission plugins that should be enabled in addition to default enabled ones (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, RuntimeClass, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.

Metrics flags:

--show-hidden-metrics-for-version string
      The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful, other values will not be allowed. The format is <major>.<minor>, e.g.: '1.16'. The purpose of this format is make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that.

Misc flags:

--allow-privileged
      If true, allow privileged containers. [default=false]
--apiserver-count int
      The number of apiservers running in the cluster, must be a positive number. (In use when --endpoint-reconciler-type=master-count is enabled.) (default 1)
--enable-aggregator-routing
      Turns on aggregator routing requests to endpoints IP rather than cluster IP.
--endpoint-reconciler-type string
      Use an endpoint reconciler (master-count, lease, none) (default "lease")
--event-ttl duration
      Amount of time to retain events. (default 1h0m0s)
--kubelet-certificate-authority string
      Path to a cert file for the certificate authority.
--kubelet-client-certificate string
      Path to a client cert file for TLS.
--kubelet-client-key string
      Path to a client key file for TLS.
--kubelet-https
      Use https for kubelet connections. (default true)
--kubelet-preferred-address-types strings
      List of the preferred NodeAddressTypes to use for kubelet connections. (default [Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP])
--kubelet-timeout duration
      Timeout for kubelet operations. (default 5s)
--kubernetes-service-node-port int
      If non-zero, the Kubernetes master service (which apiserver creates/maintains) will be of type NodePort, using this as the value of the port. If zero, the Kubernetes master service will be of type ClusterIP.
--max-connection-bytes-per-sec int
      If non-zero, throttle each user connection to this number of bytes/sec. Currently only applies to long-running requests.
--proxy-client-cert-file string
      Client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins. It is expected that this cert includes a signature from the CA in the --requestheader-client-ca-file flag. That CA is published in the 'extension-apiserver-authentication' configmap in the kube-system namespace. Components receiving calls from kube-aggregator should use that CA to perform their half of the mutual TLS verification.
--proxy-client-key-file string
      Private key for the client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins.
--service-account-signing-key-file string
      Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)
--service-cluster-ip-range string
      A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.
--service-node-port-range portRange
      A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)

Global flags:

--add-dir-header
      If true, adds the file directory to the header
--alsologtostderr
      log to standard error as well as files
-h, --help
      help for kube-apiserver
--log-backtrace-at traceLocation
      when logging hits line file:N, emit a stack trace (default :0)
--log-dir string
      If non-empty, write log files in this directory
--log-file string
      If non-empty, use this log file
--log-file-max-size uint
      Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
--log-flush-frequency duration
      Maximum number of seconds between log flushes (default 5s)
--logtostderr
      log to standard error instead of files (default true)
--skip-headers
      If true, avoid header prefixes in the log messages
--skip-log-headers
      If true, avoid headers when opening log files
--stderrthreshold severity
      logs at or above this threshold go to stderr (default 2)
-v, --v Level
      number for the log level verbosity
--version version[=true]
      Print version information and quit
--vmodule moduleSpec
      comma-separated list of pattern=N settings for file-filtered logging
```

I thought the container status might also be helpful:

==> container status <==
CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
b7f5615b40f52       7e28efa976bd1       32 seconds ago      Exited              kube-apiserver            6                   ccb375e34f80e
dec4c19261121       da26705ccb4b5       2 minutes ago       Exited              kube-controller-manager   5                   1bc19600914ba
8084421c42296       76216c34ed0c7       6 minutes ago       Running             kube-scheduler            0                   eb21b69fad7a7
7bb1f3ab426ce       303ce5db0e90d       6 minutes ago       Running             etcd                      0                   1a867ffafd0f8
sharifelgamal commented 4 years ago

Yeah, it would seem that whatever version of kube-apiserver is in use doesn't accept the --token-auth-file flag, despite it definitely being in the documentation. And that's not the only command-line parameter missing from that usage text, either.
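
A quick way to double-check which flags a given kube-apiserver build actually accepts is to ask the release image for its help text directly (a sketch; assumes the k8s.gcr.io image tag for that release):

```shell
# Print the apiserver's own --help on the host and search it for the flag.
docker run --rm k8s.gcr.io/kube-apiserver:v1.18.3 kube-apiserver --help | grep 'token-auth-file'
```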

priyawadhwa commented 4 years ago

Hey @agentreno, were you able to resolve this issue? I'd suggest using a version of Kubernetes that supports this flag, which you can specify via:

minikube start --kubernetes-version=vx.y.z
agentreno commented 4 years ago

Hi @priyawadhwa :) I'm using version 1.18.3. I struggled to find a better reference, but I think this file on this branch means the flag is supported by the apiserver in that release?

Tested again and I'm still seeing the same issue and the same log output, with the apiserver usage info printed. Did you manage to get token auth working on a particular Kubernetes version this way?

medyagh commented 3 years ago

@agentreno do you mind trying with v1.19.2 to see if that helps? The latest version of minikube comes with Kubernetes v1.19.

antonlisovenko commented 3 years ago

@medyagh Tried v1.19.2 and it still doesn't work. Strange thing: when I first started minikube it ran on Kubernetes 1.16.15 (I hadn't run minikube for a while, so some old version came from somewhere) and it started!:

minikube start --extra-config=apiserver.token-auth-file=tokens.csv

๐Ÿ˜„  minikube v1.13.1 on Darwin 10.15.6
๐Ÿ†•  Kubernetes 1.19.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.19.2
โœจ  Using the docker driver based on existing profile
๐ŸŽ‰  minikube 1.14.2 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.14.2
๐Ÿ’ก  To disable this notice, run: 'minikube config set WantUpdateNotification false'

๐Ÿ‘  Starting control plane node minikube in cluster minikube
๐Ÿ”„  Restarting existing docker container for "minikube" ...
๐Ÿณ  Preparing Kubernetes v1.16.15 on Docker 19.03.8 ...
    โ–ช apiserver.token-auth-file=tokens.csv
๐Ÿ”Ž  Verifying Kubernetes components...
๐ŸŒŸ  Enabled addons: default-storageclass, storage-provisioner

โ—  /usr/local/bin/kubectl is version 1.18.8, which may have incompatibilites with Kubernetes 1.16.15.
๐Ÿ’ก  Want kubectl v1.16.15? Try 'minikube kubectl -- get pods -A'
๐Ÿ„  Done! kubectl is now configured to use "minikube" by default

Though when I later changed this to 1.19 and back to 1.16, it stopped working :(

minikube delete
...
minikube start --extra-config=apiserver.token-auth-file=tokens.csv --kubernetes-version=v1.19.2

๐Ÿ˜„  minikube v1.13.1 on Darwin 10.15.6
โœจ  Automatically selected the docker driver. Other choices: hyperkit, virtualbox
๐Ÿ‘  Starting control plane node minikube in cluster minikube
๐Ÿ”ฅ  Creating docker container (CPUs=2, Memory=8100MB) ...
๐Ÿณ  Preparing Kubernetes v1.19.2 on Docker 19.03.8 ...
    โ–ช apiserver.token-auth-file=tokens.csv
๐Ÿ’ข  initialization failed, will try again: ....
minikube delete
...
minikube start --extra-config=apiserver.token-auth-file=tokens.csv --kubernetes-version=v1.16.15

๐Ÿ˜„  minikube v1.13.1 on Darwin 10.15.6
โœจ  Automatically selected the docker driver. Other choices: hyperkit, virtualbox
๐Ÿ‘  Starting control plane node minikube in cluster minikube
๐Ÿ”ฅ  Creating docker container (CPUs=2, Memory=8100MB) ...
๐Ÿณ  Preparing Kubernetes v1.16.15 on Docker 19.03.8 ...
    โ–ช apiserver.token-auth-file=tokens.csv
๐Ÿ’ข  initialization failed, will try again
jordeu commented 3 years ago

I've managed to make it work in this way:

mkdir -p ~/.minikube/files/etc/ca-certificates
cp tokens.csv ~/.minikube/files/etc/ca-certificates/

minikube --extra-config="apiserver.token-auth-file=/etc/ca-certificates/tokens.csv" start

The problem is that the file has to be in a folder that is mounted inside the apiserver pod.
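
To sanity-check that the token file is actually being loaded, something like this works (a sketch: mytoken stands in for whatever first column you put in tokens.csv, and the CA path assumes a default minikube home):

```shell
# Resolve the apiserver URL from the kubeconfig, then authenticate with the
# static token instead of the client certs.
APISERVER=$(kubectl config view -o jsonpath='{.clusters[?(@.name=="minikube")].cluster.server}')
curl --cacert ~/.minikube/ca.crt -H "Authorization: Bearer mytoken" "$APISERVER/api"
# A 403 here still means authentication succeeded (RBAC just hasn't granted
# the user anything yet); a 401 means the token file wasn't picked up.
```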

medyagh commented 3 years ago

> I've managed to make it work in this way:
>
> mkdir -p ~/.minikube/files/etc/ca-certificates
> cp tokens.csv ~/.minikube/files/etc/ca-certificates/
>
> minikube --extra-config="apiserver.token-auth-file=/etc/ca-certificates/tokens.csv" start
>
> The problem is that the file has to be in a folder that is mounted inside the apiserver pod.

@jordeu thank you for finding this workaround! This would be a cool tutorial to add to our website.
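
Roughly what the tutorial could cover, end to end (a sketch; the token line is a placeholder in the documented token,user,uid format):

```shell
# 1. Put the token file under ~/.minikube/files so minikube syncs it into the
#    node, using a path the apiserver pod can already see.
mkdir -p ~/.minikube/files/etc/ca-certificates
echo 'sometoken,someuser,1' > ~/.minikube/files/etc/ca-certificates/tokens.csv

# 2. Point the apiserver at the in-node path.
minikube start --extra-config=apiserver.token-auth-file=/etc/ca-certificates/tokens.csv
```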

Aut0R3V commented 3 years ago

@medyagh I'd like to work on this

Aut0R3V commented 3 years ago

/assign

agentreno commented 3 years ago

@jordeu nice fix, I've tested it and it works for me too :+1:

spowelljr commented 3 years ago

@Aut0R3V Are you still working on this?

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

fejta-bot commented 3 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten

andriyDev commented 3 years ago

@Aut0R3V, this issue is very old, so I'll take it over!

/remove-lifecycle rotten

dtandersen commented 2 years ago

https://minikube.sigs.k8s.io/docs/tutorials/token-auth-file/ should explain why /etc/ca-certificates/ is important.
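
The short version: kubeadm's generated static pod manifest bind-mounts a few certificate directories from the node into the kube-apiserver container, /etc/ca-certificates among them, so a file dropped there is visible to the apiserver, while an arbitrary path like /etc/tokens.csv is not. A quick way to confirm from inside the node (a sketch, assuming the default kubeadm manifest path):

```shell
# Show the hostPath mount that makes /etc/ca-certificates visible in the pod.
minikube ssh
sudo grep -B1 -A3 'ca-certificates' /etc/kubernetes/manifests/kube-apiserver.yaml
```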