kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Trouble Starting Minikube on VM #8619

Closed: DNCoelho closed this issue 4 years ago

DNCoelho commented 4 years ago

Steps to reproduce the issue:

  1. minikube start --driver=docker --extra-config=kubelet.resolv-conf=/run/systemd/resolve/resolv.conf --alsologtostderr -v=1
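
A quick sanity check before re-running, as a minimal sketch (the resolv-conf override assumes systemd-resolved manages DNS, which stock CentOS 7 usually does not; the docker driver assumes a reachable host Docker daemon):

    # Does the resolv.conf path passed to kubelet actually exist on this host?
    ls -l /run/systemd/resolve/resolv.conf
    # Is the host Docker daemon reachable for the docker driver?
    docker version --format '{{.Server.Version}}'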

Full output of failed command:

I0701 12:18:24.099868    2117 start.go:98] hostinfo: {"hostname":"prd-solomon-deploy.egoiapp.com","uptime":157380,"bootTime":1593444924,"procs":347,"os":"linux","platform":"centos","platformFamily":"rhel","platformVersion":"7.8.2003","kernelVersion":"3.10.0-1127.13.1.el7.x86_64","virtualizationSystem":"","virtualizationRole":"","hostid":"922d382a-02c0-1e45-950f-9d9384ddf11c"}
I0701 12:18:24.100586    2117 start.go:108] virtualization:
😄  minikube v1.11.0 on Centos 7.8.2003
I0701 12:18:24.102994    2117 driver.go:253] Setting default libvirt URI to qemu:///system
I0701 12:18:24.103185    2117 notify.go:125] Checking for updates...
I0701 12:18:24.145551    2117 docker.go:95] docker version: linux-19.03.12
✨  Using the docker driver based on user configuration
I0701 12:18:24.147355    2117 start.go:214] selected driver: docker
I0701 12:18:24.147375    2117 start.go:611] validating driver "docker" against <nil>
I0701 12:18:24.147394    2117 start.go:617] status for docker: {Installed:true Healthy:true Error:<nil> Fix: Doc:}
I0701 12:18:24.147424    2117 start.go:935] auto setting extra-config to "kubeadm.pod-network-cidr=10.244.0.0/16".
I0701 12:18:24.147495    2117 start_flags.go:218] no existing cluster config was found, will generate one from the flags
I0701 12:18:24.147696    2117 cli_runner.go:108] Run: docker system info --format "{{json .}}"
I0701 12:18:24.222823    2117 start_flags.go:232] Using suggested 8000MB memory alloc based on sys=32008MB, container=32008MB
I0701 12:18:24.222976    2117 start_flags.go:556] Wait components to verify : map[apiserver:true system_pods:true]
👍  Starting control plane node minikube in cluster minikube
I0701 12:18:24.224807    2117 cache.go:105] Beginning downloading kic artifacts for docker with docker
I0701 12:18:24.264633    2117 image.go:88] Found gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 in local docker daemon, skipping pull
I0701 12:18:24.264660    2117 preload.go:95] Checking if preload exists for k8s version v1.18.3 and runtime docker
I0701 12:18:24.265052    2117 preload.go:103] Found local preload: /home/developer/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.3-docker-overlay2-amd64.tar.lz4
I0701 12:18:24.265065    2117 cache.go:49] Caching tarball of preloaded images
I0701 12:18:24.265087    2117 preload.go:129] Found /home/developer/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0701 12:18:24.265096    2117 cache.go:52] Finished verifying existence of preloaded tar for  v1.18.3 on docker
I0701 12:18:24.265397    2117 profile.go:156] Saving config to /home/developer/.minikube/profiles/minikube/config.json ...
I0701 12:18:24.265566    2117 lock.go:35] WriteFile acquiring /home/developer/.minikube/profiles/minikube/config.json: {Name:mkff2598918981c5c5097eedf01e6d4b22cad0b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 12:18:24.265833    2117 cache.go:152] Successfully downloaded all kic artifacts
I0701 12:18:24.265869    2117 start.go:240] acquiring machines lock for minikube: {Name:mke5415729b1366547810f69dae6b31ca074e0af Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0701 12:18:24.265914    2117 start.go:244] acquired machines lock for "minikube" in 33.08µs
I0701 12:18:24.265935    2117 start.go:84] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.3 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf} {Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.3 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]} &{Name: IP: Port:8443 KubernetesVersion:v1.18.3 ControlPlane:true Worker:true}
I0701 12:18:24.265988    2117 start.go:121] createHost starting for "" (driver="docker")
🔥  Creating docker container (CPUs=2, Memory=8000MB) ...
I0701 12:18:24.267937    2117 start.go:157] libmachine.API.Create for "minikube" (driver="docker")
I0701 12:18:24.267970    2117 client.go:161] LocalClient.Create starting
I0701 12:18:24.268005    2117 main.go:110] libmachine: Reading certificate data from /home/developer/.minikube/certs/ca.pem
I0701 12:18:24.268031    2117 main.go:110] libmachine: Decoding PEM data...
I0701 12:18:24.268050    2117 main.go:110] libmachine: Parsing certificate...
I0701 12:18:24.268166    2117 main.go:110] libmachine: Reading certificate data from /home/developer/.minikube/certs/cert.pem
I0701 12:18:24.268184    2117 main.go:110] libmachine: Decoding PEM data...
I0701 12:18:24.268197    2117 main.go:110] libmachine: Parsing certificate...
I0701 12:18:24.268512    2117 cli_runner.go:108] Run: docker ps -a --format {{.Names}}
I0701 12:18:24.302103    2117 cli_runner.go:108] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0701 12:18:24.336684    2117 oci.go:98] Successfully created a docker volume minikube
I0701 12:18:24.336770    2117 preload.go:95] Checking if preload exists for k8s version v1.18.3 and runtime docker
I0701 12:18:24.336843    2117 preload.go:103] Found local preload: /home/developer/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.3-docker-overlay2-amd64.tar.lz4
I0701 12:18:24.336881    2117 kic.go:134] Starting extracting preloaded images to volume ...
W0701 12:18:24.336915    2117 oci.go:158] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0701 12:18:24.337077    2117 cli_runner.go:108] Run: docker info --format "'{{json .SecurityOptions}}'"
I0701 12:18:24.336946    2117 cli_runner.go:108] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/developer/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 -I lz4 -xvf /preloaded.tar -C /extractDir
I0701 12:18:24.410171    2117 cli_runner.go:108] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --security-opt apparmor=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --volume minikube:/var --cpus=2 --memory=8000mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438
I0701 12:18:25.145840    2117 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Running}}
I0701 12:18:25.181994    2117 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Status}}
I0701 12:18:25.217768    2117 oci.go:212] the created container "minikube" has a running status.
I0701 12:18:25.217816    2117 kic.go:162] Creating ssh key for kic: /home/developer/.minikube/machines/minikube/id_rsa...
I0701 12:18:25.397017    2117 kic_runner.go:179] docker (temp): /home/developer/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0701 12:18:25.631562    2117 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0701 12:18:25.631587    2117 kic_runner.go:114] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0701 12:18:33.051986    2117 cli_runner.go:150] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/developer/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 -I lz4 -xvf /preloaded.tar -C /extractDir: (8.714750772s)
I0701 12:18:33.052299    2117 kic.go:139] duration metric: took 8.715438 seconds to extract preloaded images to volume
I0701 12:18:33.052371    2117 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Status}}
I0701 12:18:33.089549    2117 machine.go:88] provisioning docker machine ...
I0701 12:18:33.089596    2117 ubuntu.go:166] provisioning hostname "minikube"
I0701 12:18:33.089952    2117 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0701 12:18:33.129823    2117 main.go:110] libmachine: Using SSH client type: native
I0701 12:18:33.130101    2117 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf820] 0x7bf7f0 <nil>  [] 0s} 127.0.0.1 32779 <nil> <nil>}
I0701 12:18:33.130120    2117 main.go:110] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0701 12:18:33.251914    2117 main.go:110] libmachine: SSH cmd err, output: <nil>: minikube

I0701 12:18:33.252032    2117 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0701 12:18:33.289872    2117 main.go:110] libmachine: Using SSH client type: native
I0701 12:18:33.290047    2117 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf820] 0x7bf7f0 <nil>  [] 0s} 127.0.0.1 32779 <nil> <nil>}
I0701 12:18:33.290069    2117 main.go:110] libmachine: About to run SSH command:

        if ! grep -xq '.*\sminikube' /etc/hosts; then
            if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
            else
                echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
            fi
        fi
I0701 12:18:33.399208    2117 main.go:110] libmachine: SSH cmd err, output: <nil>:
I0701 12:18:33.399252    2117 ubuntu.go:172] set auth options {CertDir:/home/developer/.minikube CaCertPath:/home/developer/.minikube/certs/ca.pem CaPrivateKeyPath:/home/developer/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/developer/.minikube/machines/server.pem ServerKeyPath:/home/developer/.minikube/machines/server-key.pem ClientKeyPath:/home/developer/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/developer/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/developer/.minikube}
I0701 12:18:33.399283    2117 ubuntu.go:174] setting up certificates
I0701 12:18:33.399293    2117 provision.go:82] configureAuth start
I0701 12:18:33.399355    2117 cli_runner.go:108] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0701 12:18:33.436390    2117 provision.go:131] copyHostCerts
I0701 12:18:33.436458    2117 exec_runner.go:91] found /home/developer/.minikube/ca.pem, removing ...
I0701 12:18:33.436558    2117 exec_runner.go:98] cp: /home/developer/.minikube/certs/ca.pem --> /home/developer/.minikube/ca.pem (1046 bytes)
I0701 12:18:33.436710    2117 exec_runner.go:91] found /home/developer/.minikube/cert.pem, removing ...
I0701 12:18:33.436801    2117 exec_runner.go:98] cp: /home/developer/.minikube/certs/cert.pem --> /home/developer/.minikube/cert.pem (1086 bytes)
I0701 12:18:33.436930    2117 exec_runner.go:91] found /home/developer/.minikube/key.pem, removing ...
I0701 12:18:33.436983    2117 exec_runner.go:98] cp: /home/developer/.minikube/certs/key.pem --> /home/developer/.minikube/key.pem (1675 bytes)
I0701 12:18:33.437070    2117 provision.go:105] generating server cert: /home/developer/.minikube/machines/server.pem ca-key=/home/developer/.minikube/certs/ca.pem private-key=/home/developer/.minikube/certs/ca-key.pem org=developer.minikube san=[172.17.0.3 localhost 127.0.0.1]
I0701 12:18:33.528278    2117 provision.go:159] copyRemoteCerts
I0701 12:18:33.528652    2117 ssh_runner.go:148] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0701 12:18:33.528699    2117 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0701 12:18:33.565378    2117 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/developer/.minikube/machines/minikube/id_rsa Username:docker}
I0701 12:18:33.645092    2117 ssh_runner.go:215] scp /home/developer/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1046 bytes)
I0701 12:18:33.660207    2117 ssh_runner.go:215] scp /home/developer/.minikube/machines/server.pem --> /etc/docker/server.pem (1127 bytes)
I0701 12:18:33.673730    2117 ssh_runner.go:215] scp /home/developer/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0701 12:18:33.687372    2117 provision.go:85] duration metric: configureAuth took 288.060118ms
I0701 12:18:33.687406    2117 ubuntu.go:190] setting minikube options for container-runtime
I0701 12:18:33.687607    2117 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0701 12:18:33.723587    2117 main.go:110] libmachine: Using SSH client type: native
I0701 12:18:33.723800    2117 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf820] 0x7bf7f0 <nil>  [] 0s} 127.0.0.1 32779 <nil> <nil>}
I0701 12:18:33.723827    2117 main.go:110] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0701 12:18:33.831221    2117 main.go:110] libmachine: SSH cmd err, output: <nil>: xfs

I0701 12:18:33.831250    2117 ubuntu.go:71] root file system type: xfs
I0701 12:18:33.831390    2117 provision.go:290] Updating docker unit: /lib/systemd/system/docker.service ...
I0701 12:18:33.831463    2117 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0701 12:18:33.868225    2117 main.go:110] libmachine: Using SSH client type: native
I0701 12:18:33.868418    2117 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf820] 0x7bf7f0 <nil>  [] 0s} 127.0.0.1 32779 <nil> <nil>}
I0701 12:18:33.868543    2117 main.go:110] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0701 12:18:33.981107    2117 main.go:110] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0701 12:18:33.981272    2117 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0701 12:18:34.018685    2117 main.go:110] libmachine: Using SSH client type: native
I0701 12:18:34.018851    2117 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf820] 0x7bf7f0 <nil>  [] 0s} 127.0.0.1 32779 <nil> <nil>}
I0701 12:18:34.018894    2117 main.go:110] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0701 12:18:34.776032    2117 main.go:110] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service   2019-08-29 04:42:14.000000000 +0000
+++ /lib/systemd/system/docker.service.new  2020-07-01 11:18:33.979000000 +0000
@@ -8,24 +8,22 @@

 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP

 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -33,9 +31,10 @@
 LimitNPROC=infinity
 LimitCORE=infinity

-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0

 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes

I0701 12:18:34.776147    2117 machine.go:91] provisioned docker machine in 1.686572166s
I0701 12:18:34.776161    2117 client.go:164] LocalClient.Create took 10.508182636s
I0701 12:18:34.776174    2117 start.go:162] duration metric: libmachine.API.Create for "minikube" took 10.508237416s
I0701 12:18:34.776185    2117 start.go:203] post-start starting for "minikube" (driver="docker")
I0701 12:18:34.776190    2117 start.go:213] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0701 12:18:34.776261    2117 ssh_runner.go:148] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0701 12:18:34.776302    2117 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0701 12:18:34.813561    2117 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/developer/.minikube/machines/minikube/id_rsa Username:docker}
I0701 12:18:34.893126    2117 ssh_runner.go:148] Run: cat /etc/os-release
I0701 12:18:34.895723    2117 main.go:110] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0701 12:18:34.895744    2117 main.go:110] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0701 12:18:34.895754    2117 main.go:110] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0701 12:18:34.895761    2117 info.go:96] Remote host: Ubuntu 19.10
I0701 12:18:34.895774    2117 filesync.go:118] Scanning /home/developer/.minikube/addons for local assets ...
I0701 12:18:34.895806    2117 filesync.go:118] Scanning /home/developer/.minikube/files for local assets ...
I0701 12:18:34.895823    2117 start.go:206] post-start completed in 119.63287ms
I0701 12:18:34.896145    2117 cli_runner.go:108] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0701 12:18:34.933379    2117 profile.go:156] Saving config to /home/developer/.minikube/profiles/minikube/config.json ...
I0701 12:18:34.933662    2117 start.go:124] duration metric: createHost completed in 10.667659586s
I0701 12:18:34.933678    2117 start.go:75] releasing machines lock for "minikube", held for 10.667751236s
I0701 12:18:34.933755    2117 cli_runner.go:108] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0701 12:18:34.970453    2117 ssh_runner.go:148] Run: curl -sS -m 2 https://k8s.gcr.io/
I0701 12:18:34.970526    2117 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0701 12:18:34.970584    2117 ssh_runner.go:148] Run: systemctl --version
I0701 12:18:34.970642    2117 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0701 12:18:35.008927    2117 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/developer/.minikube/machines/minikube/id_rsa Username:docker}
I0701 12:18:35.009204    2117 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/developer/.minikube/machines/minikube/id_rsa Username:docker}
I0701 12:18:35.084356    2117 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I0701 12:18:35.092561    2117 cruntime.go:189] skipping containerd shutdown because we are bound to it
I0701 12:18:35.092633    2117 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service crio
I0701 12:18:35.101324    2117 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I0701 12:18:35.140405    2117 ssh_runner.go:148] Run: sudo systemctl start docker
I0701 12:18:35.148004    2117 ssh_runner.go:148] Run: docker version --format {{.Server.Version}}
๐Ÿณ  Preparing Kubernetes v1.18.3 on Docker 19.03.2 ...
I0701 12:18:35.233957    2117 cli_runner.go:108] Run: docker network ls --filter name=bridge --format {{.ID}}
I0701 12:18:35.271028    2117 cli_runner.go:108] Run: docker network inspect --format "{{(index .IPAM.Config 0).Gateway}}" 22ff83b0ebaf
I0701 12:18:35.308358    2117 network.go:77] got host ip for mount in container by inspect docker network: 172.17.0.1
I0701 12:18:35.308401    2117 start.go:268] checking
I0701 12:18:35.308480    2117 ssh_runner.go:148] Run: grep 172.17.0.1   host.minikube.internal$ /etc/hosts
I0701 12:18:35.313273    2117 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "172.17.0.1  host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
    ▪ kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
    ▪ kubeadm.pod-network-cidr=10.244.0.0/16
I0701 12:18:35.332842    2117 preload.go:95] Checking if preload exists for k8s version v1.18.3 and runtime docker
I0701 12:18:35.332891    2117 preload.go:103] Found local preload: /home/developer/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.3-docker-overlay2-amd64.tar.lz4
I0701 12:18:35.332937    2117 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0701 12:18:35.372897    2117 docker.go:379] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.3
k8s.gcr.io/kube-apiserver:v1.18.3
k8s.gcr.io/kube-scheduler:v1.18.3
k8s.gcr.io/kube-controller-manager:v1.18.3
kubernetesui/dashboard:v2.0.0
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
kubernetesui/metrics-scraper:v1.0.2
gcr.io/k8s-minikube/storage-provisioner:v1.8.1

-- /stdout --
I0701 12:18:35.372948    2117 docker.go:317] Images already preloaded, skipping extraction
I0701 12:18:35.373005    2117 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0701 12:18:35.416003    2117 docker.go:379] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.3
k8s.gcr.io/kube-controller-manager:v1.18.3
k8s.gcr.io/kube-scheduler:v1.18.3
k8s.gcr.io/kube-apiserver:v1.18.3
kubernetesui/dashboard:v2.0.0
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
kubernetesui/metrics-scraper:v1.0.2
gcr.io/k8s-minikube/storage-provisioner:v1.8.1

-- /stdout --
I0701 12:18:35.416044    2117 cache_images.go:69] Images are preloaded, skipping loading
I0701 12:18:35.416096    2117 kubeadm.go:124] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.3 APIServerPort:8443 KubernetesVersion:v1.18.3 EtcdDataDir:/var/lib/minikube/etcd ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.3"]]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.3 ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0701 12:18:35.416211    2117 kubeadm.go:128] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.3
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 172.17.0.3
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "172.17.0.3"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.18.3
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 172.17.0.3:10249
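
The rendered config above is what minikube copies to /var/tmp/minikube/kubeadm.yaml on the node (see the scp lines further down). A minimal sketch for inspecting it after a failed start, assuming the minikube container is still running:

    # Via the minikube CLI
    minikube ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml
    # Or directly through the host Docker daemon
    docker exec minikube cat /var/tmp/minikube/kubeadm.yaml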

I0701 12:18:35.416345    2117 ssh_runner.go:148] Run: docker info --format {{.CgroupDriver}}
I0701 12:18:35.461764    2117 kubeadm.go:755] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.3/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-domain=cluster.local --config=/var/lib/kubelet/config.yaml --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.3 --pod-manifest-path=/etc/kubernetes/manifests --resolv-conf=/run/systemd/resolve/resolv.conf

[Install]
 config:
{KubernetesVersion:v1.18.3 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf} {Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:}
I0701 12:18:35.461837    2117 ssh_runner.go:148] Run: sudo ls /var/lib/minikube/binaries/v1.18.3
I0701 12:18:35.468346    2117 binaries.go:43] Found k8s binaries, skipping transfer
I0701 12:18:35.468380    2117 ssh_runner.go:148] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0701 12:18:35.473775    2117 ssh_runner.go:215] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (579 bytes)
I0701 12:18:35.488295    2117 ssh_runner.go:215] scp memory --> /lib/systemd/system/kubelet.service (349 bytes)
I0701 12:18:35.506137    2117 ssh_runner.go:215] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1458 bytes)
I0701 12:18:35.521825    2117 start.go:268] checking
I0701 12:18:35.521857    2117 ssh_runner.go:148] Run: grep 172.17.0.3   control-plane.minikube.internal$ /etc/hosts
I0701 12:18:35.524207    2117 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "172.17.0.3 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0701 12:18:35.531739    2117 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I0701 12:18:35.568165    2117 ssh_runner.go:148] Run: sudo systemctl start kubelet
I0701 12:18:35.582940    2117 certs.go:52] Setting up /home/developer/.minikube/profiles/minikube for IP: 172.17.0.3
I0701 12:18:35.582985    2117 certs.go:169] skipping minikubeCA CA generation: /home/developer/.minikube/ca.key
I0701 12:18:35.583002    2117 certs.go:169] skipping proxyClientCA CA generation: /home/developer/.minikube/proxy-client-ca.key
I0701 12:18:35.583048    2117 certs.go:273] generating minikube-user signed cert: /home/developer/.minikube/profiles/minikube/client.key
I0701 12:18:35.583061    2117 crypto.go:69] Generating cert /home/developer/.minikube/profiles/minikube/client.crt with IP's: []
I0701 12:18:35.822360    2117 crypto.go:157] Writing cert to /home/developer/.minikube/profiles/minikube/client.crt ...
I0701 12:18:35.822397    2117 lock.go:35] WriteFile acquiring /home/developer/.minikube/profiles/minikube/client.crt: {Name:mk03263bf2f7643668e5b28b38566bb9267ce519 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 12:18:35.822996    2117 crypto.go:165] Writing key to /home/developer/.minikube/profiles/minikube/client.key ...
I0701 12:18:35.823008    2117 lock.go:35] WriteFile acquiring /home/developer/.minikube/profiles/minikube/client.key: {Name:mkbd233eee7ad636b62d4202793bd35d8ecee3ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 12:18:35.823108    2117 certs.go:273] generating minikube signed cert: /home/developer/.minikube/profiles/minikube/apiserver.key.0f3e66d0
I0701 12:18:35.823121    2117 crypto.go:69] Generating cert /home/developer/.minikube/profiles/minikube/apiserver.crt.0f3e66d0 with IP's: [172.17.0.3 10.96.0.1 127.0.0.1 10.0.0.1]
I0701 12:18:35.893481    2117 crypto.go:157] Writing cert to /home/developer/.minikube/profiles/minikube/apiserver.crt.0f3e66d0 ...
I0701 12:18:35.893496    2117 lock.go:35] WriteFile acquiring /home/developer/.minikube/profiles/minikube/apiserver.crt.0f3e66d0: {Name:mkff1b2dc662839fd63290ddb12fd6767ff4b188 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 12:18:35.893599    2117 crypto.go:165] Writing key to /home/developer/.minikube/profiles/minikube/apiserver.key.0f3e66d0 ...
I0701 12:18:35.893613    2117 lock.go:35] WriteFile acquiring /home/developer/.minikube/profiles/minikube/apiserver.key.0f3e66d0: {Name:mke9f65076ed5150a0bf30c939b0ecbf24c4497e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 12:18:35.893740    2117 certs.go:284] copying /home/developer/.minikube/profiles/minikube/apiserver.crt.0f3e66d0 -> /home/developer/.minikube/profiles/minikube/apiserver.crt
I0701 12:18:35.893818    2117 certs.go:288] copying /home/developer/.minikube/profiles/minikube/apiserver.key.0f3e66d0 -> /home/developer/.minikube/profiles/minikube/apiserver.key
I0701 12:18:35.893932    2117 certs.go:273] generating aggregator signed cert: /home/developer/.minikube/profiles/minikube/proxy-client.key
I0701 12:18:35.893944    2117 crypto.go:69] Generating cert /home/developer/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0701 12:18:36.127384    2117 crypto.go:157] Writing cert to /home/developer/.minikube/profiles/minikube/proxy-client.crt ...
I0701 12:18:36.127422    2117 lock.go:35] WriteFile acquiring /home/developer/.minikube/profiles/minikube/proxy-client.crt: {Name:mkacee8abb20a2e96d1aed26b36096fa941fe92b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 12:18:36.127665    2117 crypto.go:165] Writing key to /home/developer/.minikube/profiles/minikube/proxy-client.key ...
I0701 12:18:36.127678    2117 lock.go:35] WriteFile acquiring /home/developer/.minikube/profiles/minikube/proxy-client.key: {Name:mk20b76469a4fa4c0ee3474896ba02d677b76af9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 12:18:36.127848    2117 certs.go:348] found cert: /home/developer/.minikube/certs/home/developer/.minikube/certs/ca-key.pem (1679 bytes)
I0701 12:18:36.127923    2117 certs.go:348] found cert: /home/developer/.minikube/certs/home/developer/.minikube/certs/ca.pem (1046 bytes)
I0701 12:18:36.127954    2117 certs.go:348] found cert: /home/developer/.minikube/certs/home/developer/.minikube/certs/cert.pem (1086 bytes)
I0701 12:18:36.127977    2117 certs.go:348] found cert: /home/developer/.minikube/certs/home/developer/.minikube/certs/key.pem (1675 bytes)
I0701 12:18:36.129166    2117 ssh_runner.go:215] scp /home/developer/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1350 bytes)
I0701 12:18:36.146629    2117 ssh_runner.go:215] scp /home/developer/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0701 12:18:36.160811    2117 ssh_runner.go:215] scp /home/developer/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1103 bytes)
I0701 12:18:36.174715    2117 ssh_runner.go:215] scp /home/developer/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0701 12:18:36.189275    2117 ssh_runner.go:215] scp /home/developer/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1066 bytes)
I0701 12:18:36.203636    2117 ssh_runner.go:215] scp /home/developer/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0701 12:18:36.217205    2117 ssh_runner.go:215] scp /home/developer/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1074 bytes)
I0701 12:18:36.230940    2117 ssh_runner.go:215] scp /home/developer/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0701 12:18:36.244492    2117 ssh_runner.go:215] scp /home/developer/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1066 bytes)
I0701 12:18:36.258762    2117 ssh_runner.go:215] scp memory --> /var/lib/minikube/kubeconfig (392 bytes)
I0701 12:18:36.272318    2117 ssh_runner.go:148] Run: openssl version
I0701 12:18:36.277168    2117 ssh_runner.go:148] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0701 12:18:36.283847    2117 ssh_runner.go:148] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0701 12:18:36.286553    2117 certs.go:389] hashing: -rw-r--r--. 1 root root 1066 Jun 29 15:37 /usr/share/ca-certificates/minikubeCA.pem
I0701 12:18:36.286581    2117 ssh_runner.go:148] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0701 12:18:36.290799    2117 ssh_runner.go:148] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0701 12:18:36.296527    2117 kubeadm.go:293] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.3 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf} {Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.3 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]}
I0701 12:18:36.296666    2117 ssh_runner.go:148] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0701 12:18:36.334805    2117 ssh_runner.go:148] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0701 12:18:36.340308    2117 ssh_runner.go:148] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0701 12:18:36.346077    2117 kubeadm.go:211] ignoring SystemVerification for kubeadm because of docker driver
I0701 12:18:36.346116    2117 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0701 12:18:36.351317    2117 kubeadm.go:147] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0701 12:18:36.351347    2117 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0701 12:22:39.154971    2117 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": (4m2.803595649s)
💥  initialization failed, will try again: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 3.10.0-1127.13.1.el7.x86_64
DOCKER_VERSION: 19.03.2
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [172.17.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [172.17.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

    Unfortunately, an error has occurred:
        timed out waiting for the condition

    This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

    Additionally, a control plane component may have crashed or exited when started by the container runtime.
    To troubleshoot, list all containers using your preferred container runtimes CLI.

    Here is one example how you may list all Kubernetes containers running in docker:
        - 'docker ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'docker logs CONTAINERID'

stderr:
W0701 11:18:36.390788     782 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
    [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/3.10.0-1127.13.1.el7.x86_64\n", err: exit status 1
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0701 11:18:39.145372     782 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0701 11:18:39.146258     782 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
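
Of the preflight warnings above, the bridge-nf-call-iptables one is host-level: the br_netfilter module is not loaded on the CentOS 7 host, so the sysctl file does not exist. A minimal sketch of the usual remedy (assumes root access; the setting does not persist across reboots unless also added under /etc/modules-load.d and /etc/sysctl.d):

    # Load the bridge netfilter module so the sysctl appears
    sudo modprobe br_netfilter
    # Make bridged traffic traverse iptables, as kubeadm expects
    echo 1 | sudo tee /proc/sys/net/bridge/bridge-nf-call-iptables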

I0701 12:22:39.155299    2117 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0701 12:22:39.676884    2117 ssh_runner.go:148] Run: sudo systemctl stop -f kubelet
I0701 12:22:39.686016    2117 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0701 12:22:39.723419    2117 kubeadm.go:211] ignoring SystemVerification for kubeadm because of docker driver
I0701 12:22:39.723483    2117 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0701 12:22:39.729183    2117 kubeadm.go:147] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0701 12:22:39.729220    2117 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0701 12:26:40.931424    2117 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": (4m1.202166467s)
I0701 12:26:40.931489    2117 kubeadm.go:295] StartCluster complete in 8m4.634971786s
I0701 12:26:40.931817    2117 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0701 12:26:40.970183    2117 logs.go:203] 0 containers: []
W0701 12:26:40.970210    2117 logs.go:205] No container was found matching "kube-apiserver"
I0701 12:26:40.970263    2117 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0701 12:26:41.007065    2117 logs.go:203] 0 containers: []
W0701 12:26:41.007094    2117 logs.go:205] No container was found matching "etcd"
I0701 12:26:41.007163    2117 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0701 12:26:41.044693    2117 logs.go:203] 0 containers: []
W0701 12:26:41.044720    2117 logs.go:205] No container was found matching "coredns"
I0701 12:26:41.044783    2117 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0701 12:26:41.080720    2117 logs.go:203] 0 containers: []
W0701 12:26:41.080750    2117 logs.go:205] No container was found matching "kube-scheduler"
I0701 12:26:41.080804    2117 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0701 12:26:41.117049    2117 logs.go:203] 0 containers: []
W0701 12:26:41.117075    2117 logs.go:205] No container was found matching "kube-proxy"
I0701 12:26:41.117131    2117 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0701 12:26:41.153116    2117 logs.go:203] 0 containers: []
W0701 12:26:41.153144    2117 logs.go:205] No container was found matching "kubernetes-dashboard"
I0701 12:26:41.153204    2117 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0701 12:26:41.190395    2117 logs.go:203] 0 containers: []
W0701 12:26:41.190428    2117 logs.go:205] No container was found matching "storage-provisioner"
I0701 12:26:41.190490    2117 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0701 12:26:41.228263    2117 logs.go:203] 0 containers: []
W0701 12:26:41.228289    2117 logs.go:205] No container was found matching "kube-controller-manager"
I0701 12:26:41.228320    2117 logs.go:117] Gathering logs for kubelet ...
I0701 12:26:41.228337    2117 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0701 12:26:41.263314    2117 logs.go:117] Gathering logs for dmesg ...
I0701 12:26:41.263341    2117 ssh_runner.go:148] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0701 12:26:41.293538    2117 logs.go:117] Gathering logs for describe nodes ...
I0701 12:26:41.293555    2117 ssh_runner.go:148] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0701 12:26:41.344646    2117 logs.go:124] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:

stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
 output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?

** /stderr **
I0701 12:26:41.344678    2117 logs.go:117] Gathering logs for Docker ...
I0701 12:26:41.344693    2117 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0701 12:26:41.356353    2117 logs.go:117] Gathering logs for container status ...
I0701 12:26:41.356371    2117 ssh_runner.go:148] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W0701 12:26:41.380179    2117 out.go:201] Error starting cluster: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 3.10.0-1127.13.1.el7.x86_64
DOCKER_VERSION: 19.03.2
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

    Unfortunately, an error has occurred:
        timed out waiting for the condition

    This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

    Additionally, a control plane component may have crashed or exited when started by the container runtime.
    To troubleshoot, list all containers using your preferred container runtimes CLI.

    Here is one example how you may list all Kubernetes containers running in docker:
        - 'docker ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'docker logs CONTAINERID'

stderr:
W0701 11:22:39.770201    4183 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
    [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/3.10.0-1127.13.1.el7.x86_64\n", err: exit status 1
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0701 11:22:40.917136    4183 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0701 11:22:40.918064    4183 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

๐Ÿ’ฃ  Error starting cluster: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 3.10.0-1127.13.1.el7.x86_64
DOCKER_VERSION: 19.03.2
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

    Unfortunately, an error has occurred:
        timed out waiting for the condition

    This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

    Additionally, a control plane component may have crashed or exited when started by the container runtime.
    To troubleshoot, list all containers using your preferred container runtimes CLI.

    Here is one example how you may list all Kubernetes containers running in docker:
        - 'docker ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'docker logs CONTAINERID'

stderr:
W0701 11:22:39.770201    4183 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
    [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/3.10.0-1127.13.1.el7.x86_64\n", err: exit status 1
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0701 11:22:40.917136    4183 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0701 11:22:40.918064    4183 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

๐Ÿ˜ฟ  minikube is exiting due to an error. If the above message is not useful, open an issue:
๐Ÿ‘‰  https://github.com/kubernetes/minikube/issues/new/choose
I0701 12:26:41.380780    2117 exit.go:58] WithError(failed to start node)=startup failed: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 3.10.0-1127.13.1.el7.x86_64
DOCKER_VERSION: 19.03.2
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

    Unfortunately, an error has occurred:
        timed out waiting for the condition

    This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

    Additionally, a control plane component may have crashed or exited when started by the container runtime.
    To troubleshoot, list all containers using your preferred container runtimes CLI.

    Here is one example how you may list all Kubernetes containers running in docker:
        - 'docker ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'docker logs CONTAINERID'

stderr:
W0701 11:22:39.770201    4183 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
    [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/3.10.0-1127.13.1.el7.x86_64\n", err: exit status 1
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0701 11:22:40.917136    4183 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0701 11:22:40.918064    4183 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
 called from:
goroutine 1 [running]:
runtime/debug.Stack(0x0, 0x0, 0x0)
    /usr/local/go/src/runtime/debug/stack.go:24 +0x9d
k8s.io/minikube/pkg/minikube/exit.WithError(0x1adff52, 0x14, 0x1db6b80, 0xc0009191e0)
    /app/pkg/minikube/exit/exit.go:58 +0x34
k8s.io/minikube/cmd/minikube/cmd.runStart(0x2b00360, 0xc0007115c0, 0x0, 0x4)
    /app/cmd/minikube/cmd/start.go:203 +0x7f7
github.com/spf13/cobra.(*Command).execute(0x2b00360, 0xc000711580, 0x4, 0x4, 0x2b00360, 0xc000711580)
    /go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:846 +0x2aa
github.com/spf13/cobra.(*Command).ExecuteC(0x2b05220, 0x0, 0x1, 0xc000714ca0)
    /go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:950 +0x349
github.com/spf13/cobra.(*Command).Execute(...)
    /go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:887
k8s.io/minikube/cmd/minikube/cmd.Execute()
    /app/cmd/minikube/cmd/root.go:112 +0x747
main.main()
    /app/cmd/minikube/main.go:66 +0xea

โŒ  [NONE_KUBELET] failed to start node startup failed: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 3.10.0-1127.13.1.el7.x86_64
DOCKER_VERSION: 19.03.2
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

    Unfortunately, an error has occurred:
        timed out waiting for the condition

    This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

    Additionally, a control plane component may have crashed or exited when started by the container runtime.
    To troubleshoot, list all containers using your preferred container runtimes CLI.

    Here is one example how you may list all Kubernetes containers running in docker:
        - 'docker ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'docker logs CONTAINERID'

stderr:
W0701 11:22:39.770201    4183 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
    [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/3.10.0-1127.13.1.el7.x86_64\n", err: exit status 1
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0701 11:22:40.917136    4183 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0701 11:22:40.918064    4183 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster

Full output of minikube start command used, if not already included:

Optional: Full output of minikube logs command:

``` ==> Docker <== -- Logs begin at Wed 2020-07-01 11:18:25 UTC, end at Wed 2020-07-01 11:33:25 UTC. -- Jul 01 11:18:26 minikube systemd[1]: Starting Docker Application Container Engine... Jul 01 11:18:26 minikube dockerd[117]: time="2020-07-01T11:18:26.291012099Z" level=info msg="Starting up" Jul 01 11:18:26 minikube dockerd[117]: time="2020-07-01T11:18:26.299339327Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 01 11:18:26 minikube dockerd[117]: time="2020-07-01T11:18:26.299369227Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 01 11:18:26 minikube dockerd[117]: time="2020-07-01T11:18:26.299389737Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Jul 01 11:18:26 minikube dockerd[117]: time="2020-07-01T11:18:26.299402517Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 01 11:18:26 minikube dockerd[117]: time="2020-07-01T11:18:26.299549328Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0009164f0, CONNECTING" module=grpc Jul 01 11:18:26 minikube dockerd[117]: time="2020-07-01T11:18:26.299579808Z" level=info msg="blockingPicker: the picked transport is not ready, loop back to repick" module=grpc Jul 01 11:18:26 minikube dockerd[117]: time="2020-07-01T11:18:26.310672149Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0009164f0, READY" module=grpc Jul 01 11:18:26 minikube dockerd[117]: time="2020-07-01T11:18:26.311620220Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 01 11:18:26 minikube dockerd[117]: time="2020-07-01T11:18:26.311641680Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 01 11:18:26 minikube dockerd[117]: time="2020-07-01T11:18:26.311658340Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Jul 01 11:18:26 minikube dockerd[117]: time="2020-07-01T11:18:26.311668100Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 01 11:18:26 minikube dockerd[117]: time="2020-07-01T11:18:26.311720280Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0001998d0, CONNECTING" module=grpc Jul 01 11:18:26 minikube dockerd[117]: time="2020-07-01T11:18:26.311739950Z" level=info msg="blockingPicker: the picked transport is not ready, loop back to repick" module=grpc Jul 01 11:18:26 minikube dockerd[117]: time="2020-07-01T11:18:26.311942490Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0001998d0, READY" module=grpc Jul 01 11:18:32 minikube dockerd[117]: time="2020-07-01T11:18:32.502490179Z" level=warning msg="overlay2: the backing xfs filesystem is formatted without d_type support, which leads to incorrect behavior. Reformat the filesystem with ftype=1 to enable d_type support. Backing filesystems without d_type support are not supported." storage-driver=overlay2 Jul 01 11:18:32 minikube dockerd[117]: time="2020-07-01T11:18:32.503497850Z" level=info msg="[graphdriver] using prior storage driver: overlay2" Jul 01 11:18:32 minikube dockerd[117]: time="2020-07-01T11:18:32.524039331Z" level=info msg="Loading containers: start." Jul 01 11:18:32 minikube dockerd[117]: time="2020-07-01T11:18:32.568441725Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Jul 01 11:18:32 minikube dockerd[117]: time="2020-07-01T11:18:32.591907258Z" level=info msg="Loading containers: done." Jul 01 11:18:32 minikube dockerd[117]: time="2020-07-01T11:18:32.651850028Z" level=info msg="Docker daemon" commit=6a30dfca03 graphdriver(s)=overlay2 version=19.03.2 Jul 01 11:18:32 minikube dockerd[117]: time="2020-07-01T11:18:32.651998139Z" level=info msg="Daemon has completed initialization" Jul 01 11:18:32 minikube dockerd[117]: time="2020-07-01T11:18:32.672200089Z" level=info msg="API listen on /run/docker.sock" Jul 01 11:18:32 minikube systemd[1]: Started Docker Application Container Engine. Jul 01 11:18:34 minikube systemd[1]: docker.service: Current command vanished from the unit file, execution of the command list won't be resumed. Jul 01 11:18:34 minikube systemd[1]: Stopping Docker Application Container Engine... Jul 01 11:18:34 minikube dockerd[117]: time="2020-07-01T11:18:34.378610665Z" level=info msg="Processing signal 'terminated'" Jul 01 11:18:34 minikube dockerd[117]: time="2020-07-01T11:18:34.379352535Z" level=info msg="stopping event stream following graceful shutdown" error="" module=libcontainerd namespace=moby Jul 01 11:18:34 minikube dockerd[117]: time="2020-07-01T11:18:34.379703876Z" level=info msg="Daemon shutdown complete" Jul 01 11:18:34 minikube systemd[1]: docker.service: Succeeded. Jul 01 11:18:34 minikube systemd[1]: Stopped Docker Application Container Engine. Jul 01 11:18:34 minikube systemd[1]: Starting Docker Application Container Engine... Jul 01 11:18:34 minikube dockerd[365]: time="2020-07-01T11:18:34.569767206Z" level=info msg="Starting up" Jul 01 11:18:34 minikube dockerd[365]: time="2020-07-01T11:18:34.571829018Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 01 11:18:34 minikube dockerd[365]: time="2020-07-01T11:18:34.571846758Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 01 11:18:34 minikube dockerd[365]: time="2020-07-01T11:18:34.571874598Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Jul 01 11:18:34 minikube dockerd[365]: time="2020-07-01T11:18:34.571885648Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 01 11:18:34 minikube dockerd[365]: time="2020-07-01T11:18:34.571958698Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000835820, CONNECTING" module=grpc Jul 01 11:18:34 minikube dockerd[365]: time="2020-07-01T11:18:34.572313368Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000835820, READY" module=grpc Jul 01 11:18:34 minikube dockerd[365]: time="2020-07-01T11:18:34.573040759Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 01 11:18:34 minikube dockerd[365]: time="2020-07-01T11:18:34.573058639Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 01 11:18:34 minikube dockerd[365]: time="2020-07-01T11:18:34.573077649Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Jul 01 11:18:34 minikube dockerd[365]: time="2020-07-01T11:18:34.573088159Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 01 11:18:34 minikube dockerd[365]: time="2020-07-01T11:18:34.573167529Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000835dc0, CONNECTING" module=grpc Jul 01 11:18:34 minikube dockerd[365]: 
time="2020-07-01T11:18:34.573178189Z" level=info msg="blockingPicker: the picked transport is not ready, loop back to repick" module=grpc Jul 01 11:18:34 minikube dockerd[365]: time="2020-07-01T11:18:34.574485161Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000835dc0, READY" module=grpc Jul 01 11:18:34 minikube dockerd[365]: time="2020-07-01T11:18:34.597292383Z" level=warning msg="overlay2: the backing xfs filesystem is formatted without d_type support, which leads to incorrect behavior. Reformat the filesystem with ftype=1 to enable d_type support. Backing filesystems without d_type support are not supported." storage-driver=overlay2 Jul 01 11:18:34 minikube dockerd[365]: time="2020-07-01T11:18:34.597492474Z" level=info msg="[graphdriver] using prior storage driver: overlay2" Jul 01 11:18:34 minikube dockerd[365]: time="2020-07-01T11:18:34.611209967Z" level=info msg="Loading containers: start." Jul 01 11:18:34 minikube dockerd[365]: time="2020-07-01T11:18:34.699761456Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address" Jul 01 11:18:34 minikube dockerd[365]: time="2020-07-01T11:18:34.735057951Z" level=info msg="Loading containers: done." Jul 01 11:18:34 minikube dockerd[365]: time="2020-07-01T11:18:34.751818718Z" level=info msg="Docker daemon" commit=6a30dfca03 graphdriver(s)=overlay2 version=19.03.2 Jul 01 11:18:34 minikube dockerd[365]: time="2020-07-01T11:18:34.751889238Z" level=info msg="Daemon has completed initialization" Jul 01 11:18:34 minikube systemd[1]: Started Docker Application Container Engine. Jul 01 11:18:34 minikube dockerd[365]: time="2020-07-01T11:18:34.774474311Z" level=info msg="API listen on /var/run/docker.sock" Jul 01 11:18:34 minikube dockerd[365]: time="2020-07-01T11:18:34.774507121Z" level=info msg="API listen on [::]:2376" ==> container status <== CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID ==> describe nodes <== E0701 12:33:25.622601 15334 logs.go:178] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1 stdout: stderr: The connection to the server localhost:8443 was refused - did you specify the right host or port? 
output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **" ==> dmesg <== [ +13.029358] Firewall: *TCP_IN Blocked* IN=eth0 OUT= MAC=2a:62:ba:b5:55:02:00:1b:53:34:6f:c2:08:00 SRC=185.39.11.38 DST=185.79.227.66 LEN=40 TOS=0x00 PREC=0x00 TTL=249 ID=20839 PROTO=TCP SPT=43934 DPT=25589 WINDOW=1024 RES=0x00 SYN URGP=0 [ +25.893642] Firewall: *TCP_IN Blocked* IN=eth0 OUT= MAC=2a:62:ba:b5:55:02:00:1b:53:34:6f:c2:08:00 SRC=89.248.174.201 DST=185.79.227.66 LEN=40 TOS=0x00 PREC=0x00 TTL=249 ID=46386 PROTO=TCP SPT=55536 DPT=5579 WINDOW=1024 RES=0x00 SYN URGP=0 [Jun29 16:34] Firewall: *TCP_IN Blocked* IN=eth0 OUT= MAC=2a:62:ba:b5:55:02:00:1b:53:34:6f:c2:08:00 SRC=185.176.27.14 DST=185.79.227.66 LEN=40 TOS=0x00 PREC=0x00 TTL=247 ID=55253 PROTO=TCP SPT=41962 DPT=31287 WINDOW=1024 RES=0x00 SYN URGP=0 [Jun29 16:35] Firewall: *TCP_IN Blocked* IN=eth0 OUT= MAC=2a:62:ba:b5:55:02:00:1b:53:34:6f:c2:08:00 SRC=23.253.109.27 DST=185.79.227.66 LEN=40 TOS=0x08 PREC=0x40 TTL=238 ID=56337 PROTO=TCP SPT=46390 DPT=12654 WINDOW=1024 RES=0x00 SYN URGP=0 [ +9.278561] Firewall: *TCP_IN Blocked* IN=eth0 OUT= MAC=2a:62:ba:b5:55:02:00:1b:53:34:6f:c2:08:00 SRC=193.27.228.221 DST=185.79.227.66 LEN=40 TOS=0x00 PREC=0x00 TTL=243 ID=37823 PROTO=TCP SPT=43830 DPT=5700 WINDOW=1024 RES=0x00 SYN URGP=0 [ +0.892341] Firewall: *UDP_IN Blocked* IN=eth0 OUT= MAC=2a:62:ba:b5:55:02:00:1b:53:34:6f:c2:08:00 SRC=89.248.168.217 DST=185.79.227.66 LEN=57 TOS=0x00 PREC=0x00 TTL=250 ID=54321 PROTO=UDP SPT=42925 DPT=6656 LEN=37 [ +25.742343] Firewall: *TCP_IN Blocked* IN=eth0 OUT= MAC=2a:62:ba:b5:55:02:00:1b:53:34:6f:c2:08:00 SRC=222.186.19.210 DST=185.79.227.66 LEN=40 TOS=0x08 PREC=0x20 TTL=237 ID=54321 PROTO=TCP SPT=58450 DPT=3129 WINDOW=65535 RES=0x00 SYN URGP=0 [Jun29 16:36] overlayfs: upper fs needs to support d_type. [ +0.107395] overlayfs: upper fs needs to support d_type. [Jun29 16:40] overlayfs: upper fs needs to support d_type. [ +0.109043] overlayfs: upper fs needs to support d_type. [Jun30 18:02] conntrack: generic helper won't handle protocol 47. Please consider loading the specific helper module. [Jul 1 09:04] overlayfs: upper fs needs to support d_type. [ +7.613614] overlayfs: upper fs needs to support d_type. [ +1.953702] overlayfs: upper fs needs to support d_type. [ +0.102965] overlayfs: upper fs needs to support d_type. [ +0.589074] overlayfs: upper fs needs to support d_type. [ +0.102715] overlayfs: upper fs needs to support d_type. [Jul 1 09:19] overlayfs: upper fs needs to support d_type. [ +0.102207] overlayfs: upper fs needs to support d_type. [ +21.488652] overlayfs: upper fs needs to support d_type. [ +7.332885] overlayfs: upper fs needs to support d_type. [ +2.010534] overlayfs: upper fs needs to support d_type. [ +0.137813] overlayfs: upper fs needs to support d_type. [ +0.596313] overlayfs: upper fs needs to support d_type. [ +0.102391] overlayfs: upper fs needs to support d_type. [Jul 1 09:29] overlayfs: upper fs needs to support d_type. [Jul 1 09:30] overlayfs: upper fs needs to support d_type. [ +1.932887] overlayfs: upper fs needs to support d_type. [ +0.106888] overlayfs: upper fs needs to support d_type. [ +0.593185] overlayfs: upper fs needs to support d_type. [ +0.105689] overlayfs: upper fs needs to support d_type. [Jul 1 09:41] overlayfs: upper fs needs to support d_type. [ +0.106641] overlayfs: upper fs needs to support d_type. [Jul 1 09:42] overlayfs: upper fs needs to support d_type. 
[ +0.105748] overlayfs: upper fs needs to support d_type. [Jul 1 09:57] overlayfs: upper fs needs to support d_type. [ +7.820441] overlayfs: upper fs needs to support d_type. [ +2.219126] overlayfs: upper fs needs to support d_type. [ +0.108520] overlayfs: upper fs needs to support d_type. [Jul 1 10:55] overlayfs: upper fs needs to support d_type. [ +2.219523] overlayfs: upper fs needs to support d_type. [Jul 1 10:57] overlayfs: upper fs needs to support d_type. [ +2.241404] overlayfs: upper fs needs to support d_type. [ +2.254684] overlayfs: upper fs needs to support d_type. [Jul 1 10:58] overlayfs: upper fs needs to support d_type. [ +2.242040] overlayfs: upper fs needs to support d_type. [ +2.242205] overlayfs: upper fs needs to support d_type. [Jul 1 11:03] overlayfs: upper fs needs to support d_type. [ +6.707916] overlayfs: upper fs needs to support d_type. [ +20.751802] overlayfs: upper fs needs to support d_type. [ +6.906139] overlayfs: upper fs needs to support d_type. [ +0.226186] device-mapper: thin: Deletion of thin device 205 failed. [ +0.016530] device-mapper: ioctl: remove_all left 5 open device(s) [ +1.876379] overlayfs: upper fs needs to support d_type. [ +0.198387] overlayfs: upper fs needs to support d_type. [Jul 1 11:18] overlayfs: upper fs needs to support d_type. [ +6.329076] overlayfs: upper fs needs to support d_type. [ +1.933939] overlayfs: upper fs needs to support d_type. [ +0.163276] overlayfs: upper fs needs to support d_type. ==> kernel <== 11:33:25 up 1 day, 19:58, 0 users, load average: 0.04, 0.04, 0.09 Linux minikube 3.10.0-1127.13.1.el7.x86_64 #1 SMP Tue Jun 23 15:46:38 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 19.10" ==> kubelet <== -- Logs begin at Wed 2020-07-01 11:18:25 UTC, end at Wed 2020-07-01 11:33:25 UTC. 
-- Jul 01 11:33:21 minikube kubelet[4457]: E0701 11:33:21.158436 4457 kuberuntime_sandbox.go:41] GeneratePodSandboxConfig for pod "kube-controller-manager-minikube_kube-system(0cc28924ac57b7780c934826bdeba80a)" failed: open /run/systemd/resolve/resolv.conf: no such file or directory Jul 01 11:33:21 minikube kubelet[4457]: E0701 11:33:21.158460 4457 kuberuntime_manager.go:727] createPodSandbox for pod "kube-controller-manager-minikube_kube-system(0cc28924ac57b7780c934826bdeba80a)" failed: open /run/systemd/resolve/resolv.conf: no such file or directory Jul 01 11:33:21 minikube kubelet[4457]: E0701 11:33:21.158494 4457 pod_workers.go:191] Error syncing pod 0cc28924ac57b7780c934826bdeba80a ("kube-controller-manager-minikube_kube-system(0cc28924ac57b7780c934826bdeba80a)"), skipping: failed to "CreatePodSandbox" for "kube-controller-manager-minikube_kube-system(0cc28924ac57b7780c934826bdeba80a)" with CreatePodSandboxError: "GeneratePodSandboxConfig for pod \"kube-controller-manager-minikube_kube-system(0cc28924ac57b7780c934826bdeba80a)\" failed: open /run/systemd/resolve/resolv.conf: no such file or directory" Jul 01 11:33:21 minikube kubelet[4457]: E0701 11:33:21.177786 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:21 minikube kubelet[4457]: E0701 11:33:21.277984 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:21 minikube kubelet[4457]: E0701 11:33:21.378321 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:21 minikube kubelet[4457]: E0701 11:33:21.412981 4457 event.go:269] Unable to write event: 'Post https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events: dial tcp 172.17.0.3:8443: connect: connection refused' (may retry after sleeping) Jul 01 11:33:21 minikube kubelet[4457]: E0701 11:33:21.413015 4457 event.go:214] Unable to write event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.161d9d177488c21f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfb73a61cd593c1f, ext:6857332797, loc:(*time.Location)(0x701d4a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfb73a61cd593c1f, ext:6857332797, loc:(*time.Location)(0x701d4a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}' (retry limit exceeded!) 
Jul 01 11:33:21 minikube kubelet[4457]: E0701 11:33:21.413721 4457 event.go:269] Unable to write event: 'Patch https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events/minikube.161d9d17744652f5: dial tcp 172.17.0.3:8443: connect: connection refused' (may retry after sleeping) Jul 01 11:33:21 minikube kubelet[4457]: E0701 11:33:21.478519 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:21 minikube kubelet[4457]: E0701 11:33:21.578825 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:21 minikube kubelet[4457]: E0701 11:33:21.679160 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:21 minikube kubelet[4457]: E0701 11:33:21.779489 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:21 minikube kubelet[4457]: E0701 11:33:21.879734 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:21 minikube kubelet[4457]: E0701 11:33:21.979929 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:22 minikube kubelet[4457]: E0701 11:33:22.080209 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:22 minikube kubelet[4457]: E0701 11:33:22.180421 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:22 minikube kubelet[4457]: E0701 11:33:22.280648 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:22 minikube kubelet[4457]: E0701 11:33:22.381045 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:22 minikube kubelet[4457]: E0701 11:33:22.481309 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:22 minikube kubelet[4457]: E0701 11:33:22.581645 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:22 minikube kubelet[4457]: E0701 11:33:22.681937 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:22 minikube kubelet[4457]: E0701 11:33:22.782243 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:22 minikube kubelet[4457]: E0701 11:33:22.802858 4457 controller.go:136] failed to ensure node lease exists, will retry in 7s, error: Get https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube?timeout=10s: dial tcp 172.17.0.3:8443: connect: connection refused Jul 01 11:33:22 minikube kubelet[4457]: E0701 11:33:22.882559 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:22 minikube kubelet[4457]: E0701 11:33:22.982880 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:23 minikube kubelet[4457]: E0701 11:33:23.083204 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:23 minikube kubelet[4457]: I0701 11:33:23.141410 4457 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach Jul 01 11:33:23 minikube kubelet[4457]: E0701 11:33:23.157077 4457 kuberuntime_sandbox.go:41] GeneratePodSandboxConfig for pod "kube-apiserver-minikube_kube-system(6ff2e3bf96dbdcdd33879625130d5ccc)" failed: open /run/systemd/resolve/resolv.conf: no such file or directory Jul 01 11:33:23 minikube kubelet[4457]: E0701 11:33:23.157096 4457 kuberuntime_manager.go:727] createPodSandbox for pod "kube-apiserver-minikube_kube-system(6ff2e3bf96dbdcdd33879625130d5ccc)" failed: open /run/systemd/resolve/resolv.conf: no such file or directory Jul 01 11:33:23 minikube kubelet[4457]: E0701 11:33:23.157123 4457 pod_workers.go:191] Error syncing pod 6ff2e3bf96dbdcdd33879625130d5ccc ("kube-apiserver-minikube_kube-system(6ff2e3bf96dbdcdd33879625130d5ccc)"), skipping: failed to "CreatePodSandbox" for "kube-apiserver-minikube_kube-system(6ff2e3bf96dbdcdd33879625130d5ccc)" with 
CreatePodSandboxError: "GeneratePodSandboxConfig for pod \"kube-apiserver-minikube_kube-system(6ff2e3bf96dbdcdd33879625130d5ccc)\" failed: open /run/systemd/resolve/resolv.conf: no such file or directory" Jul 01 11:33:23 minikube kubelet[4457]: E0701 11:33:23.183557 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:23 minikube kubelet[4457]: E0701 11:33:23.283802 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:23 minikube kubelet[4457]: E0701 11:33:23.384075 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:23 minikube kubelet[4457]: E0701 11:33:23.484280 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:23 minikube kubelet[4457]: E0701 11:33:23.584638 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:23 minikube kubelet[4457]: E0701 11:33:23.684889 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:23 minikube kubelet[4457]: E0701 11:33:23.785031 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:23 minikube kubelet[4457]: E0701 11:33:23.885263 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:23 minikube kubelet[4457]: E0701 11:33:23.985586 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:24 minikube kubelet[4457]: E0701 11:33:24.085813 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:24 minikube kubelet[4457]: E0701 11:33:24.104307 4457 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:517: Failed to list *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused Jul 01 11:33:24 minikube kubelet[4457]: E0701 11:33:24.186133 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:24 minikube kubelet[4457]: E0701 11:33:24.286509 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:24 minikube kubelet[4457]: E0701 11:33:24.386753 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:24 minikube kubelet[4457]: I0701 11:33:24.391086 4457 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach Jul 01 11:33:24 minikube kubelet[4457]: I0701 11:33:24.407379 4457 kubelet_node_status.go:70] Attempting to register node minikube Jul 01 11:33:24 minikube kubelet[4457]: E0701 11:33:24.407624 4457 kubelet_node_status.go:92] Unable to register node "minikube" with API server: Post https://control-plane.minikube.internal:8443/api/v1/nodes: dial tcp 172.17.0.3:8443: connect: connection refused Jul 01 11:33:24 minikube kubelet[4457]: E0701 11:33:24.487092 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:24 minikube kubelet[4457]: E0701 11:33:24.587412 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:24 minikube kubelet[4457]: E0701 11:33:24.687679 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:24 minikube kubelet[4457]: E0701 11:33:24.787907 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:24 minikube kubelet[4457]: E0701 11:33:24.888263 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:24 minikube kubelet[4457]: E0701 11:33:24.988599 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:25 minikube kubelet[4457]: E0701 11:33:25.088854 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:25 minikube kubelet[4457]: E0701 11:33:25.189048 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:25 minikube kubelet[4457]: E0701 11:33:25.289228 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:25 minikube kubelet[4457]: E0701 11:33:25.389414 4457 
kubelet.go:2267] node "minikube" not found Jul 01 11:33:25 minikube kubelet[4457]: E0701 11:33:25.489559 4457 kubelet.go:2267] node "minikube" not found Jul 01 11:33:25 minikube kubelet[4457]: E0701 11:33:25.589815 4457 kubelet.go:2267] node "minikube" not found ```

Any help would be greatly appreciated!

Jimmyjim123 commented 4 years ago

Based on what you've shown, it looks like you just want to get it up and running. When I use Minikube, I just use the command:

minikube start --vm-driver=virtualbox --memory=12000 --cpus=4 --kubernetes-version=v1.15.9

DNCoelho commented 4 years ago

@Jimmyjim123 Thank you for the answer; however, I don't intend to get minikube working with a VM driver. What I want is to get it working with the docker driver inside a VM. Sorry if I was not clear.

afbjorklund commented 4 years ago

Strange that it says The system verification failed. but then doesn't show any errors (only warnings)?

afbjorklund commented 4 years ago

I wonder about this one: --extra-config=kubelet.resolv-conf=/run/systemd/resolve/resolv.conf

It seems related to this kubelet failure: failed: open /run/systemd/resolve/resolv.conf: no such file or directory

Why was it added?
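
For reference, a quick sketch of how one could check whether that path exists, both on the host and inside the minikube container (assuming the docker driver; CentOS 7 normally doesn't run systemd-resolved, so the file is usually absent there):

```
# On the CentOS 7 host: systemd-resolved is usually not enabled,
# so this path is expected to be missing
ls -l /run/systemd/resolve/resolv.conf

# Inside the container created by the docker driver
minikube ssh "ls -l /run/systemd/resolve/resolv.conf"
```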

DNCoelho commented 4 years ago

@afbjorklund
Thank you for noticing that! That config was used on an old host to solve a problem related to accessing the host's network, or so it was explained to me. Since this new VM was created with that host as a base, I assumed the config was still needed, but that does not seem to be the case. The cluster starts correctly without it, so I'll close the issue for now. If I confirm that some problem persists, I'll reopen it.
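
For anyone who lands here later, a minimal sketch of the retry with that extra-config dropped (assuming the same docker driver; keep whatever other flags you normally use):

```
# Recreate the cluster without kubelet.resolv-conf pointing at a
# path that does not exist on this host
minikube delete
minikube start --driver=docker
```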

Thank you very much for your help!