Apr 28 07:25:03 minikube kubelet[25347]: F0428 07:25:03.360543 25347 kubelet.go:1383] Failed to start ContainerManager failed to get rootfs info: failed to get device for dir "/var/lib/kubelet": could not find device with major: 0, minor: 28 in cached partitions map
Any idea what that's about? I've never seen this error before. Any chance that you are using btrfs? I ask because this may be related to https://github.com/kubernetes/kubernetes/issues/65204
To help debug, do you mind sharing the result of:
minikube ssh "sudo grep '/var ' /proc/mounts"
I suspect that we have an issue with btrfs here:
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.371571 25556 docker_service.go:258] Docker Info: &{ID:JJU7:OSC4:67QH:5P6G:ZRID:BJZK:5B3A:SRU5:K4BX:YQBV:2H22:MGXF Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem btrfs] [Supports d_type true] [Native Overlay Diff false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2020-04-28T07:25:04.366616394Z LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.6.7-arch1-1 OperatingSys
Yep - I am using btrfs. My setup is using dm-crypt + LUKS on the physical drive, then I mount a btrfs subvolume to /.
% minikube ssh "sudo grep '/var ' /proc/mounts"
/dev/mapper/cryptroot /var btrfs rw,relatime,ssd,space_cache,subvolid=257,subvol=/root/var/lib/docker/volumes/minikube/_data 0 0
For reference, the btrfs mounts directly on my system are:
% cat /proc/mounts | grep btrfs
/dev/mapper/cryptroot / btrfs rw,relatime,ssd,space_cache,subvolid=257,subvol=/root 0 0
/dev/mapper/cryptroot /mnt/system btrfs rw,relatime,ssd,space_cache,subvolid=5,subvol=/ 0 0
/dev/mapper/cryptroot /var/lib/docker/btrfs btrfs rw,relatime,ssd,space_cache,subvolid=257,subvol=/root/var/lib/docker/btrfs 0 0
@solarnz I noticed you are using Arch Linux ("minikube v1.10.0-beta.1 on Arch"). Do you mind checking whether the overlayfs module has been loaded in the kernel? I ask because even though your own docker is installed on btrfs, minikube's inner docker is installed on overlay2 (the default), because kubeadm does NOT like btrfs and it fails its system verification.
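If it turns out not to be loaded, checking and loading it (and persisting that across reboots) would look roughly like this; the modules-load.d path is just the standard systemd location, shown here as an illustration rather than something minikube itself requires:
% lsmod | grep overlay
% sudo modprobe overlay
% echo overlay | sudo tee /etc/modules-load.d/overlay.conf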
@medyagh sure,
% lsmod | grep overlay
overlay 135168 0
It looks like it has been loaded into the kernel.
I also tried forcing docker to use the overlay2 storage driver, including removing the /var/lib/docker directory; however, there was no change in the result.
@solarnz could you please paste the output of
cat /etc/docker/daemon.json
You would need to change the docker daemon settings on your system to use overlay2.
I have modified my docker daemon.json file to include the setting to use overlay2:
{
"bip": "10.255.0.1/17",
"fixed-cidr": "10.255.0.0/17",
"default-address-pools" : [
{
"base" : "10.255.128.0/17",
"size" : 24
}
],
"exec-opts": ["native.cgroupdriver=systemd"],
"storage-driver": "overlay2"
}
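For completeness, a daemon.json change only takes effect after the docker daemon is restarted; restarting and confirming the active storage driver might look like the sketch below (the docker info dump that follows shows the same information):
% sudo systemctl restart docker
% docker info --format '{{.Driver}}'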
# docker info
Client:
Debug Mode: false
Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 19.03.8-ce
Storage Driver: overlay2
Backing Filesystem: <unknown>
Supports d_type: true
Native Overlay Diff: false
Logging Driver: json-file
Cgroup Driver: systemd
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: d76c121f76a5fc8a462dc64594aea72fe18e1178.m
runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 5.6.13-arch1-1
Operating System: Arch Linux
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 31.25GiB
Name: chris-trotman-laptop
ID: HRXK:HE53:NN4E:I5MZ:XTSQ:6VPD:YNGZ:XX7U:2JY6:SCSP:4OIC:3YL7
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Minikube still couldn't start.
% minikube start --driver=docker --v=5 --alsologtostderr
I0520 09:14:05.607018 31269 start.go:99] hostinfo: {"hostname":"chris-trotman-laptop","uptime":57738,"bootTime":1589872307,"procs":308,"os":"linux","platform":"arch","platformFamily":"arch","platformVersion":"","kernelVersion":"5.6.13-arch1-1","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"a4ab05cf-cb22-4c07-88c2-9bf79092f646"}
I0520 09:14:05.607479 31269 start.go:109] virtualization: kvm host
😄 minikube v1.10.1 on Arch
I0520 09:14:05.607629 31269 driver.go:253] Setting default libvirt URI to qemu:///system
I0520 09:14:05.654899 31269 docker.go:95] docker version: linux-19.03.8-ce
✨ Using the docker driver based on user configuration
I0520 09:14:05.655029 31269 start.go:215] selected driver: docker
I0520 09:14:05.655040 31269 start.go:594] validating driver "docker" against <nil>
I0520 09:14:05.655052 31269 start.go:600] status for docker: {Installed:true Healthy:true Error:<nil> Fix: Doc:}
I0520 09:14:05.655071 31269 start.go:917] auto setting extra-config to "kubeadm.pod-network-cidr=10.244.0.0/16".
I0520 09:14:05.655145 31269 start_flags.go:217] no existing cluster config was found, will generate one from the flags
I0520 09:14:05.655361 31269 cli_runner.go:108] Run: docker system info --format "{{json .}}"
I0520 09:14:05.750523 31269 start_flags.go:231] Using suggested 8000MB memory alloc based on sys=32002MB, container=32002MB
I0520 09:14:05.750709 31269 start_flags.go:558] Wait components to verify : map[apiserver:true system_pods:true]
👍 Starting control plane node minikube in cluster minikube
I0520 09:14:05.750848 31269 cache.go:104] Beginning downloading kic artifacts for docker with docker
🚜 Pulling base image ...
I0520 09:14:05.794120 31269 preload.go:81] Checking if preload exists for k8s version v1.18.2 and runtime docker
I0520 09:14:05.794159 31269 cache.go:110] Downloading gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 to local daemon
I0520 09:14:05.794179 31269 preload.go:96] Found local preload: /home/chris/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4
I0520 09:14:05.794189 31269 image.go:98] Writing gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 to local daemon
I0520 09:14:05.794191 31269 cache.go:48] Caching tarball of preloaded images
I0520 09:14:05.794216 31269 preload.go:122] Found /home/chris/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0520 09:14:05.794226 31269 cache.go:51] Finished verifying existence of preloaded tar for v1.18.2 on docker
I0520 09:14:05.794226 31269 image.go:103] Getting image gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438
I0520 09:14:05.794451 31269 profile.go:156] Saving config to /home/chris/.minikube/profiles/minikube/config.json ...
I0520 09:14:05.794577 31269 lock.go:35] WriteFile acquiring /home/chris/.minikube/profiles/minikube/config.json: {Name:mk450fd4eda337c7ddd64ef0cf55f5d70f3fb5cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0520 09:14:07.543763 31269 image.go:112] Writing image gcr.io/k8s-minikube/kicbase:v0.0.10
I0520 09:15:48.765594 31269 image.go:123] Pulling image gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438
I0520 09:15:51.417704 31269 cache.go:132] Successfully downloaded all kic artifacts
I0520 09:15:51.417832 31269 start.go:223] acquiring machines lock for minikube: {Name:mkec809913d626154fe8c3badcd878ae0c8a6125 Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0520 09:15:51.418100 31269 start.go:227] acquired machines lock for "minikube" in 203.847µs
I0520 09:15:51.418187 31269 start.go:83] Provisioning new machine with config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.2 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]} {Name: IP: Port:8443 KubernetesVersion:v1.18.2 ControlPlane:true Worker:true}
I0520 09:15:51.418447 31269 start.go:104] createHost starting for "" (driver="docker")
🔥 Creating docker container (CPUs=2, Memory=8000MB) ...
I0520 09:15:51.419532 31269 start.go:140] libmachine.API.Create for "minikube" (driver="docker")
I0520 09:15:51.419640 31269 client.go:161] LocalClient.Create starting
I0520 09:15:51.419751 31269 main.go:110] libmachine: Reading certificate data from /home/chris/.minikube/certs/ca.pem
I0520 09:15:51.419874 31269 main.go:110] libmachine: Decoding PEM data...
I0520 09:15:51.419960 31269 main.go:110] libmachine: Parsing certificate...
I0520 09:15:51.420470 31269 main.go:110] libmachine: Reading certificate data from /home/chris/.minikube/certs/cert.pem
I0520 09:15:51.420567 31269 main.go:110] libmachine: Decoding PEM data...
I0520 09:15:51.420633 31269 main.go:110] libmachine: Parsing certificate...
I0520 09:15:51.422165 31269 cli_runner.go:108] Run: docker ps -a --format {{.Names}}
I0520 09:15:51.477156 31269 cli_runner.go:108] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0520 09:15:51.537453 31269 oci.go:98] Successfully created a docker volume minikube
W0520 09:15:51.537515 31269 oci.go:158] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0520 09:15:51.537770 31269 cli_runner.go:108] Run: docker info --format "'{{json .SecurityOptions}}'"
I0520 09:15:51.537562 31269 preload.go:81] Checking if preload exists for k8s version v1.18.2 and runtime docker
I0520 09:15:51.537857 31269 preload.go:96] Found local preload: /home/chris/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4
I0520 09:15:51.537870 31269 kic.go:134] Starting extracting preloaded images to volume ...
I0520 09:15:51.537919 31269 cli_runner.go:108] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/chris/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 -I lz4 -xvf /preloaded.tar -C /extractDir
I0520 09:15:51.868168 31269 cli_runner.go:108] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --security-opt apparmor=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --volume minikube:/var --cpus=2 --memory=8000mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438
I0520 09:15:54.630666 31269 cli_runner.go:150] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --security-opt apparmor=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --volume minikube:/var --cpus=2 --memory=8000mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438: (2.762422589s)
I0520 09:15:54.630742 31269 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0520 09:15:54.696283 31269 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Status}}
I0520 09:15:54.762298 31269 oci.go:212] the created container "minikube" has a running status.
I0520 09:15:54.762331 31269 kic.go:162] Creating ssh key for kic: /home/chris/.minikube/machines/minikube/id_rsa...
I0520 09:15:54.865978 31269 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/machines/minikube/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0520 09:15:54.866048 31269 kic_runner.go:179] docker (temp): /home/chris/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0520 09:15:55.071540 31269 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0520 09:15:55.071566 31269 kic_runner.go:114] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0520 09:16:01.909935 31269 cli_runner.go:150] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/chris/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 -I lz4 -xvf /preloaded.tar -C /extractDir: (10.371913284s)
I0520 09:16:01.910010 31269 kic.go:139] duration metric: took 10.372128 seconds to extract preloaded images to volume
I0520 09:16:01.910267 31269 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Status}}
I0520 09:16:01.977143 31269 machine.go:86] provisioning docker machine ...
I0520 09:16:01.977217 31269 ubuntu.go:166] provisioning hostname "minikube"
I0520 09:16:01.977269 31269 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0520 09:16:02.023760 31269 main.go:110] libmachine: Using SSH client type: native
I0520 09:16:02.023986 31269 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf820] 0x7bf7f0 <nil> [] 0s} 127.0.0.1 32771 <nil> <nil>}
I0520 09:16:02.024012 31269 main.go:110] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0520 09:16:02.179030 31269 main.go:110] libmachine: SSH cmd err, output: <nil>: minikube
I0520 09:16:02.179165 31269 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0520 09:16:02.247783 31269 main.go:110] libmachine: Using SSH client type: native
I0520 09:16:02.247966 31269 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf820] 0x7bf7f0 <nil> [] 0s} 127.0.0.1 32771 <nil> <nil>}
I0520 09:16:02.248000 31269 main.go:110] libmachine: About to run SSH command:
if ! grep -xq '.*\sminikube' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
else
echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
fi
fi
I0520 09:16:02.381849 31269 main.go:110] libmachine: SSH cmd err, output: <nil>:
I0520 09:16:02.381926 31269 ubuntu.go:172] set auth options {CertDir:/home/chris/.minikube CaCertPath:/home/chris/.minikube/certs/ca.pem CaPrivateKeyPath:/home/chris/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/chris/.minikube/machines/server.pem ServerKeyPath:/home/chris/.minikube/machines/server-key.pem ClientKeyPath:/home/chris/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/chris/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/chris/.minikube}
I0520 09:16:02.381982 31269 ubuntu.go:174] setting up certificates
I0520 09:16:02.382127 31269 provision.go:82] configureAuth start
I0520 09:16:02.382228 31269 cli_runner.go:108] Run: docker inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0520 09:16:02.479016 31269 provision.go:131] copyHostCerts
I0520 09:16:02.479072 31269 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/certs/ca.pem -> /home/chris/.minikube/ca.pem
I0520 09:16:02.479107 31269 exec_runner.go:91] found /home/chris/.minikube/ca.pem, removing ...
I0520 09:16:02.479266 31269 exec_runner.go:98] cp: /home/chris/.minikube/certs/ca.pem --> /home/chris/.minikube/ca.pem (1034 bytes)
I0520 09:16:02.479352 31269 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/certs/cert.pem -> /home/chris/.minikube/cert.pem
I0520 09:16:02.479380 31269 exec_runner.go:91] found /home/chris/.minikube/cert.pem, removing ...
I0520 09:16:02.479435 31269 exec_runner.go:98] cp: /home/chris/.minikube/certs/cert.pem --> /home/chris/.minikube/cert.pem (1074 bytes)
I0520 09:16:02.479499 31269 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/certs/key.pem -> /home/chris/.minikube/key.pem
I0520 09:16:02.479527 31269 exec_runner.go:91] found /home/chris/.minikube/key.pem, removing ...
I0520 09:16:02.479571 31269 exec_runner.go:98] cp: /home/chris/.minikube/certs/key.pem --> /home/chris/.minikube/key.pem (1679 bytes)
I0520 09:16:02.479637 31269 provision.go:105] generating server cert: /home/chris/.minikube/machines/server.pem ca-key=/home/chris/.minikube/certs/ca.pem private-key=/home/chris/.minikube/certs/ca-key.pem org=chris.minikube san=[10.255.0.3 localhost 127.0.0.1]
I0520 09:16:02.695001 31269 provision.go:159] copyRemoteCerts
I0520 09:16:02.695070 31269 ssh_runner.go:148] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0520 09:16:02.695136 31269 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0520 09:16:02.739944 31269 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32771 SSHKeyPath:/home/chris/.minikube/machines/minikube/id_rsa Username:docker}
I0520 09:16:02.838402 31269 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0520 09:16:02.838568 31269 ssh_runner.go:215] scp /home/chris/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1034 bytes)
I0520 09:16:02.871941 31269 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/machines/server.pem -> /etc/docker/server.pem
I0520 09:16:02.872012 31269 ssh_runner.go:215] scp /home/chris/.minikube/machines/server.pem --> /etc/docker/server.pem (1115 bytes)
I0520 09:16:02.890317 31269 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0520 09:16:02.890369 31269 ssh_runner.go:215] scp /home/chris/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0520 09:16:02.919053 31269 provision.go:85] duration metric: configureAuth took 536.867488ms
I0520 09:16:02.919084 31269 ubuntu.go:190] setting minikube options for container-runtime
I0520 09:16:02.919303 31269 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0520 09:16:02.963663 31269 main.go:110] libmachine: Using SSH client type: native
I0520 09:16:02.963855 31269 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf820] 0x7bf7f0 <nil> [] 0s} 127.0.0.1 32771 <nil> <nil>}
I0520 09:16:02.963874 31269 main.go:110] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0520 09:16:03.095784 31269 main.go:110] libmachine: SSH cmd err, output: <nil>: overlay
I0520 09:16:03.095907 31269 ubuntu.go:71] root file system type: overlay
I0520 09:16:03.096365 31269 provision.go:290] Updating docker unit: /lib/systemd/system/docker.service ...
I0520 09:16:03.096529 31269 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0520 09:16:03.183131 31269 main.go:110] libmachine: Using SSH client type: native
I0520 09:16:03.183327 31269 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf820] 0x7bf7f0 <nil> [] 0s} 127.0.0.1 32771 <nil> <nil>}
I0520 09:16:03.183490 31269 main.go:110] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
[Service]
Type=notify
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0520 09:16:03.325533 31269 main.go:110] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
[Service]
Type=notify
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0520 09:16:03.325709 31269 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0520 09:16:03.380286 31269 main.go:110] libmachine: Using SSH client type: native
I0520 09:16:03.380511 31269 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf820] 0x7bf7f0 <nil> [] 0s} 127.0.0.1 32771 <nil> <nil>}
I0520 09:16:03.380557 31269 main.go:110] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0520 09:16:04.254668 31269 main.go:110] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2019-08-29 04:42:14.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2020-05-19 23:16:03.321615588 +0000
@@ -8,24 +8,22 @@
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -33,9 +31,10 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
I0520 09:16:04.254770 31269 machine.go:89] provisioned docker machine in 2.277583293s
I0520 09:16:04.254782 31269 client.go:164] LocalClient.Create took 12.835107386s
I0520 09:16:04.254802 31269 start.go:145] duration metric: libmachine.API.Create for "minikube" took 12.835283642s
I0520 09:16:04.254813 31269 start.go:186] post-start starting for "minikube" (driver="docker")
I0520 09:16:04.254824 31269 start.go:196] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0520 09:16:04.254892 31269 ssh_runner.go:148] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0520 09:16:04.254931 31269 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0520 09:16:04.301793 31269 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32771 SSHKeyPath:/home/chris/.minikube/machines/minikube/id_rsa Username:docker}
I0520 09:16:04.411769 31269 ssh_runner.go:148] Run: cat /etc/os-release
I0520 09:16:04.416204 31269 main.go:110] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0520 09:16:04.416249 31269 main.go:110] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0520 09:16:04.416284 31269 main.go:110] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0520 09:16:04.416302 31269 info.go:96] Remote host: Ubuntu 19.10
I0520 09:16:04.416333 31269 filesync.go:118] Scanning /home/chris/.minikube/addons for local assets ...
I0520 09:16:04.416420 31269 filesync.go:118] Scanning /home/chris/.minikube/files for local assets ...
I0520 09:16:04.416480 31269 start.go:189] post-start completed in 161.654948ms
I0520 09:16:04.416882 31269 start.go:107] duration metric: createHost completed in 12.998370713s
I0520 09:16:04.416899 31269 start.go:74] releasing machines lock for "minikube", held for 12.998755005s
I0520 09:16:04.416976 31269 cli_runner.go:108] Run: docker inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0520 09:16:04.465697 31269 profile.go:156] Saving config to /home/chris/.minikube/profiles/minikube/config.json ...
I0520 09:16:04.465708 31269 ssh_runner.go:148] Run: curl -sS -m 2 https://k8s.gcr.io/
I0520 09:16:04.465770 31269 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0520 09:16:04.465993 31269 ssh_runner.go:148] Run: systemctl --version
I0520 09:16:04.466035 31269 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0520 09:16:04.514169 31269 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32771 SSHKeyPath:/home/chris/.minikube/machines/minikube/id_rsa Username:docker}
I0520 09:16:04.515778 31269 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32771 SSHKeyPath:/home/chris/.minikube/machines/minikube/id_rsa Username:docker}
I0520 09:16:04.609972 31269 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I0520 09:16:04.658251 31269 cruntime.go:185] skipping containerd shutdown because we are bound to it
I0520 09:16:04.658460 31269 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service crio
I0520 09:16:04.696971 31269 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I0520 09:16:04.917972 31269 ssh_runner.go:148] Run: sudo systemctl start docker
I0520 09:16:04.930668 31269 ssh_runner.go:148] Run: docker version --format {{.Server.Version}}
🐳 Preparing Kubernetes v1.18.2 on Docker 19.03.2 ...
I0520 09:16:04.982822 31269 cli_runner.go:108] Run: docker network ls --filter name=bridge --format {{.ID}}
I0520 09:16:05.041490 31269 cli_runner.go:108] Run: docker inspect --format "{{(index .IPAM.Config 0).Gateway}}" 68d467ba56af
I0520 09:16:05.106394 31269 network.go:77] got host ip for mount in container by inspect docker network: 10.255.0.1
I0520 09:16:05.106471 31269 start.go:251] checking
I0520 09:16:05.106655 31269 ssh_runner.go:148] Run: grep 10.255.0.1 host.minikube.internal$ /etc/hosts
I0520 09:16:05.111611 31269 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "10.255.0.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
▪ kubeadm.pod-network-cidr=10.244.0.0/16
I0520 09:16:05.121644 31269 preload.go:81] Checking if preload exists for k8s version v1.18.2 and runtime docker
I0520 09:16:05.121675 31269 preload.go:96] Found local preload: /home/chris/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4
I0520 09:16:05.121716 31269 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0520 09:16:05.173742 31269 docker.go:379] Got preloaded images: -- stdout --
kubernetesui/dashboard:v2.0.0
k8s.gcr.io/kube-proxy:v1.18.2
k8s.gcr.io/kube-apiserver:v1.18.2
k8s.gcr.io/kube-scheduler:v1.18.2
k8s.gcr.io/kube-controller-manager:v1.18.2
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
kubernetesui/metrics-scraper:v1.0.2
gcr.io/k8s-minikube/storage-provisioner:v1.8.1
-- /stdout --
I0520 09:16:05.173784 31269 docker.go:317] Images already preloaded, skipping extraction
I0520 09:16:05.173836 31269 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0520 09:16:05.242959 31269 docker.go:379] Got preloaded images: -- stdout --
kubernetesui/dashboard:v2.0.0
k8s.gcr.io/kube-proxy:v1.18.2
k8s.gcr.io/kube-controller-manager:v1.18.2
k8s.gcr.io/kube-scheduler:v1.18.2
k8s.gcr.io/kube-apiserver:v1.18.2
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
kubernetesui/metrics-scraper:v1.0.2
gcr.io/k8s-minikube/storage-provisioner:v1.8.1
-- /stdout --
I0520 09:16:05.242991 31269 cache_images.go:69] Images are preloaded, skipping loading
I0520 09:16:05.243037 31269 kubeadm.go:124] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.255.0.3 APIServerPort:8443 KubernetesVersion:v1.18.2 EtcdDataDir:/var/lib/minikube/etcd ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.255.0.3"]]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:10.255.0.3 ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0520 09:16:05.243137 31269 kubeadm.go:128] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 10.255.0.3
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: "minikube"
kubeletExtraArgs:
node-ip: 10.255.0.3
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "10.255.0.3"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.18.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 10.255.0.3:10249
I0520 09:16:05.243248 31269 ssh_runner.go:148] Run: docker info --format {{.CgroupDriver}}
I0520 09:16:05.324882 31269 kubeadm.go:737] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.2/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-domain=cluster.local --config=/var/lib/kubelet/config.yaml --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.255.0.3 --pod-manifest-path=/etc/kubernetes/manifests
[Install]
config:
{KubernetesVersion:v1.18.2 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:}
I0520 09:16:05.324966 31269 ssh_runner.go:148] Run: sudo ls /var/lib/minikube/binaries/v1.18.2
I0520 09:16:05.332697 31269 binaries.go:43] Found k8s binaries, skipping transfer
I0520 09:16:05.332761 31269 ssh_runner.go:148] Run: sudo mkdir -p /var/tmp/minikube /etc/systemd/system/kubelet.service.d /lib/systemd/system
I0520 09:16:05.348705 31269 ssh_runner.go:215] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1458 bytes)
I0520 09:16:05.372021 31269 ssh_runner.go:215] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (532 bytes)
I0520 09:16:05.393360 31269 ssh_runner.go:215] scp memory --> /lib/systemd/system/kubelet.service (349 bytes)
I0520 09:16:05.413667 31269 start.go:251] checking
I0520 09:16:05.413754 31269 ssh_runner.go:148] Run: grep 10.255.0.3 control-plane.minikube.internal$ /etc/hosts
I0520 09:16:05.417168 31269 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "10.255.0.3 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0520 09:16:05.427435 31269 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I0520 09:16:05.501418 31269 ssh_runner.go:148] Run: sudo systemctl start kubelet
I0520 09:16:05.513809 31269 certs.go:52] Setting up /home/chris/.minikube/profiles/minikube for IP: 10.255.0.3
I0520 09:16:05.513875 31269 certs.go:169] skipping minikubeCA CA generation: /home/chris/.minikube/ca.key
I0520 09:16:05.513908 31269 certs.go:169] skipping proxyClientCA CA generation: /home/chris/.minikube/proxy-client-ca.key
I0520 09:16:05.513976 31269 certs.go:267] generating minikube-user signed cert: /home/chris/.minikube/profiles/minikube/client.key
I0520 09:16:05.513987 31269 crypto.go:69] Generating cert /home/chris/.minikube/profiles/minikube/client.crt with IP's: []
I0520 09:16:05.682270 31269 crypto.go:157] Writing cert to /home/chris/.minikube/profiles/minikube/client.crt ...
I0520 09:16:05.682299 31269 lock.go:35] WriteFile acquiring /home/chris/.minikube/profiles/minikube/client.crt: {Name:mka07a58dd5663c2670aeceac28b6f674efc8b39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0520 09:16:05.682547 31269 crypto.go:165] Writing key to /home/chris/.minikube/profiles/minikube/client.key ...
I0520 09:16:05.682558 31269 lock.go:35] WriteFile acquiring /home/chris/.minikube/profiles/minikube/client.key: {Name:mkf7666bb385a6e9ae21189ba35d84d3b807484f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0520 09:16:05.682683 31269 certs.go:267] generating minikube signed cert: /home/chris/.minikube/profiles/minikube/apiserver.key.eb746497
I0520 09:16:05.682692 31269 crypto.go:69] Generating cert /home/chris/.minikube/profiles/minikube/apiserver.crt.eb746497 with IP's: [10.255.0.3 10.96.0.1 127.0.0.1 10.0.0.1]
I0520 09:16:05.810389 31269 crypto.go:157] Writing cert to /home/chris/.minikube/profiles/minikube/apiserver.crt.eb746497 ...
I0520 09:16:05.810413 31269 lock.go:35] WriteFile acquiring /home/chris/.minikube/profiles/minikube/apiserver.crt.eb746497: {Name:mk6f1a42b5c3dc17333e7b9453fb03df3d061b07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0520 09:16:05.810735 31269 crypto.go:165] Writing key to /home/chris/.minikube/profiles/minikube/apiserver.key.eb746497 ...
I0520 09:16:05.810750 31269 lock.go:35] WriteFile acquiring /home/chris/.minikube/profiles/minikube/apiserver.key.eb746497: {Name:mkc24f38a4bd64f8ff1af66d64fadd9799e384fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0520 09:16:05.810908 31269 certs.go:278] copying /home/chris/.minikube/profiles/minikube/apiserver.crt.eb746497 -> /home/chris/.minikube/profiles/minikube/apiserver.crt
I0520 09:16:05.811011 31269 certs.go:282] copying /home/chris/.minikube/profiles/minikube/apiserver.key.eb746497 -> /home/chris/.minikube/profiles/minikube/apiserver.key
I0520 09:16:05.811078 31269 certs.go:267] generating aggregator signed cert: /home/chris/.minikube/profiles/minikube/proxy-client.key
I0520 09:16:05.811087 31269 crypto.go:69] Generating cert /home/chris/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0520 09:16:05.884279 31269 crypto.go:157] Writing cert to /home/chris/.minikube/profiles/minikube/proxy-client.crt ...
I0520 09:16:05.884307 31269 lock.go:35] WriteFile acquiring /home/chris/.minikube/profiles/minikube/proxy-client.crt: {Name:mkdc81e217efe8a180042cd6a4ac0d23a55e96c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0520 09:16:05.884589 31269 crypto.go:165] Writing key to /home/chris/.minikube/profiles/minikube/proxy-client.key ...
I0520 09:16:05.884601 31269 lock.go:35] WriteFile acquiring /home/chris/.minikube/profiles/minikube/proxy-client.key: {Name:mk49b6ed585271800d057c65a7f077c5e7fbddc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0520 09:16:05.884738 31269 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/profiles/minikube/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0520 09:16:05.884757 31269 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/profiles/minikube/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0520 09:16:05.884767 31269 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/profiles/minikube/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0520 09:16:05.884778 31269 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/profiles/minikube/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0520 09:16:05.884800 31269 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0520 09:16:05.884815 31269 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0520 09:16:05.884824 31269 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0520 09:16:05.884834 31269 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0520 09:16:05.884889 31269 certs.go:342] found cert: /home/chris/.minikube/certs/home/chris/.minikube/certs/ca-key.pem (1679 bytes)
I0520 09:16:05.884953 31269 certs.go:342] found cert: /home/chris/.minikube/certs/home/chris/.minikube/certs/ca.pem (1034 bytes)
I0520 09:16:05.885019 31269 certs.go:342] found cert: /home/chris/.minikube/certs/home/chris/.minikube/certs/cert.pem (1074 bytes)
I0520 09:16:05.885055 31269 certs.go:342] found cert: /home/chris/.minikube/certs/home/chris/.minikube/certs/key.pem (1679 bytes)
I0520 09:16:05.885096 31269 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0520 09:16:05.885779 31269 ssh_runner.go:215] scp /home/chris/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1350 bytes)
I0520 09:16:05.904914 31269 ssh_runner.go:215] scp /home/chris/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0520 09:16:05.926662 31269 ssh_runner.go:215] scp /home/chris/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1103 bytes)
I0520 09:16:05.946542 31269 ssh_runner.go:215] scp /home/chris/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0520 09:16:05.967015 31269 ssh_runner.go:215] scp /home/chris/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1066 bytes)
I0520 09:16:05.987855 31269 ssh_runner.go:215] scp /home/chris/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0520 09:16:06.007271 31269 ssh_runner.go:215] scp /home/chris/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1074 bytes)
I0520 09:16:06.028665 31269 ssh_runner.go:215] scp /home/chris/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0520 09:16:06.050704 31269 ssh_runner.go:215] scp /home/chris/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1066 bytes)
I0520 09:16:06.082922 31269 ssh_runner.go:215] scp memory --> /var/lib/minikube/kubeconfig (392 bytes)
I0520 09:16:06.103789 31269 ssh_runner.go:148] Run: openssl version
I0520 09:16:06.110910 31269 ssh_runner.go:148] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0520 09:16:06.118544 31269 ssh_runner.go:148] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0520 09:16:06.121690 31269 certs.go:383] hashing: -rw-r--r-- 1 root root 1066 Apr 27 00:27 /usr/share/ca-certificates/minikubeCA.pem
I0520 09:16:06.121750 31269 ssh_runner.go:148] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0520 09:16:06.126884 31269 ssh_runner.go:148] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0520 09:16:06.135075 31269 kubeadm.go:293] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.2 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:10.255.0.3 Port:8443 KubernetesVersion:v1.18.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]}
I0520 09:16:06.135264 31269 ssh_runner.go:148] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0520 09:16:06.192050 31269 ssh_runner.go:148] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0520 09:16:06.200042 31269 ssh_runner.go:148] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0520 09:16:06.209841 31269 kubeadm.go:211] ignoring SystemVerification for kubeadm because of docker driver
I0520 09:16:06.209917 31269 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0520 09:16:06.216880 31269 kubeadm.go:147] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0520 09:16:06.216934 31269 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0520 09:18:05.040143 31269 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": (1m58.823132347s)
💥 initialization failed, will try again: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [10.255.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [10.255.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0519 23:16:06.276458 702 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0519 23:16:10.008863 702 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0519 23:16:10.010762 702 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
I0520 09:18:05.041356 31269 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.2:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0520 09:18:06.437551 31269 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.2:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force": (1.396114953s)
I0520 09:18:06.437633 31269 ssh_runner.go:148] Run: sudo systemctl stop -f kubelet
I0520 09:18:06.448879 31269 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0520 09:18:06.501067 31269 kubeadm.go:211] ignoring SystemVerification for kubeadm because of docker driver
I0520 09:18:06.501135 31269 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0520 09:18:06.508102 31269 kubeadm.go:147] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0520 09:18:06.508150 31269 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0520 09:22:07.871833 31269 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": (4m1.363602055s)
I0520 09:22:07.872009 31269 kubeadm.go:295] StartCluster complete in 6m1.736946372s
I0520 09:22:07.872184 31269 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0520 09:22:07.977395 31269 logs.go:203] 0 containers: []
W0520 09:22:07.977425 31269 logs.go:205] No container was found matching "kube-apiserver"
I0520 09:22:07.977483 31269 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0520 09:22:08.033266 31269 logs.go:203] 0 containers: []
W0520 09:22:08.033293 31269 logs.go:205] No container was found matching "etcd"
I0520 09:22:08.033354 31269 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0520 09:22:08.080712 31269 logs.go:203] 0 containers: []
W0520 09:22:08.080737 31269 logs.go:205] No container was found matching "coredns"
I0520 09:22:08.080786 31269 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0520 09:22:08.126296 31269 logs.go:203] 0 containers: []
W0520 09:22:08.126322 31269 logs.go:205] No container was found matching "kube-scheduler"
I0520 09:22:08.126384 31269 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0520 09:22:08.173131 31269 logs.go:203] 0 containers: []
W0520 09:22:08.173160 31269 logs.go:205] No container was found matching "kube-proxy"
I0520 09:22:08.173223 31269 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0520 09:22:08.219810 31269 logs.go:203] 0 containers: []
W0520 09:22:08.219835 31269 logs.go:205] No container was found matching "kubernetes-dashboard"
I0520 09:22:08.219887 31269 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0520 09:22:08.267037 31269 logs.go:203] 0 containers: []
W0520 09:22:08.267080 31269 logs.go:205] No container was found matching "storage-provisioner"
I0520 09:22:08.267134 31269 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0520 09:22:08.316399 31269 logs.go:203] 0 containers: []
W0520 09:22:08.316423 31269 logs.go:205] No container was found matching "kube-controller-manager"
I0520 09:22:08.316436 31269 logs.go:117] Gathering logs for Docker ...
I0520 09:22:08.316452 31269 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0520 09:22:08.331981 31269 logs.go:117] Gathering logs for container status ...
I0520 09:22:08.332005 31269 ssh_runner.go:148] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0520 09:22:08.351479 31269 logs.go:117] Gathering logs for kubelet ...
I0520 09:22:08.351504 31269 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0520 09:22:08.434044 31269 logs.go:117] Gathering logs for dmesg ...
I0520 09:22:08.434078 31269 ssh_runner.go:148] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0520 09:22:08.451904 31269 logs.go:117] Gathering logs for describe nodes ...
I0520 09:22:08.451932 31269 ssh_runner.go:148] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0520 09:22:08.551087 31269 logs.go:124] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
W0520 09:22:08.551132 31269 out.go:201] Error starting cluster: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: read tcp 127.0.0.1:46806->127.0.0.1:10248: read: connection reset by peer.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0519 23:18:06.554295 5829 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0519 23:18:07.856341 5829 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0519 23:18:07.857340 5829 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
Error starting cluster: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: read tcp 127.0.0.1:46806->127.0.0.1:10248: read: connection reset by peer.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0519 23:18:06.554295 5829 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0519 23:18:07.856341 5829 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0519 23:18:07.857340 5829 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
minikube is exiting due to an error. If the above message is not useful, open an issue:
https://github.com/kubernetes/minikube/issues/new/choose
I0520 09:22:08.551471 31269 exit.go:58] WithError(failed to start node)=startup failed: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: read tcp 127.0.0.1:46806->127.0.0.1:10248: read: connection reset by peer.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0519 23:18:06.554295 5829 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0519 23:18:07.856341 5829 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0519 23:18:07.857340 5829 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
called from:
goroutine 1 [running]:
runtime/debug.Stack(0x0, 0x0, 0x0)
/usr/local/go/src/runtime/debug/stack.go:24 +0x9d
k8s.io/minikube/pkg/minikube/exit.WithError(0x1adae43, 0x14, 0x1d9bf60, 0xc0007199c0)
/app/pkg/minikube/exit/exit.go:58 +0x34
k8s.io/minikube/cmd/minikube/cmd.runStart(0x2ae78c0, 0xc000355800, 0x0, 0x3)
/app/cmd/minikube/cmd/start.go:204 +0x7f7
github.com/spf13/cobra.(*Command).execute(0x2ae78c0, 0xc0003557d0, 0x3, 0x3, 0x2ae78c0, 0xc0003557d0)
/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:846 +0x2aa
github.com/spf13/cobra.(*Command).ExecuteC(0x2ae6900, 0x0, 0x1, 0xc000044480)
/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:950 +0x349
github.com/spf13/cobra.(*Command).Execute(...)
/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:887
k8s.io/minikube/cmd/minikube/cmd.Execute()
/app/cmd/minikube/cmd/root.go:112 +0x747
main.main()
/app/cmd/minikube/main.go:66 +0xea
[NONE_KUBELET] failed to start node startup failed: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: read tcp 127.0.0.1:46806->127.0.0.1:10248: read: connection reset by peer.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0519 23:18:06.554295 5829 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0519 23:18:07.856341 5829 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0519 23:18:07.857340 5829 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
Related issue: https://github.com/kubernetes/minikube/issues/4172
I grabbed the output of docker inspect minikube while it was running as well:
% docker inspect minikube
[
{
"Id": "9ae896e2d7353160d850b1ab1cb8d276c99ddd5692e9b676998468232cd1cf76",
"Created": "2020-05-19T23:15:52.108544092Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 31958,
"ExitCode": 0,
"Error": "",
"StartedAt": "2020-05-19T23:15:54.607483251Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:e6bc41c39dc48b2b472936db36aedb28527ce0f675ed1bc20d029125c9ccf578",
"ResolvConfPath": "/var/lib/docker/containers/9ae896e2d7353160d850b1ab1cb8d276c99ddd5692e9b676998468232cd1cf76/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/9ae896e2d7353160d850b1ab1cb8d276c99ddd5692e9b676998468232cd1cf76/hostname",
"HostsPath": "/var/lib/docker/containers/9ae896e2d7353160d850b1ab1cb8d276c99ddd5692e9b676998468232cd1cf76/hosts",
"LogPath": "/var/lib/docker/containers/9ae896e2d7353160d850b1ab1cb8d276c99ddd5692e9b676998468232cd1cf76/9ae896e2d7353160d850b1ab1cb8d276c99ddd5692e9b676998468232cd1cf76-json.log",
"Name": "/minikube",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"minikube:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "default",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"Capabilities": null,
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 8388608000,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 16777216000,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/109202af1d5c4f3b9ec67e5b85035432870eff5ff2e473fd32e00f04a4bc7e10-init/diff:/var/lib/docker/overlay2/97001f1726d3aa4d8faa2791ab60afc80013efd05ce43b653fa8f778c04be09b/diff:/var/lib/docker/overlay2/825c8843c005b044e6cbe80b6f2fa1072ce45e07bd70a956af0c790fdcb54403/diff:/var/lib/docker/overlay2/29da775ccc9c6c136279f449ad732ec1b0e70e8245ca514c17eb8d800734ac86/diff:/var/lib/docker/overlay2/67f6c8e036869da61915d6e65e3764e9a4f42aabf864ba1f4750e50f9a0a2b5e/diff:/var/lib/docker/overlay2/73f77edfc6525c22482758136be0c6ba6276b3b07e18fce486dd0794085c4d7f/diff:/var/lib/docker/overlay2/5e62407e45ebcae5afdedddba4daeabfd82b3f7a21e52479585642511aa010d7/diff:/var/lib/docker/overlay2/1eab65627407c25724d041b5832571577ed3b46cc89166bf3fac9c38c54ba993/diff:/var/lib/docker/overlay2/3cdf1937c2639fa8ac54730be38af5f9ffc33c62941d36c3eba700be533f81fa/diff:/var/lib/docker/overlay2/7ea2109e2eed83eead30384979e04774f9cff2d53aeb453c89cb7d19f7a81e73/diff:/var/lib/docker/overlay2/c3901ca0d6396e8261e05c3dbaa17cc6587fe49af684915bd611c09d7eb75b65/diff:/var/lib/docker/overlay2/9f9497aac94cabb29f8db9f956172f0e1389d7beca548b8146a0dc409a08b6a6/diff:/var/lib/docker/overlay2/d0f26800b7b92db24e144c9c968d60470e41047ffd2c34e1f1652153a65fb287/diff:/var/lib/docker/overlay2/bcb502be953c08c7c698ffe0bb8bcc3b81144a65d2246b26d57e1cb05889e504/diff:/var/lib/docker/overlay2/4a62de26d9833bf297669ac36fa2f55f4035a68c46905c001f4c4d4fe9be2ef4/diff:/var/lib/docker/overlay2/e5264314d7045586164cf3d94ac62caed3ca69d65568054522f8a3a4c93157e7/diff:/var/lib/docker/overlay2/f042b33a7039e4769ea068f8279457d08d1ff0b2a57f403a40d87b5d54bc24b4/diff:/var/lib/docker/overlay2/bc6af7651a08e06b82fabba53021b1dffff4f744055571601a1fb6d3d4ebf900/diff:/var/lib/docker/overlay2/d7fdce89c13587dbcc2fb8f778fc1e76d2698d6bf6aca14884c6dce0dd769f8f/diff:/var/lib/docker/overlay2/e598b3e6d891b9e08921d600a3f5a0e2a59bf0c1629058a9eb3cf16ba15d5683/diff:/var/lib/docker/overlay2/fa206236361099372568dc154f812906577a6ec9c3addca581fdf8c0abe077cf/diff:/var/lib/docker/overlay2/db05f4ddb7a3daec133bef6f64e778bc22aa721b6def8f6556c445dc2934f34b/diff:/var/lib/docker/overlay2/ec1c0879089261d5dd75303b368afb0aac3c368355498621637e8a4020ef67c6/diff",
"MergedDir": "/var/lib/docker/overlay2/109202af1d5c4f3b9ec67e5b85035432870eff5ff2e473fd32e00f04a4bc7e10/merged",
"UpperDir": "/var/lib/docker/overlay2/109202af1d5c4f3b9ec67e5b85035432870eff5ff2e473fd32e00f04a4bc7e10/diff",
"WorkDir": "/var/lib/docker/overlay2/109202af1d5c4f3b9ec67e5b85035432870eff5ff2e473fd32e00f04a4bc7e10/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "minikube",
"Source": "/var/lib/docker/volumes/minikube/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "minikube",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "minikube",
"name.minikube.sigs.k8s.io": "minikube",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "35292450a73751900a855e4e6fcf4e186b88708fbd64a575387b6dbe86e5f3ed",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32771"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32770"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32769"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32768"
}
]
},
"SandboxKey": "/var/run/docker/netns/35292450a737",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "eae87075bf1bf18bd457ab4b8bd9e41830740b5cc0cf745606848557fa44a25d",
"Gateway": "10.255.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "10.255.0.3",
"IPPrefixLen": 17,
"IPv6Gateway": "",
"MacAddress": "02:42:0a:ff:00:03",
"Networks": {
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "68d467ba56af2e7b140b8239c644cf0c04a63a9c317e0a9505203beb4da7fec2",
"EndpointID": "eae87075bf1bf18bd457ab4b8bd9e41830740b5cc0cf745606848557fa44a25d",
"Gateway": "10.255.0.1",
"IPAddress": "10.255.0.3",
"IPPrefixLen": 17,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:0a:ff:00:03",
"DriverOpts": null
}
}
}
}
]
and it does appear that the minikube container is using the overlay2 storage driver.
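For reference, the same check can be pulled straight out of that inspect output with docker's --format templating (not something from the original report, just a shorter way to read the two fields quoted above):
# prints the container's storage driver and graph driver name ("Driver" and "GraphDriver.Name" above)
docker inspect minikube --format '{{.Driver}} {{.GraphDriver.Name}}'
# in this dump both report: overlay2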
Hey @solarnz, I noticed your local docker is running systemd as the cgroup manager while the docker inside minikube is running cgroupfs. Could you try running:
minikube start --driver docker --force-systemd
which will force the docker inside minikube to use systemd? (Sometimes conflicting cgroup managers can cause issues.)
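For reference, one way to see both sides of that mismatch before and after the flag (a sketch, not from the thread; it only uses docker info's --format templating and minikube ssh):
# cgroup driver of the docker daemon on the host (systemd in this report)
docker info --format '{{.CgroupDriver}}'
# cgroup driver of the docker daemon inside the minikube node (cgroupfs before --force-systemd)
minikube ssh "docker info --format '{{.CgroupDriver}}'"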
% minikube start --driver docker --force-systemd --v=5
minikube v1.10.1 on Arch
Using the docker driver based on user configuration
Starting control plane node minikube in cluster minikube
Creating docker container (CPUs=2, Memory=8000MB) ...
Preparing Kubernetes v1.18.2 on Docker 19.03.2 ...
kubeadm.pod-network-cidr=10.244.0.0/16
initialization failed, will try again: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [10.255.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [10.255.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0528 01:23:45.818744 847 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0528 01:23:49.767756 847 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0528 01:23:49.769578 847 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
Error starting cluster: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0528 01:25:46.047664 6109 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0528 01:25:48.008828 6109 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0528 01:25:48.010652 6109 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
minikube is exiting due to an error. If the above message is not useful, open an issue:
https://github.com/kubernetes/minikube/issues/new/choose
[NONE_KUBELET] failed to start node startup failed: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0528 01:25:46.047664 6109 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0528 01:25:48.008828 6109 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0528 01:25:48.010652 6109 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
Related issue: https://github.com/kubernetes/minikube/issues/4172
@solarnz - can you add the output of minikube logs as well?
Hi there, I have a similar issue. The relevant part of the log from the docker/systemd output is this one (other errors are recovered by further attempts to start the kubelet service):
Jun 05 09:14:11 test kubelet[1675]: W0605 09:14:11.887944 1675 fs.go:540] stat failed on /dev/mapper/nvme0n1p3_crypt with error: no such file or directory
Jun 05 09:14:11 test kubelet[1675]: F0605 09:14:11.887951 1675 kubelet.go:1383] Failed to start ContainerManager failed to get rootfs info: failed to get device for dir "/var/lib/kubelet": could not find device with major: 0, minor: 29 in cached partitions map
I entered the docker container's bash and /dev/mapper/ contains only the control device, not the mapped partition from the host... could this be a cgroup or volume-binding issue?
The issue is generated by the code "workaround" in google/cadvisor at https://github.com/google/cadvisor/blob/366d59d3b625bd7761040ce152d5213fbf19c88a/fs/fs.go#L203 and https://github.com/google/cadvisor/blob/366d59d3b625bd7761040ce152d5213fbf19c88a/fs/fs.go#L540
// btrfs fix: following workaround fixes wrong btrfs Major and Minor Ids reported in /proc/self/mountinfo.
// instead of using values from /proc/self/mountinfo we use stat to get Ids from btrfs mount point
if mount.FsType == "btrfs" && mount.Major == 0 && strings.HasPrefix(mount.Source, "/dev/") {
    major, minor, err := getBtrfsMajorMinorIds(&mount)
    if err != nil {
        klog.Warningf("%s", err)
    }
    // ... (excerpt truncated; see the linked source for the rest of the block)
}
which executes a stat on the non-existent /dev/mapper/xxx path inside the docker container, at https://github.com/google/cadvisor/blob/366d59d3b625bd7761040ce152d5213fbf19c88a/fs/fs.go#L736
// Get major and minor Ids for a mount point using btrfs as filesystem.
func getBtrfsMajorMinorIds(mount *mount.MountInfo) (int, int, error) {
    // btrfs fix: following workaround fixes wrong btrfs Major and Minor Ids reported in /proc/self/mountinfo.
    // instead of using values from /proc/self/mountinfo we use stat to get Ids from btrfs mount point
    buf := new(syscall.Stat_t)
    err := syscall.Stat(mount.Source, buf)
    if err != nil {
        err = fmt.Errorf("stat failed on %s with error: %s", mount.Source, err)
        return 0, 0, err
    }
    // ... (excerpt truncated; see the linked source for the rest of the function)
}
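A quick way to see this from the host is to compare the device nodes that the host and the minikube container expose (just a sanity check, not a fix; replace minikube with your profile's container name if you use a named profile, and dm-0 with whatever your /dev/mapper entry points to):
# On the host, the /dev/mapper name is a symlink to a dm-N device node:
ls -l /dev/mapper/
# Inside the minikube container, typically only the "control" node is present,
# which is exactly what makes the stat in getBtrfsMajorMinorIds fail:
docker exec minikube ls -l /dev/mapper/
# The underlying dm-N node may still be visible inside the container:
docker exec minikube ls -l /dev/dm-0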
btrfs is not currently supported by minikube; we test against the overlayfs driver. I would be happy to accept PRs that add btrfs support to minikube's inner docker setup.
Using the hints from @marcominetti above, I observed that the device /dev/mapper/cryptroot did not exist inside the minikube container. Comparing with my host, /dev/mapper/cryptroot is symlink'd to /dev/dm-0, and dm-0 does exist inside the container. After creating this symlink inside the minikube container I was able to start successfully.
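Concretely, something along these lines recreates the symlink (dm-0 and cryptroot are my device names; check ls -l /dev/mapper/ on your host and adjust for your setup):
# Recreate the host's /dev/mapper symlink inside the minikube node:
minikube ssh "sudo ln -s /dev/dm-0 /dev/mapper/cryptroot"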
Thanks @kppullin... The following command worked for me
export MISSING_MOUNT_BIND=nvme0n1p3_crypt
docker exec -ti lss /bin/bash -c "ln -s /dev/dm-0 /dev/mapper/$MISSING_MOUNT_BIND"
I execute it immediately after the logged task "Creating docker container (CPUs=4, Memory=16384MB) ..." is finished
minettim@nuc:~$ minikube start -p lss --cpus 4 --memory 16384
😄 [lss] minikube v1.13.1 on Ubuntu 20.04
✨ Automatically selected the docker driver
❗ docker is currently using the btrfs storage driver, consider switching to overlay2 for better performance
👍 Starting control plane node lss in cluster lss
🔥 Creating docker container (CPUs=4, Memory=16384MB) ...
🐳 Preparing Kubernetes v1.19.2 on Docker 19.03.8 ...
🔎 Verifying Kubernetes components...
🌟 Enabled addons: default-storageclass, storage-provisioner
🏄 Done! kubectl is now configured to use "lss" by default
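To avoid racing the start by hand every time, a rough wrapper along these lines should do the same thing (just a sketch, using the same profile name and device names as above):
#!/bin/sh
# Start minikube in the background and create the missing /dev/mapper symlink
# as soon as the docker driver has created the "lss" container.
MISSING_MOUNT_BIND=nvme0n1p3_crypt
minikube start -p lss --cpus 4 --memory 16384 &
MINIKUBE_PID=$!
until docker ps --format '{{.Names}}' | grep -qx lss; do
  sleep 1
done
docker exec lss /bin/bash -c "ln -sf /dev/dm-0 /dev/mapper/$MISSING_MOUNT_BIND"
wait "$MINIKUBE_PID"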
Experiencing a similar issue with Fedora 33 Beta (btrfs root), Docker 19.3.13, and minikube 1.13.1 / 1.14.0-beta.0.
The error:
Oct 09 15:06:02 minikube kubelet[7343]: W1009 15:06:02.975497 7343 fs.go:208] stat failed on /dev/mapper/luks-e8784b69-7ba6-4cad-823e-afb6f3314f9e with error: no such file or directory
My outer Docker daemon is using overlay2:
$ cat /etc/docker/daemon.json
{
"storage-driver": "overlay2"
}
Inside the minikube container I see:
root@minikube:/# docker info
Client:
Debug Mode: false
Server:
Containers: 1
Running: 0
Paused: 0
Stopped: 1
Images: 11
Server Version: 19.03.8
Storage Driver: overlay2
Backing Filesystem: <unknown>
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 449e926990f8539fd00844b26c07e2f1e306c760
runc version:
init version:
Security Options:
seccomp
Profile: default
Kernel Version: 5.8.13-300.fc33.x86_64
Operating System: Ubuntu 20.04 LTS (containerized)
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 15.57GiB
Name: minikube
ID: FBKC:4LI5:N5KR:JMMK:CZLB:SVXL:SF5E:TYTF:A7P7:XYZE:MSJ3:I36Y
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
provider=docker
Experimental: false
Insecure Registries:
10.96.0.0/12
127.0.0.0/8
Live Restore Enabled: false
And outside:
$ docker info
Client:
Debug Mode: false
Server:
Containers: 3
Running: 1
Paused: 0
Stopped: 2
Images: 9
Server Version: 19.03.13
Storage Driver: overlay2
Backing Filesystem: btrfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 8fba4e9a7d01810a393d5d25a3621dc101981175
runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 5.8.13-300.fc33.x86_64
Operating System: Fedora 33 (Workstation Edition Prerelease)
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 15.57GiB
Name: desktop
ID: KYB6:WQZM:7EYH:Q3FL:7CKF:ODDP:YC2W:TODP:7FQK:TJHY:CCBD:CSUW
Docker Root Dir: /var/lib/docker
Debug Mode: false
Username: gtirloni
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Quick update... I had a similar problem in a real cluster after upgrading from Kubernetes 1.18 to 1.19... Instead of mounting paths at startup of the container, it's working for me with the feature gate disabled.
minikube start --feature-gates="LocalStorageCapacityIsolation=false"
Hope it helps
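For a kubeadm-managed node (rather than minikube), the same gate can presumably be set on the kubelet directly; a rough sketch, assuming the usual KUBELET_EXTRA_ARGS drop-in location on Debian-style systems:
# Merge this with any existing KUBELET_EXTRA_ARGS rather than overwriting blindly.
echo 'KUBELET_EXTRA_ARGS=--feature-gates=LocalStorageCapacityIsolation=false' | sudo tee /etc/default/kubelet
sudo systemctl daemon-reload
sudo systemctl restart kubelet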
If you're using btrfs, the first thing I would check is whether you're using a 'tree' style layout instead of a flat one. Run something like "btrfs subvolume list /"; if you have a dedicated entry for var/lib/kubelet, for example, then it is a tree-style layout. It looks like this is still a problem in various parts of the code related to Kubernetes. I just bumped into a similar problem with kubelet not starting, and a dedicated mount helped me. See:
My fix for my issue today was to add the following line to /etc/fstab and to mount /var/lib/kubelet:
/dev/mapper/vgroot-root /var/lib/kubelet btrfs rw,relatime,ssd,space_cache,subvolid=269,subvol=/var/lib/kubelet 0 0
You need to find the root mount device (to replace /dev/mapper/vgroot-root) and the subvolume id for /var/lib/kubelet - in my case, 269.
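Putting it together, the check and the fix look roughly like this (the device path and subvolid 269 are from my system; substitute whatever btrfs subvolume list reports for yours):
# 1. Check whether var/lib/kubelet is its own subvolume ("tree" layout):
sudo btrfs subvolume list / | grep kubelet
# 2. If it is, give it a dedicated mount and persist it in /etc/fstab:
echo '/dev/mapper/vgroot-root /var/lib/kubelet btrfs rw,relatime,ssd,space_cache,subvolid=269,subvol=/var/lib/kubelet 0 0' | sudo tee -a /etc/fstab
sudo mount /var/lib/kubelet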
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Quick update... I had a similar problem in a real cluster after upgrading from Kubernetes 1.18 to 1.19... Instead of mounting paths at startup of the container, it's working for me with the feature gate disabled.
minikube start --feature-gates="LocalStorageCapacityIsolation=false"
Hope it helps
how about we add this to our FAQ and also the Docker Driver Troubleshooting docs? That way other users who want to use btrfs could find this solution easily.
https://minikube.sigs.k8s.io/docs/faq/
and also maybe here
this task is available for anyone who would like to pick it up
OS: openSUSE Leap 15.3 x86_64 Kernel: 5.3.18 Filesystem: Btrfs Encryption: LUKS
I had two problems with minikube:
"Failed to start ContainerManager" err="failed to get rootfs info: failed to get device for dir \"/var/lib/kubelet\": could not find device with major: 0, minor: 70 in cached partitions map"
[kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
Here's what I did to fix it:
I just wanted to say, suggestions by @kppullin @medyagh and @marcoceppi worked for me. I can either link my volume:
minikube ssh "sudo ln -s /dev/dm-1 /dev/mapper/cryptroot"
Or run with:
minikube start --feature-gates="LocalStorageCapacityIsolation=false"
Based on comments by @marcoceppi, it appears this would need to be fixed in cAdvisor?
@braderhart if you mean me rather than marcoceppi: yep, I think a good focus to start with would be cAdvisor, at least to avoid the exception; they were open to receiving a PR... I don't know if the tentative workaround code for btrfs is still there now...
In any case, because our workaround is based on mounting devs into the minikube container, the real solution might be in the docker initialization code within minikube itself (creating the mapped symlinks to the devs or, better, bind mounting the devs).
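Just to illustrate what bind mounting the dev would mean, with a throwaway container rather than minikube itself (cryptroot is the example mapper name from earlier in this thread):
# When the device node is bind-mounted into the container, the stat that
# cAdvisor performs resolves instead of returning "no such file or directory":
docker run --rm -v /dev/mapper/cryptroot:/dev/mapper/cryptroot:ro busybox \
  stat /dev/mapper/cryptroot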
@marcominetti Do you have time to assist with this? I can confirm the solution you mentioned fixes the issue for me, where I have overlay2 set up on top of btrfs.
Storage Driver: overlay2
Backing Filesystem: btrfs
Supports d_type: true
Native Overlay Diff: false
userxattr: false
Yes, sure, I'll try to delve into the code of minikube and cAdvisor. Can someone here review and accept a potential PR against minikube?
I'd be happy to review any PR that fixes this issue.
Kubernetes 1.23 will support btrfs
It appears that Kubernetes 1.23 is released: https://www.kubernetes.dev/resources/release/
As I run minikube v1.24.0, it seems that kubernetes 1.22 is used. Is there a way to use kubernetes 1.23 so I can use docker with btrfs? Or should I wait for minikube 1.25 to run with kubernetes 1.23?
It appears that Kubernetes 1.23 is released: https://www.kubernetes.dev/resources/release/
As I run minikube v1.24.0, it seems that kubernetes 1.22 is used. Is there a way to use kubernetes 1.23 so I can use docker with btrfs? Or should I wait for minikube 1.25 to run with kubernetes 1.23?
minikube start --kubernetes-version=v1.23.0
I can confirm this works! (openSUSE tumbleweed, full disk encryption with cryptsetup / dm-crypt, btrfs)
❯ minikube start --extra-config=kubelet.cgroup-driver=systemd -v=5 --kubernetes-version=1.23.0
😄 minikube v1.24.0 on Opensuse-Tumbleweed
✨ Using the docker driver based on existing profile
❗ docker is currently using the btrfs storage driver, consider switching to overlay2 for better performance
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
🔄 Updating the running docker "minikube" container ...
🧯 Docker is nearly out of disk space, which may cause deployments to fail! (94% of capacity)
💡 Suggestion:
Try one or more of the following to free up space on the device:
1. Run "docker system prune" to remove unused Docker data (optionally with "-a")
2. Increase the storage allocated to Docker for Desktop by clicking on:
Docker icon > Preferences > Resources > Disk Image Size
3. Run "minikube ssh -- docker system prune" if using the Docker container runtime
🍿 Related issue: https://github.com/kubernetes/minikube/issues/9024
🐳 Preparing Kubernetes v1.23.0 on Docker 20.10.8 ...
▪ kubelet.cgroup-driver=systemd
> kubelet.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
> kubectl.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
> kubeadm.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
> kubectl: 44.42 MiB / 44.42 MiB [---------------] 100.00% 1.11 MiB p/s 40s
> kubeadm: 43.11 MiB / 43.11 MiB [-----------] 100.00% 464.77 KiB p/s 1m35s
> kubelet: 118.73 MiB / 118.73 MiB [---------] 100.00% 755.90 KiB p/s 2m41s
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
💡 kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
❤️
Glad to hear @LeoniePhiline, thanks for testing!
I believe this has been fixed with k8s 1.23, I'm going to close this, but if it's not resolved feel free to respond and I'll reopen the issue, thanks!
Steps to reproduce the issue:
minikube start --driver=docker --v=5 --alsologtostderr
I'm at a loss as to how to proceed any further here; I'm not sure whether this is related to my system configuration or whether it's a bug in minikube.
Full output of failed command:
Optional: Full output of minikube logs command: