kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Minikube Path to Binaries Incorrect on macOS #14667

Closed · bknitter-panw closed this issue 2 years ago

bknitter-panw commented 2 years ago

What Happened?

When trying to start a cluster with minikube 1.26.0, I see that minikube references /var/lib/minikube for the binaries. Alas, when Homebrew installs minikube, it does not place any binaries in that directory, which prevents the cluster from being created and started.

I have tried various minikube commands, such as delete (with purge), and have reinstalled minikube itself. All Homebrew dependencies are met. I am not sure whether this is a Homebrew issue or a minikube issue; things worked properly prior to 1.26.0.
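For reference, the reset cycle described above amounts to roughly the following sketch. The `DRY_RUN` guard is something I have added here so the commands print instead of run by default, and the explicit `--kubernetes-version` pin is illustrative (taken from the version shown in the logs), not necessarily the exact invocation used:

```shell
#!/bin/sh
# Sketch of the reset cycle: wipe all minikube state, reinstall the client
# via Homebrew, then start a fresh cluster. By default the commands are only
# printed; set DRY_RUN= (empty) to actually execute them.
DRY_RUN="${DRY_RUN:-echo}"

$DRY_RUN minikube delete --all --purge   # delete every profile and remove ~/.minikube
$DRY_RUN brew reinstall minikube         # reinstall the client binary via Homebrew
$DRY_RUN minikube start --driver=hyperkit --kubernetes-version=v1.24.1
```

Even after this full wipe and reinstall, the start still fails the same way.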

This is the command that fails in the attached logs:

W0728 09:16:12.637289 29099 out.go:239] 💢 initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": Process exited with status 1

This would seem to indicate that the dependencies are not installed into the expected directory.
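One detail worth unpacking from that command: the directory prefixed onto PATH for kubeadm is derived from the Kubernetes version, and if I understand the logs correctly, /var/lib/minikube/binaries lives inside the guest VM (where kubeadm runs), not on the macOS host, so Homebrew would never populate it either way. A small sketch of how the prefix is built, with the guest-side check one could run left as a comment:

```shell
#!/bin/sh
# The failing kubeadm invocation prepends a guest-side binary directory onto
# PATH; the directory name is derived from the Kubernetes version in use.
K8S_VERSION="v1.24.1"                               # taken from the failing command
BIN_DIR="/var/lib/minikube/binaries/${K8S_VERSION}"
echo "PATH prefix: ${BIN_DIR}"

# To check whether that directory was actually provisioned inside the VM
# (rather than looking on the macOS host), one could run:
#   minikube ssh -- ls -l /var/lib/minikube/binaries/v1.24.1
```

So one thing to verify is whether provisioning the binaries into the VM failed, rather than the host-side install being wrong.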

Happy to debug further with you!

Attach the log file

Logs coming

Operating System

macOS (Default)

Driver

HyperKit

bknitter-panw commented 2 years ago

I0728 09:10:09.481084 29099 install.go:99] testing: [sudo -n chown root:wheel /Users/bknitter/.minikube/bin/docker-machine-driver-hyperkit] I0728 09:10:09.525863 29099 install.go:101] [sudo chown root:wheel /Users/bknitter/.minikube/bin/docker-machine-driver-hyperkit] may require a password: exit status 1 I0728 09:10:09.526061 29099 install.go:106] running: [sudo chown root:wheel /Users/bknitter/.minikube/bin/docker-machine-driver-hyperkit] I0728 09:10:43.345613 29099 install.go:99] testing: [sudo -n chmod u+s /Users/bknitter/.minikube/bin/docker-machine-driver-hyperkit] I0728 09:10:43.398913 29099 install.go:106] running: [sudo chmod u+s /Users/bknitter/.minikube/bin/docker-machine-driver-hyperkit] I0728 09:10:43.438338 29099 start_flags.go:296] no existing cluster config was found, will generate one from the flags I0728 09:10:43.438620 29099 start_flags.go:377] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB I0728 09:10:43.438772 29099 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true] I0728 09:10:43.438794 29099 cni.go:95] Creating CNI manager for "" I0728 09:10:43.438801 29099 cni.go:169] CNI unnecessary in this configuration, recommending no CNI I0728 09:10:43.438806 29099 start_flags.go:310] config: {Name:clustera KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true 
HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:clustera Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} I0728 09:10:43.439228 29099 iso.go:128] acquiring lock: {Name:mkfd0b3776b26e750cc885dc73f6435efa63fc10 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0728 09:10:43.459756 29099 out.go:177] 💿 Downloading VM boot image ... 
I0728 09:10:43.479267 29099 download.go:101] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso.sha256 -> /Users/bknitter/.minikube/cache/iso/amd64/minikube-v1.26.0-amd64.iso I0728 09:11:03.459321 29099 out.go:177] 👍 Starting control plane node clustera in cluster clustera I0728 09:11:03.481431 29099 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker I0728 09:11:03.605905 29099 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.1/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-amd64.tar.lz4 I0728 09:11:03.605948 29099 cache.go:57] Caching tarball of preloaded images I0728 09:11:03.606576 29099 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker I0728 09:11:03.627600 29099 out.go:177] 💾 Downloading Kubernetes v1.24.1 preload ... I0728 09:11:03.666505 29099 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.1-docker-overlay2-amd64.tar.lz4 ... I0728 09:11:03.830228 29099 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.1/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-amd64.tar.lz4?checksum=md5:cbe3dd94d1bd66cb19d71bba2673d3e7 -> /Users/bknitter/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-amd64.tar.lz4 I0728 09:11:33.489534 29099 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.1-docker-overlay2-amd64.tar.lz4 ... I0728 09:11:33.489810 29099 preload.go:256] verifying checksumm of /Users/bknitter/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-amd64.tar.lz4 ... 
I0728 09:11:34.295850 29099 cache.go:60] Finished verifying existence of preloaded tar for v1.24.1 on docker I0728 09:11:34.296133 29099 profile.go:148] Saving config to /Users/bknitter/.minikube/profiles/clustera/config.json ... I0728 09:11:34.296158 29099 lock.go:35] WriteFile acquiring /Users/bknitter/.minikube/profiles/clustera/config.json: {Name:mk39430ae0af5737bf3144bc1328b8aad6203140 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0728 09:11:34.296796 29099 cache.go:208] Successfully downloaded all kic artifacts I0728 09:11:34.296826 29099 start.go:352] acquiring machines lock for clustera: {Name:mka64d9114eb80e13eb18766ed45b7552bb58cbe Clock:{} Delay:500ms Timeout:13m0s Cancel:} I0728 09:11:34.297039 29099 start.go:356] acquired machines lock for "clustera" in 204.632ยตs I0728 09:11:34.297062 29099 start.go:91] Provisioning new machine with config: &{Name:clustera KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:clustera Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: 
LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true} I0728 09:11:34.297190 29099 start.go:131] createHost starting for "" (driver="hyperkit") I0728 09:11:34.319539 29099 out.go:204] 🔥 Creating hyperkit VM (CPUs=2, Memory=4000MB, Disk=20000MB) ... 
I0728 09:11:34.320826 29099 main.go:134] libmachine: Found binary path at /Users/bknitter/.minikube/bin/docker-machine-driver-hyperkit I0728 09:11:34.321025 29099 main.go:134] libmachine: Launching plugin server for driver hyperkit I0728 09:11:34.611306 29099 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:58718 I0728 09:11:34.612117 29099 main.go:134] libmachine: () Calling .GetVersion I0728 09:11:34.613209 29099 main.go:134] libmachine: Using API Version 1 I0728 09:11:34.613221 29099 main.go:134] libmachine: () Calling .SetConfigRaw I0728 09:11:34.613689 29099 main.go:134] libmachine: () Calling .GetMachineName I0728 09:11:34.613894 29099 main.go:134] libmachine: (clustera) Calling .GetMachineName I0728 09:11:34.614110 29099 main.go:134] libmachine: (clustera) Calling .DriverName I0728 09:11:34.614305 29099 start.go:165] libmachine.API.Create for "clustera" (driver="hyperkit") I0728 09:11:34.614380 29099 client.go:168] LocalClient.Create starting I0728 09:11:34.614446 29099 main.go:134] libmachine: Creating CA: /Users/bknitter/.minikube/certs/ca.pem I0728 09:11:34.791790 29099 main.go:134] libmachine: Creating client certificate: /Users/bknitter/.minikube/certs/cert.pem I0728 09:11:34.885054 29099 main.go:134] libmachine: Running pre-create checks... I0728 09:11:34.885065 29099 main.go:134] libmachine: (clustera) Calling .PreCreateCheck I0728 09:11:34.885313 29099 main.go:134] libmachine: (clustera) DBG | exe=/Users/bknitter/.minikube/bin/docker-machine-driver-hyperkit uid=0 I0728 09:11:34.885485 29099 main.go:134] libmachine: (clustera) Calling .GetConfigRaw I0728 09:11:34.886104 29099 main.go:134] libmachine: Creating machine... 
I0728 09:11:34.886118 29099 main.go:134] libmachine: (clustera) Calling .Create I0728 09:11:34.886224 29099 main.go:134] libmachine: (clustera) DBG | exe=/Users/bknitter/.minikube/bin/docker-machine-driver-hyperkit uid=0 I0728 09:11:34.886578 29099 main.go:134] libmachine: (clustera) DBG | I0728 09:11:34.886212 29201 common.go:107] Making disk image using store path: /Users/bknitter/.minikube I0728 09:11:34.886595 29099 main.go:134] libmachine: (clustera) Downloading /Users/bknitter/.minikube/cache/boot2docker.iso from file:///Users/bknitter/.minikube/cache/iso/amd64/minikube-v1.26.0-amd64.iso... I0728 09:11:35.193082 29099 main.go:134] libmachine: (clustera) DBG | I0728 09:11:35.193013 29201 common.go:114] Creating ssh key: /Users/bknitter/.minikube/machines/clustera/id_rsa... I0728 09:11:35.573304 29099 main.go:134] libmachine: (clustera) DBG | I0728 09:11:35.573209 29201 common.go:120] Creating raw disk image: /Users/bknitter/.minikube/machines/clustera/clustera.rawdisk... I0728 09:11:35.573325 29099 main.go:134] libmachine: (clustera) DBG | Writing magic tar header I0728 09:11:35.573337 29099 main.go:134] libmachine: (clustera) DBG | Writing SSH key tar header I0728 09:11:35.574542 29099 main.go:134] libmachine: (clustera) DBG | I0728 09:11:35.574457 29201 common.go:134] Fixing permissions on /Users/bknitter/.minikube/machines/clustera ... 
I0728 09:11:35.806967 29099 main.go:134] libmachine: (clustera) DBG | exe=/Users/bknitter/.minikube/bin/docker-machine-driver-hyperkit uid=0 I0728 09:11:35.806984 29099 main.go:134] libmachine: (clustera) DBG | clean start, hyperkit pid file doesn't exist: /Users/bknitter/.minikube/machines/clustera/hyperkit.pid I0728 09:11:35.807001 29099 main.go:134] libmachine: (clustera) DBG | Using UUID f34a4c02-0e8f-11ed-8b8d-acde48001122 I0728 09:11:36.489579 29099 main.go:134] libmachine: (clustera) DBG | Generated MAC 86:ad:ed:a3:f4:cf I0728 09:11:36.489654 29099 main.go:134] libmachine: (clustera) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=clustera I0728 09:11:36.489727 29099 main.go:134] libmachine: (clustera) DBG | 2022/07/28 09:11:36 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/bknitter/.minikube/machines/clustera", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f34a4c02-0e8f-11ed-8b8d-acde48001122", Disks:[]hyperkit.Disk{(hyperkit.RawDisk)(0xc0000907e0)}, ISOImages:[]string{"/Users/bknitter/.minikube/machines/clustera/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/bknitter/.minikube/machines/clustera/bzimage", Initrd:"/Users/bknitter/.minikube/machines/clustera/initrd", Bootrom:"", CPUs:2, Memory:4000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(os.Process)(nil)} I0728 09:11:36.489764 29099 main.go:134] libmachine: (clustera) DBG | 2022/07/28 09:11:36 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/bknitter/.minikube/machines/clustera", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f34a4c02-0e8f-11ed-8b8d-acde48001122", 
Disks:[]hyperkit.Disk{(hyperkit.RawDisk)(0xc0000907e0)}, ISOImages:[]string{"/Users/bknitter/.minikube/machines/clustera/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/bknitter/.minikube/machines/clustera/bzimage", Initrd:"/Users/bknitter/.minikube/machines/clustera/initrd", Bootrom:"", CPUs:2, Memory:4000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(os.Process)(nil)} I0728 09:11:36.489858 29099 main.go:134] libmachine: (clustera) DBG | 2022/07/28 09:11:36 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/bknitter/.minikube/machines/clustera/hyperkit.pid", "-c", "2", "-m", "4000M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f34a4c02-0e8f-11ed-8b8d-acde48001122", "-s", "2:0,virtio-blk,/Users/bknitter/.minikube/machines/clustera/clustera.rawdisk", "-s", "3,ahci-cd,/Users/bknitter/.minikube/machines/clustera/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/bknitter/.minikube/machines/clustera/tty,log=/Users/bknitter/.minikube/machines/clustera/console-ring", "-f", "kexec,/Users/bknitter/.minikube/machines/clustera/bzimage,/Users/bknitter/.minikube/machines/clustera/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=clustera"} I0728 09:11:36.489931 29099 main.go:134] libmachine: (clustera) DBG | 2022/07/28 09:11:36 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/bknitter/.minikube/machines/clustera/hyperkit.pid -c 2 -m 4000M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f34a4c02-0e8f-11ed-8b8d-acde48001122 -s 2:0,virtio-blk,/Users/bknitter/.minikube/machines/clustera/clustera.rawdisk -s 3,ahci-cd,/Users/bknitter/.minikube/machines/clustera/boot2docker.iso -s 4,virtio-rnd -l 
com1,autopty=/Users/bknitter/.minikube/machines/clustera/tty,log=/Users/bknitter/.minikube/machines/clustera/console-ring -f kexec,/Users/bknitter/.minikube/machines/clustera/bzimage,/Users/bknitter/.minikube/machines/clustera/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=clustera" I0728 09:11:36.489950 29099 main.go:134] libmachine: (clustera) DBG | 2022/07/28 09:11:36 DEBUG: hyperkit: Redirecting stdout/stderr to logger I0728 09:11:36.492520 29099 main.go:134] libmachine: (clustera) DBG | 2022/07/28 09:11:36 DEBUG: hyperkit: Pid is 29207 I0728 09:11:36.493268 29099 main.go:134] libmachine: (clustera) DBG | Attempt 0 I0728 09:11:36.493281 29099 main.go:134] libmachine: (clustera) DBG | exe=/Users/bknitter/.minikube/bin/docker-machine-driver-hyperkit uid=0 I0728 09:11:36.493579 29099 main.go:134] libmachine: (clustera) DBG | hyperkit pid from json: 29207 I0728 09:11:36.496255 29099 main.go:134] libmachine: (clustera) DBG | Searching for 86:ad:ed:a3:f4:cf in /var/db/dhcpd_leases ... I0728 09:11:36.497186 29099 main.go:134] libmachine: (clustera) DBG | Found 6 entries in /var/db/dhcpd_leases! 
I0728 09:11:36.497200 29099 main.go:134] libmachine: (clustera) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:fe:e4:9b:a3:10:58 ID:1,fe:e4:9b:a3:10:58 Lease:0x62e3fd15} I0728 09:11:36.497214 29099 main.go:134] libmachine: (clustera) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:7e:3e:d3:d8:9c:fb ID:1,7e:3e:d3:d8:9c:fb Lease:0x62e2ab11} I0728 09:11:36.497222 29099 main.go:134] libmachine: (clustera) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:fa:3:d:d2:d8:a9 ID:1,fa:3:d:d2:d8:a9 Lease:0x62e1d5c8} I0728 09:11:36.497234 29099 main.go:134] libmachine: (clustera) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:de:9e:dd:27:24:c3 ID:1,de:9e:dd:27:24:c3 Lease:0x62e30f7c} I0728 09:11:36.497247 29099 main.go:134] libmachine: (clustera) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:7a:57:95:b6:c3:e6 ID:1,7a:57:95:b6:c3:e6 Lease:0x62e1bc45} I0728 09:11:36.497279 29099 main.go:134] libmachine: (clustera) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:e2:29:5d:f7:b0:37 ID:1,e2:29:5d:f7:b0:37 Lease:0x61c243cb} I0728 09:11:36.533896 29099 main.go:134] libmachine: (clustera) DBG | 2022/07/28 09:11:36 INFO : hyperkit: stderr: Using fd 5 for I/O notifications I0728 09:11:36.680101 29099 main.go:134] libmachine: (clustera) DBG | 2022/07/28 09:11:36 INFO : hyperkit: stderr: /Users/bknitter/.minikube/machines/clustera/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD I0728 09:11:36.682015 29099 main.go:134] libmachine: (clustera) DBG | 2022/07/28 09:11:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 8 unspecified don't care: bit is 0 I0728 09:11:36.682034 29099 main.go:134] libmachine: (clustera) DBG | 2022/07/28 09:11:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 20 unspecified don't care: bit is 0 I0728 09:11:36.682053 29099 main.go:134] libmachine: (clustera) DBG | 2022/07/28 09:11:36 
INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0 I0728 09:11:36.682067 29099 main.go:134] libmachine: (clustera) DBG | 2022/07/28 09:11:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 25 unspecified don't care: bit is 0 I0728 09:11:36.682080 29099 main.go:134] libmachine: (clustera) DBG | 2022/07/28 09:11:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0 I0728 09:11:36.682090 29099 main.go:134] libmachine: (clustera) DBG | 2022/07/28 09:11:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0 I0728 09:11:36.682105 29099 main.go:134] libmachine: (clustera) DBG | 2022/07/28 09:11:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0 I0728 09:11:37.449123 29099 main.go:134] libmachine: (clustera) DBG | 2022/07/28 09:11:37 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0 I0728 09:11:37.449144 29099 main.go:134] libmachine: (clustera) DBG | 2022/07/28 09:11:37 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0 I0728 09:11:37.453237 29099 main.go:134] libmachine: (clustera) DBG | 2022/07/28 09:11:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 8 unspecified don't care: bit is 0 I0728 09:11:37.453255 29099 main.go:134] libmachine: (clustera) DBG | 2022/07/28 09:11:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 20 unspecified don't care: bit is 0 I0728 09:11:37.453265 29099 main.go:134] libmachine: (clustera) DBG | 2022/07/28 09:11:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0 I0728 09:11:37.453274 29099 main.go:134] libmachine: (clustera) DBG | 2022/07/28 09:11:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 25 unspecified don't care: bit is 0 I0728 09:11:37.453295 29099 main.go:134] libmachine: (clustera) DBG | 2022/07/28 09:11:37 INFO : hyperkit: stderr: 
vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0 I0728 09:11:37.453305 29099 main.go:134] libmachine: (clustera) DBG | 2022/07/28 09:11:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0 I0728 09:11:37.453320 29099 main.go:134] libmachine: (clustera) DBG | 2022/07/28 09:11:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0 I0728 09:11:37.454326 29099 main.go:134] libmachine: (clustera) DBG | 2022/07/28 09:11:37 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1 I0728 09:11:37.454357 29099 main.go:134] libmachine: (clustera) DBG | 2022/07/28 09:11:37 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1 I0728 09:11:38.497429 29099 main.go:134] libmachine: (clustera) DBG | Attempt 1 I0728 09:11:38.497441 29099 main.go:134] libmachine: (clustera) DBG | exe=/Users/bknitter/.minikube/bin/docker-machine-driver-hyperkit uid=0 I0728 09:11:38.497577 29099 main.go:134] libmachine: (clustera) DBG | hyperkit pid from json: 29207 I0728 09:11:38.498514 29099 main.go:134] libmachine: (clustera) DBG | Searching for 86:ad:ed:a3:f4:cf in /var/db/dhcpd_leases ... I0728 09:11:38.498576 29099 main.go:134] libmachine: (clustera) DBG | Found 6 entries in /var/db/dhcpd_leases! 
I0728 09:11:38.498583 29099 main.go:134] libmachine: (clustera) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:fe:e4:9b:a3:10:58 ID:1,fe:e4:9b:a3:10:58 Lease:0x62e3fd15} I0728 09:11:38.498590 29099 main.go:134] libmachine: (clustera) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:7e:3e:d3:d8:9c:fb ID:1,7e:3e:d3:d8:9c:fb Lease:0x62e2ab11} I0728 09:11:38.498597 29099 main.go:134] libmachine: (clustera) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:fa:3:d:d2:d8:a9 ID:1,fa:3:d:d2:d8:a9 Lease:0x62e1d5c8} I0728 09:11:38.498602 29099 main.go:134] libmachine: (clustera) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:de:9e:dd:27:24:c3 ID:1,de:9e:dd:27:24:c3 Lease:0x62e30f7c} I0728 09:11:38.498607 29099 main.go:134] libmachine: (clustera) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:7a:57:95:b6:c3:e6 ID:1,7a:57:95:b6:c3:e6 Lease:0x62e1bc45} I0728 09:11:38.498613 29099 main.go:134] libmachine: (clustera) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:e2:29:5d:f7:b0:37 ID:1,e2:29:5d:f7:b0:37 Lease:0x61c243cb} I0728 09:11:40.498856 29099 main.go:134] libmachine: (clustera) DBG | Attempt 2 I0728 09:11:40.498877 29099 main.go:134] libmachine: (clustera) DBG | exe=/Users/bknitter/.minikube/bin/docker-machine-driver-hyperkit uid=0 I0728 09:11:40.499034 29099 main.go:134] libmachine: (clustera) DBG | hyperkit pid from json: 29207 I0728 09:11:40.500940 29099 main.go:134] libmachine: (clustera) DBG | Searching for 86:ad:ed:a3:f4:cf in /var/db/dhcpd_leases ... I0728 09:11:40.501067 29099 main.go:134] libmachine: (clustera) DBG | Found 6 entries in /var/db/dhcpd_leases! 
I0728 09:11:40.501088 29099 main.go:134] libmachine: (clustera) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:fe:e4:9b:a3:10:58 ID:1,fe:e4:9b:a3:10:58 Lease:0x62e3fd15} I0728 09:11:40.501105 29099 main.go:134] libmachine: (clustera) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:7e:3e:d3:d8:9c:fb ID:1,7e:3e:d3:d8:9c:fb Lease:0x62e2ab11} I0728 09:11:40.501147 29099 main.go:134] libmachine: (clustera) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:fa:3:d:d2:d8:a9 ID:1,fa:3:d:d2:d8:a9 Lease:0x62e1d5c8} I0728 09:11:40.501170 29099 main.go:134] libmachine: (clustera) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:de:9e:dd:27:24:c3 ID:1,de:9e:dd:27:24:c3 Lease:0x62e30f7c} I0728 09:11:40.501178 29099 main.go:134] libmachine: (clustera) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:7a:57:95:b6:c3:e6 ID:1,7a:57:95:b6:c3:e6 Lease:0x62e1bc45} I0728 09:11:40.501193 29099 main.go:134] libmachine: (clustera) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:e2:29:5d:f7:b0:37 ID:1,e2:29:5d:f7:b0:37 Lease:0x61c243cb} I0728 09:11:42.501811 29099 main.go:134] libmachine: (clustera) DBG | Attempt 3 I0728 09:11:42.501834 29099 main.go:134] libmachine: (clustera) DBG | exe=/Users/bknitter/.minikube/bin/docker-machine-driver-hyperkit uid=0 I0728 09:11:42.501955 29099 main.go:134] libmachine: (clustera) DBG | hyperkit pid from json: 29207 I0728 09:11:42.503146 29099 main.go:134] libmachine: (clustera) DBG | Searching for 86:ad:ed:a3:f4:cf in /var/db/dhcpd_leases ... I0728 09:11:42.503215 29099 main.go:134] libmachine: (clustera) DBG | Found 6 entries in /var/db/dhcpd_leases! 
I0728 09:11:42.503233 29099 main.go:134] libmachine: (clustera) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:fe:e4:9b:a3:10:58 ID:1,fe:e4:9b:a3:10:58 Lease:0x62e3fd15} I0728 09:11:42.503284 29099 main.go:134] libmachine: (clustera) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:7e:3e:d3:d8:9c:fb ID:1,7e:3e:d3:d8:9c:fb Lease:0x62e2ab11} I0728 09:11:42.503299 29099 main.go:134] libmachine: (clustera) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:fa:3:d:d2:d8:a9 ID:1,fa:3:d:d2:d8:a9 Lease:0x62e1d5c8} I0728 09:11:42.503319 29099 main.go:134] libmachine: (clustera) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:de:9e:dd:27:24:c3 ID:1,de:9e:dd:27:24:c3 Lease:0x62e30f7c} I0728 09:11:42.503327 29099 main.go:134] libmachine: (clustera) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:7a:57:95:b6:c3:e6 ID:1,7a:57:95:b6:c3:e6 Lease:0x62e1bc45} I0728 09:11:42.503337 29099 main.go:134] libmachine: (clustera) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:e2:29:5d:f7:b0:37 ID:1,e2:29:5d:f7:b0:37 Lease:0x61c243cb} I0728 09:11:44.015675 29099 main.go:134] libmachine: (clustera) DBG | 2022/07/28 09:11:44 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0 I0728 09:11:44.015703 29099 main.go:134] libmachine: (clustera) DBG | 2022/07/28 09:11:44 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0 I0728 09:11:44.015709 29099 main.go:134] libmachine: (clustera) DBG | 2022/07/28 09:11:44 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0 I0728 09:11:44.503429 29099 main.go:134] libmachine: (clustera) DBG | Attempt 4 I0728 09:11:44.503441 29099 main.go:134] libmachine: (clustera) DBG | exe=/Users/bknitter/.minikube/bin/docker-machine-driver-hyperkit uid=0 I0728 09:11:44.503604 29099 main.go:134] libmachine: (clustera) DBG | hyperkit pid from json: 29207 I0728 09:11:44.504342 29099 main.go:134] libmachine: (clustera) DBG | Searching for 86:ad:ed:a3:f4:cf 
in /var/db/dhcpd_leases ... I0728 09:11:44.504405 29099 main.go:134] libmachine: (clustera) DBG | Found 6 entries in /var/db/dhcpd_leases! I0728 09:11:44.504413 29099 main.go:134] libmachine: (clustera) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:fe:e4:9b:a3:10:58 ID:1,fe:e4:9b:a3:10:58 Lease:0x62e3fd15} I0728 09:11:44.504421 29099 main.go:134] libmachine: (clustera) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:7e:3e:d3:d8:9c:fb ID:1,7e:3e:d3:d8:9c:fb Lease:0x62e2ab11} I0728 09:11:44.504426 29099 main.go:134] libmachine: (clustera) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:fa:3:d:d2:d8:a9 ID:1,fa:3:d:d2:d8:a9 Lease:0x62e1d5c8} I0728 09:11:44.504437 29099 main.go:134] libmachine: (clustera) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:de:9e:dd:27:24:c3 ID:1,de:9e:dd:27:24:c3 Lease:0x62e30f7c} I0728 09:11:44.504447 29099 main.go:134] libmachine: (clustera) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:7a:57:95:b6:c3:e6 ID:1,7a:57:95:b6:c3:e6 Lease:0x62e1bc45} I0728 09:11:44.504456 29099 main.go:134] libmachine: (clustera) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:e2:29:5d:f7:b0:37 ID:1,e2:29:5d:f7:b0:37 Lease:0x61c243cb} I0728 09:11:46.505164 29099 main.go:134] libmachine: (clustera) DBG | Attempt 5 I0728 09:11:46.505206 29099 main.go:134] libmachine: (clustera) DBG | exe=/Users/bknitter/.minikube/bin/docker-machine-driver-hyperkit uid=0 I0728 09:11:46.505326 29099 main.go:134] libmachine: (clustera) DBG | hyperkit pid from json: 29207 I0728 09:11:46.506183 29099 main.go:134] libmachine: (clustera) DBG | Searching for 86:ad:ed:a3:f4:cf in /var/db/dhcpd_leases ... I0728 09:11:46.506238 29099 main.go:134] libmachine: (clustera) DBG | Found 6 entries in /var/db/dhcpd_leases! 
I0728 09:11:46.506254 29099 main.go:134] libmachine: (clustera) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:fe:e4:9b:a3:10:58 ID:1,fe:e4:9b:a3:10:58 Lease:0x62e3fd15} I0728 09:11:46.506263 29099 main.go:134] libmachine: (clustera) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:7e:3e:d3:d8:9c:fb ID:1,7e:3e:d3:d8:9c:fb Lease:0x62e2ab11} I0728 09:11:46.506295 29099 main.go:134] libmachine: (clustera) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:fa:3:d:d2:d8:a9 ID:1,fa:3:d:d2:d8:a9 Lease:0x62e1d5c8} I0728 09:11:46.506311 29099 main.go:134] libmachine: (clustera) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:de:9e:dd:27:24:c3 ID:1,de:9e:dd:27:24:c3 Lease:0x62e30f7c} I0728 09:11:46.506334 29099 main.go:134] libmachine: (clustera) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:7a:57:95:b6:c3:e6 ID:1,7a:57:95:b6:c3:e6 Lease:0x62e1bc45} I0728 09:11:46.506346 29099 main.go:134] libmachine: (clustera) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:e2:29:5d:f7:b0:37 ID:1,e2:29:5d:f7:b0:37 Lease:0x61c243cb} I0728 09:11:48.507575 29099 main.go:134] libmachine: (clustera) DBG | Attempt 6 I0728 09:11:48.507686 29099 main.go:134] libmachine: (clustera) DBG | exe=/Users/bknitter/.minikube/bin/docker-machine-driver-hyperkit uid=0 I0728 09:11:48.508076 29099 main.go:134] libmachine: (clustera) DBG | hyperkit pid from json: 29207 I0728 09:11:48.512192 29099 main.go:134] libmachine: (clustera) DBG | Searching for 86:ad:ed:a3:f4:cf in /var/db/dhcpd_leases ... I0728 09:11:48.512738 29099 main.go:134] libmachine: (clustera) DBG | Found 7 entries in /var/db/dhcpd_leases! 
I0728 09:11:48.512777 29099 main.go:134] libmachine: (clustera) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:86:ad:ed:a3:f4:cf ID:1,86:ad:ed:a3:f4:cf Lease:0x62e406c3} I0728 09:11:48.512808 29099 main.go:134] libmachine: (clustera) DBG | Found match: 86:ad:ed:a3:f4:cf I0728 09:11:48.512830 29099 main.go:134] libmachine: (clustera) DBG | IP: 192.168.64.8 I0728 09:11:48.512979 29099 main.go:134] libmachine: (clustera) Calling .GetConfigRaw I0728 09:11:48.518866 29099 main.go:134] libmachine: (clustera) Calling .DriverName I0728 09:11:48.519364 29099 main.go:134] libmachine: (clustera) Calling .DriverName I0728 09:11:48.519837 29099 main.go:134] libmachine: Waiting for machine to be running, this may take a few minutes... I0728 09:11:48.519859 29099 main.go:134] libmachine: (clustera) Calling .GetState I0728 09:11:48.520307 29099 main.go:134] libmachine: (clustera) DBG | exe=/Users/bknitter/.minikube/bin/docker-machine-driver-hyperkit uid=0 I0728 09:11:48.520566 29099 main.go:134] libmachine: (clustera) DBG | hyperkit pid from json: 29207 I0728 09:11:48.522924 29099 main.go:134] libmachine: Detecting operating system of created instance... I0728 09:11:48.522947 29099 main.go:134] libmachine: Waiting for SSH to be available... I0728 09:11:48.522961 29099 main.go:134] libmachine: Getting to WaitForSSH function... 
I0728 09:11:48.522977 29099 main.go:134] libmachine: (clustera) Calling .GetSSHHostname I0728 09:11:48.523307 29099 main.go:134] libmachine: (clustera) Calling .GetSSHPort I0728 09:11:48.523663 29099 main.go:134] libmachine: (clustera) Calling .GetSSHKeyPath I0728 09:11:48.524116 29099 main.go:134] libmachine: (clustera) Calling .GetSSHKeyPath I0728 09:11:48.524630 29099 main.go:134] libmachine: (clustera) Calling .GetSSHUsername I0728 09:11:48.527372 29099 main.go:134] libmachine: Using SSH client type: native I0728 09:11:48.528144 29099 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x43da180] 0x43dd1e0 [] 0s} 192.168.64.8 22 } I0728 09:11:48.528184 29099 main.go:134] libmachine: About to run SSH command: exit 0 I0728 09:11:48.563260 29099 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain I0728 09:11:51.641973 29099 main.go:134] libmachine: SSH cmd err, output: : I0728 09:11:51.641982 29099 main.go:134] libmachine: Detecting the provisioner... 
I0728 09:11:51.641987 29099 main.go:134] libmachine: (clustera) Calling .GetSSHHostname I0728 09:11:51.642343 29099 main.go:134] libmachine: (clustera) Calling .GetSSHPort I0728 09:11:51.642527 29099 main.go:134] libmachine: (clustera) Calling .GetSSHKeyPath I0728 09:11:51.642654 29099 main.go:134] libmachine: (clustera) Calling .GetSSHKeyPath I0728 09:11:51.642829 29099 main.go:134] libmachine: (clustera) Calling .GetSSHUsername I0728 09:11:51.643026 29099 main.go:134] libmachine: Using SSH client type: native I0728 09:11:51.643219 29099 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x43da180] 0x43dd1e0 [] 0s} 192.168.64.8 22 } I0728 09:11:51.643223 29099 main.go:134] libmachine: About to run SSH command: cat /etc/os-release I0728 09:11:51.720252 29099 main.go:134] libmachine: SSH cmd err, output: : NAME=Buildroot VERSION=2021.02.12-1-g14f2929-dirty ID=buildroot VERSION_ID=2021.02.12 PRETTY_NAME="Buildroot 2021.02.12"

I0728 09:11:51.721198 29099 main.go:134] libmachine: found compatible host: buildroot I0728 09:11:51.721207 29099 main.go:134] libmachine: Provisioning with buildroot... I0728 09:11:51.721215 29099 main.go:134] libmachine: (clustera) Calling .GetMachineName I0728 09:11:51.721504 29099 buildroot.go:166] provisioning hostname "clustera" I0728 09:11:51.721514 29099 main.go:134] libmachine: (clustera) Calling .GetMachineName I0728 09:11:51.721720 29099 main.go:134] libmachine: (clustera) Calling .GetSSHHostname I0728 09:11:51.721857 29099 main.go:134] libmachine: (clustera) Calling .GetSSHPort I0728 09:11:51.721992 29099 main.go:134] libmachine: (clustera) Calling .GetSSHKeyPath I0728 09:11:51.722208 29099 main.go:134] libmachine: (clustera) Calling .GetSSHKeyPath I0728 09:11:51.722333 29099 main.go:134] libmachine: (clustera) Calling .GetSSHUsername I0728 09:11:51.722624 29099 main.go:134] libmachine: Using SSH client type: native I0728 09:11:51.722812 29099 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x43da180] 0x43dd1e0 [] 0s} 192.168.64.8 22 } I0728 09:11:51.722818 29099 main.go:134] libmachine: About to run SSH command: sudo hostname clustera && echo "clustera" | sudo tee /etc/hostname I0728 09:11:51.804051 29099 main.go:134] libmachine: SSH cmd err, output: : clustera

I0728 09:11:51.804080 29099 main.go:134] libmachine: (clustera) Calling .GetSSHHostname I0728 09:11:51.804274 29099 main.go:134] libmachine: (clustera) Calling .GetSSHPort I0728 09:11:51.804438 29099 main.go:134] libmachine: (clustera) Calling .GetSSHKeyPath I0728 09:11:51.804632 29099 main.go:134] libmachine: (clustera) Calling .GetSSHKeyPath I0728 09:11:51.804813 29099 main.go:134] libmachine: (clustera) Calling .GetSSHUsername I0728 09:11:51.805016 29099 main.go:134] libmachine: Using SSH client type: native I0728 09:11:51.805198 29099 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x43da180] 0x43dd1e0 [] 0s} 192.168.64.8 22 } I0728 09:11:51.805207 29099 main.go:134] libmachine: About to run SSH command:

    if ! grep -xq '.*\sclustera' /etc/hosts; then
        if grep -xq '127.0.1.1\s.*' /etc/hosts; then
            sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 clustera/g' /etc/hosts;
        else 
            echo '127.0.1.1 clustera' | sudo tee -a /etc/hosts; 
        fi
    fi

I0728 09:11:51.880780 29099 main.go:134] libmachine: SSH cmd err, output: : I0728 09:11:51.880808 29099 buildroot.go:172] set auth options {CertDir:/Users/bknitter/.minikube CaCertPath:/Users/bknitter/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/bknitter/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/bknitter/.minikube/machines/server.pem ServerKeyPath:/Users/bknitter/.minikube/machines/server-key.pem ClientKeyPath:/Users/bknitter/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/bknitter/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/bknitter/.minikube} I0728 09:11:51.880844 29099 buildroot.go:174] setting up certificates I0728 09:11:51.880867 29099 provision.go:83] configureAuth start I0728 09:11:51.880878 29099 main.go:134] libmachine: (clustera) Calling .GetMachineName I0728 09:11:51.881219 29099 main.go:134] libmachine: (clustera) Calling .GetIP I0728 09:11:51.881422 29099 main.go:134] libmachine: (clustera) Calling .GetSSHHostname I0728 09:11:51.881607 29099 provision.go:138] copyHostCerts I0728 09:11:51.881820 29099 exec_runner.go:151] cp: /Users/bknitter/.minikube/certs/ca.pem --> /Users/bknitter/.minikube/ca.pem (1082 bytes) I0728 09:11:51.883299 29099 exec_runner.go:151] cp: /Users/bknitter/.minikube/certs/cert.pem --> /Users/bknitter/.minikube/cert.pem (1127 bytes) I0728 09:11:51.884019 29099 exec_runner.go:151] cp: /Users/bknitter/.minikube/certs/key.pem --> /Users/bknitter/.minikube/key.pem (1675 bytes) I0728 09:11:51.884615 29099 provision.go:112] generating server cert: /Users/bknitter/.minikube/machines/server.pem ca-key=/Users/bknitter/.minikube/certs/ca.pem private-key=/Users/bknitter/.minikube/certs/ca-key.pem org=bknitter.clustera san=[192.168.64.8 192.168.64.8 localhost 127.0.0.1 minikube clustera] I0728 09:11:51.954750 29099 provision.go:172] copyRemoteCerts I0728 09:11:51.955286 29099 ssh_runner.go:195] 
Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0728 09:11:51.955312 29099 main.go:134] libmachine: (clustera) Calling .GetSSHHostname I0728 09:11:51.955607 29099 main.go:134] libmachine: (clustera) Calling .GetSSHPort I0728 09:11:51.955747 29099 main.go:134] libmachine: (clustera) Calling .GetSSHKeyPath I0728 09:11:51.955893 29099 main.go:134] libmachine: (clustera) Calling .GetSSHUsername I0728 09:11:51.956129 29099 sshutil.go:53] new ssh client: &{IP:192.168.64.8 Port:22 SSHKeyPath:/Users/bknitter/.minikube/machines/clustera/id_rsa Username:docker} I0728 09:11:51.998064 29099 ssh_runner.go:362] scp /Users/bknitter/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes) I0728 09:11:52.023588 29099 ssh_runner.go:362] scp /Users/bknitter/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes) I0728 09:11:52.051352 29099 ssh_runner.go:362] scp /Users/bknitter/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes) I0728 09:11:52.078197 29099 provision.go:86] duration metric: configureAuth took 197.304633ms I0728 09:11:52.078210 29099 buildroot.go:189] setting minikube options for container-runtime I0728 09:11:52.078825 29099 config.go:178] Loaded profile config "clustera": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.24.1 I0728 09:11:52.078847 29099 main.go:134] libmachine: (clustera) Calling .DriverName I0728 09:11:52.079135 29099 main.go:134] libmachine: (clustera) Calling .GetSSHHostname I0728 09:11:52.079426 29099 main.go:134] libmachine: (clustera) Calling .GetSSHPort I0728 09:11:52.079538 29099 main.go:134] libmachine: (clustera) Calling .GetSSHKeyPath I0728 09:11:52.079680 29099 main.go:134] libmachine: (clustera) Calling .GetSSHKeyPath I0728 09:11:52.079813 29099 main.go:134] libmachine: (clustera) Calling .GetSSHUsername I0728 09:11:52.080007 29099 main.go:134] libmachine: Using SSH client type: native I0728 09:11:52.080183 29099 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x43da180] 
0x43dd1e0 [] 0s} 192.168.64.8 22 } I0728 09:11:52.080188 29099 main.go:134] libmachine: About to run SSH command: df --output=fstype / | tail -n 1 I0728 09:11:52.147168 29099 main.go:134] libmachine: SSH cmd err, output: : tmpfs

I0728 09:11:52.147179 29099 buildroot.go:70] root file system type: tmpfs I0728 09:11:52.147394 29099 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ... I0728 09:11:52.147417 29099 main.go:134] libmachine: (clustera) Calling .GetSSHHostname I0728 09:11:52.147757 29099 main.go:134] libmachine: (clustera) Calling .GetSSHPort I0728 09:11:52.147891 29099 main.go:134] libmachine: (clustera) Calling .GetSSHKeyPath I0728 09:11:52.148040 29099 main.go:134] libmachine: (clustera) Calling .GetSSHKeyPath I0728 09:11:52.148164 29099 main.go:134] libmachine: (clustera) Calling .GetSSHUsername I0728 09:11:52.148352 29099 main.go:134] libmachine: Using SSH client type: native I0728 09:11:52.148554 29099 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x43da180] 0x43dd1e0 [] 0s} 192.168.64.8 22 } I0728 09:11:52.148601 29099 main.go:134] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com After=network.target minikube-automount.service docker.socket Requires= minikube-automount.service docker.socket StartLimitBurst=3 StartLimitIntervalSec=60

[Service] Type=notify Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.

ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.

LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.

TasksMax=infinity TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers

Delegate=yes

# kill only the docker process, not all processes in the cgroup

KillMode=process

[Install] WantedBy=multi-user.target " | sudo tee /lib/systemd/system/docker.service.new I0728 09:11:52.232677 29099 main.go:134] libmachine: SSH cmd err, output: : [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com After=network.target minikube-automount.service docker.socket Requires= minikube-automount.service docker.socket StartLimitBurst=3 StartLimitIntervalSec=60

[Service] Type=notify Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.

ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.

LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.

TasksMax=infinity TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers

Delegate=yes

# kill only the docker process, not all processes in the cgroup

KillMode=process

[Install] WantedBy=multi-user.target

I0728 09:11:52.232705 29099 main.go:134] libmachine: (clustera) Calling .GetSSHHostname I0728 09:11:52.232961 29099 main.go:134] libmachine: (clustera) Calling .GetSSHPort I0728 09:11:52.233105 29099 main.go:134] libmachine: (clustera) Calling .GetSSHKeyPath I0728 09:11:52.233247 29099 main.go:134] libmachine: (clustera) Calling .GetSSHKeyPath I0728 09:11:52.233398 29099 main.go:134] libmachine: (clustera) Calling .GetSSHUsername I0728 09:11:52.233623 29099 main.go:134] libmachine: Using SSH client type: native I0728 09:11:52.233793 29099 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x43da180] 0x43dd1e0 [] 0s} 192.168.64.8 22 } I0728 09:11:52.233811 29099 main.go:134] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; } I0728 09:11:52.916902 29099 main.go:134] libmachine: SSH cmd err, output: : diff: can't stat '/lib/systemd/system/docker.service': No such file or directory Created symlink /etc/systemd/system/multi-user.target.wants/docker.service โ†’ /usr/lib/systemd/system/docker.service.

I0728 09:11:52.916939 29099 main.go:134] libmachine: Checking connection to Docker... I0728 09:11:52.916949 29099 main.go:134] libmachine: (clustera) Calling .GetURL I0728 09:11:52.917223 29099 main.go:134] libmachine: Docker is up and running! I0728 09:11:52.917230 29099 main.go:134] libmachine: Reticulating splines... I0728 09:11:52.917237 29099 client.go:171] LocalClient.Create took 18.302884241s I0728 09:11:52.917251 29099 start.go:173] duration metric: libmachine.API.Create for "clustera" took 18.302979267s I0728 09:11:52.917262 29099 start.go:306] post-start starting for "clustera" (driver="hyperkit") I0728 09:11:52.917266 29099 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0728 09:11:52.917281 29099 main.go:134] libmachine: (clustera) Calling .DriverName I0728 09:11:52.917573 29099 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0728 09:11:52.917593 29099 main.go:134] libmachine: (clustera) Calling .GetSSHHostname I0728 09:11:52.917781 29099 main.go:134] libmachine: (clustera) Calling .GetSSHPort I0728 09:11:52.917972 29099 main.go:134] libmachine: (clustera) Calling .GetSSHKeyPath I0728 09:11:52.918124 29099 main.go:134] libmachine: (clustera) Calling .GetSSHUsername I0728 09:11:52.918237 29099 sshutil.go:53] new ssh client: &{IP:192.168.64.8 Port:22 SSHKeyPath:/Users/bknitter/.minikube/machines/clustera/id_rsa Username:docker} I0728 09:11:52.967039 29099 ssh_runner.go:195] Run: cat /etc/os-release I0728 09:11:52.972114 29099 info.go:137] Remote host: Buildroot 2021.02.12 I0728 09:11:52.972139 29099 filesync.go:126] Scanning 
/Users/bknitter/.minikube/addons for local assets ... I0728 09:11:52.972569 29099 filesync.go:126] Scanning /Users/bknitter/.minikube/files for local assets ... I0728 09:11:52.972749 29099 start.go:309] post-start completed in 55.478581ms I0728 09:11:52.972772 29099 main.go:134] libmachine: (clustera) Calling .GetConfigRaw I0728 09:11:52.974261 29099 main.go:134] libmachine: (clustera) Calling .GetIP I0728 09:11:52.974488 29099 profile.go:148] Saving config to /Users/bknitter/.minikube/profiles/clustera/config.json ... I0728 09:11:52.975756 29099 start.go:134] duration metric: createHost completed in 18.678594948s I0728 09:11:52.975770 29099 main.go:134] libmachine: (clustera) Calling .GetSSHHostname I0728 09:11:52.975922 29099 main.go:134] libmachine: (clustera) Calling .GetSSHPort I0728 09:11:52.976119 29099 main.go:134] libmachine: (clustera) Calling .GetSSHKeyPath I0728 09:11:52.976275 29099 main.go:134] libmachine: (clustera) Calling .GetSSHKeyPath I0728 09:11:52.976461 29099 main.go:134] libmachine: (clustera) Calling .GetSSHUsername I0728 09:11:52.976633 29099 main.go:134] libmachine: Using SSH client type: native I0728 09:11:52.976778 29099 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x43da180] 0x43dd1e0 [] 0s} 192.168.64.8 22 } I0728 09:11:52.976783 29099 main.go:134] libmachine: About to run SSH command: date +%!s(MISSING).%!N(MISSING) I0728 09:11:53.050107 29099 main.go:134] libmachine: SSH cmd err, output: : 1659024713.060733821

I0728 09:11:53.050115 29099 fix.go:207] guest clock: 1659024713.060733821 I0728 09:11:53.050120 29099 fix.go:220] Guest: 2022-07-28 09:11:53.060733821 -0700 PDT Remote: 2022-07-28 09:11:52.975763 -0700 PDT m=+108.331453934 (delta=84.970821ms) I0728 09:11:53.050138 29099 fix.go:191] guest clock delta is within tolerance: 84.970821ms I0728 09:11:53.050141 29099 start.go:81] releasing machines lock for "clustera", held for 18.753131607s I0728 09:11:53.050199 29099 main.go:134] libmachine: (clustera) Calling .DriverName I0728 09:11:53.050446 29099 main.go:134] libmachine: (clustera) Calling .GetIP I0728 09:11:53.050623 29099 main.go:134] libmachine: (clustera) Calling .DriverName I0728 09:11:53.050784 29099 main.go:134] libmachine: (clustera) Calling .DriverName I0728 09:11:53.050898 29099 main.go:134] libmachine: (clustera) Calling .DriverName I0728 09:11:53.051797 29099 main.go:134] libmachine: (clustera) Calling .DriverName I0728 09:11:53.051926 29099 main.go:134] libmachine: (clustera) Calling .DriverName I0728 09:11:53.052126 29099 ssh_runner.go:195] Run: systemctl --version I0728 09:11:53.052139 29099 main.go:134] libmachine: (clustera) Calling .GetSSHHostname I0728 09:11:53.052247 29099 main.go:134] libmachine: (clustera) Calling .GetSSHPort I0728 09:11:53.052334 29099 main.go:134] libmachine: (clustera) Calling .GetSSHKeyPath I0728 09:11:53.052412 29099 main.go:134] libmachine: (clustera) Calling .GetSSHUsername I0728 09:11:53.052474 29099 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/ I0728 09:11:53.052495 29099 main.go:134] libmachine: (clustera) Calling .GetSSHHostname I0728 09:11:53.052509 29099 sshutil.go:53] new ssh client: &{IP:192.168.64.8 Port:22 SSHKeyPath:/Users/bknitter/.minikube/machines/clustera/id_rsa Username:docker} I0728 09:11:53.052590 29099 main.go:134] libmachine: (clustera) Calling .GetSSHPort I0728 09:11:53.052665 29099 main.go:134] libmachine: (clustera) Calling .GetSSHKeyPath I0728 09:11:53.052754 29099 main.go:134] 
libmachine: (clustera) Calling .GetSSHUsername I0728 09:11:53.052817 29099 sshutil.go:53] new ssh client: &{IP:192.168.64.8 Port:22 SSHKeyPath:/Users/bknitter/.minikube/machines/clustera/id_rsa Username:docker} W0728 09:11:53.228752 29099 start.go:731] [curl -sS -m 2 https://k8s.gcr.io/] failed: curl -sS -m 2 https://k8s.gcr.io/: Process exited with status 60 stdout:

stderr: curl: (60) SSL certificate problem: self signed certificate in certificate chain More details here: https://curl.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above. I0728 09:11:53.228809 29099 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker W0728 09:11:53.228909 29099 out.go:239] โ— This VM is having trouble accessing https://k8s.gcr.io W0728 09:11:53.229055 29099 out.go:239] ๐Ÿ’ก To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/ I0728 09:11:53.229964 29099 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0728 09:11:53.255922 29099 docker.go:602] Got preloaded images: I0728 09:11:53.255930 29099 docker.go:608] k8s.gcr.io/kube-apiserver:v1.24.1 wasn't preloaded I0728 09:11:53.255999 29099 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json I0728 09:11:53.268622 29099 ssh_runner.go:195] Run: which lz4 I0728 09:11:53.273664 29099 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4 I0728 09:11:53.277868 29099 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1 stdout:

stderr: stat: cannot statx '/preloaded.tar.lz4': No such file or directory I0728 09:11:53.277897 29099 ssh_runner.go:362] scp /Users/bknitter/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (425543115 bytes) I0728 09:11:55.431282 29099 docker.go:567] Took 2.158212 seconds to copy over tarball I0728 09:11:55.431355 29099 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4 I0728 09:12:03.726506 29099 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (8.295123237s) I0728 09:12:03.726537 29099 ssh_runner.go:146] rm: /preloaded.tar.lz4 I0728 09:12:03.763937 29099 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json I0728 09:12:03.778195 29099 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes) I0728 09:12:03.795494 29099 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0728 09:12:03.949721 29099 ssh_runner.go:195] Run: sudo systemctl restart docker I0728 09:12:05.546681 29099 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.596942729s) I0728 09:12:05.546874 29099 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0728 09:12:05.559134 29099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd I0728 09:12:05.576720 29099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio I0728 09:12:05.589673 29099 ssh_runner.go:195] Run: sudo systemctl stop -f crio I0728 09:12:05.634368 29099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio I0728 09:12:05.647668 29099 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock image-endpoint: unix:///var/run/cri-dockerd.sock " | sudo tee /etc/crictl.yaml" I0728 09:12:05.672458 29099 ssh_runner.go:195] Run: sudo systemctl unmask docker.service I0728 09:12:05.780095 29099 ssh_runner.go:195] 
Run: sudo systemctl enable docker.socket I0728 09:12:05.884883 29099 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0728 09:12:05.992542 29099 ssh_runner.go:195] Run: sudo systemctl restart docker I0728 09:12:07.357208 29099 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.364650088s) I0728 09:12:07.357291 29099 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket I0728 09:12:07.462491 29099 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0728 09:12:07.574432 29099 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket I0728 09:12:07.589016 29099 start.go:447] Will wait 60s for socket path /var/run/cri-dockerd.sock I0728 09:12:07.589113 29099 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock I0728 09:12:07.594451 29099 start.go:468] Will wait 60s for crictl version I0728 09:12:07.594517 29099 ssh_runner.go:195] Run: sudo crictl version I0728 09:12:07.631797 29099 start.go:477] Version: 0.1.0 RuntimeName: docker RuntimeVersion: 20.10.16 RuntimeApiVersion: 1.41.0 I0728 09:12:07.631877 29099 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0728 09:12:07.663764 29099 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0728 09:12:07.738937 29099 out.go:204] ๐Ÿณ Preparing Kubernetes v1.24.1 on Docker 20.10.16 ... 
I0728 09:12:07.740043 29099 ssh_runner.go:195] Run: grep 192.168.64.1 host.minikube.internal$ /etc/hosts I0728 09:12:07.748383 29099 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.64.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0728 09:12:07.765763 29099 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker I0728 09:12:07.765868 29099 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0728 09:12:07.796150 29099 docker.go:602] Got preloaded images: -- stdout -- k8s.gcr.io/kube-apiserver:v1.24.1 k8s.gcr.io/kube-controller-manager:v1.24.1 k8s.gcr.io/kube-proxy:v1.24.1 k8s.gcr.io/kube-scheduler:v1.24.1 k8s.gcr.io/etcd:3.5.3-0 k8s.gcr.io/pause:3.7 k8s.gcr.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5

-- /stdout -- I0728 09:12:07.796164 29099 docker.go:533] Images already preloaded, skipping extraction I0728 09:12:07.796242 29099 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0728 09:12:07.824101 29099 docker.go:602] Got preloaded images: -- stdout -- k8s.gcr.io/kube-apiserver:v1.24.1 k8s.gcr.io/kube-scheduler:v1.24.1 k8s.gcr.io/kube-controller-manager:v1.24.1 k8s.gcr.io/kube-proxy:v1.24.1 k8s.gcr.io/etcd:3.5.3-0 k8s.gcr.io/pause:3.7 k8s.gcr.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5

-- /stdout -- I0728 09:12:07.824131 29099 cache_images.go:84] Images are preloaded, skipping loading I0728 09:12:07.824215 29099 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}} I0728 09:12:07.856704 29099 cni.go:95] Creating CNI manager for "" I0728 09:12:07.856712 29099 cni.go:169] CNI unnecessary in this configuration, recommending no CNI I0728 09:12:07.856734 29099 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16 I0728 09:12:07.856748 29099 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.64.8 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:clustera NodeName:clustera DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.64.8"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.64.8 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]} I0728 09:12:07.857074 29099 kubeadm.go:162] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta3 kind: InitConfiguration localAPIEndpoint: advertiseAddress: 192.168.64.8 bindPort: 8443 bootstrapTokens:

I0728 09:12:07.857243 29099 kubeadm.go:961] kubelet [Unit] Wants=docker.socket

[Service] ExecStart= ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=clustera --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.64.8 --runtime-request-timeout=15m

[Install] config: {KubernetesVersion:v1.24.1 ClusterName:clustera Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} I0728 09:12:07.857326 29099 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1 I0728 09:12:07.866193 29099 binaries.go:44] Found k8s binaries, skipping transfer I0728 09:12:07.866284 29099 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube I0728 09:12:07.875513 29099 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (470 bytes) I0728 09:12:07.893133 29099 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes) I0728 09:12:07.908181 29099 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2030 bytes) I0728 09:12:07.925418 29099 ssh_runner.go:195] Run: grep 192.168.64.8 control-plane.minikube.internal$ /etc/hosts I0728 09:12:07.929318 29099 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.64.8 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0728 09:12:07.941052 29099 certs.go:54] Setting up /Users/bknitter/.minikube/profiles/clustera for IP: 192.168.64.8 I0728 09:12:07.941099 29099 certs.go:187] generating minikubeCA CA: /Users/bknitter/.minikube/ca.key I0728 09:12:08.044554 29099 crypto.go:156] Writing cert to /Users/bknitter/.minikube/ca.crt ... 
I0728 09:12:08.044566 29099 lock.go:35] WriteFile acquiring /Users/bknitter/.minikube/ca.crt: {Name:mkec9a22f4dcd78594716bc157a8738c782c399e Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0728 09:12:08.045238 29099 crypto.go:164] Writing key to /Users/bknitter/.minikube/ca.key ... I0728 09:12:08.045243 29099 lock.go:35] WriteFile acquiring /Users/bknitter/.minikube/ca.key: {Name:mk5d538fbd42276d238a9b567d54ba566f48df85 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0728 09:12:08.045862 29099 certs.go:187] generating proxyClientCA CA: /Users/bknitter/.minikube/proxy-client-ca.key I0728 09:12:08.212556 29099 crypto.go:156] Writing cert to /Users/bknitter/.minikube/proxy-client-ca.crt ... I0728 09:12:08.212571 29099 lock.go:35] WriteFile acquiring /Users/bknitter/.minikube/proxy-client-ca.crt: {Name:mk851b75b6093b12dc007d52f9bc56c995755bae Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0728 09:12:08.213170 29099 crypto.go:164] Writing key to /Users/bknitter/.minikube/proxy-client-ca.key ... I0728 09:12:08.213182 29099 lock.go:35] WriteFile acquiring /Users/bknitter/.minikube/proxy-client-ca.key: {Name:mk1ed65ef17f24eec692bfce3c85f26666fbddb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0728 09:12:08.213657 29099 certs.go:302] generating minikube-user signed cert: /Users/bknitter/.minikube/profiles/clustera/client.key I0728 09:12:08.213671 29099 crypto.go:68] Generating cert /Users/bknitter/.minikube/profiles/clustera/client.crt with IP's: [] I0728 09:12:08.284606 29099 crypto.go:156] Writing cert to /Users/bknitter/.minikube/profiles/clustera/client.crt ... I0728 09:12:08.284617 29099 lock.go:35] WriteFile acquiring /Users/bknitter/.minikube/profiles/clustera/client.crt: {Name:mkc4c2f2081dcfec1eeb20c1c5a2e02d26f9d6c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0728 09:12:08.285302 29099 crypto.go:164] Writing key to /Users/bknitter/.minikube/profiles/clustera/client.key ... 
I0728 09:12:08.285311 29099 lock.go:35] WriteFile acquiring /Users/bknitter/.minikube/profiles/clustera/client.key: {Name:mk4286d34fa9130cce19f7d4aa68fbdc8e892d1c Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0728 09:12:08.285953 29099 certs.go:302] generating minikube signed cert: /Users/bknitter/.minikube/profiles/clustera/apiserver.key.a9c8755b I0728 09:12:08.285983 29099 crypto.go:68] Generating cert /Users/bknitter/.minikube/profiles/clustera/apiserver.crt.a9c8755b with IP's: [192.168.64.8 10.96.0.1 127.0.0.1 10.0.0.1] I0728 09:12:08.516165 29099 crypto.go:156] Writing cert to /Users/bknitter/.minikube/profiles/clustera/apiserver.crt.a9c8755b ... I0728 09:12:08.516175 29099 lock.go:35] WriteFile acquiring /Users/bknitter/.minikube/profiles/clustera/apiserver.crt.a9c8755b: {Name:mk9542bf1e4067f3a02426b08e391887bd93e1ee Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0728 09:12:08.518696 29099 crypto.go:164] Writing key to /Users/bknitter/.minikube/profiles/clustera/apiserver.key.a9c8755b ... I0728 09:12:08.518717 29099 lock.go:35] WriteFile acquiring /Users/bknitter/.minikube/profiles/clustera/apiserver.key.a9c8755b: {Name:mka8884871cad8bf8e795df5afcc1b0bedf7e8da Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0728 09:12:08.519248 29099 certs.go:320] copying /Users/bknitter/.minikube/profiles/clustera/apiserver.crt.a9c8755b -> /Users/bknitter/.minikube/profiles/clustera/apiserver.crt I0728 09:12:08.520201 29099 certs.go:324] copying /Users/bknitter/.minikube/profiles/clustera/apiserver.key.a9c8755b -> /Users/bknitter/.minikube/profiles/clustera/apiserver.key I0728 09:12:08.520748 29099 certs.go:302] generating aggregator signed cert: /Users/bknitter/.minikube/profiles/clustera/proxy-client.key I0728 09:12:08.520825 29099 crypto.go:68] Generating cert /Users/bknitter/.minikube/profiles/clustera/proxy-client.crt with IP's: [] I0728 09:12:08.884811 29099 crypto.go:156] Writing cert to /Users/bknitter/.minikube/profiles/clustera/proxy-client.crt ... 
I0728 09:12:08.884822 29099 lock.go:35] WriteFile acquiring /Users/bknitter/.minikube/profiles/clustera/proxy-client.crt: {Name:mk11df11cb930ff62fe0b428194e863f17e6ab4a Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0728 09:12:08.885293 29099 crypto.go:164] Writing key to /Users/bknitter/.minikube/profiles/clustera/proxy-client.key ... I0728 09:12:08.885306 29099 lock.go:35] WriteFile acquiring /Users/bknitter/.minikube/profiles/clustera/proxy-client.key: {Name:mk1985c512b1a6f07981e7dd94cd280ee1633cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0728 09:12:08.886842 29099 certs.go:388] found cert: /Users/bknitter/.minikube/certs/Users/bknitter/.minikube/certs/ca-key.pem (1679 bytes) I0728 09:12:08.887253 29099 certs.go:388] found cert: /Users/bknitter/.minikube/certs/Users/bknitter/.minikube/certs/ca.pem (1082 bytes) I0728 09:12:08.887630 29099 certs.go:388] found cert: /Users/bknitter/.minikube/certs/Users/bknitter/.minikube/certs/cert.pem (1127 bytes) I0728 09:12:08.887942 29099 certs.go:388] found cert: /Users/bknitter/.minikube/certs/Users/bknitter/.minikube/certs/key.pem (1675 bytes) I0728 09:12:08.889748 29099 ssh_runner.go:362] scp /Users/bknitter/.minikube/profiles/clustera/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I0728 09:12:08.913571 29099 ssh_runner.go:362] scp /Users/bknitter/.minikube/profiles/clustera/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes) I0728 09:12:08.939150 29099 ssh_runner.go:362] scp /Users/bknitter/.minikube/profiles/clustera/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes) I0728 09:12:08.961957 29099 ssh_runner.go:362] scp /Users/bknitter/.minikube/profiles/clustera/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes) I0728 09:12:08.986919 29099 ssh_runner.go:362] scp /Users/bknitter/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0728 09:12:09.011303 29099 ssh_runner.go:362] scp /Users/bknitter/.minikube/ca.key --> 
/var/lib/minikube/certs/ca.key (1675 bytes) I0728 09:12:09.036478 29099 ssh_runner.go:362] scp /Users/bknitter/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0728 09:12:09.061792 29099 ssh_runner.go:362] scp /Users/bknitter/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes) I0728 09:12:09.086838 29099 ssh_runner.go:362] scp /Users/bknitter/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0728 09:12:09.109432 29099 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) I0728 09:12:09.125265 29099 ssh_runner.go:195] Run: openssl version I0728 09:12:09.131899 29099 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0728 09:12:09.141977 29099 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0728 09:12:09.146124 29099 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 28 16:12 /usr/share/ca-certificates/minikubeCA.pem I0728 09:12:09.146181 29099 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0728 09:12:09.150786 29099 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0728 09:12:09.159976 29099 kubeadm.go:395] StartCluster: {Name:clustera KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false 
HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:clustera Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.8 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} I0728 09:12:09.160109 29099 sshrunner.go:195] Run: docker ps --filter status=paused --filter=name=k8s.*(kube-system) --format={{.ID}} I0728 09:12:09.187631 29099 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0728 09:12:09.196211 29099 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0728 09:12:09.204575 29099 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf 
/etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0728 09:12:09.213005 29099 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout:

stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0728 09:12:09.213029 29099 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem" I0728 09:12:09.576490 29099 out.go:204] โ–ช Generating certificates and keys ... I0728 09:12:12.662754 29099 out.go:204] โ–ช Booting up control plane ... 
W0728 09:16:12.637289 29099 out.go:239] ๐Ÿ’ข initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.24.1 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Generating "apiserver-kubelet-client" certificate and key [certs] Generating "front-proxy-ca" certificate and key [certs] Generating "front-proxy-client" certificate and key [certs] Generating "etcd/ca" certificate and key [certs] Generating "etcd/server" certificate and key [certs] etcd/server serving cert is signed for DNS names [clustera localhost] and IPs [192.168.64.8 127.0.0.1 ::1] [certs] Generating "etcd/peer" certificate and key [certs] etcd/peer serving cert is signed for DNS names [clustera localhost] and IPs [192.168.64.8 127.0.0.1 ::1] [certs] Generating "etcd/healthcheck-client" certificate and key [certs] Generating "apiserver-etcd-client" certificate and key [certs] Generating "sa" key and public key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file 
[kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred: timed out waiting for the condition

This error is likely caused by:

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:

Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all running Kubernetes containers by using crictl:

stderr: W0728 16:12:09.338466 1260 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration! [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher

I0728 09:16:12.639176 29099 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force" I0728 09:16:13.724424 29099 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.085234236s) I0728 09:16:13.724501 29099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0728 09:16:13.736949 29099 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0728 09:16:13.747340 29099 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout:

stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0728 09:16:13.747376 29099 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem" I0728 09:16:14.117570 29099 out.go:204] โ–ช Generating certificates and keys ... I0728 09:16:15.387534 29099 out.go:204] โ–ช Booting up control plane ... 
I0728 09:20:15.400465 29099 kubeadm.go:397] StartCluster complete in 8m6.252189805s I0728 09:20:15.400818 29099 cri.go:52] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]} I0728 09:20:15.401876 29099 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver I0728 09:20:15.486555 29099 cri.go:87] found id: "" I0728 09:20:15.486566 29099 logs.go:274] 0 containers: [] W0728 09:20:15.486570 29099 logs.go:276] No container was found matching "kube-apiserver" I0728 09:20:15.486574 29099 cri.go:52] listing CRI containers in root : {State:all Name:etcd Namespaces:[]} I0728 09:20:15.486665 29099 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd I0728 09:20:15.515902 29099 cri.go:87] found id: "" I0728 09:20:15.515919 29099 logs.go:274] 0 containers: [] W0728 09:20:15.515923 29099 logs.go:276] No container was found matching "etcd" I0728 09:20:15.515941 29099 cri.go:52] listing CRI containers in root : {State:all Name:coredns Namespaces:[]} I0728 09:20:15.516035 29099 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns I0728 09:20:15.543769 29099 cri.go:87] found id: "" I0728 09:20:15.543785 29099 logs.go:274] 0 containers: [] W0728 09:20:15.543791 29099 logs.go:276] No container was found matching "coredns" I0728 09:20:15.543824 29099 cri.go:52] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]} I0728 09:20:15.544030 29099 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler I0728 09:20:15.572906 29099 cri.go:87] found id: "" I0728 09:20:15.572914 29099 logs.go:274] 0 containers: [] W0728 09:20:15.572918 29099 logs.go:276] No container was found matching "kube-scheduler" I0728 09:20:15.572922 29099 cri.go:52] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]} I0728 09:20:15.573018 29099 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy I0728 09:20:15.600383 29099 cri.go:87] found id: "" I0728 09:20:15.600401 29099 
logs.go:274] 0 containers: [] W0728 09:20:15.600410 29099 logs.go:276] No container was found matching "kube-proxy" I0728 09:20:15.600416 29099 cri.go:52] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]} I0728 09:20:15.600572 29099 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard I0728 09:20:15.630405 29099 cri.go:87] found id: "" I0728 09:20:15.630414 29099 logs.go:274] 0 containers: [] W0728 09:20:15.630417 29099 logs.go:276] No container was found matching "kubernetes-dashboard" I0728 09:20:15.630429 29099 cri.go:52] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]} I0728 09:20:15.630540 29099 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner I0728 09:20:15.659550 29099 cri.go:87] found id: "" I0728 09:20:15.659560 29099 logs.go:274] 0 containers: [] W0728 09:20:15.659564 29099 logs.go:276] No container was found matching "storage-provisioner" I0728 09:20:15.659570 29099 cri.go:52] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]} I0728 09:20:15.659674 29099 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager I0728 09:20:15.687851 29099 cri.go:87] found id: "" I0728 09:20:15.687860 29099 logs.go:274] 0 containers: [] W0728 09:20:15.687864 29099 logs.go:276] No container was found matching "kube-controller-manager" I0728 09:20:15.687870 29099 logs.go:123] Gathering logs for kubelet ... I0728 09:20:15.687878 29099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0728 09:20:15.749039 29099 logs.go:123] Gathering logs for dmesg ... I0728 09:20:15.749060 29099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0728 09:20:15.761923 29099 logs.go:123] Gathering logs for describe nodes ... 
I0728 09:20:15.761937 29099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" W0728 09:20:15.854687 29099 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1 stdout:

stderr: The connection to the server localhost:8443 was refused - did you specify the right host or port? output: stderr The connection to the server localhost:8443 was refused - did you specify the right host or port?

/stderr I0728 09:20:15.854707 29099 logs.go:123] Gathering logs for Docker ... I0728 09:20:15.854714 29099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400" I0728 09:20:15.909940 29099 logs.go:123] Gathering logs for container status ... I0728 09:20:15.909967 29099 ssh_runner.go:195] Run: /bin/bash -c "sudo which crictl || echo crictl ps -a || sudo docker ps -a" W0728 09:20:15.947764 29099 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.24.1 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Using existing apiserver-kubelet-client certificate and key on disk [certs] Using existing front-proxy-ca certificate authority [certs] Using existing front-proxy-client certificate and key on disk [certs] Using existing etcd/ca certificate authority [certs] Using existing etcd/server certificate and key on disk [certs] Using existing etcd/peer certificate and key on disk [certs] Using existing etcd/healthcheck-client certificate and key on disk [certs] Using 
existing apiserver-etcd-client certificate and key on disk [certs] Using the existing "sa" key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred: timed out waiting for the condition

This error is likely caused by:

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:

Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all running Kubernetes containers by using crictl:

stderr: W0728 16:16:13.897144 2713 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration! [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher W0728 09:20:15.947793 29099 out.go:239] W0728 09:20:15.948329 29099 out.go:239] ๐Ÿ’ฃ Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.24.1 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Using existing apiserver-kubelet-client certificate and key on disk [certs] Using existing front-proxy-ca certificate authority [certs] Using existing front-proxy-client certificate and key on disk [certs] Using existing etcd/ca certificate authority [certs] 
Using existing etcd/server certificate and key on disk [certs] Using existing etcd/peer certificate and key on disk [certs] Using existing etcd/healthcheck-client certificate and key on disk [certs] Using existing apiserver-etcd-client certificate and key on disk [certs] Using the existing "sa" key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred: timed out waiting for the condition

This error is likely caused by:

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:

Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all running Kubernetes containers by using crictl:

stderr: W0728 16:16:13.897144 2713 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration! [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher

W0728 09:20:15.948567 29099 out.go:239] W0728 09:20:15.950746 29099 out.go:239] โ•ญโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฎ โ”‚ โ”‚ โ”‚ ๐Ÿ˜ฟ If the above advice does not help, please let us know: โ”‚ โ”‚ ๐Ÿ‘‰ https://github.com/kubernetes/minikube/issues/new/choose โ”‚ โ”‚ โ”‚ โ”‚ Please run minikube logs --file=logs.txt and attach logs.txt to the GitHub issue. โ”‚ โ”‚ โ”‚ โ•ฐโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฏ I0728 09:20:16.027095 29099 out.go:177] W0728 09:20:16.066017 29099 out.go:239] โŒ Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.24.1 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using 
existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Using existing apiserver-kubelet-client certificate and key on disk [certs] Using existing front-proxy-ca certificate authority [certs] Using existing front-proxy-client certificate and key on disk [certs] Using existing etcd/ca certificate authority [certs] Using existing etcd/server certificate and key on disk [certs] Using existing etcd/peer certificate and key on disk [certs] Using existing etcd/healthcheck-client certificate and key on disk [certs] Using existing apiserver-etcd-client certificate and key on disk [certs] Using the existing "sa" key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred: timed out waiting for the condition

This error is likely caused by:

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:

Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all running Kubernetes containers by using crictl:

stderr:
W0728 16:16:13.897144 2713 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

W0728 09:20:16.068643 29099 out.go:239] 💡  Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0728 09:20:16.068726 29099 out.go:239] 🍿  Related issue: https://github.com/kubernetes/minikube/issues/4172
I0728 09:20:16.104831 29099 out.go:177]
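For anyone hitting the same error, the suggestion in the log above amounts to roughly the following commands (a sketch; the exact flag value depends on your container runtime's cgroup driver):

```shell
# Start from a clean slate, then retry with the cgroup driver the log suggests.
minikube delete
minikube start --extra-config=kubelet.cgroup-driver=systemd

# If it still fails, inspect the kubelet inside the minikube VM:
minikube ssh -- sudo journalctl -xeu kubelet
```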

bknitter-panw commented 2 years ago

Same issue when using the Docker driver.

afbjorklund commented 2 years ago

The Linux binaries are installed inside the cluster, and not on the host
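In other words, /var/lib/minikube/binaries is a path inside the minikube VM, not on the macOS host, so Homebrew is not expected to populate it. You can confirm this from the host (the version directory below is taken from the log above):

```shell
# List the Kubernetes binaries inside the minikube VM/guest, not on the host.
minikube ssh -- ls /var/lib/minikube/binaries/v1.24.1
```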

bknitter-panw commented 2 years ago

Got it. Any reason this is failing to start based on the logs?

afbjorklund commented 2 years ago

It seems to be a proxy SSL issue, and in any case not related to the path:

x509: certificate signed by unknown authority

bknitter-panw commented 2 years ago

Good point. I'll close as unrelated to path.

afbjorklund commented 2 years ago

If you have a corporate proxy that intercepts and scans all traffic, it usually requires you to install its CA certificate (for all the steamed-open letters).

bknitter-panw commented 2 years ago

Thanks for the pointer. That's exactly what I needed. Looks like I have resolved it by putting all of the certs into the right sub-directory. Thanks again!

https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/#x509-certificate-signed-by-unknown-authority
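Per the linked docs, the fix sketched below is what worked here: copy the proxy's CA certificate into ~/.minikube/certs and recreate the cluster with --embed-certs (the certificate filename is hypothetical; use whatever your IT department provides).

```shell
# Place the corporate proxy's CA certificate where minikube picks it up.
mkdir -p ~/.minikube/certs
cp my-proxy-ca.pem ~/.minikube/certs/   # hypothetical filename

# Recreate the cluster so the certs are embedded into the VM.
minikube delete
minikube start --embed-certs
```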