Closed: bknitter-panw closed this issue 2 years ago
==> Audit <==
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|------|---------|------|---------|------------|----------|
| start | --profile=clustera | minikube | bknitter | v1.26.0 | 28 Jul 22 09:10 PDT | |
==> Last Start <==
Log file created at: 2022/07/28 09:10:04
Running on machine: M-C02FF1LFML85
Binary: Built with gc go1.18.3 for darwin/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0728 09:10:04.721602 29099 out.go:296] Setting OutFile to fd 1 ...
I0728 09:10:04.722057 29099 out.go:348] isatty.IsTerminal(1) = true
I0728 09:10:04.722061 29099 out.go:309] Setting ErrFile to fd 2...
I0728 09:10:04.722065 29099 out.go:348] isatty.IsTerminal(2) = true
I0728 09:10:04.722634 29099 root.go:329] Updating PATH: /Users/bknitter/.minikube/bin
W0728 09:10:04.722769 29099 root.go:307] Error reading config file at /Users/bknitter/.minikube/config/config.json: open /Users/bknitter/.minikube/config/config.json: no such file or directory
I0728 09:10:04.724593 29099 out.go:303] Setting JSON to false
I0728 09:10:04.761785 29099 start.go:115] hostinfo: {"hostname":"M-C02FF1LFML85","uptime":1452688,"bootTime":1657571916,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"944c8299-cbc3-52d3-88a0-048cc6e9c9de"}
W0728 09:10:04.761899 29099 start.go:123] gopshost.Virtualization returned error: not implemented yet
I0728 09:10:04.782531 29099 out.go:177] 😄 [clustera] minikube v1.26.0 on Darwin 12.4
W0728 09:10:04.820878 29099 preload.go:295] Failed to list preload files: open /Users/bknitter/.minikube/cache/preloaded-tarball: no such file or directory
I0728 09:10:04.820888 29099 notify.go:193] Checking for updates...
I0728 09:10:04.821430 29099 driver.go:360] Setting default libvirt URI to qemu:///system
I0728 09:10:04.821466 29099 global.go:111] Querying for installed drivers using PATH=/Users/bknitter/.minikube/bin:/Users/bknitter/bin/google-cloud-sdk/bin:/Users/bknitter/bin/google-cloud-sdk/bin:/Users/bknitter/bin/google-cloud-sdk/bin:/Users/bknitter/bin/google-cloud-sdk/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/go/bin
I0728 09:10:04.825254 29099 global.go:119] qemu2 default: true priority: 3, state: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:podman system connection list
, or try podman machine init
and podman machine start
to manage a new Linux VM
Error: unable to connect to Podman socket: Get "http://d/v3.4.4/libpod/_ping": dial unix ///var/folders/8g/zpxqvph15ljbs4rkm69bc4691zyvpz/T/podman-run--1/podman/podman.sock: connect: no such file or directory Reason: Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/ Version:}
I0728 09:10:06.395617 29099 global.go:119] ssh default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:podman system connection list
, or try podman machine init
and podman machine start
to manage a new Linux VM
Error: unable to connect to Podman socket: Get "http://d/v3.4.4/libpod/_ping": dial unix ///var/folders/8g/zpxqvph15ljbs4rkm69bc4691zyvpz/T/podman-run--1/podman/podman.sock: connect: no such file or directory
I0728 09:10:06.425682 29099 driver.go:330] Picked: hyperkit
I0728 09:10:06.425706 29099 driver.go:331] Alternatives: [ssh qemu2 (experimental)]
I0728 09:10:06.425721 29099 driver.go:332] Rejects: [vmwarefusion docker parallels podman virtualbox vmware]
I0728 09:10:06.472728 29099 out.go:177] ✨ Automatically selected the hyperkit driver. Other choices: ssh, qemu2 (experimental)
I0728 09:10:06.513286 29099 start.go:284] selected driver: hyperkit
I0728 09:10:06.513300 29099 start.go:805] validating driver "hyperkit" against
$ sudo chown root:wheel /Users/bknitter/.minikube/bin/docker-machine-driver-hyperkit
$ sudo chmod u+s /Users/bknitter/.minikube/bin/docker-machine-driver-hyperkit
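The two commands above are what minikube runs to make the hyperkit driver binary root-owned and setuid. A minimal sketch of the check minikube is effectively performing afterwards (the helper name is hypothetical, and `stat -c '%U'` assumes GNU stat; macOS uses `stat -f '%Su'`):

```shell
#!/bin/sh
# Hypothetical helper mirroring minikube's requirement for
# docker-machine-driver-hyperkit: owned by root AND carrying the setuid bit.
check_driver_perms() {
  bin="$1"
  # -u tests the setuid bit; stat reports the file owner (GNU stat assumed)
  [ -u "$bin" ] && [ "$(stat -c '%U' "$bin")" = "root" ]
}

# Example: a freshly created file fails the check until chown/chmod are run.
tmpfile=$(mktemp)
if check_driver_perms "$tmpfile"; then
  echo "driver permissions ok"
else
  echo "driver needs chown root + setuid"
fi
rm -f "$tmpfile"
```

If the check fails at runtime, minikube falls back to the `sudo chown`/`sudo chmod u+s` sequence shown in the log, which is why it prompts for a password here.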
I0728 09:10:09.481084 29099 install.go:99] testing: [sudo -n chown root:wheel /Users/bknitter/.minikube/bin/docker-machine-driver-hyperkit]
I0728 09:10:09.525863 29099 install.go:101] [sudo chown root:wheel /Users/bknitter/.minikube/bin/docker-machine-driver-hyperkit] may require a password: exit status 1
I0728 09:10:09.526061 29099 install.go:106] running: [sudo chown root:wheel /Users/bknitter/.minikube/bin/docker-machine-driver-hyperkit]
I0728 09:10:43.345613 29099 install.go:99] testing: [sudo -n chmod u+s /Users/bknitter/.minikube/bin/docker-machine-driver-hyperkit]
I0728 09:10:43.398913 29099 install.go:106] running: [sudo chmod u+s /Users/bknitter/.minikube/bin/docker-machine-driver-hyperkit]
I0728 09:10:43.438338 29099 start_flags.go:296] no existing cluster config was found, will generate one from the flags
I0728 09:10:43.438620 29099 start_flags.go:377] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
I0728 09:10:43.438772 29099 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
I0728 09:10:43.438794 29099 cni.go:95] Creating CNI manager for ""
I0728 09:10:43.438801 29099 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0728 09:10:43.438806 29099 start_flags.go:310] config:
{Name:clustera KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:clustera Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:
I0728 09:11:51.721198 29099 main.go:134] libmachine: found compatible host: buildroot
I0728 09:11:51.721207 29099 main.go:134] libmachine: Provisioning with buildroot...
I0728 09:11:51.721215 29099 main.go:134] libmachine: (clustera) Calling .GetMachineName
I0728 09:11:51.721504 29099 buildroot.go:166] provisioning hostname "clustera"
I0728 09:11:51.721514 29099 main.go:134] libmachine: (clustera) Calling .GetMachineName
I0728 09:11:51.721720 29099 main.go:134] libmachine: (clustera) Calling .GetSSHHostname
I0728 09:11:51.721857 29099 main.go:134] libmachine: (clustera) Calling .GetSSHPort
I0728 09:11:51.721992 29099 main.go:134] libmachine: (clustera) Calling .GetSSHKeyPath
I0728 09:11:51.722208 29099 main.go:134] libmachine: (clustera) Calling .GetSSHKeyPath
I0728 09:11:51.722333 29099 main.go:134] libmachine: (clustera) Calling .GetSSHUsername
I0728 09:11:51.722624 29099 main.go:134] libmachine: Using SSH client type: native
I0728 09:11:51.722812 29099 main.go:134] libmachine: &{{{
I0728 09:11:51.804080 29099 main.go:134] libmachine: (clustera) Calling .GetSSHHostname
I0728 09:11:51.804274 29099 main.go:134] libmachine: (clustera) Calling .GetSSHPort
I0728 09:11:51.804438 29099 main.go:134] libmachine: (clustera) Calling .GetSSHKeyPath
I0728 09:11:51.804632 29099 main.go:134] libmachine: (clustera) Calling .GetSSHKeyPath
I0728 09:11:51.804813 29099 main.go:134] libmachine: (clustera) Calling .GetSSHUsername
I0728 09:11:51.805016 29099 main.go:134] libmachine: Using SSH client type: native
I0728 09:11:51.805198 29099 main.go:134] libmachine: &{{{
if ! grep -xq '.*\sclustera' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 clustera/g' /etc/hosts;
else
echo '127.0.1.1 clustera' | sudo tee -a /etc/hosts;
fi
fi
I0728 09:11:51.880780 29099 main.go:134] libmachine: SSH cmd err, output:
I0728 09:11:52.147179 29099 buildroot.go:70] root file system type: tmpfs
I0728 09:11:52.147394 29099 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0728 09:11:52.147417 29099 main.go:134] libmachine: (clustera) Calling .GetSSHHostname
I0728 09:11:52.147757 29099 main.go:134] libmachine: (clustera) Calling .GetSSHPort
I0728 09:11:52.147891 29099 main.go:134] libmachine: (clustera) Calling .GetSSHKeyPath
I0728 09:11:52.148040 29099 main.go:134] libmachine: (clustera) Calling .GetSSHKeyPath
I0728 09:11:52.148164 29099 main.go:134] libmachine: (clustera) Calling .GetSSHUsername
I0728 09:11:52.148352 29099 main.go:134] libmachine: Using SSH client type: native
I0728 09:11:52.148554 29099 main.go:134] libmachine: &{{{
[Service]
Type=notify
Restart=on-failure

ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

TasksMax=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0728 09:11:52.232677 29099 main.go:134] libmachine: SSH cmd err, output:
[Service]
Type=notify
Restart=on-failure

ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

TasksMax=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
I0728 09:11:52.232705 29099 main.go:134] libmachine: (clustera) Calling .GetSSHHostname
I0728 09:11:52.232961 29099 main.go:134] libmachine: (clustera) Calling .GetSSHPort
I0728 09:11:52.233105 29099 main.go:134] libmachine: (clustera) Calling .GetSSHKeyPath
I0728 09:11:52.233247 29099 main.go:134] libmachine: (clustera) Calling .GetSSHKeyPath
I0728 09:11:52.233398 29099 main.go:134] libmachine: (clustera) Calling .GetSSHUsername
I0728 09:11:52.233623 29099 main.go:134] libmachine: Using SSH client type: native
I0728 09:11:52.233793 29099 main.go:134] libmachine: &{{{
I0728 09:11:52.916939 29099 main.go:134] libmachine: Checking connection to Docker...
I0728 09:11:52.916949 29099 main.go:134] libmachine: (clustera) Calling .GetURL
I0728 09:11:52.917223 29099 main.go:134] libmachine: Docker is up and running!
I0728 09:11:52.917230 29099 main.go:134] libmachine: Reticulating splines...
I0728 09:11:52.917237 29099 client.go:171] LocalClient.Create took 18.302884241s
I0728 09:11:52.917251 29099 start.go:173] duration metric: libmachine.API.Create for "clustera" took 18.302979267s
I0728 09:11:52.917262 29099 start.go:306] post-start starting for "clustera" (driver="hyperkit")
I0728 09:11:52.917266 29099 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0728 09:11:52.917281 29099 main.go:134] libmachine: (clustera) Calling .DriverName
I0728 09:11:52.917573 29099 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0728 09:11:52.917593 29099 main.go:134] libmachine: (clustera) Calling .GetSSHHostname
I0728 09:11:52.917781 29099 main.go:134] libmachine: (clustera) Calling .GetSSHPort
I0728 09:11:52.917972 29099 main.go:134] libmachine: (clustera) Calling .GetSSHKeyPath
I0728 09:11:52.918124 29099 main.go:134] libmachine: (clustera) Calling .GetSSHUsername
I0728 09:11:52.918237 29099 sshutil.go:53] new ssh client: &{IP:192.168.64.8 Port:22 SSHKeyPath:/Users/bknitter/.minikube/machines/clustera/id_rsa Username:docker}
I0728 09:11:52.967039 29099 ssh_runner.go:195] Run: cat /etc/os-release
I0728 09:11:52.972114 29099 info.go:137] Remote host: Buildroot 2021.02.12
I0728 09:11:52.972139 29099 filesync.go:126] Scanning /Users/bknitter/.minikube/addons for local assets ...
I0728 09:11:52.972569 29099 filesync.go:126] Scanning /Users/bknitter/.minikube/files for local assets ...
I0728 09:11:52.972749 29099 start.go:309] post-start completed in 55.478581ms
I0728 09:11:52.972772 29099 main.go:134] libmachine: (clustera) Calling .GetConfigRaw
I0728 09:11:52.974261 29099 main.go:134] libmachine: (clustera) Calling .GetIP
I0728 09:11:52.974488 29099 profile.go:148] Saving config to /Users/bknitter/.minikube/profiles/clustera/config.json ...
I0728 09:11:52.975756 29099 start.go:134] duration metric: createHost completed in 18.678594948s
I0728 09:11:52.975770 29099 main.go:134] libmachine: (clustera) Calling .GetSSHHostname
I0728 09:11:52.975922 29099 main.go:134] libmachine: (clustera) Calling .GetSSHPort
I0728 09:11:52.976119 29099 main.go:134] libmachine: (clustera) Calling .GetSSHKeyPath
I0728 09:11:52.976275 29099 main.go:134] libmachine: (clustera) Calling .GetSSHKeyPath
I0728 09:11:52.976461 29099 main.go:134] libmachine: (clustera) Calling .GetSSHUsername
I0728 09:11:52.976633 29099 main.go:134] libmachine: Using SSH client type: native
I0728 09:11:52.976778 29099 main.go:134] libmachine: &{{{
I0728 09:11:53.050115 29099 fix.go:207] guest clock: 1659024713.060733821
I0728 09:11:53.050120 29099 fix.go:220] Guest: 2022-07-28 09:11:53.060733821 -0700 PDT Remote: 2022-07-28 09:11:52.975763 -0700 PDT m=+108.331453934 (delta=84.970821ms)
I0728 09:11:53.050138 29099 fix.go:191] guest clock delta is within tolerance: 84.970821ms
I0728 09:11:53.050141 29099 start.go:81] releasing machines lock for "clustera", held for 18.753131607s
I0728 09:11:53.050199 29099 main.go:134] libmachine: (clustera) Calling .DriverName
I0728 09:11:53.050446 29099 main.go:134] libmachine: (clustera) Calling .GetIP
I0728 09:11:53.050623 29099 main.go:134] libmachine: (clustera) Calling .DriverName
I0728 09:11:53.050784 29099 main.go:134] libmachine: (clustera) Calling .DriverName
I0728 09:11:53.050898 29099 main.go:134] libmachine: (clustera) Calling .DriverName
I0728 09:11:53.051797 29099 main.go:134] libmachine: (clustera) Calling .DriverName
I0728 09:11:53.051926 29099 main.go:134] libmachine: (clustera) Calling .DriverName
I0728 09:11:53.052126 29099 ssh_runner.go:195] Run: systemctl --version
I0728 09:11:53.052139 29099 main.go:134] libmachine: (clustera) Calling .GetSSHHostname
I0728 09:11:53.052247 29099 main.go:134] libmachine: (clustera) Calling .GetSSHPort
I0728 09:11:53.052334 29099 main.go:134] libmachine: (clustera) Calling .GetSSHKeyPath
I0728 09:11:53.052412 29099 main.go:134] libmachine: (clustera) Calling .GetSSHUsername
I0728 09:11:53.052474 29099 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0728 09:11:53.052495 29099 main.go:134] libmachine: (clustera) Calling .GetSSHHostname
I0728 09:11:53.052509 29099 sshutil.go:53] new ssh client: &{IP:192.168.64.8 Port:22 SSHKeyPath:/Users/bknitter/.minikube/machines/clustera/id_rsa Username:docker}
I0728 09:11:53.052590 29099 main.go:134] libmachine: (clustera) Calling .GetSSHPort
I0728 09:11:53.052665 29099 main.go:134] libmachine: (clustera) Calling .GetSSHKeyPath
I0728 09:11:53.052754 29099 main.go:134] libmachine: (clustera) Calling .GetSSHUsername
I0728 09:11:53.052817 29099 sshutil.go:53] new ssh client: &{IP:192.168.64.8 Port:22 SSHKeyPath:/Users/bknitter/.minikube/machines/clustera/id_rsa Username:docker}
W0728 09:11:53.228752 29099 start.go:731] [curl -sS -m 2 https://k8s.gcr.io/] failed: curl -sS -m 2 https://k8s.gcr.io/: Process exited with status 60
stdout:
stderr:
curl: (60) SSL certificate problem: self signed certificate in certificate chain
More details here: https://curl.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above.
I0728 09:11:53.228809 29099 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
W0728 09:11:53.228909 29099 out.go:239] ❗ This VM is having trouble accessing https://k8s.gcr.io
W0728 09:11:53.229055 29099 out.go:239] 💡 To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
I0728 09:11:53.229964 29099 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0728 09:11:53.255922 29099 docker.go:602] Got preloaded images:
I0728 09:11:53.255930 29099 docker.go:608] k8s.gcr.io/kube-apiserver:v1.24.1 wasn't preloaded
I0728 09:11:53.255999 29099 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0728 09:11:53.268622 29099 ssh_runner.go:195] Run: which lz4
I0728 09:11:53.273664 29099 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
I0728 09:11:53.277868 29099 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0728 09:11:53.277897 29099 ssh_runner.go:362] scp /Users/bknitter/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (425543115 bytes)
I0728 09:11:55.431282 29099 docker.go:567] Took 2.158212 seconds to copy over tarball
I0728 09:11:55.431355 29099 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0728 09:12:03.726506 29099 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (8.295123237s)
I0728 09:12:03.726537 29099 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0728 09:12:03.763937 29099 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0728 09:12:03.778195 29099 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
I0728 09:12:03.795494 29099 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0728 09:12:03.949721 29099 ssh_runner.go:195] Run: sudo systemctl restart docker
I0728 09:12:05.546681 29099 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.596942729s)
I0728 09:12:05.546874 29099 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0728 09:12:05.559134 29099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0728 09:12:05.576720 29099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0728 09:12:05.589673 29099 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0728 09:12:05.634368 29099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0728 09:12:05.647668 29099 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0728 09:12:05.672458 29099 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0728 09:12:05.780095 29099 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0728 09:12:05.884883 29099 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0728 09:12:05.992542 29099 ssh_runner.go:195] Run: sudo systemctl restart docker
I0728 09:12:07.357208 29099 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.364650088s)
I0728 09:12:07.357291 29099 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0728 09:12:07.462491 29099 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0728 09:12:07.574432 29099 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
I0728 09:12:07.589016 29099 start.go:447] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0728 09:12:07.589113 29099 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0728 09:12:07.594451 29099 start.go:468] Will wait 60s for crictl version
I0728 09:12:07.594517 29099 ssh_runner.go:195] Run: sudo crictl version
I0728 09:12:07.631797 29099 start.go:477] Version:  0.1.0
RuntimeName:  docker
RuntimeVersion:  20.10.16
RuntimeApiVersion:  1.41.0
I0728 09:12:07.631877 29099 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0728 09:12:07.663764 29099 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0728 09:12:07.738937 29099 out.go:204] 🐳 Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
I0728 09:12:07.740043 29099 ssh_runner.go:195] Run: grep 192.168.64.1 host.minikube.internal$ /etc/hosts
I0728 09:12:07.748383 29099 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.64.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0728 09:12:07.765763 29099 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
I0728 09:12:07.765868 29099 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0728 09:12:07.796150 29099 docker.go:602] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.24.1
k8s.gcr.io/kube-controller-manager:v1.24.1
k8s.gcr.io/kube-proxy:v1.24.1
k8s.gcr.io/kube-scheduler:v1.24.1
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/pause:3.7
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0728 09:12:07.796164 29099 docker.go:533] Images already preloaded, skipping extraction
I0728 09:12:07.796242 29099 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0728 09:12:07.824101 29099 docker.go:602] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.24.1
k8s.gcr.io/kube-scheduler:v1.24.1
k8s.gcr.io/kube-controller-manager:v1.24.1
k8s.gcr.io/kube-proxy:v1.24.1
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/pause:3.7
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0728 09:12:07.824131 29099 cache_images.go:84] Images are preloaded, skipping loading
I0728 09:12:07.824215 29099 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0728 09:12:07.856704 29099 cni.go:95] Creating CNI manager for ""
I0728 09:12:07.856712 29099 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0728 09:12:07.856734 29099 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0728 09:12:07.856748 29099 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.64.8 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:clustera NodeName:clustera DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.64.8"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.64.8 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0728 09:12:07.857074 29099 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.64.8
  bindPort: 8443
bootstrapTokens:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
  tcpEstablishedTimeout: 0s
  tcpCloseWaitTimeout: 0s
I0728 09:12:07.857243 29099 kubeadm.go:961] kubelet
[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=clustera --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.64.8 --runtime-request-timeout=15m

[Install]
config:
{KubernetesVersion:v1.24.1 ClusterName:clustera Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0728 09:12:07.857326 29099 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
I0728 09:12:07.866193 29099 binaries.go:44] Found k8s binaries, skipping transfer
I0728 09:12:07.866284 29099 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0728 09:12:07.875513 29099 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (470 bytes)
I0728 09:12:07.893133 29099 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0728 09:12:07.908181 29099 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2030 bytes)
I0728 09:12:07.925418 29099 ssh_runner.go:195] Run: grep 192.168.64.8 control-plane.minikube.internal$ /etc/hosts
I0728 09:12:07.929318 29099 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.64.8 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0728 09:12:07.941052 29099 certs.go:54] Setting up /Users/bknitter/.minikube/profiles/clustera for IP: 192.168.64.8
I0728 09:12:07.941099 29099 certs.go:187] generating minikubeCA CA: /Users/bknitter/.minikube/ca.key
I0728 09:12:08.044554 29099 crypto.go:156] Writing cert to /Users/bknitter/.minikube/ca.crt ...
I0728 09:12:08.044566 29099 lock.go:35] WriteFile acquiring /Users/bknitter/.minikube/ca.crt: {Name:mkec9a22f4dcd78594716bc157a8738c782c399e Clock:{} Delay:500ms Timeout:1m0s Cancel:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0728 09:12:09.213029 29099 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
I0728 09:12:09.576490 29099 out.go:204] ▪ Generating certificates and keys ...
I0728 09:12:12.662754 29099 out.go:204] ▪ Booting up control plane ...
W0728 09:16:12.637289 29099 out.go:239] 💢 initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [clustera localhost] and IPs [192.168.64.8 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [clustera localhost] and IPs [192.168.64.8 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred: timed out waiting for the condition
This error is likely caused by:
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all running Kubernetes containers by using crictl:
stderr: W0728 16:12:09.338466 1260 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration! [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher
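The kubeadm failure text above is truncated where it normally lists concrete troubleshooting commands. As a sketch of the usual next steps on this setup (the cri-dockerd socket path is taken from this log; `CONTAINERID` is a placeholder for whatever container you find crashing):

```shell
# Inspect why the kubelet never came up (standard advice for systemd hosts).
systemctl status kubelet
journalctl -xeu kubelet

# List all Kubernetes containers through the CRI socket this log uses,
# then pull logs from any container that is failing.
sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause
sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID
```

These run inside the minikube node (reachable via `minikube ssh --profile=clustera`), not on the macOS host.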
I0728 09:16:12.639176 29099 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force" I0728 09:16:13.724424 29099 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.085234236s) I0728 09:16:13.724501 29099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0728 09:16:13.736949 29099 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0728 09:16:13.747340 29099 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout:
stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0728 09:16:13.747376 29099 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem" I0728 09:16:14.117570 29099 out.go:204] โช Generating certificates and keys ... I0728 09:16:15.387534 29099 out.go:204] โช Booting up control plane ... 
I0728 09:20:15.400465 29099 kubeadm.go:397] StartCluster complete in 8m6.252189805s I0728 09:20:15.400818 29099 cri.go:52] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]} I0728 09:20:15.401876 29099 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver I0728 09:20:15.486555 29099 cri.go:87] found id: "" I0728 09:20:15.486566 29099 logs.go:274] 0 containers: [] W0728 09:20:15.486570 29099 logs.go:276] No container was found matching "kube-apiserver" I0728 09:20:15.486574 29099 cri.go:52] listing CRI containers in root : {State:all Name:etcd Namespaces:[]} I0728 09:20:15.486665 29099 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd I0728 09:20:15.515902 29099 cri.go:87] found id: "" I0728 09:20:15.515919 29099 logs.go:274] 0 containers: [] W0728 09:20:15.515923 29099 logs.go:276] No container was found matching "etcd" I0728 09:20:15.515941 29099 cri.go:52] listing CRI containers in root : {State:all Name:coredns Namespaces:[]} I0728 09:20:15.516035 29099 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns I0728 09:20:15.543769 29099 cri.go:87] found id: "" I0728 09:20:15.543785 29099 logs.go:274] 0 containers: [] W0728 09:20:15.543791 29099 logs.go:276] No container was found matching "coredns" I0728 09:20:15.543824 29099 cri.go:52] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]} I0728 09:20:15.544030 29099 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler I0728 09:20:15.572906 29099 cri.go:87] found id: "" I0728 09:20:15.572914 29099 logs.go:274] 0 containers: [] W0728 09:20:15.572918 29099 logs.go:276] No container was found matching "kube-scheduler" I0728 09:20:15.572922 29099 cri.go:52] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]} I0728 09:20:15.573018 29099 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy I0728 09:20:15.600383 29099 cri.go:87] found id: "" I0728 09:20:15.600401 29099 
logs.go:274] 0 containers: [] W0728 09:20:15.600410 29099 logs.go:276] No container was found matching "kube-proxy" I0728 09:20:15.600416 29099 cri.go:52] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]} I0728 09:20:15.600572 29099 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard I0728 09:20:15.630405 29099 cri.go:87] found id: "" I0728 09:20:15.630414 29099 logs.go:274] 0 containers: [] W0728 09:20:15.630417 29099 logs.go:276] No container was found matching "kubernetes-dashboard" I0728 09:20:15.630429 29099 cri.go:52] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]} I0728 09:20:15.630540 29099 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner I0728 09:20:15.659550 29099 cri.go:87] found id: "" I0728 09:20:15.659560 29099 logs.go:274] 0 containers: [] W0728 09:20:15.659564 29099 logs.go:276] No container was found matching "storage-provisioner" I0728 09:20:15.659570 29099 cri.go:52] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]} I0728 09:20:15.659674 29099 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager I0728 09:20:15.687851 29099 cri.go:87] found id: "" I0728 09:20:15.687860 29099 logs.go:274] 0 containers: [] W0728 09:20:15.687864 29099 logs.go:276] No container was found matching "kube-controller-manager" I0728 09:20:15.687870 29099 logs.go:123] Gathering logs for kubelet ... I0728 09:20:15.687878 29099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0728 09:20:15.749039 29099 logs.go:123] Gathering logs for dmesg ... I0728 09:20:15.749060 29099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0728 09:20:15.761923 29099 logs.go:123] Gathering logs for describe nodes ... 
I0728 09:20:15.761937 29099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" W0728 09:20:15.854687 29099 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1 stdout:
stderr: The connection to the server localhost:8443 was refused - did you specify the right host or port? output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?

** /stderr **
I0728 09:20:15.854707 29099 logs.go:123] Gathering logs for Docker ...
I0728 09:20:15.854714 29099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0728 09:20:15.909940 29099 logs.go:123] Gathering logs for container status ...
I0728 09:20:15.909967 29099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W0728 09:20:15.947764 29099 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred: timed out waiting for the condition
This error is likely caused by:
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all running Kubernetes containers by using crictl:
stderr: W0728 16:16:13.897144 2713 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration! [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher W0728 09:20:15.947793 29099 out.go:239] W0728 09:20:15.948329 29099 out.go:239] ๐ฃ Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.24.1 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Using existing apiserver-kubelet-client certificate and key on disk [certs] Using existing front-proxy-ca certificate authority [certs] Using existing front-proxy-client certificate and key on disk [certs] Using existing etcd/ca certificate authority [certs] Using 
existing etcd/server certificate and key on disk [certs] Using existing etcd/peer certificate and key on disk [certs] Using existing etcd/healthcheck-client certificate and key on disk [certs] Using existing apiserver-etcd-client certificate and key on disk [certs] Using the existing "sa" key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred: timed out waiting for the condition
This error is likely caused by:
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all running Kubernetes containers by using crictl:
stderr: W0728 16:16:13.897144 2713 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration! [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher
W0728 09:20:15.948567 29099 out.go:239]
W0728 09:20:15.950746 29099 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
I0728 09:20:16.027095 29099 out.go:177]
W0728 09:20:16.066017 29099 out.go:239] ❌ Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred: timed out waiting for the condition
This error is likely caused by:
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all running Kubernetes containers by using crictl:
stderr: W0728 16:16:13.897144 2713 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration! [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher
W0728 09:20:16.068643 29099 out.go:239] 💡 Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0728 09:20:16.068726 29099 out.go:239] 🍿 Related issue: https://github.com/kubernetes/minikube/issues/4172
I0728 09:20:16.104831 29099 out.go:177]
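minikube's own suggestion above can be tried directly. A sketch for this profile — deleting the profile first is my assumption, so the retry does not inherit the failed state:

```shell
# Recreate the profile with the kubelet cgroup driver forced to systemd,
# per the Suggestion line in the log above.
minikube delete --profile=clustera
minikube start --profile=clustera --extra-config=kubelet.cgroup-driver=systemd
```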
==> Docker <==
-- Journal begins at Thu 2022-07-28 16:11:46 UTC, ends at Thu 2022-07-28 16:20:44 UTC. -- Jul 28 16:19:44 clustera dockerd[981]: time="2022-07-28T16:19:44.181915662Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:19:44 clustera dockerd[981]: time="2022-07-28T16:19:44.182520501Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:19:44 clustera dockerd[981]: time="2022-07-28T16:19:44.182908301Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:19:44 clustera dockerd[981]: time="2022-07-28T16:19:44.182971751Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:19:44 clustera dockerd[981]: time="2022-07-28T16:19:44.184360149Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:19:44 clustera dockerd[981]: time="2022-07-28T16:19:44.186042668Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:19:45 clustera dockerd[981]: time="2022-07-28T16:19:45.166820874Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:19:45 clustera dockerd[981]: time="2022-07-28T16:19:45.167094467Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:19:45 clustera dockerd[981]: time="2022-07-28T16:19:45.169434302Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": x509: 
certificate signed by unknown authority" Jul 28 16:19:55 clustera dockerd[981]: time="2022-07-28T16:19:55.212323860Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:19:55 clustera dockerd[981]: time="2022-07-28T16:19:55.212649053Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:19:55 clustera dockerd[981]: time="2022-07-28T16:19:55.215915240Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:19:56 clustera dockerd[981]: time="2022-07-28T16:19:56.175988844Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:19:56 clustera dockerd[981]: time="2022-07-28T16:19:56.176199260Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:19:56 clustera dockerd[981]: time="2022-07-28T16:19:56.178127095Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:19:56 clustera dockerd[981]: time="2022-07-28T16:19:56.180520650Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:19:56 clustera dockerd[981]: time="2022-07-28T16:19:56.180575759Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:19:56 clustera dockerd[981]: time="2022-07-28T16:19:56.182042875Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 
16:19:59 clustera dockerd[981]: time="2022-07-28T16:19:59.178853692Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:19:59 clustera dockerd[981]: time="2022-07-28T16:19:59.179386100Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:19:59 clustera dockerd[981]: time="2022-07-28T16:19:59.189582430Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:20:07 clustera dockerd[981]: time="2022-07-28T16:20:07.177793858Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:20:07 clustera dockerd[981]: time="2022-07-28T16:20:07.178469996Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:20:07 clustera dockerd[981]: time="2022-07-28T16:20:07.180495765Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:20:08 clustera dockerd[981]: time="2022-07-28T16:20:08.171285471Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:20:08 clustera dockerd[981]: time="2022-07-28T16:20:08.171593065Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:20:08 clustera dockerd[981]: time="2022-07-28T16:20:08.179075740Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:20:10 clustera dockerd[981]: 
time="2022-07-28T16:20:10.179245823Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:20:10 clustera dockerd[981]: time="2022-07-28T16:20:10.179931807Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:20:10 clustera dockerd[981]: time="2022-07-28T16:20:10.182443854Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:20:10 clustera dockerd[981]: time="2022-07-28T16:20:10.212411911Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:20:10 clustera dockerd[981]: time="2022-07-28T16:20:10.212471359Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:20:10 clustera dockerd[981]: time="2022-07-28T16:20:10.214735837Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:20:18 clustera dockerd[981]: time="2022-07-28T16:20:18.185518454Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:20:18 clustera dockerd[981]: time="2022-07-28T16:20:18.185598273Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:20:18 clustera dockerd[981]: time="2022-07-28T16:20:18.187410998Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:20:21 clustera dockerd[981]: time="2022-07-28T16:20:21.185066531Z" level=warning 
msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:20:21 clustera dockerd[981]: time="2022-07-28T16:20:21.185137735Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:20:21 clustera dockerd[981]: time="2022-07-28T16:20:21.187331876Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:20:24 clustera dockerd[981]: time="2022-07-28T16:20:24.190605705Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:20:24 clustera dockerd[981]: time="2022-07-28T16:20:24.190760600Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:20:24 clustera dockerd[981]: time="2022-07-28T16:20:24.194318164Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:20:25 clustera dockerd[981]: time="2022-07-28T16:20:25.177144822Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:20:25 clustera dockerd[981]: time="2022-07-28T16:20:25.177204571Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:20:25 clustera dockerd[981]: time="2022-07-28T16:20:25.178793806Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:20:32 clustera dockerd[981]: time="2022-07-28T16:20:32.182276293Z" level=warning msg="Error getting v2 registry: Get 
\"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:20:32 clustera dockerd[981]: time="2022-07-28T16:20:32.183468252Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:20:32 clustera dockerd[981]: time="2022-07-28T16:20:32.186492620Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:20:33 clustera dockerd[981]: time="2022-07-28T16:20:33.171060575Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:20:33 clustera dockerd[981]: time="2022-07-28T16:20:33.171125021Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:20:33 clustera dockerd[981]: time="2022-07-28T16:20:33.172977823Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:20:37 clustera dockerd[981]: time="2022-07-28T16:20:37.177725792Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:20:37 clustera dockerd[981]: time="2022-07-28T16:20:37.177930764Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:20:37 clustera dockerd[981]: time="2022-07-28T16:20:37.183383542Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:20:39 clustera dockerd[981]: time="2022-07-28T16:20:39.194304706Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by 
unknown authority" Jul 28 16:20:39 clustera dockerd[981]: time="2022-07-28T16:20:39.195713260Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:20:39 clustera dockerd[981]: time="2022-07-28T16:20:39.199738305Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:20:43 clustera dockerd[981]: time="2022-07-28T16:20:43.162524636Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:20:43 clustera dockerd[981]: time="2022-07-28T16:20:43.162625062Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" Jul 28 16:20:43 clustera dockerd[981]: time="2022-07-28T16:20:43.164489001Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
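The Docker journal above shows the likely root cause: every image pull from k8s.gcr.io fails with `x509: certificate signed by unknown authority`, which usually means a TLS-intercepting corporate proxy is re-signing registry traffic, so the control-plane images never arrive and the kubelet times out. minikube can be told to trust the interception CA via its custom-certificate mechanism; a sketch, assuming `corp-root-ca.pem` (a hypothetical filename) holds the proxy's root CA:

```shell
# Stage the proxy's root CA where minikube picks up extra certificates;
# with --embed-certs, minikube copies ~/.minikube/certs into the node so
# its Docker daemon trusts the intercepted registry connection.
mkdir -p "$HOME/.minikube/certs"
cp corp-root-ca.pem "$HOME/.minikube/certs/"

minikube delete --profile=clustera
minikube start --profile=clustera --embed-certs
```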
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
==> describe nodes <==
==> dmesg <==
[Jul28 16:11] ERROR: earlyprintk= earlyser already used
[ +0.000000] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.176773] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
[ +6.988930] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
[ +0.000003] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
[ +0.013061] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +2.640643] systemd-fstab-generator[125]: Ignoring "noauto" for root device
[ +0.058252] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +2.110141] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[ +3.485010] systemd-fstab-generator[554]: Ignoring "noauto" for root device
[ +0.130733] systemd-fstab-generator[568]: Ignoring "noauto" for root device
[Jul28 16:12] systemd-fstab-generator[788]: Ignoring "noauto" for root device
[ +1.481457] kauditd_printk_skb: 16 callbacks suppressed
[ +0.383172] systemd-fstab-generator[950]: Ignoring "noauto" for root device
[ +0.103131] systemd-fstab-generator[961]: Ignoring "noauto" for root device
[ +0.104556] systemd-fstab-generator[972]: Ignoring "noauto" for root device
[ +1.468305] systemd-fstab-generator[1122]: Ignoring "noauto" for root device
[ +0.113508] systemd-fstab-generator[1133]: Ignoring "noauto" for root device
[ +5.045744] systemd-fstab-generator[1340]: Ignoring "noauto" for root device
[ +0.398524] kauditd_printk_skb: 68 callbacks suppressed
[Jul28 16:16] systemd-fstab-generator[2793]: Ignoring "noauto" for root device
==> kernel <==
16:20:45 up 9 min, 0 users, load average: 0.43, 0.23, 0.12
Linux clustera 5.10.57 #1 SMP Thu Jun 16 23:36:20 UTC 2022 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
==> kubelet <==
-- Journal begins at Thu 2022-07-28 16:11:46 UTC, ends at Thu 2022-07-28 16:20:45 UTC. --
Jul 28 16:20:39 clustera kubelet[2799]: E0728 16:20:39.842507 2799 kubelet.go:2419] "Error getting node" err="node \"clustera\" not found"
Jul 28 16:20:39 clustera kubelet[2799]: E0728 16:20:39.946986 2799 kubelet.go:2419] "Error getting node" err="node \"clustera\" not found"
Jul 28 16:20:41 clustera kubelet[2799]: E0728 16:20:41.012071 2799 kubelet.go:2419] "Error getting node" err="node \"clustera\" not found"
Jul 28 16:20:41 clustera kubelet[2799]: E0728 16:20:41.014178 2799 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/clustera?timeout=10s": dial tcp 192.168.64.8:8443: connect: connection refused
Jul 28 16:20:41 clustera kubelet[2799]: E0728 16:20:41.112946 2799 kubelet.go:2419] "Error getting node" err="node \"clustera\" not found"
Jul 28 16:20:41 clustera kubelet[2799]: E0728 16:20:41.213704 2799 kubelet.go:2419] "Error getting node" err="node \"clustera\" not found"
Jul 28 16:20:41 clustera kubelet[2799]: E0728 16:20:41.314151 2799 kubelet.go:2419] "Error getting node" err="node \"clustera\" not found"
Jul 28 16:20:41 clustera kubelet[2799]: E0728 16:20:41.414658 2799 kubelet.go:2419] "Error getting node" err="node \"clustera\" not found"
Jul 28 16:20:41 clustera kubelet[2799]: E0728 16:20:41.515270 2799 kubelet.go:2419] "Error getting node" err="node \"clustera\" not found"
Jul 28 16:20:41 clustera kubelet[2799]: E0728 16:20:41.616445 2799 kubelet.go:2419] "Error getting node" err="node \"clustera\" not found"
Jul 28 16:20:41 clustera kubelet[2799]: I0728 16:20:41.641375 2799 kubelet_node_status.go:70] "Attempting to register node" node="clustera"
Jul 28 16:20:41 clustera kubelet[2799]: E0728 16:20:41.642209 2799 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.64.8:8443: connect: connection refused" node="clustera"
Jul 28 16:20:41 clustera kubelet[2799]: E0728 16:20:41.717861 2799 kubelet.go:2419] "Error getting node" err="node \"clustera\" not found"
Jul 28 16:20:41 clustera kubelet[2799]: E0728 16:20:41.818795 2799 kubelet.go:2419] "Error getting node" err="node \"clustera\" not found"
Jul 28 16:20:41 clustera kubelet[2799]: E0728 16:20:41.920065 2799 kubelet.go:2419] "Error getting node" err="node \"clustera\" not found"
Jul 28 16:20:42 clustera kubelet[2799]: E0728 16:20:42.020676 2799 kubelet.go:2419] "Error getting node" err="node \"clustera\" not found"
Jul 28 16:20:42 clustera kubelet[2799]: W0728 16:20:42.028232 2799 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.64.8:8443: connect: connection refused
Jul 28 16:20:42 clustera kubelet[2799]: E0728 16:20:42.028322 2799 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch v1.CSIDriver: failed to list v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.64.8:8443: connect: connection refused
Jul 28 16:20:42 clustera kubelet[2799]: E0728 16:20:42.121131 2799 kubelet.go:2419] "Error getting node" err="node \"clustera\" not found"
Jul 28 16:20:42 clustera kubelet[2799]: E0728 16:20:42.221485 2799 kubelet.go:2419] "Error getting node" err="node \"clustera\" not found"
Jul 28 16:20:42 clustera kubelet[2799]: E0728 16:20:42.322748 2799 kubelet.go:2419] "Error getting node" err="node \"clustera\" not found"
Jul 28 16:20:42 clustera kubelet[2799]: E0728 16:20:42.424261 2799 kubelet.go:2419] "Error getting node" err="node \"clustera\" not found"
Jul 28 16:20:42 clustera kubelet[2799]: E0728 16:20:42.524929 2799 kubelet.go:2419] "Error getting node" err="node \"clustera\" not found"
Jul 28 16:20:42 clustera kubelet[2799]: E0728 16:20:42.625797 2799 kubelet.go:2419] "Error getting node" err="node \"clustera\" not found"
Jul 28 16:20:42 clustera kubelet[2799]: E0728 16:20:42.727251 2799 kubelet.go:2419] "Error getting node" err="node \"clustera\" not found"
Jul 28 16:20:42 clustera kubelet[2799]: E0728 16:20:42.827915 2799 kubelet.go:2419] "Error getting node" err="node \"clustera\" not found"
Jul 28 16:20:42 clustera kubelet[2799]: E0728 16:20:42.928763 2799 kubelet.go:2419] "Error getting node" err="node \"clustera\" not found"
Jul 28 16:20:43 clustera kubelet[2799]: E0728 16:20:43.030247 2799 kubelet.go:2419] "Error getting node" err="node \"clustera\" not found"
Jul 28 16:20:43 clustera kubelet[2799]: E0728 16:20:43.131045 2799 kubelet.go:2419] "Error getting node" err="node \"clustera\" not found"
Jul 28 16:20:43 clustera kubelet[2799]: E0728 16:20:43.165128 2799 remote_runtime.go:212] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed pulling image \"k8s.gcr.io/pause:3.6\": Error response from daemon: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jul 28 16:20:43 clustera kubelet[2799]: E0728 16:20:43.165257 2799 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed pulling image \"k8s.gcr.io/pause:3.6\": Error response from daemon: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" pod="kube-system/kube-scheduler-clustera"
Jul 28 16:20:43 clustera kubelet[2799]: E0728 16:20:43.165276 2799 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed pulling image \"k8s.gcr.io/pause:3.6\": Error response from daemon: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" pod="kube-system/kube-scheduler-clustera"
Jul 28 16:20:43 clustera kubelet[2799]: E0728 16:20:43.165366 2799 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-clustera_kube-system(bd104158b6d4034938fcce57fb58129e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\"kube-scheduler-clustera_kube-system(bd104158b6d4034938fcce57fb58129e)\\": rpc error: code = Unknown desc = failed pulling image \\"k8s.gcr.io/pause:3.6\\": Error response from daemon: Get \\"https://k8s.gcr.io/v2/\\": x509: certificate signed by unknown authority\"" pod="kube-system/kube-scheduler-clustera" podUID=bd104158b6d4034938fcce57fb58129e
Jul 28 16:20:43 clustera kubelet[2799]: E0728 16:20:43.231410 2799 kubelet.go:2419] "Error getting node" err="node \"clustera\" not found"
Jul 28 16:20:43 clustera kubelet[2799]: E0728 16:20:43.332133 2799 kubelet.go:2419] "Error getting node" err="node \"clustera\" not found"
Jul 28 16:20:43 clustera kubelet[2799]: E0728 16:20:43.433148 2799 kubelet.go:2419] "Error getting node" err="node \"clustera\" not found"
Jul 28 16:20:43 clustera kubelet[2799]: E0728 16:20:43.534243 2799 kubelet.go:2419] "Error getting node" err="node \"clustera\" not found"
Jul 28 16:20:43 clustera kubelet[2799]: E0728 16:20:43.634513 2799 kubelet.go:2419] "Error getting node" err="node \"clustera\" not found"
Jul 28 16:20:43 clustera kubelet[2799]: E0728 16:20:43.735360 2799 kubelet.go:2419] "Error getting node" err="node \"clustera\" not found"
Jul 28 16:20:43 clustera kubelet[2799]: E0728 16:20:43.836451 2799 kubelet.go:2419] "Error getting node" err="node \"clustera\" not found"
Jul 28 16:20:43 clustera kubelet[2799]: E0728 16:20:43.937136 2799 kubelet.go:2419] "Error getting node" err="node \"clustera\" not found"
Jul 28 16:20:44 clustera kubelet[2799]: E0728 16:20:44.038211 2799 kubelet.go:2419] "Error getting node" err="node \"clustera\" not found"
Jul 28 16:20:44 clustera kubelet[2799]: E0728 16:20:44.138668 2799 kubelet.go:2419] "Error getting node" err="node \"clustera\" not found"
Jul 28 16:20:44 clustera kubelet[2799]: E0728 16:20:44.239803 2799 kubelet.go:2419] "Error getting node" err="node \"clustera\" not found"
Jul 28 16:20:44 clustera kubelet[2799]: E0728 16:20:44.306584 2799 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"clustera.17060a6d06959375", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:
Same issue when using the Docker driver.
The Linux binaries are installed inside the cluster, not on the host.
Got it. Any reason this is failing to start based on the logs?
It seemed to be a proxy SSL issue, but in any case not related to the path?
x509: certificate signed by unknown authority
Good point. I'll close as unrelated to path.
If you have a corporate proxy that scans all traffic, it usually requires you to install a new certificate (for all the steamed-open letters)
Thanks for the pointer. That's exactly what I needed. Looks like I have resolved it by putting all of the certs into the right sub-directory. Thanks again!
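For anyone hitting the same `x509: certificate signed by unknown authority` errors behind a TLS-intercepting corporate proxy, the fix described above can be sketched roughly as follows. This assumes the proxy's root CA has been exported to a local PEM file (the `corp-proxy-ca.pem` filename is just a placeholder), and relies on minikube's documented behavior of copying certificates found in `~/.minikube/certs` into the VM when started with `--embed-certs`:

```shell
# Place the corporate proxy's root CA where minikube looks for extra certs.
# (~/.minikube/certs is the default $MINIKUBE_HOME/certs location.)
mkdir -p ~/.minikube/certs
cp corp-proxy-ca.pem ~/.minikube/certs/

# Recreate the cluster so the certs are embedded into the VM's trust store.
minikube delete --profile=clustera
minikube start --profile=clustera --embed-certs
```

After this, Docker inside the VM should trust the proxy's certificate and image pulls from k8s.gcr.io should succeed.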
What Happened?
When trying to start a cluster with minikube 1.26.0, I see that minikube references /var/lib/minikube for the binaries. However, when Homebrew installs minikube, the binaries are not placed in that directory. This prevents the cluster from being created and started.
I have tried various minikube commands, such as delete (with purge), and have reinstalled minikube itself. All Homebrew dependencies are met. I'm not sure whether this is a Homebrew or a minikube issue. Things worked properly prior to 1.26.0.
This is the command that is failing when viewing the logs (attached):
W0728 09:16:12.637289 29099 out.go:239] ๐ข initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": Process exited with status 1
This would seem to indicate that the dependencies are not installed into the expected directory.
Happy to debug further with you!
Attach the log file
Logs coming
Operating System
macOS (Default)
Driver
HyperKit