Closed: Eason0729 closed this issue 1 year ago.
I am facing the same issue.
I am running the command in a Ubuntu VM created through Vagrant + Virtualbox.
The error I'm getting when running minikube start --driver=kvm2 is:
😿 Failed to start kvm2 VM. Running "minikube delete" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP not available after waiting: machine minikube didn't return IP after 1 minute
❌ Exiting due to GUEST_PROVISION: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP not available after waiting: machine minikube didn't return IP after 1 minute
Validating KVM support:
sudo virsh net-list --all
 Name          State    Autostart   Persistent
------------------------------------------------
 default       active   yes         yes
 mk-minikube   active   yes         yes
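(Side note: to rule out a missing DHCP range on the network itself, you can dump its definition with standard libvirt tooling; the exact addresses below are only what a typical minikube install generates, so treat them as an example:)

virsh net-dumpxml mk-minikube
# expect an <ip> block roughly like this, with a usable <dhcp> range:
# <ip address='192.168.39.1' netmask='255.255.255.0'>
#   <dhcp>
#     <range start='192.168.39.2' end='192.168.39.254'/>
#   </dhcp>
# </ip>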
virt-host-validate
QEMU: Checking for hardware virtualization : PASS
QEMU: Checking if device /dev/kvm exists : PASS
QEMU: Checking if device /dev/kvm is accessible : PASS
QEMU: Checking if device /dev/vhost-net exists : PASS
QEMU: Checking if device /dev/net/tun exists : PASS
QEMU: Checking for cgroup 'cpu' controller support : PASS
QEMU: Checking for cgroup 'cpuacct' controller support : PASS
QEMU: Checking for cgroup 'cpuset' controller support : PASS
QEMU: Checking for cgroup 'memory' controller support : PASS
QEMU: Checking for cgroup 'devices' controller support : PASS
QEMU: Checking for cgroup 'blkio' controller support : PASS
QEMU: Checking for device assignment IOMMU support : WARN (No ACPI DMAR table found, IOMMU either disabled in BIOS or not supported by this hardware platform)
QEMU: Checking for secure guest support : WARN (Unknown if this platform has Secure Guest support)
LXC: Checking for Linux >= 2.6.26 : PASS
LXC: Checking for namespace ipc : PASS
LXC: Checking for namespace mnt : PASS
LXC: Checking for namespace pid : PASS
LXC: Checking for namespace uts : PASS
LXC: Checking for namespace net : PASS
LXC: Checking for namespace user : PASS
LXC: Checking for cgroup 'cpu' controller support : PASS
LXC: Checking for cgroup 'cpuacct' controller support : PASS
LXC: Checking for cgroup 'cpuset' controller support : PASS
LXC: Checking for cgroup 'memory' controller support : PASS
LXC: Checking for cgroup 'devices' controller support : PASS
LXC: Checking for cgroup 'freezer' controller support : PASS
LXC: Checking for cgroup 'blkio' controller support : PASS
LXC: Checking if device /sys/fs/fuse/connections exists : PASS
I0915 19:47:11.588863 9435 main.go:134] libmachine: (minikube) DBG | unable to find current IP address of domain minikube in network mk-minikube
I0915 19:47:11.588890 9435 main.go:134] libmachine: (minikube) DBG | I0915 19:47:11.588790 9630 retry.go:31] will retry after 9.953714808s: waiting for machine to come up
I0915 19:47:21.547328 9435 main.go:134] libmachine: (minikube) DBG | domain minikube has defined MAC address 52:54:00:ca:cc:bd in network mk-minikube
I0915 19:47:21.547968 9435 main.go:134] libmachine: (minikube) DBG | unable to find current IP address of domain minikube in network mk-minikube
I0915 19:47:21.548231 9435 main.go:134] libmachine: (minikube) DBG | unable to start VM: IP not available after waiting: machine minikube didn't return IP after 1 minute
I0915 19:47:21.554359 9435 client.go:171] LocalClient.Create took 53.955793964s
I0915 19:47:23.558822 9435 start.go:135] duration metric: createHost completed in 55.990871336s
I0915 19:47:23.558836 9435 start.go:82] releasing machines lock for "minikube", held for 55.991711494s
W0915 19:47:23.559944 9435 out.go:239] 😿 Failed to start kvm2 VM. Running "minikube delete" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP not available after waiting: machine minikube didn't return IP after 1 minute
I0915 19:47:23.567863 9435 out.go:177]
W0915 19:47:23.569916 9435 out.go:239] ❌ Exiting due to GUEST_PROVISION: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP not available after waiting: machine minikube didn't return IP after 1 minute
W0915 19:47:23.570222 9435 out.go:239]
W0915 19:47:23.573593 9435 out.go:239]
Operating System: Ubuntu
Driver: KVM2
I have good news!
I looked at the MAC address of the VM that was trying to start (it shows up right before minikube errors out; found in logs.txt):
I0916 08:41:52.891415 10345 main.go:134] libmachine: (minikube) DBG | domain minikube has defined MAC address 52:54:00:be:1c:2a in network mk-minikube
There were no DHCP leases for the mk-minikube VM:
virsh net-dhcp-leases mk-minikube
Expiry Time MAC address Protocol IP address Hostname Client ID or DUID
-----------------------------------------------------------------------------------
However, without doing anything, after a few minutes, there was one!
virsh net-dhcp-leases mk-minikube
Expiry Time MAC address Protocol IP address Hostname Client ID or DUID
-----------------------------------------------------------------------------------------------------------
2022-09-16 09:42:08 52:54:00:be:1c:2a ipv4 192.168.39.134/24 minikube 01:52:54:00:be:1c:2a
It seems the DHCP server did eventually hand out an address from the range, but only a few minutes after minikube had already timed out and given up.
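If you want to measure how long the lease actually takes on your machine, a small poll loop over virsh does the trick (a sketch; substitute the MAC address your own logs report):

#!/bin/sh
# Wait for the minikube domain's MAC to show up in the mk-minikube lease table,
# then report how long it took. 52:54:00:be:1c:2a is the MAC from my logs above.
MAC="52:54:00:be:1c:2a"
start=$(date +%s)
until virsh net-dhcp-leases mk-minikube | grep -qi "$MAC"; do
  sleep 5
done
echo "Lease appeared after $(( $(date +%s) - start ))s:"
virsh net-dhcp-leases mk-minikube | grep -i "$MAC"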
So, TL;DR: we need to increase the timeout of this step ("machine minikube didn't return IP after 1 minute") to maybe 5 minutes. Is there currently a way to do that? Any help would be appreciated :)
Alternatively, is it possible to kickstart/continue a failed minikube start?
Last thing: I can confirm that an IP address was eventually assigned on mk-minikube's bridge virbr1:
arp -e
Address          HWtype  HWaddress           Flags Mask   Iface
10.0.2.3         ether   52:54:00:12:35:03   C            eth0
192.168.122.124  ether   52:54:00:5e:79:f7   C            virbr0
192.168.39.134   ether   52:54:00:be:1c:2a   C            virbr1
_gateway         ether   52:54:00:12:35:02   C            eth0
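(For what it's worth, virsh can report the same thing from libvirt's side; this assumes the domain is literally named minikube, and the interface name in the output will vary per machine:)

virsh domifaddr minikube
# Name       MAC address          Protocol     Address
# ------------------------------------------------------------
# vnet1      52:54:00:be:1c:2a    ipv4         192.168.39.134/24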
/kind support
Adding the following wait flags solved it!
minikube start --driver=kvm2 --profile=minikube --wait-timeout 15m0s --wait all
. . .
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
@Eason0729 I hope this helps with your issue too :)
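For anyone copying this, the full sequence is a clean delete followed by a start with the wait flags. As far as I understand, --wait selects which Kubernetes components minikube blocks on and --wait-timeout caps that wait; I am not sure either reaches down into the kvm2 driver's own one-minute IP wait.

# throw away the half-created VM, then retry with generous component waits
minikube delete
minikube start --driver=kvm2 --profile=minikube --wait-timeout 15m0s --wait all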
I will try later
Actually this only solved the problem momentarily. The command started failing again after rebooting my machine. Trying the wait flags again did not fix the problem, so they are not the solution.
@Eason0729 have you tried the suggestions in this issue: https://github.com/kubernetes/minikube/issues/3566 ?
Yes, I have tried it.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
What Happened?
In addition, I ran through the troubleshooting section in the docs. I followed the instructions, but it didn't work.
Attach the log file: logs.txt
Operating System: Ubuntu
Driver: KVM2