kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

minikube with VMWare Workstation - IP not found for MAC 00:0c:29:6f:1e:d2 in DHCP leases #13866

Closed — mudskipperwathotmail closed this issue 1 year ago

mudskipperwathotmail commented 2 years ago

Hi there, I tried to start minikube with the VMware Workstation driver (Workstation 15.1.0 on Windows 10) and got the error below. The contents of the Workstation DHCP configuration file are attached after it. Any ideas? Thanks.

=== minikube error message:

```
C:\WINDOWS\system32>minikube start --driver vmware
```

=== vmware workstation net conf (vmnetdhcp.conf):

```
# Configuration file for VMware port of ISC 2.0 release running on
# Windows.
#
# This file is generated by the VMware installation procedure; it
# is edited each time you add or delete a VMware host-only network
# adapter.
#
# We set domain-name-servers to make some clients happy
# (dhclient as configued in SuSE, TurboLinux, etc.).
# We also supply a domain name to make pump (Red Hat 6.x) happy.
allow unknown-clients;
default-lease-time 1800;    # default is 30 minutes
max-lease-time 7200;        # default is 2 hours

# Virtual ethernet segment 1
# Added at 03/14/20 09:30:46
subnet 192.168.111.0 netmask 255.255.255.0 {
    range 192.168.111.128 192.168.111.254;    # default allows up to 125 VM's
    option broadcast-address 192.168.111.255;
    option domain-name-servers 192.168.111.1;
    option domain-name "localdomain";
    default-lease-time 1800;
    max-lease-time 7200;
}
host VMnet1 {
    hardware ethernet 00:50:56:C0:00:01;
    fixed-address 192.168.111.1;
    option domain-name-servers 0.0.0.0;
    option domain-name "";
}
# End

# Virtual ethernet segment 8
# Added at 03/14/20 09:30:46
subnet 192.168.74.0 netmask 255.255.255.0 {
    range 192.168.74.128 192.168.74.254;    # default allows up to 125 VM's
    option broadcast-address 192.168.74.255;
    option domain-name-servers 192.168.74.2;
    option domain-name "localdomain";
    option netbios-name-servers 192.168.74.2;
    option routers 192.168.74.2;
    default-lease-time 1800;
    max-lease-time 7200;
}
host VMnet8 {
    hardware ethernet 00:50:56:C0:00:08;
    fixed-address 192.168.74.1;
    option domain-name-servers 0.0.0.0;
    option domain-name "";
    option routers 0.0.0.0;
}
# End
```
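Note that the two host-only/NAT segments in this config only serve 192.168.111.0/24 and 192.168.74.0/24, so an address outside those ranges can never appear in VMware's DHCP files. A minimal containment-check sketch (illustrative only, not minikube code; `served_by_vmnet` is a hypothetical helper):

```python
import ipaddress

# The two subnets declared in vmnetdhcp.conf above.
SUBNETS = [ipaddress.ip_network("192.168.111.0/24"),
           ipaddress.ip_network("192.168.74.0/24")]

def served_by_vmnet(ip: str) -> bool:
    """True if the address falls inside a VMware host-only/NAT subnet."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in SUBNETS)

print(served_by_vmnet("192.168.74.172"))  # True  (the "ansible" VM below)
print(served_by_vmnet("172.20.10.13"))    # False (a bridged VM, leased by the physical router)
```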

spowelljr commented 2 years ago

Hi @mudskipperwathotmail, thanks for reporting your issue with minikube!

I'll need more in-depth minikube logging to debug this issue. Could you please run `minikube logs --file=logs.txt` and upload the log file to this issue? Thanks!

mudskipperwathotmail commented 2 years ago

Hi Steven, thanks so much for looking into it. Here is the log file; I've also pasted the contents of the vmnetdhcp.leases file below so it can be compared with vmnetdhcp.conf:

=== vmnetdhcp.leases:

```
# All times in this file are in UTC (GMT), not your local timezone. This is
# not a bug, so please don't ask about it. There is no portable way to
# store leases in the local timezone, so please don't request this as a
# feature. If this is inconvenient or confusing to you, we sincerely
# apologize. Seriously, though - don't ask.
# The format of this file is documented in the dhcpd.leases(5) manual page.
lease 192.168.74.172 { starts 1 2020/03/16 02:52:32; ends 1 2020/03/16 03:22:32; hardware ethernet 00:0c:29:2d:a5:63; client-hostname "ansible"; }
lease 192.168.74.171 { starts 6 2020/03/14 10:04:48; ends 6 2020/03/14 10:04:48; abandoned; client-hostname "ansible"; }
lease 192.168.74.170 { starts 6 2020/03/14 05:39:05; ends 6 2020/03/14 05:39:05; abandoned; client-hostname "ansible"; }
lease 192.168.74.169 { starts 6 2020/03/14 02:04:11; ends 6 2020/03/14 02:34:11; hardware ethernet 00:0c:29:d4:18:ed; client-hostname "sql"; }
lease 192.168.74.168 { starts 6 2020/03/14 01:54:19; ends 6 2020/03/14 02:24:19; hardware ethernet 00:0c:29:ee:1f:48; client-hostname "web"; }
lease 192.168.74.128 { starts 5 2020/03/13 22:03:22; ends 5 2020/03/13 22:33:22; hardware ethernet 00:0c:29:47:7b:fc; uid 01:00:0c:29:47:7b:fc; client-hostname "WIN-HUP3EM9ROL2"; }
lease 192.168.74.132 { starts 2 2020/03/10 02:07:05; ends 2 2020/03/10 02:37:05; hardware ethernet 00:0c:29:20:99:c4; client-hostname "RHEL01"; }
lease 192.168.74.167 { starts 2 2020/03/10 00:29:09; ends 2 2020/03/10 00:59:09; hardware ethernet 00:0c:29:11:43:82; uid 01:00:0c:29:11:43:82; client-hostname "DC01"; }
lease 192.168.74.166 { starts 1 2020/03/09 05:43:49; ends 1 2020/03/09 05:45:49; hardware ethernet 00:0c:29:11:43:82; uid 01:00:0c:29:11:43:82; client-hostname "DC01"; }
lease 192.168.74.165 { starts 1 2020/03/09 05:43:45; ends 1 2020/03/09 05:45:45; hardware ethernet 00:0c:29:11:43:82; uid 01:00:0c:29:11:43:82; client-hostname "DC01"; }
lease 192.168.74.164 { starts 1 2020/03/09 05:43:41; ends 1 2020/03/09 05:43:41; abandoned; client-hostname "DC01"; }
lease 192.168.74.163 { starts 0 2020/03/08 10:44:44; ends 0 2020/03/08 10:46:44; hardware ethernet 00:0c:29:11:43:82; uid 01:00:0c:29:11:43:82; client-hostname "DC01"; }
lease 192.168.74.162 { starts 0 2020/03/08 10:44:40; ends 0 2020/03/08 10:46:40; hardware ethernet 00:0c:29:11:43:82; uid 01:00:0c:29:11:43:82; client-hostname "DC01"; }
lease 192.168.74.161 { starts 0 2020/03/08 10:44:36; ends 0 2020/03/08 10:44:36; abandoned; client-hostname "DC01"; }
lease 192.168.74.160 { starts 0 2020/03/08 09:03:00; ends 0 2020/03/08 09:05:00; hardware ethernet 00:0c:29:11:43:82; uid 01:00:0c:29:11:43:82; client-hostname "DC01"; }
lease 192.168.74.159 { starts 0 2020/03/08 09:02:56; ends 0 2020/03/08 09:04:56; hardware ethernet 00:0c:29:11:43:82; uid 01:00:0c:29:11:43:82; client-hostname "DC01"; }
lease 192.168.74.158 { starts 0 2020/03/08 09:02:51; ends 0 2020/03/08 09:02:51; abandoned; client-hostname "DC01"; }
lease 192.168.74.157 { starts 5 2020/03/06 06:49:58; ends 5 2020/03/06 06:51:58; hardware ethernet 00:0c:29:11:43:82; uid 01:00:0c:29:11:43:82; client-hostname "DC01"; }
lease 192.168.74.156 { starts 5 2020/03/06 06:49:55; ends 5 2020/03/06 06:51:55; hardware ethernet 00:0c:29:11:43:82; uid 01:00:0c:29:11:43:82; client-hostname "DC01"; }
lease 192.168.74.155 { starts 5 2020/03/06 06:49:50; ends 5 2020/03/06 06:49:50; abandoned; client-hostname "DC01"; }
lease 192.168.74.154 { starts 5 2020/03/06 01:13:09; ends 5 2020/03/06 01:15:09; hardware ethernet 00:0c:29:11:43:82; uid 01:00:0c:29:11:43:82; client-hostname "DC01"; }
lease 192.168.74.153 { starts 5 2020/03/06 01:13:05; ends 5 2020/03/06 01:15:05; hardware ethernet 00:0c:29:11:43:82; uid 01:00:0c:29:11:43:82; client-hostname "DC01"; }
lease 192.168.74.152 { starts 5 2020/03/06 01:13:00; ends 5 2020/03/06 01:13:00; abandoned; client-hostname "DC01"; }
lease 192.168.74.151 { starts 3 2020/03/04 09:41:46; ends 3 2020/03/04 09:43:46; hardware ethernet 00:0c:29:11:43:82; uid 01:00:0c:29:11:43:82; client-hostname "DC01"; }
lease 192.168.74.150 { starts 3 2020/03/04 09:41:43; ends 3 2020/03/04 09:43:43; hardware ethernet 00:0c:29:11:43:82; uid 01:00:0c:29:11:43:82; client-hostname "DC01"; }
lease 192.168.74.149 { starts 3 2020/03/04 09:41:40; ends 3 2020/03/04 09:41:40; abandoned; client-hostname "DC01"; }
lease 192.168.74.148 { starts 3 2020/03/04 05:10:36; ends 3 2020/03/04 05:12:36; hardware ethernet 00:0c:29:11:43:82; uid 01:00:0c:29:11:43:82; client-hostname "DC01"; }
lease 192.168.74.147 { starts 3 2020/03/04 05:10:32; ends 3 2020/03/04 05:10:46; hardware ethernet 00:0c:29:11:43:82; uid 01:00:0c:29:11:43:82; client-hostname "DC01"; }
lease 192.168.74.146 { starts 3 2020/03/04 05:10:28; ends 3 2020/03/04 05:10:28; abandoned; client-hostname "DC01"; }
lease 192.168.74.129 { starts 2 2020/02/18 02:51:03; ends 2 2020/02/18 03:21:03; hardware ethernet 00:0c:29:e9:fa:fb; uid 01:00:0c:29:e9:fa:fb; client-hostname "WIN-BEAEMMQ88PF"; }
lease 192.168.74.143 { starts 4 2019/11/07 00:34:48; ends 4 2019/11/07 01:04:48; hardware ethernet 00:0c:29:d4:18:ed; client-hostname "sql"; }
lease 192.168.74.144 { starts 4 2019/11/07 00:31:02; ends 4 2019/11/07 01:01:02; hardware ethernet 00:0c:29:ee:1f:48; client-hostname "web"; }
lease 192.168.74.145 { starts 4 2019/11/07 00:35:26; ends 4 2019/11/07 00:46:32; hardware ethernet 00:0c:29:2d:a5:63; client-hostname "ansible"; }
lease 192.168.74.140 { starts 3 2019/11/06 12:23:02; ends 3 2019/11/06 12:23:02; abandoned; client-hostname "ansible"; }
lease 192.168.74.141 { starts 3 2019/11/06 12:21:57; ends 3 2019/11/06 12:21:57; abandoned; client-hostname "web"; }
lease 192.168.74.142 { starts 3 2019/11/06 12:21:12; ends 3 2019/11/06 12:21:12; abandoned; client-hostname "sql"; }
lease 192.168.74.139 { starts 3 2019/11/06 06:03:22; ends 3 2019/11/06 06:03:22; abandoned; client-hostname "sql"; }
lease 192.168.74.138 { starts 3 2019/11/06 05:54:39; ends 3 2019/11/06 05:54:39; abandoned; client-hostname "web"; }
lease 192.168.74.135 { starts 3 2019/11/06 05:53:51; ends 3 2019/11/06 05:53:51; abandoned; client-hostname "ansible"; }
lease 192.168.74.137 { starts 2 2019/11/05 10:01:11; ends 2 2019/11/05 10:01:11; abandoned; client-hostname "ubuntu"; }
lease 192.168.74.136 { starts 2 2019/11/05 08:18:51; ends 2 2019/11/05 08:18:51; abandoned; client-hostname "ubuntu"; }
lease 192.168.74.133 { starts 2 2019/11/05 03:54:11; ends 2 2019/11/05 03:54:11; abandoned; client-hostname "ubuntu"; }
lease 192.168.74.134 { starts 2 2019/11/05 01:07:53; ends 2 2019/11/05 01:14:07; hardware ethernet 00:0c:29:7c:cd:0c; client-hostname "ubuntu"; }
lease 192.168.74.130 { starts 0 2019/10/20 02:37:43; ends 0 2019/10/20 03:07:43; hardware ethernet 00:0c:29:21:32:72; uid 01:00:0c:29:21:32:72; client-hostname "windows-d7138b3"; }
lease 192.168.74.131 { starts 5 2019/10/18 10:24:49; ends 5 2019/10/18 10:54:49; hardware ethernet 00:0c:29:1a:a4:c7; uid 01:00:0c:29:1a:a4:c7; client-hostname "swang-o07fuusdv"; }
lease 192.168.111.128 { starts 5 2019/10/18 10:04:45; ends 5 2019/10/18 10:23:37; hardware ethernet 00:0c:29:1a:a4:c7; uid 01:00:0c:29:1a:a4:c7; client-hostname "swang-o07fuusdv"; }
```
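The error in the issue title means the driver found no lease whose `hardware ethernet` field matches the minikube VM's MAC. A rough sketch of that kind of lookup (illustrative only; `mac_to_ips` is a hypothetical helper, not minikube's actual parser), using two of the leases above as sample data:

```python
import re

def mac_to_ips(leases_text):
    """Map each 'hardware ethernet' MAC to the lease IPs it appears under."""
    macs = {}
    # Each lease looks like: lease 192.168.74.172 { ... hardware ethernet 00:0c:29:2d:a5:63; ... }
    for ip, body in re.findall(r"lease\s+(\S+)\s*\{(.*?)\}", leases_text, re.S):
        m = re.search(r"hardware ethernet\s+([0-9a-f:]+);", body)
        if m:  # abandoned leases have no hardware ethernet line and are skipped
            macs.setdefault(m.group(1), []).append(ip)
    return macs

sample = '''
lease 192.168.74.172 { starts 1 2020/03/16 02:52:32; ends 1 2020/03/16 03:22:32; hardware ethernet 00:0c:29:2d:a5:63; client-hostname "ansible"; }
lease 192.168.74.171 { starts 6 2020/03/14 10:04:48; ends 6 2020/03/14 10:04:48; abandoned; client-hostname "ansible"; }
'''
table = mac_to_ips(sample)
print(table.get("00:0c:29:2d:a5:63"))  # ['192.168.74.172']
print(table.get("00:0c:29:6f:1e:d2"))  # None -- the minikube MAC is simply not present
```

Running the same scan over the full leases file above confirms 00:0c:29:6f:1e:d2 never appears.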

spowelljr commented 2 years ago

Hi @mudskipperwathotmail, sorry for the delayed response.

Was this previously working and then it stopped working?

Looking at the logs, you're using an existing VMX instance, but its IP is not in C:\ProgramData\VMware\vmnetdhcp.conf or C:\ProgramData\VMware\vmnetdhcp.leases. If this did use to work, I'm wondering if the IP got removed somehow; based on the logs, we only keep restarting the instance instead of deleting it and creating a new one.

Either way, try running `minikube delete --all` to clear out your minikube instances, then double-check in VMware that the minikube VMX is deleted (and delete it manually if not). Then try starting minikube again and see whether that resolves your issue. Thanks!

mudskipperwathotmail commented 2 years ago

Hi Steven, no worries; I was busy with other things lately as well and hadn't had a chance to try it myself.

I tried `minikube delete --all`, then reran `minikube start --driver vmware`, but the problem remained. This time I turned all my other VMs off, and as the log below shows, only one VM was running, which WAS minikube itself. I believe 00:0c:29:6f:1e:d2 is the minikube VM. (The log is posted at the end of my message.)

I also found this thread, https://github.com/machine-drivers/docker-machine-driver-xhyve/issues/91, where somebody had exactly the same issue; he said the VM's RAM figure was not set right (scroll almost to the end for his solution). I checked mine; my log says `I0421 23:40:20.557694 7580 start_flags.go:369] Using suggested 4000MB memory alloc based on sys=16292MB, container=0MB`. Does "container=0MB" sound right?

Thanks, Shizhen

===== the log

spowelljr commented 2 years ago

Thanks for the response @mudskipperwathotmail. I'll take a more in-depth look at the logs later, but to answer your question about `container=0MB`: that's because you're using a VM driver instead of a container driver like Docker, so that value is expected.

mudskipperwathotmail commented 2 years ago

Thanks Steven, take your time. It seems to me the problem is around these few steps:

  1. The procedure runs `vmrun.exe getGuestIPAddress C:\Users\mud\.minikube\machines\minikube\minikube.vmx -wait` and gets the MAC address 00:0c:29:6f:1e:d2.
  2. It looks up the IP in my existing vmnetdhcp.conf, which contains only two virtual networks: VMnet1 (192.168.111.1) and VMnet8 (192.168.74.1), and the new minikube VM is not on either of them. In VMware Workstation I used to assign my VMs to these two virtual networks, but somehow the internet stopped working, so now all my VMs are on "Bridged: connected directly to the physical network (replicate physical network connection state)". For example, my latest VM, Ubuntu 20.04 (installed a few days ago), appears in neither vmnetdhcp.conf nor vmnetdhcp.leases. Here is its IP info:

```
swang@ubuntu:~$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:d7:79:f7 brd ff:ff:ff:ff:ff:ff
    altname enp2s1
    inet 172.20.10.13/28 brd 172.20.10.15 scope global dynamic noprefixroute ens33
       valid_lft 86316sec preferred_lft 86316sec
    inet6 fe80::d1b9:8f4a:d9a9:a61b/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:85:98:e0:22 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
```
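A bridged VM leases its address from the physical network (172.20.10.0/28 here), so VMware's DHCP files never record it. For comparing a guest against the lease files, a small sketch that pulls the MAC/IP pair out of `ip addr` output (`mac_and_ip` is a hypothetical helper, assuming the output format shown above, not part of minikube):

```python
import re

def mac_and_ip(ip_addr_output):
    """Extract the first (MAC, IPv4) pair from `ip addr` output."""
    mac = re.search(r"link/ether\s+([0-9a-f:]{17})", ip_addr_output).group(1)
    ipv4 = re.search(r"inet\s+(\d+\.\d+\.\d+\.\d+)/", ip_addr_output).group(1)
    return mac, ipv4

sample = """2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP
    link/ether 00:0c:29:d7:79:f7 brd ff:ff:ff:ff:ff:ff
    inet 172.20.10.13/28 brd 172.20.10.15 scope global dynamic noprefixroute ens33
"""

print(mac_and_ip(sample))  # ('00:0c:29:d7:79:f7', '172.20.10.13')
```

172.20.10.13 sits outside both host-only subnets, which is why the bridged Ubuntu guest never shows up in vmnetdhcp.leases.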

Not sure whether these details are misleading you or whether they are the clue... but if we could make the new minikube VM bridged as well, so it doesn't have to be looked up on those two virtual networks, that might fix the issue.

your thoughts?

thanks Shizhen

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Close this issue or PR with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 1 year ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes/minikube/issues/13866#issuecomment-1272388147):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.