fnichol / dvm

An on-demand Docker virtual machine, thanks to Vagrant and boot2docker. Works great on Macs and other platforms that don't natively support the Docker daemon. Supports VirtualBox, VMware, and Parallels.
http://fnichol.github.io/dvm
Apache License 2.0

VM keeps changing its IP address #8

Closed by Peeja 10 years ago

Peeja commented 10 years ago

I'm going in circles with this one.

I bring up a fresh VM. I run $(dvm env). I also check what it exports:

❯❯❯ dvm env
export DOCKER_HOST=tcp://192.168.42.43:4243

If I dvm ssh and check ifconfig, sure enough, that's the IP address.

eth1      Link encap:Ethernet  HWaddr 08:00:27:6B:6F:FF
          inet addr:192.168.42.43  Bcast:192.168.42.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe6b:6fff/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3 errors:0 dropped:0 overruns:0 frame:0
          TX packets:16 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1388 (1.3 KiB)  TX bytes:1838 (1.7 KiB)

And docker commands work fine. Great.

Then, a little while later, docker commands stop working. I can't ping 192.168.42.43 anymore. So I dvm ssh back in:

eth1      Link encap:Ethernet  HWaddr 08:00:27:6B:6F:FF
          inet addr:192.168.56.102  Bcast:192.168.56.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe6b:6fff/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2672 errors:0 dropped:0 overruns:0 frame:0
          TX packets:350 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3838330 (3.6 MiB)  TX bytes:26060 (25.4 KiB)

It's moved to 192.168.56.102.

dvm reload correctly resets the VM's IP address (along with the entire machine).

Any clue what could be going on here? I haven't found a pattern to my activity that could be causing it.

jopecko commented 10 years ago

@Peeja I was experiencing the same issue recently. I haven't had a chance to determine the root cause; however, I have a workaround that lets me keep working without the issue recurring. If I export the DOCKER_IP env var with an address in the 24-bit block, i.e., 10.0.0.0 - 10.255.255.255, before invoking dvm up, the IP address on my VM remains stable and available.
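
Concretely, that looks something like this (10.0.5.2 is just an example address; any unused one in the 10.0.0.0/8 block should do):

# pick an address in the 10.0.0.0/8 block before bringing the VM up
# (10.0.5.2 is an arbitrary example, not a required value)
export DOCKER_IP=10.0.5.2
dvm up
$(dvm env)    # re-export DOCKER_HOST so docker talks to the new address
docker ps     # sanity check against the VM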

dotkrnl commented 10 years ago

I'm experiencing the same issue.

corporate-gadfly commented 10 years ago

Similar to #4. Do any of you connect to different networks?

Peeja commented 10 years ago

@jopecko Huh. I've already got a 10.* address on my VM's eth0. Do you? I'm not clear on what the difference between those interfaces is.

@corporate-gadfly I just connect to a single Wi-Fi network.

jopecko commented 10 years ago

@Peeja I think it may be interface eth1, which is the one Vagrant configures for the private network IP. If I bring up the VirtualBox GUI, the MAC address listed for the vboxnet1 adapter matches the MAC address for eth1. I can't say for certain why an address in the 16-bit block, 192.168.0.0 - 192.168.255.255, causes this issue, but for me, changing to a 24-bit block address completely resolved it. Before changing my VM's IP, I could communicate with the VM via docker for maybe an hour, and then I would no longer be able to connect. A full restart or reload was the only way I could reestablish connectivity. Since switching to a 24-bit block IP address, I've had my VM up for over 24 hours and can still communicate with it via docker running locally on my Mac.
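
If you'd rather check that mapping without the GUI, something like this from the Mac side should work (the VM name varies per setup; VBoxManage ships with VirtualBox):

# find the dvm machine's name as VirtualBox knows it
VBoxManage list runningvms
# NIC 2 is typically the host-only adapter Vagrant configures;
# its MAC should match eth1's HWaddr from ifconfig inside the VM
VBoxManage showvminfo <vm-name> | grep -i nic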

All I did was set the DOCKER_IP environment variable to a 24-bit block address (10.211.55.255 in my case), eval my dvm env output to sync up my session, bring up my VM, and work as described in the docs.

I admit I don't know the specifics of what's actually occurring under the covers, or why the 192.168.42.43 address causes this problem with VirtualBox and Vagrant's private network. If I get time to dive in and uncover anything, I'll be sure to update this issue.

fnichol commented 10 years ago

Finally, I think we've tracked this down to a udhcpc process that Tiny Core Linux auto-starts on boot (more details in #35). Thank you all for your help in diagnosing this!
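
For anyone wanting to confirm this on a running VM, a quick check from inside the guest (busybox tooling; the exact arguments udhcpc was started with may vary):

dvm ssh
ps | grep '[u]dhcpc'    # a running udhcpc here is the DHCP client re-leasing eth1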