jpetazzo / pipework

Software-Defined Networking tools for LXC (LinuX Containers)
Apache License 2.0

Can't communicate with container from vm using same physical interface #207

Closed jfjallid closed 5 months ago

jfjallid commented 7 years ago

In my setup, my server has two network interfaces: one for the server's own traffic, on one network, and another, connected to a different subnet, for the VMs and now Docker containers to communicate on. The server (host) only has an IP assigned to the first interface; the second is up but has no IP assigned. I'm starting my container with:

```
pipework enp4s6 $(docker start bind) 10.0.x.y/24 && pipework route bind add default via 10.0.x.1
```

This works perfectly: the container starts and communicates on the second network interface. I also have a few VMs running that use the second NIC, and my problem is that they can't reach the Docker container on that same interface.

I tried running the following commands inside my VM, but that didn't help:

```
ip addr del 10.0.x.z/24 dev eth0
ip link add link eth0 name eth0m type macvlan mode bridge
ip link set eth0m up
ip addr add 10.0.x.z/24 dev eth0m
route add default gw 10.0.x.1
```

The VM's network interface is of type PCnet, with Promiscuous Mode set to Allow All.

Am I missing something, or would my scenario simply not work, i.e. letting my VMs communicate with the container over the same NIC?

jpetazzo commented 7 years ago

Which VM system are you using? I know that some systems will hook themselves at a specific layer in the NIC driver, preventing communication with other sub-interfaces ...

You can try two things:

1. Enabling hairpin switching, if your network hardware supports it.
2. Instead of using macvlan, creating a bridge, attaching the containers to the bridge, and putting the NIC in the bridge as well (see the sketch below).
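If you go the bridge route, here's a minimal sketch using the interface and container names from this thread (br1 is an arbitrary bridge name; VirtualBox would also need its bridged adapter pointed at br1 instead of enp4s6):

```
# Create a bridge and enslave the physical NIC to it
ip link add name br1 type bridge
ip link set br1 up
ip link set enp4s6 up
ip link set enp4s6 master br1

# Attach the container to the bridge instead of directly to enp4s6;
# pipework accepts a bridge as its first argument
pipework br1 $(docker start bind) 10.0.x.y/24
pipework route bind add default via 10.0.x.1
```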

jfjallid commented 7 years ago

I'm using VirtualBox for my VMs. What would hairpin switching mean in this case? Do you mean using a trunk port from the server to my router, with subinterfaces for different VLANs? E.g. traffic from the Docker container goes out on one VLAN to the router, is routed to another subinterface, and comes back over the trunk to the server, but this time on a different VLAN.

How does a bridge differ from macvlan? Would it still support my setup, where the host system (the server) has no IP assigned to the interface, and multiple Docker containers as well as virtual machines use the interface with their own IPs on the same network?

jpetazzo commented 7 years ago

Hairpin switching would let the switch re-forward frames to your machine (potentially on the same subinterface, not requiring a trunk port). I know that it can help in this scenario but I haven't used it directly myself.
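On a Linux bridge, the analogous per-port knob is hairpin mode, which lets the bridge send frames back out the port they arrived on. A minimal sketch, assuming enp4s6 has been enslaved to a bridge as in the earlier sketch:

```
# Allow frames to be forwarded back out the ingress port
bridge link set dev enp4s6 hairpin on
```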

Using a bridge is generally more "compatible" but also more costly (in terms of latency and CPU overhead). To phrase things differently: macvlan (like the mechanisms used by VMware or VirtualBox) is more efficient, but designed for one specific use case, and these mechanisms don't always play nice with each other.

If your machine has an extra NIC (and you have an extra switch port available), you could use one NIC for the VMs and one NIC for the containers. This would let you use the fast, efficient mechanisms, at the expense of one extra round-trip to the switch. In my experience, that extra round-trip is not an issue, especially on gigE networks; and at gigabit speeds, the CPU overhead of the internal bridge gets significant.

If your machine doesn't have an extra NIC, you could try VLANs: put the containers on one VLAN, the VMs on another, and bridge the two VLANs together (if your switch supports it).
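The host side of that could look like the sketch below, with hypothetical VLAN IDs 10 and 20 (the actual bridging between the two VLANs would be configured on the switch or router), and assuming pipework accepts a VLAN subinterface as the parent the same way it does a physical NIC:

```
# One 802.1Q subinterface per VLAN on the physical NIC
ip link add link enp4s6 name enp4s6.10 type vlan id 10
ip link add link enp4s6 name enp4s6.20 type vlan id 20
ip link set enp4s6.10 up
ip link set enp4s6.20 up

# Containers attach to the VLAN 10 subinterface;
# VirtualBox bridges the VMs to enp4s6.20
pipework enp4s6.10 $(docker start bind) 10.0.x.y/24
```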

I hope this helps!