@benbooth493 Could you try https://github.com/jpetazzo/pipework to set up a dedicated network interface in the container, please? @jpetazzo confirms that this would help with multicast traffic.
@benbooth493 pipework should allow you to set something like this up. Docker 0.9/1.0+ will have support for plugins and I believe that'll make it easier to come up with custom networking setups.
I'll close this issue now since it's not immediately actionable and it's going to be possible to do this via a future pipework Docker plugin. Please feel free to comment.
Is there a reason for the veth to be created without the multicast flag? Setting it would help get multicast working in Docker.
@vincentbernat: it looks like it is created without MULTICAST because that's the default mode; but I think it's fairly safe to change that. Feel free to submit a pull request!
I had the same problem and confirmed that using pipework to define a dedicated interface did indeed work. That said, I'd very much like to see a way to support multicast out of the box in docker. Can someone point me to where in the docker source the related code lives so I can try a custom build with multicast enabled?
I recently had a conversation with @spahl, who confirmed that it was necessary (and sufficient) to set the MULTICAST flag if you want to do multicast.
@unclejack: can we reopen that issue, please?
@HackerLlama: I think that the relevant code would be in https://github.com/docker/libcontainer/blob/master/network/veth.go (keep in mind that this code is vendored in the Docker repository).
Maybe it would be easier to modify this here: https://github.com/docker/libcontainer/blob/master/netlink/netlink_linux.go#L867
I suppose that adding:
msg.Flags = syscall.IFF_MULTICAST
would be sufficient (maybe the same thing for the result of newInfomsgChild just below).
Agreed, it makes more sense to edit the netlink package, since MULTICAST can (and should, IMHO!) be the default.
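For anyone rebuilding with that change, a quick way to check whether a container's veth actually ends up with the flag is to look at the host side (a minimal check; interface names will vary):
# MULTICAST should appear inside the <...> flag list of each veth
ip -o link show | grep veth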
Can we reopen this? It was closed "since it's not immediately actionable". With vincentbernat's comment in mind, it now appears both actionable and simple. Pretty please?
Agreed with @bhyde that this looks doable, and that multicast support would have a substantial positive effect on things like autodiscovery of resources provided through docker.
This would really help me; it would make e.g. ZMQ pub/sub with Docker much easier. Is anyone already working on this?
Is rebuilding docker with
msg.Flags = syscall.IFF_MULTICAST
and installing this build as the daemon sufficient to get multicast working, or does the docker client (that builds the containers) also need some changes?
Multicast seems to be working fine for me between containers on the same host. In different shells, I start up two containers with:
docker run -it --name node1 ubuntu:14.04 /bin/bash
docker run -it --name node2 ubuntu:14.04 /bin/bash
Then in each one, I run:
apt-get update && apt-get install iperf
Then in node 1, I run:
iperf -s -u -B 224.0.55.55 -i 1
And in node 2, I run:
iperf -c 224.0.55.55 -u -T 32 -t 3 -i 1
I can see the packets from node 2 show up in node 1's console, so it looks like it's working. The only thing I haven't figured out yet is multicasting among containers on different hosts. I'm sure that'll require forwarding the multicast traffic through some iptables magic.
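Before reaching for iptables, it may also be worth checking how the docker0 bridge itself handles multicast; a small diagnostic sketch, assuming a standard Linux bridge (these sysfs knobs are kernel defaults, not Docker-specific):
# 1 means the bridge filters multicast based on IGMP/MLD snooping; 0 means it floods it to all ports
cat /sys/class/net/docker0/bridge/multicast_snooping
cat /sys/class/net/docker0/bridge/multicast_querier
# for experimentation only: flood all multicast to every bridge port
echo 0 | sudo tee /sys/class/net/docker0/bridge/multicast_snooping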
Please make it happen, if it is easy to fix! Thank you!
Hi there,
I'm also highly interested in understanding how to enable multicast in containers (between a container and the outside world). Do I have to compile docker myself for now?
Thanks,
Using the --net host option works for now, but it is obviously less than ideal compared to the true isolated-networking container flow.
Indeed. That's what I'm using and it does work as expected. I was wondering if there could be an update on this ticket regarding what remains to be done in docker. There is a mention of a flag to be set; is there more work to it?
Cheers :)
How can we have multicast on Docker 1.3.2?
@brunoborges use --net host
@defunctzombie yeah, that will work. But are there any known downsides of using --net=host?
@brunoborges, yes, there are significant downsides IMHO, and it should be used only if you know what you are doing.
Take a look at:
https://docs.docker.com/articles/networking/#how-docker-networks-a-container
OK, so --net=host is not an option for me, since it cannot be used together with --link. Has anyone tried what @defunctzombie said? Does it work? If so, why not integrate it? IMHO multicast is used by too many applications for discovery to ignore this issue.
Ok, I gave it a try myself, but to no avail. I modified the code to set the IFF_MULTICAST flag. I see the veth interfaces coming up with MULTICAST enabled, but once the interface is up the MULTICAST flag is gone (output of ip monitor all):
[LINK]79: vethe9774fa: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
link/ether a2:ae:8c:b8:6c:0a brd ff:ff:ff:ff:ff:ff
[LINK]79: vethe9774fa: <BROADCAST,MULTICAST> mtu 1500 qdisc noop master docker0 state DOWN
link/ether a2:ae:8c:b8:6c:0a brd ff:ff:ff:ff:ff:ff
[LINK]79: vethe9774fa: <BROADCAST,MULTICAST> mtu 1500 qdisc noop master docker0 state DOWN
link/ether a2:ae:8c:b8:6c:0a brd ff:ff:ff:ff:ff:ff
[NEIGH]dev vethe9774fa lladdr a2:ae:8c:b8:6c:0a PERMANENT
[LINK]79: vethe9774fa: <BROADCAST,MULTICAST> mtu 1500 master docker0 state DOWN
link/ether a2:ae:8c:b8:6c:0a
[LINK]79: vethe9774fa: <NO-CARRIER,BROADCAST,UP> mtu 1500 qdisc noqueue master docker0 state DOWN
link/ether a2:ae:8c:b8:6c:0a brd ff:ff:ff:ff:ff:ff
[LINK]79: vethe9774fa: <NO-CARRIER,BROADCAST,UP> mtu 1500 master docker0 state DOWN
link/ether a2:ae:8c:b8:6c:0a
[LINK]79: vethe9774fa: <NO-CARRIER,BROADCAST,UP> mtu 1500 master docker0 state DOWN
link/ether a2:ae:8c:b8:6c:0a
[LINK]Deleted 78: vethf562f68: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
link/ether 56:28:af:c2:e9:a0 brd ff:ff:ff:ff:ff:ff
[ROUTE]ff00::/8 dev vethe9774fa table local metric 256
[ROUTE]fe80::/64 dev vethe9774fa proto kernel metric 256
[LINK]79: vethe9774fa: <BROADCAST,UP,LOWER_UP> mtu 1500
link/ether a2:ae:8c:b8:6c:0a
[LINK]79: vethe9774fa: <BROADCAST,UP,LOWER_UP> mtu 1500 master docker0 state UP
link/ether a2:ae:8c:b8:6c:0a
[LINK]79: vethe9774fa: <BROADCAST,UP,LOWER_UP> mtu 1500 master docker0 state UP
link/ether a2:ae:8c:b8:6c:0a
[LINK]79: vethe9774fa: <BROADCAST,UP,LOWER_UP> mtu 1500 master docker0 state UP
link/ether a2:ae:8c:b8:6c:0a
[LINK]79: vethe9774fa: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP
link/ether a2:ae:8c:b8:6c:0a brd ff:ff:ff:ff:ff:ff
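One way to double-check whether the flag is really cleared once the interface is up, rather than just omitted from the monitor output above (interface name taken from that log), is to read the flags directly:
ip link show vethe9774fa                 # MULTICAST should appear in the <...> list if set
printf '0x%x\n' $(( $(cat /sys/class/net/vethe9774fa/flags) & 0x1000 ))   # 0x1000 = IFF_MULTICAST; non-zero means set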
I'd be interested in helping work on this issue, but at this stage the multicast support status is unclear to me. Does a docker container fail to receive a routed multicast stream at all, or just between running containers?
Well, the plot thickens, because I had overlooked @rhasselbaum's comment. Multicast actually works fine between containers; it is just that the ifconfig or 'ip address show' output doesn't indicate this. I ran the exact same tests as @rhasselbaum and they were successful. After that I tried my own solution with a distributed EHCache that uses multicast for discovery, and that worked as well. So there doesn't seem to be a problem anymore...
Alright. So stock docker seems to have multicast working between containers. I'm not sure I understand the last part of your comment, though. More specifically, I'm wondering whether multicast can be forwarded from the host to the container with --net host (which is, well, expected).
I cannot answer your question about --net host because I require --link, and that combination is impossible.
As do I. Okay, I will play with it as well and report here.
@hmeerlo Multicast works fine between containers on the same host. But I think more work is needed (or a HOWTO) on getting it to work across hosts. I'm sure it would work with --net host, but that has other drawbacks.
I finally got the time to try for myself and I was able to use multicast in the various scenarios that I was interested in:
For the last two, just make sure you have the appropriate route in place, something along the lines of:
$ sudo route add -net 224.0.0.0/4 dev docker0
This worked with the stock docker 1.3.3 without --net host.
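For reference, the host-to-container case can be checked with the same iperf pair used earlier in this thread (group address reused from that example): run the receiver in a container and the sender on the host once the route is in place.
# inside the container:
iperf -s -u -B 224.0.55.55 -i 1
# on the host:
iperf -c 224.0.55.55 -u -T 32 -t 3 -i 1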
@Lawouach when you say "container to container", is that on the same host?
I need the default docker interface eth0 inside the container to provide MULTICAST, and I am trying to patch it, but it does not work.
diff --git a/vendor/src/github.com/docker/libcontainer/netlink/netlink_linux.go b/vendor/src/github.com/docker/libcontainer/netlink/netlink_linux.go
index 3ecb81f..c78cd14 100644
--- a/vendor/src/github.com/docker/libcontainer/netlink/netlink_linux.go
+++ b/vendor/src/github.com/docker/libcontainer/netlink/netlink_linux.go
@@ -713,7 +713,7 @@ func NetworkCreateVethPair(name1, name2 string, txQueueLen int) error {
}
defer s.Close()
- wb := newNetlinkRequest(syscall.RTM_NEWLINK, syscall.NLM_F_CREATE|syscall.NLM_F_EXCL|syscall.NLM_F_ACK)
+ wb := newNetlinkRequest(syscall.RTM_NEWLINK, syscall.NLM_F_CREATE|syscall.NLM_F_EXCL|syscall.NLM_F_ACK|syscall.IFF_MULTICAST)
msg := newIfInfomsg(syscall.AF_UNSPEC)
wb.AddData(msg)
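A hedged guess at why the patch above has no effect: IFF_MULTICAST is an interface flag carried in the ifinfomsg payload, not a netlink request flag, so OR-ing it into the newNetlinkRequest flags is likely a no-op; vincentbernat's earlier suggestion targets msg.Flags on the ifinfomsg instead. Either way, a rebuilt daemon can be checked from the shell (container name below is just an example, and docker exec needs Docker 1.3+):
docker run -d --name mcast-test ubuntu:14.04 sleep 300
docker exec mcast-test ip link show eth0     # container side: look for MULTICAST in <...>
ip -o link show | grep veth                  # host side: same check for the veth peer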
@Lawouach I am using Docker 1.6 and adding the route you mention:
$ sudo route add -net 224.0.0.0/4 dev docker0
but my container is not capable of receiving multicast. I cannot use --net="host" because of its incompatibility with --link.
I have also tried to add that route inside the container but it has no effect.
Any news about this issue?
@gonberdeal Back then I had misread my results and wasn't able to receive multicast with just the route you mention. What I ended up doing (which isn't ideal but is a suitable workaround for now for my own use case) is to enslave a second physical network interface to the docker bridge.
$ brctl addif docker0 eth2
$ route add -net 224.0.0.0/4 dev eth2
That way, I can receive multicast without relying on --net=host.
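A couple of sanity checks for that workaround (interface name as in the commands above):
brctl show docker0            # eth2 should be listed as a port of the bridge
ip route show | grep '^224'   # the 224.0.0.0/4 route should point at eth2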
The lack of IFF_MULTICAST is preventing mDNS from working. I have a small app that tries to resolve a service (mDNS-SD), but this fails early during interface enumeration because it can't find any capable netdevs. It may be that the veth devices do in fact support multicast even if IFF_MULTICAST is not set (as it seems after reading through the comments here), but any application that tries to behave nicely will not work (avahi, for example). Creating a veth device using 'ip link' does give it the IFF_MULTICAST capability by default.
haze ~/dev/iproute2/ip (master)$ sudo ip link add link eth0 type veth
haze ~/dev/iproute2/ip (master)$ ip addr show veth0
109: veth0@veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 36:ab:e1:be:c0:94 brd ff:ff:ff:ff:ff:ff
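Building on that observation, here is a rough sketch of what pipework automates, done by hand with plain iproute2 (all names and addresses are examples, and nsenter must be available on the host):
CONTAINER=mycontainer
PID=$(docker inspect -f '{{.State.Pid}}' $CONTAINER)
sudo ip link add veth_host type veth peer name veth_cont   # created with MULTICAST by default
sudo brctl addif docker0 veth_host
sudo ip link set veth_host up
sudo ip link set veth_cont netns $PID
sudo nsenter -t $PID -n ip link set veth_cont name eth1
sudo nsenter -t $PID -n ip addr add 172.17.0.100/16 dev eth1
sudo nsenter -t $PID -n ip link set eth1 up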
ping @mavenugo
Hi there,
I face the same problem as @Hugne. I have a Mono application running inside docker which iterates over all interfaces and looks for multicast-enabled devices (a built-in function of Mono/.NET). This doesn't work and therefore breaks my zero-conf discovery mechanism.
Any news on this would be highly appreciated!
Cheers, christian
I confirm that avahi is not working due to missing IFF_MULTICAST. I have to use pipework.
@aiker would you mind explaining how you're using it to that effect please?
@Lawouach I start the docker container with the --net=none option and use the command:
pipework docker0 -i eth0 CONTAINER_ID IP_ADDRESS/IP_MASK@DEFAULT_ROUTE_IP
which creates an eth0 interface inside the container with the IFF_MULTICAST flag and the given IP address.
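For example, with placeholder values for the image, container name, address, and gateway:
docker run --net=none -d --name mcast-app myimage
pipework docker0 -i eth0 mcast-app 172.17.0.100/16@172.17.42.1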
Thanks. Will try it out then :)
Good that there's a workaround for this, but the fact that MLD does not work with standard Docker is not.
@Hugne agreed! Actually I don't want to use pipework (my toolbox and tech stack are already huge enough) just to get an active multicast flag.
+1 for IFF_MULTICAST being the default in... 1.7.1? :)
+1, this would make multi-host Elasticsearch clusters easier (note that Elastic themselves don't actually recommend this in prod, citing the case of nodes accidentally joining a cluster, but that seems less likely in a containerized scenario).
+1
@mavenugo cool! So we'll see this in 1.9?
I'm on 1.9-dev. The veth/eth pair is created as such:
40: veth660ec14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
(...)
39: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
Yet multicast does not seem to work with the overlay multi-host network.
@emsi: have you properly configured your routes?
On the underlay? Yes, since unicast on the overlay is working. On the overlay there is no need for routing, as it's just one segment.
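A couple of diagnostics that may help narrow this down (hedged; standard iproute2/tcpdump commands, run inside a container attached to the overlay network):
ip maddr show dev eth0        # which multicast groups the interface has actually joined
tcpdump -ni eth0 multicast    # if tcpdump is installed: watch whether multicast frames arrive at all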
I currently have a bunch of containers configured using veth as the network type. The host system has a bridge device and can ping 224.0.0.1 but the containers can't.
Any ideas?