troglobit / pimd

PIM-SM/SSM multicast routing for UNIX and Linux
http://troglobit.com/projects/pimd/
BSD 3-Clause "New" or "Revised" License

PIMD + Docker vol 2018 #121

Closed: sandersaares closed this issue 6 years ago

sandersaares commented 6 years ago

Having read #70 and http://troglobit.com/2016/03/07/testing-multicast-with-docker/ I approached the topic with great expectations but alas, I fail to configure multicast routing with Docker. Perhaps I am doing it wrong. Perhaps times have changed. I describe my scenario here and request guidance.

I have an Ubuntu 16.04 system that is a Docker host, with a default Docker bridge network in which I want to have a container receive multicast traffic.

saares@michaelscarn:~$ sudo ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:58:47:21:dd
          inet addr:172.17.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:58ff:fe47:21dd/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:32926 errors:0 dropped:0 overruns:0 frame:0
          TX packets:40983 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2478032 (2.4 MB)  TX bytes:55008011 (55.0 MB)

eth0      Link encap:Ethernet  HWaddr 00:15:5d:04:eb:d8
          inet addr:10.0.5.224  Bcast:10.0.5.255  Mask:255.255.254.0
          inet6 addr: fe80::215:5dff:fe04:ebd8/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:168647 errors:0 dropped:60 overruns:0 frame:0
          TX packets:39771 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:74643540 (74.6 MB)  TX bytes:7928329 (7.9 MB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:580 errors:0 dropped:0 overruns:0 frame:0
          TX packets:580 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:32840 (32.8 KB)  TX bytes:32840 (32.8 KB)

pimreg    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          UP RUNNING NOARP  MTU:1472  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

vethc003bd0 Link encap:Ethernet  HWaddr d6:0a:db:78:1d:f7
          inet6 addr: fe80::d40a:dbff:fe78:1df7/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:17103 errors:0 dropped:0 overruns:0 frame:0
          TX packets:21444 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1515607 (1.5 MB)  TX bytes:28525752 (28.5 MB)

I installed PIMD and started it with the default configuration and the DEBUG log level.

I started a container and in it ran iperf -s -u -B 239.1.2.3 -i 1. In a packet capture on docker0 I see:
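For reference, the receiver side of `iperf -s -u -B 239.1.2.3` boils down to binding a UDP socket and joining the group; a minimal Python sketch of the same logic (group address from this thread; 5001 is iperf's default UDP port, and the interface choice is left to the kernel, which is an assumption of this sketch):

```python
import socket
import struct

def open_multicast_receiver(group: str, port: int) -> socket.socket:
    """Bind to a multicast group and join it, roughly what iperf -s -u -B does."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    # On Linux, binding to the group address filters out unrelated traffic.
    sock.bind((group, port))
    # Joining with INADDR_ANY lets the kernel pick the interface; this join
    # is what produces the IGMPv3 Membership Report seen in the capture.
    mreq = struct.pack("4s4s",
                       socket.inet_aton(group),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

# Inside the container one would then receive with:
#   sock = open_multicast_receiver("239.1.2.3", 5001)
#   data, src = sock.recvfrom(2048)
```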

 394 1924.633463045   172.17.0.2 → 224.0.0.22   IGMPv3 54 Membership Report / Join group 239.1.2.3 for any sources
  395 1924.773505873   172.17.0.2 → 224.0.0.22   IGMPv3 54 Membership Report / Join group 239.1.2.3 for any sources
  396 1928.777992528   172.17.0.1 → 224.0.0.1    IGMPv3 50 Membership Query, general
  397 1932.661598742   172.17.0.2 → 224.0.0.22   IGMPv3 54 Membership Report / Join group 239.1.2.3 for any sources
  398 1937.877629288   172.17.0.1 → 224.0.0.22   IGMPv3 70 Membership Report / Join group 224.0.0.22 for any sources / Join group 224.0.0.2 for any sources / Join group 224.0.0.13 for any sources

On the host I then do iperf -c 239.1.2.3 -u -T 32 -t 3 -i 1.
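The -T 32 option matters here: the default multicast TTL is 1, so packets would be dropped at the first routing hop before pimd could forward them to docker0. A hedged Python sketch of the sender side (group and port as above; port 5001 is an assumption based on iperf's default):

```python
import socket

def open_multicast_sender(ttl: int = 32) -> socket.socket:
    """UDP socket whose multicast datagrams carry a TTL > 1, like iperf -T 32."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # TTL 1 (the default) never survives a routed hop; raising it lets the
    # multicast router forward the packet from eth0 to docker0.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)
    return sock

# Usage on the host:
#   sock = open_multicast_sender(ttl=32)
#   sock.sendto(b"probe", ("239.1.2.3", 5001))
```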

In syslog, pimd reports: Added kernel MFC entry src 10.0.5.224 grp 239.1.2.3 from eth0 to docker0.

In pimd's routing table I see some potentially relevant entries for this group address:

saares@michaelscarn:~$ sudo pimd -r | grep -B 10 -A 10 239.1.2.3
Asserted oifs: ...
Outgoing oifs: o..
Incoming     : ..I

TIMERS:  Entry    JP    RS  Assert VIFS:  0  1  2
             0    45     0       0        0  0  0
----------------------------------- (S,G) ------------------------------------
----------------------------------- (*,G) ------------------------------------
Source           Group            RP Address       Flags
---------------  ---------------  ---------------  ---------------------------
INADDR_ANY       239.1.2.3        172.17.0.1       WC RP
Joined   oifs: ...
Pruned   oifs: ...
Leaves   oifs: .l.
Asserted oifs: ...
Outgoing oifs: .o.
Incoming     : ..I

TIMERS:  Entry    JP    RS  Assert VIFS:  0  1  2
             0    10     0       0        0  0  0
----------------------------------- (S,G) ------------------------------------
Source           Group            RP Address       Flags
---------------  ---------------  ---------------  ---------------------------
10.0.5.224       239.1.2.3        172.17.0.1       SPT CACHE SG
Joined   oifs: ...
Pruned   oifs: ...
Leaves   oifs: .l.
Asserted oifs: ...
Outgoing oifs: .o.
Incoming     : I..

TIMERS:  Entry    JP    RS  Assert VIFS:  0  1  2
           190    45     0       0        0  0  0
----------------------------------- (*,G) ------------------------------------

sudo ip mroute lists (10.0.5.224, 239.1.2.3) Iif: eth0 Oifs: docker0.

The Linux firewall (ufw) is inactive:

saares@michaelscarn:~$ sudo ufw status
Status: inactive

sysctl -a says this:

net.ipv4.conf.all.mc_forwarding = 1
net.ipv4.conf.default.mc_forwarding = 0
net.ipv4.conf.docker0.mc_forwarding = 1
net.ipv4.conf.eth0.mc_forwarding = 1
net.ipv4.conf.lo.mc_forwarding = 0
net.ipv4.conf.pimreg.mc_forwarding = 1
net.ipv4.conf.vethc003bd0.mc_forwarding = 0

But in the end, the container receives no multicast traffic. In fact, a packet capture on the docker0 network shows no UDP traffic whatsoever.

Is there something obvious I am doing wrong?

troglobit commented 6 years ago

I really have no idea why it doesn't work for you; on the other hand, I've never used iperf to test multicast, mostly because it doesn't have an -I interface option like ping does. Try ping (with interface and TTL options) and check with tcpdump on the receiver.

sandersaares commented 6 years ago

The idea being that if I send a ping to a multicast address on eth0 it should route to docker0, right? With a listener in docker0 subscribed to the group, of course. This is also a negative result.

The ping shows up on docker0 only if I specify the docker0 interface to be used.
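Specifying the interface matters because, by default, the kernel routes 239.1.2.3 out via eth0 (as the `ip route get` output below shows). At the socket level this corresponds to the IP_MULTICAST_IF option; a small Python sketch (the 172.17.0.1 address is docker0's from this thread):

```python
import socket

def set_multicast_egress(sock: socket.socket, ifaddr: str) -> None:
    """Pin multicast sends to the interface that owns `ifaddr`,
    overriding the normal routing-table choice."""
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF,
                    socket.inet_aton(ifaddr))

# Pinning to docker0 puts traffic on the bridge directly, which bypasses
# the multicast router entirely (hence the ping only shows up this way):
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   set_multicast_egress(sock, "172.17.0.1")
```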

Can you suggest any detailed tracing I could turn on to find out how Linux is routing my packets? My Linux networking knowledge is very basic, so while I would like to get to the bottom of this, I really have no idea where to even look, and any hints are appreciated.

More things I tried:

saares@michaelscarn:~$ ip route get 239.1.2.3
multicast 239.1.2.3 dev eth0  src 10.0.5.224
    cache <mc>

saares@michaelscarn:~$ sudo iptables -4 --list -v
Chain INPUT (policy ACCEPT 889K packets, 55M bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy DROP 5772 packets, 8638K bytes)
 pkts bytes target     prot opt in     out     source               destination
79095   66M DOCKER-ISOLATION  all  --  any    any     anywhere             anywhere
46341   63M DOCKER     all  --  any    docker0  anywhere             anywhere
40569   54M ACCEPT     all  --  any    docker0  anywhere             anywhere             ctstate RELATED,ESTABLISHED
32754 2471K ACCEPT     all  --  docker0 !docker0  anywhere             anywhere
    0     0 ACCEPT     all  --  docker0 docker0  anywhere             anywhere

Chain OUTPUT (policy ACCEPT 43060 packets, 9907K bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain DOCKER (1 references)
 pkts bytes target     prot opt in     out     source               destination

Chain DOCKER-ISOLATION (1 references)
 pkts bytes target     prot opt in     out     source               destination
79095   66M RETURN     all  --  any    any     anywhere             anywhere

Oh hey does that say DROP there?

saares@michaelscarn:~$ sudo iptables --policy FORWARD ACCEPT

And now it works! Cool!

Is this expected? That is to say, should changing this policy be a normal part of setting up multicast on a Docker host? Or is it an overreaching workaround where a more finely tuned change would suffice?
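For example, I imagine something narrower than flipping the whole policy could work, though I have not tested it:

```shell
# Hypothetical narrower alternative (untested): accept only forwarded
# multicast (224.0.0.0/4) headed for the docker0 bridge, and keep
# Docker's default DROP policy on the FORWARD chain.
sudo iptables -I FORWARD -d 224.0.0.0/4 -o docker0 -j ACCEPT
sudo iptables --policy FORWARD DROP
```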

troglobit commented 6 years ago

I just re-read my own howto and realized I actually have used iperf before ... here it is, maybe you can find some pointers there: http://troglobit.com/howto/multicast/

Also, I think docker0 is a bridge, so some of the tips in the howto for bridges may be worth trying out.

sandersaares commented 6 years ago

Thank you for the assistance. I will close this issue now, given that it seems to work fine with the iptables adjustment above.

troglobit commented 6 years ago

Great to hear, good luck! :)