troglobit / pimd

PIM-SM/SSM multicast routing for UNIX and Linux
http://troglobit.com/projects/pimd/
BSD 3-Clause "New" or "Revised" License

PIMD does not appear to listen to Docker user networks on startup #122

Closed: sandersaares closed this issue 6 years ago

sandersaares commented 6 years ago

I have iterated further on the situation described in #121: I disabled the default docker0 bridge network and instead created a new Docker network. What I now observe is that after a server restart, PIMD does not operate on this network. If I restart PIMD, it starts operating as expected.

On Ubuntu 16.04 I install Docker and then disable the default bridge network by adding "bridge": "none" to the Docker configuration. Then I create a new Docker network using docker network create --driver bridge --gateway 172.31.250.254 --subnet 172.31.250.0/24 asdf.
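
For reference, the Docker side of this setup boils down to roughly the following. The daemon.json path and the daemon restart step are my assumptions; only the docker network create command is verbatim from above:

    # /etc/docker/daemon.json -- disables the default docker0 bridge (path assumed)
    {
      "bridge": "none"
    }

    sudo systemctl restart docker
    docker network create --driver bridge --gateway 172.31.250.254 --subnet 172.31.250.0/24 asdf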

I also install pimd and keep its default configuration untouched.

After a system restart, I currently execute sudo iptables --policy FORWARD ACCEPT manually, per the findings in #121.
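
A quick way to verify that the policy change actually took effect (my own sanity check, not part of the original steps):

    sudo iptables -S FORWARD | head -n1    # should print: -P FORWARD ACCEPT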

What I observe on startup (with no containers running) is the following:

saares@michaelscarn:~$ ifconfig
br-7c952626e729 Link encap:Ethernet  HWaddr 02:42:c6:ab:9b:5b
          inet addr:172.31.250.254  Bcast:0.0.0.0  Mask:255.255.255.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth0      Link encap:Ethernet  HWaddr 00:15:5d:04:eb:d8
          inet addr:10.0.5.224  Bcast:10.0.5.255  Mask:255.255.254.0
          inet6 addr: fe80::215:5dff:fe04:ebd8/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:7785 errors:0 dropped:27 overruns:0 frame:0
          TX packets:185 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:938957 (938.9 KB)  TX bytes:17133 (17.1 KB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:160 errors:0 dropped:0 overruns:0 frame:0
          TX packets:160 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:11840 (11.8 KB)  TX bytes:11840 (11.8 KB)

pimreg    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          UP RUNNING NOARP  MTU:1472  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

PIMD appears to be operating normally on eth0 (at least a bunch of entries from the LAN show up in ip mroute).
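
For reference, the checks behind that statement amount to something like the following (a sketch, not the exact session transcript):

    ip mroute show    # kernel multicast forwarding entries installed by pimd
    sudo pimd -r      # pimd's own view of its interfaces and routing table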

Then I create a container; ifconfig now shows a new entry:

vethd12df9d Link encap:Ethernet  HWaddr 7e:bb:5e:26:d5:5d
          inet6 addr: fe80::7cbb:5eff:fe26:d55d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:13885 errors:0 dropped:0 overruns:0 frame:0
          TX packets:19871 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1227631 (1.2 MB)  TX bytes:26850819 (26.8 MB)

This interface appears to be the one Docker uses to communicate with the container. I don't fully understand how it works (it has no IPv4 address?) beyond the fact that I see the container's traffic when I monitor this interface (e.g. when I run apt update inside the container, the HTTP traffic goes over it).
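
My working assumption (not verified here) is that this is the host end of a veth pair whose other end is the container's eth0, and that the host end is enslaved to the br- bridge; that would explain why it carries the container's traffic without having an IPv4 address of its own. Something like the following should confirm it:

    ip -d link show vethd12df9d    # should report "master br-7c952626e729"
    bridge link show               # lists the veth ports attached to each bridge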

I immediately notice that performing a packet capture on this new "veth" interface shows no PIM traffic. I would expect PIMD to start using this interface. When I start a listener, I see the listener's IGMP traffic but nothing from PIMD:

   37 55.147483747 172.31.250.1 → 224.0.0.22   IGMPv3 54 Membership Report / Join group 239.1.2.3 for any sources
   38 55.431623082 172.31.250.1 → 224.0.0.22   IGMPv3 54 Membership Report / Join group 239.1.2.3 for any sources
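
(For reproduction, a capture along these lines should show the same thing; the interface and display filter here are my guess, the exact invocation used above is not recorded.)

    sudo tshark -i vethd12df9d -Y "igmp || pim"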

Indeed, PIMD is unaware of this join:

saares@michaelscarn:~$ sudo pimd -r | grep 239.1.2.3
saares@michaelscarn:~$

After restarting PIMD, it immediately starts operating normally on this interface:

   39 126.444730167 172.31.250.254 → 224.0.0.1    IGMPv3 50 Membership Query, general
   40 126.444752467 172.31.250.254 → 224.0.0.13   PIMv2 60 Hello
   41 126.451464704 172.31.250.254 → 224.0.0.22   IGMPv3 70 Membership Report / Join group 224.0.0.22 for any sources / Join group 224.0.0.2 for any sources / Join group 224.0.0.13 for any sources
   42 127.091480757 172.31.250.254 → 224.0.0.22   IGMPv3 70 Membership Report / Join group 224.0.0.22 for any sources / Join group 224.0.0.2 for any sources / Join group 224.0.0.13 for any sources
   43 128.659498817 172.31.250.254 → 224.0.0.22   IGMPv3 70 Membership Report / Join group 224.0.0.22 for any sources / Join group 224.0.0.2 for any sources / Join group 224.0.0.13 for any sources
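
(The restart itself is nothing special; on this Ubuntu 16.04 box it is just something like the line below, with the service name assumed. Whether a plain SIGHUP, which makes pimd re-read its configuration, would also pick up new interfaces is something I have not verified.)

    sudo systemctl restart pimd    # or: sudo service pimd restart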

I would not expect PIMD to require a restart in this situation. Is the behavior I observe normal? Do I need to perform some additional configuration to get it to pick up this interface automatically?

If I restart pimd before creating the container (i.e. before the "veth" interface exists), it works fine immediately once I start the container. Perhaps the veth interface is just a red herring and the key factor is something that happens earlier during startup (related to the Docker-created br- network?).
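
One workaround I am considering (untested; the unit names and override path are assumptions) is to order pimd after Docker, so the br- bridge already exists when pimd enumerates interfaces at startup:

    # /etc/systemd/system/pimd.service.d/override.conf  (hypothetical drop-in)
    [Unit]
    After=docker.service
    Wants=docker.service

    sudo systemctl daemon-reload
    sudo systemctl restart pimd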

troglobit commented 6 years ago

This looks like a duplicate of #21. Currently pimd needs to be restarted to see new interfaces. I have no idea how docker networking is integrated in Linux, so I cannot help you there.

sandersaares commented 6 years ago

Understood. Closing as duplicate then!