troglobit / pimd

PIM-SM/SSM multicast routing for UNIX and Linux
http://troglobit.com/projects/pimd/
BSD 3-Clause "New" or "Revised" License

pimd with mininet #127

Closed: carlasauvanaud closed this issue 5 years ago

carlasauvanaud commented 5 years ago

Hi,

I am trying to deploy, with Mininet on my laptop, an environment with 6 hosts, each one in its own LAN, and 2 routers running pimd. However, Wireshark shows that the pimd instances on my two routers never seem to forward the multicast messages coming from any host. The RP is reported correctly, but the multicast routing table remains empty.

Here is my topology:

                     +----------+            +----------+                        
rx host1+----LAN1----+r1eth1    |            |    r2eth1+----LAN5----+tx host4   
                     |          |            |          |                        
rx host2+----LAN2----+r1eth2    |            |    r2eth2+----LAN6----+tx host5   
                     |    r1eth4+----LAN4----+r2eth4    |                        
rx host3+----LAN3----+r1eth3    |            |    r2eth3+----LAN7----+tx host6   
                     |          |            |          |                        
                     +----------+            +----------+                        
                        router1                router2    

I am running Linux kernel 4.15.18 with:

CONFIG_IP_PIMSM_V1=y
CONFIG_IP_PIMSM_V2=y
CONFIG_IPV6_PIMSM_V2=y
CONFIG_IP_MULTICAST=y
CONFIG_IP_MROUTE_MULTIPLE_TABLES=y
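(For reference, these can be double-checked on a running kernel with something like the following; the config file path differs per distro, and /proc/config.gz only exists if the kernel was built with IKCONFIG_PROC:)

grep -E 'PIMSM|MROUTE|IP_MULTICAST' /boot/config-$(uname -r)
zgrep -E 'PIMSM|MROUTE|IP_MULTICAST' /proc/config.gz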

I set up the correct unicast routes so that my hosts can ping each other, and I am testing multicast connectivity with mcjoin (roughly as shown below). I tested several configuration files for pimd: with either only rXeth4 enabled on each router or all interfaces enabled, and with or without a fixed rp-address (priority of 1 or 200). All cases lead to the same problem. Now I am wondering: is it even possible to run two pimd instances properly on the same machine?
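Concretely, the test looks roughly like this (exact mcjoin flags may differ between versions; 239.0.0.28 is the group used throughout this issue):

mcjoin 239.0.0.28       # on a receiver, e.g. host1: join the group and wait for data
mcjoin -s 239.0.0.28    # on a sender, e.g. host4: transmit test packets to the group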

Kind regards

EDIT: static multicast forwarding works fine using smcroute, by the way.
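For reference, the kind of static routes that do work look like this (interface names follow the diagram above; 10.2.4.100 is just a placeholder for host4's address):

smcroutectl add r2eth1 10.2.4.100 239.0.0.28 r2eth4    # router 2: from host4's LAN out over the inter-router link
smcroutectl add r1eth4 10.2.4.100 239.0.0.28 r1eth1    # router 1: from the inter-router link on towards host1's LAN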

ruckc commented 5 years ago

Try changing your rp_filter setting to 0 on your r1's pimreg interface and r2's pimreg interface.

echo 0 > /proc/sys/net/ipv4/conf/pimreg/rp_filter

That at least seems to be my issue, where the multicast data gets tunneled over the PIM Register tunnel between the two pimd instances.
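Note that Linux applies the maximum of the "all" and per-interface rp_filter values, so it is worth checking both; while debugging you can also just zero it everywhere:

sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.pimreg.rp_filter
for f in /proc/sys/net/ipv4/conf/*/rp_filter; do echo 0 > $f; done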

carlasauvanaud commented 5 years ago

Thanks for your help @ruckc, unfortunately it seems it was already set to 0 by default. So, no change.

troglobit commented 5 years ago

What released version of pimd are you running, or are you using the latest (unreleased) code from master?

Since multicast routing works with SMCRoute I can skip the usual TTL question. What's more important then is whether the routers can see each other: do they peer? You can query this in different ways, depending on the version of pimd.
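With recent code from master there is pimctl for this; on older releases you can get a full state dump by signalling the daemon, something along these lines (the dump file location may differ between versions and build options):

kill -USR1 `pidof pimd`
cat /var/run/pimd/pimd.dump    # dump path may vary; the neighbor list shows up in the Virtual Interface Table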

carlasauvanaud commented 5 years ago

Thanks @troglobit!

Here is what I get on router 1:

Multicast Routing Table ======================================================
--------------------------------- (*,*,G) ------------------------------------
Number of Groups: 0
Number of Cache MIRRORs: 0


Rendezvous-Point Set =========================================================
RP address       Incoming  Group Prefix        Priority  Holdtime
---------------------------------------------------------------------
169.254.0.1            32                                    65535
                           232/8                      1      65535
10.0.0.7                4                                    65535
                           224/4                      1      65535


Current BSR address: 10.2.6.1

08:02:31.932 RECV 46 bytes PIM v2 Hello from 10.1.1.1 to 224.0.0.13
08:02:31.932 PIM HELLO holdtime from 10.1.1.1 is 105
08:02:31.932 PIM DR PRIORITY from 10.1.1.1 is 1
08:02:31.932 PIM GenID from 10.1.1.1 is 183622635
08:02:31.932 RECV 46 bytes PIM v2 Hello from 10.1.2.1 to 224.0.0.13
08:02:31.933 PIM HELLO holdtime from 10.1.2.1 is 105
08:02:31.933 PIM DR PRIORITY from 10.1.2.1 is 1
08:02:31.933 PIM GenID from 10.1.2.1 is 1552789732
08:02:31.933 RECV 46 bytes PIM v2 Hello from 10.1.3.1 to 224.0.0.13
08:02:31.933 PIM HELLO holdtime from 10.1.3.1 is 105
08:02:31.933 PIM DR PRIORITY from 10.1.3.1 is 1
08:02:31.933 PIM GenID from 10.1.3.1 is 319243819


and **on router 2**:

Virtual Interface Table ======================================================
Vif  Local Address    Subnet              Thresh  Flags      Neighbors
---  ---------------  ------------------  ------  ---------  ---------
  0  10.0.0.8         10                       1  PIM        10.2.6.1
                                                             10.2.5.1
                                                             10.2.4.1
                                                             10.0.0.7
  1  10.2.4.1         10.2.4/24                1  DR NO-NBR
  2  10.2.5.1         10.2.5/24                1  DR NO-NBR
  3  10.2.6.1         10.2.6/24                1  DR NO-NBR
  4  10.0.0.8         register_vif0            1

Multicast Routing Table ======================================================
--------------------------------- (*,*,G) ------------------------------------
Number of Groups: 0
Number of Cache MIRRORs: 0


Rendezvous-Point Set =========================================================
RP address       Incoming  Group Prefix        Priority  Holdtime
---------------------------------------------------------------------
169.254.0.1            32                                    65535
                           232/8                      1      65535
10.0.0.7                0                                    65535
                           224/4                      1      65535


Current BSR address: 10.2.6.1


However, when running `mcjoin 239.0.0.28` on host1, I see the following on router 1 (the router directly connected to host1):

08:27:52.064 Received IGMP v3 Membership Report from 10.1.1.100 to 224.0.0.22
08:27:52.064 accept_membership_report(): IGMP v3 report, 16 bytes, from 10.1.1.100 to 224.0.0.22 with 1 group records.
08:27:52.064 accept_group_report(): igmp_src 10.1.1.100 ssm_src 0.0.0.0 group 239.0.0.28 report_type 34
08:27:52.064 Set delete timer for group: 239.0.0.28
08:27:52.064 SM group order from 10.1.1.100 (*,239.0.0.28)
08:27:52.064 find_route: Not PMBR, return NULL

troglobit commented 5 years ago

OK, I've reproduced it in Mininet now (first time trying it out). I'll have to look into this in more detail, possibly later tomorrow (CET). I want to verify against my setup in the CORE Network Emulator, which is what I normally use.

carlasauvanaud commented 5 years ago

This sounds great :) In any case, I found a workaround by using Containernet and deploying my routers as Docker containers.

troglobit commented 5 years ago

This wasn't the most trivial thing to debug; I understand why you got stuck. It turns out Mininet doesn't provide the isolation pimd needs out of the box.

To get it to work I had to set up a separate .conf file for each router, enable only the interfaces connecting to other PIM routers, and then start each instance with both the -I ID and -N switches:

pimd -N -I r1 -f r1.conf -n

The -I ID is required because otherwise pimctl cannot connect to the correct daemon. This is due to Mininet sharing the same PID and mount namespaces across all instances. With Containernet this is all fixed, since Docker sets up all the namespaces and properly shields the pimd instances from each other.

One could have /etc/{r1,r2,r3,...}.conf and then call pimd -N -I r1; the identity then makes pimd use the correct .conf file, PID file, and domain socket (used by pimctl).
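A rough sketch of what that could look like for this topology (interface names follow the diagram above, Mininet's default naming would rather be r1-eth4; the rp-address line is optional and only needed if you want a static RP):

# /etc/r1.conf
phyint r1eth4 enable           # only the interface facing the other PIM router
#rp-address 10.0.0.7           # optional: static RP instead of the bootstrap mechanism

# /etc/r2.conf
phyint r2eth4 enable

# then, from inside each router's network namespace:
pimd -N -I r1 -n               # picks up /etc/r1.conf plus a matching PID file and IPC socket
pimd -N -I r2 -n

Once both daemons are up, the peering should show up as neighbors in each other's Virtual Interface Table, as in the dumps above.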

So I guess we can close this issue now?

carlasauvanaud commented 5 years ago

OK, so for me it is still not working. I even tried again with several configurations, either using:

Thanks for the trick with the identity configuration, I did not know about it. I guess you can close the issue since it is obviously only related to Mininet, and I do not have time to look into this further.

Thank you again :)

troglobit commented 5 years ago

Yeah, I think it's safe to say we should probably recommend people not to use vanilla Mininet with pimd.

Thanks, closing.