Closed: carlasauvanaud closed this issue 5 years ago.
Try changing your `rp_filter` setting to 0 on r1's `pimreg` interface and r2's `pimreg` interface:

```shell
echo 0 > /proc/sys/net/ipv4/conf/pimreg/rp_filter
```
That at least seems to be my issue, where the multicast data gets tunneled over the PIM Register tunnel between the two pimd instances.
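For reference, Linux combines the `all` setting with the per-interface one (the stricter of the two wins), so it is worth checking every `rp_filter` knob at once. A quick sketch:

```shell
# List the rp_filter value for every interface the kernel knows about.
# Note: the effective setting is the stricter (max) of "all" and the
# interface's own value, so both matter.
for f in /proc/sys/net/ipv4/conf/*/rp_filter; do
    printf '%-16s %s\n' "$(basename "$(dirname "$f")")" "$(cat "$f")"
done
```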
Thanks for your help @ruckc, but unfortunately it was already set to 0 by default, so no change.
What released version of pimd are you running, or are you using the latest (unreleased) from the master?
Since multicast routing works with SMCRoute I can skip the usual TTL question. What's more important then is if the routers can see each other, do they peer? You can query this in different ways, depending on the version of pimd.
Thanks @troglobit!
My pimd version is 3.0-beta1. (I am using Ubuntu 18.04 and had some interface-related issues with version 2.3.1 installed by apt: interfaces that are not in the same subnet were reported by pimd as being in the same subnet and were therefore not considered.)
Indeed, I am testing my deployment (with the TTL set correctly) using the command:

```shell
mcjoin 239.0.0.28 -s -d -t 10
```
As for the routers, they do communicate with each other through their rX-eth4 interfaces (IPs 10.0.0.7 and 10.0.0.8), as I can see **on router 1**:
```
Virtual Interface Table ======================================================
Vif  Local Address    Subnet              Thresh  Flags      Neighbors
---  ---------------  ------------------  ------  ---------  -----------------
  0  10.0.0.7         10                       1  PIM        10.1.3.1
                                                             10.1.2.1
                                                             10.1.1.1
                                                             10.0.0.8
  1  10.1.1.1         10.1.1/24                1  DR NO-NBR
  2  10.1.2.1         10.1.2/24                1  DR NO-NBR
  3  10.1.3.1         10.1.3/24                1  DR NO-NBR
  4  10.0.0.7         register_vif0            1

Multicast Routing Table ======================================================
--------------------------------- (*,*,G) ------------------------------------
Number of Groups: 0
Number of Cache MIRRORs: 0

Rendezvous-Point Set =========================================================
RP address       Incoming  Group Prefix  Priority  Holdtime
169.254.0.1            32                             65535
                           232/8                1     65535
10.0.0.7                4                             65535
                           224/4                1     65535

Current BSR address: 10.2.6.1

08:02:31.932 RECV 46 bytes PIM v2 Hello from 10.1.1.1 to 224.0.0.13
08:02:31.932 PIM HELLO holdtime from 10.1.1.1 is 105
08:02:31.932 PIM DR PRIORITY from 10.1.1.1 is 1
08:02:31.932 PIM GenID from 10.1.1.1 is 183622635
08:02:31.932 RECV 46 bytes PIM v2 Hello from 10.1.2.1 to 224.0.0.13
08:02:31.933 PIM HELLO holdtime from 10.1.2.1 is 105
08:02:31.933 PIM DR PRIORITY from 10.1.2.1 is 1
08:02:31.933 PIM GenID from 10.1.2.1 is 1552789732
08:02:31.933 RECV 46 bytes PIM v2 Hello from 10.1.3.1 to 224.0.0.13
08:02:31.933 PIM HELLO holdtime from 10.1.3.1 is 105
08:02:31.933 PIM DR PRIORITY from 10.1.3.1 is 1
08:02:31.933 PIM GenID from 10.1.3.1 is 319243819
```
and **on router 2**:
```
Virtual Interface Table ======================================================
Vif  Local Address    Subnet              Thresh  Flags      Neighbors
---  ---------------  ------------------  ------  ---------  -----------------
  0  10.0.0.8         10                       1  PIM        10.2.6.1
                                                             10.2.5.1
                                                             10.2.4.1
                                                             10.0.0.7
  1  10.2.4.1         10.2.4/24                1  DR NO-NBR
  2  10.2.5.1         10.2.5/24                1  DR NO-NBR
  3  10.2.6.1         10.2.6/24                1  DR NO-NBR
  4  10.0.0.8         register_vif0            1

Multicast Routing Table ======================================================
--------------------------------- (*,*,G) ------------------------------------
Number of Groups: 0
Number of Cache MIRRORs: 0

Rendezvous-Point Set =========================================================
RP address       Incoming  Group Prefix  Priority  Holdtime
169.254.0.1            32                             65535
                           232/8                1     65535
10.0.0.7                0                             65535
                           224/4                1     65535

Current BSR address: 10.2.6.1
```
However, running `mcjoin 239.0.0.28` on host1, I can see the following on router 1 (which is the router directly connected to host1):
```
08:27:52.064 Received IGMP v3 Membership Report from 10.1.1.100 to 224.0.0.22
08:27:52.064 accept_membership_report(): IGMP v3 report, 16 bytes, from 10.1.1.100 to 224.0.0.22 with 1 group records.
08:27:52.064 accept_group_report(): igmp_src 10.1.1.100 ssm_src 0.0.0.0 group 239.0.0.28 report_type 34
08:27:52.064 Set delete timer for group: 239.0.0.28
08:27:52.064 SM group order from 10.1.1.100 (*,239.0.0.28)
08:27:52.064 find_route: Not PMBR, return NULL
```
OK, I've reproduced it in Mininet now (first time trying it out). I'll have to look into this in more detail, possibly later tomorrow (CET). I want to verify with my setup in the CORE Network Emulator, which is what I normally use.
This sounds great :) In any case, I found a workaround by using Containernet and deploying my routers as Docker containers.
This wasn't the most trivial thing to debug, I understand why you got stuck. It turns out Mininet doesn't provide the most optimal virtualization out of the box.
To get it to work I had to set up a separate `.conf` file for each router, enable only the interfaces connecting to other PIM routers, and then start with both the `-I ID` and `-N` switches:

```shell
pimd -N -I r1 -f r1.conf -n
```

The `-I ID` is required since otherwise `pimctl` will not be able to connect to the correct daemon. This is because Mininet shares the same PID and mount namespaces across all instances. With Containernet this is all fixed, since Docker sets up all the namespaces, properly shielding the `pimd` instances from each other.

One could have `/etc/{r1,r2,r3,...}.conf` and then call `pimd -N -I r1`, and the identity takes care of using the correct `.conf` file, PID file, and domain socket (used by `pimctl`).
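Concretely, each per-router file can stay tiny. A sketch (the `r1-eth4` interface name is taken from this thread's topology, so adjust it to yours):

```
# r1.conf -- with pimd started as `pimd -N ...` all interfaces begin
# disabled, so only the interface facing the other PIM router needs
# to be enabled here.
phyint r1-eth4 enable
```

Each router then gets its own file (r1.conf, r2.conf, ...) and its own `-I` identity, so the instances keep separate PID files and control sockets.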
So I guess we can close this issue now?
OK, so for me it is still not working. I tried again with several configurations, using either:

- `rp-address`,
- the `-N` option with `phyint rX-eth4 enable` in the conf file, or
- `phyint rX-eth4 enable` while disabling all other interfaces in the conf file.

Thanks for the trick with the identity configuration, I did not know that one. I guess you can close the issue, since it is obviously only related to Mininet and I do not have time to look further into this.
Thank you again :)
Yeah, I think it's safe to say we should probably recommend people not to use vanilla Mininet with pimd.
Thanks, closing.
Hi,
I am trying to deploy, with Mininet on my laptop, an environment with 6 hosts, each one in its own LAN, and 2 routers running pimd. However, Wireshark shows that the pimd instances on my two routers never seem to forward the multicast messages coming from any host. Also, the RP is correctly reported, but the multicast routing table remains empty.
Here is my topology:
I am running Linux kernel 4.15.18.
I set up the correct unicast routes so that my hosts can ping each other, and I am testing the multicast connectivity with `mcjoin`. I tested several configuration files for pimd: with either rX-eth4 activated on each router or all interfaces activated, and with or without a fixed rp-address (priority of 1 or 200). All cases seem to lead to the same problem. Now I am wondering: is it even possible to properly run two pimd instances on the same machine?

Kind regards
EDIT: the static multicast forwarding works well using smcroute btw.
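For completeness, the "correct unicast routes" mentioned above amount to one summary route per node in this topology. A sketch using throwaway network namespaces so it can be tried outside Mininet (requires root; the names `h1`, `r1` and the veth names are hypothetical, the addressing is taken from the dumps earlier in the thread):

```shell
# Recreate the host1 <-> r1 link in namespaces and add the single route
# host1 needs: everything behind both routers lives under 10.0.0.0/8.
ip netns add h1 && ip netns add r1
ip link add veth-h1 type veth peer name veth-r1
ip link set veth-h1 netns h1
ip link set veth-r1 netns r1
ip -n h1 addr add 10.1.1.100/24 dev veth-h1 && ip -n h1 link set veth-h1 up
ip -n r1 addr add 10.1.1.1/24 dev veth-r1 && ip -n r1 link set veth-r1 up
ip -n h1 route add 10.0.0.0/8 via 10.1.1.1
route=$(ip -n h1 route get 10.2.4.100)   # a host behind r2 in this topology
echo "$route"                            # should resolve via 10.1.1.1
ip netns del h1 && ip netns del r1       # clean up
```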