troglobit / pimd

PIM-SM/SSM multicast routing for UNIX and Linux
http://troglobit.com/projects/pimd/
BSD 3-Clause "New" or "Revised" License

Connect to host on multicast group with PIM protocol not igmp. #113

Closed bbbpiotr closed 6 years ago

bbbpiotr commented 6 years ago

Hello,

I'm using pimd version 2.3.2. Is it possible to connect (send a PIM Join) to a particular host in a group? When using the mcjoin command, IGMP frames are sent, not PIM Join messages.

Example:

mcjoin 233.4.5.6 -i eth0

tcpdump -vvvvXX -i eth0 -n
13:25:28.009593 IP (tos 0xc0, ttl 1, id 0, offset 0, flags [DF], proto IGMP (2), length 40, options (RA))
    192.168.0.34 > 224.0.0.22: igmp v3 report, 1 group record(s) [gaddr 233.4.5.6 to_ex { }]

Peter.

jp-t commented 6 years ago

PIM is a protocol between routers, not between hosts. Multicast senders/receivers never issue PIM messages. Instead, a receiver must use IGMP to advertise itself to the local router or to the LAN switches. This is the way it works.

troglobit commented 6 years ago

Like @jp-t said.

There do, however, exist test suites for protocol validation, like the hugely expensive Ixia ANVL.

bbbpiotr commented 6 years ago

Thank you for the reply, Jean-Pierre Tosoni.

One more piece of information about my configuration; maybe it's important. I'm using the same machine for sending IGMP (mcjoin) and running pimd. Additionally, there is a GRE tunnel over IPsec. The hosts in the group I want to receive multicast traffic from are "on the other side" of the tunnel:

------------------ me
GRE over IPSec

router | ----------------------- multicast sender # 1 | ----------------------- multicast sender # 2

"PIM is a protocol between routers, not between hosts." How to make pimd to send PIM join? When pimd sends PIM joins?

Correct me if I'm wrong, please. The user runs:

mcjoin 233.4.5.6 -i eth0

This means advertising this machine (me) as a member of the 233.4.5.6 multicast group? So this frame makes the router pass multicast traffic for that group to me? So at what point do routers send PIM Join messages?

jp-t commented 6 years ago

Sorry, your drawing is not clear. Could you use code formatting like this:

|---demo---|
|          |

In any case, in your drawing I see only one router, so, still no need for PIM joins? Are there routers farther down the GRE tunnel?

How do I make pimd send a PIM Join? When does pimd send PIM Joins?

A PIM Join is sent when this kind of architecture is used:

   |----LAN1----|   |---WAN---|   |----LAN2----|
rx host        router1       router2         tx host

When the rx host sends an IGMP JOIN, router1 sends a PIM JOIN to router2, and router2 then forwards multicast to router1 (well, it's a bit more complex because of rendezvous point selection, but this is the driving idea).

So at what point do routers send PIM Join messages?

Router1 would pass MC traffic to the rx host... if it receives MC traffic. The point is that router1 uses the JOIN to tell router2, farther along the path, that router1 needs MC traffic. If no receiver needs MC traffic, router2 will not forward it.
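If you want to see this on the wire, captures roughly like the following should show it (the interface names eth0/eth1 are only placeholders for router1's LAN1 and WAN ports; PIM is IP protocol 103 and Joins are sent to 224.0.0.13):

# on router1's LAN1 port: the receiver's IGMP membership report
tcpdump -i eth0 -n igmp

# on router1's WAN port: the resulting PIMv2 Join/Prune toward router2
tcpdump -i eth1 -n 'ip proto 103'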

bbbpiotr commented 6 years ago

Thank you for the explanation (the driving idea) of how this works, it clarifies a lot. :)

Sorry for the unclear drawing. Below is my configuration:

 |---------WAN(GRE over ipsec)---------| |------------LAN--------------|----------------|
me                                    router                       tx host1         tx host2

In any case, in your drawing I see only one router, so, still no need for PIM joins?

I think I need to send a PIM Join to the router to join the multicast group. It does not depend on me but on the existing infrastructure on the other side of the tunnel.

Are there routers farther down the GRE tunnel?

I think there are further routers on the other side of the tunnel. Does it make a difference if there is more than one router on the other side of the tunnel?

I am wondering how pimd works internally. Does it receive an IGMP frame and then send a PIM Join to the proper neighbor, or does it receive multicast groups from the kernel when it starts? I'm just wondering how to force pimd to send a PIMv2 Join to the other side of the tunnel.

jp-t commented 6 years ago

Does it make a difference if there is more than one router on the other side of the tunnel?

I believe the important thing is the position of the Rendezvous Point (RP) (the host to which the PIM Joins are directed).

By the way, it's important that all routers along the path are PIM-enabled.

I am wondering how pimd works internally.

Long story short: read the source code together with the RFCs :-)

Basically, when receiving IGMP, pimd sends a JOIN to the rendezvous point host. This means that the RP must be preconfigured statically or negotiated beforehand, and that in turn means that the BSR (which distributes RP information) must have been negotiated earlier still.

You must be aware that pimd internally has a timer loop that wakes up every 5 seconds, so initial negotiations are slow. Moreover, if I remember correctly, the version you are using has an extra 15 s delay at startup; this delay was removed in more recent versions.

Does it receive an IGMP frame and then send a PIM Join to the proper neighbor

It sends the Join toward the RP.

or does it receive multicast groups from the kernel when it starts?

No, it's pimd that configures the kernel, not the other way around.
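If you want to verify that, you can look at what pimd has programmed into the Linux kernel while it is running (a sketch; tool availability may differ on your system):

# multicast forwarding entries and virtual interfaces as seen by the kernel
ip mroute show
cat /proc/net/ip_mr_vif
cat /proc/net/ip_mr_cache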

I'm just wondering how to force pimd to send a PIMv2 Join to the other side of the tunnel.

1) Either you configured a static RP for your group (see the sketch below), or a bootstrap message from the BSR must inform your pimd of the location of the RP.
2) If your pimd router is the RP, no Joins are necessary.
3) After that, an IGMP report from the rx host must be sent to its nearest pimd router.
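For case 1), a minimal static RP entry in pimd.conf would look roughly like this (the addresses are examples only, not taken from your network):

# rp-address <RP-unicast-address> [<group>[/<masklen>]]
rp-address 192.0.2.1 233.4.5.6/32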

bbbpiotr commented 6 years ago

Again, thank you for the response.

Basically, when receiving IGMP, pimd sends a JOIN to the rendezvous point host. This means that the RP must be preconfigured statically or negotiated beforehand, and that in turn means that the BSR (which distributes RP information) must have been negotiated earlier still.

I have the Rendezvous Point configured statically, but no packets are sent there (to the RP IP).

Below is a log from pimd. pimd is on the same machine as mcjoin.

> mcjoin -d -j 233.91.122.18 -i tunnel1
> pimd -c /etc/pimd.conf -digmp_proto,pim_jp,kernel,pim_register
16:14:00.504 Send IGMP Membership Query     from 91.236.233.6 to 224.0.0.1
16:14:00.504 SENT    36 bytes IGMP Membership Query     from 91.236.233.6    to 224.0.0.1
16:14:00.504 query_groups(): Sending IGMP v3 query on wpg-tun1
16:14:00.504 Send IGMP Membership Query     from 192.168.0.34 to 224.0.0.1
16:14:00.504 SENT    36 bytes IGMP Membership Query     from 192.168.0.34    to 224.0.0.1
16:14:00.504 Received IGMP Membership Query     from 192.168.0.34 to 224.0.0.1
16:14:05.509 For src 169.254.0.1, iif is 0, next hop router is 169.254.0.1: NOT A PIM ROUTER
16:14:06.657 Received IGMP v3 Membership Report from 192.168.0.34 to 224.0.0.22
16:14:06.657 accept_membership_report(): IGMP v3 report, 16 bytes, from 192.168.0.34 to 224.0.0.22 with 1 group records.
16:14:06.657 accept_group_report(): igmp_src 192.168.0.34 ssm_src 0.0.0.0 group 233.91.122.18 report_type 34
16:14:06.657 Set delete timer for group: 233.91.122.18
16:14:06.657 SM group order from  192.168.0.34 (*,233.91.122.18)
16:14:06.657 find_route: Not PMBR, return NULL
16:14:07.081 Received IGMP v3 Membership Report from 192.168.0.34 to 224.0.0.22
16:14:07.081 accept_membership_report(): IGMP v3 report, 16 bytes, from 192.168.0.34 to 224.0.0.22 with 1 group records.
16:14:07.081 accept_group_report(): igmp_src 192.168.0.34 ssm_src 0.0.0.0 group 233.91.122.18 report_type 34
16:14:07.081 Set delete timer for group: 233.91.122.18
16:14:07.081 find_route: Not PMBR, return NULL
16:14:09.253 Received IGMP v3 Membership Report from 192.168.0.34 to 224.0.0.22
16:14:09.253 accept_membership_report(): IGMP v3 report, 40 bytes, from 192.168.0.34 to 224.0.0.22 with 4 group records.
16:14:09.253 accept_group_report(): igmp_src 192.168.0.34 ssm_src 0.0.0.0 group 233.91.122.18 report_type 34
16:14:09.253 Set delete timer for group: 233.91.122.18
16:14:09.253 find_route: Not PMBR, return NULL
16:14:09.253 accept_group_report(): igmp_src 192.168.0.34 ssm_src 0.0.0.0 group 224.0.0.22 report_type 34
16:14:09.253 Set delete timer for group: 224.0.0.22
16:14:09.253 Not creating routing entry for LAN scoped group 224.0.0.22
16:14:09.253 accept_group_report(): igmp_src 192.168.0.34 ssm_src 0.0.0.0 group 224.0.0.2 report_type 34
16:14:09.253 Set delete timer for group: 224.0.0.2
16:14:09.253 Not creating routing entry for LAN scoped group 224.0.0.2
16:14:09.253 accept_group_report(): igmp_src 192.168.0.34 ssm_src 0.0.0.0 group 224.0.0.13 report_type 34
16:14:09.253 Set delete timer for group: 224.0.0.13
16:14:09.253 Not creating routing entry for LAN scoped group 224.0.0.13
16:14:11.025 Received IGMP v3 Membership Report from 192.168.0.34 to 224.0.0.22
16:14:11.025 accept_membership_report(): IGMP v3 report, 16 bytes, from 192.168.0.34 to 224.0.0.22 with 1 group records.
16:14:11.301 Received IGMP v3 Membership Report from 192.168.0.34 to 224.0.0.22
16:14:11.301 accept_membership_report(): IGMP v3 report, 16 bytes, from 192.168.0.34 to 224.0.0.22 with 1 group records.
^C16:14:13.283 pimd version 2.3.2 exiting.
> tcpdump -vvvvXX -i tunnel1 -n '(ip proto 103) or ((ip proto 2) and (ip[20] == 0x14) or igmp)'
16:14:00.504269 IP (tos 0xc0, ttl 1, id 5806, offset 0, flags [none], proto IGMP (2), length 36, options (RA))
    192.168.0.34 > 224.0.0.1: igmp query v3

16:14:06.657596 IP (tos 0xc0, ttl 1, id 0, offset 0, flags [DF], proto IGMP (2), length 40, options (RA))
    192.168.0.34 > 224.0.0.22: igmp v3 report, 1 group record(s) [gaddr 233.91.122.18 to_ex { }]

16:14:07.081583 IP (tos 0xc0, ttl 1, id 0, offset 0, flags [DF], proto IGMP (2), length 40, options (RA))
    192.168.0.34 > 224.0.0.22: igmp v3 report, 1 group record(s) [gaddr 233.91.122.18 to_ex { }]

16:14:09.253588 IP (tos 0xc0, ttl 1, id 0, offset 0, flags [DF], proto IGMP (2), length 64, options (RA))
    192.168.0.34 > 224.0.0.22: igmp v3 report, 4 group record(s) [gaddr 233.91.122.18 is_ex { }] [gaddr 224.0.0.22 is_ex { }] [gaddr 224.0.0.2 is_ex { }] [gaddr 224.0.0.13 is_ex { }]

16:14:11.025582 IP (tos 0xc0, ttl 1, id 0, offset 0, flags [DF], proto IGMP (2), length 40, options (RA))
    192.168.0.34 > 224.0.0.22: igmp v3 report, 1 group record(s) [gaddr 233.91.122.18 to_in { }]

16:14:11.301574 IP (tos 0xc0, ttl 1, id 0, offset 0, flags [DF], proto IGMP (2), length 40, options (RA))
    192.168.0.34 > 224.0.0.22: igmp v3 report, 1 group record(s) [gaddr 233.91.122.18 to_in { }]

16:14:13.283741 IP (tos 0x0, ttl 1, id 85, offset 0, flags [none], proto PIM (103), length 46)
    192.168.0.34 > 224.0.0.13: PIMv2, length 26
    Hello, cksum 0x345c (correct)

16:14:13.305576 IP (tos 0xc0, ttl 1, id 0, offset 0, flags [DF], proto IGMP (2), length 56, options (RA))
    192.168.0.34 > 224.0.0.22: igmp v3 report, 3 group record(s) [gaddr 224.0.0.2 to_in { }] [gaddr 224.0.0.22 to_in { }] [gaddr 224.0.0.13 to_in { }]

16:14:13.701606 IP (tos 0xc0, ttl 1, id 0, offset 0, flags [DF], proto IGMP (2), length 56, options (RA))
    192.168.0.34 > 224.0.0.22: igmp v3 report, 3 group record(s) [gaddr 224.0.0.2 to_in { }] [gaddr 224.0.0.22 to_in { }] [gaddr 224.0.0.13 to_in { }]

As you can see above, there is no PIM Join; only IGMP messages are sent to 224.0.0.22. I think the problem is that the receiver is on the same host where pimd is running. How can I set PMBR in pimd.conf? Maybe this is the problem:

16:14:06.657 find_route: Not PMBR, return NULL

jp-t commented 6 years ago

I think the problem is that the receiver is on the same host where pimd is running

OK, I overlooked that point. Since the receiver is on the router itself, this computer does not need to forward anything from one port to another => no multicast routing (only receiving) => pimd is NOT used. So, no PIM JOIN.

Your computer must be seen as a simple host, so it will send a simple IGMP join (not PIM) to the next router on the far end of the tunnel -- and that router will issue a PIM JOIN farther if needed.

Indeed, on your tcpdump you can see the IGMP JOIN GROUP 233.91.122.18 going out encapsulated in the tunnel. All is going as it should.

Hints:

bbbpiotr commented 6 years ago

"jp-t" -> Thank you for reply it clarify a lot.

I added a tunnel with a client to the configuration; it looks like this:

               192.168.0.34                        192.168.0.33
| --vpn(internet)--|-- WAN(GRE over ipsec)(internet)--| |------LAN------|------LAN------|
client           pimd                                router         mcast-tx1        mcast-tx2
10.10.0.10       10.10.0.1

pimd does not see the IGMP join from the client (10.10.0.10).

Virtual Interface Table ======================================================
Vif  Local Address    Subnet              Thresh  Flags      Neighbors
---  ---------------  ------------------  ------  ---------  -----------------
  0  public-ip        public-ip                1  DISABLED
  1  192.168.0.34     192.168.0.32/30          1  PIM        192.168.0.33   
  2  10.10.0.1        10.10.0.1/32             1  DR NO-NBR
  3  192.168.0.34     register_vif0            1 

 Vif  SSM Group        Sources             

Multicast Routing Table ======================================================
--------------------------------- (*,*,G) ------------------------------------
Number of Groups: 0
Number of Cache MIRRORs: 0
------------------------------------------------------------------------------

pimd.conf:

phyint eth0 disable
# multicast resource tunnel
phyint tun1 dr-priority 10
# client tunnel
phyint 10.10.0.1 enable

rp-candidate 233.91.122.18
rp_address 233.91.122.18

I checked your hints; it all looks correct.

jp-t commented 6 years ago

pimd does not see the IGMP join from the client (10.10.0.10).

1) What do you mean more precisely (so I can know the point where it breaks):

  • the IGMP frame does not appear on a tcpdump running on the 10.10.0.1 interface of the "pimd" computer?
  • or, the log messages from the pimd daemon do not indicate it received IGMP from "client"?
  • or, the pimd daemon does not react to IGMP from client? Please check all three cases.

2) can you at least ping 10.10.0.1 from "client"?

   ping 10.10.0.1

3) can you try the following on the "client":

   (assuming Linux)   ping -t 1 10.10.0.1
   (assuming Windows) ping -i 1 10.10.0.1

   Does it work as well?

troglobit commented 6 years ago

Good questions, I'm not really following either. PIM is Protocol Independent Multicast, which means the unicast routing underneath must already be set up, unlike DVMRP (mrouted). So OSPF, RIP, BGP, or static routes must exist to enable e.g. ping between the client and the multicast sender.
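A quick sanity check is therefore to verify unicast reachability toward the sender side before expecting any PIM signalling, for example against the PIM neighbor shown in the Vif table above (192.168.0.33 here is just taken from that table):

ip route get 192.168.0.33
ping -c 3 192.168.0.33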

bbbpiotr commented 6 years ago
  1. What do you mean more precisely (so I can know the point where it breaks):
    • the IGMP frame does not appear on a tcpdump running on the 10.10.0.1 interface of the "pimd" computer?

The frame appears on the 10.10.0.1 computer.

Client:

root@client:~/scripts# ifconfig 
 . . .
tun0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1500
        inet 10.10.0.10  netmask 255.255.255.255  destination 10.10.0.1
        inet6 fe80::ba50:a322:8088:668b  prefixlen 64  scopeid 0x20<link>
. . .
root@client:~/scripts# mcjoin 233.91.1.1 -i tun0
joined group 233.91.1.1 on tun0 ...
^C
Received total: 0 packets
root@client:~/scripts# _

Server:

root@serv:~# ifconfig 
 . . .
tun0      Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  
         inet addr:10.10.0.1  P-t-P:10.10.0.10  Mask:255.255.255.255
 . . .

root@serv:~# tcpdump -vvvv -i tun0 -n
tcpdump: listening on tun0, link-type RAW (Raw IP), capture size 262144 bytes
18:02:38.974523 IP (tos 0xc0, ttl 1, id 0, offset 0, flags [DF], proto IGMP (2), length 40, options (RA))
    10.10.0.10 > 224.0.0.22: igmp v3 report, 1 group record(s) [gaddr 233.91.1.1 to_ex { }]
18:02:39.166443 IP (tos 0xc0, ttl 1, id 0, offset 0, flags [DF], proto IGMP (2), length 40, options (RA))
    10.10.0.10 > 224.0.0.22: igmp v3 report, 1 group record(s) [gaddr 233.91.1.1 to_ex { }]
18:02:44.792110 IP (tos 0xc0, ttl 1, id 0, offset 0, flags [DF], proto IGMP (2), length 40, options (RA))
    10.10.0.10 > 224.0.0.22: igmp v3 report, 1 group record(s) [gaddr 233.91.1.1 to_in { }]
18:02:45.752376 IP (tos 0xc0, ttl 1, id 0, offset 0, flags [DF], proto IGMP (2), length 40, options (RA))
    10.10.0.10 > 224.0.0.22: igmp v3 report, 1 group record(s) [gaddr 233.91.1.1 to_in { }]
^C
4 packets captured
4 packets received by filter
0 packets dropped by kernel
root@serv:~# _
  • or, the log messages from the pimd daemon do not indicate it received IGMP from "client"?

Correct. There is no info in the logs that the frame was received.

  • or, the pimd daemon does not react to IGMP from client? Please check all three cases.
  1. can you at least ping 10.10.0.1 from "client" ? ping 10.10.0.1

Yes I can.

  1. can you try the following on the "client" (assuming Linux) ping -t 1 10.10.0.1

It works:

root@client:~/scripts# ping -t1 10.10.0.1
PING 10.10.0.1 (10.10.0.1) 56(84) bytes of data.
64 bytes from 10.10.0.1: icmp_seq=1 ttl=64 time=64.2 ms
64 bytes from 10.10.0.1: icmp_seq=2 ttl=64 time=64.6 ms
64 bytes from 10.10.0.1: icmp_seq=3 ttl=64 time=64.4 ms
64 bytes from 10.10.0.1: icmp_seq=4 ttl=64 time=64.0 ms
64 bytes from 10.10.0.1: icmp_seq=5 ttl=64 time=64.7 ms
^C
--- 10.10.0.1 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 64.079/64.446/64.743/0.406 ms
root@client:~/scripts# _

(assuming Windows) ping -i 1 10.10.0.1 Does it work as well?

Yes, it does work. I triple-checked it . . . :)

jp-t commented 6 years ago

Your config file is incorrect in several ways.

1) The multicast group "233.91.122.18" in the config file does not match the group "233.91.1.1" used by the client (but maybe you have another RP for this group? Anyway, the "pimd" computer will not serve this group). Ignore this, since the default mask is 16 bits.
2) The "rp-address" keyword is spelled with a hyphen, not an underscore.
3) "rp-address" expects a unicast address as its mandatory first argument and a group as the optional second argument (warning: it's the opposite for rp-candidate).
4) The groups should specify a masklen (not mandatory, but otherwise one must guess the default masklen value).
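Putting 2)-4) together, your config would look roughly like this (the RP address 192.168.0.33 is only a guess taken from the neighbor column of your Vif table, put your network's real RP there; I dropped rp-candidate since it is only needed if this router should announce itself as a candidate RP to the BSR):

phyint eth0 disable
# multicast resource tunnel
phyint tun1 dr-priority 10
# client tunnel
phyint 10.10.0.1 enable

# rp-address <RP-unicast-address> [<group>[/<masklen>]]
# 233.91.0.0/16 covers both 233.91.122.18 and 233.91.1.1
rp-address 192.168.0.33 233.91.0.0/16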

bbbpiotr commented 6 years ago

Thank you for the response and the tips . . . again.

I was unable to achieve what I needed with the original source, so I modified the source to meet my requirements. It works well. Thank you for the support again. :)

troglobit commented 6 years ago

@bbbpiotr Maybe we can close this issue with pimd now?

bbbpiotr commented 6 years ago

@troglobit Yes, sure. Thank you.