troglobit / pimd

PIM-SM/SSM multicast routing for UNIX and Linux
http://troglobit.com/projects/pimd/
BSD 3-Clause "New" or "Revised" License

lo and/or /32 not allowed #71

Closed rburkholder closed 8 years ago

rburkholder commented 8 years ago

I have pimd 2.3.1, kernel 4.3.5 on debian amd64.

To get routing in the right direction, I have a route like:

root@host001# ip -d route show 224.0.0.0/4
unicast 224.0.0.0/4  proto boot  scope global 
    nexthop via 10.2.4.8  dev enp3s0 weight 1
    nexthop via 10.2.4.10  dev enp4s0 weight 1

I noticed via netstat -g that a multicast source will register against a specific interface. If that interface goes down, the source won't move to another interface.

So, to try a different form of resiliency, independent of the exit interface, I looked at alternate options.

At LCM, they show a route to the loopback, which makes sense. I would like my RPs on the loopback, much like what can be done on Cisco devices. I can add the following route, and it shows up in the routing table:

ip route add 224.0.0.0/4 nexthop via 10.2.0.4

I can do the following to get multicast on the loopback:

ip link set lo multicast on

I even have an additional IP address on the loopback, and the following confirms multicast is set:

root@host001# ip addr show dev lo
1: lo: <LOOPBACK,MULTICAST,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group  default 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet 10.2.0.4/32 brd 10.2.0.4 scope global lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever

I have phyint in pimd.conf:

phyint 10.2.0.4 enable

pimd seems to pick up physical interfaces, but I can't seem to get it to recognize the lo device. The following debug invocation doesn't indicate why lo is skipped, and lo never shows up in the VIF table:

pimd -f --debug=routes,peers,pim_routes,interface 

Also, I notice that I am unable to use 'rp-candidate local-addr' with an interface that has a /32 address, which eliminates lo as a suitable home for an RP. Is that by design? It seems possible in Cisco land, though.
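
For reference, what I am trying to express in pimd.conf is roughly the following, using the loopback address from above; the rp-candidate line is just my reading of the sample pimd.conf syntax:

# Enable PIM on the loopback alias (10.2.0.4/32)
phyint 10.2.0.4 enable
# Advertise the loopback address as candidate RP
rp-candidate 10.2.0.4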

Is this a pimd or a kernel issue? Bug / feature?

Are there other ways to obtain some resiliency on the exit interfaces?

Thanx.

troglobit commented 8 years ago

Hi @rburkholder :smiley:

First, traditional Cisco gear is quite different from Linux, so using the loopback as is done there does not always apply, even though some of the concepts they use can be really handy.

Now, you figured out that you need to set the MULTICAST flag on loopback, so that's a start. But pimd (currently) does not allow /32's ... yeah, I don't think it's needed but there's a lot of legacy code in there I haven't dared to touch yet. The configure script supports a --disable-masklen-check flag that you can try out, but I don't think you really need this anyway, see further down.

Now, when using a multicast routing daemon like pimd, mrouted, or smcroute, you don't need to bother with setting up unicast routes to 224.0.0.0/4. The multicast routers have their own routing tables that work with VIFs, virtual interfaces that map on top of physical interfaces (phyints). With pimd and mrouted, multicast routes are installed in the kernel dynamically; with smcroute they are installed statically. The former two are therefore more suitable for resilient setups.
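
For comparison, a static setup with smcroute would be expressed in smcroute.conf along these lines; the interface names and group here are purely illustrative:

# Join the group on the inbound interface so traffic reaches the router
mgroup from eth0 group 225.1.2.3
# Statically forward that group from eth0 to eth1
mroute from eth0 group 225.1.2.3 to eth1

pimd and mrouted derive the equivalent routes at runtime from PIM and IGMP signaling, which is why they can adapt when the topology changes.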

However, I'm afraid I don't understand fully what it is you want to do or how much experience you have with PIM. But in most cases you just need to fire up pimd with the default pimd.conf as argument (see -h for syntax) and you're off, basically.

For a crash course you need:

  1. A multicast sender connected to your router. If that network is smarter than an unmanaged switched network, you must verify that the network can learn of your routing daemon and direct all multicast towards your router.
  2. The multicast sender must send with a TTL >1 ... multicast is usually treated like broadcast and defaults to TTL 1 to prevent "broadcast" from being routed.
  3. On the receiver side you either need another PIM router, or a receiver capable of responding to IGMP Queries with an IGMP Join for the multicast group it wants to receive. The join is (simplified) what triggers pimd to dynamically install a multicast route into the kernel. A quick way to test both ends is sketched after this list.
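
For a quick end-to-end test of points 2 and 3 you can use something like socat on the sender and receiver hosts. This is only a sketch; the group, port, and interface name are arbitrary:

# Receiver: join 225.1.2.3 on eth0 and print whatever arrives
socat UDP4-RECVFROM:4321,ip-add-membership=225.1.2.3:eth0,fork -

# Sender: note the explicit TTL >1 so the traffic can be routed
echo hello | socat - UDP4-DATAGRAM:225.1.2.3:4321,ip-multicast-ttl=8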

Use the tools `pimd -r`, `cat /proc/net/ip_mr_vif`, and `cat /proc/net/ip_mr_cache` to see what your PIM daemon does and what it installs in the kernel. The latter can be parsed by a regular human using the `ip mroute` tool.

rburkholder commented 8 years ago

Thank you for your response.

I am just getting started with pimd, so I am doing a lot of 'brute force and ignorance' style of testing to get something working. I do have something working now. An outline of the configuration can be found at Linux VXLAN with pimd, Quagga, and openvswitch.

I am going to have to spend some quality time and watch traces to build up my comfort with the group registrations.

For the routing, as outlined in my blog item, I have a default route going to a management interface rather than towards the core. I need to fix that, and then maybe the multicast will work properly without the specific route. The multicast group wouldn't work without that multicast route in place.

I also used an openvswitch vlan to act as a residence for the RP on each of the two core devices. I will need to tune how groups are associated with each RP, and may obtain a bit of load balancing in the process.
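
If I read the sample pimd.conf right, that tuning would be along these lines, with each core device advertising itself as candidate RP for a different group range; the addresses and the split are illustrative:

# Core device 1 claims the lower half of 239.0.0.0/8
rp-candidate 10.2.0.4
group-prefix 239.0.0.0/9

The second core device would then claim 239.128.0.0/9, which should spread groups across the two RPs.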

Thank you for pointing out the compile time flag for the mask checking. I will keep that in mind for future testing.

In the end, I was able to build a test rig to simulate your 'crash course'. I was pleased with the results. Things worked well.

Thanx for pointing out the mapping of 'ip mroute' and 'cat /proc/net/ip_mr_cache'.

troglobit commented 8 years ago

No problem, good luck mate! :smiley: