alsmith / multicast-relay

Relay multicast and broadcast packets between interfaces.
GNU General Public License v3.0
304 stars · 47 forks

Multiple Networks? #59

Open VeniceNerd opened 2 years ago

VeniceNerd commented 2 years ago

Is it possible to connect multiple sets of VLANs together? I will have my network (1), my tenant network (2), and a shared IoT network (3).

I would want networks 1 and 3 to share mDNS, as well as networks 2 and 3. However, I don’t want networks 1 and 2 to share mDNS with each other.

Is that possible?

alsmith commented 2 years ago

Hi @VeniceNerd !

Certainly you can bridge over multiple networks, just specify all the interface names in --interfaces.

@commiepinko had the issue recently where he wanted to filter out responses so that certain networks would or would not receive mDNS broadcasts. Have a look at https://github.com/alsmith/multicast-relay/issues/57 and see if it makes sense to you.

Kind regards, Al.

commiepinko commented 2 years ago

I wanted to permit mDNS only between br0 and nine VLANs, and to block all traffic between the VLANs. It took me a while to get the syntax right. I'd be happy to answer any questions.

The container:

podman run -it -d \
--restart=on-failure:10 \
--name="multicast-relay" \
--network=host \
--mount type=bind,src=/mnt/data/on_boot.d_support,dst=/multicast-relay-config \
-e OPTS="--ifFilter=/multicast-relay-config/ifFilter.json" \
-e INTERFACES="br0 br101 br102 br103 br104 br105 br106 br107 br108 br109" \
docker.io/scyto/multicast-relay

ifFilter.json:

{
"192.168.0.0/24": ["br0", "br101", "br102", "br103", "br104", "br105", "br106", "br107", "br108", "br109"],
"192.168.1.0/24": ["br0"],
"192.168.2.0/24": ["br0"],
"192.168.3.0/24": ["br0"],
"192.168.4.0/24": ["br0"],
"192.168.5.0/24": ["br0"],
"192.168.6.0/24": ["br0"],
"192.168.7.0/24": ["br0"],
"192.168.8.0/24": ["br0"],
"192.168.9.0/24": ["br0"]
}
VeniceNerd commented 2 years ago

Thanks for the quick response! I'm not 100% sure I explained my use case clearly enough, though. I'm not trying to bridge multiple networks together (so they can all share mDNS among each other); I'm trying to have several separate mDNS groups. I think it's easier explained with a graphic:

[network diagram]

So basically I would like for VLAN 10, 20, and 30 to each exchange mDNS information with VLAN 50, but I don't want VLANs 10, 20, and 30 to exchange mDNS information with each other.

@commiepinko is that basically what you have figured out how to do? If so I would definitely love to pick your brain a bit more about this, because mDNS on the UniFi has been driving me INSANE! ;)

--- Use Case Explanation --- In case anyone cares what I'm trying to do here: I have a house with a guest house and an office in the back. They each have their own Apple TV, Apple ID, and HomeKit setup, so each one of them will get their own VLAN. Then I am planning to place all of the IoT devices, as well as Homebridge and Home Assistant, on the IoT network. I figure it's easier to have one shared IoT network instead of creating three separate IoT networks (especially since I run three instances of Homebridge all on the same Raspberry Pi).

That's why I need each of the individual home networks to be able to exchange mDNS info with the IoT network. However, I want to prevent all of the HomePods and Apple TVs from one house from showing up in the other houses. That's why I don't want those three networks to talk to each other. ;)

commiepinko commented 2 years ago

I'm assuming that the host that will run the relay is on your LAN. I'm also assuming the relay won't care if its host doesn't send or receive mDNS broadcasts. If I'm wrong, you should still be able to get the idea.

Before you start, get the names of your network interfaces if you haven't already.
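On a Linux host like the UDM, something along these lines will list them (`ip` is the usual tool; `ifconfig` works on older systems):

```shell
# List every interface with its state and addresses; on a UDM the
# per-VLAN bridges show up as br0, br10, br20, and so on.
if command -v ip >/dev/null 2>&1; then
    ip -br addr show
else
    ifconfig -a
fi
```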

Assuming your network looks like this, which is unlikely…

br0 (192.168.0.0/24) - LAN
br10 (192.168.10.0/24) - VLAN 10
br20 (192.168.20.0/24) – etc.
br30 (192.168.30.0/24)
br50 (192.168.50.0/24)

…the following should do what you want.

container

I'm running the relay on a UniFi UDMP router (which requires other configuration in addition to this). Your container will of course be suited to your host. However you configure it, make sure that the mount and the location of ifFilter.json match up. The relay has to be able to find it.

podman run -it -d \
--restart=on-failure:10 \
--name="multicast-relay" \
--network=host \
--mount type=bind,src=/mnt/data/on_boot.d_support,dst=/multicast-relay-config \
-e OPTS="--ifFilter=/multicast-relay-config/ifFilter.json" \
-e INTERFACES="br0 br10 br20 br30 br50" \
docker.io/scyto/multicast-relay

ifFilter.json

{
"192.168.10.0/24": ["br20", "br30", "br50"],
"192.168.20.0/24": ["br10", "br30", "br50"],
"192.168.30.0/24": ["br10", "br20", "br50"],
"192.168.50.0/24": ["br10", "br20", "br30"]
}

Each line is a rule that controls which subnets are allowed to communicate with which network interfaces. Of course this controls only mDNS. I limit other traffic with a firewall running on the same host as the relay. (I can't use the firewall for everything because my router only supports an mDNS reflector, which projectile vomits everything everywhere.)

A final thought… It sounds like you have an Internet of Things going on. At home, I've chosen to handle IoT security by putting my trivial devices on a different SSID from anything sensitive and using a stateful firewall to limit the traffic. It works well and can be managed from the firewall's interface without all this fussing.

Hope that helps.

VeniceNerd commented 2 years ago

Thank you so much for all your help! Let me go through your reply step by step to make sure I got it right.

I'm assuming that the host that will run the relay is on your LAN. I'm also assuming the relay won't care if its host doesn't send or receive mDNS broadcasts.

I think I’m using the exact same equipment as you. I was planning to run this on my Dream Machine Pro. That’s what you do, right?

I'm running the relay on a UniFi UDMP router (which requires other configuration in addition to this). Your container will of course be suited to your host.

Do you know where I could find an easy-to-follow, step-by-step tutorial to get this up and running on my DMP? I have never messed with installing anything on it, so I’m a bit nervous…

podman run -it -d \
--restart=on-failure:10 \
--name="multicast-relay" \
--network=host \
--mount type=bind,src=/mnt/data/on_boot.d_support,dst=/multicast-relay-config \
-e OPTS="--ifFilter=/multicast-relay-config/ifFilter.json" \
-e INTERFACES="br0 br10 br20 br30 br50" \
docker.io/scyto/multicast-relay

Do you not have logging disabled? I read somewhere that the logs will fill up your storage space on the DMP within hours.

ifFilter.json

{
"192.168.10.0/24": ["br20", "br30", "br50"],
"192.168.20.0/24": ["br10", "br30", "br50"],
"192.168.30.0/24": ["br10", "br20", "br50"],
"192.168.50.0/24": ["br10", "br20", "br30"]
}

So this is where you lose me lol. I basically only want VLAN 10 to talk to VLAN 50, VLAN 20 to VLAN 50, and VLAN 30 to VLAN 50, but not to each other. Maybe I’m not understanding exactly how the ifFilter.json works, but to me it reads as if VLAN 10 would be allowed to talk to VLANs 20, 30, and 50, for example?

A final thought… It sounds like you have an Internet of Things going on.

Yeah exactly. VLAN 50 will have all my wifi plugs, home automation devices, homebridge, and home assistant on it. Then I’ll bridge the mdns to the house, office, and guest house so they can talk to the respective Apple TV’s.

Also, once this is all set up, is there a way to test if everything is working right? For my firewall rules I always run the “ping” command to see if traffic is allowed through or not, but I'm not sure if there’s a way to test this for mDNS as well.

Again, thanks so much for your help. I really appreciate it!

commiepinko commented 2 years ago

I was planning to run this on my Dream Machine Pro. That’s what you do, right?

Correct.

I read somewhere that the logs will fill up your storage space on the DMP within hours.

That clever boostchicken is ahead of us both. Implement their container-common script.

So this is where you lose me lol.

ifFilter.json is just a list of source→destination rules (one per line) that determine which subnets can broadcast to which network interfaces (the ones declared in the INTERFACES parameter when you created the container). Each line explicitly declares which source→destination(s) broadcast traffic will be relayed. Just declare what you want to permit.

Source is declared by subnet; destination by virtual network interface name.
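For example (reusing the 192.168.x.0/24 subnets and brXX interface names from my earlier sketch), relaying between VLAN 10 and VLAN 50 in both directions takes two rules, one per direction:

```json
{
    "192.168.10.0/24": ["br50"],
    "192.168.50.0/24": ["br10"]
}
```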

[screenshots]

Also, once this is all set up is there a way to test if everything is working right?

I found the Discovery Bonjour browser helpful. It'll show you the mDNS broadcasts being received by the host you run it on.

For my firewall rules I always run the “ping” command to see if traffic is allowed through or not but not sure if there’s a way to test this for mdns as well.

Since you can only ping hosts, not ports, that's no help here. Discovery will let you see what broadcasts are or aren't reaching a given host.
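If you'd rather test from a shell than from the Discovery app, avahi-browse (from the avahi-utils package, where available) dumps the same kind of information; this sketch just guards against it not being installed:

```shell
# Dump every mDNS service advertisement this host can currently see.
# -a = all service types, -t = terminate after the initial dump,
# -r = resolve names to addresses and ports.
if command -v avahi-browse >/dev/null 2>&1; then
    avahi-browse -art 2>&1 || true
else
    echo "avahi-browse not installed (Discovery or dns-sd on macOS does the same job)"
fi
```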

So this is where you lose me lol.

No problem. The more specific you are about your confusion(s), the easier it is to address them.

VeniceNerd commented 2 years ago

Wow! You are incredible!!! You just taught me so much about networking in general. Thank you so much!!!! I'm so excited to finally be wrapping my head around this. So let me see if I got this right.

[network diagram]

ifFilter.json OPTION 1:

{ "10.0.1.50.0/24": ["br0", "br10", "br20", "br30"] }

--> This would send mDNS traffic FROM my IoT network (VLAN 50) to ALL of my other networks, correct? Will this automatically be a two-way street, or will the mDNS traffic only flow one way in this case?

ifFilter.json OPTION 2:

{
"10.0.1.50.0/24": ["br0", "br10", "br20", "br30"],
"10.0.1.10.0/24": ["br0", "br50"],
"10.0.1.20.0/24": ["br0", "br50"],
"10.0.1.30.0/24": ["br0", "br50"],
"10.0.1.1.0/24": ["br50"]
}

--> In this case I would have created a two-way street between VLAN 50 & LAN and all the other VLANs, correct? So mDNS traffic would flow back and forth between them but NOT between the separate VLANs (10 to 20, 20 to 30, etc...), correct? Or would this cause mDNS traffic to spill over from VLAN 10 to VLAN 20 because it all passes through VLAN 50?

Also, if all of my IoT devices are on VLAN 50 and my Apple TVs are on their respective VLANs, do I even need to have mDNS flow both ways, or would I only need the IoT (VLAN 50) traffic to flow TO the Apple TVs?

Once I understand the theory of this I think I will be ready to take the next step and try installing this on my DMP. Would this be the workflow?

Step 1: Install the UDM Utilities - https://github.com/boostchicken/udm-utilities

Step 2: Install Multicast relay - https://hub.docker.com/r/scyto/multicast-relay

Step 3: Run these commands

podman run -it -d \
--restart=on-failure:10 \
--name="multicast-relay" \
--network=host \
--mount type=bind,src=/mnt/data/on_boot.d_support,dst=/multicast-relay-config \
-e OPTS="--ifFilter=/multicast-relay-config/ifFilter.json" \
-e INTERFACES="br0 br10 br20 br30 br50" \
docker.io/scyto/multicast-relay

ifFilter.json

{
ONE OF MY OPTIONS FROM ABOVE
}

Would that be the correct order? If so I will try to find some tutorials on how to implement each step. :)

commiepinko commented 2 years ago

ifFilter.json OPTION 1: "10.0.1.50.0/24": ["br0", "br10", "br20", "br30"] --> This would send mDNS traffic FROM my IOT Network (VLAN 50) to ALL of my other networks, correct? Will this be automatically a two way street or will the mDNS traffic only flow one way in this case?

The relays are uni-directional. If you want a two-way street, specify the two different directions on separate lines.
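Concretely, a two-way version of your IoT layout needs a rule for each direction (shown here with placeholder 192.168.x.0/24 subnets; substitute your real ones):

```json
{
    "192.168.50.0/24": ["br0", "br10", "br20", "br30"],
    "192.168.10.0/24": ["br50"],
    "192.168.20.0/24": ["br50"],
    "192.168.30.0/24": ["br50"]
}
```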

You've got the right idea. Don't be timid. Speaking for myself, I never really learn anything until I've tried it and made at least one big mess. Just keep track of what files you've created or altered on the UDM, so that you can revert and try again if it all goes pear-shaped.

VeniceNerd commented 2 years ago

The relays are uni-directional. If you want a two-way street, specify the two different directions on separate lines.

Copy that. So am I correct to assume that "10.0.1.50.0/24": ["br0", "br10", "br20", "br30"] would take the mDNS traffic from my VLAN 50 and send it to LAN, VLAN 10, VLAN 20, and VLAN 30?

Do you know if HomeKit even requires bidirectional mDNS? Or is it only important that the IoT devices can send their mDNS TO the Apple TVs?

commiepinko commented 2 years ago

So am I correct to assume that "10.0.1.50.0/24": ["br0", "br10", "br20", "br30"] would take the mDNS traffic from my VLAN 50 and send it to LAN, VLAN 10, VLAN 20, and VLAN 30?

Yes.

Do you know if HomeKit even requires bidirectional mDNS? Or is it only important that the IoT devices can send their mDNS TO the Apple TVs?

I'm not sure, but remember that in this context, mDNS is only about broadcasting advertisements for services, not actually using the services. The only thing multicast-relay controls is the broadcast of the advertisement. The actual use of the advertised services is separate. For example, I have a server that uses mDNS on port 5353 to advertise SMB file shares to various VLANs. Clients on those VLANs can see the shares. However, whether or not they can actually access the shares has nothing to do with mDNS or multicast-relay. SMB uses ports 445 and 139, and whether or not those are open between client and server is up to the firewall, not multicast-relay.

If this is baffling, think of mDNS as a streaming service that does nothing but advertise shows available on other services. Being able to see the advertisement doesn't determine whether or not you can access the other services.

It helps to remember what mDNS was developed for. The original motive for it was zero-config networking, i.e., to allow hosts on small networks without a DNS server to determine each other's names and addresses automatically. It doesn't actually handle the transfer of data between hosts; it only makes it possible for the hosts to address each other.

VeniceNerd commented 2 years ago

Thanks to all of your help I have finally started jumping in now. Here is what I have done so far:

  1. Disable Multicast DNS in settings
  2. Disable IGMP Snooping on all networks
  3. Disable Multicast Enhancement on all wireless networks
  4. Run Discovery app -> Confirmed that I no longer see mDNS services from other VLANs
  5. ssh root@IPofDMP
  6. run ifconfig -> got my networks
[screenshot]
  7. unifi-os shell
  8. curl -L https://udm-boot.boostchicken.dev -o udm-boot_1.0.5_all.deb
  9. dpkg -i udm-boot_1.0.5_all.deb
[screenshot]

Now at this point I am stuck. The boostchicken readme says:

podman-update Updates Podman, conmon, and runc to a recent version. This allows docker-compose usage as well.

container-common Apply this after on-boot-script. Updates container defaults to maintain stable disk usage footprint of custom containers. Prevents logs filling up UDM storage full.

However, I can't find the commands on how to run podman-update or install the container-common to prevent the logs filling up. Could you guide me on this step?

After I have done those two steps I plan on continuing like this:

  10. Run the podman command:

podman run -it -d \
--restart=on-failure:10 \
--name="multicast-relay" \
--network=host \
--mount type=bind,src=/mnt/data/on_boot.d_support,dst=/multicast-relay-config \
-e OPTS="--ifFilter=/multicast-relay-config/ifFilter.json" \
-e INTERFACES="br0 br10 br20 br30 br50" \
docker.io/scyto/multicast-relay

This step will install the multicast-relay docker container, correct? Do I enter this all at once?

  11. Configure the ifFilter.json

{ "10.0.1.50.0/24": ["br0", "br10", "br20", "br30"] }

This should send mDNS advertisements from my VLAN 50 to all other networks. I believe this one direction should be enough to get my HomeKit working. However, I am also not sure how I actually edit or create this ifFilter.json file. Do you have any pointers here as well?

  12. Will there be any other steps after this? I found a website that I used as a guide that also mentions the following:

Add a startup script to re-execute the container on startup.

touch 01-multicast-relay.sh
chmod +x 01-multicast-relay.sh

Then use vim 01-multicast-relay.sh to edit the file. Hit i to enter edit mode, paste the following contents, then hit esc and :w to save the file. Enter :q to quit.


#!/bin/sh

# kill all instances of avahi-daemon (UDM spins an instance up even with mDNS services disabled)
killall avahi-daemon

# start the multicast-relay container image
podman start multicast-relay



**But I am not sure if I should be doing this or not since I didn't see it anywhere else**. 

I feel like I am SO CLOSE to success now!!! :)

commiepinko commented 2 years ago

By your numbers, wherever comments seem helpful…

  7. Note that you enter unifi-os shell to install your boostchickens, but then you exit and do everything else at whatever-the-top-level-is-called. Observe:
[screenshot]

(Ignore the error message.) As you can see, the unifi-os shell don't know nuthin' 'bout runnin' no podman.

  10. This step will install the multicast-relay docker container, correct? Do I enter this all at once?

Yes and yes. podman is a single command and all the rest is just its parameters. Note the \ ending every line except the last: \ says "ignore the following line break". One does this to break long commands into readable parts to keep them comprehensible. It's a matter of taste and style. When the command is executed, it's seen as one continuous line without breaks.
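You can see the equivalence with any command:

```shell
# The shell treats both of these as the same single command:
# the trailing backslash removes the line break that follows it.
echo one two three
# → one two three

echo one \
     two \
     three
# → one two three
```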

  11. "10.0.1.50.0/24": ["br0", "br10", "br20", "br30"]

You've got it. The subnet on the left forwards to the interfaces on the right, and if that's the only line, that's the only thing multicast-relay will do.

I am also not sure how I actually edit or create this ifFilter.json file.

Create ifFilter.json using a text editor. It doesn't matter which, so long as it saves plain text with Unix line endings. Every programmer has their favorite text editing app, and is often quite passionate about it. I'll leave it to you to find your one true love. I've used BBEdit since the earth cooled. It has a free mode. You can also use Apple's /Applications/TextEdit. (Just be sure to Format > Make Plain Text. TextEdit defaults to rich text, which won't work here.)

Of course you can also use a 'nix editor like vim or nano. I generally keep backup copies of scripts, config files, and all the rest, so I use a macOS editor and send the files to their working locations using scp.
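If you'd rather create the file directly on the host, a shell heredoc works too. This sketch writes to /tmp and validates the syntax; on the UDM you'd write to wherever your --mount src points, and the subnet here is just a placeholder:

```shell
# Write ifFilter.json with a heredoc (quoting 'EOF' prevents any
# shell expansion inside the document).
cat > /tmp/ifFilter.json <<'EOF'
{
    "192.168.50.0/24": ["br0", "br10", "br20", "br30"]
}
EOF

# One stray comma is enough to break the filter, so sanity-check
# the JSON before pointing the relay at it.
python3 -m json.tool /tmp/ifFilter.json
```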

Where you put all these files of course matters. Here's how I do it.

/mnt/data/on_boot.d           Required for scripts to be run at startup.
/mnt/data/on_boot.d_support   Arbitrary location for files used by startup scripts.

You can put ifFilter.json anywhere you like, but the correct directory path must appear in this line of the container configuration in order for podman to find it.

--mount type=bind,src=/mnt/data/on_boot.d_support,dst=/multicast-relay-config \

  12. Will there be any other steps after this?

Up to you.

One picture being worth a thousand, here's my configuration: multicast-relay_configuration.zip, consisting of three startup scripts and one support file.

10-cat-pub-key.sh - install my public key on the UDM for secure password-free logon
20-container-common.sh - keep startup script logs from going mad
30-multicast-relay.sh - start the multicast relay container

My podman script is in there as well, and ifFilter.json we've already discussed.

Add a startup script to re-execute the container on startup.

touch 01-multicast-relay.sh
chmod +x 01-multicast-relay.sh

You needn't script the existence of the script(s). Just put 'em where they go. Similarly, permissions matter, but you need only set them once:

[screenshots]
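In shell terms that's just chmod +x, demonstrated here on a throwaway file in /tmp (on the UDM the real scripts live in /mnt/data/on_boot.d):

```shell
# Create a placeholder script and mark it executable; the x bit
# persists on disk, so this only has to be done once per file.
touch /tmp/30-multicast-relay.sh
chmod +x /tmp/30-multicast-relay.sh
ls -l /tmp/30-multicast-relay.sh
```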

I feel like I am SO CLOSE to success now!!! :)

Me too, but keep in mind it's not success until you've broken it and fixed it a few times. 😽

Finally, keep in mind that this entire exercise consists mostly of a way to make multicast-relay persist between restarts of the UDM. Software updates to the UDM may wipe it all out at any time. After every update, you'll need to check, reinstalling as necessary. You'll want to keep a copy of all your work elsewhere, or if you're feeling ambitious, perhaps write a single script that recreates all of it in a single execution. In general, the more you know about Linux, the easier this stuff gets.

VeniceNerd commented 2 years ago

Thank you so much again. This is where I left off last time:

[screenshot]

As far as I can tell I correctly installed the UDM Pro Boot script with the above commands, correct?

Next I followed this guide to a T to install container-common. I see it in the folder:

[screenshot]

and checked with "vim" that the file has content:

[screenshot]

Then I placed the "ifFilter.json" into the "on_boot.d_support" folder:

[screenshot]

Does that all look correct so far?

However, next the boostchicken guide says to install podman-update:

podman-update Updates Podman, conmon, and runc to a recent version. This allows docker-compose usage as well.

The guide for that overwhelmed me a little, though. Do I need to do this before going on to install the relay? Or is this optional? If it is NOT optional do you have any other resources that could help me get this completed?

I'm going to wait to hear back from you on whether I need to deal with the "podman-update" thing or if I'm ready to go ahead with the relay install. Rather take it slow and steady with stuff like this. ;)

VeniceNerd commented 2 years ago

I kept going because I really need my HomeKit setup to work again so I now finished all steps:

Placed the "30-multicast-relay.sh" file on the UDMP and ran "chmod +x" on the file:

[screenshot]

Executed the following command in Terminal:

podman run -it -d \
--restart=on-failure:10 \
--name="multicast-relay" \
--network=host \
--mount type=bind,src=/mnt/data/on_boot.d_support,dst=/multicast-relay-config \
-e OPTS="--ifFilter=/multicast-relay-config/ifFilter.json" \
-e INTERFACES="br0 br10 br20 br30 br50" \
docker.io/scyto/multicast-relay
[screenshot]

Ran the following two commands:

killall avahi-daemon

podman start multicast-relay
[screenshot]

Unfortunately, the discovery tool (running on a computer in VLAN 10) is not showing any of the devices in VLAN50. It only shows the devices that are on VLAN 10.

Did I do something wrong? Is there any way to check if the multicast relay is actually working?

PS: ran "podman ps" and got the following result:

[screenshot]

No idea if that should be showing our "multicast-relay" or not...
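For reference, the checks I've been running are along these lines (standard podman subcommands; guarded here in case podman isn't on the PATH):

```shell
if command -v podman >/dev/null 2>&1; then
    # -a also lists stopped/exited containers, which plain `podman ps` hides
    podman ps -a --filter name=multicast-relay
    # the relay logs which interfaces it bound; errors about ifFilter.json
    # would show up here too
    podman logs multicast-relay 2>&1 || true
else
    echo "podman not found on this host"
fi
```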

EDIT 1:

After digging through some forums I found the command "journalctl -u udm-boot.service" and ran it from the UniFi shell. Not 100% sure what it does, but here is the log result:

[screenshot]

EDIT 2:

Ran just "journalctl" from the UniFi shell and looked for "udm-boot.service". Here is the result:

[screenshot]

I also noticed that a little further down "avahi-daemon.socket" shows up as active. Wasn't that supposed to get killed with the "30-multicast-relay.sh" bootup script?

[screenshot]

Anyways, here is the full readout in case it helps diagnose the problem:

root@ubnt:/# systemctl
UNIT                                                              LOAD   ACTIVE     SUB       DESCRIPTION          

dev-sda4.device                                                   loaded activating tentative /dev/sda4                
dev-sda6.device                                                   loaded activating tentative /dev/sda6            

-.mount                                                           loaded active     mounted   /                        
data.mount                                                        loaded active     mounted   /data                    
dev-disk.mount                                                    loaded active     mounted   /dev/disk                
dev-hugepages.mount                                               loaded active     mounted   Huge Pages File System   
dev-mqueue.mount                                                  loaded active     mounted   POSIX Message Queue File 
etc-hostname.mount                                                loaded active     mounted   /etc/hostname            
etc-hosts.mount                                                   loaded active     mounted   /etc/hosts               
etc-resolv.conf.mount                                             loaded active     mounted   /etc/resolv.conf         
etc-systemd-system-unifi\x2dcore.service.d-capabilities\x2dworkaround.conf.mount loaded active     mounted   /etc/system
d/system/unifi
etc-unifi\x2dos-ssh_proxy_port.mount                              loaded active     mounted   /etc/unifi-os/ssh_proxy_p
etc_host.mount                                                    loaded active     mounted   /etc_host                
mnt-persistent.mount                                              loaded active     mounted   /mnt/persistent          
root-.ssh-id_rsa.mount                                            loaded active     mounted   /root/.ssh/id_rsa        
run-.containerenv.mount                                           loaded active     mounted   /run/.containerenv       
sys-kernel-config.mount                                           loaded active     mounted   Kernel Configuration File
sys-kernel-debug.mount                                            loaded active     mounted   Kernel Debug File System 
tmp.mount                                                         loaded active     mounted   /tmp                     
usr-lib-version.mount                                             loaded active     mounted   /usr/lib/version         
var-log-journal.mount                                             loaded active     mounted   /var/log/journal         
var-opt-unifi\x2dprotect-tmp.mount                                loaded active     mounted   /var/opt/unifi-protec
t/tm
systemd-ask-password-console.path                                 loaded active     waiting   Dispatch Password Request
systemd-ask-password-wall.path                                    loaded active     waiting   Forward Password Requ
ests
init.scope                                                        loaded active     running   System and Service Ma
nage
cron.service                                                      loaded active     running   Regular background progra
dbus.service                                                      loaded active     running   D-Bus System Message Bus 
exim4.service                                                     loaded active     running   LSB: exim Mail Transport 
freeswitch.service                                                loaded active     running   freeswitch               
postgresql-cluster@9.6-main.service                               loaded active     exited    PostgreSQL initial setup 
postgresql-cluster@9.6-protect-cleanup.service                    loaded active     exited    Postgresql cluster cleanu
postgresql-cluster@9.6-protect.service                            loaded active     exited    PostgreSQL initial setup 
postgresql.service                                                loaded active     exited    PostgreSQL RDBMS         
postgresql@9.6-main.service                                       loaded active     running   PostgreSQL Cluster 9.6-ma
postgresql@9.6-protect.service                                    loaded active     running   PostgreSQL Cluster 9.6-pr
systemd-journal-flush.service                                     loaded active     exited    Flush Journal to Persiste
systemd-journald.service                                          loaded active     running   Journal Service          
systemd-logind.service                                            loaded active     running   Login Service            
systemd-modules-load.service                                      loaded active     exited    Load Kernel Modules      
systemd-remount-fs.service                                        loaded active     exited    Remount Root and Kernel F
systemd-sysctl.service                                            loaded active     exited    Apply Kernel Variables   
systemd-sysusers.service                                          loaded active     exited    Create System Users      
systemd-tmpfiles-setup-dev.service                                loaded active     exited    Create Static Device Node
systemd-tmpfiles-setup.service                                    loaded active     exited    Create Volatile Files and
systemd-update-utmp.service                                       loaded active     exited    Update UTMP about System 
systemd-user-sessions.service                                     loaded active     exited    Permit User Sessions     
udm-boot.service                                                  loaded active     exited    Run On Startup UDM       
ulp-go.service                                                    loaded active     running   ULP-GO                   
unifi-access.service                                              loaded active     running   UniFi access controller  
unifi-base-ucore.service                                          loaded active     running   UniFi Base Controller    
unifi-core.service                                                loaded active     running   UniFi Core               
unifi-protect.service                                             loaded active     running   UniFi Protect            
unifi-talk.service                                                loaded active     running   UniFi Talk controller    
unifi.service                                                     loaded active     running   unifi                

-.slice                                                           loaded active     active    Root Slice               
system-postgresql.slice                                           loaded active     active    system-postgresql.slice  
system-postgresql\x2dcluster.slice                                loaded active     active    system-postgresql\x2dclus
system.slice                                                      loaded active     active    System Slice             
user.slice                                                        loaded active     active    User and Session Slic
e   
avahi-daemon.socket                                               loaded active     listening Avahi mDNS/DNS-SD Stack A
dbus.socket                                                       loaded active     running   D-Bus System Message Bus 
systemd-initctl.socket                                            loaded active     listening initctl Compatibility Nam
systemd-journald-dev-log.socket                                   loaded active     running   Journal Socket (/dev/log)
systemd-journald.socket                                           loaded active     running   Journal Socket      

basic.target                                                      loaded active     active    Basic System             
cryptsetup.target                                                 loaded active     active    Local Encrypted Volumes  
getty.target                                                      loaded active     active    Login Prompts            
graphical.target                                                  loaded active     active    Graphical Interface      
local-fs-pre.target                                               loaded active     active    Local File Systems (Pre) 
local-fs.target                                                   loaded active     active    Local File Systems       
multi-user.target                                                 loaded active     active    Multi-User System        
network-online.target                                             loaded active     active    Network is Online        
network.target                                                    loaded active     active    Network                  
paths.target                                                      loaded active     active    Paths                    
slices.target                                                     loaded active     active    Slices                   
sockets.target                                                    loaded active     active    Sockets                  
swap.target                                                       loaded active     active    Swap                     
sysinit.target                                                    loaded active     active    System Initialization    
timers.target                                                     loaded active     active    Timers              

systemd-tmpfiles-clean.timer                                      loaded active     waiting   Daily Cleanup of Temporar

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

80 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
root@ubnt:/# 
explorernerd commented 2 years ago

This thread is a godsend! I have been wanting to install the relay for ages but never knew where to start. I followed along as well as I could but it seems I'm in the same boat as @VeniceNerd. Installed everything but no dice. Since I followed his steps exactly (with my VLAN addresses, of course) that's probably not a surprise.

@commiepinko hope you'll have time to come back here again soon; it seems like you've got this figured out on your system.

Either way great knowledge so far. Thanks!

commiepinko commented 2 years ago

@VeniceNerd …

Podman has a logging function. If necessary, run podman start multicast-relay, then run podman logs -f multicast-relay to see what it has to say for itself.

If it's not sufficiently informative, remove the existing container and make a new verbose one.

podman stop multicast-relay
podman container rm multicast-relay

podman run -it -d \
blah blah blah \
-e OPTS="--verbose --ifFilter=/multicast-relay-config/ifFilter.json" \
blah blah blah

With the verbose option, podman logs -f multicast-relay will be very chatty indeed. Sleuth for clues.

Of course ultimately you'll want to remove the verbose container, and run a new one without that option.

commiepinko commented 2 years ago

@explorernerd

Glad to help. It seems as though the tricky part for beginners is distinguishing between how the container is configured and operates vs the parts that are purely the relay's purview.

VeniceNerd commented 2 years ago

Podman has a logging function. If necessary, run podman start multicast-relay, then run podman logs -f multicast-relay to see what it has to say for itself.

@commiepinko you're back! happy dance!!!! I ran the two commands and this is the output:

starting multicast-relay
Using Interfaces: br0 br10 br20 br30 br50
Using Options --foreground  --ifFilter=/multicast-relay-config/ifFilter.json
Traceback (most recent call last):
  File "./multicast-relay/multicast-relay.py", line 1018, in <module>
    sys.exit(main())
  File "./multicast-relay/multicast-relay.py", line 1015, in main
    packetRelay.loop()
  File "./multicast-relay/multicast-relay.py", line 671, in loop
    if self.onNetwork(srcAddr, network, self.cidrToNetmask(int(netmask))) and tx['interface'] not in self.ifFilter[net]:
  File "./multicast-relay/multicast-relay.py", line 844, in onNetwork
    networkL = PacketRelay.ip2long(network)
  File "./multicast-relay/multicast-relay.py", line 827, in ip2long
    packedIP = socket.inet_aton(ip)
OSError: illegal IP address string passed to inet_aton

Of course I don't really have any clue what it means. Is it helpful to you at all? Or should I remove the container and re-install with verbose?

Also, is it normal that udm-boot.service reports as "loaded / active / exited" when running "journalctl" from the Unifi Shell, or should it be saying "loaded / active / running"?

Screenshot 2021-12-02 at 13 54 21

@explorernerd glad this is helping you out as well! Once I have successfully figured this out I will write a clean post here with all the correct steps, summarizing everything that @commiepinko has taught me so that other people can join us.

commiepinko commented 2 years ago

Of course I don't really have any clue what it means. Is it helpful to you at all? Or should I remove the container and re-install with verbose?

If that's the concise log, I shudder to imagine the verbose version. (And I'm not sure what the policy here is, but many environments frown on posting massive log entries.)

My concise output from podman logs -f multicast-relay looks more like this:

starting multicast-relay
Using Interfaces: br0 br101 br102 br103 br104 br105 br106 br107 br108 br109
Using Options --foreground  --ifFilter=/multicast-relay-config/ifFilter.json

That's it - no repetition, no massive tome.

Also, is it normal that udm-boot.service reports as "loaded / active / exited" when running "journalctl" from the Unifi Shell, or should it be saying "loaded / active / running"?

I don't know. What strikes me in your log is

OSError: illegal IP address string passed to inet_aton

I've no idea what that means, but it sounds like a possible syntax error somewhere in your configuration. Attach your container creation script, whatever you have in /mnt/data/on_boot.d, and your ifFilter.json?

VeniceNerd commented 2 years ago

If that's the concise log, I shudder to imagine the verbose version. (And I'm not sure what the policy here is, but many environments frown on posting massive log entries.)

Oh sorry about that. Won't do that again!

I don't know.

Whenever you're near your machine, would you mind checking? You just have to run "journalctl" from the Unifi Shell and look for udm-boot.service. I would like to rule out that the issue is there...

Attach your container creation script, whatever you have in /mnt/data/on_boot.d, and your ifFilter.json?

I downloaded the files from my UDMP and zipped them up for you: UDM.zip

To create the container I ran this command in Terminal:

podman run -it -d \
--restart=on-failure:10 \
--name="multicast-relay" \
--network=host \
--mount type=bind,src=/mnt/data/on_boot.d_support,dst=/multicast-relay-config \
-e OPTS="--ifFilter=/multicast-relay-config/ifFilter.json" \
-e INTERFACES="br0 br10 br20 br30 br50" \
docker.io/scyto/multicast-relay

Hopefully something will jump out at you!

EDIT:

So just I ran the following commands to stop and remove my original container:

podman stop multicast-relay
podman container rm multicast-relay
Screenshot 2021-12-02 at 19 04 18

Then I re-created the container WITHOUT the ifFilter.json option (also removed the -d):

podman run -it \
--restart=on-failure:10 \
--name="multicast-relay" \
--network=host \
--mount type=bind,src=/mnt/data/on_boot.d_support,dst=/multicast-relay-config \
-e INTERFACES="br0 br10 br20 br30 br50" \
docker.io/scyto/multicast-relay

Boom! Immediately the Discovery tool is showing all the devices from all of my VLANS. I double checked by showing the logs:

Screenshot 2021-12-02 at 19 17 55

So it seems there is an issue with the ifFilter.json, right?

EDIT 2:

Yep! I f'ed up the ifFilter.json. Instead of:

{
"10.0.50.0/24": ["br0", "br10", "br20", "br30"]
}

I had an extra "1" in my IP address. What a stupid mistake!

{
"10.0.1.50.0/24": ["br0", "br10", "br20", "br30"]
}
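That extra octet is precisely what socket.inet_aton rejects. As a minimal sketch (the ip2long name mirrors the helper in the traceback; everything else here is illustrative), the error can be reproduced in isolation:

```python
import socket
import struct

def ip2long(ip):
    """Convert a dotted-quad IPv4 string to an integer (as the relay's helper does)."""
    return struct.unpack("!L", socket.inet_aton(ip))[0]

print(ip2long("10.0.50.0"))  # valid network address -> 167784960

try:
    ip2long("10.0.1.50.0")   # five octets, like the broken ifFilter.json entry
except OSError as err:
    print("OSError:", err)   # "illegal IP address string passed to inet_aton"
```

Running the network keys from ifFilter.json through a check like this before starting the container catches typos without a stop/remove/recreate cycle.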

So I replaced the ifFilter.json, removed the container and created another one:

podman run -it -d \
--restart=on-failure:10 \
--name="multicast-relay" \
--network=host \
--mount type=bind,src=/mnt/data/on_boot.d_support,dst=/multicast-relay-config \
-e OPTS="--ifFilter=/multicast-relay-config/ifFilter.json" \
-e INTERFACES="br0 br10 br20 br30 br50" \
docker.io/scyto/multicast-relay
Screenshot 2021-12-02 at 19 40 58

No more error message!

The only problem now appears to be that the ifFilter actually doesn't seem to do anything at all. No matter what VLAN I'm on I'm seeing devices from all other VLANs in Discovery. I attached my most recent files again!

UDM.zip

Do you have any idea why the ifFilter.json doesn't appear to be doing anything? Do I have more errors in there?

PS: I hope it doesn't bother you that I note down all my detailed steps. Want to make sure that @explorernerd and others can follow along.

VeniceNerd commented 2 years ago

@commiepinko if you don't want to read all the above here is a summary:

there was an error in the ifFilter.json. I had an additional "1" in the source IP address. After I fixed that, the error messages disappeared. Multicast-relay is working now! However, ifFilter.json doesn't seem to have any effect. No matter what VLAN I'm on, I can see the mDNS traffic from all other VLANs. Here are my latest sh files:

UDM.zip

I feel SO CLOSE now!!! I hope you have an idea (or trouble shooting suggestions) to find out why the filters aren't doing anything yet.

PS: Does the ifFilter.json require any special chmod? This is what it looks like now:

Screenshot 2021-12-02 at 20 19 20
VeniceNerd commented 2 years ago

I figured it out!!! I figured out why it works for you but didn't work for me! I found this thread where I read that, apparently, if you don't define a source network in ifFilter, it will by default broadcast to all other networks.

And since I only had one line in my ifFilter

"10.0.50.0/24": ["br0", "br10", "br20", "br30"]

it meant that all the other networks would still broadcast everywhere else. So now I changed it to this:

{
"10.0.50.0/24": ["br0", "br10", "br20", "br30"],
"10.0.10.0/24": ["br0"],
"10.0.20.0/24": ["br0"],
"10.0.30.0/24": ["br0"],
"10.0.1.0/24": ["br0"]
}

This way I force mDNS traffic from all the other VLANs (10, 20, 30) to my LAN network [br0]. So now almost everything works as intended. All networks get mDNS traffic from VLAN 50, but networks 10, 20, and 30 can't see traffic from each other.

I say almost because weirdly enough my LAN (BR0) also only sees traffic from itself and VLAN 50. That's weird to me since I'm routing traffic from all the other networks into BR0. I assume it has to do with my final line:

"10.0.1.0/24": ["br0"]

Where I route br0 to br0. I assume that does something funky? I also tried:

"10.0.1.0/24": []

That has the same effect, though. In the thread I found the guys discuss how to do this but I am not sure it was actually resolved. Maybe @juliodiz or @alsmith could chime in on how to properly do this?

Unless you have an idea?

alsmith commented 2 years ago

Morning guys - great collaboration between you two - happy to see the community help there!

The ifFilter check is basically this:

                    transmit = True
                    for net in self.ifFilter:
                        (network, netmask) = '/' in net and net.split('/') or (net, '32')
                        if self.onNetwork(srcAddr, network, self.cidrToNetmask(int(netmask))) and tx['interface'] not in self.ifFilter[net]:
                            transmit = False
                            break
                    if not transmit:
                        continue

So the logic is, for each interface that we want to transmit on, if the source IP of the mDNS request is in any filter configuration line, and if the interface being considered is NOT in the list of interfaces in the config, then we do not transmit.

{
"10.0.50.0/24": ["br0", "br10", "br20", "br30"],
"10.0.10.0/24": ["br0"],
"10.0.20.0/24": ["br0"],
"10.0.30.0/24": ["br0"],
"10.0.1.0/24": ["br0"]
}

This should have the effect of letting 10.0.50.* broadcast on br0, br10, br20 and br30; 10.0.10.* only on br0, 10.0.20.* only on br0, etc. - and 10.0.1.0/24 also only on br0.

Oh and because the relay only relays between different vlans, any given vlan will always see discovery traffic from other hosts on the same vlan - the relay can't control or influence that at all.

That should cover your aim - br0 would see its own mDNS traffic, and that from VLANs 10, 20, 30 and 50.
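That decision rule can be restated as a standalone sketch - a simplified reimplementation using Python's ipaddress module, not the relay's actual code; the filter contents are taken from this thread:

```python
import ipaddress

# Sample ifFilter contents from this thread (abbreviated).
IF_FILTER = {
    "10.0.50.0/24": ["br0", "br10", "br20", "br30"],
    "10.0.10.0/24": ["br0"],
}

def should_transmit(src_addr, tx_interface, if_filter=IF_FILTER):
    """Relay unless a filter line matches the source and omits tx_interface."""
    src = ipaddress.ip_address(src_addr)
    for net, allowed in if_filter.items():
        if src in ipaddress.ip_network(net) and tx_interface not in allowed:
            return False
    return True  # default: a source matching no line is relayed everywhere

print(should_transmit("10.0.50.23", "br10"))  # True  (br10 is on 10.0.50.0/24's list)
print(should_transmit("10.0.10.5", "br20"))   # False (10.0.10.0/24 allows only br0)
print(should_transmit("10.0.99.1", "br30"))   # True  (no matching filter line)
```

The last case is the behavior discussed later in the thread: any source network you leave out of the filter is relayed to every interface.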

VeniceNerd commented 2 years ago

Morning guys - great collaboration between you two - happy to see the community help there!

Yes, @commiepinko has been extremely helpful and generous with his time. I'm hoping this discussion will help many others setting up their relay!

{
"10.0.50.0/24": ["br0", "br10", "br20", "br30"],
"10.0.10.0/24": ["br0"],
"10.0.20.0/24": ["br0"],
"10.0.30.0/24": ["br0"],
"10.0.1.0/24": ["br0"]
}

This should have the effect of letting 10.0.50.* broadcast on br0, br10, br20 and br30; 10.0.10.* only on br0, 10.0.20.* only on br0, etc. - and 10.0.1.0/24 also only on br0.

It almost does! The only thing that doesn't work: none of the traffic from 10.0.10.*, 10.0.20.*, or 10.0.30.* seems to show up on BR0.

One note about this last line:

"10.0.1.0/24": ["br0"]

10.0.1.0 IS BR0. I only included this line to add 10.0.1.0 on the filter list. Without this line traffic from 10.0.1.0 was flowing into all other VLANs. So basically I told it to just send traffic to itself. Maybe that's why there are some issues with BR0 receiving traffic from the other VLANS? I just didn't know how to solve 10.0.1.0 not sending any traffic to the other networks without this line.

@alsmith what would be the recommended way to add a source network to ifFiler.json and tell it NOT to send traffic anywhere? (Instead of using my weird workaround to have it send traffic to itself, which I think may be causing issues...)

PS: I did try "10.0.1.0/24": [] as well which didn't work either.

commiepinko commented 2 years ago

@alsmith What would be the recommended way to add a source network to ifFiler.json and tell it NOT to send traffic anywhere? (Instead of using my weird workaround to have it send traffic to itself, which I think may be causing issues...)

My understanding (and experience) is that multicast-relay does nothing unless told to. In other words, "don't relay anything" is the default and you don't have to specify it. A possible confusion is that the relay controls only mDNS traffic. For example, I use it to block broadcast advertisements for services between VLANs, but that's all it does. The VLANs can still access each other's services. Blocking the services themselves is a function of the firewall, not the relay. If the two don't complement each other, weirdness can result (e.g., being able to see file shares advertised but not being able to access them, or vice versa).

@VeniceNerd The only thing that doesn't work: none of the traffic from 10.0.10.*, 10.0.20.*, or 10.0.30.* seems to show up on BR0.

Odd. My ifFilter.json is very similar…

{
"192.168.0.0/24": ["br0", "br101", "br102", "br103", "br104", "br105", "br106", "br107", "br108", "br109"],
"192.168.1.0/24": ["br0"],
"192.168.2.0/24": ["br0"],
"192.168.3.0/24": ["br0"],
"192.168.4.0/24": ["br0"],
"192.168.5.0/24": ["br0"],
"192.168.6.0/24": ["br0"],
"192.168.7.0/24": ["br0"],
"192.168.8.0/24": ["br0"],
"192.168.9.0/24": ["br0"]
}

…and works as expected. (I'm being grateful, not bragging.) Check your container creation script and verify that all networks involved are listed? -e INTERFACES="br0 br101 br102 br103 br104 br105 br106 br107 br108 br109" \

Final troubleshooting thought… Of course only what hosts broadcast can be relayed. To check that your systems are actually broadcasting the mDNS you expect, stop multicast-relay and enable the UDM's multicast reflector: Settings > Services > mDNS. With the reflector enabled, you should have a free-for-all where every host on every network can see every other host's broadcasts. (Which I assume is why Settings > Wireless Networks > [WLAN] > Block LAN to WLAN Multicast and Broadcast Data exists, and which, come to think of it, is another setting to keep in mind.)

VeniceNerd commented 2 years ago

My understanding (and experience) is that multicast-relay does nothing unless told to. In other words, "don't relay anything" is the default and you don't have to specify it.

That is so weird because this is not my experience at all. When I included only "10.0.50.0/24": ["br0", "br10", "br20", "br30"] in my json file all the other VLANS were broadcasting all over the place. Once I mentioned all the other networks as a source and told them where to route, then it all worked as expected. (Besides that one quirk I'm still struggling with...)

Did you see this thread? https://github.com/alsmith/multicast-relay/issues/34

@alsmith said this over there:

Default is relay everything that is not in the ifFilter.json.

By default, if the src IP# can't be found in the filter then it gets relayed to all interfaces. If it is found then it will only be relayed to the interface(s) specified in the json.

Which would confirm what I have noticed, no?

commiepinko commented 2 years ago
Default is relay everything that is not in the ifFilter.json.

By default, if the src IP# can't be found in the filter then it gets relayed to all interfaces. If it is found then it will only be relayed to the interface(s) specified in the json.

Huh. So it goes. I've never tried a configuration that didn't entirely specify what I wanted, so that wasn't noticeable, and so assumed wrong. Sorry 'bout that. It's good to know in any case.

alsmith commented 2 years ago

Correct, the default is to relay everywhere.

Also, the relay won't ever rebroadcast on the interface that it received the packet from, so @VeniceNerd's "10.0.1.0/24": ["br0"] is basically a no-op, it's the same as specifying "10.0.1.0/24": [] since it wouldn't even consider br0 as a retransmit interface - what it would do is prevent relaying to any of the other interfaces of course. Hope that helps the understanding !
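The same point as a tiny sketch (a hypothetical helper, not the relay's code): since the receiving interface is never a retransmit candidate, listing only a network's own bridge is equivalent to listing nothing.

```python
def relay_targets(rx_interface, interfaces, allowed):
    """Interfaces a packet would be retransmitted on: never the interface it
    arrived on, and only those the matching ifFilter line allows."""
    return [i for i in interfaces if i != rx_interface and i in allowed]

bridges = ["br0", "br10", "br20", "br30", "br50"]

# "10.0.1.0/24": ["br0"] -- a packet arriving on br0 is relayed nowhere:
print(relay_targets("br0", bridges, ["br0"]))  # []

# ...which is exactly the same outcome as "10.0.1.0/24": []:
print(relay_targets("br0", bridges, []))       # []
```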

VeniceNerd commented 2 years ago

Hey @commiepinko & @alsmith I can now say with certainty that this line in ifFilter.json:

"10.0.1.0/24": ["br0"]

causes issues. With this line enabled my mdns stops working properly and all kinds of weird issues arise. This line basically tells network 10.0.1.0/24 (which is BR0) to forward all traffic to itself. I did this so that BR0 won't send traffic to anyone else.

Since that line causes issues, though, this is definitely not the way to exclude one VLAN from broadcasting anywhere else. It's not a big issue for me atm since I don't really have any devices on my main LAN (10.0.1.x) but it would still be nice to know what the proper way would be to exclude a specific network from broadcasting to any other VLANS (besides itself, of course).

Any ideas?

commiepinko commented 2 years ago

I've revised

"192.168.0.0/24": ["br0", "br101", "br102", "br103", "br104", "br105", "br106", "br107", "br108", "br109"],

to

"192.168.0.0/24": ["br101", "br102", "br103", "br104", "br105", "br106", "br107", "br108", "br109"],

(What did I think, that 192.168.0.0/24 needed permission to talk to itself?)

It's odd though - my install worked correctly with the error, and now it works without it. I can see no difference, except that now I no longer have to suffer godlike status or be mistaken for knowing everything. 😽

alsmith commented 2 years ago

Hey @commiepinko & @alsmith I can now say with certainty that this line in ifFilter.json:

"10.0.1.0/24": ["br0"]

causes issues. With this line enabled my mdns stops working properly and all kinds of weird issues arise. This line basically tells network 10.0.1.0/24 (which is BR0) to forward all traffic to itself. I did this so that BR0 won't send traffic to anyone else.

Since that line causes issues, though, this is definitely not the way to exclude one VLAN from broadcasting anywhere else. It's not a big issue for me atm since I don't really have any devices on my main LAN (10.0.1.x) but it would still be nice to know what the proper way would be to exclude a specific network from broadcasting to any other VLANS (besides itself, of course).

Can you say what issues that causes? If 10.0.1.1 is br0, then whether br0 is there or not shouldn't make any difference. The relay won't re-transmit a packet out of the same interface that it received it on.

VeniceNerd commented 2 years ago

@alsmith looks like you were right and this was NOT what caused the issues after all. The change to the ifFilter seems to have been a mere coincidence, probably because I stopped and restarted the container in the process. I just saw the problem happen again.

Basically after a while my HomeKit devices show as "not responding" in the Home App. When everything works this is what the Discovery App shows:

Screenshot 2021-12-07 at 14 15 09

All of the HomeKit devices are grouped under "_hap._tcp." and if you click on the little arrow it will show you the IP address and all the other details.

Once everything stops working this is what Discovery shows:

Screenshot 2021-12-06 at 13 07 03

Some devices can still be seen (at times) but they don't show up under "_hap._tcp." anymore and expanding on the arrow won't reveal any details.

If you stay in the discovery tool you can often see it going back and forth. I have no idea what is going on. Sometimes everything works great and then everything stops working. I haven't seen any rhyme or reason yet.

Here is what it looks like from the Home app: https://user-images.githubusercontent.com/25128243/145115530-9561d76e-bed3-41e5-b8c2-51c993d80d67.MOV

@commiepinko do you have any idea why this may be happening and what I could do to troubleshoot? Do I maybe have some settings wrong on the DMP? I disabled all firewall rules and I can ping across all VLANs.

VeniceNerd commented 2 years ago

So I removed the container again and started a new one. This time without the ifFilter option. So I just ran:

podman run -it -d \
--restart=on-failure:10 \
--name="multicast-relay" \
--network=host \
--mount type=bind,src=/mnt/data/on_boot.d_support,dst=/multicast-relay-config \
-e INTERFACES="br0 br10 br20 br30 br50" \
docker.io/scyto/multicast-relay

So now everything has been running solid for almost 48 hours. Of course I'm getting mdns traffic from all networks on all networks at the moment but there have been none of the weird issues I reported above. So right now my hunch is that the problem was caused by something in the ifFilter.json. Here again is my ifFilter.json:

{
"10.0.50.0/24": ["br0", "br10", "br20", "br30"],
"10.0.10.0/24": ["br0"],
"10.0.20.0/24": ["br0"],
"10.0.30.0/24": ["br0"]
}

Do you guys see any issues at all with this? Why would including the ifFilter.json as an option cause my system to behave inconsistently? Just to make sure: I don't need to set any specific chmod on the ifFilter.json, correct?

If you guys have any ideas of why it runs well without the ifFilter but starts acting up with the ifFilter I would greatly appreciate it.

The only other option I can think of right now would be to run multiple instances of multicast-relay. At least I would have to run two:

podman run -it -d \
--restart=on-failure:10 \
--name="multicast-relay-10" \
--network=host \
--mount type=bind,src=/mnt/data/on_boot.d_support,dst=/multicast-relay-config \
-e INTERFACES="br10 br50" \
docker.io/scyto/multicast-relay

and

podman run -it -d \
--restart=on-failure:10 \
--name="multicast-relay-20" \
--network=host \
--mount type=bind,src=/mnt/data/on_boot.d_support,dst=/multicast-relay-config \
-e INTERFACES="br30 br50" \
docker.io/scyto/multicast-relay

That would at least connect the two VLANs that most need to swap mDNS. I'm just not sure whether it is even recommended to run two instances of multicast-relay. I obviously only want to do this if we can't figure out the issue with the ifFilter solution.

Any feedback greatly appreciated as always!

VeniceNerd commented 2 years ago

Hey @commiepinko do you have any thoughts at all on the above? Any idea why my ifFilter may cause issues? If not what do you think of my idea of running two instances of Multicast Relay instead? Obviously I'd rather get the ifFilter to work but just in case...

commiepinko commented 2 years ago

Alas, @VeniceNerd, I'm stumped. With the exception of omitting the LAN, your config is very like mine. If you verbose the relay's logs, do you find anything of interest? Logs are generally the best place to go in pursuit of the inexplicable.

As for running multiple instances, I'd be hesitant to double down on something that isn't working for unknown reasons, but that's just me.

alsmith commented 2 years ago

It wouldn't do any damage to run multiple instances if you separate them out so that they handle individual interfaces and won't interfere with each other - I think it's clever enough that even if you don't do that, it wouldn't end up causing massive network traffic by rebroadcasting each others' packets, but that's not worth trying out. (-;

VeniceNerd commented 2 years ago

@alsmith It's driving me absolutely insane! Without the ifFilter.json everything works well. I am now trying a smaller setup with just three VLANs:

10.0.10.0 = br10
10.0.30.0 = br30
10.0.50.0 = br50

I want VLAN 50 to send mDNS to VLANs 10 and 30, but I don't want VLANs 10 and 30 to send mDNS anywhere else. First I tried this:

{
"10.0.50.0/24": ["br10", "br30"],
"10.0.10.0/24": ["br10"],
"10.0.30.0/24": ["br30"]
}

Unfortunately, this causes the unexpected behaviors I have outlined above. Then I tried this:

{
"10.0.50.0/24": ["br10", "br30"],
"10.0.10.0/24": ["br50"],
"10.0.30.0/24": ["br50"]
}

That configuration works, but it's basically the same as not using any ifFilter at all, since the traffic is transmitted on all VLANs. I assume that's because I have VLANs 10 and 30 dump into VLAN 50, and then VLAN 50 sends back to 10 and 30?

So basically I still have not found a way to prevent VLANs 10 and 30 from sending mDNS traffic anywhere. Are we 100% certain that asking one VLAN to send traffic to itself is the best way to prevent it from sending to any other VLANs? Or is there any other way I could try this in ifFilter?

Also, @commiepinko can you think of anything in the Unifi Network settings that I may have set up incorrectly for this to happen?

VeniceNerd commented 2 years ago

I collected logs for all scenarios and also discovered something REALLY strange in the last scenario!

COMBINATION 1:

podman run -it -d \
--restart=on-failure:10 \
--name="multicast-relay" \
--network=host \
--mount type=bind,src=/mnt/data/on_boot.d_support,dst=/multicast-relay-config \
-e OPTS="--verbose --ifFilter=/multicast-relay-config/ifFilter.json" \
-e INTERFACES="br10 br30 br50" \
docker.io/scyto/multicast-relay
{
"10.0.50.0/24": ["br10", "br30"],
"10.0.10.0/24": ["br50"],
"10.0.30.0/24": ["br50"]
}

Here mdns traffic gets transmitted between all networks. It's basically as if I'm not running ifFilter at all. Here is the log from this setup: log1.txt

I'm not good at reading logs but to me it looks like traffic is sent to BR50 and BR50 then turns around and sends it right back at BR10 and BR30. Exactly what I don't want to happen. This really confused me because I think @commiepinko does basically the same in his ifFilter, no?

COMBINATION 2:

podman run -it -d \
--restart=on-failure:10 \
--name="multicast-relay" \
--network=host \
--mount type=bind,src=/mnt/data/on_boot.d_support,dst=/multicast-relay-config \
-e OPTS="--verbose --ifFilter=/multicast-relay-config/ifFilter.json" \
-e INTERFACES="br10 br30 br50" \
docker.io/scyto/multicast-relay
{
"10.0.50.0/24": ["br10", "br30"],
"10.0.10.0/24": ["br10"],
"10.0.30.0/24": ["br30"]
}

This just breaks the setup for me. Most of my homekit traffic from BR50 won't show up on BR10 anymore and it breaks my Homekit. It's kind of jittery and intermittent. Here is the log for that: log2.txt

Then I scraped all of that and just ran this one, simple relay instead:

podman run -it -d \
--restart=on-failure:10 \
--name="multicast-relay-10" \
--network=host \
--mount type=bind,src=/mnt/data/on_boot.d_support,dst=/multicast-relay-config \
-e OPTS="--verbose" \
-e INTERFACES="br10 br50" \
docker.io/scyto/multicast-relay

This should only let VLAN 10 and VLAN 50 talk. However, the Discovery tool also shows one or two devices from VLAN 30 when I'm connected to VLAN 10 or 50. This only happens sporadically; it's not always there. It should NEVER be there, though. I can even see them in the logs (10.0.30.145 - Apple TV, 10.0.30.178 - HomePod, 10.0.30.249 - iPad): log3.txt

Why in the world are these showing up when I run "podman logs -f multicast-relay-10"? Network BR30 wasn't even included in my interface list! What is going on? Are there rogue processes running on my DMP?

At least I'm hoping this will give you guys an idea what may be going on cause I'm about ready to be admitted to an institution... lol

commiepinko commented 2 years ago

@VeniceNerd… Here's my final (well, for the moment) configuration for reference, though I doubt there's anything new to be discovered there. FYI, I have a single complementary firewall rule that blocks inter-VLAN traffic, and nothing else.

Wish I had more to suggest.

UDMP Customization.zip

VeniceNerd commented 2 years ago

@VeniceNerd… Here's my final (well, for the moment) configuration for reference, though I doubt there's anything new to be discovered there. FYI, I have a single complementary firewall rule that blocks inter-VLAN traffic, and nothing else.

Wish I had more to suggest.

UDMP Customization.zip

@commiepinko Yeah, I'm starting to think that this is not about my relay configuration but that maybe something weird is going on with my network setup? Or maybe I accidentally have another rogue instance of the relay running at the same time? I really don't think so, but maybe worth a look? Is there maybe a command to display all running podman instances?

Also, I'm really hoping that @alsmith may have an idea why VLAN 30 traffic is being passed when only VLANs 10 and 50 are defined in the relay. I have a feeling once I crack that nut the rest will fall into place...

commiepinko commented 2 years ago

@VeniceNerd Is there maybe a command to display all running podman instances?

podman container list will show you running containers. podman container list --all displays all containers (except, for some reason, unifi-os).

Podman doesn't have the hugely detailed command structure of Docker, but there's still a lot one can do.

VeniceNerd commented 2 years ago

@VeniceNerd Is there maybe a command to display all running podman instances?

podman container list will show you running containers. podman container list --all displays all containers (except, for some reason, unifi-os).

Podman doesn't have the hugely detailed command structure of Docker, but there's still a lot one can do.

Here's what I have running:

Screen Shot 2021-12-13 at 10 25 37 AM

So that looks correct, no? If so I will open a new ticket with this specific issue since I think we are now quite far outside the scope of the original issue.

commiepinko commented 2 years ago

@VeniceNerd That's identical to mine.

VeniceNerd commented 1 year ago

Hey @commiepinko and @alsmith, I am trying all of this again with a brand new network setup, now running multicast-relay on a Raspberry Pi instead of on the UDM Pro SE (I upgraded!). I am still seeing the EXACT same issue that I saw last year. As soon as I use the ifFilter.json, stuff goes sideways.

Is there ANY chance that my network addresses (10.1.1.0 / 10.1.10.0 / 10.1.20.0 / 10.1.30.0) are messing this up? It's the only difference I can see from what @commiepinko has going on. His are all 192.168.0.0 / 192.168.1.0, etc...

I just don't understand how it can work so well for him and it's an absolute disaster for me. I have been trying this for almost two years now and may actually go insane soon. lol

VeniceNerd commented 1 year ago

I opened a new issue with my observations on Raspberry Pi: https://github.com/scyto/multicast-relay/issues/17

This is with a brand new UDM Pro SE, brand new network, and multicast-relay running on a Raspberry Pi. I still run into pretty much the same issue that I had last year on the UDM Pro. I don't understand what I am doing wrong...