home-assistant / plugin-multicast

Multicast implementation for Home Assistant
Apache License 2.0

pimd integration #61

Closed · jens-maus closed this 2 months ago

jens-maus commented 1 year ago

This PR refs #17 and is a first draft of getting pimd (https://github.com/troglobit/pimd) compiled and integrated into plugin-multicast. Evaluation, setup and testing are still missing, though. But this draft should help in getting things sorted out over time, so that we finally get pimd integrated for third-party add-ons like the RaspberryMatic CCU, which also require UDP multicast traffic to be routed between the Docker networks and the host network, something pimd can hopefully solve.

jens-maus commented 1 year ago

@pvizeli Would you please have a look at my proposed changes here, as I am not so familiar with the Home Assistant build environment? Could you also provide some hints on how to get the built plugin container installed/transferred into a Home Assistant test environment, so that I can try things out and get the /etc/pimd.conf config file modified in a real-life environment?

pvizeli commented 1 year ago

> @pvizeli Would you please have a look at my proposed changes here, as I am not so familiar with the Home Assistant build environment? Could you also provide some hints on how to get the built plugin container installed/transferred into a Home Assistant test environment, so that I can try things out and get the /etc/pimd.conf config file modified in a real-life environment?

We are using our tool tempio: https://github.com/home-assistant/plugin-dns/blob/master/rootfs/etc/cont-init.d/corefile.sh. The Supervisor writes the config for plugins into the filesystem and we can pick that up with tempio.

The communication between the Supervisor and plugins is a bit limited. We are working on a new generation that includes a small daemon inside the plugin to control things. But I think for now, as an MVP, using the config file should work.
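
A minimal sketch of what such a cont-init script could look like for this plug-in, modeled on the linked corefile.sh (the /data/pimd.json and pimd.gtpl paths are hypothetical placeholders):

```bash
#!/usr/bin/with-contenv bashio
# Sketch: render /etc/pimd.conf from a tempio template, using the
# plugin config the Supervisor writes to disk. Paths are assumptions.
CONFIG="/data/pimd.json"

tempio \
    -conf "${CONFIG}" \
    -template /usr/share/tempio/pimd.gtpl \
    -out /etc/pimd.conf
```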

gamer123 commented 1 year ago

Hi, as far as I understood it, the HA kernel since HA OS 9.2 is able to handle multicast, and it is possible to run pimd in Docker. Now the HA multicast plugin needs to forward the traffic from the host network into the Docker network so that RaspberryMatic can access it, and probably the other way round as well.

Is it possible to get this dev/draft plugin into a running HA to test/play around with or to optimize the pimd configuration?

agners commented 1 year ago

> Could you also provide some hints on how to get the built plugin container installed/transferred into a Home Assistant test environment, so that I can try things out and get the /etc/pimd.conf config file modified in a real-life environment?

To get an all-in-one build environment, the builder can be used.
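
Roughly along these lines (a sketch; the flags follow my reading of the builder README and should be double-checked there):

```bash
# Build the amd64 variant of the plug-in from the current checkout
# without pushing the result (--test).
docker run --rm --privileged \
    -v /var/run/docker.sock:/var/run/docker.sock:ro \
    -v "$(pwd)":/data \
    ghcr.io/home-assistant/amd64-builder \
    --amd64 --test -t /data
```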

I often end up calling docker build manually, for more control. In this case, it is as simple as:

```bash
docker build --build-arg BUILD_FROM=ghcr.io/home-assistant/amd64-base:3.16 --build-arg BUILD_ARCH=amd64 \
    --build-arg MDNS_REPEATER_VERSION=1.2.0 --build-arg PIMD_VERSION=2.3.2 \
    -t <your-docker-hub-user>/amd64-hassio-multicast:latest -f Dockerfile .
```

I've fixed two issues with the latest base images and built the plug-in. It seems pimd is running, however I am not sure if it does what it should 😅 Maybe it also interferes with mDNS etc... This needs more testing!

To test it on a running machine, you can replace the current multicast plugin as follows:

```bash
docker pull docker.io/agners/amd64-hassio-multicast:latest
docker tag docker.io/agners/amd64-hassio-multicast:latest ghcr.io/home-assistant/amd64-hassio-multicast:2022.02.0
docker stop hassio_multicast
ha multicast restart
```

To restore the original:

```bash
docker pull ghcr.io/home-assistant/amd64-hassio-multicast:2022.02.0
docker stop hassio_multicast
ha multicast restart
```

(I've built it for amd64 and aarch64; replace according to your system's architecture.)

agners commented 1 year ago

> ... config file modified in a real-life environment.

You can find an example of this using bashio and tempio here: https://github.com/home-assistant/addons/blob/master/silabs-multiprotocol/rootfs/etc/cont-init.d/config.sh

jens-maus commented 1 year ago

> I've fixed two issues with the latest base images and built the plug-in. It seems pimd is running, however I am not sure if it does what it should 😅 Maybe it also interferes with mDNS etc... This needs more testing!

Thanks for all these hints, @agners, highly appreciated. I indeed got pimd up and running in the test environment here, but I also don't know whether it works correctly, as I definitely need to read more on the whole UDP multicast topic, it seems. One thing that came to my mind, however: is this Docker-based plugin-multicast itself running on the host network (i.e. in host mode)? Because otherwise we would end up in the same situation that Docker containers usually don't receive any UDP multicast traffic, and thus pimd / plugin-multicast could not forward it to all the other individual containers.

And thanks for the short documentation on how to test all this and how to build a test container. I will see if I can try all this rather soon, and then we can check whether pimd works as expected and is able to forward multicast UDP traffic. I fear, however, that we will need some kind of testbed here to actually reproduce/simulate UDP multicast traffic arriving at pimd.
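
For such a testbed, a pair of socat one-liners might already be enough to simulate a sender and a receiver (the group 239.255.10.10, port 43439 and TTL below are arbitrary example values):

```bash
# Receiver (e.g. in a container on the hassio network): join the
# group on any interface and print every datagram arriving on 43439.
socat -u UDP4-RECVFROM:43439,ip-add-membership=239.255.10.10:0.0.0.0,fork STDOUT

# Sender (on the host network or another LAN machine): emit a test
# datagram with a TTL > 1 so it survives being routed by pimd.
echo "multicast test" | socat -u STDIN UDP4-DATAGRAM:239.255.10.10:43439,ip-multicast-ttl=5
```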

agners commented 1 year ago

> I indeed got pimd up and running in the test environment here, but I also don't know whether it works correctly, as I definitely need to read more on the whole UDP multicast topic, it seems.

Yeah, multicast is a bit of a pain. But in general, it's always UDP (as TCP is point-to-point by nature). IGMP is used to register group membership. By default, it does not cross routers (and that is how the internal hassio Docker network is hooked up, through routing and NAT). Multicast relies on L2 doing "the right thing" (either learning which client is connected to which port via IGMP snooping, or falling back to simply broadcasting multicast frames).

With pimd we should make sure that a process within the hassio network can register IGMP membership, and that this membership request is sent out to the (primary) host network. With that, the multicast frames should arrive at the (primary) host network, and pimd should relay them into the hassio network.
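
Whether the membership actually propagates can be checked on the host with standard Linux tooling (eth0 is just an assumed interface name):

```bash
# List the multicast groups currently joined on the host uplink.
ip maddr show dev eth0

# The kernel's per-interface view of IGMP memberships.
cat /proc/net/igmp
```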

> Is this Docker-based plugin-multicast itself running on the host network (i.e. in host mode)? Because otherwise we would end up in the same situation that Docker containers usually don't receive any UDP multicast traffic, and thus pimd / plugin-multicast could not forward it to all the other individual containers.

Yes, the plug-in runs in host mode. Otherwise it could not do this, I agree.

We probably want to limit pimd to only relay between the "primary host network" and the hassio network. You can find the primary host network using bashio (see example here).
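
A rough sketch of what that could look like in /etc/pimd.conf (the interface names eth0, hassio and docker0 are assumptions and must match the actual system):

```bash
cat > /etc/pimd.conf <<'EOF'
# Relay only between the primary host interface and the hassio bridge.
phyint eth0 enable
phyint hassio enable
# Keep pimd off any other interface, e.g. the default Docker bridge.
phyint docker0 disable
EOF
```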

What is also helpful for testing is docker exec-ing into the container, shutting down the pimd s6 service and starting it manually, e.g.:

```bash
docker exec -it hassio_multicast /bin/bash
s6-svc -d -wd /run/service/pimd
pimd -d -f
```

gamer123 commented 1 year ago

@agners nice work! I built a full testbench with HA, the RaspberryMatic CCU add-on with a connected Homematic IP RF-USB stick, the DRAP (multicast device) and two sensors (a presence detector and a temperature sensor, connected via wired IP). I also installed your modified Docker container. Connecting the DRAP is not possible, but as far as I understood, IGMP support is not ready yet.

@jens-maus @agners I can provide guest access via TeamViewer. I moved the setup into a separate guest network. Wireshark sniffing is also possible.

jkunczik commented 1 year ago

I tried something similar, because I wanted to move RaspberryMatic into a DMZ while keeping the DRAP in the IoT network. I didn't use pimd, but smcroute (static multicast routing) instead. While I made sure to increase the TTL of all packets to prevent them from being discarded, the traffic never reached the other interface. From troglobit/smcroute#50 I learned that 224.0.0.0/24 is reserved for link-local traffic only and is thus never routed by the kernel. Unfortunately, that also seems to be the case for pimd (troglobit/pimd#120).
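
For reference, a static smcroute setup along the lines described would look roughly like this (interface names are assumptions), and for groups inside 224.0.0.0/24 the kernel refuses to forward no matter what the config says:

```bash
cat > /etc/smcroute.conf <<'EOF'
# Join the group on the IoT-side interface so switches keep
# forwarding it, then route it towards the DMZ-side interface.
mgroup from eth1 group 224.0.0.120
mroute from eth1 group 224.0.0.120 to eth2
# Caveat: 224.0.0.1 and 224.0.0.120 sit in the link-local
# 224.0.0.0/24 block, which the kernel will not route at all.
EOF
```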

jens-maus commented 1 year ago

It would be great if you could present some summarized network traffic that you may have captured with tools like Wireshark.

jkunczik commented 1 year ago

Sure. Here are my observations. I configured smcroute to allow multicast traffic on 224.0.0.1 and 224.0.0.120 between all my networks:

```mermaid
stateDiagram
    LAN_1 --> IOT_100
    IOT_100 --> LAN_1
    DMZ_11 --> LAN_1
    LAN_1 --> DMZ_11
```

The networks are 192.168.XXX.0/24 (where XXX is noted in the above diagram). My CCU has the IP 192.168.100.11; the DRAP has 192.168.100.1419. The TTL of all multicast packets is increased to five by pre-routing rules, and firewall rules are set so that the packets are not discarded. For SSDP (which was configured analogously), this is working properly (see SSDP dump.csv).
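
The TTL increase mentioned above could be done with a mangle rule roughly like this (a sketch, assuming iptables with the TTL target available):

```bash
# Raise the TTL of incoming multicast packets before routing, so
# forwarded copies are not discarded for arriving with TTL=1.
iptables -t mangle -A PREROUTING -d 224.0.0.0/4 -j TTL --ttl-set 5
```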

Then I ran the net finder from inside the IOT network, as well as from within my LAN.

Observations from the same network (IOT)

In this case, I am observing the following communication (see HmIP same network.csv):

discovery (triggered by the net finder)

```mermaid
sequenceDiagram
    net finder->>224.0.0.1: Who is there?
    DRAP->>224.0.0.1: DRAP with {SN} and {FW}
    CCU->>224.0.0.1: CCU with {SN} and {FW}
    net finder->>224.0.0.1: DRAP with {SN}: Reply
    DRAP->>224.0.0.1: ACK (?)
    net finder->>224.0.0.1: CCU with {SN}: Reply
    CCU->>224.0.0.1: ACK (?)
```

The net finder always sends discovery requests to port 43439. Devices answer from port 43439 to the net finder's source port where the request originated. The TTL of the request is 5; the TTL of the responses is 1.

heartbeats (sent automatically by the Homematic devices)

Both the CCU and the DRAP send regular multicasts to 224.0.0.120:43438 with changing payload but constant payload size; the payload size only seems to vary over longer time spans.

Observations from a different network (LAN)

Here, I only see the discovery request from the net finder, but no response (see HmIP routed.csv). When capturing on the IOT interface at the same time, I don't see the forwarded discovery packet.

⚠️ At some point I received 224.0.0.120 traffic and one 224.0.0.1 packet from IOT on the LAN side. However, without me doing anything, these packets stopped coming.

gamer123 commented 1 year ago

This all sounds tricky. An easy solution could be to use an external USB network adapter and forward this adapter into the Docker container. That way the DRAPs get their own NIC and the rest is unaffected. The downside is that a second network cable and a running DHCP server are needed.

jens-maus commented 1 year ago

We actually have a different kind of solution in the pipeline for the HAP/DRAP communication issue we initially tried to solve with this PR, see https://github.com/jens-maus/RaspberryMatic/issues/1373#issuecomment-1560532150.

While for the ordinary Docker/OCI use case we have already developed a solution using a `macvlan` network (see https://github.com/jens-maus/RaspberryMatic/wiki/Installation-Docker-OCI), for the HA add-on use case of RaspberryMatic we are waiting for the HA devs to develop an integrated solution, so that an add-on can declare that it requires a macvlan network instead of only the standard hassio Docker network. In the meantime, we have developed a manual patch script that gets a running RaspberryMatic HA add-on working until the next HA restart. See here: https://github.com/jens-maus/RaspberryMatic/wiki/Installation-HomeAssistant#hmip-haphmipw-drap-support-patch
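
For reference, the macvlan approach from the wiki boils down to creating a Docker network like this (subnet, gateway and parent interface are placeholders for the actual LAN):

```bash
# Create a macvlan network bridged onto the physical LAN, so the
# container gets its own MAC/IP and receives multicast directly.
docker network create -d macvlan \
    --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
    -o parent=eth0 raspberrymatic
```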

agners commented 2 months ago

Initial discussion on how to integrate macvlan support in the Home Assistant Supervisor has started: https://github.com/home-assistant/architecture/discussions/1034.

I am closing this PR, as the pimd integration approach has been given up on for now.