Open chiefnoah opened 9 months ago
I used the example `register` tool to "fake" registration of my service and I get the expected behavior (a crash, but it resolves the service) in my "datalistener".

I am unable to get a response when I use the `query` example tool for my service with the multicast IP set as the IP in the `ServiceInfo`, but I do get one when I set it to my workstation's IP.
The `ip` in `ServiceInfo::new()` is meant to support one or more unicast IP addresses. I'm actually not aware of use cases where a multicast IP is used for registering a service instance. (Should we add a check for that?)

When sending out packets via multicast, normally it's only the destination address that is a multicast address, not the source address. In other words, in your case, the `datablaster` service would register on a regular unicast IP but send out data to a multicast group address.

To be honest, based on my limited networking knowledge, I don't think it's possible to "bind" to a multicast IP as the source IP. What happens is that you bind to a regular unicast IP and join it to a multicast group. The receiver also joins the same multicast group to receive the data. Would that make sense?
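For example, a minimal receiver sketch with the standard library (the group address and port are made up for illustration): the socket binds a regular local address and then joins the group, rather than trying to use the group as a source IP.

```rust
use std::net::{Ipv4Addr, UdpSocket};

fn main() -> std::io::Result<()> {
    // Bind to the wildcard (or a specific local unicast) address on the
    // data port; the multicast group itself is not the bind address.
    let socket = UdpSocket::bind(("0.0.0.0", 5000))?;

    // Join the multicast group; 0.0.0.0 lets the OS pick the interface.
    let group = Ipv4Addr::new(239, 1, 2, 3);
    socket.join_multicast_v4(&group, &Ipv4Addr::UNSPECIFIED)?;

    let mut buf = [0u8; 1500];
    let (len, src) = socket.recv_from(&mut buf)?;
    println!("received {len} bytes from {src}");
    Ok(())
}
```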
> I'm actually not aware of use cases where a multicast IP is used for registering a service instance.

Same, but I'm creating one :slightly_smiling_face:

> (Should we add a check for that?)

Probably; I think it's the core of my issue.
> When sending out packets via multicast, normally it's only the destination address that is a multicast address, not the source address. In other words, in your case, the `datablaster` service would register on a regular unicast IP but send out data to a multicast group address.

This is kinda correct. You don't need to bind an IP at all on the data producer side; the fact that I do so is a quirk and limitation of the Rust `UdpSocket` type. Multicast works sort of the opposite of how unicast works: the data producer `connect`s to the multicast IP and sends it packets. On the receiving side, the "listeners" bind to the multicast IP. You do not necessarily need to join the multicast group as a producer; you only must join if you wish to receive data from the multicast group. The source address for datagrams sent to a multicast address is not used for routing, but I'm not actually sure what the "listener" sees as the source address for packets that are sent to it (it might be the multicast address, but I think it's probably the sender interface address).
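As a rough sketch of that producer side (using plain std::net with a made-up group address, not my actual code):

```rust
use std::net::UdpSocket;

fn main() -> std::io::Result<()> {
    // The producer binds an ordinary local address; it does not need to
    // join the group just to send to it.
    let socket = UdpSocket::bind("0.0.0.0:0")?;

    // The datagram's destination is the multicast group; the source
    // address stays whatever the sender's interface address is.
    socket.send_to(b"hello group", "239.1.2.3:5000")?;
    Ok(())
}
```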
> I don't think it's possible to "bind" to a multicast IP as source IP
Correct. You bind to the multicast address when you want to receive data from the multicast group, making it the destination address.
My understanding of how DNS-SD works is you have a PTR record that allows you to enumerate the instances of an arbitrary service on the network and SRV records for each instance. The SRV records then (along with other metadata such as port and priority) point to a domain name that should have a corresponding A and/or AAAA record that can be resolved.
In my case, I want to store the multicast IP that my producer selects (in the example it's random, but in practice I will use a smarter algorithm to prevent collisions and select multicast IPs) in the A record that the SRV record points to.
To do that, the producer needs to be able to respond to queries for addresses that do not correspond to an address that it technically owns. The rest is handled by the semantics of the service's protocol. I ran out of time last night while digging into the code for handling queries, but my guess is queries are being dropped/ignored if the IP isn't owned by the producer instance of the daemon.
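To make the attempt concrete, here is roughly what I'm registering (names and addresses are illustrative, and the exact `ServiceInfo::new` signature may vary between mdns-sd versions); the point is that the `ip` argument is the multicast group rather than an address this host owns.

```rust
use mdns_sd::{ServiceDaemon, ServiceInfo};
use std::collections::HashMap;

fn main() {
    let mdns = ServiceDaemon::new().expect("failed to create daemon");

    // The address record the SRV points at holds the multicast group,
    // not an address this host actually owns. This is the part that
    // currently does not seem to resolve via the query example.
    let info = ServiceInfo::new(
        "_datablaster._udp.local.",       // service type
        "blaster1",                       // instance name
        "blaster1.local.",                // host name the SRV record targets
        "239.1.2.3",                      // multicast group as the `ip` argument
        5000,
        HashMap::<String, String>::new(), // no TXT properties
    )
    .expect("valid service info");
    mdns.register(info).expect("register failed");
}
```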
Also, to clarify: the example I linked above works completely if you hard-code the multicast IP instead of using mdns_sd.

I should be able to have some patches soon.
> but I'm not actually sure what the "listener" sees as the source address for packets that are sent to it (it might be the multicast address, but I think it's probably the sender interface address).

I think the "listener" would see the sender interface address as the source address (and the multicast address as the destination address).
> In my case, I want to store the multicast IP that my producer selects (in the example it's random, but in practice I will use a smarter algorithm to prevent collisions and select multicast IPs) in the A record that the SRV record points to.

If I understand correctly, your goal is to let the listener find out the multicast address the data will be sent to. I think you should use a TXT record to store that multicast address and let the "listener" learn it from there.
In short, I think you can:

1. Register your `datablaster` service like a regular mDNS service publisher using its local IP. You can also omit the IP and let the library fill it in by calling `ServiceInfo::enable_addr_auto()`.
2. In the service info of `datablaster`, publish its multicast address in the `properties` (TXT record).
3. The listener will detect the service via a regular mDNS query and obtain the multicast address from `get_property()` (or `get_property_val()`). The "listener" would then bind to this multicast address.
4. Then in the business logic, `datablaster` would send out data to the multicast address and the "listener" would receive the data.
(P.S. The underlying rationale is: the multicast address is not your server address, regardless of whether you use mdns-sd or not. It would be incorrect IMO to list it as an A/AAAA record of the service itself. The multicast address is your data destination address and where the "listener" receives the data.)
> The underlying rationale is: the multicast address is not your server address, regardless of whether you use mdns-sd or not.

This is fair, but any such limitation is not one imposed by any of the RFCs; it is instead a safeguard made by the library/application to prevent unintentional misconfiguration.

> It would be incorrect IMO to list it as an A/AAAA record of the service itself. The multicast address is your data destination address and where the "listener" receives the data.

A/AAAA records are address records; they can even hold a netmask. Storing a multicast address in them is completely valid.

The RFC for mDNS has a brief but authoritative opinion on this:

> Except in the case of proxying and other similar specialized uses, addresses in IPv4 or IPv6 address records in Multicast DNS responses MUST be valid for use on the interface on which the response is being sent.

Providing an A record for a multicast address for which an mDNS responder is the authoritative source of the data being sent to that multicast group would fall under the proxying category, but even if you do not accept that reading, it falls under "similar specialized uses".

It's fine if you don't want to support that case; I'll just fork and move on.
> Providing an A record for a multicast address for which an mDNS responder is the authoritative source of the data being sent to that multicast group would fall under the proxying category, but even if you do not accept that reading, it falls under "similar specialized uses".
>
> It's fine if you don't want to support that case; I'll just fork and move on.

To be clear, I'd be happy to support this use case. My comments are saying you can achieve that without adding code to support multicast addresses in A records. But ultimately you know best about your needs, and if you have a patch on the way, I'll be glad to take a look and look forward to having that support added.
I'm working on an application that uses multicast for efficiently sending large amounts of ephemeral data. A sanitized example of what I'm trying to attempt can be found here, but in short it appears there's an issue with responding to queries for a service (and A/AAAA record) associated with a multicast address.
It's possible I'm just using the library wrong, so I apologize in advance if that is the case.
I suspect this log message, on the half of my application that `register`s a service, is potentially related:

I'll admit, I'm not the most familiar with the details of mDNS/DNS-SD, so in the meantime I'm going to both dig into the RFCs and the code to make sure I'm not messing up. Otherwise, some direction would be greatly appreciated!
If this does end up being a bug, I'm happy to send patches.