Does this mean 2 sockets for every IP address?
Yes, but that is only if we decide to do unicast responses/requests.
For now, I'm just focusing on not touching any of that @CMCDragonkai.
We're going to have to do it the way ciao does it: one socket per interface, but bound to the wildcard address.
Make sure that the sockets will be able to join both the IPv4 multicast group and IPv6 multicast group.
Some tests to consider:
@amydevs
- Use INFO for most things. The only warnings right now should be for when os.networkInterfaces provides invalid information and you can still proceed.
- Use X, Xing, Xed for the first keyword of log messages, where X is the verb. Only use Xing when it's a one-off thing; X and Xed are for pre-X and post-X respectively.
- Test MDNS in different scenarios.

On Linux, due to Node setting the IP_MULTICAST_ALL flag to true, sockets bound to any wildcard address (::, ::0, 0.0.0.0) will receive all multicast traffic from all added groups on the system! This is not the same behavior as on Windows/macOS/BSD.
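For reference, a minimal Node dgram sketch of the setup being discussed; the group, port, and interface address below are placeholders, not the library's final configuration:

```ts
import * as dgram from 'node:dgram';

// Placeholder values for illustration.
const MDNS_GROUP_IPV4 = '224.0.0.251';
const MDNS_PORT = 5353;
const ifaceAddress = '192.168.1.10'; // the interface we intend to join the group on

const socket = dgram.createSocket({ type: 'udp4', reuseAddr: true });
socket.bind(MDNS_PORT, '0.0.0.0', () => {
  // Join the mDNS group on one specific interface.
  socket.addMembership(MDNS_GROUP_IPV4, ifaceAddress);
  // On Linux, because IP_MULTICAST_ALL applies to this wildcard-bound socket,
  // it will also see multicast traffic for groups added by other sockets on
  // the system; Windows/macOS/BSD do not behave this way.
});
socket.on('message', (msg, rinfo) => {
  console.log(`received ${msg.length} bytes from ${rinfo.address}:${rinfo.port}`);
});
```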
Note that https://github.com/clshortfuse/node-getsockethandleaddress indicates that you can get the sockfd integer just by doing socket._handle.fd, specifically for Linux and macOS I think.
You should confirm whether this fix is needed for macOS.
You'll still need to write the NAPI code to actually do something with that file descriptor number.
The _handle.fd access does come with some sort of warning message. See if you can suppress that...?
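A possible sketch of grabbing that fd from TypeScript; note that _handle is an undocumented Node internal, so this is an assumption about Node's internals rather than a stable API:

```ts
import * as dgram from 'node:dgram';

const socket = dgram.createSocket({ type: 'udp4', reuseAddr: true });
socket.bind(5353, '0.0.0.0', () => {
  // `_handle` is undocumented and internal; cast through `any` to reach it.
  // The fd only exists once the socket has been bound.
  const fd: number | undefined = (socket as any)._handle?.fd;
  console.log('underlying socket fd:', fd);
  // This number could then be handed to a NAPI addon that calls setsockopt()
  // on it, since Node itself does not expose options like IP_MULTICAST_ALL.
});
```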
The intended behavior is that a socket bound to "0.0.0.0" or "::0", with IP_MULTICAST_ALL disabled, will not receive any multicast messages at all. Furthermore, when addMembership is called specifying a specific interface, multicast packets will only be received from that group on that interface.
It would seem that disabling IP_MULTICAST_ALL works as intended on udp4 sockets.
However, when disabling IPV6_MULTICAST_ALL without calling addMembership, it works as intended; but as soon as I call addMembership, that socket seems to start receiving multicast packets from every interface even when I've only specified one specific interface.
It seems that there are several socket options that add an IPv6 socket to a multicast group, namely IPV6_JOIN_GROUP and IPV6_ADD_MEMBERSHIP. They all use the ipv6_mreq struct as a configuration option rather than the ip_mreq struct.
The key difference between these is that ipv6_mreq takes in the interface index, whilst ip_mreq takes an interface IP address.
Node, upon calling addMembership, will call uv_udp_set_membership with JOIN_GROUP, passing in the interface IP address.
For udp4, libuv will throw ENODEV when no interface corresponds to the address you have provided.
However, on udp6, libuv tries to look up the index (scope id) of the interface with the address you've provided using uv_ip6_addr. It SHOULD give an ENODEV error if it is invalid, but it is not bubbling up to Node for some reason.
On udp6, libuv chooses the scope id from whatever is after the % sign; if it can't be found, it is ignored. That is why addMembership with just an IPv6 address is not enough; I think the scope id needs to be provided after the percentage sign. This is only on Windows (on Linux, providing the network interface name after the % sign is correct).
On udp6, using either IPV6_ADD_MEMBERSHIP or IPV6_JOIN_GROUP will, for whatever reason, make your socket listen to multicast packets on all interfaces rather than just the single specified one. The native code that I tested is:
#include <arpa/inet.h>
#include <net/if.h>
#include <netinet/in.h>
#include <sys/socket.h>

// Joins an IPv6 multicast group on one interface; returns the interface index, or -1 on failure.
int AddMulticastMembership6(int sockfd, char* group, char* ifname) {
  struct ipv6_mreq mreq;
  inet_pton(AF_INET6, group, &mreq.ipv6mr_multiaddr);
  mreq.ipv6mr_interface = if_nametoindex(ifname);
  if (setsockopt(sockfd, IPPROTO_IPV6, IPV6_JOIN_GROUP, &mreq, sizeof(mreq)) < 0) return -1;
  return mreq.ipv6mr_interface;
}
I've found that IP_BLOCK_SOURCE also exists. This could be useful in filtering out our own traffic. However, we would need to implement a platform-agnostic solution if we wanted to use this across all platforms. For now, having a set IP to filter out seems fine to me.
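A sketch of the simpler "set IP to filter out" approach, comparing a packet's source address against our own interface addresses; the handlePacket callback is hypothetical:

```ts
import * as os from 'node:os';

// Collect every local interface address so we can ignore our own multicast echo.
function getLocalAddresses(): Set<string> {
  const local = new Set<string>();
  for (const infos of Object.values(os.networkInterfaces())) {
    for (const info of infos ?? []) {
      local.add(info.address);
    }
  }
  return local;
}

const localAddresses = getLocalAddresses();

// Inside the socket's message handler:
// socket.on('message', (msg, rinfo) => {
//   if (localAddresses.has(rinfo.address)) return; // traffic we sent ourselves
//   handlePacket(msg, rinfo);
// });
```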
As a workaround to https://github.com/MatrixAI/js-mdns/issues/1#issuecomment-1637365999, I'm trying to bind a unicast socket first, then bind all the multicast sockets after. This is done so that the first unicast socket will catch all of the necessary unicast traffic.
I'm at the point of implementing this. However, even though I've made sure that the unicast socket is the first thing to be bound on a particular port, as soon as I bind other sockets, none of the sockets seem to be receiving any unicast traffic at all!
I wonder whether the behavior of the first socket bound to a port on an interface (with reuseaddr set to true) receiving all unicast traffic is deterministic...
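A sketch of the bind order being attempted here, using the unix-style variant where multicast sockets bind to the group address (on Windows they are bound to the wildcard instead, as described below); the helper names and structure are illustrative only:

```ts
import * as dgram from 'node:dgram';

const MDNS_GROUP_IPV4 = '224.0.0.251';
const MDNS_PORT = 5353;

// Promisified bind so that the bind order is guaranteed.
function bindSocket(socket: dgram.Socket, port: number, address: string): Promise<void> {
  return new Promise((resolve, reject) => {
    socket.once('error', reject);
    socket.bind(port, address, () => resolve());
  });
}

async function bindAll(interfaceAddresses: string[]): Promise<dgram.Socket[]> {
  // 1. Bind the unicast socket first so it is the first binder on the port,
  //    intended to be the one that receives unicast responses.
  const unicastSocket = dgram.createSocket({ type: 'udp4', reuseAddr: true });
  await bindSocket(unicastSocket, MDNS_PORT, '0.0.0.0');
  // 2. Then bind one multicast socket per interface and join the group on it.
  const multicastSockets: dgram.Socket[] = [];
  for (const address of interfaceAddresses) {
    const socket = dgram.createSocket({ type: 'udp4', reuseAddr: true });
    await bindSocket(socket, MDNS_PORT, MDNS_GROUP_IPV4);
    socket.addMembership(MDNS_GROUP_IPV4, address);
    multicastSockets.push(socket);
  }
  return [unicastSocket, ...multicastSockets];
}
```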
On macOS, tests run correctly; it's just that some counted references are making the cleanup (afterAll) of MDNS hang. I've pinned it down to the sending of the goodbye packets, but I'm still figuring out a solution.
On Windows, it is not possible to bind to a multicast address like you can on any unix system. On Windows systems, I am binding each multicast socket to "::" instead. This is functionally the same as binding to the multicast address in my case, as I'm binding a unicast socket before all the other multicast sockets are bound. Windows makes sure that only the first socket that you've bound will receive multicast traffic.
> On macOS, tests run correctly; it's just that some counted references are making the cleanup (afterAll) of MDNS hang. I've pinned it down to the sending of the goodbye packets, but I'm still figuring out a solution.
Are you tracking all resources between start and stop? Always make sure to keep track of them. We already have problems with memory leaks and we have to be very strict here.
Merged into staging now, doing the release.
Is this fully addressed by MDNS? Are there still plans to handle Hairpinning, PMP and PCP?
PMP and PCP should be done separately.
As for hairpinning, I'm not sure how that would be achieved.
I created https://github.com/MatrixAI/Polykey/issues/536 to track PCP/PMP via UPNP. I did find a project that could be wrapped in JS to make use of it.
@amydevs please tick off everything that was done above too.
Created by @CMCDragonkai
Specification
There are two types of data flow in the MDNS system: Polling (Pull) and Announcements/Responses (Push). When a node joins the MDNS group, its records are pushed to all other nodes. However, for the joined node to discover other nodes, it needs to conduct polling queries that other nodes respond to.
Sending Queries
The mDNS spec states that query packets can have additional records, but we won't bother with this as it isn't necessary. Queries won't carry any other records, much like a standard DNS query (albeit an mDNS query packet can contain multiple questions).
In the case that a responder is bound to two interfaces that are connected to the same network (such as a laptop with WiFi + ethernet connected), queries asking for the IP address of a hostname of the responder will receive multiple responses with different IP addresses.
This behavior is documented in RFC 6762 §14.
Control Flow
Unlike other mDNS libraries, we were going to use an AsyncIterator in order to give the consumer more control over the querying. However, the query system has since been decided to have its runtime contained within MDNS rather than being consumer-driven. This means that scheduled background queries will have to be managed by a TaskManager (similar to Polykey); a rough sketch follows.
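This is not Polykey's actual TaskManager; all names and shapes here are assumptions, illustrating only the idea that MDNS schedules its own background queries:

```ts
// Illustrative only: a minimal scheduler for MDNS-internal background queries.
type QueryTask = {
  questions: Array<{ name: string; type: 'PTR' | 'SRV' | 'TXT' | 'A' | 'AAAA' }>;
  delay: number; // milliseconds until the query should be (re)sent
};

class QueryScheduler {
  protected timers: Set<NodeJS.Timeout> = new Set();

  constructor(protected sendQuery: (task: QueryTask) => void) {}

  // MDNS schedules and re-schedules queries itself rather than the consumer.
  public schedule(task: QueryTask): void {
    const timer = setTimeout(() => {
      this.timers.delete(timer);
      this.sendQuery(task);
    }, task.delay);
    this.timers.add(timer);
  }

  // Cancel everything outstanding, e.g. when MDNS stops.
  public stop(): void {
    for (const timer of this.timers) clearTimeout(timer);
    this.timers.clear();
  }
}
```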
Receiving Announcements/Responses (Pull)
Data Flow
Because queries are basically fire and forget, the main part comes in the form of receiving query responses from the multicast group. Hence, our querier needs to be able to collect records with a fan-in approach using a muxer that is reactive:
This can also be interpreted as a series of state transitions to completely build a service.
There also needs to be consideration that, if the threshold for a muxer to complete is not reached, additional queries are sent off in order to reach the finished state.
The decision tree for such would be as follows:
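As a rough illustration of the fan-in muxer and the "send more queries until finished" idea described above; the record and service shapes are assumptions, not the final types:

```ts
// Illustrative only: accumulate records from multiple responses into a service,
// and report which names still need follow-up queries to reach the finished state.
type PartialService = {
  name: string;
  port?: number; // from SRV
  txt?: Map<string, string>; // from TXT
  addresses: string[]; // from A/AAAA
};

type InboundRecord =
  | { name: string; type: 'SRV'; data: { port: number } }
  | { name: string; type: 'TXT'; data: Map<string, string> }
  | { name: string; type: 'A' | 'AAAA'; data: string };

function isComplete(service: PartialService): boolean {
  return service.port != null && service.txt != null && service.addresses.length > 0;
}

function mux(services: Map<string, PartialService>, record: InboundRecord): string[] {
  const service = services.get(record.name) ?? { name: record.name, addresses: [] };
  if (record.type === 'SRV') service.port = record.data.port;
  else if (record.type === 'TXT') service.txt = record.data;
  else service.addresses.push(record.data);
  services.set(record.name, service);
  // Anything not yet complete is a candidate for an additional query.
  return isComplete(service) ? [] : [record.name];
}
```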
Control Flow
Instances of MDNS will extend EventTarget in order to emit events for service discovery/removal/etc.
The cache will be managed using a timer that is set to the soonest record TTL, rather than a timer for each record. The cache will also need to be an LRU in order to make sure that malicious responders cannot overwhelm it.
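A sketch of the single-timer expiry idea; the event name and record keys are assumptions, and the LRU bound is left out:

```ts
// Illustrative only: one timer for the whole cache, always set to the soonest TTL.
class RecordExpiredEvent extends Event {
  constructor(public detail: { key: string }) {
    super('expired');
  }
}

class ResourceRecordCache extends EventTarget {
  protected expiries: Array<{ key: string; expiresAt: number }> = [];
  protected timer?: NodeJS.Timeout;

  public set(key: string, ttlSeconds: number): void {
    this.expiries.push({ key, expiresAt: Date.now() + ttlSeconds * 1000 });
    this.expiries.sort((a, b) => a.expiresAt - b.expiresAt);
    this.resetTimer();
  }

  protected resetTimer(): void {
    if (this.timer != null) clearTimeout(this.timer);
    const soonest = this.expiries[0];
    if (soonest == null) return;
    this.timer = setTimeout(() => {
      const expired = this.expiries.shift();
      if (expired != null) this.dispatchEvent(new RecordExpiredEvent({ key: expired.key }));
      this.resetTimer();
    }, Math.max(0, soonest.expiresAt - Date.now()));
  }
}
```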
Sending Announcements
Control Flow
This will need to be experimented with a little. Currently the decisions are:
Types
Messages can be Queries or Announcements or Responses. This can be expressed as:
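One possible shape for this, sketched as TypeScript types; field names and flag sets are assumptions, with announcements treated as unsolicited responses:

```ts
enum PacketType {
  QUERY = 0,
  RESPONSE = 1, // both announcements and query responses use this
}

type QuestionRecord = {
  name: string;
  type: number; // QTYPE
  class: number; // QCLASS
  unicast: boolean; // the QU bit, i.e. whether a unicast response is requested
};

type ResourceRecord = {
  name: string;
  type: number;
  class: number;
  flush: boolean; // the cache-flush bit
  ttl: number;
  data: unknown;
};

type Packet = {
  id: number;
  flags: { type: PacketType; authoritativeAnswer: boolean; truncation: boolean };
  questions: QuestionRecord[];
  answers: ResourceRecord[];
  additionals: ResourceRecord[];
  authorities: ResourceRecord[];
};
```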
Parser / Generator
Parsing and generation together are not isomorphic, as different parsed Uint8Array packets can result in the same packet structure.
Every worker parser function will return its value wrapped in a result object (a sketch of this type is given after the function listing below). The point of this is that whatever hasn't been parsed gets returned in .remainder, so we don't have to keep track of the offset manually. This means that each worker function also needs to take in a second Uint8Array representing the original data structure.
parsePacket(Uint8Array): Packet
parseHeader(Uint8array): {id: ..., flags: PacketFlags, counts: {...}}
parseId(Uint8array): number
parseFlags(Uint8Array): PacketFlags
parseCount(Uint8Array): number
parseQuestionRecords(Uint8Array): {...}
parseQuestionRecord(Uint8Array): {...}
parseResourceRecords(Uint8Array): {...}
parseResourceRecord(Uint8Array): {...}
parseResourceRecordName(Uint8Array): string
parseResourceRecordType(Uint8Array): A/CNAME
parseResourceRecordClass(Uint8Array): IN
parseResourceRecordLength(Uint8Array): number
parseResourceRecordData(Uint8Array): {...}
parseARecordData(Uint8Array): {...}
parseAAAARecordData(Uint8Array): {...}
parseCNAMERecordData(Uint8Array): {...}
parseSRVRecordData(Uint8Array): {...}
parseTXTRecordData(Uint8Array): Map<string, string>
parseOPTRecordData(Uint8Array): {...}
parseNSECRecordData(Uint8Array): {...}
ErrorDNSParse
- Generic error with a message that contains information for different exceptions, i.e. id parse failed at ...
parseResourceRecordKey and parseQuestionRecordKey and parseRecordKey
- parseLabels
generatePacket(Packet): Uint8Array
generateHeader(id, flags, counts...)
generateFlags({ ... }): Uint8Array
generateCount(number): Uint8Array
generateQuestionRecords(): Uint8Array
- flatMap(generateQuestion)
generateQuestionRecord(): Uint8Array
generateResourceRecords()
generateRecord(): Uint8Array
generateRecordName
- "abc.com" - ...RecordKey
generateRecordType
- A/CNAME
generateRecordClass
- IN
generateRecordLength
generateRecordData
generateARecordData(string): Uint8Array
generateAAAARecordData(string): Uint8Array
generateCNAMERecordData(string): Uint8Array
generateSRVRecordData(SRVRecordValue): Uint8Array
generateTXTRecordData(Map<string, string>): Uint8Array
generateOPTRecordData(Uint8Array): Uint8Array
generateNSECRecordData(): Uint8Array
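A sketch of the .remainder wrapper mentioned before the listing above, together with one worker function using it; exact naming is an assumption:

```ts
type Parsed<T> = {
  data: T;
  remainder: Uint8Array; // whatever this worker function did not consume
};

// Example worker: parse the 16-bit packet id at the start of the header.
// Workers that decode compressed names would also take the original packet
// as a second argument so compression pointers can be followed.
function parseId(input: Uint8Array): Parsed<number> {
  if (input.length < 2) throw new Error('id parse failed: need at least 2 bytes');
  const view = new DataView(input.buffer, input.byteOffset, input.byteLength);
  return {
    data: view.getUint16(0, false), // DNS fields are big-endian
    remainder: input.subarray(2),
  };
}
```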
MDNS
MDNS
MDNS.query()
MDNS.registerService()
MDNS.unregisterService()
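A hypothetical usage sketch of this public API; the import, option objects, event names, and start/stop signatures are assumptions rather than the final interface:

```ts
import { MDNS } from '@matrixai/mdns'; // package/export name assumed

async function main(): Promise<void> {
  const mdns = new MDNS();
  await mdns.start({ port: 5353 });
  // MDNS extends EventTarget, so discovery is consumed as events.
  mdns.addEventListener('service', (evt) => {
    console.log('discovered service', evt);
  });
  // Advertise our own service and announce it to the group.
  mdns.registerService({ name: 'polykey', type: '_polykey._udp', port: 1314 });
  // Poll for other services of the same type.
  mdns.query({ type: '_polykey._udp' });
  // ...
  mdns.unregisterService({ name: 'polykey', type: '_polykey._udp' });
  await mdns.stop();
}

void main();
```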
Testing
We can use two MDNS instances that interact with each other to test both querying and responding, on separate ports.
Additional Context
The following discussion from 'Refactoring Network Module' MR should be addressed:
@CMCDragonkai: (+3 comments)
https://news.ycombinator.com/item?id=8229792
https://blog.apnic.net/2022/05/03/how-nat-traversal-works-concerning-cgnats/
https://github.com/MatrixAI/Polykey/issues/487#issuecomment-1294558470
https://github.com/MatrixAI/Polykey/issues/487#issuecomment-1294742114
https://support.citrix.com/article/CTX205483/how-to-accommodate-hairpinning-behaviour-in-netscaler
mDNS RFC https://datatracker.ietf.org/doc/html/rfc6762
DNS-SD RFC https://datatracker.ietf.org/doc/html/rfc6763
Domain Names RFC https://datatracker.ietf.org/doc/html/rfc1035
Extension Mechanisms for DNS RFC https://datatracker.ietf.org/doc/html/rfc6891
NSEC RFC https://datatracker.ietf.org/doc/html/rfc3845
Tasks