tomeshnet / prototype-cjdns-pi

Prototype system for mesh networks on single board computers
https://chat.tomesh.net/#/room/#software:tomesh.net
GNU General Public License v3.0

Multi-hop service advertisement model, with plugins #385

Open makew0rld opened 5 years ago

makew0rld commented 5 years ago

This issue details an idea proposed by @darkdrgn2k, originally beginning in chat here.

Basics:

Message format:

Logic:

Modules/Plugins:

darkdrgn2k commented 5 years ago

@darkdrgn2k suggested something similar to IPv6|service,service|TTL|hop|Identifier

What I'm currently theorizing. SSB uses : as a delimiter, but that conflicts with IPv6 addresses.

We need a way of identifying each packet so that a node knows if it already received it once and it came back, or if it's a new one that it needs to forward.

Whatever our identifier is, it needs to be figured out.

The easiest way is a counter, but that could allow forged packets (if that's a problem).
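Rough sketch of what I'm picturing for the pipe-delimited layout, in bash. The sample address, field names, and ID are all made up for illustration, not a final format:

```bash
#!/usr/bin/env bash
# Minimal sketch: split one advertisement on '|' so the ':' characters
# inside the IPv6 address never need escaping. All names are placeholders.
packet='fc00:1234:5678::1|ipfs,ssb|5|2|a1b2c3d4'

IFS='|' read -r src_ip services ttl hop packet_id <<< "$packet"

echo "from:     $src_ip"
echo "services: $services"     # comma-separated list, e.g. ipfs,ssb
echo "ttl=$ttl hop=$hop id=$packet_id"
```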

darkdrgn2k commented 5 years ago

We will have to decide on defaults for this; maybe it can be on a per-package basis.

We should also have a hard limit on TTL so it's not abused, i.e. broadcast to the WHOLE network; say no more than 5.
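Something like this, as a sketch only (the cap of 5 and the function name are placeholders):

```bash
# Sketch: drop anything outside the agreed TTL range, decrement before
# rebroadcasting. MAX_TTL is a placeholder value.
MAX_TTL=5

maybe_forward() {
    local ttl="$1"
    if (( ttl <= 0 || ttl > MAX_TTL )); then
        return 1                        # drop: expired or abusive TTL
    fi
    echo "rebroadcast with ttl=$(( ttl - 1 ))"
}
```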

darkdrgn2k commented 5 years ago

Modules/Plugins:

I built this model in bash and I think it's pretty simple.

The main service receives the packet, and the packet's content is passed through every shell script in a folder (i.e. run-parts multihop-system-name-whatever.d/*), similar to nodeinfo.d.

Each script processes the packet's data and decides what to do with it (if anything).

For example, the IPFS script would not care about SSB, so it would skip it.

The modules would place a file in that folder to handle a certain type of service if they wanted to, e.g. IPFS would do a swarm add while SSB would do a gossip add.
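Roughly like this (the folder name and calling convention here are placeholders, not final):

```bash
#!/usr/bin/env bash
# Sketch of the run-parts style dispatch described above: every executable
# in the plugin folder gets the raw packet on stdin and decides for itself
# whether to act on it.
PLUGIN_DIR=/etc/multihop-services.d

handle_packet() {
    local packet="$1"
    for script in "$PLUGIN_DIR"/*; do
        [ -x "$script" ] || continue
        # e.g. an IPFS handler would parse the packet and exit quietly
        # if the advertised service isn't IPFS.
        printf '%s\n' "$packet" | "$script"
    done
}
```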

darkdrgn2k commented 5 years ago

This saves on space, since a single UDP packet only has about 508 bytes that can be used reliably

I think you're betting on a 576-byte MTU, which would never happen on a cjdns/Yggdrasil network.

A UDP packet on a standard 1500 MTU network is about 1460 bytes after header overhead. To be safe for IPsec/GRE etc., some systems drop it as low as 1300. If it grows beyond that, the packet gets fragmented.
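Rough arithmetic for reference (the exact payload depends on whether it's IPv4 or IPv6 headers on top of the 1500-byte Ethernet MTU, with no tunnel overhead):

```bash
echo $(( 1500 - 20 - 8 ))   # IPv4 + UDP headers -> 1472 bytes of payload
echo $(( 1500 - 40 - 8 ))   # IPv6 + UDP headers -> 1452 bytes of payload
```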

Also, these packets are informational and will be resent at regular intervals, so if one doesn't make it, it won't matter.

makew0rld commented 5 years ago

We need a way of identifying each packet so that a node knows if it already received it once and it came back, or if it's a new one that it needs to forward.

So obviously nodes won't send the message back to where they got it from, but messages can still be received twice, of course. I'm thinking the identifier part can just be a randomly generated UUID, possibly a truncated one depending on how many bytes we get. Nodes can cache the UUIDs they know of for a certain amount of time, and recognize duplicate messages based on that.
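Rough sketch of the caching idea (the paths, timeout, truncation length, and the use of uuidgen are all just assumptions for illustration):

```bash
#!/usr/bin/env bash
# Sketch only: remember recently seen packet IDs and skip duplicates.
SEEN_DIR=/tmp/multihop-seen
CACHE_MINUTES=10
mkdir -p "$SEEN_DIR"

seen_before() {
    local id="$1"
    # Expire old entries, then check whether this ID is already cached.
    find "$SEEN_DIR" -type f -mmin +"$CACHE_MINUTES" -delete
    if [ -e "$SEEN_DIR/$id" ]; then
        return 0                        # duplicate: already handled
    fi
    touch "$SEEN_DIR/$id"               # first sighting: remember it
    return 1
}

# Example of generating a short random ID for an outgoing advertisement.
new_id=$(uuidgen | tr -d '-' | cut -c1-8)
```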

The easiest way is a counter, but that could allow forged packets (if that's a problem).

Maybe another way to do it would be to have a small signature using the node's public key, the same one their IP address is derived from? That way nodes would check whether the IPv6 address listed in the message matches the public key that signed it. I don't know if that's too intensive or too large (byte-wise) though; I'd be happy to hear your thoughts.

We will have to decide on defaults for this; maybe it can be on a per-package basis.

We should also have a hard limit on TTL so it's not abused, i.e. broadcast to the WHOLE network; say no more than 5.

I was talking about defaults for notification, which each package should decide but which should also be overridable. On the TTL I agree, yeah. This can be enforced by other nodes, which will drop any packet with a TTL larger than 5, or whatever we decide on later.

I built this model in bash and I think it's pretty simple [...]

That sounds like a great model, and pretty simple too. My only concern is that passing the packet through each script could take a long time as more services/scripts get added, so we'll have to watch out for that.

Another model could be to have sub-folders like whatever.d/IPFS/ and whatever.d/SSB/, each sub-folder having multiple scripts inside it. So we only run the scripts that are known to handle IPFS services when the node gets a packet about IPFS.
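Roughly like this (the directory layout and names are hypothetical):

```bash
# Sketch of the sub-folder variant: only run handlers registered under
# the service named in the packet.
SERVICE_DIR=/etc/multihop-services.d

dispatch() {
    local service="$1" packet="$2"
    local dir="$SERVICE_DIR/$service"    # e.g. .../IPFS or .../SSB
    [ -d "$dir" ] || return 0            # nobody registered for this service
    for script in "$dir"/*; do
        [ -x "$script" ] && printf '%s\n' "$packet" | "$script"
    done
}
```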

I think you're betting on a 576-byte MTU, which would never happen on a cjdns/Yggdrasil network.

Good point. CJDNS MTU is 1304 with headers subtracted (chat, file), and Yggdrasil MTU is very high, 65535.

One thing I said you didn't comment on:

Potentially just contains a list of services, and then an interested node can reply and ask for the info for a specific service

Is this a good model, or, now that we have more packet space, should nodes send out a list of all their services with their relevant info? You also didn't comment on my suggestion of using Cap'n Proto instead; what do you think of that? Especially if we're doing binary stuff like signatures, or listing many services, I think it would be more efficient.

Some things I didn't mention in the original post: