rahra / onioncat

Official repository of OnionCat, the VPN adapter for Tor and I2P.
https://www.onioncat.org/
GNU General Public License v3.0

OnionCat4 discussion notebook #34

Open rahra opened 3 years ago

rahra commented 3 years ago

This is a collection of open questions for OnionCat4, kept here so that I don't forget them. And of course, it is open for discussion!

OnionCat4 is developed in the branch hsv3lookup.

aight8 commented 3 years ago

I don't know the exact state of the v3 lookup mechanism. However, I want to write down some personal notes/keywords:

- random 3-digit PIN (simple protection against cluster passphrase guessing) + custom passphrase -> seed bytes (sketch below)
- seed bytes -> hierarchical deterministic ed25519 key; the master public key is shared
- from the master key, deterministically derive all public keys -> generate the onion IDs (n = 10)
- the master private key is required at the initialization of a node
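
A minimal Python sketch of that derivation chain, purely as an illustration and not OnionCat code: it uses a simple HMAC-based hardened child derivation, so, unlike the idea above, child public keys cannot be derived from the master public key alone (a scheme like BIP32-Ed25519 would be needed for that). All names are made up; the `cryptography` package is assumed.

```python
# Sketch: passphrase (+ PIN) -> seed bytes -> per-index ed25519 keys -> v3 onion IDs.
import base64
import hashlib
import hmac

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def master_seed(passphrase: str, pin: str) -> bytes:
    # Stretch the cluster passphrase, salted with the 3-digit PIN, into seed bytes.
    return hashlib.pbkdf2_hmac("sha512", passphrase.encode(), pin.encode(), 100_000)

def child_key(seed: bytes, index: int) -> Ed25519PrivateKey:
    # Hardened-style child derivation: HMAC the index with the seed.
    h = hmac.new(seed, index.to_bytes(4, "big"), hashlib.sha512).digest()
    return Ed25519PrivateKey.from_private_bytes(h[:32])

def onion_address(key: Ed25519PrivateKey) -> str:
    # v3 onion address per Tor's rend-spec-v3: base32(pubkey || checksum || version),
    # where checksum = SHA3-256(".onion checksum" || pubkey || version)[:2].
    pub = key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)
    version = b"\x03"
    checksum = hashlib.sha3_256(b".onion checksum" + pub + version).digest()[:2]
    return base64.b32encode(pub + checksum + version).decode().lower() + ".onion"

seed = master_seed("custom passphrase", "123")
onion_ids = [onion_address(child_key(seed, i)) for i in range(10)]  # n = 10
```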

- at bootstrap, a node tries to connect to the (n) nodes; if any connection succeeds -> join the cluster, otherwise -> create a new one (sketched below)
- on cluster entry (new node): the node receives the joined node count plus the IDs in use (e.g. 0, 1, 2, 5 / nodes 3 and 4 went offline), maxNodeID = 5, so my new node ID = 6
- on cluster entry (with an existing ID): same as a new node, but the node advertises itself with its specific ID and has to prove to the cluster (entry-point node) that it has the private key of the node with this ID (by signing something)
- on cluster create: nothing (node ID = 0)
- a node can send heartbeats to the cluster; a node is removed from the DHT (active nodes) after a timeout (e.g. 30 s)
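
Read as pseudocode, that bootstrap logic might look roughly like this; `try_connect`, `cluster_state`, `advertise`, `join_cluster` and `create_cluster` are all hypothetical placeholders, not existing OnionCat functions:

```python
# Schematic join flow; all helpers are hypothetical placeholders.
def bootstrap(onion_addrs, my_id=None, my_key=None):
    for addr in onion_addrs:              # try the n derived onion addresses
        peer = try_connect(addr)
        if peer is None:
            continue
        state = peer.cluster_state()      # used IDs, e.g. {0, 1, 2, 5}
        if my_id is None:
            my_id = max(state.used_ids) + 1   # new node: next higher ID
        else:
            # Existing ID: prove ownership by signing a challenge with
            # the private key belonging to that ID.
            peer.advertise(my_id, my_key.sign(state.challenge))
        return join_cluster(peer, my_id)
    return create_cluster(node_id=0)      # no peer reachable: new cluster
```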

Onion services are created ad hoc via the Tor control port.
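
For illustration, creating such an ephemeral onion service over the control port can be done with stem from Python; this assumes a local Tor with ControlPort 9051 and cookie authentication, and is just a sketch of the mechanism, not how OnionCat (which is written in C) would do it:

```python
# Create an ephemeral v3 onion service via the Tor control port, using stem.
from stem.control import Controller

with Controller.from_port(port=9051) as ctrl:   # assumes ControlPort 9051
    ctrl.authenticate()                         # cookie auth by default
    # key_type="NEW" with key_content="ED25519-V3" asks Tor to generate a key;
    # a node would instead pass its own derived key with key_type="ED25519-V3".
    svc = ctrl.create_ephemeral_hidden_service(
        {8060: 8060}, key_type="NEW", key_content="ED25519-V3",
        await_publication=True)
    print(svc.service_id + ".onion")
```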

This amounts to a lightweight P2P application (serving as control app / discovery / registrar).

Technologies used: ed25519, Tor, and a P2P stack with a DHT.

rahra commented 3 years ago

I think I got your idea. But how would you find the "initial contact"? By distributing the master key to your set of OnionCat nodes?

I'm already writing an article describing what I'm doing with this V3 lookup mechanism ;)

aight8 commented 3 years ago

By providing the passphrase once, on-site, for every node during its bootstrap phase. It is used to generate the public keys at m/*. The node then tries to access the cluster by resolving the first n nodes and connecting to one of them. Once it has connected to any of them, my node is part of the Tor P2P network. (My node receives: all online nodes (index list) plus the highest node index ever used; the next higher one becomes my node's index.)

Now the bootstrap generates the private key for that index from the generated master key - an ed25519 private key from which my onion address is derived - and discards the master key for security reasons. (With the master key I could impersonate every node in the network - so one bad node could do anything.) The TAP interface could then map like: 192.168.100.[node-index] -> m/[node-index]
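
Continuing the earlier sketch (again hypothetical, reusing `child_key` and `seed` from above), the final per-node step could look like this; note that Python cannot really wipe key material from memory, so the `del` is symbolic:

```python
import ipaddress

my_index = 6                        # assigned by the cluster on join (example above)
my_key = child_key(seed, my_index)  # the key at m/[my_index]
del seed                            # discard the master material (symbolic only)
my_ip = ipaddress.ip_address("192.168.100.0") + my_index  # 192.168.100.[node-index]
```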

As an improvement, the node cluster could theoretically ensure that a node at m/0 is always available. If it is not, just publish that one; it doesn't matter if it gets republished. m/0 could be a cluster entry node, or just a DHT that provides the indexes of online nodes (for example beyond 10, when the first 10 are offline). Though that improvement goes pretty far...

ocat -p "with-this-passphrase-im-part-of-the-network"                      (6th form)

Okay, cool, I am curious!

rahra commented 3 years ago

Sounds good. Although it also sounds like a lot of work, for which I no longer have the time, at least at the moment. But this is an open source project and you have my full support, whether for writing a paper or a design draft on this, and of course later for a possible implementation. I think what still needs a bit more attention is the issue with the master key and the possibility of a rogue OnionCat node. What I did for the moment is pretty straightforward: no crypto, just DNS lookups within the network. Stand by for my explanation.