Storage overhead per peer #57

Open · staheri14 opened this issue 3 years ago

staheri14 commented 3 years ago

Overview

rln-relay consists of the following steps:

1- A membership smart contract must be deployed on the blockchain. The contract holds the list of currently registered members (as well as deleted members) and exposes an API for member insertion and deletion (slashing). The root of the tree and the authentication paths of the public keys are not available as part of the contract.
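For concreteness, here is a minimal peer-side sketch of what such a registry looks like, written in Python rather than as an actual contract; `register`, `slash` and the event names are illustrative, not the real contract ABI. Note that it stores only the member list, not the tree root or authentication paths.

```python
# Minimal, hypothetical sketch of the membership registry from a peer's point
# of view. The real registry is an on-chain contract; register/slash and the
# event names are illustrative, not the actual contract ABI.
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class MembershipRegistry:
    # Ordered list of member public keys; None marks a deleted (slashed) slot.
    members: List[Optional[bytes]] = field(default_factory=list)
    # Peers subscribe here to receive insertion/deletion events.
    listeners: List[Callable[[str, int, Optional[bytes]], None]] = field(default_factory=list)

    def register(self, pubkey: bytes) -> int:
        """Insert a new member (the fund-locking deposit is omitted here)."""
        index = len(self.members)
        self.members.append(pubkey)
        self._emit("MemberInserted", index, pubkey)
        return index

    def slash(self, index: int) -> None:
        """Delete a misbehaving member; on-chain, its locked funds would be burned."""
        self.members[index] = None
        self._emit("MemberDeleted", index, None)

    def _emit(self, event: str, index: int, pubkey: Optional[bytes]) -> None:
        for callback in self.listeners:
            callback(event, index, pubkey)
```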

2- Each rln-relay enabled peer must register a public key through the smart contract. Peers need to persistently store their public and secret keys. A registered peer locks a certain amount of funds in the smart contract, which gets burned in case the peer misbehaves (spams the system).

3- Each rln-relay enabled peer must construct and maintain a Merkle tree based on the list of current members (fetched from the membership contract). This is required for spam protection. As such, peers must keep listening to the contract events (member insertion or deletion) and update their local Merkle tree accordingly. Having the Merkle tree, a peer is able to compute the tree root and the authentication path (membership proof) of its own public key, neither of which is provided by the contract.
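As an illustration of step 3, the sketch below maintains such a local tree from the contract events and derives the root and a member's authentication path. sha256 and a small fixed depth are stand-ins for illustration; a real rln-relay tree would use a zk-friendly hash and a protocol-defined depth.

```python
# Peer-side Merkle tree for step 3, driven by the registry events. sha256 and
# a small fixed depth are stand-ins; a real rln-relay tree would use a
# zk-friendly hash (e.g. Poseidon) and a protocol-defined depth.
import hashlib

EMPTY = b"\x00" * 32  # placeholder leaf for unused or deleted slots

def h(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(left + right).digest()

class LocalMerkleTree:
    def __init__(self, depth: int = 7):
        self.depth = depth
        self.leaves = [EMPTY] * (2 ** depth)   # depth 7 -> up to 128 members

    def on_event(self, event: str, index: int, pubkey):
        """Callback registered with the membership contract's event stream."""
        self.leaves[index] = pubkey if event == "MemberInserted" else EMPTY

    def root(self) -> bytes:
        level = self.leaves
        while len(level) > 1:
            level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    def auth_path(self, index: int) -> list:
        """Sibling hashes from leaf to root: the membership proof for `index`."""
        path, level = [], self.leaves
        while len(level) > 1:
            path.append(level[index ^ 1])      # sibling at this level
            level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
            index //= 2
        return path
```

A peer would pass `tree.on_event` as a listener to the registry sketch above and then compute `tree.root()` and `tree.auth_path(i)` locally, since the contract does not provide them.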

4- By having access to the recent root and its membership proof, a peer can prove its membership for the messages it publishes and verify the membership proofs attached to the messages it routes.
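The relation being checked in step 4 can be illustrated with a plain Merkle verification. In rln-relay this check is carried out inside a zero-knowledge proof so that the leaf and path are not revealed; the function below shows only the underlying relation, not the actual verifier.

```python
# The membership relation behind step 4, shown as a plain Merkle check. In
# rln-relay this check is performed inside a zero-knowledge proof, so the leaf
# index and authentication path are never revealed to verifiers.
import hashlib

def h(left: bytes, right: bytes) -> bytes:   # same hash stand-in as above
    return hashlib.sha256(left + right).digest()

def verify_membership(leaf: bytes, index: int, path: list, root: bytes) -> bool:
    node = leaf
    for sibling in path:
        node = h(node, sibling) if index % 2 == 0 else h(sibling, node)
        index //= 2
    return node == root
```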

For a smaller group size, e.g., N = 128, the per-peer storage overhead is correspondingly smaller; a rough estimate is sketched below.
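A back-of-the-envelope estimate, assuming each tree node is a 32-byte hash/field element (an assumption; the exact node size depends on the hash and field used):

```python
# Back-of-the-envelope storage for a full binary Merkle tree over N = 128
# members, assuming 32-byte nodes; treat this as an order-of-magnitude figure.
N = 128
NODE_SIZE = 32                      # bytes per hash / field element
total_nodes = 2 * N - 1             # N leaves plus N - 1 internal nodes
print(total_nodes * NODE_SIZE)      # 8160 bytes, i.e. roughly 8 KB per peer
```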

mratsim commented 3 years ago

In Eth2 we are considering moving away from Merkle trees and using KZG polynomial commitments / Verkle trees to get constant-size proofs. This is also necessary research for Eth1 stateless clients to cut down on witness size.
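For a rough sense of why constant-size proofs matter here, a comparison under assumed sizes (32-byte Merkle nodes, a 48-byte BLS12-381 G1 element for a single KZG opening; both figures are illustrative, not a benchmark of any particular library):

```python
# Rough intuition for "constant size proofs": a Merkle proof grows with
# log2(N), while a KZG opening stays one group element regardless of N.
import math

def merkle_proof_bytes(n_members: int, node_size: int = 32) -> int:
    depth = math.ceil(math.log2(n_members))
    return depth * node_size              # grows with log2(N)

KZG_OPENING_BYTES = 48                    # one G1 element, independent of N

for n in (128, 2**20):
    print(n, merkle_proof_bytes(n), KZG_OPENING_BYTES)
```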

This is still in the research phase, but if it's finalized there will be new libraries that are optimized and audited to support that usage.

Some writeups:

staheri14 commented 3 years ago

@mratsim Good to know, thanks for sharing! A constant proof size is a significant improvement. I had a quick look at the Verkle tree slides; it seems this new construction has a higher complexity for proof construction, which is somewhat in favour of spam protection (it lowers the messaging speed of spammers). But I am not sure whether it lowers the storage complexity of the peers or not; I need to investigate further. My guess is that the storage complexity would still be linear (though we may be able to optimize it).