vacp2p / research

Waku-RLN-Relay: Merkle tree storage evaluation #109

Closed. staheri14 closed this issue 1 year ago.

staheri14 commented 1 year ago

The storage of the full Merkle tree by peers in waku-rln-relay may become a bottleneck, especially for resource-restricted devices. While this intuitively makes sense, we need benchmarking and a concrete performance evaluation to assess whether this premise holds. This issue captures the necessary steps:

- Compute the storage complexity of the Merkle tree based on the algorithm used in the killic lib
- Compute the concrete storage overhead by experimenting with the killic lib
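For the first step, a rough back-of-the-envelope estimate can already be made. Assuming a complete binary membership tree of depth 20 (a commonly used RLN tree depth; the depth actually used by the killic lib is not confirmed here) and 32-byte nodes, the full tree holds 2^21 - 1, roughly 2.1M, nodes, i.e. about 64 MiB. A minimal sketch of that calculation, with both parameters as stated assumptions:

```go
package main

import "fmt"

func main() {
	const (
		depth    = 20 // assumed membership tree depth; not taken from this issue
		nodeSize = 32 // assumed bytes per node (one 32-byte field element / hash)
	)
	// A complete binary tree of depth d stores 2^(d+1) - 1 nodes.
	nodes := uint64(1)<<(depth+1) - 1
	total := nodes * nodeSize
	fmt.Printf("nodes: %d, full-tree storage: %.1f MiB\n",
		nodes, float64(total)/float64(1<<20))
}
```

A storage scheme that keeps only the nodes on the paths of inserted leaves would instead scale with the number of registered members rather than with 2^depth, which is the kind of gap the concrete measurements should confirm or refute.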

Next

If the storage overhead is proven to be a bottleneck, then a proper storage optimization algorithm/protocol should follow. That optimized algorithm will not be implemented in the killic lib, but rather in the zerokit lib, which is the replacement for the killic lib in waku-rln-relay.

staheri14 commented 1 year ago

fyi: @oskarth

oskarth commented 1 year ago

Can we please clarify this in terms of the order of operations? To me this seems orthogonal to the dynamic testnet. The way I see it is:

1) Critical path for dynamic testnet
2) Change over to Zerokit
3) Performance etc. work

Of course, if different people are involved these can be parallelized (e.g. Keshav working on contracts), but this is the order from a global POV and how it should be reflected in milestones, as discussed.

staheri14 commented 1 year ago

@s1fr0 I see you have concerns about the running time of the MT. Can you please elaborate so that I can look into that aspect of the killic MT implementation?

staheri14 commented 1 year ago

@oskarth I made some updates to the issue description to make it clearer. I will also soon publish a meta issue about the rln critical path to shed light on the envisioned milestones, order of operations, and dependencies.

oskarth commented 1 year ago

Thanks.

> soon publish a meta issue about the rln critical path to shed light on the envisioned milestones, order of operations, and dependencies.

The most important thing here is having a single clear milestone issue (or a few) with a tight scope.

oskarth commented 1 year ago

> - Compute the storage complexity of the Merkle tree based on the algorithm used in the killic lib
> - Compute the concrete storage overhead by experimenting with the killic lib

How exactly are we doing this and what are the parameters? I think it'd be good to keep this work tight in scope before running large simulations or varying parameters too much.

E.g. something like back-of-the-napkin storage complexity math (similar to what Daniel did for the Discovery v5 post on the forum), and maybe only testing a few orders of magnitude of inserts or so.
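As a sketch of what "a few orders of magnitude of inserts" could look like, independent of the killic lib: an append-only tree of assumed depth 20 that stores only non-default nodes in a map, with SHA-256 standing in for the Poseidon hash used by RLN (the stored-node counts do not depend on the hash). Depth, node size, and the leaf counts below are all assumptions, not values taken from the killic implementation.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// Sketch only: count how many nodes an append-only Merkle tree actually
// stores when only non-default nodes are kept, for a few orders of
// magnitude of inserts.

const depth = 20 // assumed tree depth, not taken from the killic lib

type key struct{ level, index uint64 }

// insert appends leaf number i and recomputes the path to the root,
// storing every recomputed node in nodes.
func insert(nodes map[key][32]byte, defaults [][32]byte, i uint64, leaf [32]byte) {
	nodes[key{0, i}] = leaf
	idx := i
	for lvl := uint64(0); lvl < depth; lvl++ {
		left, right := idx&^1, idx|1
		l, ok := nodes[key{lvl, left}]
		if !ok {
			l = defaults[lvl]
		}
		r, ok := nodes[key{lvl, right}]
		if !ok {
			r = defaults[lvl]
		}
		idx >>= 1
		nodes[key{lvl + 1, idx}] = sha256.Sum256(append(l[:], r[:]...))
	}
}

func main() {
	// Hashes of empty subtrees, one per level.
	defaults := make([][32]byte, depth+1)
	for lvl := 1; lvl <= depth; lvl++ {
		d := defaults[lvl-1]
		defaults[lvl] = sha256.Sum256(append(d[:], d[:]...))
	}
	for _, n := range []uint64{1_000, 10_000, 100_000} {
		nodes := make(map[key][32]byte)
		for i := uint64(0); i < n; i++ {
			var leaf [32]byte
			binary.LittleEndian.PutUint64(leaf[:], i+1)
			insert(nodes, defaults, i, leaf)
		}
		fmt.Printf("leaves=%7d stored nodes=%8d node data ~%.1f MiB\n",
			n, len(nodes), float64(len(nodes)*32)/float64(1<<20))
	}
}
```

For n inserted leaves this keeps roughly 2n nodes (about 6 MiB of node data at n = 100,000), versus the ~64 MiB of a fully materialized depth-20 tree; comparing such numbers against measurements of the actual killic implementation is what the concrete-storage step would make precise.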

It might also be good to collect more clearly what we know and don't know when it comes to perf; e.g. your presentation already has quite a lot of good numbers here.