celestiaorg / rsmt2d

Go implementation of the two-dimensional Reed-Solomon Merkle tree data availability scheme.
Apache License 2.0

Investigate performance of parallelization #5

Closed adlerjohn closed 2 years ago

adlerjohn commented 4 years ago

Computing the erasure coding of each row and column, along with computing the namespace Merkle trees, can be done in parallel. Investigate the potential performance gains of doing so.
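For context, a minimal sketch of the per-row fan-out being described, using plain goroutines and a hypothetical `encodeRow` helper (not the actual rsmt2d API); columns and NMT construction could be fanned out the same way:

```go
package sketch

import "sync"

// encodeRow is a stand-in for whatever erasure-codes a single row
// (or column) of the data square; it is not the actual rsmt2d API.
func encodeRow(row [][]byte) [][]byte {
	// ... Reed-Solomon extend the row here ...
	return row
}

// encodeRowsParallel erasure-codes every row of the square concurrently,
// one goroutine per row. Columns could be handled the same way afterwards.
func encodeRowsParallel(square [][][]byte) {
	var wg sync.WaitGroup
	for i := range square {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			square[i] = encodeRow(square[i])
		}(i)
	}
	wg.Wait()
}
```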

evan-forbes commented 4 years ago

I made a quick and dirty implementation that can be found here, and the benchmarks + a more detailed write-up can be found here.

On an 8-core CPU, this implementation is roughly 2-4 times faster, depending on data square size. [performance chart]

I think this is a decent starting point, but judging by the trace, it looks like there is plenty of room for improvement.

I should have a chance to give the nmt library a similar treatment sometime in the next few days.

evan-forbes commented 4 years ago

I ran some more benchmarks for the nmt generation portion of hashing the data availability header, and fortunately (unfortunately?) everything went as one might expect. This task is even more parallelizable than generating the erasure data, but this implementation doesn't get the full benefit of such a parallelizable load. [performance chart] Roughly ~6x performance gains on an 8-core CPU for a max-width data square, with very little overhead introduced.
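For illustration, a minimal sketch of fanning the per-row NMT root computation out across goroutines; `rowRoot` is a hypothetical stand-in, not the real nmt API, and the branch linked in the summary below is the implementation that was actually benchmarked:

```go
package sketch

import "sync"

// rowRoot is a stand-in for building a namespaced Merkle tree over one
// row and returning its root; it is not the real nmt API.
func rowRoot(row [][]byte) []byte {
	// ... push the row's shares into an NMT and return its root ...
	return nil
}

// rowRootsParallel computes every row root in its own goroutine and
// collects the results in order; column roots would work the same way.
func rowRootsParallel(rows [][][]byte) [][]byte {
	roots := make([][]byte, len(rows))
	var wg sync.WaitGroup
	for i, row := range rows {
		wg.Add(1)
		go func(i int, row [][]byte) {
			defer wg.Done()
			roots[i] = rowRoot(row)
		}(i, row)
	}
	wg.Wait()
	return roots
}
```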

Summary: These unoptimized implementations show that there are easy options for performance gains should that be required.

Write-up w/ code and traces: https://github.com/evan-forbes/rsmt2dbench
Implementation for multi-threaded nmt generation: https://github.com/evan-forbes/lazyledger-core/tree/evan/multithreaded-rsmt

Thanks to @adlerjohn for the fun idea. Shoutout to @musalbas and @liamsi for writing easy-to-read code.

As for further investigation

musalbas commented 4 years ago

Thanks for this @evan-forbes, this looks like great work.

Wondertan commented 2 years ago

@evan-forbes, I didn't see this before. Nice graphs! Curious, how did you generate those?

Wondertan commented 2 years ago

A few ideas towards parallelizing (it is getting annoying to wait for tests to generate and compute the square already):

adlerjohn commented 2 years ago

Additional thoughts regarding parallelization: whether rsmt2d erasure coding/NMT root computing should be done in parallel largely depends on how parallelized the user of the library is. For example, if only a single block is verified at a time, then indeed there will be gains if rsmt2d is parallelized. But if multiple blocks are verified in parallel, then the CPU will already be saturated.

That being said, there's always the case that once you've completed IBD and are at the tip, you'll be verifying one block at a time, and in such cases rsmt2d being parallelized is a requirement to saturate CPU cores.
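One way to illustrate that trade-off: bound the total CPU-heavy parallelism with a shared semaphore sized to the core count, so per-block and per-axis goroutines draw from the same budget. This is only a sketch with invented names, not a proposal for a concrete API:

```go
package sketch

import "runtime"

// workSlots is a process-wide semaphore sized to the CPU count. Both
// block-level verification and per-axis rsmt2d workers would acquire a
// slot before doing CPU-heavy work, so cores stay saturated when
// following the tip (one block at a time) without oversubscribing
// during IBD, when many blocks are verified concurrently.
var workSlots = make(chan struct{}, runtime.GOMAXPROCS(0))

// withSlot runs fn while holding one slot of the shared CPU budget.
func withSlot(fn func()) {
	workSlots <- struct{}{}        // acquire
	defer func() { <-workSlots }() // release
	fn()
}
```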

evan-forbes commented 2 years ago

> Curious, how did you generate those?

just Google Sheets iirc

Wondertan commented 2 years ago

Users can potentially rely on Go's scheduler to distribute the load during sync, e.g., 10 blocks sync and the reconstruction for each runs in its own goroutine. This is the simplest approach, but it has its downsides:

The best long-term approach I would propose is to have the rsmt2d lib also do some form of concurrency and parallelization at the axis level: write some small global engine that processes axes and isn't even aware of the notion of a square. It could even implement the Codec interface, but I have to think about it more.
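A rough sketch of what such a square-agnostic engine could look like, with a fixed worker pool consuming per-axis jobs; all names and shapes here are invented for illustration, not an actual rsmt2d design:

```go
package sketch

import "runtime"

// axisJob carries the shares of a single row or column plus the work to
// run on them (encode, repair, compute a root, ...).
type axisJob struct {
	shares [][]byte
	work   func(shares [][]byte)
}

// axisEngine is a square-agnostic worker pool: callers submit axis jobs
// and a fixed set of workers drains the queue.
type axisEngine struct {
	jobs chan axisJob
}

func newAxisEngine(workers int) *axisEngine {
	if workers <= 0 {
		workers = runtime.GOMAXPROCS(0)
	}
	e := &axisEngine{jobs: make(chan axisJob)}
	for i := 0; i < workers; i++ {
		go func() {
			for j := range e.jobs {
				j.work(j.shares)
			}
		}()
	}
	return e
}

// submit enqueues one axis worth of work; close lets the workers exit
// after the remaining jobs have been handed off.
func (e *axisEngine) submit(j axisJob) { e.jobs <- j }
func (e *axisEngine) close()           { close(e.jobs) }
```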

Wondertan commented 2 years ago

I was going through the code for another reason and realized that it wouldn't be possible to do this at the Codec level, unfortunately. Codec is a strictly synchronous interface, and the only way I see right now to parallelize repair by axis is to change the solveCrossword method.
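For illustration only, here is one way a single solveCrossword-style sweep could be fanned out per row while keeping the Codec synchronous; `tryRepairRow` is a hypothetical stand-in and this does not reflect the actual solveCrossword code:

```go
package sketch

import (
	"sync"
	"sync/atomic"
)

// tryRepairRow is a stand-in for the per-row repair step inside a
// solveCrossword-style sweep; it reports whether any progress was made.
func tryRepairRow(i int) bool {
	// ... decode row i if enough shares are already present ...
	return false
}

// repairSweepParallel fans one sweep of row repairs out across goroutines.
// The Codec itself stays synchronous; only the loop over axes is parallel.
// The outer solver would keep sweeping rows and columns until no sweep
// reports progress.
func repairSweepParallel(width int) bool {
	var progress atomic.Bool
	var wg sync.WaitGroup
	for i := 0; i < width; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			if tryRepairRow(i) {
				progress.Store(true)
			}
		}(i)
	}
	wg.Wait()
	return progress.Load()
}
```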

evan-forbes commented 2 years ago

We can probably change this issue to be about actually implementing parallelization, with the above feedback from @adlerjohn and @Wondertan, instead of only investigating it.