filecoin-project / drg-attacks


Parallelize Greedy attacks #6

Open schomatis opened 5 years ago

schomatis commented 5 years ago

Framing the problem in a context-free way first to ask for external opinions (mainly @dignifiedquire).

We'd like to parallelize a dynamic programming algorithm which does very cheap work (addition) and where most of the time is then spent on accessing (reading/writing) the data we're processing. An equivalent simplified implementation would look like:

for layer in 1..length {
    for (u,v) in connections {
        data[layer][u] += data[layer - 1][v];
    }
}

Each layer is constructed from the previous one and the (u,v) tuple (DRG parent/child) can be considered random, so any section of a layer may depend on any other section.
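A self-contained, runnable version of the loop above might look like the following. The layer count, layer width, and the (u,v) edge list are made up here purely for illustration; the real edges come from the DRG.

```rust
// Hypothetical sequential sketch of the layered dynamic program described
// above. Layer sizes and edges are illustrative, not the real DRG values.
fn run() -> Vec<u64> {
    let length = 4; // number of layers (assumed)
    let nodes = 8; // nodes per layer (assumed)
    // Random-looking (u, v) edges: node u in a layer depends on node v
    // in the previous layer.
    let connections: Vec<(usize, usize)> = vec![(0, 3), (1, 7), (2, 0), (5, 2), (7, 6)];

    let mut data = vec![vec![0u64; nodes]; length];
    data[0] = (0..nodes as u64).collect();

    for layer in 1..length {
        for &(u, v) in &connections {
            // The work per edge is a single addition; the cost is dominated
            // by the memory accesses, which is what makes parallelizing it
            // non-obvious.
            data[layer][u] += data[layer - 1][v];
        }
    }
    data[length - 1].clone()
}

fn main() {
    println!("{:?}", run());
}
```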

It seems that the coordination overhead of such a parallelization alone could outweigh its benefits. In each layer iteration the previous layer (data[layer - 1][v]) can be treated as read-only, allowing concurrent access, but writes to the current layer (data[layer][u]) must be protected.

One possibility would be to partition the layer up front into different slices and assign each one to a different thread. The main thread (through channels or similar) would then route each (u,v) connection to the corresponding thread based on the index u.


Adding context for DRG attacks development, the code referred to before is:

https://github.com/filecoin-project/drg-attacks/blob/9c4fefe651b131281ab67db45bbd534b0bea9ec6/src/attacks.rs#L505-L518

which, after the latest optimizations, appears to be the biggest bottleneck in the algorithm.

nikkolasg commented 5 years ago

Greedy attacks are very slow. I think the two slowest functions are count_paths and update_radius, in that order, but we may need to run a benchmark to confirm that.

schomatis commented 5 years ago

Will implement between Monday-Tuesday.

schomatis commented 4 years ago

As suggested by @dignifiedquire, partitioning seems like the way to go. We don't need channels: we can partition the parents accordingly.
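One way to sketch this partitioned, channel-free approach: split the current layer into disjoint mutable slices, one per thread, and have each thread scan the full edge list but apply only the connections whose u falls inside its own slice. Since the previous layer is read-only and the slices of the current layer are disjoint, no locking is needed. The function name, sizes, and edges below are hypothetical (requires Rust 1.63+ for std::thread::scope):

```rust
use std::thread;

// Hypothetical sketch: compute one layer from the previous one with the
// current layer partitioned into disjoint per-thread slices.
fn next_layer(prev: &[u64], connections: &[(usize, usize)], threads: usize) -> Vec<u64> {
    let n = prev.len();
    let mut curr = vec![0u64; n];
    let chunk = (n + threads - 1) / threads; // ceil(n / threads)
    thread::scope(|s| {
        for (i, slice) in curr.chunks_mut(chunk).enumerate() {
            let start = i * chunk;
            s.spawn(move || {
                for &(u, v) in connections {
                    // Apply only the edges owned by this thread's slice.
                    // `prev` is shared read-only; the `curr` slices are
                    // disjoint, so no synchronization is required.
                    if u >= start && u < start + slice.len() {
                        slice[u - start] += prev[v];
                    }
                }
            });
        }
    });
    curr
}

fn main() {
    let prev: Vec<u64> = (0..8).collect();
    let connections = [(0, 3), (1, 7), (2, 0), (5, 2), (7, 6)];
    println!("{:?}", next_layer(&prev, &connections, 4));
}
```

The trade-off is that every thread scans the whole edge list; with random parents that is roughly 1/threads useful work per scan, so this only pays off if pre-sorting the parents by partition (as suggested) is done once up front.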