protocol / beyond-bitswap


Why Bitswap and compression may not be a good match? #27

Open adlrocha opened 3 years ago

adlrocha commented 3 years ago

As part of RFC|BB|L203A of the Beyond Bitswap project, we are exploring the use of compression within Bitswap. In our initial explorations we have implemented two main strategies: compressing the full Bitswap message (full compression) and compressing individual blocks within the message (block compression).

Thus, two additional fields have been added to Bitswap's protobuf message: CompressedPayload, which carries the compressed payload of the original Bitswap message when FullCompression is enabled, and a CompressionType flag to signal the compression strategy used.
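The two new fields could be sketched as a protobuf extension along these lines. This is illustrative only: the field numbers, enum values, and placement are assumptions, not the RFC's exact schema.

```proto
// Illustrative sketch only: field numbers and enum values are assumptions.
message Message {
  // ... existing Bitswap message fields ...

  enum CompressionType {
    NONE = 0;
    GZIP = 1; // the GZip strategy discussed in this thread
  }

  CompressionType compressionType = 100; // signals the strategy used
  bytes compressedPayload = 101;         // compressed original message when FullCompression is enabled
}
```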

Initial tests show roughly a 10x overhead from using compression compared to Bitswap without compression. We compared our GZip compression approach to existing implementations of GZip compression HTTP handlers in Go, and the main difference is that GZip HTTP handlers pipe the compression writer into the HTTP server writer, streaming the compressed bytes directly to the client. In our case, we can't use stream compression for several reasons:

As a result of all of these evaluations, we want to open a discussion on the following topics:

jsoares commented 3 years ago

I don't want to detract from your larger point on protocol mismatch, but these numbers must be looked at with some care.

I just tried reading, gzipping, and writing to disk a 20 MB file on my machine (virtualised, under load), and it took 0.08 s for highly redundant data and 0.6 s for random data. This is using -9, the worst-case scenario for speed... In reality, you'd probably start with a much lower level for on-the-fly compression.

Looking at your plot here and for a ~20 MB file, you're reporting an increase of ~20 s in the full message case and ~6 s in the block compression case (vs. the uncompressed baseline). The times are not directly comparable but that's still an order of magnitude difference and doesn't seem entirely algorithmic -- implementation considerations probably have a huge influence. Since compressed communication is always a trade-off (it trades off link capacity and transmission time for computational capacity and CPU time), it is very sensitive to how fast the implementation is. I don't think a naive implementation allows us to draw definitive conclusions.

Looking into it a little more, it appears the Go standard library gzip is pretty slow (or at least was in 2015). It may be worth it to just quickly benchmark the gzip implementation in isolation and/or try alternatives.

The other side of the experiment, of course, is that using compression will never look good in an evaluation unless you're evaluating the send time over a constrained link. I don't know if we have any sort of numbers for average upload speed across IPFS, but using something like 10 Mbps may not be terribly unfair. (I assume this is not currently being done, as the plot says "bandwidth 0", though that doesn't explain why it takes 3 s to transfer a 20 MB file in the baseline case...)
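The trade-off can be made concrete with back-of-the-envelope arithmetic: compression wins when the transfer time saved exceeds the CPU time spent compressing. The 20 MB file and 10 Mbps link come from the discussion above; the 2:1 compression ratio is an illustrative assumption.

```go
package main

import "fmt"

// transferSeconds returns the time to push a payload over a link,
// ignoring protocol overhead and latency.
func transferSeconds(payloadBytes, linkBitsPerSec float64) float64 {
	return payloadBytes * 8 / linkBitsPerSec
}

func main() {
	const (
		fileBytes = 20e6 // 20 MB file, as in the plots discussed above
		linkBps   = 10e6 // assumed 10 Mbps uplink
		ratio     = 2.0  // assumed 2:1 compression ratio
	)

	plain := transferSeconds(fileBytes, linkBps)
	compressed := transferSeconds(fileBytes/ratio, linkBps)
	budget := plain - compressed // max CPU time compression may cost and still break even

	fmt.Printf("plain: %.1f s, compressed: %.1f s, CPU budget: %.1f s\n",
		plain, compressed, budget)
}
```

Under these assumptions the uncompressed transfer takes 16 s and the compressed one 8 s, so compression breaks even as long as it costs under 8 s of CPU time; with "bandwidth 0" (an unconstrained link) the budget is zero and compression can only ever lose.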

All that being said, there's a good chance even an optimised implementation will not suffice to bridge the gap.

adlrocha commented 3 years ago

I just realized I left this thread unanswered. A few updates on this compression matter: