pierrec / lz4

LZ4 compression and decompression in pure Go
BSD 3-Clause "New" or "Revised" License

Uncompressing quite slow in sparsely compressed data #215

Open evanphx opened 10 months ago

evanphx commented 10 months ago

Hi hi,

I've got a project that compresses large-ish chunks (4-20 KB per chunk) as single LZ4 blocks. While profiling, I found that when the chunks are sparsely populated (i.e., contain many runs of null bytes; these are disk blocks, so they're commonly sparse), decoding performance is really poor.
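
For context, here is a minimal sketch of that scenario: a mostly-zero 16 KiB block compressed as a single LZ4 block with the v4 block API and then decompressed. The block size and fill pattern are illustrative, not taken from the actual project.

```go
package main

import (
	"fmt"

	"github.com/pierrec/lz4/v4"
)

func main() {
	// A mostly-zero "disk block": long runs of null bytes plus a little data.
	src := make([]byte, 16*1024)
	copy(src, []byte("block header 0001"))

	// Compress the whole chunk as a single LZ4 block.
	dst := make([]byte, lz4.CompressBlockBound(len(src)))
	var c lz4.Compressor
	n, err := c.CompressBlock(src, dst)
	if err != nil {
		panic(err)
	}
	fmt.Printf("compressed %d -> %d bytes\n", len(src), n)

	// Decompressing has to re-expand the long zero runs, which is where
	// the slow path described in this issue shows up.
	out := make([]byte, len(src))
	m, err := lz4.UncompressBlock(dst[:n], out)
	if err != nil {
		panic(err)
	}
	fmt.Printf("decompressed back to %d bytes\n", m)
}
```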

Looking into it further, I found that the cause is the hand-coded assembly routines: they're tuned for quite small literals and match lengths, using loops that copy at most 8 bytes per iteration. Applied to long runs of, say, null bytes (in this case a 1-byte literal, a null, followed by a match that can be around 16 KB), this is massively slow compared to the optimized bulk copies that Go's built-in `copy` performs.
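
To make the difference concrete, here is an illustrative sketch (not the package's actual decoder code) of a match expansion built on Go's `copy`: because the source window grows as bytes are written, each call moves roughly twice as much as the previous one, so a run that expands to ~16 KiB finishes in about 14 calls rather than thousands of iterations of a short fixed-width loop.

```go
// expandMatch expands an LZ4 match: dst[pos-offset:pos] has already been
// decoded, and the match repeats those bytes until `length` bytes have been
// written starting at pos. The source slice dst[start:pos] never overlaps
// the destination dst[pos:], so copy is safe, and the replicated window
// roughly doubles on each pass.
func expandMatch(dst []byte, pos, offset, length int) int {
	start := pos - offset
	for length > 0 {
		n := copy(dst[pos:pos+length], dst[start:pos])
		pos += n
		length -= n
	}
	return pos
}
```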

It being a slower time of year, I decided to dig in and see if I could solve the issue. After reading the assembly and the Go version, and after many profiling sessions, I found that as of Go 1.20 the Go version is faster than the hand-coded assembly in all cases, provided we apply one tweak to speed up copying short byte sequences (1 to 8 bytes).
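
The general shape of such a tweak (a sketch under assumptions, not necessarily the exact code in lab47/lz4decode) is to avoid the generic `copy`/memmove call for 1-8 byte copies and instead move a fixed 8-byte word whenever both buffers have slack:

```go
import "encoding/binary"

// copyShort sketches one possible short-copy tweak: move a full 8-byte word
// with an unaligned load/store pair and count only n bytes as written. It
// assumes the caller guarantees at least 8 bytes of slack in both src and
// dst (decoders typically reserve slack and take a safe byte-by-byte path
// near the end of the buffers).
func copyShort(dst, src []byte, n int) int {
	binary.LittleEndian.PutUint64(dst, binary.LittleEndian.Uint64(src))
	return n // bytes beyond n are scratch and will be overwritten later
}
```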

I pulled the code out of lz4, profiled and optimized it, and have released the result as https://github.com/lab47/lz4decode.

I'm more than happy to port the code back into this package, please just let me know!

lizthegrey commented 10 months ago

Which architecture? ARM or AMD? And can you provide a benchmark to help us reproduce and tune?

evanphx commented 10 months ago

Both ARM64 and AMD64. Yup, the benchmark is written and the results are available here: https://github.com/lab47/lz4decode/actions/runs/7428506656
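
For anyone who wants to reproduce locally rather than rely on the CI run, a minimal benchmark along these lines (the names and block size are illustrative) exercises the sparse-block decode path:

```go
package lz4sparse

import (
	"testing"

	"github.com/pierrec/lz4/v4"
)

func BenchmarkUncompressSparseBlock(b *testing.B) {
	// Mostly-zero 16 KiB input, similar to a sparse disk block.
	src := make([]byte, 16*1024)
	copy(src, []byte("some header bytes"))

	comp := make([]byte, lz4.CompressBlockBound(len(src)))
	var c lz4.Compressor
	n, err := c.CompressBlock(src, comp)
	if err != nil || n == 0 {
		b.Fatalf("compress failed: n=%d err=%v", n, err)
	}
	comp = comp[:n]

	out := make([]byte, len(src))
	b.SetBytes(int64(len(src)))
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		if _, err := lz4.UncompressBlock(comp, out); err != nil {
			b.Fatal(err)
		}
	}
}
```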