Hi Sven,
Yes, you are correct.
Since the POSIX lz4 utility is not multi-threaded, compression has to wait
while the internal buffer is being loaded, and loading cannot proceed while compressing.
The best way to mitigate that effect is to use small buffers.
This is what -B4 achieves.
A potentially interesting setting would be to use -B4D instead.
This creates chained 64KB blocks (instead of independent ones),
significantly increasing compression ratio.
Make sure to use lz4 release r117+ though, since older versions had a bug
affecting chained blocks. See:
http://code.google.com/p/lz4/issues/detail?id=127
Regards
Original comment by yann.col...@gmail.com
on 13 May 2014 at 1:53
I'm currently trying to reproduce your issue, but without success so far.
The following command line:
cat filename | lz4 > /dev/null
wouldn't produce any significant difference between -B7 and -B4.
Is there a way I could reproduce your issue, to better study it?
(Assuming I'm not installing Xen and an 8GB RAM VM to save its state, I need
something more lightweight...)
Original comment by yann.col...@gmail.com
on 14 May 2014 at 7:37
I think the issue is that "cat filename" is simply too lightweight. The kernel
performs read-ahead on files, I believe, so if cat becomes blocked at some
point, it can produce new data very fast afterwards. With Xen, when dumping a
domU's memory, this does not seem to be the case.
Original comment by sven.koe...@gmail.com
on 14 May 2014 at 10:47
Just to be clear: is this performance issue basically requesting that
multi-threading be implemented within the LZ4 command line utility?
Original comment by yann.col...@gmail.com
on 11 Jun 2014 at 9:17
The idea would be to have a thread that fills a ring buffer with data. The main
thread would get its data from this ring buffer instead of reading from stdin
or a file directly. That's easy to implement; I'm well-versed in pthread
mutexes and condition variables. But I haven't had time to implement and test it yet.
The alternative, which would be many times easier, is unfortunately not
practical: there is no simple way to increase the size of the pipe hidden
inside stdin or a FIFO.
Original comment by sven.koe...@gmail.com
on 15 Jun 2014 at 11:40
Clear enough.
Unfortunately, my current multi-threading code is Windows-specific, not
portable.
I have not spent time learning how to write portable multi-threading code.
I guess pthreads is likely the way to go, while also keeping the ability to
generate single-threaded code for platforms unable to support pthreads.
Without external support, this objective will have to wait a bit.
Original comment by yann.col...@gmail.com
on 16 Jun 2014 at 1:06
Another potential way to answer such a request
would be to default to a 64 KB block size
when the lz4 utility is used in pure-pipe mode.
Original comment by yann.col...@gmail.com
on 1 Jul 2014 at 6:58
In that case exactly one block fits into the pipe's buffer. I believe that will
harm performance.
Original comment by sven.koe...@gmail.com
on 1 Jul 2014 at 7:06
Harm?
Original comment by yann.col...@gmail.com
on 1 Jul 2014 at 7:07
"harm" if compared to having a larger buffer, I mean.
Certainly, as my benchmarks showed, using 64kB block size improved performance.
But having a buffer that can hold multiple blocks can improve performance even
further, IMHO.
Original comment by sven.koe...@gmail.com
on 1 Jul 2014 at 7:10
Original comment by yann.col...@gmail.com
on 6 Jul 2014 at 8:19
default 64KB blocks is an expected feature of upcoming r129
Original comment by yann.col...@gmail.com
on 31 Mar 2015 at 1:39
Well, finally, the change of default parameter will instead be included in r130.
There are still a few third-party lz4 readers out there which do not support
block-linked mode. Linked mode goes together with 64KB blocks, because
without it, the impact on compression ratio becomes significant.
In the meantime, the explicit command `-B4D` works fine.
Original comment by yann.col...@gmail.com
on 16 Apr 2015 at 1:16
Original issue reported on code.google.com by
sven.koe...@gmail.com
on 13 May 2014 at 1:03