The BZIP2 library works similarly, with a bz_stream structure and comparable
function prototypes.
Original comment by nathan.m...@gmail.com
on 4 Nov 2012 at 9:50
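For context, the zlib-style pattern being requested looks roughly like this (real zlib API, shown only to illustrate the kind of interface being asked of LZ4; the single Z_FINISH call is a simplification of the usual feed-and-drain loop):

#include <string.h>
#include <zlib.h>

/* Compress src into dst using the classic z_stream interface.
 * bzlib's bz_stream follows the same shape. */
static int zlib_style_compress(const char* src, size_t srcLen,
                               char* dst, size_t dstCap)
{
    z_stream zs;
    memset(&zs, 0, sizeof(zs));
    if (deflateInit(&zs, Z_DEFAULT_COMPRESSION) != Z_OK) return -1;

    zs.next_in   = (Bytef*)src;
    zs.avail_in  = (uInt)srcLen;
    zs.next_out  = (Bytef*)dst;
    zs.avail_out = (uInt)dstCap;

    /* Real streaming code loops here, refilling next_in and
     * draining next_out as the buffers fill up. */
    int ret = deflate(&zs, Z_FINISH);
    deflateEnd(&zs);
    return (ret == Z_STREAM_END) ? (int)zs.total_out : -1;
}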
The LZ4 format is unfortunately incompatible with some corner-case streaming
scenarios.
It would be necessary to modify it first; not a big task, but one with some
potential risks regarding compatibility with the existing user base.
I'll keep your suggestion in mind, since, well, it's not the first time it has
been requested, and if there is enough pressure for it, I might as well proceed
with the changes.
Original comment by yann.col...@gmail.com
on 4 Nov 2012 at 11:54
Not a defect. Enhancement request.
Original comment by yann.col...@gmail.com
on 4 Nov 2012 at 11:55
Note that we have written a small encapsulating library wrapping lz4/fastlz (or
any similar block compression library) with a zlib-like API (and a pluggable
compatibility header fastlzlib-zlib.h)
https://github.com/exalead/fastlzlib
Feel free to use it and report issues!
Original comment by xroche
on 4 Jan 2013 at 3:04
Thanks Xavier.
It's an excellent reference.
Original comment by yann.col...@gmail.com
on 4 Jan 2013 at 3:38
Streaming support might be nice for web apps, for instance... I suppose :)
Original comment by rogerpack2005
on 14 Aug 2013 at 7:31
I know. This is a work in progress ;)
Stay tuned for updates...
Original comment by yann.col...@gmail.com
on 14 Aug 2013 at 7:32
Any progress or code we can poke at yet? Thanks for a great library.
Original comment by fullung@gmail.com
on 30 Dec 2013 at 5:22
Some core functions are now present in lz4.h, but they require the caller to
manually manage buffer allocation and layout.
A zlib-like abstraction layer still has to be completed.
It's on my todo list.
Original comment by yann.col...@gmail.com
on 30 Dec 2013 at 5:25
Streaming support would be great
Original comment by doppelba...@gmail.com
on 24 Jan 2014 at 8:54
Sure.
I'll probably scale down my ambitions in order to get "something out".
My current attempt tries to account for too many corner cases, and it proves
too complex for a first release.
Original comment by yann.col...@gmail.com
on 25 Jan 2014 at 2:52
Some progress on this front:
http://fastcompression.blogspot.fr/2014/05/streaming-api-for-lz4.html
Comments & Questions welcomed.
Original comment by yann.col...@gmail.com
on 20 May 2014 at 9:23
Would it be possible to decompress a stream?
Original comment by doppelba...@gmail.com
on 20 May 2014 at 9:43
Of course:
LZ4_decompress_safe_usingDict()
Original comment by yann.col...@gmail.com
on 20 May 2014 at 9:46
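A minimal sketch of that call (assuming prevBlock holds the previously decoded block, which may live anywhere in memory; signature as in lz4.h):

#include "lz4.h"

/* Decode one compressed block, using the previously decoded block
 * as the dictionary, so chained blocks reconstruct correctly.
 * Returns the decoded size, or a negative value on error. */
int decode_next_block(const char* comp, int compSize,
                      char* out, int outCap,
                      const char* prevBlock, int prevSize)
{
    return LZ4_decompress_safe_usingDict(comp, out, compSize, outCap,
                                         prevBlock, prevSize);
}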
Wouldn't it be possible to keep the "old" compression streaming API and add a
decompression API like:
void* LZ4_decompress_init (...);
int LZ4_decompress_continue (...);
int LZ4_decompress_free (...);
Original comment by doppelba...@gmail.com
on 31 May 2014 at 10:34
Yes, it could be.
Original comment by yann.col...@gmail.com
on 31 May 2014 at 10:07
@doppelbauer:
Just to check that I understand your request properly: it seems the main
function you are asking for, int LZ4_decompress_continue (...), already
exists, but is called int LZ4_decompress_safe_withPrefix64k (...).
There is no associated "init" or "free", because there is no need for them.
Is the problem with the function naming?
Original comment by yann.col...@gmail.com
on 1 Jun 2014 at 10:54
@doppelbauer:
Maybe I misunderstood.
LZ4_decompress_safe_withPrefix64k() works without any tracking structure, but
it requires that previously decoded data stand *just before* the memory buffer
where new data will be decoded.
Did you mean that LZ4_decompress_safe_continue() should be able to decompress
new blocks without this "positioning" condition, i.e. with the previously
decoded block anywhere in memory, while still being able to use it to
decompress the next block?
Then it would become the equivalent of LZ4_decompress_safe_usingDict(), but
without the need to explicitly state where the previous data block is; that
would be determined automatically by the tracking structure.
Original comment by yann.col...@gmail.com
on 9 Jun 2014 at 2:24
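To make the "positioning" condition concrete, here is a sketch that decodes chained 64 KB blocks into one contiguous buffer, so the previously decoded data always sits just before the next destination (buffer sizes are illustrative; the compressed blocks are assumed to come from chained compression):

#include "lz4.h"

#define BLOCK_SIZE (64 * 1024)
#define NB_BLOCKS  16

/* decoded must hold NB_BLOCKS * BLOCK_SIZE bytes. */
void decode_all(char* decoded,
                const char* compressed[], const int compSize[])
{
    for (int i = 0; i < NB_BLOCKS; i++) {
        char* dst = decoded + (size_t)i * BLOCK_SIZE;
        if (i == 0) {
            /* First block has no history: a plain safe decode works. */
            LZ4_decompress_safe(compressed[0], dst, compSize[0], BLOCK_SIZE);
        } else {
            /* Later blocks find their 64 KB prefix right before dst. */
            LZ4_decompress_safe_withPrefix64k(compressed[i], dst,
                                              compSize[i], BLOCK_SIZE);
        }
    }
}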
Hi Yann,
Thanks a lot for your answer.
Maybe I didn't understand the API.
It would be great to decompress an LZ4 stream packet by packet.
I have attached a test case, "streaming-test.c". It compresses random data and
tries to decompress it in 64 KB chunks:
gcc -o test streaming-test.c lz4.c && ./test
Thanks a lot!
Markus
Original comment by doppelba...@gmail.com
on 10 Jun 2014 at 9:20
Attachments: streaming-test.c
Hi doppelbauer,
I've looked at your example.
There is a small flaw that I'll try to explain.
If you want to decompress data in 64 KB chunks, you have to compress it in
64 KB chunks.
The LZ4_decompress_xxx() functions can only decompress full chunks
(except LZ4_decompress_safe_partial(), but that's a very specific exception,
and it doesn't match your use case anyway).
In order to compress 64 KB chunks, you can either:
- compress them independently, one by one, using LZ4_compress()
- compress them in "chain mode", meaning successive blocks will reference
previous ones, boosting the compression ratio (but also requiring them to be
decompressed in sequence): this is where you use LZ4_compress_continue()
Once one of the above conditions is met, you can decompress 64 KB chunks using
LZ4_decompress_safe_withPrefix64k(), as you did in your example.
If the naming is confusing, I could also propose an identical function named
LZ4_decompress_continue(), just for the sake of clarity.
Regards
Original comment by yann.col...@gmail.com
on 11 Jun 2014 at 10:15
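The "chain mode" option above, sketched with the era's LZ4_compress_continue() (this entry point later moved to the deprecated section of lz4.h in favor of LZ4_compress_fast_continue(), so treat the exact prototype as version-dependent; the emit callback is hypothetical):

#include "lz4.h"

#define BLOCK_SIZE (64 * 1024)

/* Compress srcLen bytes from one contiguous buffer as chained 64 KB
 * blocks. Keeping the input contiguous keeps previous data reachable
 * for back-references. */
int compress_chained(const char* src, int srcLen,
                     void (*emit)(const char* block, int size))
{
    LZ4_stream_t* ctx = LZ4_createStream();
    char dst[LZ4_COMPRESSBOUND(BLOCK_SIZE)];
    int pos = 0;
    if (ctx == NULL) return -1;
    while (pos < srcLen) {
        int chunk = (srcLen - pos < BLOCK_SIZE) ? (srcLen - pos) : BLOCK_SIZE;
        int csize = LZ4_compress_continue(ctx, src + pos, dst, chunk);
        if (csize <= 0) { LZ4_freeStream(ctx); return -1; }
        emit(dst, csize);   /* hypothetical callback: store/send the block */
        pos += chunk;
    }
    LZ4_freeStream(ctx);
    return 0;
}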
The streaming branch has been merged into the "dev" branch:
https://github.com/Cyan4973/lz4/tree/dev
Getting closer to a release.
Original comment by yann.col...@gmail.com
on 11 Jun 2014 at 9:10
Hi Yann,
I'm looking at the LZ4 HC header in the 'dev' tree, and it still states 'Note :
these streaming functions still follows the older model'. Should these
functions still be used for new projects, or is there some newer/better API?
I'd really appreciate a pointer here. A single function name would be enough.
But if there's some sample code, that'd be even better ;)
Thanks!
Original comment by dchichkov@gmail.com
on 13 Jun 2014 at 1:01
Hi Dmitry
Realistically, the current streaming API of LZ4 HC will remain "as is" for a
few weeks. Adapting it requires time and caution, and I have to move on and
spend some time on a long-overdue request for xxHash.
Therefore, I'll soon update LZ4 with the new streaming interface *for the Fast
variant only*.
Should you need to start development using the LZ4 HC streaming interface, I
recommend using the currently available one.
Hopefully, that shouldn't be much of a problem.
The benefit of the new interface is that it will be more flexible. But if your
problem can be solved within the current interface's limitations, you'll have
no trouble adapting to the new interface when it becomes available.
Moreover, I intend to continue supporting the "current" streaming interface for
quite some time, even after publication of the new one, by moving the relevant
functions into an "obsolete" category. Since they will stay there for some
time, users will have time to adapt.
Regards
Original comment by yann.col...@gmail.com
on 14 Jun 2014 at 10:12
Hi Yann,
Is there a way to decompress a stream in chunks when it was created
via "LZ4_compress()"?
Thanks a lot!
Markus
Original comment by doppelba...@gmail.com
on 14 Jun 2014 at 10:46
@doppelbauer
Basically, no.
If you compress a chunk as a single block, using LZ4_compress(), you have to
decompress it as a single block too.
The only exception is LZ4_decompress_safe_partial(), which can decompress part
of a block, from the beginning of the block up to targetOutputSize.
But if what you need is a chunk at the end of the block, you will have to
decompress the entire block anyway.
It's still unclear to me whether your problem is related to memory management
(you don't want to decompress the entire block because there is not enough
memory for it) or to random access into the block (you just want to decompress
a small part of the compressed block because that's all you need).
The two problems are vastly different and require completely different
solutions.
Original comment by yann.col...@gmail.com
on 14 Jun 2014 at 10:52
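For the case where only the beginning of a block is needed, LZ4_decompress_safe_partial() looks roughly like this (a sketch; note that older releases could write somewhat past the target, so out should not be sized to exactly `want`):

#include "lz4.h"

/* Decode only the first `want` bytes of a block compressed with
 * LZ4_compress(). outCap is the full capacity of out. */
int decode_prefix(const char* comp, int compSize,
                  char* out, int outCap, int want)
{
    /* Returns the number of bytes decoded, negative on error. */
    return LZ4_decompress_safe_partial(comp, out, compSize, want, outCap);
}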
A database server sends compressed records to a client.
Currently I use "LZ4_compress()" and "LZ4_decompress_safe()", but the client
has to know the size of the decompressed value (and can only start
decompressing after receiving the last byte).
So my idea was that it would be great to decompress the stream in chunks.
Thanks a lot!
Markus
Original comment by doppelba...@gmail.com
on 14 Jun 2014 at 10:59
Then you may benefit from the new streaming API.
It basically depends on the size of your records.
If they are a few KB each, then there is probably no better alternative;
you're already doing the right thing.
If they are a few MB each, then it's time to consider cutting each record into
smaller blocks.
Use LZ4_compress_continue() on the small blocks instead of LZ4_compress().
This way, the compression ratio will remain roughly equivalent (instead of
dropping dramatically as the block size shrinks).
Then, you'll be able to decompress the small blocks one by one, using
LZ4_decompress_safe_withPrefix64k() or LZ4_decompress_safe_usingDict().
Original comment by yann.col...@gmail.com
on 14 Jun 2014 at 11:04
A new streaming API is proposed within r118.
It only covers the Fast compression variant and decompression, though.
I'll keep the issue open; it will be closed when the new streaming API is also
proposed for the High Compression variant, LZ4HC.
Original comment by yann.col...@gmail.com
on 26 Jun 2014 at 9:51
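A round-trip decode with the r118-style tracked decoder might look like this (a sketch using the entry points that eventually shipped in lz4.h; decoding into one contiguous buffer keeps previously decoded data available for back-references, as the API requires):

#include "lz4.h"

#define BLOCK_SIZE (64 * 1024)

/* out must hold nbBlocks * BLOCK_SIZE bytes.
 * Returns total decoded bytes, or -1 on error. */
int decode_stream(const char* blocks[], const int sizes[], int nbBlocks,
                  char* out)
{
    LZ4_streamDecode_t* ctx = LZ4_createStreamDecode();
    int total = 0;
    if (ctx == NULL) return -1;
    for (int i = 0; i < nbBlocks; i++) {
        int r = LZ4_decompress_safe_continue(ctx, blocks[i], out + total,
                                             sizes[i], BLOCK_SIZE);
        if (r < 0) { LZ4_freeStreamDecode(ctx); return -1; }
        total += r;
    }
    LZ4_freeStreamDecode(ctx);
    return total;
}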
With the release of the latest LZ4 HC streaming API, I guess we can now close this
long-standing enhancement request.
Original comment by yann.col...@gmail.com
on 8 Nov 2014 at 8:44
Supported in r124.
Original comment by yann.col...@gmail.com
on 8 Nov 2014 at 8:45
Thanks for doing this.
Original comment by rogerdpa...@gmail.com
on 16 Dec 2014 at 6:19
Original issue reported on code.google.com by
nathan.m...@gmail.com
on 4 Nov 2012 at 9:40