azyx3 / lz4

Automatically exported from code.google.com/p/lz4

Running with limitedOutput can overrun destination buffer sometimes #98

Closed GoogleCodeExporter closed 9 years ago

GoogleCodeExporter commented 9 years ago
What steps will reproduce the problem?
1. Run with a 15MB source buffer containing certain values
2. Call LZ4_compress_limitedOutput(src, dest, 15728640, 15728640)
3. The destination buffer is overrun

What is the expected output? What do you see instead?

I expect the compression attempt to fail, but not to have the output buffer overwritten.

What version of the product are you using? On what operating system?

I'm using r104, but I see the same issue is still there in r108

Please provide any additional information below.

The issue is that the limitedOutput tests use length>>8, which is the same as 
dividing by 256, but the RUN_MASK code that follows divides by 255.  This 
off-by-one adds up and can cause the destination buffer to be overrun.  In my 
case, length is 15334400 when the first limitedOutput test is reached.  Divided 
by 255 the result is 60134; shifted right by 8 the result is 59900.  This 
difference is enough to step past the end of the output buffer.
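
For concreteness, a minimal self-contained sketch of the arithmetic above (not 
part of the original report), printing the two estimates for that run length:

```c
/* Standalone demo: compares the two worst-case estimates for the
 * run length reported in this issue. */
#include <stdio.h>

int main(void)
{
    unsigned length = 15334400;      /* run length reported above */
    unsigned by255  = length / 255;  /* extra length bytes the encoder emits: 60134 */
    unsigned by256  = length >> 8;   /* what the >>8 check reserves: 59900 */

    printf("length/255 = %u\n", by255);
    printf("length>>8  = %u\n", by256);
    printf("shortfall  = %u bytes\n", by255 - by256);
    return 0;
}
```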

While I have a file that can recreate this issue, I am not attaching it here 
because it is 15MB and not compressible.  If you need it, let me know.

For now I have changed both tests to length/255 to address this issue.  The 
compiler seems to do a pretty good job of generating decent code rather than 
emitting an actual divide.
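
As a hedged sketch (not the actual lz4.c code), the corrected output-limit test 
can be written as a standalone helper; the pointer names and LASTLITERALS follow 
lz4.c, and the exact line is quoted later in this thread:

```c
#include <stddef.h>

#define LASTLITERALS 5   /* end-of-block literal margin used by lz4.c */

/* Sketch of the corrected output-limit test (not the actual lz4.c code).
 * `op` is the current write position, `oend` the end of the destination
 * buffer, and `length` the run length about to be encoded.  Returns
 * nonzero if encoding could overrun `oend`. */
static int outputWouldOverrun(const char* op, const char* oend, size_t length)
{
    /* one extra length byte is emitted per 255 bytes, hence /255, not >>8 */
    return op + (1 + LASTLITERALS) + (length / 255) > oend;
}
```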

Scott

Original issue reported on code.google.com by lsharv...@gmail.com on 11 Nov 2013 at 6:44

GoogleCodeExporter commented 9 years ago

Original comment by yann.col...@gmail.com on 12 Nov 2013 at 5:35

GoogleCodeExporter commented 9 years ago
Thanks for the clear description.

I'm interested in the sample file, if possible.
Could you make it available through Dropbox or another file-sharing service?

Thanks

Original comment by yann.col...@gmail.com on 13 Nov 2013 at 7:27

GoogleCodeExporter commented 9 years ago
Hi Yann,

I think I have copied the data up to Google Drive.  Give the links below a
try and see if they work.  I also included a test program that I wrote to
test it after I ran into issues on our hardware.

We are doing buffer-to-buffer operations using 15MB buffers for both the
source and the destination.

Just link the program with your code.

Please let me know if you are able to get the files OK.

Thanks,
Scott

bad_data -
https://drive.google.com/file/d/0B1lUjQzCavpmdjVDdzRmVHpiQU0/edit?usp=sharing
test.c -
https://drive.google.com/file/d/0B1lUjQzCavpmcnFCejlRQ1RtQjA/edit?usp=sharing
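
The attached test.c is not reproduced in this thread; a hypothetical sketch of 
such a harness, following the description above (15MB buffers for both source 
and destination, linked against lz4.c), might look like the following.  The 
guard region and the file handling are assumptions, not the actual test program; 
the canary simply makes an overrun observable without waiting for heap corruption:

```c
/* Hypothetical harness (not the actual test.c): reads up to 15MB into a
 * source buffer, compresses buffer-to-buffer with
 * LZ4_compress_limitedOutput(), and checks a canary region placed right
 * after the 15MB destination buffer for corruption.  Link with lz4.c. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "lz4.h"

#define BUF_SIZE 15728640    /* 15MB, as described above */
#define GUARD    4096        /* canary bytes appended to the destination */

int main(int argc, char** argv)
{
    const char* path = (argc > 1) ? argv[1] : "bad_data";
    char* src  = malloc(BUF_SIZE);
    char* dest = malloc(BUF_SIZE + GUARD);
    FILE* f    = fopen(path, "rb");
    size_t n, i;
    int r;

    if (!src || !dest || !f) { fprintf(stderr, "setup failed\n"); return 1; }

    n = fread(src, 1, BUF_SIZE, f);
    fclose(f);

    memset(dest + BUF_SIZE, 0xAA, GUARD);   /* fill the canary region */

    r = LZ4_compress_limitedOutput(src, dest, (int)n, BUF_SIZE);
    printf("LZ4_compress_limitedOutput returned %d\n", r);

    for (i = 0; i < GUARD; i++) {
        if ((unsigned char)dest[BUF_SIZE + i] != 0xAA) {
            printf("overrun: canary modified %zu bytes past the end\n", i);
            return 1;
        }
    }
    printf("no overrun detected\n");
    free(src);
    free(dest);
    return 0;
}
```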

Original comment by lsharv...@gmail.com on 14 Nov 2013 at 12:55

GoogleCodeExporter commented 9 years ago
Thanks, I've received the files.
I'll take a look at them as soon as my planning allows.

Original comment by yann.col...@gmail.com on 14 Nov 2013 at 9:47

GoogleCodeExporter commented 9 years ago
I had some time to review this issue today.
The test program is well written, and points directly at the problem.
The proposed solution is correct too.
I thoroughly checked its impact on speed, and it appears to be negligible.

So this fix will be part of the next release.

Best Regards

Original comment by yann.col...@gmail.com on 1 Dec 2013 at 2:07

GoogleCodeExporter commented 9 years ago
Corrected in r109

Original comment by yann.col...@gmail.com on 3 Dec 2013 at 3:54

GoogleCodeExporter commented 9 years ago
Looking at r109, I think line 506 is missing the fix? 
http://code.google.com/p/lz4/source/diff?spec=svn109&r=109&format=side&path=/trunk/lz4.c

Should
  if ((limitedOutput) && unlikely(op + (1 + LASTLITERALS) + (length>>8) > oend)) return 0;    // Check output limit

be replaced with
  if ((limitedOutput) && unlikely(op + (1 + LASTLITERALS) + (length/255) > oend)) return 0;    // Check output limit

Original comment by jpou...@gmail.com on 4 Jan 2014 at 5:27

GoogleCodeExporter commented 9 years ago
Good point, Adrien.
I pondered this choice when building r109. Here are my thoughts:

Line 509 is about the length of matches.
The match-length estimate using >>8 is correct up to 64KB, then it drifts
by one byte. If such a match exists (typically a long run of zeroes), it
means we have achieved the maximum compression ratio, 255:1, on that
segment.
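
As an illustrative aside (not from the original comment), the gap between 
length/255 and length>>8 grows by roughly one byte per 64KB of length, so only 
very long matches can make the >>8 estimate meaningfully short:

```c
/* Illustrative aside: the gap between length/255 and length>>8 grows by
 * roughly one byte per 64KB (65280 bytes to be exact, since
 * 1/255 - 1/256 = 1/65280). */
#include <stdio.h>

int main(void)
{
    /* 64KB, 1MB, and the run length from the original report */
    unsigned lengths[] = { 65536, 1048576, 15334400 };
    size_t i;
    for (i = 0; i < sizeof(lengths) / sizeof(lengths[0]); i++) {
        unsigned len = lengths[i];
        printf("length %8u: /255 = %6u, >>8 = %6u, gap = %3u bytes\n",
               len, len / 255, len >> 8, len / 255 - (len >> 8));
    }
    return 0;
}
```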

With such an impressive saving at hand, it's quite improbable to simultaneously
cross the output buffer limit (in general, the output limit ~= input size; to
reach such an outcome, the output buffer would need to be smaller than the
input size minus 64KB, which I've not seen so far).

Finally, I tested the Line 509 modification using /255,
and it resulted in a loss of performance.

So here we are: there is a theoretical issue which is fairly difficult to
trigger. Only forged data can realistically produce this result; even then, it
requires access to the source data being compressed, an advanced understanding
of the implementation's buffer sizes, and a specific condition on the output
buffer size (< input size - 64KB). Effectively, it requires these buffers to be
quite large to begin with, which is untypical of current LZ4 scenarios (i.e., it
cannot affect ZFS, it cannot affect Lucene/SolR, etc.). On the other hand,
preventing this theoretical corner case costs some measurable performance for
everyone.

So I felt it was necessary to pause before moving forward on this one.

Original comment by yann.col...@gmail.com on 4 Jan 2014 at 9:16