AgentD / squashfs-tools-ng

A new set of tools and libraries for working with SquashFS images

Bug: sqfs_data_reader_read continues past end of file. #58

Closed smoser closed 4 years ago

smoser commented 4 years ago

When using libsquashfs as a library, I came across a bug in sqfs_data_reader_read. It also shows up when using the sqfsbrowse demo. I'm using git at 0.9.1 (d4cccaa8dd4c49eba3d26f078769e72d71c4563d).

Here's how to reproduce it:

$ echo "stat bash" | ./sqfsbrowse ../test/go-squash/test.squashfs  
$ stat bash
Stat: bash
Type: file
Inode number: 1
Access: 0755/rwxr-xr-x
UID: 1000 (index = 0)
GID: 1000 (index = 0)
Last modified: Thu, 18 Jun 2020 21:14:44 +0000 (1592514884)
Blocks start: 96
Block count: 10
Fragment index: 0xFFFFFFFF
Fragment offset: 0
File size: 1183448
        Block #0 size: 42060 (compressed)
        Block #1 size: 42856 (compressed)
        Block #2 size: 67036 (compressed)
        Block #3 size: 67184 (compressed)
        Block #4 size: 68972 (compressed)
        Block #5 size: 67236 (compressed)
        Block #6 size: 68480 (compressed)
        Block #7 size: 37888 (compressed)
        Block #8 size: 27568 (compressed)
        Block #9 size: 852 (compressed)

$ echo "cat bash" | ./sqfsbrowse ../test/go-squash/test.squashfs  > out
$ ls -l out
-rw-rw-r-- 1 smoser smoser 1310733 Jun 18 17:23 out

The file 'bash' (as reported by stat) is 1183448 bytes, which agrees with the file that I added to the test squashfs. But when we 'cat' the file and redirect it to out, we get 1310733 bytes. I think readline accounts for 13 of those bytes ($ cat /bin/bash and some trailing characters). When I do the same operation through the library, I keep reading until I get 1310720 bytes.

1310720 is 128 KiB * 10, which is very likely related to the block size.

See the attached test.squash.zip

smoser commented 4 years ago

I forgot that I could attach a gzip. So here is a gzip rather than making you look at a zip file: test.squashfs.gz

AgentD commented 4 years ago

Hi!

I tested it with your sample and could reproduce the problem. The cause appears to be that the size of the file bash is not a multiple of the block size, but instead of generating a fragment block, the tail end is stored as a short block. The loop that copies the data mistakenly assumes that the last block is also a full sized block.

Commit 94cfa86 should fix the issue by clamping the offset & range to the file size and removing the fragile block bounds testing from the copy loop.

If you test it with your sample, the output file should still be a bit larger (1183461 in my case, i.e. 13 bytes off), because the readline prompt is written to stdout as well.

Thanks!

smoser commented 4 years ago

@AgentD,

Thanks! I can verify that it's working for me now.