adamhathcock / sharpcompress

SharpCompress is a fully managed C# library to deal with many compression types and formats.
MIT License

Streams larger than `uint.MaxValue` cause broken zip files #209

Closed: kenkendk closed this 7 years ago

kenkendk commented 7 years ago

I have tracked down an issue causing failures when attempting to read zip files with SharpCompress (files created AND read by SharpCompress).

The error message is `Unknown header {value}`, where `{value}` is some random bytes from the file. This is similar to issue #33, but that issue reports it for a much smaller file (I have tested the file mentioned there, and it does not appear to cause any errors).

The problem is that the Central Directory Entry is limited to storing the Size and HeaderOffset as `uint` values. SharpCompress does not check whether this limit is exceeded, so creating the archive succeeds, but reading it later fails. In the example below, this is triggered with a single file of 4GB size, but it can also happen with many smaller files, as long as the HeaderOffset value grows beyond 2^32.
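To make the limit concrete, here is a sketch (in Python, not SharpCompress code) of the fixed-width part of a classic central directory file header as defined in PKWARE's APPNOTE: the compressed size, uncompressed size, and local header offset are all 4-byte fields, so any value at or above 2^32 simply cannot be represented. The field values other than the sizes and offset are dummies.

```python
import struct

def central_directory_fixed_part(uncompressed_size, compressed_size, header_offset):
    """Pack the 46-byte fixed part of a classic (non-zip64) central
    directory header. Raises if a value cannot fit in a 4-byte field."""
    if max(uncompressed_size, compressed_size, header_offset) > 0xFFFFFFFF:
        raise OverflowError("value does not fit in a 4-byte field; zip64 is required")
    return struct.pack(
        "<4sHHHHHHIIIHHHHHII",
        b"PK\x01\x02",    # central directory header signature
        20, 20,           # version made by / version needed to extract
        0, 0,             # general purpose flags, compression method (store)
        0, 0,             # last mod time, last mod date
        0,                # CRC-32 (dummy)
        compressed_size,  # 4-byte field -- the limit in question
        uncompressed_size,  # 4-byte field
        0, 0, 0,          # file name / extra field / comment lengths
        0, 0,             # disk number start, internal attributes
        0,                # external attributes
        header_offset,    # 4-byte field -- overflows once the archive passes 4GB
    )
```

Calling this with any argument of `2**32` or more raises, which is exactly the check that option 1 below would add to the writer.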

There is another issue in that the number of files is limited to `ushort`, but this appears to have no effect other than reporting the wrong number of files, which is not directly exposed.

A workaround for reading such a file is the forward-only interface, which does not read the Central Directory Entry; it can correctly read the file contents unless a single file is itself larger than 2^32 bytes.

I can think of two solutions:

1) Prevent creating files with offsets larger than 2^32, simply throwing an exception if this is detected.

2) Support zip64, which sets the 4-byte size and offset fields to the sentinel `0xFFFFFFFF` and stores the real 64-bit values in the extended information ("extra") field.

I think support for zip64 is the better choice here. Reading support for zip64 has already been added, but it requires that the correct zip64 records are written.
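The zip64 scheme from option 2 can be sketched as follows (again in Python as a format illustration, not SharpCompress code): the classic 4-byte fields are written as `0xFFFFFFFF`, and an extra field with header ID `0x0001` carries the true 64-bit values.

```python
import struct

ZIP64_EXTRA_ID = 0x0001  # header ID for the zip64 extended information extra field

def zip64_extra_field(uncompressed_size, compressed_size, header_offset):
    """Pack a zip64 extended information extra field carrying the 64-bit
    values that replace the 0xFFFFFFFF sentinels in the classic fields."""
    data = struct.pack("<QQQ", uncompressed_size, compressed_size, header_offset)
    # extra field layout: 2-byte header ID, 2-byte data size, then the payload
    return struct.pack("<HH", ZIP64_EXTRA_ID, len(data)) + data
```

A reader that encounters `0xFFFFFFFF` in a classic field then looks up the corresponding 64-bit value in this record.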

I will see if I can make a PR that adds zip64 support.

Example code that reproduces the issue (note: the original sample compared the total size against `filesize` while the message reported `files * filesize`; the comparison below uses `files * filesize` consistently, and the `SharpCompress.Readers` usings needed by the forward-only reader are included):

```csharp
using System;
using System.IO;
using System.Linq;
using SharpCompress.Archives;
using SharpCompress.Common;
using SharpCompress.Readers;
using SharpCompress.Readers.Zip;
using SharpCompress.Writers;
using SharpCompress.Writers.Zip;

namespace BreakTester
{
    class MainClass
    {
        public static void Main(string[] args)
        {
            // Set up the parameters
            var files = 1;
            var filesize = 4 * 1024 * 1024 * 1024L; // 4 GiB, just past uint.MaxValue
            var write_chunk_size = 1024 * 1024;

            var filename = "broken.zip";
            if (!File.Exists(filename))
                CreateBrokenZip(filename, files, filesize, write_chunk_size);

            var res = ReadBrokenZip(filename);
            if (res.Item1 != files)
                throw new Exception($"Incorrect number of items reported: {res.Item1}, should have been {files}");
            if (res.Item2 != files * filesize)
                throw new Exception($"Incorrect total size reported: {res.Item2}, should have been {files * filesize}");
        }

        public static void CreateBrokenZip(string filename, long files, long filesize, long chunksize)
        {
            var data = new byte[chunksize];

            // We use the store method to make sure our data is not compressed,
            // which makes it easier to create large files
            var opts = new ZipWriterOptions(CompressionType.None);

            using (var zip = File.OpenWrite(filename))
            using (var zipWriter = (ZipWriter)WriterFactory.Open(zip, ArchiveType.Zip, opts))
            {
                for (var i = 0; i < files; i++)
                    using (var str = zipWriter.WriteToStream(i.ToString(), new ZipWriterEntryOptions() { DeflateCompressionLevel = SharpCompress.Compressors.Deflate.CompressionLevel.None }))
                    {
                        var left = filesize;
                        while (left > 0)
                        {
                            var b = (int)Math.Min(left, data.Length);
                            str.Write(data, 0, b);
                            left -= b;
                        }
                    }
            }
        }

        // Reads via the central directory; this is what fails once
        // offsets exceed uint.MaxValue
        public static Tuple<long, long> ReadBrokenZip(string filename)
        {
            using (var archive = ArchiveFactory.Open(filename))
            {
                return new Tuple<long, long>(
                    archive.Entries.Count(),
                    archive.Entries.Select(x => x.Size).Sum()
                );
            }
        }

        // Workaround: forward-only reading never consults the central directory
        public static Tuple<long, long> ReadBrokenZipForwardOnly(string filename)
        {
            long count = 0;
            long size = 0;
            using (var fs = File.OpenRead(filename))
            using (var rd = ZipReader.Open(fs, new ReaderOptions() { LookForHeader = false }))
                while (rd.MoveToNextEntry())
                {
                    count++;
                    size += rd.Entry.Size;
                }

            return new Tuple<long, long>(count, size);
        }
    }
}
```
adamhathcock commented 7 years ago

Thanks for this.

If you could implement zip64 writing, that would be great! I imagine both of the solutions you mentioned will be required (unless switching from zip to zip64 can be automatic).

I have to confess I haven't looked too deeply at the zip64 format yet.

kenkendk commented 7 years ago

Urgh, I just found an error from this commit: https://github.com/adamhathcock/sharpcompress/commit/6be6ef0b5c3ed6a4d40d00c8fb133518e75e4a6f

The values here are changed: https://github.com/adamhathcock/sharpcompress/blob/6be6ef0b5c3ed6a4d40d00c8fb133518e75e4a6f/src/SharpCompress/Common/Zip/Headers/ZipFileEntry.cs#L60

But the write uses the type here: https://github.com/adamhathcock/sharpcompress/blob/6be6ef0b5c3ed6a4d40d00c8fb133518e75e4a6f/src/SharpCompress/Common/Zip/Headers/LocalEntryHeader.cs#L45

So now it writes invalid local headers, as they have 8-byte size fields instead of 4-byte size fields.

kenkendk commented 7 years ago

Urgh, and in the central directory too!

adamhathcock commented 7 years ago

Wow, that is a bad error. The writing needs to know whether it's zip64 or not and write the correct field size.

I guess zip64 writing needs to be done ASAP.

kenkendk commented 7 years ago

zip64 keeps the size field, but sets it to `0xFFFFFFFF` and then writes an "extra" record containing the sizes as 64-bit values.

kenkendk commented 7 years ago

Thanks for the quick merge and update.

I will investigate some more, because there must be some mitigating factor; otherwise the filenames would be garbled in all produced zip files, which would break my unit tests, so I would have caught it there.

adamhathcock commented 7 years ago

If you've got some tests that can be added to SharpCompress, I would appreciate it. Thanks for finding this.

Fixed by https://github.com/adamhathcock/sharpcompress/pull/210

kenkendk commented 7 years ago

This issue is not fixed; #210 addressed a problem with writing invalid headers. There is still an issue with creating files larger than 4GB.

kenkendk commented 7 years ago

Phew, found the mitigating factor: https://github.com/adamhathcock/sharpcompress/blob/06e3486ec4c67377c4aa6c65b79d50d7e7925e56/src/SharpCompress/Writers/Zip/ZipCentralDirectoryEntry.cs#L42

This is still wrong, as it converts the value to 8 bytes and then takes only the first 4. Fortunately, little-endian byte order means those are the low 32 bits, so the value is written correctly on any x86 or ARM machine as long as it fits in 4 bytes.
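The byte-order argument can be demonstrated directly (a Python sketch of the truncation, not SharpCompress's actual serialization code): on little-endian layouts, the first 4 bytes of an 8-byte encoding are exactly the 4-byte encoding of the low 32 bits.

```python
import struct

# A value that fits in 32 bits: truncating its little-endian 8-byte
# encoding to 4 bytes yields the correct uint encoding.
value = 123456789
assert struct.pack("<q", value)[:4] == struct.pack("<I", value)

# A value that does not fit: the truncation silently drops the high
# bits, which is exactly the silent corruption past the 4GB limit.
big = 2**32 + 5
assert struct.pack("<q", big)[:4] == struct.pack("<I", big & 0xFFFFFFFF)
assert struct.unpack("<I", struct.pack("<q", big)[:4])[0] == 5
```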

kenkendk commented 7 years ago

Nope, wrong again, the actual entry used is: https://github.com/adamhathcock/sharpcompress/blob/06e3486ec4c67377c4aa6c65b79d50d7e7925e56/src/SharpCompress/Writers/Zip/ZipCentralDirectoryEntry.cs#L17

which has the `uint` type. I do not know where the other serialization methods are called from.

adamhathcock commented 7 years ago

You can't zip files larger than 4 GB. The zip64 support is currently read-only.

kenkendk commented 7 years ago

Sure you can, try the example code in this issue.

adamhathcock commented 7 years ago

If it works, then it's unintentional and you're finding the errors because of it.

kenkendk commented 7 years ago

Yes, that is why I opened the issue. You can create a zip file larger than 4 GB without errors, and you only see the errors when you try to read the file.

I would expect some error when crossing the 4 GB limit; otherwise I will not discover the problem until I attempt to read the file.

adamhathcock commented 7 years ago

Okay, sorry. I thought this was a continuation of the discussion of the writing error.

Yes, something should be done to detect files that are too large for plain zip, and/or zip64 should be implemented.