Closed. kenkendk closed this issue 7 years ago.
Thanks for this.
If you could implement zip64 writing that would be great! I imagine both of the solutions you mentioned will be required (unless changing to zip64 from zip can be automatic)
I have to confess I haven't looked too deeply at the zip64 format at the moment.
Urgh, I just found an error from this commit: https://github.com/adamhathcock/sharpcompress/commit/6be6ef0b5c3ed6a4d40d00c8fb133518e75e4a6f
The values here are changed: https://github.com/adamhathcock/sharpcompress/blob/6be6ef0b5c3ed6a4d40d00c8fb133518e75e4a6f/src/SharpCompress/Common/Zip/Headers/ZipFileEntry.cs#L60
But the write uses the type here: https://github.com/adamhathcock/sharpcompress/blob/6be6ef0b5c3ed6a4d40d00c8fb133518e75e4a6f/src/SharpCompress/Common/Zip/Headers/LocalEntryHeader.cs#L45
So now it writes invalid local headers, as they have 8-byte size fields instead of 4-byte ones.
Urgh, and in the central directory too!
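(For reference, the local file header in the zip format fixes the CRC and both size fields at 4 bytes each. The sketch below is not SharpCompress's actual writer, just the expected layout per the zip spec.)

```csharp
// Sketch only (not SharpCompress's writer): the local file header layout from
// the zip spec. The CRC and both size fields are fixed at 4 bytes; writing
// 8-byte sizes here shifts every following field and corrupts the archive.
using System.IO;

static class LocalHeaderSketch
{
    public static void Write(BinaryWriter w, uint crc, uint compressedSize,
                             uint uncompressedSize, byte[] nameBytes)
    {
        w.Write(0x04034b50u);              // local file header signature
        w.Write((ushort)20);               // version needed to extract
        w.Write((ushort)0);                // general purpose bit flag
        w.Write((ushort)8);                // compression method (deflate)
        w.Write((ushort)0);                // last mod file time
        w.Write((ushort)0);                // last mod file date
        w.Write(crc);                      // CRC-32            (4 bytes)
        w.Write(compressedSize);           // compressed size   (4 bytes, not 8)
        w.Write(uncompressedSize);         // uncompressed size (4 bytes, not 8)
        w.Write((ushort)nameBytes.Length); // file name length
        w.Write((ushort)0);                // extra field length
        w.Write(nameBytes);                // file name bytes
    }
}
```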
Wow, that is a bad error. The writer needs to know whether it's zip64 or not and write the correct field size.
I guess zip64 writing needs to be done ASAP.
zip64 keeps the size fields in place, but sets them to 0xffffffff and then writes an "extra" record carrying the actual 64-bit values.
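(In other words, the 4-byte fields stay where they are but hold 0xffffffff, and the real values go into a Zip64 "extended information" extra field with header id 0x0001. A rough sketch of that extra record, assuming a plain BinaryWriter, not SharpCompress code:)

```csharp
// Sketch of the Zip64 "extended information" extra field (header id 0x0001)
// that carries the real 64-bit values once the 4-byte header fields are set
// to 0xffffffff. Per the zip spec, only the fields whose 4-byte counterparts
// are 0xffffffff are actually included; this sketch writes all three.
using System.IO;

static class Zip64ExtraSketch
{
    public static void Write(BinaryWriter w, ulong uncompressedSize,
                             ulong compressedSize, ulong localHeaderOffset)
    {
        w.Write((ushort)0x0001);    // Zip64 extra field header id
        w.Write((ushort)24);        // size of the data that follows (3 x 8 bytes)
        w.Write(uncompressedSize);  // original (uncompressed) size, 8 bytes
        w.Write(compressedSize);    // compressed size, 8 bytes
        w.Write(localHeaderOffset); // offset of the local header, 8 bytes
    }
}
```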
Thanks for the quick merge and update.
I will investigate some more, because there must be some mitigating factor; otherwise the filenames would be garbled in all produced zip files, which would break my unit tests, so I would have caught it there.
If you've got some tests that can be added to SharpCompress, I would appreciate it. Thanks for finding this.
Fixed by https://github.com/adamhathcock/sharpcompress/pull/210
This issue is not fixed; #210 addressed the problem of writing invalid headers. There is still an issue with creating files larger than 4 GB.
Phew, found the mitigating factor: https://github.com/adamhathcock/sharpcompress/blob/06e3486ec4c67377c4aa6c65b79d50d7e7925e56/src/SharpCompress/Writers/Zip/ZipCentralDirectoryEntry.cs#L42
This is still wrong, as it converts the value to 8 bytes and then takes only 4. Fortunately, little-endian byte order means the value is written correctly on any x86 or ARM machine.
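(A minimal illustration, my own and not from the library, of why that happens to work on little-endian machines:)

```csharp
// Illustration only: taking the first 4 of the 8 little-endian bytes yields
// the same bytes as a direct 4-byte conversion, as long as the value fits
// in 32 bits, because BitConverter emits the least significant bytes first
// on little-endian machines (x86, ARM).
using System;
using System.Linq;

ulong size = 123456789;                           // any value below 2^32
byte[] eight = BitConverter.GetBytes(size);       // 8 bytes
byte[] four  = BitConverter.GetBytes((uint)size); // 4 bytes

Console.WriteLine(eight.Take(4).SequenceEqual(four)); // True while size < 2^32
```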
Nope, wrong again, the actual entry used is: https://github.com/adamhathcock/sharpcompress/blob/06e3486ec4c67377c4aa6c65b79d50d7e7925e56/src/SharpCompress/Writers/Zip/ZipCentralDirectoryEntry.cs#L17
which has the `uint` type. I do not know where the other serialization methods are called.
You can't zip files larger than 4 GB. The Zip64 support is currently read-only.
Sure you can, try the example code in this issue.
If it works, then it's unintentional and you're finding the errors because of it.
Yes, that is why I opened the issue. You can create a zip file larger than 4 GB without errors, and you only see the errors when you try to read the file.
I would expect some error when crossing the 4 GB limit; otherwise I won't discover the problem before I attempt to read the file.
Okay, sorry. I thought this was a discussion of the continuation of the writing error.
Yes, something should be done to detect when files are too large for plain zip, and/or zip64 should be implemented.
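(Until zip64 writing exists, even a simple guard along these lines, which is hypothetical and not currently in SharpCompress, would surface the problem at write time instead of at read time:)

```csharp
// Hypothetical guard, not part of SharpCompress: fail fast when a size or
// header offset can no longer be represented in the 4-byte zip fields.
using System;

static class ZipLimitGuard
{
    public static void EnsureFitsInPlainZip(ulong entrySize, ulong headerOffset)
    {
        if (entrySize > uint.MaxValue || headerOffset > uint.MaxValue)
            throw new NotSupportedException(
                "Entry size or header offset exceeds 4 GB; plain zip cannot represent it and zip64 writing is not implemented.");
    }
}
```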
I have tracked down an issue causing failures when attempting to read zip files with SharpCompress (files created AND read by SharpCompress).
The error message is `Unknown header {value}`, where `{value}` is some random bytes from the file. This is similar to issue #33, but that one reports it for a much smaller file (I have tested the file mentioned in that issue, and it does not appear to cause any errors).

The problem is that the Central Directory Entry is limited to storing the `Size` and `HeaderOffset` as `uint` values. There are no checks in SharpCompress for when this limit is exceeded, so creating the archive succeeds but reading it back fails later. In the example below, this is triggered with a single file of 4 GB, but it can also happen with many smaller files, as long as the `HeaderOffset` value becomes larger than 2^32.

There is another issue in that the number of files is limited to `ushort`, but this appears to have no effect other than reporting the wrong number of files, which is not directly exposed.

A workaround for reading such a file is the forward-only interface, which does not read the Central Directory Entry and can correctly read the file contents, unless a single file is larger than 2^32.
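(For illustration, the forward-only path looks roughly like this, assuming the `ReaderFactory` API; file names are placeholders and the exact calls may differ by version:)

```csharp
// Sketch of the forward-only workaround: the reader streams through the
// local headers and never consults the (overflowed) central directory values.
// Assumes the SharpCompress.Readers API; adjust names per version.
using System.IO;
using SharpCompress.Readers;

using (Stream stream = File.OpenRead("big.zip"))
using (var reader = ReaderFactory.Open(stream))
{
    while (reader.MoveToNextEntry())
    {
        if (reader.Entry.IsDirectory)
            continue;

        using (var entryStream = reader.OpenEntryStream())
        using (var target = File.Create(Path.GetFileName(reader.Entry.Key)))
            entryStream.CopyTo(target);
    }
}
```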
I can think of two solutions:

1) Prevent creating files with offsets larger than 2^32, simply throwing an exception if this is detected.
2) Support zip64, which replaces the size and offset with `0xffffffff` and stores a 64-bit value in the extended information.

I think support for zip64 is the better choice here. Reading support for zip64 has already been added, but it requires that the correct zip64 records are written.
I will see if I can make a PR that adds zip64 support.
Example code that reproduces the issue:
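(The example code itself was not captured in this thread. The sketch below is a reconstruction of the kind of reproduction described, assuming the `WriterFactory`/`ZipArchive` entry points; file names are placeholders and the source file must be larger than 4 GB.)

```csharp
// Not the original example (it was not captured here): a sketch of the
// reproduction described above. Writing an entry larger than 4 GB succeeds,
// but opening the archive through the central directory later fails with
// "Unknown header ...". API names assume WriterFactory / ZipArchive.
using System.IO;
using SharpCompress.Archives.Zip;
using SharpCompress.Common;
using SharpCompress.Writers;

using (var output = File.Create("huge.zip"))
using (var writer = WriterFactory.Open(output, ArchiveType.Zip,
                                       new WriterOptions(CompressionType.Deflate)))
using (var source = File.OpenRead("file-larger-than-4gb.bin")) // placeholder input
{
    writer.Write("bigfile.bin", source);   // no error, even though the offsets overflow
}

using (var archive = ZipArchive.Open("huge.zip"))
{
    foreach (var entry in archive.Entries) // throws "Unknown header {value}" on such archives
    {
        // reading the entries never gets this far
    }
}
```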