Closed springy76 closed 7 years ago
Thank you Springy76. You've found a subtle "bug" in the stream size parser... The official specification (MS-CFB), at the end of 2.6.1, notes that parsers should ignore the most significant 32 bits of the stream size for version 3 compound documents, because those bytes may not be zero-initialized in some files. The OpenMcdf parser did not ignore those bytes, so I've added this behavior even if it is not, strictly speaking, 100% spec-compliant. I'm going to commit the bug fix in minutes... Best Regards, Federico
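To illustrate the rule described above: in an MS-CFB directory entry the stream size is stored as an 8-byte field, and per the note in section 2.6.1 a parser should keep only the low 32 bits when the file's major version is 3. A minimal sketch in Python (the offset 120 for the size field follows the MS-CFB directory-entry layout; the function name and example bytes are hypothetical, not OpenMcdf's actual code):

```python
import struct

def read_stream_size(directory_entry: bytes, major_version: int) -> int:
    # The stream size is the 8-byte little-endian field at offset 120
    # of the 128-byte directory entry (MS-CFB 2.6.1).
    (size,) = struct.unpack_from("<Q", directory_entry, 120)
    if major_version == 3:
        # MS-CFB 2.6.1 note: for version 3 files the high 32 bits of
        # the stream size may contain garbage and must be ignored.
        size &= 0xFFFFFFFF
    return size

# Hypothetical v3 entry whose high dword holds uninitialized bytes:
# without masking, the 48-byte stream would appear to be gigabytes.
entry = bytearray(128)
struct.pack_into("<Q", entry, 120, (0xDEADBEEF << 32) | 48)
print(read_stream_size(bytes(entry), major_version=3))  # -> 48
```

Without the mask, a parser trusting the full 64-bit value would try to allocate gigabytes of sectors for a 48-byte stream, which matches the symptoms reported below.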
Wow, that was fast. Thank you for also publishing a new Nuget package.
I have this file "PHOTOS.GAL" (should be attached) which contains plausible data: sensibly named subfolders, each containing the files "DATA", "INFO", "ITEM", "ALBUM", and "THUMBNAIL".
BUT: Nearly all of the files (streams) report insanely large (multi-GB) or even negative sizes.
If you try to read even a few bytes (just 48!) from any CFStream, you get an OutOfMemoryException ("Array dimensions exceeded supported range.") after process memory rapidly climbs to 20 GB or more.
When I break into the debugger in Visual Studio as the rapid memory growth starts (already at 2 GB of process memory), the memory profiler reports 30 million instances of the "OpenMcdf.Sector" class.
Maybe something gets parsed incorrectly, or maybe the writing software was faulty -- I can't tell. From what I found on the web, the file might have been created by Adobe PhotoDeluxe 2.0's "EZ Photo".
PHOTOS.zip