flying-circus / pyfilesystem

Automatically exported from code.google.com/p/pyfilesystem
BSD 3-Clause "New" or "Revised" License

ZipFS can't open archive members bigger than available memory #72

Closed. GoogleCodeExporter closed this issue 9 years ago.

GoogleCodeExporter commented 9 years ago
What steps will reproduce the problem?
1. Create a zip archive (using whatever zip program you like) containing a file bigger than your free memory. I created a zipfile containing a 1GB file, and I'm currently testing on a netbook with only 512MB of RAM. (One way to produce such an archive is sketched below.)
2. Try to extract the file from the archive using e.g. fscp zipfile.zip\!hugefile localpath
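
For anyone reproducing this without a spare large file, a minimal sketch of building such a test archive with Python's zipfile module (the names hugefile and zipfile.zip are just placeholders matching the command above, not files from the report):

```python
# Rough sketch, not from the original report: write a 1GB file in small
# chunks so the creation step itself stays well under a 512MB memory
# budget, then add it to a zip archive.
import os
import zipfile

MEMBER_SIZE = 1 * 1024 ** 3   # 1GB member, as in the report
CHUNK = 1024 ** 2             # write 1MB at a time

with open("hugefile", "wb") as f:
    written = 0
    while written < MEMBER_SIZE:
        f.write(b"\0" * CHUNK)
        written += CHUNK

# ZipFile.write streams the source file from disk, so this step is also
# cheap on memory.
with zipfile.ZipFile("zipfile.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("hugefile")

os.remove("hugefile")
```

A file of zeros compresses to almost nothing, but its uncompressed size is still 1GB, which is what matters for this bug since the failure happens when the member is decompressed into memory.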

What is the expected output? What do you see instead?
I expect to see the file extracted (which works if the archive member is smaller than free memory). If the (uncompressed) archive member is bigger than free memory, I simply get this printed:
fscp: 
Adding the --debug flag produces the same output. (I think the limited debug output is because of the threading code used by fscp?)

What version of the product are you using? On what operating system?
Latest version from SVN, Ubuntu 11.04

Please provide any additional information below.
I eventually tracked down the cause of this - the ZipFS.open function was trying to read the entire file into a StringIO.

I managed to fix it in the attached patch, but due to the way http://docs.python.org/library/zipfile.html#zipfile.ZipFile.open works, this fix will only have any effect when using Python >= 2.6, and only on zip files that have been opened from a path string, not from a file object.
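
For context, a rough illustration (not the attached patch) of the difference between reading a member fully into memory and streaming it via zipfile.ZipFile.open, which is the standard-library call the fix relies on:

```python
# Illustrative only; not the attached patch. Shows why reading a whole
# member into a StringIO/BytesIO fails for members bigger than RAM, and
# how zipfile.ZipFile.open (Python >= 2.6) avoids it.
import io
import zipfile

def open_member_in_memory(zip_path, member):
    # Old-style approach: ZipFile.read() decompresses the whole member
    # into a single string, so memory use equals its uncompressed size.
    zf = zipfile.ZipFile(zip_path)
    return io.BytesIO(zf.read(member))

def open_member_streaming(zip_path, member):
    # ZipFile.open() returns a file-like object that decompresses on
    # demand, so the member can be read in fixed-size chunks.
    zf = zipfile.ZipFile(zip_path)
    return zf.open(member)

# e.g. copying a huge member to disk with a flat memory footprint:
# src = zipfile.ZipFile("zipfile.zip").open("hugefile")
# with open("localpath", "wb") as out:
#     for chunk in iter(lambda: src.read(1024 * 1024), b""):
#         out.write(chunk)
```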

Original issue reported on code.google.com by gc...@loowis.durge.org on 23 May 2011 at 3:04

Attachments:

GoogleCodeExporter commented 9 years ago
Applied, thanks.

Original comment by willmcgugan on 26 May 2011 at 10:41