pycontribs / pyrax

The Python SDK for the Rackspace Cloud
developer.rackspace.com
Apache License 2.0

Segment large files for upload without needing tons of RAM #551

Open Starblade42 opened 9 years ago

Starblade42 commented 9 years ago

Reason for the changes and pull request:

This change addresses a bug I found: when I tried to upload a file larger than 5 GB, relying on pyrax to segment it and handle the metadata for me, the upload failed with a MemoryError.

import pyrax

# Assumes credentials have already been set, e.g. via pyrax.set_credential_file()
cloudfiles = pyrax.cloudfiles

project_container_name = "BigDataUploadTest"
# File that's > 5 GB
backup_file = "/data/tmp/SomeBigTarBall.tar.bz2"
# Get the container
project_container = cloudfiles.get_container(project_container_name)
# Compute the checksum
checksum = pyrax.utils.get_checksum(backup_file)
# Upload the file. Raises a MemoryError
project_container.upload_file(backup_file, etag=checksum)

Inspecting the code showed that the function was trying to read 5 GB of the file into memory before writing it to disk.

# Maximum size of a stored object: 5GB - 1
MAX_FILE_SIZE = 5368709119

...

tmp.write(content.read(MAX_FILE_SIZE))

Naturally, this didn't work on my 1 GB cloud server. This change should resolve the issue for me and for others.

The StorageObjectManager._upload function now reads and writes in small pieces until the segment reaches the desired size, and it works well in my testing, especially combined with the self-deleting temp file utility function.
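For illustration, here is a rough sketch of that chunked copy loop (a minimal sketch only, not the exact patch; the helper name copy_segment and the use of a plain NamedTemporaryFile in place of pyrax's self-deleting temp file are assumptions for the example):

import tempfile

def copy_segment(source, segment_size, chunk_size=8192):
    # Copy at most segment_size bytes from an open file object into a
    # temporary file, reading chunk_size bytes at a time so memory use
    # stays flat no matter how large the segment is.
    tmp = tempfile.NamedTemporaryFile(delete=False)
    written = 0
    while written < segment_size:
        chunk = source.read(min(chunk_size, segment_size - written))
        if not chunk:
            break
        tmp.write(chunk)
        written += len(chunk)
    tmp.close()
    return tmp.name, written

The real code can then upload each temp file as one segment and repeat until the source file is exhausted.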

I only found this issue in one place, but the fix may be worth applying elsewhere. If any other file segmenting is done on disk, the new function can be reused easily.

Changes made

The function read_in_chunks reads in pieces of 8192 bytes (8 KB) by default, so even memory-constrained boxes shouldn't have problems segmenting large files as long as they have enough disk space. This required adding a new function to pyrax.utils, sketched below.
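A generator along these lines is what the description implies (a minimal sketch; the exact signature added to pyrax.utils may differ):

def read_in_chunks(file_obj, chunk_size=8192):
    # Yield successive chunk_size-byte pieces from an open file object
    # instead of reading the whole file into memory at once.
    while True:
        chunk = file_obj.read(chunk_size)
        if not chunk:
            break
        yield chunk

The segment writer can then iterate over these chunks and stop writing to the current temp file once the running total reaches MAX_FILE_SIZE.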

The following changes were made:

Obviously you'll want to verify everything, but I've done what I know to do on my end. Let me know if I can do anything to help.