I tried using pyfilesystem's s3fs to process a large quantity of data from Amazon S3 (more than fits on disk), and the operation failed because I ran out of space for the temp files s3fs creates.
Would it make sense to add an option, or change the default, to load files directly into memory instead of writing them to disk?
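For context, here is a rough sketch of the kind of in-memory, chunked processing I had in mind, which never touches the local disk. The bucket/key names in the commented-out boto3 lines are hypothetical; the point is just that any streaming binary file object could be consumed this way:

```python
import io

def process_in_chunks(fobj, chunk_size=1 << 20):
    """Consume a binary file object chunk by chunk, keeping at most
    chunk_size bytes in memory at a time and writing no temp files."""
    total = 0
    while True:
        chunk = fobj.read(chunk_size)
        if not chunk:
            break
        total += len(chunk)  # stand-in for the real per-chunk processing
    return total

# A streaming S3 body could be fed in directly, e.g. with boto3
# (hypothetical bucket and key names):
#   body = boto3.client("s3").get_object(
#       Bucket="my-bucket", Key="big-object.bin")["Body"]
#   process_in_chunks(body)

# Demonstration with an in-memory stream:
print(process_in_chunks(io.BytesIO(b"x" * 3_000_000)))  # → 3000000
```

An option like this would let s3fs-backed reads scale to objects larger than local disk, at the cost of holding only a bounded buffer in memory.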