We've had some issues with slow uploads to and downloads from S3 (though this shouldn't be much of a problem when running on EC2). Ideally we'd enable transfer compression, but S3 doesn't support it, so I'm just going to implement GZIP compression manually in the point set datastore.
A block-level job data pointset for PDX and environs goes from 60MB to 2.4MB with GZIP compression.
This does mean you'll have to clear out S3 buckets.
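The idea can be sketched as follows: compress the serialized point set bytes before uploading to S3, and decompress after downloading. This is a minimal illustration, not the actual datastore code; the function names are hypothetical.

```python
import gzip
import io


def compress_pointset(data: bytes) -> bytes:
    """GZIP-compress serialized point set bytes before uploading to S3.

    (Hypothetical helper name; the real datastore would wrap its
    upload path with something equivalent.)
    """
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode="wb") as gz:
        gz.write(data)
    return buf.getvalue()


def decompress_pointset(blob: bytes) -> bytes:
    """Decompress GZIP bytes fetched from S3 back to the original serialization."""
    with gzip.GzipFile(fileobj=io.BytesIO(blob), mode="rb") as gz:
        return gz.read()
```

Repetitive tabular data like a point set compresses very well, which is where the roughly 25x size reduction comes from.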