Open tony710508 opened 4 years ago
I have also just encountered this issue; has something changed? Note: this is between S3 buckets, not from EBS to S3.
Thanks for the report. We're using the official AWS SDK. Copying objects (from S3 to S3) larger than 5 GB
requires multipart upload, which we currently don't leverage.
https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html
You can create a copy of your object up to 5 GB in size in a single atomic operation using this API. However, to copy an object greater than 5 GB, you must use the multipart upload Upload Part - Copy API.
We're going to use the multipart Copy API to support large file transfers between S3 buckets. Please note that uploading a large file to S3 works as expected.
The PR https://github.com/aws/aws-sdk-go/pull/2653 is required to address this issue.
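For context, the Upload Part - Copy approach works by issuing one UploadPartCopy request per byte range of the source object, then completing the multipart upload. Below is a minimal sketch of the range-splitting logic only; the function name and the chosen part size are illustrative and not s5cmd's actual implementation.

```go
package main

import "fmt"

// partRanges splits an object of totalSize bytes into byte ranges suitable
// for S3 UploadPartCopy requests. S3 requires each part except the last to
// be at least 5 MiB, allows at most 5 GiB per part, and at most 10,000 parts.
func partRanges(totalSize, partSize int64) []string {
	var ranges []string
	for start := int64(0); start < totalSize; start += partSize {
		end := start + partSize - 1
		if end >= totalSize {
			end = totalSize - 1
		}
		// S3 expects the x-amz-copy-source-range header in this form.
		ranges = append(ranges, fmt.Sprintf("bytes=%d-%d", start, end))
	}
	return ranges
}

func main() {
	// A 12 GiB object split into 5 GiB parts needs three UploadPartCopy calls.
	const gib = int64(1) << 30
	for _, r := range partRanges(12*gib, 5*gib) {
		fmt.Println(r)
	}
}
```

Each emitted range would become the `x-amz-copy-source-range` of one UploadPartCopy call, followed by a single CompleteMultipartUpload once all parts finish.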
Just checking if there is any news on this issue. I could make use of this feature in my workflow. I'm not sure whether the PR @igungor mentioned has been merged, or what's stopping that merge.
Is there any update on the status of this issue?
A similar problem occurs with "sync" when a file is larger than 5 GB.

Case: the source has objects and the destination is empty.

s3cmd sync source/* dest/

"sync" fails silently on the first run after copying some objects. On the second run, "sync" fails and reports the offending object.
Actually, this brings us to another question: how do we exclude files from cp/sync?
Any updates?
Ran into this today. Any updates on this, or is there a workaround?
This is the error in v2.2.2:

"InvalidRequest: The specified copy source is larger than the maximum allowable size for a copy source: 5368709120, status code: 400"
Failed to cp a 6.3 GB file from one bucket to another:

"EntityTooLarge: Your proposed upload exceeds the maximum allowed object size"