rrogrs79 opened 8 years ago
From my previous experience on testing 10GB object files, setting "chunked=True" in write operations should help.
Thanks for the response! I'll give that a try.
Hi, I added "chunked=true" to the write operations, then submitted the workload and encountered errors in COSBench, such as: "Caused by: AmazonS3Exception: Status Code: 411, AWS Service: Amazon S3, AWS Request ID: tx000000000000000000991-005f040c22-2756fb-default, AWS Error Code: MissingContentLength, AWS Error Message: null, S3 Extended Request ID: 2756fb-default-default". My s3-config-file.xml:

<?xml version="1.0" encoding="UTF-8" ?>
<operation type="write" ratio="100" division="container" config="cprefix=testwr64k;oprefix=w-500m-;containers=r(7,10);objects=r(1,100);chunked=True;sizes=c(500)MB" />
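Not from the thread, but a hedged note on what HTTP 411 / MissingContentLength means here: chunked transfer encoding sends the body as length-prefixed chunks and omits the Content-Length header entirely, and an S3 gateway that requires Content-Length on PUT will reject such a request with 411. The sketch below (plain Python, not COSBench code) just illustrates the wire format and the header difference:

```python
# Hedged illustration of HTTP/1.1 chunked transfer encoding, to show
# why a server that requires Content-Length can answer 411
# (MissingContentLength): a chunked request carries no Content-Length.

def chunked_encode(body: bytes, chunk_size: int = 4) -> bytes:
    """Encode a body using HTTP/1.1 chunked transfer encoding."""
    out = b""
    for i in range(0, len(body), chunk_size):
        chunk = body[i:i + chunk_size]
        # each chunk: hex length, CRLF, data, CRLF
        out += f"{len(chunk):x}\r\n".encode() + chunk + b"\r\n"
    return out + b"0\r\n\r\n"  # zero-length chunk terminates the stream

plain_headers = {"Content-Length": "11"}            # ordinary PUT
chunked_headers = {"Transfer-Encoding": "chunked"}  # no Content-Length at all

print(chunked_encode(b"hello world"))
```

So one likely workaround, if the gateway behind this endpoint insists on Content-Length, is to drop chunked=True from the operation config above so the S3 client sends the object with an ordinary Content-Length header instead.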
I was wondering if anyone has had success benchmarking workloads with larger file sizes (50GB+). I'm able to write the files successfully using COSbench, but the tool does not report realtime statistics, and once the workload has finished the job "hangs" in the running stage, so you never get an aggregate total. I have tested the exact same workload file with smaller file sizes and everything works as anticipated, so it's only the larger file sizes that cause the issue. Any suggestions would be great! Thanks!