intel-cloud / cosbench

a benchmark tool for cloud object storage service

Benchmarking Large File Sizes #296

Open rrogrs79 opened 8 years ago

rrogrs79 commented 8 years ago

I was wondering if anyone has had success benchmarking workloads that use larger file sizes (50GB+)? I'm able to successfully write the files using COSBench, but the tool does not report real-time statistics, and once the workload has finished, the job "hangs" in the running stage, so you never get an aggregate total. Any suggestions would be great! I've tested the exact same workload file with smaller file sizes and everything works as anticipated, so it's only the larger file sizes that are causing the issue. Thanks!
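
For reference, the workload is shaped roughly like the sketch below (endpoint, credentials, prefixes, and object counts here are placeholders, not my actual values; the point is the 50GB object size):

<?xml version="1.0" encoding="UTF-8" ?>
<workload name="large-object-write" description="sketch: single write stage with 50GB objects">
  <!-- placeholder S3 endpoint and credentials -->
  <storage type="s3" config="accesskey=KEY;secretkey=SECRET;endpoint=http://s3.example.local" />
  <workflow>
    <workstage name="init">
      <!-- create one test container -->
      <work type="init" workers="1" config="cprefix=largeobj;containers=r(1,1)" />
    </workstage>
    <workstage name="write">
      <!-- write a handful of 50GB objects -->
      <work name="write-50g" workers="2" totalOps="4">
        <operation type="write" ratio="100" division="container" config="cprefix=largeobj;containers=c(1);objects=r(1,4);sizes=c(50)GB" />
      </work>
    </workstage>
  </workflow>
</workload>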

ywang19 commented 8 years ago

From my previous experience on testing 10GB object files, setting "chunked=True" in write operations should help.
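
Something along these lines in the write operation should do it (the prefixes, ranges, and size below are only illustrative):

<!-- write operation with chunked transfer enabled -->
<operation type="write" ratio="100" division="container" config="cprefix=largeobj;containers=c(1);objects=r(1,4);chunked=true;sizes=c(10)GB" />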

rrogrs79 commented 8 years ago

Thanks for the response! I'll give that a try.

zhijunshi commented 4 years ago

> From my previous experience on testing 10GB object files, setting "chunked=True" in write operations should help.

Hi, I added "chunked=true" to the write operations, then submitted the file and encountered errors in COSBench, such as: "Caused by: AmazonS3Exception: Status Code: 411, AWS Service: Amazon S3, AWS Request ID: tx000000000000000000991-005f040c22-2756fb-default, AWS Error Code: MissingContentLength, AWS Error Message: null, S3 Extended Request ID: 2756fb-default-default". My s3-config-file.xml: <?xml version="1.0" encoding="UTF-8" ?>

zhijunshi commented 4 years ago

<operation type="write" ratio="100" division="container" config="cprefix=testwr64k;oprefix=w-500m-;containers=r(7,10);objects=r(1,100);chunked=True;sizes=c(500)MB" />