Closed · bosher closed this issue 6 years ago
Try adding the parameter -Dcom.amazonaws.services.s3.disableGetObjectMD5Validation=true
to the java invocation in cosbench-start.sh?
/usr/bin/nohup java -Dcom.amazonaws.services.s3.disableGetObjectMD5Validation=true -Dcosbench.tomcat.config=$TOMCAT_CONFIG -server -cp main/* org.eclipse.equinox.launcher.Main -configuration $OSGI_CONFIG -console $OSGI_CONSOLE_PORT 1> $BOOT_LOG 2>&1 &
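For reference, the same switch can also be set programmatically before the AmazonS3 client is constructed, since the AWS SDK for Java (v1) reads this system property when deciding whether to skip the GET-side MD5 check. A minimal sketch, assuming nothing beyond the JDK (the class name and main method are illustrative, not part of COSBench):

```java
public class DisableMd5Check {
    // Real system property consulted by the AWS SDK for Java v1;
    // setting it to "true" disables client-side MD5 validation on GETs.
    static final String PROP =
        "com.amazonaws.services.s3.disableGetObjectMD5Validation";

    public static void main(String[] args) {
        // Equivalent to passing -D<PROP>=true on the java command line,
        // as in cosbench-start.sh above; must run before the client is built.
        System.setProperty(PROP, "true");
        System.out.println(PROP + "=" + System.getProperty(PROP));
    }
}
```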
The test passes after disabling MD5 validation. However, one purpose of the test against the S3 implementation is to verify uploaded/downloaded content via MD5, so with MD5 validation disabled we cannot verify content.
@bosher check the object's ETag. If the ETag is not generated from the MD5 of the object (for example, for multipart uploads), the MD5 check cannot succeed.
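The distinction above can be sketched in a few lines: an ETag that is a plain whole-object MD5 is 32 hex characters, whereas multipart-upload ETags carry a "-&lt;partCount&gt;" suffix and are not the MD5 of the whole object, which is why a client-side MD5 check against them fails. A minimal sketch, assuming only the JDK (class and method names are my own, not COSBench or SDK APIs):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class EtagCheck {
    // A plain-MD5 ETag is exactly 32 hex characters; multipart ETags
    // look like "<hex>-<partCount>" and fail this test.
    static boolean isPlainMd5Etag(String etag) {
        return etag.matches("[0-9a-fA-F]{32}");
    }

    // Hex-encoded MD5 of a byte array, for comparing against a plain ETag.
    static String md5Hex(byte[] data) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5").digest(data);
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(isPlainMd5Etag("9e107d9d372bb6826bd81d3542a419d6"));   // plain MD5
        System.out.println(isPlainMd5Etag("d41d8cd98f00b204e9800998ecf8427e-2")); // multipart
        System.out.println(md5Hex(
            "The quick brown fox jumps over the lazy dog"
                .getBytes(StandardCharsets.UTF_8)));
    }
}
```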
The ETag generated by Amazon is correct in all cases.
I have found that the problem is fixed when I build from master, so it only affects the 0.4.3.c4 release.
@bosher sorry to interrupt, can you tell me how to build this project from master?
Please see the build instructions: https://github.com/intel-cloud/cosbench/blob/master/BUILD.md
The following simple workload returns an error on the read step when pointing to http://s3-us-west-2.amazonaws.com:
...
...
The error in the log:

2017-03-10 16:23:26,927 [ERROR] [ErrorStatistics] - error code: N/A occurred 9 times, fail to operate: mybk1/myobjects1, mybk1/myobjects1, mybk1/myobjects1, mybk1/myobjects1, mybk1/myobjects1, mybk1/myobjects1, mybk1/myobjects1, mybk1/myobjects1, mybk1/myobjects1
com.amazonaws.AmazonClientException: Unable to verify integrity of data download. Client calculated content hash didn't match hash calculated by Amazon S3. The data may be corrupt. expectedHash: S��>@Vo�a>{� digest: ���� ���
    at com.amazonaws.services.s3.internal.DigestValidationInputStream.validateMD5Digest(DigestValidationInputStream.java:79)
    at com.amazonaws.services.s3.internal.DigestValidationInputStream.read(DigestValidationInputStream.java:61)
    at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:72)
    at com.amazonaws.services.s3.model.S3ObjectInputStream.read(S3ObjectInputStream.java:155)
    at com.amazonaws.services.s3.model.S3ObjectInputStream.read(S3ObjectInputStream.java:147)
    at com.intel.cosbench.driver.operator.Reader.copyLarge(Reader.java:120)
    at com.intel.cosbench.driver.operator.Reader.doRead(Reader.java:92)
    at com.intel.cosbench.driver.operator.Reader.operate(Reader.java:69)
    at com.intel.cosbench.driver.operator.AbstractOperator.operate(AbstractOperator.java:76)
    at com.intel.cosbench.driver.agent.WorkAgent.performOperation(WorkAgent.java:197)
    at com.intel.cosbench.driver.agent.WorkAgent.doWork(WorkAgent.java:177)
    at com.intel.cosbench.driver.agent.WorkAgent.execute(WorkAgent.java:134)
    at com.intel.cosbench.driver.agent.AbstractAgent.call(AbstractAgent.java:44)
    at com.intel.cosbench.driver.agent.AbstractAgent.call(AbstractAgent.java:1)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
The workload works fine if I reduce the number of workers to 1 in the read step.
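For anyone trying to reproduce this, a minimal read-only workstage in COSBench's workload XML might look like the sketch below; the workers attribute on the work element is the knob being reduced to 1 here. This is an illustrative fragment only, not the reporter's elided workload (container/object selectors and runtime are assumptions):

```xml
<workstage name="read">
  <!-- workers="1" avoids the concurrent-read failure described above;
       raising it reproduces the MD5-validation errors -->
  <work name="read" workers="1" runtime="60">
    <operation type="read" ratio="100"
               config="containers=c(1);objects=u(1,10)" />
  </work>
</workstage>
```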