My guess: it happens because of the cleanup in S3. With multiple locals running at once, each one might delete the other locals' result files.
The scenario:
set -o verbose
java -jar dsp1-1.0-SNAPSHOT-jar-with-dependencies.jar tweets.txt first.html 1 &
sleep 120
for i in {1..10}
do
java -jar dsp1-1.0-SNAPSHOT-jar-with-dependencies.jar tweets.txt tweets$i.html 1 &
done
sleep 30
java -jar dsp1-1.0-SNAPSHOT-jar-with-dependencies.jar tweets.txt last.html 1 terminate
The exception:
Deleting S3 bucket: manager-to-local-bucket
Exception in thread "pool-1-thread-3" com.amazonaws.services.s3.model.AmazonS3Exception: The specified key does not exist. (Service: Amazon S3; Status Code: 404; Error Code: NoSuchKey; Request ID: B0B6906472A37F3A), S3 Extended Request ID: QCE7zN/lH2T71sCYUrNOnnyA6fArdsbD6lzHyXs6UPyl+/zGwVy0lHaVO+MJVgFHj2lwlRaS8t0=
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1383)
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:902)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:607)
at com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:376)
at com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:338)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:287)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3687)
at com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:1181)
at com.bgu.dsp.awsUtils.S3Utils.getFileInputStream(S3Utils.java:186)
at com.bgu.dsp.awsUtils.S3Utils.downloadFile(S3Utils.java:211)
at com.bgu.dsp.common.protocol.managertolocal.TweetsToHtmlConverter.execute(TweetsToHtmlConverter.java:98)
at com.bgu.dsp.main.SqsLooper.run(SqsLooper.java:59)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I've cancelled it for now... I can't think of a clean solution. Maybe each local should delete only its own files, but that might be too complex and risky...
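One way to sketch the "each local deletes only its own files" idea: have every local generate a unique ID at startup and prepend it to every key it uploads, so cleanup can filter the shared bucket down to that local's own keys before deleting anything. The class and method names below (`LocalScopedKeys`, `scopedKey`, `keysToDelete`) are hypothetical, not part of the existing S3Utils code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

public class LocalScopedKeys {

    // Hypothetical naming scheme: each local prepends its own ID
    // to every key it uploads to the shared bucket.
    static String scopedKey(String localId, String fileName) {
        return localId + "/" + fileName;
    }

    // During cleanup, keep only the keys under this local's own
    // prefix instead of wiping the whole manager-to-local bucket.
    static List<String> keysToDelete(String localId, List<String> allKeys) {
        List<String> mine = new ArrayList<>();
        for (String key : allKeys) {
            if (key.startsWith(localId + "/")) {
                mine.add(key);
            }
        }
        return mine;
    }

    public static void main(String[] args) {
        // A fresh UUID per local process avoids collisions between
        // concurrently running locals.
        String me = "local-" + UUID.randomUUID();
        System.out.println(scopedKey(me, "tweets1.html"));
    }
}
```

With this scheme, the key listing passed to `keysToDelete` would come from `AmazonS3.listObjects(bucket, localId + "/")`, so even if two locals clean up at the same time, neither can touch the other's result files. The cost is that every upload/download path has to agree on the prefix convention.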