davidp1404 opened this issue 1 year ago
Confirmed that this is a problem. It is a regression from apache/jclouds@ab25fc7259ad620a4daa14c12a37cef498320ad5 that I suspect was introduced to work around strange Windows behavior. I am happy to revert these lines from FilesystemStorageStrategyImpl.putBlob:
```java
if (outputFile.exists()) {
    delete(outputFile);
}
```
Note that we should keep the exception handling. The filesystem blobstore needs many improvements; in this case, calling Files.delete(outputFile.toPath()) has more concise error propagation. Can you submit a PR?
Hello Andrew, sorry, but Java is not a language I feel comfortable with, so I'd appreciate it if you could release a corrected version of s3proxy that we could use. Thanks in advance.
I committed a partial fix to jclouds where the object will not disappear when being replaced. However, there is a related issue where the object can change while being fetched that requires further work. The symptoms are a mismatch between the expected and actual Content-Length.
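The "object will not disappear when being replaced" behavior can be sketched with an atomic rename: instead of deleting the destination and then writing it, write the new content to a temporary file in the same directory and rename it over the destination. This is not necessarily the exact jclouds patch, just a minimal illustration of the technique; the class name and paths are hypothetical.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Hypothetical demo class, not part of jclouds.
public class AtomicReplaceDemo {
    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("blobstore-demo");
        Path blob = dir.resolve("key");
        Files.write(blob, "old contents".getBytes());

        // Write the replacement to a temp file in the same directory,
        // then rename it over the destination. On POSIX filesystems,
        // rename atomically replaces the target, so a concurrent reader
        // sees either the old content or the new content, never a
        // missing file.
        Path tmp = Files.createTempFile(dir, "key", ".part");
        Files.write(tmp, "new contents".getBytes());
        Files.move(tmp, blob,
                StandardCopyOption.REPLACE_EXISTING,
                StandardCopyOption.ATOMIC_MOVE);

        System.out.println(new String(Files.readAllBytes(blob)));
        // prints "new contents"
    }
}
```

Note that this closes only the replace window; as mentioned above, a reader that opens the file before a replace can still observe a Content-Length mismatch while streaming, which needs separate handling.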
Thanks very much @gaul, looking forward to the new release of s3proxy including this fix.
Still thinking about this in apache/jclouds#165.
Hello, I am using the latest s3proxy image with the filesystem backend in our development environment. In our use case we have concurrent access to blobs, but when we modify blobs we eventually see s3proxy report "NoSuchKey when calling the GetObject operation". This is related to the old issue reported in https://issues.apache.org/jira/browse/JCLOUDS-835, which seemed to have been solved years ago, but the current behavior appears to break the S3 guarantee that "a mutating operation like write or overwrite should succeed and expose the new object or fail and retain the old object." Is there an explanation for why the filesystem backend doesn't implement atomic behavior? The issue can be reproduced with this code:
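(The original reproduction script is not captured here. The underlying race can be illustrated with a sketch like the following, which shows the window a delete-then-write overwrite opens; the class name and paths are hypothetical.)

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical demo class illustrating the NoSuchKey window.
public class DeleteThenWriteWindow {
    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("window-demo");
        Path blob = dir.resolve("key");
        Files.write(blob, "v1".getBytes());

        // Overwrite implemented as delete-then-write:
        Files.delete(blob);
        // A concurrent GetObject arriving at this point finds no file
        // at all, which the S3 layer surfaces as NoSuchKey.
        boolean windowObserved = Files.notExists(blob);
        Files.write(blob, "v2".getBytes());

        System.out.println("missing during overwrite: " + windowObserved);
        // prints "missing during overwrite: true"
    }
}
```

In the real setup the delete and the read happen in separate processes (the s3proxy writer and a concurrent reader), so the window is hit only occasionally, which is why the failure appears eventually rather than on every overwrite.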
Thanks in advance!