Open alexjurkiewicz opened 6 years ago
@kdaily Any news on this MR? Thanks
FYI I developed this with a colleague. It's an S3-only way of ensuring that only one person will be updating an S3 file at any one time. It's JavaScript-based. I'm not suggesting this is better than AWS supporting the requested feature, nor that it's better than using a database, but for us it was the best option for now, and it may prove useful for others.
Really a needed feature. Still waiting for a solution to this.
Another year. Object storage still doesn't support mutexes, browsers still don't support PWAs, WebAssembly still has no 2.0; by now you could be convinced they're xenophobic.
Is this really something that cannot be developed in under 5 years?
it's been 5 years, and no progress
Not to bother people, but I'll provide a use case for this.
There was a timing bug in our build service which ended up making our servers build the latest code into packages while thinking it was the previous version. The worst part is that it was a release build. That glitch caused an impostor version to be uploaded to our S3 storage, and people downloaded it. A `--no-overwrite` flag would have avoided the catastrophe.
Please add the flag.
It costs Amazon money to support this, because no-overwrite means there has to be a single-threaded bottleneck checking whether the object exists.
Essentially, the request itself requires the server to check whether the object exists, without requests from other origins causing race conditions.
But they should eat this cost instead of making us route every request through our own server or worker to implement the check manually; for once, the company should offer this tiny feature.
S3 is object storage, and its selling point is infinite scalability; Amazon's refusal to offer this shows a rather poor posture from the tech giant.
Meanwhile, Amazon's pricing really says a lot about making things cheaper and fairer for everyone, although other platforms might be catching up with their unlimited-request and free-egress offerings.
@Xyncgas

> Essentially, the request itself requires the server to check whether the object exists, without requests from other origins causing race conditions.

This should be clearly stated in the docs. I think most of the time this is not an issue (at least for a good number of use cases).

> But they should eat this cost instead of making us route every request through our own server or worker to implement the check manually; for once, the company should offer this tiny feature.

I would happily eat the cost myself. The dev time spent on implementing this manually is 1000 times more than I'll ever spend on the request costs.
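The atomicity being argued about can be modelled locally. This is an illustration only: a temp directory stands in for a bucket, and `ln` (which fails atomically if its target already exists) stands in for the requested conditional put.

```shell
# Toy model: a local directory plays the bucket; link(2) is atomic,
# so `ln` either creates the key or fails because it already exists.
bucket=$(mktemp -d)
good=$(mktemp); echo "good build" > "$good"
bad=$(mktemp);  echo "impostor"   > "$bad"

put_if_none_match() {  # $1 = key, $2 = body file
  ln "$2" "$bucket/$1" 2>/dev/null
}

put_if_none_match release.tar "$good" && echo "first writer wins"
put_if_none_match release.tar "$bad"  || echo "second writer rejected"
cat "$bucket/release.tar"  # still contains "good build"
```

No client-side head-then-put sequence gives this guarantee, because another writer can slip in between the two calls; only the server can make the check and the write a single step.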
absolutely pitch-perfect AWS
Has anyone come up with a one-liner to work around this?
See comment above re: https://github.com/jfstephe/aws-s3-lock . May help?
This can be done with the `If-None-Match: *` HTTP request header.
> This can be done with the `If-None-Match: *` HTTP request header.
That's a great idea. This seems like it would be trivial to implement with a flag if that actually works...
https://aws.amazon.com/about-aws/whats-new/2024/08/amazon-s3-conditional-writes/
https://docs.aws.amazon.com/AmazonS3/latest/userguide/conditional-requests.html#conditional-writes

`aws s3api put-object --bucket amzn-s3-demo-bucket --key dir-1/my_images.tar.bz2 --body my_images.tar.bz2 --if-none-match "*"`
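Building on the command above, an untested sketch of a no-clobber upload: when the key already exists, S3 rejects the conditional put with HTTP 412 (PreconditionFailed) and the CLI exits non-zero, so the failure can be handled in a script (bucket and key names are the docs' placeholders):

```shell
# Upload only if the key does not exist yet; otherwise S3 answers with
# 412 PreconditionFailed and the CLI exits non-zero.
if aws s3api put-object \
    --bucket amzn-s3-demo-bucket \
    --key dir-1/my_images.tar.bz2 \
    --body my_images.tar.bz2 \
    --if-none-match "*"; then
  echo "uploaded"
else
  echo "object already exists, refusing to overwrite" >&2
fi
```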
Is there a plan to implement `If-None-Match` for `CopyObject` as well? Currently I am getting the following when trying to copy with that header:
<Error>
<Code>NotImplemented</Code>
<Message>A header you provided implies functionality that is not implemented</Message>
<Header>If-None-Match</Header>
<RequestId>REDACTED</RequestId>
<HostId>REDACTED</HostId>
</Error>
Okay, so while `If-None-Match` cannot be used for `CopyObject`, it can be used to complete a multipart copy with a single part, by passing `If-None-Match` to CompleteMultipartUpload.
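For anyone trying to reproduce this, a rough, untested sketch of the single-part multipart copy (all bucket and key names are placeholders; `upload-part-copy` caps a part at 5 GB, so larger objects would need multiple parts):

```shell
# 1. Start a multipart upload for the destination key.
upload_id=$(aws s3api create-multipart-upload \
  --bucket dest-bucket --key dest-key \
  --query UploadId --output text)

# 2. Copy the whole source object in as the single part.
etag=$(aws s3api upload-part-copy \
  --bucket dest-bucket --key dest-key \
  --copy-source source-bucket/source-key \
  --part-number 1 --upload-id "$upload_id" \
  --query CopyPartResult.ETag --output text)

# 3. Complete conditionally: fails with 412 if dest-key already exists.
aws s3api complete-multipart-upload \
  --bucket dest-bucket --key dest-key --upload-id "$upload_id" \
  --multipart-upload "Parts=[{ETag=$etag,PartNumber=1}]" \
  --if-none-match "*"
```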
It would be nice to have a convenience flag `--no-overwrite` for `aws s3 cp`/`mv` commands, which would check that the target destination doesn't already exist before putting a file into an S3 bucket. Of course this logic couldn't be guaranteed by the AWS API (afaik...) and is vulnerable to race conditions, etc. But it would be helpful to prevent unintentional mistakes!
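Until such a flag exists, the behaviour can be approximated with a small wrapper (untested sketch, placeholder names; as the issue notes, the check and the copy are two separate calls, so a race window remains):

```shell
src=local-file
dst_bucket=my-bucket
dst_key=path/local-file

# head-object exits non-zero when the key is absent (404), so a zero
# exit code means the destination already exists.
if aws s3api head-object --bucket "$dst_bucket" --key "$dst_key" >/dev/null 2>&1; then
  echo "s3://$dst_bucket/$dst_key already exists, not overwriting" >&2
  exit 1
fi

# Race window here: another writer could create the key before cp runs.
aws s3 cp "$src" "s3://$dst_bucket/$dst_key"
```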