RauliRuohonen opened this issue 3 years ago
Thanks for reporting.
You seem to have pinpointed the exact problem already - would you like to provide a fix in a PR, ideally with a test showing that it works?
Unfortunately this isn't really a priority for me and I don't have the time for it. Using s3fs instead of boto3, files smaller than 5 MiB, and ACLs at the same time is a really unlikely corner case for me. The only reason I even noticed this issue was that I was testing that ACLs work with s3fs too instead of just boto3, and my test file consisted of the string "foo" instead of something even remotely realistic...
You can always set the ACL in a separate call with chmod, as a workaround.
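For reference, a minimal sketch of that workaround (bucket and key names are placeholders, default credentials assumed):

import s3fs

fs = s3fs.S3FileSystem()
path = "my-bucket/some/key.txt"  # placeholder path
with fs.open(path, "w") as f:
    f.write("contents")
# Set the ACL in a separate call after the write has completed
fs.chmod(path, acl="public-read")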
I too seem to be experiencing this issue.
openFile = s3.open(file_path, 'w', acl=acl)
I've got acl='public-read', yet I get an AccessDenied code when I attempt to visit the URL.
I've also tried chmod with s3.chmod(file_path, acl), but that doesn't appear to work either.
It worked when I did it manually through the console or CLI, e.g. aws s3api get-object-acl --bucket xxx --key yyy.
I even tried using boto3:
def chmod(bucket, key, acl='public-read'):
    import boto3
    s3client = boto3.resource('s3')
    obj = s3client.Bucket(bucket).Object(key)
    return obj.Acl().put(ACL=acl)
Which didn't work either, so it made me think it was a permissions issue.
I thought that I was missing s3:PutObjectAcl from the policy, but I checked that it was present and used the policy simulator to test the permission.
I'm really at a bit of a loss, nothing is throwing up any errors.
I have added the "help wanted" label to this issue, because I think it should be relatively easy to fix for someone who has a little time. Actually, I just had a look, and I believe the following does it, if someone wouldn't mind testing:
--- a/s3fs/core.py
+++ b/s3fs/core.py
@@ -1994,6 +1994,7 @@ class S3File(AbstractBufferedFile):
Key=self.key,
Bucket=self.bucket,
Body=data,
+ ACL=self.acl,
**self.kwargs,
)
else:
I made this change and tried it out; it didn't make any difference for me, but equally it didn't fail or kick up any errors either.
So instead, as a workaround, I opted to go with a presigned URL.
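Presumably something along these lines with boto3 (a sketch with placeholder bucket/key names; the commenter's actual code isn't shown):

import boto3

s3 = boto3.client("s3")
# Hand out a time-limited URL instead of making the object itself public
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "some/key.txt"},  # placeholders
    ExpiresIn=3600,  # seconds
)
print(url)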
This issue killed my s3fs use case. I wanted to use it for its nice ability to upload a directory recursively with S3FileSystem.put(..., recursive=True), as recommended on SO: https://stackoverflow.com/a/69375738/8534196
However, these are small files and I need to specify an ACL for these objects. The suggested chmod workaround also doesn't work, since chmod does not take a recursive flag.
Maybe this is too niche, but it would be nice to get a fix in. I will try to contribute one if I find the time.
@AndrewEckart, definitely not too niche! I'll try to get to it soon.
@AndrewEckart: doing fs.put(..., ACL="public-read") appears to work as intended. What exactly did you try, and what is not working?
I made #679 for recursive chmod, you can use this to go back over previously uploaded files.
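For reference, a minimal sketch of the kind of call described above, relying on the maintainer's comment that the extra keyword is honoured by put (local directory and bucket names are placeholders):

import s3fs

fs = s3fs.S3FileSystem()
# Recursively upload a local directory, requesting a public-read ACL on each object
fs.put("local_dir/", "my-bucket/some/prefix/", recursive=True, ACL="public-read")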
What happened:
Writing a small file via s3fs does not obey the acl parameter (the resulting owner is incorrect), whereas uploading the same content with boto3 does.
What you expected to happen:
For the acl parameter to work regardless of object size.
Minimal Complete Verifiable Example:
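A minimal sketch of the kind of reproduction described above (bucket and key names are placeholders; boto3 is used only to inspect the resulting ACL):

import boto3
import s3fs

fs = s3fs.S3FileSystem()
path = "my-bucket/acl-test.txt"  # placeholder bucket/key

# A tiny write, well under the 5 MiB multipart threshold, so the single-part upload path is used
with fs.open(path, "w", acl="public-read") as f:
    f.write("foo")

# Inspect the ACL that actually ended up on the object
print(boto3.client("s3").get_object_acl(Bucket="my-bucket", Key="acl-test.txt"))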
Anything else we need to know?:
I tested this with s3fs 0.4.2 because I can't currently install 0.5.x (dependency issues), but I suspect it is broken even on current master, because it's probably this if-statement causing the use of the single-part code path that does not use self.acl the way the multi-part code path does.
Environment: