JuliaCloud / AWSS3.jl

AWS S3 Simple Storage Service interface for Julia.

upload a local file to bucket #117

Open yakir12 opened 3 years ago

yakir12 commented 3 years ago

I cannot upload a local file to a bucket without delete permissions:

 (jl_sPf54u) pkg> st
Status `/tmp/jl_sPf54u/Project.toml`
  [fbe9abb3] AWS v1.20.0
  [4f1ea46c] AWSCore v0.6.17
  [1c724243] AWSS3 v0.6.12 ⚲
  [48062228] FilePathsBase v0.8.0

julia> cp(p"a.jl", S3Path("s3://skybeetle/b.jl"))
ERROR: ArgumentError: Destination already exists: s3://skybeetle/b.jl
Stacktrace:
 [1] cp(::PosixPath, ::S3Path; force::Bool) at /home/yakir/.julia/packages/FilePathsBase/n76Ka/src/path.jl:488
 [2] cp(::PosixPath, ::S3Path) at /home/yakir/.julia/packages/FilePathsBase/n76Ka/src/path.jl:484
 [3] top-level scope at REPL[9]:1

Note that b.jl doesn't exist in the bucket.

Doing the same with newer versions results in the same error:

(jl_a04dxw) pkg> st
Status `/tmp/jl_a04dxw/Project.toml`
  [fbe9abb3] AWS v1.20.0
  [4f1ea46c] AWSCore v0.6.17
  [1c724243] AWSS3 v0.7.5
  [48062228] FilePathsBase v0.9.5

julia> cp(p"a.jl", S3Path("s3://skybeetle/b.jl"))
ERROR: ArgumentError: Destination already exists: s3://skybeetle/b.jl
Stacktrace:
 [1] cp(::PosixPath, ::S3Path; force::Bool, follow_symlinks::Bool) at /home/yakir/.julia/packages/FilePathsBase/9nTwN/src/path.jl:549
 [2] cp(::PosixPath, ::S3Path) at /home/yakir/.julia/packages/FilePathsBase/9nTwN/src/path.jl:545
 [3] top-level scope at REPL[6]:1

and if I add force=true:

julia> cp(p"a.jl", S3Path("s3://skybeetle/b.jl"), force = true)
ERROR: AccessDenied -- Access Denied
HTTP.ExceptionRequest.StatusError(403, "DELETE", "/b.jl", HTTP.Messages.Response:
"""
HTTP/1.1 403 Forbidden
x-amz-request-id: 9AAFF9C575FCB339
x-amz-id-2: CH3+hrUgDcu8YmNqlWybhce2AhXUkW7v95bYdSWgkr8eYqbSLXJeYlRyoEJZ5LTIUAb+NPOTDfg=
Content-Type: application/xml
Transfer-Encoding: chunked
Date: Mon, 30 Nov 2020 18:18:02 GMT
Server: AmazonS3

<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>9AAFF9C575FCB339</RequestId><HostId>CH3+hrUgDcu8YmNqlWybhce2AhXUkW7v95bYdSWgkr8eYqbSLXJeYlRyoEJZ5LTIUAb+NPOTDfg=</HostId></Error>""")

Stacktrace:
 [1] request(::Type{HTTP.ExceptionRequest.ExceptionLayer{HTTP.ConnectionRequest.ConnectionPoolLayer{HTTP.StreamRequest.StreamLayer{Union{}}}}}, ::HTTP.URIs.URI, ::Vararg{Any,N} where N; kw::Base.Iterators.Pairs{Symbol,Any,Tuple{Symbol,Symbol,Symbol},NamedTuple{(:iofunction, :verbose, :require_ssl_verification),Tuple{Nothing,Int64,Bool}}}) at /home/yakir/.julia/packages/HTTP/IAI92/src/ExceptionRequest.jl:22
 [2] request(::Type{HTTP.MessageRequest.MessageLayer{HTTP.ExceptionRequest.ExceptionLayer{HTTP.ConnectionRequest.ConnectionPoolLayer{HTTP.StreamRequest.StreamLayer{Union{}}}}}}, ::String, ::HTTP.URIs.URI, ::Array{Pair{SubString{String},SubString{String}},1}, ::String; http_version::VersionNumber, target::String, parent::Nothing, iofunction::Nothing, kw::Base.Iterators.Pairs{Symbol,Integer,Tuple{Symbol,Symbol},NamedTuple{(:verbose, :require_ssl_verification),Tuple{Int64,Bool}}}) at /home/yakir/.julia/packages/HTTP/IAI92/src/MessageRequest.jl:51
 [3] request(::Type{HTTP.BasicAuthRequest.BasicAuthLayer{HTTP.MessageRequest.MessageLayer{HTTP.ExceptionRequest.ExceptionLayer{HTTP.ConnectionRequest.ConnectionPoolLayer{HTTP.StreamRequest.StreamLayer{Union{}}}}}}}, ::String, ::HTTP.URIs.URI, ::Array{Pair{SubString{String},SubString{String}},1}, ::String; kw::Base.Iterators.Pairs{Symbol,Integer,Tuple{Symbol,Symbol},NamedTuple{(:verbose, :require_ssl_verification),Tuple{Int64,Bool}}}) at /home/yakir/.julia/packages/HTTP/IAI92/src/BasicAuthRequest.jl:28
 [4] macro expansion at /home/yakir/.julia/packages/AWSCore/wNWgl/src/http.jl:42 [inlined]
 [5] macro expansion at /home/yakir/.julia/packages/Retry/vS1bg/src/repeat_try.jl:192 [inlined]
 [6] http_request(::Dict{Symbol,Any}) at /home/yakir/.julia/packages/AWSCore/wNWgl/src/http.jl:20
 [7] macro expansion at /home/yakir/.julia/packages/AWSCore/wNWgl/src/AWSCore.jl:411 [inlined]
 [8] macro expansion at /home/yakir/.julia/packages/Retry/vS1bg/src/repeat_try.jl:192 [inlined]
 [9] do_request(::Dict{Symbol,Any}; return_headers::Bool) at /home/yakir/.julia/packages/AWSCore/wNWgl/src/AWSCore.jl:394
 [10] macro expansion at /home/yakir/.julia/packages/AWSS3/9NxGJ/src/AWSS3.jl:0 [inlined]
 [11] macro expansion at /home/yakir/.julia/packages/Retry/vS1bg/src/repeat_try.jl:192 [inlined]
 [12] s3(::Dict{Symbol,Any}, ::String, ::SubString{String}; headers::Dict{String,String}, path::String, query::Dict{String,String}, version::String, content::String, return_stream::Bool, return_raw::Bool, return_headers::Bool, proxy::Nothing) at /home/yakir/.julia/packages/AWSS3/9NxGJ/src/AWSS3.jl:88
 [13] #s3_delete#13 at /home/yakir/.julia/packages/AWSS3/9NxGJ/src/AWSS3.jl:258 [inlined]
 [14] s3_delete(::Dict{Symbol,Any}, ::SubString{String}, ::String) at /home/yakir/.julia/packages/AWSS3/9NxGJ/src/AWSS3.jl:258
 [15] rm(::S3Path; recursive::Bool, kwargs::Base.Iterators.Pairs{Symbol,Bool,Tuple{Symbol},NamedTuple{(:force,),Tuple{Bool}}}) at /home/yakir/.julia/packages/AWSS3/9NxGJ/src/s3path.jl:253
 [16] cp(::PosixPath, ::S3Path; force::Bool, follow_symlinks::Bool) at /home/yakir/.julia/packages/FilePathsBase/9nTwN/src/path.jl:547
 [17] top-level scope at REPL[7]:1
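For reference, the stack trace suggests that FilePathsBase's generic cp first checks whether the destination exists and, when force=true, removes it before writing. A rough sketch of that logic (inferred from the trace above, not the actual FilePathsBase source):

using FilePathsBase  # exists, rm, read, write on AbstractPath

# Rough sketch only; not the real FilePathsBase code.
function cp_sketch(src::AbstractPath, dst::AbstractPath; force::Bool=false)
    if exists(dst)                    # for an S3Path this issues a HEAD request
        force || throw(ArgumentError("Destination already exists: $dst"))
        rm(dst; force=true)           # the DELETE call that requires s3:DeleteObject
    end
    write(dst, read(src))
    return dst
end

That would be consistent with both errors above: without force, the (apparently spurious) "already exists" check trips; with force, the DELETE is denied.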
mattBrzezinski commented 3 years ago

Could you post the IAM policy for the credentials which are trying to perform this request?

yakir12 commented 3 years ago
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::skybeetle/*"
        }
    ]
}
yakir12 commented 3 years ago

I'll add that

shell> aws s3 cp a.jl s3://skybeetle/new_name
upload: ./a.jl to s3://skybeetle/new_name                         

does work, and that both use the exact same credentials.

mattBrzezinski commented 3 years ago

I can confirm this is an IAM issue with permissions. Using my master credentials I'm able to do:

using AWSS3, FilePaths
cp(p"test.txt", S3Path("s3://mattbr-test-bucket/test.txt"))  # File DNE on S3

And it works. However, creating new credentials with a limited-scope policy and doing the same thing results in:

ERROR: LoadError: 403 -- AWSException
HTTP.ExceptionRequest.StatusError(403, "HEAD", "/rm2.md", HTTP.Messages.Response:
"""
HTTP/1.1 403 Forbidden
x-amz-request-id: 5E53B1B5FA8FBCFF
x-amz-id-2: IOhOirFxe0nipNAs8gsD2sIxDaj2FvfxSgp10OP/f/LBErL28vrQjCG6SZW3wQbCdLd6n89V+pY=
Content-Type: application/xml
Date: Mon, 30 Nov 2020 18:32:50 GMT
Server: AmazonS3
Connection: close

""")

The error is being thrown from this line: https://github.com/JuliaCloud/AWSS3.jl/blob/master/src/AWSS3.jl#L222
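To poke at just that failing HEAD request, the existence check can be run on its own, e.g.:

using AWSS3, FilePathsBase

# Should return false for a missing object; with the limited policy above, the underlying
# HEAD request comes back 403 instead of 404, which is what surfaces here.
exists(S3Path("s3://mattbr-test-bucket/rm2.md"))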

mattBrzezinski commented 3 years ago

I'm able to resolve this by using the policy below. It grants PutObject alongside every List* and Get* (read) permission on all resources ("Resource": "*"). Note that you would want to narrow this scope.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetObjectVersionTagging",
                "s3:GetStorageLensConfigurationTagging",
                "s3:GetObjectAcl",
                "s3:GetBucketObjectLockConfiguration",
                "s3:GetObjectVersionAcl",
                "s3:GetBucketPolicyStatus",
                "s3:GetObjectRetention",
                "s3:GetBucketWebsite",
                "s3:GetJobTagging",
                "s3:ListJobs",
                "s3:GetObjectLegalHold",
                "s3:GetBucketNotification",
                "s3:GetReplicationConfiguration",
                "s3:ListMultipartUploadParts",
                "s3:PutObject",
                "s3:GetObject",
                "s3:DescribeJob",
                "s3:GetAnalyticsConfiguration",
                "s3:GetObjectVersionForReplication",
                "s3:GetStorageLensDashboard",
                "s3:GetLifecycleConfiguration",
                "s3:GetAccessPoint",
                "s3:GetInventoryConfiguration",
                "s3:GetBucketTagging",
                "s3:GetBucketLogging",
                "s3:ListBucketVersions",
                "s3:ListBucket",
                "s3:GetAccelerateConfiguration",
                "s3:GetBucketPolicy",
                "s3:GetEncryptionConfiguration",
                "s3:GetObjectVersionTorrent",
                "s3:GetBucketRequestPayment",
                "s3:GetAccessPointPolicyStatus",
                "s3:GetObjectTagging",
                "s3:GetMetricsConfiguration",
                "s3:GetBucketOwnershipControls",
                "s3:GetBucketPublicAccessBlock",
                "s3:ListBucketMultipartUploads",
                "s3:ListAccessPoints",
                "s3:GetBucketVersioning",
                "s3:GetBucketAcl",
                "s3:ListStorageLensConfigurations",
                "s3:GetObjectTorrent",
                "s3:GetStorageLensConfiguration",
                "s3:GetAccountPublicAccessBlock",
                "s3:ListAllMyBuckets",
                "s3:GetBucketCORS",
                "s3:GetBucketLocation",
                "s3:GetAccessPointPolicy",
                "s3:GetObjectVersion"
            ],
            "Resource": "*"
        }
    ]
}
rofinn commented 3 years ago

@mattBrzezinski Any idea why the awscli command worked for @yakir12? Does AWS.jl not grab credentials from the same location as the cli?

mattBrzezinski commented 3 years ago

> @mattBrzezinski Any idea why the awscli command worked for @yakir12? Does AWS.jl not grab credentials from the same location as the cli?

This package currently uses AWSCore.jl underneath to make the requests. However, I haven't looked at what the awscli cp command does underneath or how it gets its credentials.

AWSCore.jl (and AWS.jl) will both follow this order:

https://github.com/JuliaCloud/AWSCore.jl/blob/master/src/AWSCredentials.jl#L135-L141

ENV VARs -> ~/.aws/credentials -> ~/.aws/config -> Check if we're on ECS -> Check if we're on EC2
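To see which stage actually supplied the credentials (and which region was resolved), the config can be inspected directly; this sketch assumes AWSCore's aws_config() entry point and its Dict-based config:

using AWSCore

cfg = AWSCore.aws_config()   # walks the chain above
cfg[:creds]                  # the AWSCredentials that were picked up
cfg[:region]                 # the resolved region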

yakir12 commented 3 years ago

> I'm able to resolve this by using the policy below. It grants PutObject alongside every List* and Get* (read) permission on all resources ("Resource": "*"). Note that you would want to narrow this scope.

Hmm... it's not working for me. I changed the permission policy to what you posted (first I limited the resource to the relevant bucket, then when that didn't work I relaxed it to "*"), but I still get the exact same error messages (with or without force). Did you run the same cp(...) call?

mattBrzezinski commented 3 years ago

> I'm able to resolve this by using the policy below. It grants PutObject alongside every List* and Get* (read) permission on all resources ("Resource": "*"). Note that you would want to narrow this scope.
>
> Hmm... it's not working for me. I changed the permission policy to what you posted (first I limited the resource to the relevant bucket, then when that didn't work I relaxed it to "*"), but I still get the exact same error messages (with or without force). Did you run the same cp(...) call?

I created a user with the policy from above. I then ran the following:

using AWSS3, FilePaths

ENV["AWS_ACCESS_KEY_ID"] = "redacted"
ENV["AWS_SECRET_ACCESS_KEY"] = "redacted"

cp(p"README.md", S3Path("s3://mattbr-test-bucket/rm4.md"))

I also placed a debugging line in s3_put() to confirm it's using the credentials from the env vars I set.

yakir12 commented 3 years ago

Frustrating... I made a new policy, created a new user with that policy attached, and I still get:

julia> cp(p"README.md", S3Path("s3://skybeetle/rm4.md"))
ERROR: ArgumentError: Destination already exists: s3://skybeetle/rm4.md
Stacktrace:
 [1] cp(::PosixPath, ::S3Path; force::Bool, follow_symlinks::Bool) at /home/yakir/.julia/packages/FilePathsBase/9nTwN/src/path.jl:549
 [2] cp(::PosixPath, ::S3Path) at /home/yakir/.julia/packages/FilePathsBase/9nTwN/src/path.jl:545
 [3] top-level scope at REPL[6]:1
mattBrzezinski commented 3 years ago

> Frustrating... I made a new policy, created a new user with that policy attached, and I still get:
>
> julia> cp(p"README.md", S3Path("s3://skybeetle/rm4.md"))
> ERROR: ArgumentError: Destination already exists: s3://skybeetle/rm4.md
> Stacktrace:
>  [1] cp(::PosixPath, ::S3Path; force::Bool, follow_symlinks::Bool) at /home/yakir/.julia/packages/FilePathsBase/9nTwN/src/path.jl:549
>  [2] cp(::PosixPath, ::S3Path) at /home/yakir/.julia/packages/FilePathsBase/9nTwN/src/path.jl:545
>  [3] top-level scope at REPL[6]:1

I assume you're setting the ENV VARs for the Key Id and Secret Key? Also, does the rm4.md file exist already?

yakir12 commented 3 years ago

Yeah, I set the env vars, and no, that file doesn't already exist. But I admit I'm no AWS wizard; I might have messed something up...

mattBrzezinski commented 3 years ago

> Yeah, I set the env vars, and no, that file doesn't already exist. But I admit I'm no AWS wizard; I might have messed something up...

Have you restarted your Julia session in between these attempts? The way AWSCore works underneath, the first request you make sets a global AWSConfig, which is then reused for every request after that.

yakir12 commented 3 years ago

I have. I use ] activate --tmp... Gonna go try one more time. I'll detail everything I do. brb...

yakir12 commented 3 years ago

I'm stumped. Super simple:

  1. Create a new policy, copy-pasting your JSON.
  2. Create a new user and attach said policy.
  3. Download the CSV credentials for said user.
  4. Start a temporary environment in Julia.
  5. Add AWSS3 and FilePathsBase.
  6. Make sure ENV["AWS_ACCESS_KEY_ID"] isn't defined.
  7. Define it and the secret one.
  8. Try the cp call (spelled out below) and... Destination already exists. It doesn't.
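In code, the last few steps amount to (placeholder key values, bucket as before):

using AWSS3, FilePathsBase   # in the fresh --tmp environment

@assert !haskey(ENV, "AWS_ACCESS_KEY_ID")      # step 6
ENV["AWS_ACCESS_KEY_ID"]     = "AKIA..."       # step 7, values from the downloaded CSV
ENV["AWS_SECRET_ACCESS_KEY"] = "..."

cp(p"README.md", S3Path("s3://skybeetle/rm4.md"))   # step 8: ERROR: Destination already exists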

My only conclusion is that my bucket must have some settings that are different from your mattbr-test-bucket.

yakir12 commented 3 years ago

@mattBrzezinski could you please share the settings/properties of your bucket?

yakir12 commented 3 years ago

I'm really confused now: using credentials linked to Matt's policy, I tested all of my other buckets, and it worked with one of them. So I looked through all the settings, comparing the bucket that worked with the one that didn't, and changed a few small things so they'd be identical. Didn't help. I created a new bucket by copying the settings from the bucket that worked (there's an option for that in S3). Didn't work. Made sure the settings of the two buckets were identical (the bucket policy was initially empty, so I copied the policy of the bucket that worked). Still didn't work. I added s3:DeleteObject to Matt's policy, just to see if it solved the problem: it did not! So what is going on here?

yakir12 commented 3 years ago

OK, after some extensive testing here are my conclusions:

Using Matt's policy (and perhaps a more restrictive one would work as well), I can get it to work (without using force=true) on buckets whose region is us-east-1. If the bucket's region is not us-east-1 (I tested with eu-central-1 and eu-north-1), then it doesn't work (ERROR: ArgumentError: Destination already exists: s3://...).

So somehow the region of the bucket causes an error when trying to

cp(p"local_file", S3Path("s3://bucket_name/new_unique_name"))

I'll just create a bucket in the us-east-1 region for now, but this bug is pretty weird.
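For anyone wanting to double-check which region a bucket actually lives in from Julia, something along these lines should work with AWS.jl's generated S3 API (used later in this thread); the exact generated function name may differ between versions:

using AWS
@service S3

S3.get_bucket_location("skybeetle")   # returns the LocationConstraint (empty for us-east-1)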

yakir12 commented 3 years ago

On the subject of a more restrictive policy, the following works and is significantly more restrictive than Matt's version (I think?):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:ListBucket"
            ],
            "Resource": "arn:aws:s3:::<bucket>/*"
        }
    ]
}
mattBrzezinski commented 3 years ago

> OK, after some extensive testing here are my conclusions:
>
> Using Matt's policy (and perhaps a more restrictive one would work as well), I can get it to work (without using force=true) on buckets whose region is us-east-1. If the bucket's region is not us-east-1 (I tested with eu-central-1 and eu-north-1), then it doesn't work (ERROR: ArgumentError: Destination already exists: s3://...).
>
> So somehow the region of the bucket causes an error when trying to
>
> cp(p"local_file", S3Path("s3://bucket_name/new_unique_name"))
>
> I'll just create a bucket in the us-east-1 region for now, but this bug is pretty weird.

I have a suspicion that the cp function gets the default credentials in the default region (us-east-1) and then attempts the copy. Because the bucket is not in that region, an error is thrown, swallowed up, and this ArgumentError is returned instead.

If you set the env var AWS_DEFAULT_REGION=eu-central-1 and test with a bucket in that region, does it work?
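For example (hypothetical eu-central-1 bucket; the env var has to be set before the session's first request):

ENV["AWS_DEFAULT_REGION"] = "eu-central-1"

using AWSS3, FilePathsBase
cp(p"a.jl", S3Path("s3://my-eu-bucket/a.jl"))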

rofinn commented 3 years ago

That should also be easy to check by inspecting S3Path("s3://bucket_name/new_unique_name").config. The config is encapsulated in the path type and passed on to s3_exists, so maybe some default there is conflicting...?
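For instance (illustrative only):

using AWSS3

path = S3Path("s3://bucket_name/new_unique_name")
path.config    # whichever config (credentials and region) this path will hand to s3_exists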

mattBrzezinski commented 3 years ago

> That should also be easy to check by inspecting S3Path("s3://bucket_name/new_unique_name").config. The config is encapsulated in the path type and passed on to s3_exists, so maybe some default there is conflicting...?

Ok, I figured out how to resolve the issue. I have not looked into fixing it however. I have a bucket in us-east-1 and another in eu-central-1.

By default, cp will use us-east-1 as the region; if the bucket isn't in that region, you'll run into the ArgumentError. If you first set the env var AWS_DEFAULT_REGION={bucket_region}, it works, but then a cp into the us-east-1 bucket fails instead.

So users need to ensure their default config sets a region matching the bucket's. We should also maybe add a kwarg to cp to override the config's region.

rofinn commented 3 years ago

Should probably just specify the config during construction, because otherwise that'll create ambiguities when using cp for S3 paths in different regions. That's why tryparse and all the constructors take a config kwarg. I suppose we could also add another S3Path(::S3Path; kwargs) constructor for remaking an existing path with specific changes?
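Something like this sketch: build the destination path with an explicit config (the bucket and region are made up; AWSConfig and the config kwarg follow the patterns used later in this thread):

using AWS, AWSS3, FilePathsBase

cfg = AWSConfig()
cfg.region = "eu-central-1"                          # match the destination bucket's region
dst = S3Path("s3://my-eu-bucket/b.jl"; config=cfg)   # hypothetical bucket
cp(p"a.jl", dst)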

mattBrzezinski commented 3 years ago

> Should probably just specify the config during construction, because otherwise that'll create ambiguities when using cp for S3 paths in different regions. That's why tryparse and all the constructors take a config kwarg. I suppose we could also add another S3Path(::S3Path; kwargs) constructor for remaking an existing path with specific changes?

I'm just thinking of the case where users are working across multiple buckets in different regions, and how to handle that.

rofinn commented 3 years ago

We can add a test case, but it should just work, because the read and write operations are independent and can use different configs/credentials. I suppose if the file is too large to fit in memory you'd need to write locally first, but that would be the case anyway.
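A hedged sketch of what that cross-region copy would look like, each path carrying its own config (bucket names and regions are hypothetical):

using AWS, AWSS3

us = AWSConfig(); us.region = "us-east-1"
eu = AWSConfig(); eu.region = "eu-central-1"

src = S3Path("s3://us-bucket/data.csv"; config=us)
dst = S3Path("s3://eu-bucket/data.csv"; config=eu)
cp(src, dst)    # read with `us`, write with `eu`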

yakir12 commented 3 years ago

I can confirm that setting the env var

ENV["AWS_DEFAULT_REGION"] = "your_aws_region"

fixes this. But I can also report that adding

[default]
aws_access_key_id = ...
aws_secret_access_key = ...
region = your_aws_region

to your aws-credentials file doesn't work, although one would expect it to (see "Setting the AWS Region" here). Neither does aws_default_region work.

I thought bucket names are globally unique (at least per user); if you have the credentials and the name of the bucket, why would you also need to specify the region?

mattBrzezinski commented 3 years ago

> I can confirm that setting the env var
>
> ENV["AWS_DEFAULT_REGION"] = "your_aws_region"
>
> fixes this. But I can also report that adding
>
> [default]
> aws_access_key_id = ...
> aws_secret_access_key = ...
> region = your_aws_region
>
> to your aws-credentials file doesn't work, although one would expect it to (see "Setting the AWS Region" here). Neither does aws_default_region work.

This is because of how AWSCore.jl sets the default region: it only looks at the env var, not the configuration file. That's because the region is tied to the configuration rather than to the credentials themselves.

> I thought bucket names are globally unique (at least per user),

Bucket names are globally unique for all users on AWS.

> if you have the credentials and the name of the bucket, why would you also need to specify the region?

When making a call to S3, or any AWS service, you need to sign the request with a region as part of the Authorization header. In doing so you also match the regional URL endpoint you're calling, e.g. https://s3.us-east-1.amazonaws.com. You might be able to use something like https://s3.amazonaws.com, but I can't seem to get that to work while passing a region into signing.
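Purely as an illustration of why the region matters (these helpers are hypothetical, not AWSS3.jl internals): both the regional endpoint and the SigV4 credential scope embed it.

regional_endpoint(region) = "https://s3.$(region).amazonaws.com"
credential_scope(date, region) = "$date/$region/s3/aws4_request"   # part of the Authorization header

regional_endpoint("eu-central-1")             # "https://s3.eu-central-1.amazonaws.com"
credential_scope("20201130", "eu-central-1")  # "20201130/eu-central-1/s3/aws4_request"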

Something further downstream is causing this error. I just tested with AWS.jl: with a bucket in us-east-1 and my config's region set to eu-central-1, placing an object in the bucket succeeds with no problem.

using AWS
@service S3

aws = AWSConfig()
aws.region = "eu-central-1"
resp = S3.put_object("mattbr-test-bucket", "hello-world.txt")  # This bucket is in us-east-1
yakir12 commented 3 years ago

So I guess this is related: I can't get an object from a bucket in eu-north-1. MWE:

using AWS, AWSS3, FilePathsBase
bucket = "nicolas-cage-skyroom"
region = "eu-north-1"
config = global_aws_config(; region)
p = S3Path("s3://$bucket"; config)
file = first(readdir(p))
s3_get_file(config, bucket, file) # does not work

tb = "tmp.tar"
run(`aws s3 cp s3://$bucket/$file $tb`) # works
mattBrzezinski commented 3 years ago

> So I guess this is related: I can't get an object from a bucket in eu-north-1.

I believe this is strongly related to the permissions on the credentials making the request, as I don't seem to be having any issues myself.

But some questions to ask,

yakir12 commented 3 years ago

> related to the permissions on the credentials making the request

That is great. So I tried to rerun this after removing my .aws credential folder from home, and now

config = global_aws_config(; region)

stalls (10 minutes in). So right off the bat, I'd say that this scenario needs at least a note in the README. Namely, these two lines stall Julia:

using AWS
global_aws_config()

Which leads me to my next question (although this feels like I'm now polluting this thread; let me know if we should move this elsewhere): What is the correct way to get an object/file disregarding any preexisting local awscli configurations/credentials (which in my case, apparently, corrupt any requests I make)?

mattBrzezinski commented 3 years ago

> related to the permissions on the credentials making the request
>
> That is great. So I tried to rerun this after removing my .aws credential folder from home, and now
>
> config = global_aws_config(; region)
>
> stalls (10 minutes in). So right off the bat, I'd say that this scenario needs at least a note in the README. Namely, these two lines stall Julia:
>
> using AWS
> global_aws_config()

This is hanging because it's trying to find credentials using the other methods: it didn't find them in your environment variables or any configuration files, so it's attempting to request them from various AWS metadata services. You need some form of credentials for the requests to go through, unless these are publicly available objects, in which case you can create your own config types to avoid fetching credentials.

> Which leads me to my next question (although this feels like I'm now polluting this thread; let me know if we should move this elsewhere): What is the correct way to get an object/file disregarding any preexisting local awscli configurations/credentials (which in my case, apparently, corrupt any requests I make)?

This really depends on what your use case is. For me personally, my default credentials are full root for my personal account, which lets me easily test and prototype solutions on AWS. Then, when I'm fleshing things out into actual production-level code, I create new roles with only the minimal permissions the application needs to run. See IAM best practices for more details.

Without seeing the IAM policy on the credentials you're using, it's tough to say what you're missing to make the request succeed.