fastlane-community / fastlane-plugin-s3

fastlane plugin to upload IPA or APK to AWS S3 by @joshdholtz
MIT License

AWS S3 access denied error when trying to push a build to S3 #76

Open ram-nadella opened 5 years ago

ram-nadella commented 5 years ago

Hi,

Thank you for creating and maintaining this plugin.

I've managed to get this plugin working using my personal AWS credentials to get an iOS app build uploaded to S3.

We're working on getting this set up in CI (CircleCI) and would like to create a dedicated IAM user for CI with the bare minimum AWS permissions needed to upload builds to S3. Before we get this into CI, I am testing with the credentials on my own machine, so no CI-related factors are at play here.

I am running into an Aws::S3::Errors::AccessDenied: [!] Access Denied error after a few attempts at setting the right permissions on the new IAM user. I wanted to share what I have and ask the community for help finding S3 permissions that work.

We have a bucket dedicated to builds, let's call it bucket-name, and the permissions I've tried are as follows, based on this S3 help doc:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::bucket-name"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "s3:*Object",
            "Resource": "arn:aws:s3:::bucket-name/*"
        }
    ]
}

I was still getting the access denied error, so I expanded the permissions to allow the client to list buckets (as per AWS docs):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::bucket-name"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "s3:*Object",
            "Resource": "arn:aws:s3:::bucket-name/*"
        },
        {
            "Sid": "VisualEditor2",
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor3",
            "Effect": "Allow",
            "Action": "s3:GetBucketLocation",
            "Resource": "*"
        }
    ]
}
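(One subtlety with these action wildcards: they glob against the literal action name, so s3:*Object does not match s3:PutObjectAcl, which matters if uploads also set a canned ACL such as public-read. A quick stdlib sketch of that matching behavior, using Ruby's File.fnmatch as a rough stand-in for IAM's matcher — an approximation, not IAM's actual implementation:)

```ruby
# IAM action wildcards glob against the literal action name, much like
# shell globs. File.fnmatch is used here as a rough stand-in for IAM's
# matcher (an approximation, not IAM's actual implementation).
pattern = "s3:*Object"

["s3:PutObject", "s3:GetObject", "s3:DeleteObject", "s3:PutObjectAcl"].each do |action|
  puts "#{pattern} matches #{action}? #{File.fnmatch(pattern, action)}"
end
# s3:PutObjectAcl is not matched, so an upload that also sets a canned
# ACL (e.g. public-read) can be denied even with s3:*Object allowed.
```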

But I am still getting the same error:

Aws::S3::Errors::AccessDenied: [!] Access Denied

Any help would be much appreciated!

Environment:

$ ruby --version
ruby 2.6.3p62 (2019-04-16 revision 67580) [x86_64-darwin18]
$ bundle list | grep aws
  * aws-eventstream (1.0.3)
  * aws-sdk (2.11.292)
  * aws-sdk-core (2.11.292)
  * aws-sdk-resources (2.11.292)
  * aws-sigv4 (1.1.0)
  * fastlane-plugin-aws_s3 (1.6.0)
$ bundle list | grep fastlane
  * commander-fastlane (4.4.6)
  * fastlane (2.125.2)
  * fastlane-plugin-aws_s3 (1.6.0)
$
matthewweldon commented 5 years ago

this is also happening to me: Aws::S3::Errors::AccessDenied: [!] Access Denied

Here's my action

aws_s3(
  access_key: ENV["S3_ACCESS_KEY"],
  secret_access_key: ENV["S3_SECRET_ACCESS_KEY"],
  bucket: ENV["S3_BUCKET"],
  region: "ca-central-1",
  server_side_encryption: "AES256",
  upload_metadata: true
)

I've triple-checked all of those environment variables and can upload files directly with the same credentials; not sure where to go from here.

matthewweldon commented 5 years ago

Solved my issue: I had to specify a less-public ACL to match the custom default ACL on our bucket. For me it was adding the following to the action in my Fastfile: acl: 'bucket-owner-full-control',
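For reference, the action from my earlier comment with that ACL added (the bucket, region, and encryption values are just what our setup uses — substitute your own):

```
aws_s3(
  access_key: ENV["S3_ACCESS_KEY"],
  secret_access_key: ENV["S3_SECRET_ACCESS_KEY"],
  bucket: ENV["S3_BUCKET"],
  region: "ca-central-1",
  server_side_encryption: "AES256",
  upload_metadata: true,
  acl: "bucket-owner-full-control" # match the bucket's ACL configuration
)
```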

mattlorimor commented 1 week ago

A similar issue appears to occur with S3 buckets that have ACLs disabled. In that case, no object-level ACLs can be applied at all.

Looking at the code, an ACL argument always appears to be included in the functions that write the objects:

def self.upload_file(s3_client, bucket_name, app_directory, file_name, file_data, acl, server_side_encryption, download_endpoint, download_endpoint_replacement_regex)
  if app_directory
    file_name = "#{app_directory}/#{file_name}"
  end

  bucket = Aws::S3::Bucket.new(bucket_name, client: s3_client)
  details = {
    acl: acl,
    key: file_name,
    body: file_data,
    content_type: MIME::Types.type_for(File.extname(file_name)).first.to_s
  }
  details = details.merge(server_side_encryption: server_side_encryption) if server_side_encryption.length > 0
  obj = bucket.put_object(details)
And the acl argument appears to default to public-read. That ACL is then attached to the PutObject call, which fails because the AWS API rejects PutObject calls with x-amz-acl set when they target a bucket that has ACLs disabled. When ACLs are disabled on the bucket, the call that writes the object must either set bucket-owner-full-control as the ACL or not specify one at all:

If the bucket that you're uploading objects to uses the bucket owner enforced setting for S3 Object Ownership, ACLs are disabled and no longer affect permissions. Buckets that use this setting only accept PUT requests that don't specify an ACL or PUT requests that specify bucket owner full control ACLs, such as the bucket-owner-full-control canned ACL or an equivalent form of this ACL expressed in the XML format. PUT requests that contain other ACLs (for example, custom grants to certain AWS accounts) fail and return a 400 error with the error code AccessControlListNotSupported. For more information, see Controlling ownership of objects and disabling ACLs in the Amazon S3 User Guide.

I'd argue something in this plugin's code needs to change to reflect the current default of S3 bucket ACL configuration. I probably wouldn't go so far as to always change the default to bucket-owner-full-control, or to default to passing no ACL to the PutObject call at all, mostly due to backward-compatibility concerns. Based on what I see, it might be possible to check the ACL status of the bucket and conditionally override the passed-in ACL with bucket-owner-full-control or an empty string.
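One possible shape for that conditional override, sketched as a pure helper. The "BucketOwnerEnforced" value comes from S3's Object Ownership setting (the v3 SDK exposes it via get_bucket_ownership_controls); the helper name and the nil-means-send-no-ACL convention are assumptions of mine, not plugin code:

```ruby
# Hypothetical helper: decide which ACL (if any) to pass to put_object,
# given the bucket's Object Ownership setting. Returning nil means
# "do not send an x-amz-acl header at all".
def effective_acl(object_ownership, requested_acl)
  if object_ownership == "BucketOwnerEnforced"
    # ACLs are disabled on the bucket: only bucket-owner-full-control
    # (or no ACL) is accepted, so drop any other requested value.
    requested_acl == "bucket-owner-full-control" ? requested_acl : nil
  else
    # ACLs are still in effect; honor whatever the caller asked for.
    requested_acl
  end
end

puts effective_acl("BucketOwnerEnforced", "public-read").inspect  # nil
puts effective_acl("BucketOwnerPreferred", "public-read").inspect # "public-read"
```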