foo-software / lighthouse-check-action

GitHub Action for running @GoogleChromeLabs Lighthouse audits with all the bells and whistles 🔔 Multiple audits, Slack notifications, and more!
https://github.com/marketplace/actions/lighthouse-check
MIT License
484 stars 24 forks

S3 upload not working #26

Closed iler closed 4 years ago

iler commented 4 years ago

We have tried setting up this action in our CI flow but have not gotten the S3 integration working. If we comment out the AWS settings in the workflow .yml, everything works as it should. But when we try to use the S3 integration, it fails with the following error message:

lighthouse-check:
 AccessDenied: Access Denied
    at Request.extractError (/home/runner/work/_actions/foo-software/lighthouse-check-action/v1.0.12/node_modules/aws-sdk/lib/services/s3.js:835:35)
    at Request.callListeners (/home/runner/work/_actions/foo-software/lighthouse-check-action/v1.0.12/node_modules/aws-sdk/lib/sequential_executor.js:106:20)
    at Request.emit (/home/runner/work/_actions/foo-software/lighthouse-check-action/v1.0.12/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
    at Request.emit (/home/runner/work/_actions/foo-software/lighthouse-check-action/v1.0.12/node_modules/aws-sdk/lib/request.js:683:14)
    at Request.transition (/home/runner/work/_actions/foo-software/lighthouse-check-action/v1.0.12/node_modules/aws-sdk/lib/request.js:22:10)
    at AcceptorStateMachine.runTo (/home/runner/work/_actions/foo-software/lighthouse-check-action/v1.0.12/node_modules/aws-sdk/lib/state_machine.js:14:12)
    at /home/runner/work/_actions/foo-software/lighthouse-check-action/v1.0.12/node_modules/aws-sdk/lib/state_machine.js:26:10
    at Request.<anonymous> (/home/runner/work/_actions/foo-software/lighthouse-check-action/v1.0.12/node_modules/aws-sdk/lib/request.js:38:9)
    at Request.<anonymous> (/home/runner/work/_actions/foo-software/lighthouse-check-action/v1.0.12/node_modules/aws-sdk/lib/request.js:685:12)
    at Request.callListeners (/home/runner/work/_actions/foo-software/lighthouse-check-action/v1.0.12/node_modules/aws-sdk/lib/sequential_executor.js:116:18) {
  message: 'Access Denied',
  code: 'AccessDenied',
  region: null,
  time: 2020-06-01T10:50:10.084Z,
  requestId: 'id_here',
  extendedRequestId: 'id_here',
  cfId: undefined,
  statusCode: 403,
  retryable: false,
  retryDelay: 48.0328966970021
}

We have tested the S3 credentials directly from terminal and have succeeded in uploading to the bucket and listing the contents of the bucket. Any pointers where we should go from here?

adamhenson commented 4 years ago

Hi @iler - sorry for the troubles. The error message does seem to point to the credentials. I can confirm that it's working correctly for me with this setup. I would double-check your configuration and how you're passing in secrets - I'm using GitHub secrets in this repo. Can you paste your code in a comment here?

If you are using GitHub secrets, I'd double-check that they're set in the repo that runs the action. If they're set at the organization level, make sure that repo has permission to access them (I'm actually not sure where that is determined).

iler commented 4 years ago

Hi @adamhenson - no worries! Happy that you can help us :) We are using the following configuration together with GitHub secrets. We have configured the secrets in the repo where the action is set up, so that should be OK. We have double-checked the credentials and tested manually that they work. And to be sure, we also gave the user all permissions for that S3 bucket in AWS while debugging this.

name: Test Lighthouse Check
on:
  push:
    branches:
      - master

jobs:
  lighthouse-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2.1.1
      - run: mkdir /tmp/artifacts
      - name: Run Lighthouse
        uses: foo-software/lighthouse-check-action@v1.0.13
        with:
          accessToken: ${{ secrets.LIGHTHOUSE_CHECK_GITHUB_ACCESS_TOKEN }}
          author: ${{ github.actor }}
          awsAccessKeyId: ${{ secrets.LIGHTHOUSE_CHECK_AWS_ACCESS_KEY_ID }}
          awsBucket: ${{ secrets.LIGHTHOUSE_CHECK_AWS_BUCKET }}
          awsRegion: ${{ secrets.LIGHTHOUSE_CHECK_AWS_REGION }}
          awsSecretAccessKey: ${{ secrets.LIGHTHOUSE_CHECK_AWS_SECRET_ACCESS_KEY }}
          branch: ${{ github.ref }}
          outputDirectory: /tmp/artifacts
          urls: 'list_of_urls'
          sha: ${{ github.sha }}
          slackWebhookUrl: ${{ secrets.LIGHTHOUSE_CHECK_WEBHOOK_URL }}
      - name: Upload artifacts
        uses: actions/upload-artifact@v2
        with:
          name: Lighthouse reports
          path: /tmp/artifacts

And here is a screenshot of the repository secrets: https://take.ms/3YwcW.

adamhenson commented 4 years ago

No problem @iler. That looks good to me. I'd just double-check that you're populating the correct access key ID and secret access key per the AWS docs. I'm guessing you've already confirmed that.

The last thing you can do is run the same packages locally, to rule out CI. Under the hood we're using @foo-software/lighthouse-persist. You could try running it locally with the same credentials. If you're successful with it, we know the issue is more about the GitHub Action setup. If you can reproduce the error locally, you can dig deeper into the AWS setup.

I'd recommend doing the following to test locally:

  1. mkdir lighthouse-test && cd lighthouse-test to create a directory for testing.
  2. npm init to set up the project.
  3. npm install @foo-software/lighthouse-persist
  4. Create a file named index.js with the contents below. Populate the configuration with the same creds you're using here.
  5. node index.js
  6. Post the output here or summarize for us what you see (if successful, it will hang for about 30 seconds before eventually printing output).

lighthouse-test/index.js


const lighthousePersist = require('@foo-software/lighthouse-persist').default;

(async () => {
  const { report, result } = await lighthousePersist({
    url: 'https://www.google.com',
    awsAccessKeyId: 'your-access-key-id',
    awsBucket: 'your-bucket',
    awsRegion: 'your-bucket-region',
    awsSecretAccessKey: 'your-secret-access-key',
  });

  console.log({ report, result });
})();

iler commented 4 years ago

@adamhenson I ran this code and got the following output:

(node:46316) UnhandledPromiseRejectionWarning: AccessDenied: Access Denied
    at Request.extractError (/Users/iler/workspace/lighthouse-test/node_modules/aws-sdk/lib/services/s3.js:831:35)
    at Request.callListeners (/Users/iler/workspace/lighthouse-test/node_modules/aws-sdk/lib/sequential_executor.js:106:20)
    at Request.emit (/Users/iler/workspace/lighthouse-test/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
    at Request.emit (/Users/iler/workspace/lighthouse-test/node_modules/aws-sdk/lib/request.js:683:14)
    at Request.transition (/Users/iler/workspace/lighthouse-test/node_modules/aws-sdk/lib/request.js:22:10)
    at AcceptorStateMachine.runTo (/Users/iler/workspace/lighthouse-test/node_modules/aws-sdk/lib/state_machine.js:14:12)
    at /Users/iler/workspace/lighthouse-test/node_modules/aws-sdk/lib/state_machine.js:26:10
    at Request.<anonymous> (/Users/iler/workspace/lighthouse-test/node_modules/aws-sdk/lib/request.js:38:9)
    at Request.<anonymous> (/Users/iler/workspace/lighthouse-test/node_modules/aws-sdk/lib/request.js:685:12)
    at Request.callListeners (/Users/iler/workspace/lighthouse-test/node_modules/aws-sdk/lib/sequential_executor.js:116:18)
(node:46316) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:46316) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

I'm now going to double and triple check the credentials for this and will comment here once done.

iler commented 4 years ago

@adamhenson so it seems the problem is with the credentials. I tested with my own credentials and the upload works correctly. So your implementation works, and we just need to get the access rights set up correctly. Thanks for your help!

P.S. I did not close this thread as I don't know how you normally handle these cases.

adamhenson commented 4 years ago

Thanks for following up @iler. Sorry for the AWS struggles - trust me, I've had my share. For deeper debugging, perhaps you could try the aws-sdk NPM module directly 🤷‍♂️

Anyways, good luck! Closing this.

benjick commented 3 years ago

Can someone tell me what policies are needed to avoid this error? I'm currently trying with these:

{
  Version: '2012-10-17',
  Statement: [
    {
      Effect: 'Allow',
      Action: ['s3:ListBucket'],
      Resource: [`arn:aws:s3:::${bucketName}`],
    },
    {
      Effect: 'Allow',
      Action: ['s3:PutObject', 's3:GetObject', 's3:DeleteObject'],
      Resource: [`arn:aws:s3:::${bucketName}/*`],
    },
  ],
}

Edit: Seems this is the issue with my policy: https://github.com/foo-software/lighthouse-persist/blob/435c21b1792d8509369325c7b3caf44491fe8dba/src/index.js#L97

Edit2: 💡 Needed the s3:PutObjectAcl action

adamhenson commented 3 years ago

@benjick - thanks for your input. It's possible this discussion will creep beyond the scope of this project, but I'm happy to help as best I can until then. We use ACL: 'public-read', assuming that uploads will be publicly accessible, but I could see the need for customization here. What is the full error you are seeing? If you can extract an issue that relates to this project rather than user-level aws-sdk settings, please open a separate issue with a clear statement of the problem and how it relates to this project. Propose a solution... even just an abstract idea. And bonus points - open a PR after that.

benjick commented 3 years ago

@adamhenson I had the same error, but I solved it by adding s3:PutObjectAcl - hopefully someone else will find this comment if they need it in the future. Before I looked at the code, I had created a user with s3:PutObject and then set public-read on the whole bucket.

adamhenson commented 3 years ago

Great, thanks 🙌