aws-actions / configure-aws-credentials

Configure AWS credential environment variables for use in other GitHub Actions.
MIT License

Error: The security token included in the request is invalid - when AWS key/secret changes between GHA jobs #340

Closed scott-doyland-burrows closed 1 year ago

scott-doyland-burrows commented 2 years ago

Hi,

My workflow - for purposes of testing - looks like this:

name: devops
on:
  workflow_dispatch:
jobs:
  devops1:
    runs-on: ubuntu-latest
    steps:
      - uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-west-2
      - run: aws s3 ls 
      - run: sleep 60
  devops2:
    runs-on: ubuntu-latest
    needs: devops1
    steps:
      - uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-west-2
      - run: aws s3 ls

While devops1 is in the 60 second sleep, I generate a new AWS access key and secret and put these into GitHub Secrets.

When devops2 runs I get this error:

Run aws-actions/configure-aws-credentials@v1
  with:
    aws-access-key-id: ***
    aws-secret-access-key: ***
    aws-region: eu-west-2
Error: The security token included in the request is invalid.

This may seem like an odd thing to be doing, but the reason is that my actual workflow (not this test workflow) is rotating AWS access keys and pushing them to GitHub secrets. I have one AWS key that I rotate first, and then the other keys are rotated using this key. But this fails due to the error as above.

It looks like a token is left from the previous key, and the new key then fails due to this old token.

Is there a way to clear the old token?

rjeczalik commented 2 years ago

Error: The security token included in the request is invalid.

The same error happens when the GitHub OIDC provider is used with role-to-assume. When two different actions within the same job try to assume different roles, the first assume always works while every subsequent one fails with the invalid token error.

Using different session names and clearing all AWS_* env vars via $GITHUB_ENV does not help.
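
For context, the cleanup I mean looks roughly like this (a sketch, not my exact step):

      - name: Clear AWS env vars for later steps (did not help)
        run: |
          # Writing NAME= to $GITHUB_ENV gives the variable an empty
          # value in every subsequent step of the job.
          for v in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN AWS_REGION AWS_DEFAULT_REGION; do
            echo "${v}=" >> "$GITHUB_ENV"
          done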

Is calling configure-aws-credentials multiple times with different roles within the same job supported?

rsavage-nozominetworks commented 2 years ago

I was running into the same issue, but I discovered a workaround that shouldn't be necessary, yet works: I was getting the exact same error above until I unset the env vars and passed nulled ones to the next step.

(the workaround was posted as a screenshot, not reproduced here)
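
Based on that description, the workaround presumably looks something like this (a sketch; the step name, secret name, and region are placeholders, not from the screenshot):

      - name: Configure AWS credentials (second invocation)
        uses: aws-actions/configure-aws-credentials@v1
        env:
          # Explicitly null the variables exported by the previous
          # configure-aws-credentials step so they are not reused.
          AWS_ACCESS_KEY_ID:
          AWS_SECRET_ACCESS_KEY:
          AWS_SESSION_TOKEN:
          AWS_REGION:
          AWS_DEFAULT_REGION:
        with:
          role-to-assume: ${{ secrets.SECOND_ROLE_ARN }} # placeholder
          aws-region: eu-west-2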

ghost commented 2 years ago

I'm facing the same issue even with a much simpler setup...

      - name: Configure AWS CLI
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.TEST_AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.TEST_AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-central-1

flexwang2 commented 2 years ago

+1

benmcp commented 2 years ago

I had a slightly different scenario but experienced a similar problem. As a means to potentially help future engineers, I'll be a bit more thorough here.

My scenario was, within a single job:

- assume Role A in AWS Account A via OpenID Connect, then
- use Role A's credentials to assume Role B in AWS Account B via sts:AssumeRole.

I experienced a range of issues, described in turn below.

The No OpenIDConnect provider found in your account error was solved by explicitly passing the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables exported by the first step into the second step's aws-access-key-id and aws-secret-access-key inputs, so that the second step assumes its role via the access keys rather than via OpenID Connect.

This then caused the The security token included in the request is invalid error, which was solved by also passing the AWS_SESSION_TOKEN exported by the first assume-role step into the second assume-role step.

This then caused the The requested DurationSeconds exceeds the MaxSessionDuration set for this role. error, which was resolved by specifying a role-duration-seconds of less than 1 hour in the second assume-role step. Further information about this can be found here.

In summary, the following setup worked for me:

name: List Buckets
on: [push]

permissions:
  id-token: write
  contents: read

jobs:
  list-buckets:
    name: List S3 Buckets
    runs-on: ubuntu-latest

    steps:
    - name: Checkout
      uses: actions/checkout@v2

    - name: Assume Role A in AWS Account A via Open ID Connect
      uses: aws-actions/configure-aws-credentials@v1
      with:
        role-session-name: account-a-session
        role-to-assume: ${{ secrets.ROLE_A_IN_ACCOUNT_A }}
        aws-region: us-east-1

    - name: Assume Role B in AWS Account B via sts:assumeRole
      uses: aws-actions/configure-aws-credentials@v1
      with:
        role-session-name: account-b-session
        role-to-assume: ${{ secrets.ROLE_B_IN_ACCOUNT_B }}
        aws-region: us-east-1
        aws-access-key-id: ${{ env.AWS_ACCESS_KEY_ID }}
        aws-secret-access-key: ${{ env.AWS_SECRET_ACCESS_KEY }}
        aws-session-token: ${{ env.AWS_SESSION_TOKEN }}
        role-duration-seconds: 1200

    - name: List Buckets 
      run: |
        aws s3api list-buckets

I hope this helps people coming here in the future!

tung-hl commented 1 year ago

I was able to assume a different role by setting AWS env vars to null:

jobs:
  foo:
    runs-on: ubuntu-latest
    name: foo-build
    steps:
      - name: Configure AWS credentials
        id: aws_credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          role-to-assume: ${{ env.ROLE_ARN_A }}
          aws-region: ${{ env.AWS_DEFAULT_REGION_A }}

      - name: List Buckets
        run: |
          aws s3api list-buckets

      - name: Switch AWS credentials
        id: aws_aws_credentials
        uses: aws-actions/configure-aws-credentials@v1
        env:
          AWS_DEFAULT_REGION: ${{ null }}
          AWS_REGION: ${{ null }}
          AWS_ACCESS_KEY_ID: ${{ null }}
          AWS_SECRET_ACCESS_KEY: ${{ null }}
          AWS_SESSION_TOKEN: ${{ null }}
        with:
          role-to-assume: ${{ env.ROLE_ARN_B }}
          aws-region: ${{ env.AWS_DEFAULT_REGION_B }}

      - name: List Buckets Again
        run: |
          aws s3api list-buckets

grudelsud commented 1 year ago

hey,

I have a similar problem and unfortunately I wasn't able to solve it by setting env vars to null, as someone suggested in this thread.

here's what happens to me:

      - name: Configure AWS Credentials with GitHub OpenID STS
        uses: aws-actions/configure-aws-credentials@v1-node16
        with:
          role-to-assume: arn:aws:iam::123456:role/GitHub_OpenID
          aws-region: eu-west-1

      - name: Do stuff
        run: do_stuff_with_1st_role.sh

      - name: cleanup
        run: |
          unset AWS_DEFAULT_REGION
          unset AWS_REGION
          unset AWS_ACCESS_KEY_ID
          unset AWS_SECRET_ACCESS_KEY
          unset AWS_SESSION_TOKEN

      - name: Configure AWS Credentials again with IAM user
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ env.AWS_ACCESS_KEY_ }}
          aws-secret-access-key: ${{ env.AWS_SECRET_ACCESS_KEY_ }}
          aws-region: ${{ env.AWS_DEFAULT_REGION_ }}

      - name: Do some more stuff
        run: do_stuff_with_2nd_role.sh

when I run do_stuff_with_2nd_role.sh in the last step, I get an error in my script clearly showing that the role is still set from the previous credentials. (Each run: step gets its own shell, so the unset calls in the cleanup step only affect that step, while the credentials the action wrote to $GITHUB_ENV persist into later steps.)

I don't have a solution for this yet, but anyone who can suggest a fix is more than welcome!

rjeczalik commented 1 year ago

@grudelsud Apparently setting the envs explicitly to null, in addition to the unset part, should do the trick. This is missing in your snippet:

      - name: Configure AWS Credentials again with IAM user
        uses: aws-actions/configure-aws-credentials@v1
        env:
          AWS_DEFAULT_REGION:
          AWS_REGION:
          AWS_ACCESS_KEY_ID:
          AWS_SECRET_ACCESS_KEY:
          AWS_SESSION_TOKEN:
        with:
          aws-access-key-id: ${{ env.AWS_ACCESS_KEY_ }}
          aws-secret-access-key: ${{ env.AWS_SECRET_ACCESS_KEY_ }}
          aws-region: ${{ env.AWS_DEFAULT_REGION_ }}

grudelsud commented 1 year ago

hey @rjeczalik thanks for your reply

I had actually tried to set it as suggested, but I get this error from the runner when running the "aws configure credentials" step:

Credentials could not be loaded, please check your action inputs: Could not load credentials from any providers

rjeczalik commented 1 year ago

@grudelsud Then it works correctly, since it did not reuse your prior credentials. Now just fix the input arguments in the with: object, since they are wrong, and you're done.

grudelsud commented 1 year ago

whoops, was it just a misspelled variable name? embarrassing šŸ˜Š thanks @rjeczalik. I have to double-check my script: I was sure it was set correctly, and now, after making a few changes, not so much. I'll report back should I spot further troubles. Many thanks for your checks.

grudelsud commented 1 year ago

ok, after doing quite a few tests I thought it would be useful to share my findings in this thread, especially since similar problems are reported in #383 and #423

my flow needs 2 separate AWS configurations, one role and one user: the first call is used to retrieve some secrets from our vault, the second to execute deployment scripts

the odd thing I noticed is that after configuring my credentials, our terraform script worked fine, while the following step, which runs the aws cli to upload an updated lambda, threw the error An error occurred (UnrecognizedClientException) when calling the UpdateFunctionCode operation: The security token included in the request is invalid

eventually, I solved the problem by manually setting all AWS variables to null, apart from the region, access key and secret, in the step that threw the error.

below is a snippet that works for me, hope this helps, and thanks everyone for your suggestions!

      - name: Authenticated on OpenID identity provider to get AWS tokens
        uses: aws-actions/configure-aws-credentials@v1-node16
        with:
          role-to-assume: arn:aws:iam::123456:role/GitHub_OpenID
          aws-region: eu-west-1

      - name: create deploy environment using role token
        run: |
          make retrieve_secrets_from_vault

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1-node16
        env:
          AWS_DEFAULT_REGION:
          AWS_REGION:
          AWS_ACCESS_KEY_ID:
          AWS_SECRET_ACCESS_KEY:
          AWS_SESSION_TOKEN:
        with:
          aws-access-key-id: ${{ env.AWS_ACCESS_KEY_ID_ }}
          aws-secret-access-key: ${{ env.AWS_SECRET_ACCESS_KEY_ }}
          aws-region: ${{ env.REGION }}

      - name: do stuff with terraform (uses aws and works without extra env)
        run: |
          make stuff_with_terraform

      - name: do stuff with aws cli (this step throws error if env vars aren't explicitly set)
        env:
          AWS_DEFAULT_REGION: ${{ env.REGION }}
          AWS_REGION: ${{ env.REGION }}
          AWS_ACCESS_KEY_ID: ${{ env.AWS_ACCESS_KEY_ID_ }}
          AWS_SECRET_ACCESS_KEY: ${{ env.AWS_SECRET_ACCESS_KEY_ }}
          AWS_SESSION_TOKEN:
          AWS_ROLE_ARN:
          AWS_WEB_IDENTITY_TOKEN_FILE:
          AWS_ROLE_SESSION_NAME:
        run: |
          make stuff_with_aws_cli

ain't pretty but it works šŸ˜„

MaxwellEnemuo commented 1 year ago

I added the secrets.AWS_SESSION_TOKEN in the repository secrets, and it worked. At least for my testing needs:

  - name: Configure AWS credentials
    uses: aws-actions/configure-aws-credentials@v1
    with:
      aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
      aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      aws-session-token: ${{ secrets.AWS_SESSION_TOKEN }}
      aws-region: eu-central-1

maciejbak85 commented 1 year ago

bro you saved my life :) that's crazy, with this naming and the error message saying nothing

samashtidevshukla commented 1 year ago

I added the secrets.AWS_SESSION_TOKEN in the repository secrets, and it worked. At least for my testing needs:

  - name: Configure AWS credentials
    uses: aws-actions/configure-aws-credentials@v1
    with:
      aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
      aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      aws-session-token: ${{ secrets.AWS_SESSION_TOKEN }}
      aws-region: eu-central-1

worked for me too

mitchellreid commented 1 year ago

I added the secrets.AWS_SESSION_TOKEN in the repository secrets, and it worked. At least for my testing needs:

  - name: Configure AWS credentials
    uses: aws-actions/configure-aws-credentials@v1
    with:
      aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
      aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      aws-session-token: ${{ secrets.AWS_SESSION_TOKEN }}
      aws-region: eu-central-1

Saved my assignment, thanks a bunch

peterwoodworth commented 1 year ago

It seems to me that the OP of this issue has a different problem than any of the commenters. First, to address the commenters who are concerned with multiple invocations in a single job:

Some combination of the unset-current-credentials and role-chaining inputs should work for any case where this action is invoked multiple times. These are available on v3; check out the README, and see the sketch below.
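
A minimal sketch of two invocations with those v3 inputs (the role secrets and region here are placeholders):

      - uses: aws-actions/configure-aws-credentials@v3
        with:
          role-to-assume: ${{ secrets.ROLE_A }}
          aws-region: us-east-1

      - uses: aws-actions/configure-aws-credentials@v3
        with:
          role-to-assume: ${{ secrets.ROLE_B }}
          aws-region: us-east-1
          # Discard the credentials the previous invocation exported
          # before resolving new ones.
          unset-current-credentials: true
          # Alternatively, role-chaining: true assumes ROLE_B using
          # ROLE_A's credentials instead of fresh OIDC credentials.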

As for the OP: different jobs run on different runners, and the credentials configured in one runner shouldn't impact the credentials of another. I'd need to look into whether the values of inputs/secrets are set at the initialization of the runner or at the start of the workflow step. I'd guess at runner initialization, in which case you'd need to have the secret properly set at the time the job starts. If setting unset-current-credentials on v3 helps, let me know!

github-actions[bot] commented 1 year ago

This issue has not received a response in a while. If you want to keep this issue open, please leave a comment below and auto-close will be canceled.