Closed lpossamai closed 1 year ago
facing same issue and would love to find a solution
I can provide a suitable solution for multiple regions & multiple accounts. Here is the repo link: http://github.com/startuplcoud/infra-multi-account-region-startup-kit/ (note: I still need to update the documentation and add more details).
This is how I manage in my pipeline:
```yaml
- name: Terraform Validate
  working-directory: ./ProvisionAWSGlobal
  id: validate
  run: terraform validate -no-color
  env:
    AWS_ACCESS_KEY_ID: "${{ secrets.APPID }}"
    AWS_SECRET_ACCESS_KEY: "${{ secrets.APPSECRET }}"
  continue-on-error: true
```
And in my terraform:
```hcl
provider "aws" {
  alias  = "base"
  region = var.deploy_region

  default_tags {
    tags = {
      managed_by = "Terraform"
    }
  }
}

provider "aws" {
  alias  = "other"
  region = var.deploy_region

  assume_role {
    role_arn = "arn:aws:iam::${var.management_account_id}:role/${var.role_name}"
  }

  default_tags {
    tags = {
      managed_by = "Terraform"
    }
  }
}
```
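To consume the second provider, each resource selects it by alias. A minimal sketch (the resource and bucket names are hypothetical, purely for illustration):

```hcl
# Hypothetical example: pin a resource to the cross-account provider
# by referencing its alias.
resource "aws_s3_bucket" "management_logs" {
  provider = aws.other # deploys via the assume_role provider above
  bucket   = "example-management-logs"
}

# Since both providers above are aliased, every resource must pick
# one explicitly via the provider argument.
resource "aws_s3_bucket" "base_logs" {
  provider = aws.base
  bucket   = "example-base-logs"
}
```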
We do Assume Role twice to manage multiple provider situations like this case.
```text
[GithubAction] --(assume role with configure-aws-credentials)--> [prod_role] --(assume role with terraform assume_role)--> [staging_role]

[GithubAction] --(assume role with configure-aws-credentials)--> [prod_role] --(assume role with terraform assume_role)--> [test_role]
```
If you could provide example code, that would be awesome!
Hello, I recently encountered this same issue. Is there any update on a fix?
@CyberViking949 This advice worked for me to assume multiple roles https://github.com/aws-actions/configure-aws-credentials/issues/636#issuecomment-1418641641
Thanks @Constantin07, however that requires static access keys to be set up. The whole reason I was leveraging this action was to use the GitHub OIDC provider in AWS, so I'm assuming a role in an identity account in order to assume a role in a prod/dev account, all using ephemeral tokens.

Action assume role --> Identity role (this action) --> backend role for S3 state files --> child role for plan/apply.

The backend role is assumed properly and state is pulled. However, plan/apply is not using the role defined in the provider and is instead using the role from the identity account.
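For reference, the provider-level role chaining I expected to work looks like this sketch (the account ID, role name, and region are placeholders):

```hcl
# The backend role is passed at init time via
# `terraform init -backend-config="role_arn=..."`; the plan/apply
# role is expected to come from this provider block, chained from
# the identity-role credentials the action configured.
provider "aws" {
  region = "us-east-1" # placeholder

  assume_role {
    # Placeholder ARN: the child role in the prod/dev account.
    role_arn     = "arn:aws:iam::111111111111:role/child-plan-apply-role"
    session_name = "terraform-plan-apply"
  }
}
```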
I'm standing on the shoulders of giants with this, but here is something that I whipped up to meet my use case: https://github.com/marketplace/actions/configure-aws-profile
Thanks for sharing this mcblair, this is excellent. I'm going to be closing this issue in favor of https://github.com/aws-actions/configure-aws-credentials/issues/112, as I suspect once #112 is implemented that will work for this use case. Let me know if you disagree and I can reopen this issue
Comments on closed issues are hard for our team to see. If you need more assistance, please either tag a team member or open a new issue that references this one. If you wish to keep having a conversation with other community members under this issue feel free to do so.
Hi @peterwoodworth ,
I disagree that #112 will fix this issue. #112 will use profiles, not IAM Roles. That would result in a very long pipeline config file, depending on your setup, and lots and lots of GitHub Secrets to configure, which isn't practical.
If we take the #112 example:
```yaml
- name: Add Dev profile credentials to ~/.aws/credentials
  env:
    DEV_AWS_ACCESS_KEY_ID: ${{ secrets.DEV_AWS_ACCESS_KEY_ID }}
    DEV_AWS_SECRET_ACCESS_KEY: ${{ secrets.DEV_AWS_SECRET_ACCESS_KEY }}
  run: |
    aws configure set aws_access_key_id $DEV_AWS_ACCESS_KEY_ID --profile my-app-name-dev
    aws configure set aws_secret_access_key $DEV_AWS_SECRET_ACCESS_KEY --profile my-app-name-dev
- name: Add Staging profile credentials to ~/.aws/credentials
  env:
    STAGING_AWS_ACCESS_KEY_ID: ${{ secrets.STAGING_AWS_ACCESS_KEY_ID }}
    STAGING_AWS_SECRET_ACCESS_KEY: ${{ secrets.STAGING_AWS_SECRET_ACCESS_KEY }}
  run: |
    aws configure set aws_access_key_id $STAGING_AWS_ACCESS_KEY_ID --profile my-app-name-staging
    aws configure set aws_secret_access_key $STAGING_AWS_SECRET_ACCESS_KEY --profile my-app-name-staging
- name: Add Prod profile credentials to ~/.aws/credentials
  env:
    PROD_AWS_ACCESS_KEY_ID: ${{ secrets.PROD_AWS_ACCESS_KEY_ID }}
    PROD_AWS_SECRET_ACCESS_KEY: ${{ secrets.PROD_AWS_SECRET_ACCESS_KEY }}
  run: |
    aws configure set aws_access_key_id $PROD_AWS_ACCESS_KEY_ID --profile my-app-name-prod
    aws configure set aws_secret_access_key $PROD_AWS_SECRET_ACCESS_KEY --profile my-app-name-prod
```
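For context, consuming those profiles from Terraform would look something like this sketch (profile names taken from the example above; regions and other provider settings omitted):

```hcl
# One aliased provider per profile written to ~/.aws/credentials
# by the workflow steps above.
provider "aws" {
  alias   = "dev"
  profile = "my-app-name-dev"
}

provider "aws" {
  alias   = "staging"
  profile = "my-app-name-staging"
}

provider "aws" {
  alias   = "prod"
  profile = "my-app-name-prod"
}
```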
What I propose is a way to support multiple AWS authentications using IAM Roles.
Thanks @lpossamai, I see why profiles don't solve this for you.
I'm curious to know more about how exactly you're using this action within your workflow, and what exactly you're doing in Terraform. I'm unfamiliar with Terraform: is there one command that you're running in one step, and do you need to be able to assume multiple roles at once for that one Terraform command to work?
Hi @peterwoodworth , thanks for your prompt reply.
TBH, I have changed the way I use Terraform and authenticate with AWS, so this issue no longer affects me and I cannot replicate it anymore. Looking at this further, I realize now that the limitation I was facing is not something that needs to be, or can be, fixed by the maintainers of aws-actions/configure-aws-credentials. It should be addressed at the Terraform level.
A little background for further reference.
Before the change I made, I was using Github Actions to deploy my infrastructure to AWS with Terraform. A sample code would be:
```hcl
// terraform/elb/main.tf
resource "aws_lb" "alb" {
  count = terraform.workspace == "test" || terraform.workspace == "staging" ? 1 : 0

  name               = "example-${terraform.workspace}-alb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.alb[count.index].id]
  subnets            = data.terraform_remote_state.network.outputs.public_subnets
  idle_timeout       = 300

  enable_deletion_protection = true
  enable_http2               = true
  preserve_host_header       = true
  drop_invalid_header_fields = true

  access_logs {
    bucket  = module.alb_log_bucket[count.index].s3_bucket_id
    prefix  = terraform.workspace
    enabled = true
  }

  tags = merge({
    Environment = terraform.workspace
  }, var.tags)
}
```
The github workflow for that particular folder would look like this:
```yaml
jobs:
  ELB-TEST:
    name: "ELB-TEST"
    runs-on: ubuntu-latest
    environment: test
    env:
      TF_VAR_iam_role_to_assume_test: ${{ secrets.iam_role_to_assume_test }}
      ENVIRONMENT: test
    defaults:
      run:
        working-directory: ${{ env.WORKING_DIRECTORY }}
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          role-to-assume: ${{ env.TF_VAR_iam_role_to_assume_test }}
          role-session-name: github-ELB-test
          aws-region: ${{ env.AWS_REGION }}
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
      - name: Terraform Format
        id: fmt
        run: terraform fmt -check -recursive
      - name: Terraform Init
        id: init
        run: |
          terraform init -backend-config="role_arn=$TF_VAR_iam_role_to_terraform_backend"
      - name: Terraform Validate
        id: validate
        run: |
          terraform validate -no-color
        env:
          TF_WORKSPACE: test
          TF_IN_AUTOMATION: true
      - name: Terraform Plan
        id: plan
        if: github.event_name == 'pull_request'
        run: terraform plan -input=false -out=tf_plan_out_${{ env.ENVIRONMENT }}_${{ env.TF_MODULE_NAME }}.tfplan
        continue-on-error: false
        env:
          TF_WORKSPACE: test
          TF_IN_AUTOMATION: true
      - name: Terraform Apply
        if: github.ref == 'refs/heads/main' && github.event_name == 'push'
        run: terraform apply -input=false -auto-approve tf_plan_out_${{ env.ENVIRONMENT }}_${{ env.TF_MODULE_NAME }}.tfplan
        env:
          TF_WORKSPACE: test
          TF_IN_AUTOMATION: true

  ELB-STAGING:
    name: "ELB-STAGING"
    runs-on: ubuntu-latest
    needs: ELB-TEST
    environment: staging
    env:
      TF_VAR_iam_role_to_assume_staging: ${{ secrets.iam_role_to_assume_staging }}
      ENVIRONMENT: staging
    defaults:
      run:
        working-directory: ${{ env.WORKING_DIRECTORY }}
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          role-to-assume: ${{ env.TF_VAR_iam_role_to_assume_staging }}
          role-session-name: github-ELB-staging
          aws-region: ${{ env.AWS_REGION }}
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
      - name: Terraform Format
        id: fmt
        run: terraform fmt -check -recursive
      - name: Terraform Init
        id: init
        run: |
          terraform init -backend-config="role_arn=$TF_VAR_iam_role_to_terraform_backend"
      - name: Terraform Validate
        id: validate
        run: |
          terraform validate -no-color
        env:
          TF_WORKSPACE: staging
          TF_IN_AUTOMATION: true
      - name: Terraform Plan
        id: plan
        if: github.event_name == 'pull_request'
        run: terraform plan -input=false -out=tf_plan_out_${{ env.ENVIRONMENT }}_${{ env.TF_MODULE_NAME }}.tfplan
        continue-on-error: false
        env:
          TF_WORKSPACE: staging
          TF_IN_AUTOMATION: true
      - name: Terraform Apply
        if: github.ref == 'refs/heads/main' && github.event_name == 'push'
        run: terraform apply -input=false -auto-approve tf_plan_out_${{ env.ENVIRONMENT }}_${{ env.TF_MODULE_NAME }}.tfplan
        env:
          TF_WORKSPACE: staging
          TF_IN_AUTOMATION: true
```
So, not great: I would have to have a job for each of my environments and for each of my `terraform/**` folders/modules. And not only that, but what if I wanted to deploy to multiple accounts in the same PR? That wouldn't be possible.
What I ended up doing allows me to deploy to multiple accounts in the same PR using `provider = aws.alias`. You can check this diagram to understand the concept.
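In outline, that end state can be sketched as aliased providers that each assume a role in a target account, with every resource selecting an account by alias (account IDs, role names, and the variable below are placeholders):

```hcl
provider "aws" {
  alias  = "test"
  region = var.aws_region # placeholder variable

  assume_role {
    role_arn = "arn:aws:iam::111111111111:role/terraform-deploy" # placeholder
  }
}

provider "aws" {
  alias  = "staging"
  region = var.aws_region

  assume_role {
    role_arn = "arn:aws:iam::222222222222:role/terraform-deploy" # placeholder
  }
}

# Each resource picks the account it deploys to, so one plan/apply
# in one PR can touch several accounts:
resource "aws_lb" "test_alb" {
  provider = aws.test
  # ...
}
```

With this layout, a single GitHub Actions job assumes only the initial role via configure-aws-credentials, and Terraform's provider-level `assume_role` handles the per-account hop.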
Safe to close this issue now. Thanks!
Comments on closed issues are hard for our team to see. If you need more assistance, please either tag a team member or open a new issue that references this one. If you wish to keep having a conversation with other community members under this issue feel free to do so.
Sorry, I didn't read all the comments before replying. #112 covers my requirements
Hello,

I have a question about how I can use `configure-aws-credentials` to assume multiple roles, so that my Terraform `provider.tf` file can apply all the necessary changes to multiple accounts.

Example: from my `PROD` workspace, I need to deploy to the `TEST` and `DEV` workspaces. In my `provider.tf` file I have the following:

In my Github Actions workflow I have the following:

But that gives me an error, because Github didn't have permissions to assume the other two roles, `staging` and `test`.

Is there a workaround for this? Any suggestions are welcome.

Thanks!