gruntwork-io / terragrunt

Terragrunt is a flexible orchestration tool that allows Infrastructure as Code written in OpenTofu/Terraform to scale.
https://terragrunt.gruntwork.io/
MIT License

Setting environment variables not working #2937

Open ezienecker opened 4 months ago

ezienecker commented 4 months ago

Describe the bug
I use AWS S3 as my backend and AWS SSO (not the legacy version of SSO) as my authentication method. I also use multiple AWS accounts (development, testing, production), which means I work with multiple profiles of the same names in my AWS config (~/.aws/config). Depending on the environment, I therefore have to run export AWS_PROFILE="development" in the current shell before terraform or terragrunt will run without errors. Since this can be error-prone, I was looking for a way to avoid it and came across the "Keep your CLI flags DRY" section of the docs. As far as I can tell, I have configured it so that this environment variable is set before the defined commands run, but I keep hitting the same error (see below). I am not sure whether this doesn't work in the current Terragrunt version or whether I have misunderstood the feature.

If I execute export AWS_PROFILE="development" in the shell before using terragrunt, terragrunt definitely works.
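For concreteness, the manual workflow I want to avoid (both commands are from the setup above):

export AWS_PROFILE="development"   # must be repeated per shell and per environment
terragrunt run-all init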

To Reproduce

locals {
  account_vars = read_terragrunt_config(find_in_parent_folders("account.hcl"))
  region_vars = read_terragrunt_config(find_in_parent_folders("region.hcl"))
  environment_vars = read_terragrunt_config(find_in_parent_folders("env.hcl"))

  account_name = local.account_vars.locals.aws_account_name
  account_id   = local.account_vars.locals.aws_account_id
  user_profile = local.account_vars.locals.aws_user_profile
  aws_region   = local.region_vars.locals.aws_region
}

generate "provider" {
  path      = "provider.tf"
  if_exists = "overwrite_terragrunt"
  contents  = <<EOF
provider "aws" {
  region = "${local.aws_region}"
  allowed_account_ids = ["${local.account_id}"]

  profile                  = "${local.user_profile}"
}
EOF
}

remote_state {
  backend = "s3"
  config  = {
    encrypt = true
    bucket  = "terraform-state-${local.account_name}"
    key     = "${path_relative_to_include()}/terraform.tfstate"
    region  = local.aws_region
    #dynamodb_table = "terraform-locks"
  }
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terragrunt"
  }
}

terraform {
  extra_arguments "aws_profile_config" {
    commands  = ["init", "plan", "apply"]
    arguments = []
    env_vars  = {
      AWS_PROFILE = "development"
      TF_VAR_AWS_PROFILE = "development"
      TF_VAR_aws_profile = "development"
    }
  }
}

inputs = merge(
  local.account_vars.locals,
  local.region_vars.locals,
  local.environment_vars.locals,
)

Expected behavior
I expect not to have to export AWS_PROFILE="development" in the shell in order to connect to the AWS S3 backend; the environment variables from extra_arguments should be set correctly.

Nice to have
Terminal output:

➜ terragrunt run-all init
INFO[0000] The stack at /Users/muster/sample/infrastructure/environments/development will be processed in the following order for command init:
Group 1
- Module /Users/muster/sample/infrastructure/environments/development/s3-tenant-bucket

ERRO[0005] Module /Users/muster/sample/infrastructure/environments/development/s3-tenant-bucket has finished with an error: Error finding AWS credentials (did you set the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables?): NoCredentialProviders: no valid providers in chain. Deprecated.
        For verbose messaging see aws.Config.CredentialsChainVerboseErrors  prefix=[/Users/muster/sample/infrastructure/environments/development/s3-tenant-bucket] 
ERRO[0005] 1 error occurred:
        * Error finding AWS credentials (did you set the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables?): NoCredentialProviders: no valid providers in chain. Deprecated.
        For verbose messaging see aws.Config.CredentialsChainVerboseErrors

ERRO[0005] Unable to determine underlying exit code, so Terragrunt will exit with error code 1 
ERRO[0005] Suggested fixes: 
Missing AWS credentials. Provide credentials to proceed.

Versions

Additional context

acontrerasmp commented 4 months ago

I have the same issue; it started failing after I upgraded to the latest Terragrunt version.

ezienecker commented 4 months ago

@coc-monpasse which version did you upgrade from? I tried with version 0.54.0 and it's not working either.

acontrerasmp commented 4 months ago

Hi, I fixed the issue by updating Terraform and Terragrunt to the latest versions and adding the profile to the backend configuration. In my case that fixed it.
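For reference, a sketch of what that could look like, reusing the locals from the original config above; profile is a supported argument of the S3 backend, though whether this resolves the SSO case may depend on the exact versions:

remote_state {
  backend = "s3"
  config  = {
    encrypt = true
    bucket  = "terraform-state-${local.account_name}"
    key     = "${path_relative_to_include()}/terraform.tfstate"
    region  = local.aws_region
    # With profile set here, Terragrunt's own calls to the state bucket can
    # resolve the same profile as the generated provider block.
    profile = local.user_profile
  }
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terragrunt"
  }
}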

ezienecker commented 4 months ago

Unfortunately, this did not solve the problem.

krachynski commented 3 months ago

I am also running into this on 0.55.13.

We're using Jenkins and the AWS Pipeline plugin (jenkinsci/pipeline-aws-plugin), which provides credentials to our build session as environment variables only. There is no profile on the agents at any time.

We're also doing role assumption, and as a result Terragrunt is failing to assume the role properly.
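For reference, a minimal sketch of this kind of role assumption, assuming it goes through Terragrunt's top-level iam_role attribute (the ARN below is a placeholder, not our real role):

# terragrunt.hcl -- role assumption relying on the ambient env-var credentials
# that the Jenkins plugin injects; no profile is involved.
iam_role = "arn:aws:iam::123456789012:role/deploy-role"  # placeholder ARN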

mpascu commented 1 month ago

I'm facing the same issue using access_key and secret_key to authenticate to the backend. I noticed that if I add disable_init to the backend configuration, Terragrunt can init and plan until I change something, at which point I get an error. This makes me think the modules are authenticating properly through the generated backend block and that the error comes from the Terragrunt binary itself; a fuller sketch follows the snippet below.

remote_state {
  backend = "s3"
  disable_init = true
  # ...
}
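A fuller sketch of that workaround, with placeholder bucket and region, assuming the keys are wired in through Terragrunt's get_env() (the environment variable names here are hypothetical):

remote_state {
  backend      = "s3"
  disable_init = true  # skip Terragrunt's own backend bootstrap, where the failure occurs
  config = {
    bucket     = "my-state-bucket"  # placeholder
    key        = "${path_relative_to_include()}/terraform.tfstate"
    region     = "us-east-1"        # placeholder
    access_key = get_env("STATE_ACCESS_KEY", "")  # hypothetical variable names
    secret_key = get_env("STATE_SECRET_KEY", "")
  }
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terragrunt"
  }
}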