hashicorp / terraform

Terraform enables you to safely and predictably create, change, and improve infrastructure. It is a source-available tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned.
https://www.terraform.io/

AWS Credentials when working with s3 Remote State #5839

Closed: realdavidops closed this issue 5 years ago

realdavidops commented 8 years ago

I ran into an interesting problem when working with a terraform project that used an S3 remote tfstate file.

The setup: I have been working on a terraform project on my personal laptop, which I set up locally with an s3 remote state file. AWS credentials are loaded into the project using a .tfvars file, for simplicity (this file is not committed to git).

During the course of the project, it was determined that we would need to move where we run terraform to a server with firewall access to the AWS instances (for provisioning). I moved the terraform project and, as a test, ran terraform plan; after that I got the following error:

Unable to determine AWS credentials. Set the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables.
(error was: NoCredentialProviders: no valid providers in chain. Deprecated. 
    For verbose messaging see aws.Config.CredentialsChainVerboseErrors)

I checked and made sure that my tfvars file was in place. It looks like:

# AWS Access Key
ak = "[my actual access key]"
# AWS Secret Key
sk = "[my actual secret key]"
# AWS Region
region = "eu-central-1"
# Environment Name
enviro = "eu3"

And here is the part of my terraform file that is relevant to this conversation:

# Variables for AWS
variable "ak" {
  type = "string"
}
variable "sk" {
  type = "string"
}
variable "region" {
  type = "string"
  default = "eu-central-1"
}

# Variables for Environment
variable "enviro" {
  type = "string"
  default = "eu3"
}

# Set up AWS access to environment
provider "aws" {
    access_key = "${var.ak}"
    secret_key = "${var.sk}"
    region = "${var.region}"
}

# Setup storage of terraform statefile in s3.
# You should change stuff here if you are working on a different environment,
# especially if you are working with two separate environments in one region.
resource "terraform_remote_state" "ops" {
    backend = "s3"
    config {
        bucket = "eu3-terraform-ops"
        key = "terraform.tfstate"
        region = "${var.region}"
    }
}

Things I checked at this point:

  1. My Access Key/Secret Key are both valid.
  2. I am using the same version of terraform: Terraform v0.6.12.
  3. It also does not work with the latest Terraform, v0.6.14.
  4. I am able to reach the API endpoints over the network.
  5. Removing the remote provider and deleting the .terraform file does allow me to run terraform plan, but this obviously does not have the right state. If I try to re-set up the remote via terraform remote... (see the sketch after this list) it throws the above error.
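
For reference, this is the general shape of the command used to set up the s3 remote state in this era of terraform (a sketch; the bucket and key are the ones from the config above, and the exact flags I ran may have differed):

terraform remote config \
    -backend=s3 \
    -backend-config="bucket=eu3-terraform-ops" \
    -backend-config="key=terraform.tfstate" \
    -backend-config="region=eu-central-1"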

Through trial and error, this is what we found to be the problem: a file has to exist at ~/.aws/credentials containing a valid [default] credential. This credential does NOT have to be for the environment that terraform is working on; in fact, the key I used is for a completely separate AWS account. When I add that file to the new server, the s3 remote state suddenly works. If I invalidate the key in that [default] profile, I get the following:

Error reloading remote state: InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records.
    status code: 403, request id: C871F77837CFC156

Note: this key has no access to the s3 bucket we are using. A sketch of the file that makes the difference is below. If there are any questions about this, please let me know. Thanks.
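
(A minimal sketch; the values are placeholders for any valid credential, even one from an unrelated account.)

[default]
aws_access_key_id = <any-valid-access-key>
aws_secret_access_key = <matching-secret-key>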

realdavidops commented 8 years ago

This is limited specifically to s3 remote state; working with terraform and AWS otherwise is fine.

kendawg2 commented 8 years ago

I believe PR #5270 may address this issue.

realdavidops commented 8 years ago

I believe you might be right. I'll work on testing it.

simonvanderveldt commented 8 years ago

I've just tried v0.6.16, which includes #5270, with our codebase that uses S3 for remote state, and it doesn't actually fix this issue. If I set the required access key, secret access key, and region variables, it doesn't work. When I set them using the AWS env vars, it does work.

I haven't tried the ~/.aws/credentials file workaround mentioned in the initial report, so can't comment on that.

rarkins commented 8 years ago

I hit the same problem. Somehow terraform is not using ~/.aws/credentials properly: e.g. aws s3 ls ... will work and then terraform apply will not. Moving the credentials into the [default] profile worked.

jwadolowski commented 8 years ago

Same here. The following terraform_remote_state:

resource "terraform_remote_state" "project" {
  backend = "s3"

  config {
    bucket     = "${var.project_state_bucket}"
    key        = "${var.project_state_key}"
    region     = "${var.project_aws_region}"
    access_key = "${var.project_aws_access_key}"
    secret_key = "${var.project_aws_secret_key}"
  }
}

always ends with

Error applying plan:

1 error(s) occurred:

* terraform_remote_state.project: AccessDenied: Access Denied
    status code: 403, request id: DD5189A46X0BX0BA

At the same time, I was able to list and download the contents of this S3 bucket using the aws s3 command.

Workaround in my case:

[default]
aws_access_key_id = <ACCESS_KEY_1>
region = eu-west-1
aws_secret_access_key = <SECRET_KEY_1>

[project]
aws_access_key_id = <ACCESS_KEY_2>
region = eu-west-1
aws_secret_access_key = <SECRET_KEY_2>
resource "terraform_remote_state" "project" {
  backend = "s3"

  config {
    bucket                  = "${var.project_state_bucket}"
    key                     = "${var.project_state_key}"
    region                  = "${var.project_aws_region}"
    shared_credentials_file = "~/.aws/config"
    profile                 = "project"
  }
}

rarkins commented 8 years ago

I've resorted to just using aws s3 cp for terraform.tfstate before and after the terraform commands, and removing the terraform remote state configuration altogether. For a "single state" project, am I missing out on anything?

marcboudreau commented 8 years ago

@jwadolowski: I'm curious, which version of terraform are you running? The shared_credentials_file and profile options seem new to me.

jwadolowski commented 8 years ago

@marcboudreau I'm running v0.6.16. Both profile and shared_credentials_file are fairly new, but they are fully documented here.

marcboudreau commented 8 years ago

@jwadolowski Thanks. I'm running 0.6.15 and it's not in that version. I found the code change that adds support for it.

flyinprogrammer commented 8 years ago

This is still broken in 0.6.16 as far as I can tell.

After rotating my keys and supplying new ones in the config, I get this error:

* terraform_remote_state.r53: InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records.

akatrevorjay commented 8 years ago

If you're getting:

Error reloading remote state: SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method.

Triple-check the bucket names in your remote config. I, for instance, had the s3:// prefix on mine, which terraform really should strip (the aws s3 client, by contrast, actually requires it, sigh).
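
In other words (a minimal sketch, reusing the bucket name from the original report):

config {
    # wrong: the s3:// prefix breaks request signing
    # bucket = "s3://eu3-terraform-ops"

    # right: the plain bucket name only
    bucket = "eu3-terraform-ops"
}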

veqryn commented 8 years ago

I began receiving this error on 0.6.16, only after rotating my AWS access keys. My guess is that terraform expects the keys to match what I was using a week ago. I've tried removing the .terraform directory and .tfstate files from my computer and re-pulling them with the remote state config command, but nothing is helping.

veqryn commented 8 years ago

It appears to be related to how we are pulling the remote config:

terraform remote config -backend=s3 -backend-config="<bucket>" -backend-config="region=<region>" -backend-config="access_key=<aws_access_key>" -backend-config="secret_key=<aws_secret>" -backend-config="key=<path>" -state="terraform.tfstate"

Once I changed this out to use profile instead, it worked fine:

terraform remote config -backend=s3 -backend-config="<bucket>" -backend-config="region=<region>" -backend-config="profile=<profile>" -backend-config="key=<path>" -state="terraform.tfstate"

veqryn commented 8 years ago

In my post above, switching to profile worked, but only after manually deleting the offending key and secret lines from the state file. However, if you change the name of your profile, or a colleague names theirs differently, it will stop working again until you manually update the state file.

Terraform needs to be more flexible about pulling and applying remote state. Maybe provide a way to change and delete remote state backend-config parameters? Or just overwrite them on pull with whatever was provided just now? API keys are regularly rotated, and profile names differ from one person to the next, so maybe don't include them in the remote state at all...
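
For anyone attempting the manual cleanup described above: the offending lines live in the locally cached state and look roughly like this (a sketch; the exact layout varies by version). Delete the access_key and secret_key entries and leave the rest intact:

"remote": {
    "type": "s3",
    "config": {
        "access_key": "<old-rotated-key>",
        "secret_key": "<old-rotated-secret>",
        "bucket": "<bucket>",
        "key": "<path>",
        "region": "<region>"
    }
}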

sjpalf commented 8 years ago

I have also experienced this problem, but discovered that an easier workaround is to run terraform remote config a second time. When you run it the second time, it automatically updates the access_key and secret_key fields.
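
That is, just repeat the earlier command with the new values (a sketch following the same command shape shown above; placeholders as before):

terraform remote config \
    -backend=s3 \
    -backend-config="bucket=<bucket>" \
    -backend-config="region=<region>" \
    -backend-config="access_key=<new_aws_access_key>" \
    -backend-config="secret_key=<new_aws_secret>" \
    -backend-config="key=<path>"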

willis7 commented 7 years ago

I'm also getting this issue, but with an added layer of complexity: I have Multi-Factor Auth enabled too.

gerr1t commented 7 years ago

I just hit this bug. Quite silly this is actually taking so long to get resolved.

adamhathcock commented 7 years ago

I'm also getting the same error as @flyinprogrammer after rotating keys:

Failed to read state: Error reloading remote state: InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records.

Version 0.8.1

adamhathcock commented 7 years ago

I've got around this by some combination of:

However, when I remove the key/secret from my config explicitly, it doesn't work. My .aws/credentials file has the same key/secret.

This all resulted from rotating my user's key/secret.

ethangunderson commented 7 years ago

Is there any update on this bug? I just ran into it today on v0.8.6. I created remote s3 state, and now whenever I do a plan I get an error saying no credential sources were found.

I originally had just a default profile in my aws credentials file, but recently added development and production profiles with no default. I reverted to my original default-only setup, but am still getting this error.

mars64 commented 7 years ago

Confirmed the bug is still present in v0.8.8. Worked around by explicitly setting profile.

kevgliss commented 7 years ago

Still seeing this error in 0.9.1

adilnaimi commented 7 years ago

Confirmed the bug is still present in v0.9.1; it works if I replace the aws profile with a key and secret.

My config.tfvars (not working):

profile = "my-profile"
region = "us-east-1"
lock_table = "stage-terraform-remote-state-locks"
encrypt = "true"
bucket = "stage-terraform-remote-state-storage"
key = "some-path/terraform.tfstate"
kms_key_id = "xxxxxxx"

➜ terraform init -backend-config=config.tfvars 
Downloading modules (if any)...
Get: git::ssh://git@github.com/myaccount/terraform-aws-modules.git?ref=v0.0.3
Initializing the backend...

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your environment. If you forget, other
commands will detect it and remind you to do so if necessary.
➜  terraform plan                               
Error loading state: InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records.
    status code: 403, request id: XXXXXXXXX

New config.tfvars (working fine):

Delete the .terraform directory and replace profile = "my-profile" with access_key and secret_key:

access_key = "my-key"
secret_key = "my-secret-key"
region = "us-east-1"
lock_table = "stage-terraform-remote-state-locks"
encrypt = "true"
bucket = "stage-terraform-remote-state-storage"
key = "some-path/terraform.tfstate"
kms_key_id = "xxxxxxx"

➜ terraform init -backend-config=config.tfvars 
Downloading modules (if any)...
Get: git::ssh://git@github.com/myaccount/terraform-aws-modules.git?ref=v0.0.3
Initializing the backend...

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your environment. If you forget, other
commands will detect it and remind you to do so if necessary.
➜  terraform plan                               
var.subnets
  List of subnets DB should be available at. It might be one subnet.

  Enter a value:

nwhite-sf commented 7 years ago

Having this issue with 0.9.2. Maybe I have the wrong expectations for how this should work. Typically I have been passing the ACCESS and SECRET keys in on the command line using -var, which I reference in places like my aws provider section. I tried configuring an s3 backend in a terraform backend block; however, I get an error saying that terraform.backend: configuration cannot contain interpolations.

If I leave access_key and secret_key out of the backend block, I would assume it would use what is configured by the aws provider block, but it does not, and I get an S3 access error.

It's also not clear to me what the functional difference is between using the terraform backend block and terraform_remote_state to configure the S3 backend. Obviously, using the resource lets me reference different s3 state files within my terraform project. Other than that, is there any reason to use one over the other?

sysadmiral commented 7 years ago

@nwhite-sf terraform_remote_state is now a data source rather than a resource, so you would use it to pull in things defined in a statefile elsewhere for use in the config/module you are working in.

I seem to be hitting the issue mentioned in this thread on 0.9.2, though. If I set the aws provider to use a profile, and my terraform config defines a backend that uses the same profile for remote state, I continually get an access denied message. I can confirm the profile in question has admin access to s3, so I'm not sure why this is happening yet, but it definitely feels like a bug right now.
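
For anyone unsure what the data source form looks like, here is a minimal sketch (the bucket values are taken from the original report, and the vpc_id output is purely illustrative):

data "terraform_remote_state" "ops" {
  backend = "s3"

  config {
    bucket = "eu3-terraform-ops"
    key    = "terraform.tfstate"
    region = "eu-central-1"
  }
}

# outputs defined in the other state become attributes,
# e.g. an output named "vpc_id":
resource "aws_security_group" "example" {
  name   = "example"
  vpc_id = "${data.terraform_remote_state.ops.vpc_id}"
}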

pioneerit commented 7 years ago

I've fixed this by removing all profile settings from the terraform code and just letting the AWS SDK work with my AWS_PROFILE environment variable. More details in the SharedCredentialsProvider section: https://docs.aws.amazon.com/sdk-for-go/api/aws/credentials/
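
That is, something like this (a sketch; the profile name is a placeholder):

export AWS_PROFILE=my-profile
terraform init
terraform plan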

thedrewster-u51 commented 7 years ago

Ran into this error today and thought it was related to this bug... it wasn't. I showed a dev how to use AWS credentials earlier in the week and had bogus AWS credentials set in my environment variables. These appear to take precedence over what is defined in the state file. So be aware that even if you have a profile defined in the terraform remote config, environment variables will override it.
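
A quick way to check for stale credentials in the environment before blaming the state file (a sketch; adjust to your shell):

# list whatever AWS variables are currently set
env | grep '^AWS_'

# clear the ones that shadow your profile/state config
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN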

combinatorist commented 7 years ago

Just noting this is related to #13589, right?

gaffo commented 7 years ago

Confirmed on 0.10.1... if I pull while specifying a different provider in provider "aws" it fails; if I rename my provider for this project to default, it works. That means I can only work on one at a time.

gaffo commented 7 years ago

If you change profile in the config block of the remote provider it works, e.g.:

data "terraform_remote_state" "aws_account" {
  backend = "s3"

  config {
    bucket  = "bucket"
    key     = "key"
    region  = "us-west-2"
    profile = "PROFILE_HERE"
  }
}

bgdnlp commented 6 years ago

Can confirm that this behavior is present in 0.10 and that adding profile (and region) to the backend config works.

yoaquim commented 6 years ago

How is this still a bug in 2018?

swoodford commented 6 years ago

I ran into a similar problem today with:

Error: Error refreshing state: 1 error(s) occurred:

* module.aws.default: NoCredentialProviders: no valid providers in chain. Deprecated.
    For verbose messaging see aws.Config.CredentialsChainVerboseErrors

I verified my credentials, checked that I was still active in AWS IAM, checked the IAM policy, checked the git repo in CodeCommit, ran terraform init --upgrade, tested pushing code to CodeCommit, tested the AWS profile with an aws s3 ls, updated Terraform to v0.11.6 (provider.aws v1.14.1), updated awscli to 1.15.0, and verified that the profile name is correct and the region is set in the backend "s3" config.

I still cannot run any TF plan or apply!

Edit: I solved my problem; this was user error. It turns out the Terraform backend was configured to use a "provider" with an alias and profile I did not recognize, and in order to run any plan/apply, every resource needed to declare that specific provider name and alias or it would fail with this error.
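
For anyone hitting the same thing, the aliased-provider pattern looks like this (a sketch with made-up names; the point is that each resource must explicitly opt in to the alias):

provider "aws" {
  alias   = "ops"
  profile = "ops-profile"
  region  = "us-east-1"
}

resource "aws_s3_bucket" "logs" {
  # without this line the resource falls back to the default provider,
  # which in my case had no valid credentials
  provider = "aws.ops"
  bucket   = "example-logs-bucket"
}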

ghost commented 6 years ago

This may not be helpful, and sorry for creeping on an old issue, but I got around this by moving my .terraform directory out and rerunning an init. This came up:

[admin@test aws]$ diff .terraform/terraform.tfstate us-west-2/.terraform/terraform.tfstate
3,4c3,4
<     "serial": 5,
<     "lineage": "<hash>",
---
>     "serial": 1,
>     "lineage": "<hash>",
11a12
>             "profile": "poc",

It appears the profile flag was never getting added to my .terraform/terraform.tfstate. (This is not the primary tfstate used for your infra, BTW.) I found this out when new team members were able to proceed without issue, but my local environment never worked no matter how many flags, options, etc. I passed. Hope this helps others.

EDIT: Full .terraform/terraform.tfstate backend block:

...
    "backend": {
        "type": "s3",
        "config": {
            "bucket": "<bucket_name>",
            "dynamodb_table": "<table_name>",
            "encrypt": true,
            "key": "<tf/path/to/terraform.tfstat>",
            "profile": "<dev/stage/prod/etc",
            "region": "<region_name>"
        },
        "hash": <some_numeric_hash>
    },
...
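
So the workaround boils down to this (a sketch; "poc" is the profile from the diff above):

# move the stale cached backend state aside and re-initialize
mv .terraform .terraform.bak
terraform init

After the fresh init, the profile key appeared in .terraform/terraform.tfstate as expected.
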
tkjef commented 6 years ago

Check to make sure your environment is using the correct AWS account. You may have multiple accounts in your .aws/credentials file.

rkt2spc commented 6 years ago

Can confirm that this behaviour is present in 0.11.7 and that adding profile (and region) to the backend config works.

dallasgraves commented 6 years ago

On 0.11.7, and it didn't work until my config block looked like this (the values are obvious placeholders):

  backend "s3" {
    shared_credentials_file = "/location/of/credentials/file"
    profile = "profile-name-in-credentials-file"
    bucket = "bucket-name"
    key = "whatever-your-keyname-is"
    region = "us-east-1"
    dynamodb_table = "terraform.state"
    encrypt = "true"
  }
}

omit "dynamodb_table" line if you're not using that integration in your backend solution

rimiti commented 5 years ago

I solved this issue by creating environment variables:

export AWS_DEFAULT_REGION=xxxxx
export AWS_ACCESS_KEY_ID=xxxxx
export AWS_SECRET_ACCESS_KEY=xxxxx

This error only appears when you try to init. If you want to plan or apply, you have to pass your credentials in as variables, like this:

terraform validate -var "aws_access_key=$AWS_ACCESS_KEY_ID" -var "aws_secret_key=$AWS_SECRET_ACCESS_KEY" -var "aws_region=$AWS_DEFAULT_REGION"

Note: Official documentation

Have a nice coding day! 🚀

sksinghvi commented 5 years ago

Setting AWS_PROFILE during init doesn't work for me. I get the following error:

Error configuring the backend "s3": NoCredentialProviders: no valid providers in chain. Deprecated.
    For verbose messaging see aws.Config.CredentialsChainVerboseErrors

I don't have a credentials file in my .aws folder. Instead I have an assumed role, something like this in my config file:

[profile okta-sandbox-proxy]
credential_process = aws-okta exec okta-sandbox -- cmd /c C:\tools\proxy-aws-okta-result.bat
region = us-west-2

[profile okta-sandbox]
aws_saml_url = home/amazon_aws/XXXXXXX/xxxx
role_arn = arn:aws:iam::XXXXXXXXXX:role/VA-Role-Devops-L3
region = us-west-2

Not sure what the alternative solution is. Can anybody suggest an alternative way to create the s3 backend until the main issue gets fixed?

sjpalf commented 5 years ago

@sksinghvi, I don't think terraform will use your .aws\config file, but you should be able to get your setup (assumed role) to work by using a credentials file in your .aws folder, something like:


[okta-sandbox-proxy]
aws_access_key_id = xxxxxxxxxxxxxxxxxxxx
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxx
region = us-west-2
[okta-sandbox]
role_arn = arn:aws:iam::XXXXXXXXXX:role/VA-Role-Devops-L3
source_profile = okta-sandbox-proxy
region = us-west-2

sksinghvi commented 5 years ago

@sjpalf: I don't have aws_access_* keys. One more thing I forgot to mention: if I don't create the backend and just use the default workspace and local state, it authenticates and works.

sksinghvi commented 5 years ago

Got this working by doing "aws-okta exec okta-sandbox -- terraform init". Thanks @sjpalf for looking into it.

pkoch commented 5 years ago

Ran into this; I had messed up my profile name in ~/.aws/credentials.

hashibot commented 5 years ago

Hello! 🤖

This issue relates to an older version of Terraform that is no longer in active development, and because the area of Terraform it relates to has changed significantly since the issue was opened we suspect that the issue is either fixed or that the circumstances around it have changed enough that we'd need an updated issue report in order to reproduce and address it.

If you're still seeing this or a similar issue in the latest version of Terraform, please do feel free to open a new bug report! Please be sure to include all of the information requested in the template, even if it might seem redundant with the information already shared in this issue, because the internal details relating to this problem are likely to be different in the current version of Terraform.

Thanks!

ghost commented 5 years ago

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.