glenjamin opened this issue 7 years ago
I am trying to do something like this and getting the same "configuration cannot contain interpolations" error. While it seems this is being worked on, I also wanted to ask whether this is the right way for me to use access and secret keys. Do they have to be placed here so that I don't have to check the access and secret keys into GitHub?
```hcl
terraform {
  backend "s3" {
    bucket     = "ops"
    key        = "terraform/state/ops-com"
    region     = "us-east-1"
    encrypt    = "true"
    access_key = "${var.aws_access_key}"
    secret_key = "${var.aws_secret_key}"
  }
}
```
I have the same problem, i.e. I would love to see interpolations in the backend config. Now that we have "environments" in Terraform, I was hoping to have a single config.tf with the backend configuration and use environments for my states. The problem is that I want to assume an AWS role based on the environment I'm deploying to. I can do this in `provider` blocks, since the provider block allows interpolations, so I can assume the relevant role for the environment I'm deploying to. However, if I also rely on the role being set for the backend state management (e.g. when running `terraform env select`), it doesn't work. Instead I have to use `role_arn` in the backend config, which can't contain the interpolation I need.
I managed to get it working by using AWS profiles instead of the access keys directly. What I did wasn't optimal: in my build steps, I ran a bash script that called `aws configure`, which ultimately set the default access key and secret.
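A minimal sketch of that kind of build step, assuming the credentials arrive as CI-provided environment variables (the variable names, file location, and `default` profile name are illustrative, not from the original comment):

```shell
# Materialize an AWS profile from CI-provided credentials so the backend
# block can say `profile = "default"` instead of embedding keys.
creds_file="${AWS_SHARED_CREDENTIALS_FILE:-$PWD/aws-credentials}"
cat > "$creds_file" <<EOF
[default]
aws_access_key_id = ${AWS_ACCESS_KEY_ID:-EXAMPLEKEYID}
aws_secret_access_key = ${AWS_SECRET_ACCESS_KEY:-EXAMPLESECRET}
EOF
# Point the AWS SDK/CLI (and hence Terraform's S3 backend) at this file.
export AWS_SHARED_CREDENTIALS_FILE="$creds_file"
```

The backend block then needs no secrets at all, only `profile = "default"`.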
We want to achieve something similar to @antonosmond. At the moment we use multiple environments (prod/stage) and want to upload tfstate files to S3.
## State Backend
```hcl
terraform {
  backend "s3" {
    bucket     = "mybucket"
    key        = "aws/${var.project}/${var.environment}"
    region     = "eu-central-1"
    profile    = "default"
    encrypt    = "true"
    lock_table = "terraform"
  }
}
```
In this case, the backend definition above leads us to the "configuration cannot contain interpolations" error.
Now if we try to hardcode it like this:
## State Backend
```hcl
terraform {
  backend "s3" {
    bucket     = "mybucket"
    key        = "aws/example/prod"
    region     = "eu-central-1"
    profile    = "default"
    encrypt    = "true"
    lock_table = "terraform"
  }
}
```
we get the following notification:
```
Do you want to copy only your current environment?
  The existing backend "local" supports environments and you currently are
  using more than one. The target backend "s3" doesn't support environments.
  If you continue, Terraform will offer to copy your current environment
  "prod" to the default environment in the target. Your existing environments
  in the source backend won't be modified. If you want to switch environments,
  back them up, or cancel altogether, answer "no" and Terraform will abort.
```
Is there a workaround for this problem at the moment? The documentation for backend configuration does not cover working with environments.
Solved: it seems my local test environment was still running Terraform 0.9.1; after updating to the latest version (0.9.2) it was working for me.
```
Do you want to migrate all environments to "s3"?
  Both the existing backend "local" and the target backend "s3" support
  environments. When migrating between backends, Terraform will copy all
  environments (with the same names). THIS WILL OVERWRITE any conflicting
  states in the destination.

  Terraform initialization doesn't currently migrate only select environments.
  If you want to migrate a select number of environments, you must manually
  pull and push those states.

  If you answer "yes", Terraform will migrate all states. If you answer
  "no", Terraform will abort.
```
Hi, I'm trying to do the same as @NickMetz; I'm running Terraform 0.9.3:
```
$ terraform version
Terraform v0.9.3
```
This is my code
```hcl
terraform {
  backend "s3" {
    bucket = "tstbckt27"
    key    = "/${var.env}/t1/terraform.tfstate"
    region = "us-east-1"
  }
}
```
This is the message when I try to run terraform init
```
$ terraform init
Initializing the backend...
Error loading backend config: 1 error(s) occurred:

* terraform.backend: configuration cannot contain interpolations

The backend configuration is loaded by Terraform extremely early, before
the core of Terraform can be initialized. This is necessary because the backend
dictates the behavior of that core. The core is what handles interpolation
processing. Because of this, interpolations cannot be used in backend
configuration.

If you'd like to parameterize backend configuration, we recommend using
partial configuration with the "-backend-config" flag to "terraform init".
```
Is this expected behaviour on v0.9.3?
Are there any workarounds for this?
In case it's helpful to anyone, the way I get around this is as follows:
```hcl
terraform {
  backend "s3" {}
}

data "terraform_remote_state" "state" {
  backend = "s3"
  config {
    bucket     = "${var.tf_state_bucket}"
    lock_table = "${var.tf_state_table}"
    region     = "${var.region}"
    key        = "${var.application}/${var.environment}"
  }
}
```
All of the relevant variables are exported at the deployment pipeline level for me, so it's easy to init with the correct information for each environment.
```shell
terraform init \
  -backend-config "bucket=$TF_VAR_tf_state_bucket" \
  -backend-config "lock_table=$TF_VAR_tf_state_table" \
  -backend-config "region=$TF_VAR_region" \
  -backend-config "key=$TF_VAR_application/$TF_VAR_environment"
```
I don't find this ideal, but at least I can easily switch between environments and create new environments without having to edit any terraform.
@gsirvas @umeat To achieve multiple environments with the same backend configuration, it is not necessary to use variables/interpolation. It is expected that using variables/interpolation in backend configuration is not possible; see the comment from @christofferh.
Just write it like this:
```hcl
terraform {
  backend "s3" {
    bucket = "tstbckt27"
    key    = "project/terraform/terraform.tfstate"
    region = "us-east-1"
  }
}
```
Terraform will split and store environment state files in a path like this:
`env:/${var.env}/project/terraform/terraform.tfstate`
@NickMetz they're trying to do multiple environments with multiple backend buckets, not a single backend. You can't specify a different backend bucket in Terraform environments. In my example you could still use Terraform environments to prefix the state file object name, but you get to specify different buckets for the backend.
Perhaps it's better to just give cross-account access to the user/role which is being used to deploy your Terraform: deploying your Terraform to a different account, but using the same backend bucket. Though it's fairly reasonable to want to store the state of an environment in the same account that it's deployed to.
@umeat in that case you are right; it is not possible at the moment to use different backends for each environment. It would be more comfortable to have a backend mapping for all environments, which is not implemented yet.
Perhaps a middle ground would be to not error out on interpolation when the variable was declared in the environment as `TF_VAR_foo`? Though this might require making such variables immutable? (Which is fine for my use case; not sure about others.)
I also would like to be able to use interpolation in my backend config. Using v0.9.4, I can confirm this frustrating point still exists. In my use case I need to reuse the same piece of code (without writing a new repo each time I want to consume it as a module) to maintain multiple separate statefiles.
Same thing for me. I am using Terraform v0.9.4.
```hcl
provider "aws" {
  region = "${var.region}"
}

terraform {
  backend "${var.tf_state_backend}" {
    bucket = "${var.tf_state_backend_bucket}"
    key    = "${var.tf_state_backend_bucket}/terraform.tfstate"
    region = "${var.s3_location_region}"
  }
}
```
Here is the error output of `terraform validate`:
```
Error validating: 1 error(s) occurred:

* terraform.backend: configuration cannot contain interpolations

The backend configuration is loaded by Terraform extremely early, before
the core of Terraform can be initialized. This is necessary because the backend
dictates the behavior of that core. The core is what handles interpolation
processing. Because of this, interpolations cannot be used in backend
configuration.

If you'd like to parameterize backend configuration, we recommend using
partial configuration with the "-backend-config" flag to "terraform init".
```
I needs dis! For many features being developed, we want our devs to spin up their own infrastructure that will persist only for the length of time their feature branch exists... to me, the best way to do that would be to use the name of the branch to create the key for the path used to store the tfstate (we're using Amazon infrastructure, so in our case an S3 bucket, like the examples above).
I've knocked up a bash script which will update `TF_VAR_git_branch` every time a new command is run from an interactive bash session.
This chunk of code would be so beautiful if it worked:
```hcl
terraform {
  backend "s3" {
    key = "project-name-${var.git_branch}.tfstate"
    ...
  }
}
```
Every branch gets its own infrastructure, and you have to switch to master to operate on production. Switching which infrastructure you're operating against could be as easy as checking out a different git branch. Ideally it'd be set up so everything named "project-name-master" would have different permissions that prevented any old dev from applying to it. It would be an infrastructure-as-code dream to get this working.
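Since `${var.git_branch}` is rejected inside the backend block, the branch-per-state idea can still be sketched with a small wrapper that computes the key outside Terraform (the `project-name` key pattern and the `master` fallback are illustrative):

```shell
# Derive the current branch; fall back to "master" outside a git checkout.
branch=$(git rev-parse --abbrev-ref HEAD 2>/dev/null || echo master)
export TF_VAR_git_branch="$branch"

# Compute the per-branch state key and feed it to init via partial config.
state_key="project-name-${branch}.tfstate"
# terraform init -backend-config="key=${state_key}"   # real invocation
echo "$state_key"
```

Switching infrastructure is then just checking out a different branch and re-running the wrapper.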
@NickMetz said...

> Terraform will split and store environment state files in a path like this:
> `env:/${var.env}/project/terraform/terraform.tfstate`
Your top-level structure looks nice and tidy for traditional dev/staging/prod ... sure:
```
env:/prod/project1/terraform/terraform.tfstate
env:/prod/project2/terraform/terraform.tfstate
env:/staging/project1/terraform/terraform.tfstate
env:/staging/project2/terraform/terraform.tfstate
env:/dev/project1/terraform/terraform.tfstate
env:/dev/project2/terraform/terraform.tfstate
```
But what if you want to stand up a whole environment for project-specific features being developed in parallel? You'll have a top-level key for each story branch, regardless of which project that story branch is in...
```
env:/prod/project1/terraform/terraform.tfstate
env:/prod/project2/terraform/terraform.tfstate
env:/staging/project1/terraform/terraform.tfstate
env:/staging/project2/terraform/terraform.tfstate
env:/story1/project1/terraform/terraform.tfstate
env:/story2/project2/terraform/terraform.tfstate
env:/story3/project2/terraform/terraform.tfstate
env:/story4/project1/terraform/terraform.tfstate
env:/story5/project1/terraform/terraform.tfstate
```
It makes for a mess at the top-level of the directory structure, and inconsistency in what you find inside each story-level dir structure. Full control over the paths is ideal, and we can only get that through interpolation.
Ideally I'd want my structure to look like `project/${var.git_branch}/terraform.tfstate`, yielding:

```
project1/master/terraform.tfstate
project1/stage/terraform.tfstate
project1/story1/terraform.tfstate
project1/story4/terraform.tfstate
project1/story5/terraform.tfstate
project2/master/terraform.tfstate
project2/stage/terraform.tfstate
project2/story2/terraform.tfstate
project2/story3/terraform.tfstate
```
Now everything you find for a given project is under its directory. But so long as the env is hard-coded at the beginning of the remote tfstate path, you lose this flexibility.
Microservices are better versioned and managed discretely per component, rather than dumped into common prod/staging/dev categories, which might be less applicable on a per-microservice basis: each one might have a different workflow, with different numbers of staging phases leading to production release. In the example above, project1 might not even have staging, and project2 might have unit/regression/load-testing/staging phases leading to production release.
You'd think at the very least you'd be allowed to use `${terraform.env}`...
In Terraform 0.10 there will be a new setting `workspace_key_prefix` on the S3 backend to customize the prefix used for separate environments (now called "workspaces"), overriding this `env:` convention.
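For illustration, once that setting is available the backend might look like this (the bucket, key, and prefix values here are made up):

```hcl
terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "project/terraform.tfstate"
    region = "us-east-1"
    # Replaces the default "env:" prefix, so non-default workspace states
    # land under workspaces/<workspace-name>/project/terraform.tfstate
    workspace_key_prefix = "workspaces"
  }
}
```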
I know a +1 does not add much but yeah, need this too to have 2 different buckets, since we have 2 AWS accounts.
I was hoping to do the same thing as described in #13603 but the lack of interpolation in the terraform block prevents this.
+1
I think this would also be useful for https://github.com/hashicorp/terraform/issues/18632
Specifically, following the structure:
```
environments/
|-- dev/                  # dev configuration
|   |-- dev.tf
|   |-- secret.auto.tfvars
|-- prod/                 # prod configuration
|   |-- prod.tf
|   |-- secret.auto.tfvars
resources/                # shared module for elements common to all environments
|-- main.tf
```
If I have a `secret.auto.tfvars` file in both dev and prod with different credentials, they don't actually get used for the init, and my `~/.aws/credentials` file appears to be used instead; I think this can catch you out easily (you can run the commands in an account you didn't intend to).
+1
Same issue with etcd:
```
Initializing the backend...
Error loading backend config: 1 error(s) occurred:

* terraform.backend: configuration cannot contain interpolations

The backend configuration is loaded by Terraform extremely early, before
the core of Terraform can be initialized. This is necessary because the backend
dictates the behavior of that core. The core is what handles interpolation
processing. Because of this, interpolations cannot be used in backend
configuration.

If you'd like to parameterize backend configuration, we recommend using
partial configuration with the "-backend-config" flag to "terraform init".

ERROR: Job failed: exit code 1
```
Why must I hard code these values... WHY?!?!?!
Facing the same issue even on version 0.11.10: `terraform.backend: configuration cannot contain interpolations`. It doesn't seem a good option to specify creds twice, once in variables and again in config. Can we get any update on this, as it has been open for almost a year?
I used workspaces to create my dev and prod environments. Now I need to store state for them in different aws accounts. What kind of workaround do you recommend? I just need to pass one variable to my backend config... somehow...
The best workaround for this I've ended up with takes advantage of the fact that whatever you pass into `init` gets remembered by Terraform. So instead of `terraform init`, I use a small wrapper script which grabs these variables from somewhere (like a .tf.json file used elsewhere, or an environment variable perhaps) and then does the call to `init` along with the correct `-backend-config` flags.
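A minimal sketch of such a wrapper, assuming the backend settings arrive as environment variables (all names and defaults here are illustrative):

```shell
#!/bin/sh
# Gather backend settings from the environment, then hand them to init via
# partial configuration. `echo` keeps this sketch runnable without Terraform
# installed; drop it to execute for real.
set -eu
STATE_BUCKET="${STATE_BUCKET:-my-state-bucket}"
STATE_KEY="${STATE_KEY:-app/terraform.tfstate}"
STATE_REGION="${STATE_REGION:-us-east-1}"
echo terraform init \
  -backend-config="bucket=${STATE_BUCKET}" \
  -backend-config="key=${STATE_KEY}" \
  -backend-config="region=${STATE_REGION}"
```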
No value within the terraform block can use interpolations.
So sad
As @glenjamin said, while interpolation from Terraform variables isn't possible, this is supported "outside" of the Terraform configuration proper by using partial configuration of a backend and arguments to `terraform init`.
As an example, you can have a configuration containing
```hcl
terraform {
  backend "s3" {}
}
```
and providing your configuration as command-line flags to init:
```shell
terraform init \
  -backend-config="bucket=MyBucket" \
  -backend-config="region=us-east-1" \
  -backend-config="key=some/key"
```
It's also possible to use a simplified HCL configuration file to provide this data, i.e. having a file `myconfig.hcl`:
```hcl
bucket = "MyBucket"
region = "us-east-1"
key    = "some/key"
```
and then running `terraform init -backend-config=myconfig.hcl`.
If you're running `terraform init` from a shell or a shell-based wrapper, it's also possible to set variables via `TF_VAR_` environment variables and then also reuse those for the backend configuration. I.e. if I had some external wrapper that exported `TF_VAR_bucket=MyBucket` and `TF_VAR_environment=preproduction`, I could run init via a wrapper like:
```shell
terraform init \
  -backend-config="bucket=$TF_VAR_bucket" \
  -backend-config="region=us-east-1" \
  -backend-config="key=terraform/applications/hello-world/${TF_VAR_environment}"
```
@jantman I understand your comment, and this is what we do on our side: we wrap execution of Terraform with another layer of scripts. But it would be nicer if this worked out of the box, so that we provide only environment vars and then execute `init`, `plan`, etc. without thinking about the partial backend setup parameters from the CLI. Thus, I still think there is a place for a nicer framework solution, although a workaround exists.
Facing the same issue on version 0.11.11: `terraform.backend: configuration cannot contain interpolations`. Can't use variables in the terraform backend config block.
I just found myself reading this issue thread here after trying the following config:
```hcl
backend "local" {
  path = "${path.module}/terraform.tfstate.d/${terraform.workspace}/network.tfstate"
}
```
My goal is to use different state files to manage networking, security and app infrastructure.
The workaround with `-backend-config` passed to `init` would perhaps work locally, but is it safe and convenient to use in a team? I'm afraid of human error being an aftermath of using this workaround.
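One way to reduce that risk is to commit both the per-environment backend settings and a tiny init wrapper, so nobody on the team types the values by hand (the script name, argument convention, and `backend/<env>.hcl` layout are illustrative):

```shell
#!/bin/sh
# init.sh <env> -- everyone runs init the same way, pointing at a committed
# backend config file instead of hand-typed -backend-config flags.
set -eu
env_name="${1:-dev}"
cfg="backend/${env_name}.hcl"
# echo keeps this sketch runnable without Terraform; drop it to run for real.
# -reconfigure avoids accidentally reusing a previously-initialized backend.
echo terraform init -backend-config="$cfg" -reconfigure
```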
Also would find it super useful if the `var` data was loaded before the backend started initializing. I would like to leverage different keys for state storage, but this makes it a lot more difficult. My config looks like:
```hcl
terraform {
  backend "s3" {
    bucket   = "xyz-${var.client}"
    key      = "tfstate"
    region   = "ap-southeast-2"
    role_arn = "${var.workspace_iams[terraform.workspace]}"
  }
}
```
Obviously it doesn't work - please load var definitions, it would be super helpful.
> The best workaround for this I've ended up with takes advantage of the fact that whatever you pass into `init` gets remembered by Terraform. So instead of `terraform init`, I use a small wrapper script which grabs these variables from somewhere (like a .tf.json file used elsewhere, or an environment variable perhaps) and then does the call to `init` along with the correct `-backend-config` flags.
We're in a similar situation; in our case, we want `role_arn` to be a variable, because different people call Terraform with different roles (because different people have different privs). In our wrapper scripts right now, we have to pass different arguments to `terraform init` than to e.g. `terraform plan`, because it's `-backend-config` for one and `-var` for the other.
Not a showstopper, but annoying; it makes the wrappers more complicated and less readable and maintainable.
In my CI/CD pipeline (GitLab), I use `envsubst` (from the `gettext` package):

```shell
envsubst < ./terraform_init_template.sh > ./terraform_init.sh
```
This injects the environment variables.
Hi All,
I used a workaround for this and created a folder structure as below:

```
Terraform
|-- VM                 # module called by main.tf to create a VM
|   |-- vm.tf
|   |-- variable.tf
|-- dev
|   |-- backend.tfvars
|-- test
|   |-- backend.tfvars
|-- uat
|   |-- backend.tfvars
|-- backend.tf         # normal backend file in the root folder; its values are
|                      # overwritten by the environment-specific backend.tfvars
|                      # while running terraform init
|-- main.tf
|-- variable.tf
```
I used the following command to run the terraform init:

`terraform init -backend-config=./`

This has helped me create environment-specific tfstate files remotely. Hope this is helpful. Thanks
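For illustration, one of those environment files might look like this (all values are made up):

```hcl
# dev/backend.tfvars -- supplies the values the root backend.tf leaves blank
bucket = "mycompany-tfstate-dev"
key    = "vm/terraform.tfstate"
region = "us-east-1"
```

Pointing `-backend-config` at the relevant environment's file at init time then selects the matching remote state.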
Using `-backend-config` didn't work for my terraform remote block, so I ended up using `sed` with Terraform's `_override.tf` behaviour.
`server.tf`:

```hcl
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    token        = "##TF_VAR_terra_token##"
    organization = "myorg"
    workspaces {
      name = "myworkspace"
    }
  }
}
```
Then replace the vars and run `terraform init`:

```shell
sed "s/##TF_VAR_terra_token##/$TF_VAR_terra_token/g" server.tf > server_override.tf
terraform init
```
So I understand that the backend dictates the behavior of the core, and that's why the backend is loaded before the core. In theory the interpolation behavior could be different for stuff like data resources. But since vars are considered static, maybe some limited interpolation could be added just for vars in the backend? That would buy us some more breathing room than the current CLI workarounds.
Jeez this product is loaded with hundreds of workarounds.
Properly modularizing Terraform adoption and making it as easy as possible for my developers to use would be a lot easier if I could just read the .tfvars file for variables, so that I'm not asking people to dive into the .tf files. I'm absolutely baffled by why it was decided to forgo variable interpolation from .tfvars files in the init blocks.
Just ran into this today. We've got a fairly complex provider+backend config for all our AWS environments; having variables in these config sections would allow us to have one shared set of code to deal with it. As things stand, we've got to duplicate that code everywhere, and then update it everywhere any time we make minor changes.
This feels like a really significant design flaw which needs to be corrected; doubly so if there's anything else that prevents us from putting all this config in a module and just loading that module (which doesn't seem possible now because of how provider module inheritance works).
This is an example why infrastructure-as-code should be in a real programming language.
@richardARPANET there is `boto3` for AWS or `azure-sdk-for-python`, but anyway it would be nice to have parametrized backend configuration :)
Can't use variables in backend config; this is a real pain. I have multiple components and environments, with teams working within their workspaces. I can't use `${terraform.workspace}` within the key. It's really unpleasant to hard-code config values; we lose the flexibility of having infrastructure as code.
> If you'd like to parameterize backend configuration, we recommend using
> partial configuration with the "-backend-config" flag to "terraform init".
I know there may be some sort of workaround, but it's a pain.
@harbindersingh have you looked at overrides?
Since I've started using environments, I'm having the same issue, and with v0.12.6 it says:
```
Error: Variables not allowed

  on state.tf line 4, in terraform:
   4:   bucket = var.be_state_bucket

Variables may not be used here.
```
Can someone put together the steps for the sensible workaround for the time being, please?
This feature is really required; we're losing flexibility here :(
Is there any answer for the above? I am also facing the same interpolation issue; I need to pass variables to the S3 backend. Any help?
I'm working around this issue with the Terraform option `-backend-config=backends/my-env.tf`, where `backends/my-env.tf` is like:
```hcl
bucket         = "my-s3-remote-state-bucket"
key            = "terraform"
region         = "<region>"
dynamodb_table = "myproj-tfstate-lock"
```
and my backend declaration looks like:

```hcl
terraform {
  backend "s3" {}
}
```
## Terraform Version

v0.9.0

## Affected Resource(s)

terraform backend config

## Terraform Configuration Files

## Expected Behavior

Variables are used to configure the backend.

## Actual Behavior

## Steps to Reproduce

`terraform apply`

## Important Factoids

I wanted to extract these to variables because I'm using the same values in a few places, including in the provider config, where they work fine.