jamisonhyatt opened 7 years ago
would love defaults :)
terraform {
  backend "s3" {
    bucket         = "tf-remote-state"
    key            = "terraform.tfstate"
    region         = "ap-southeast-1"
    dynamodb_table = "tf-locks"
  }
}
data "terraform_remote_state" "shared" {
  backend     = "s3"
  environment = "shared"

  config {
    bucket = "tf-remote-state"
    key    = "terraform.tfstate"
    region = "ap-southeast-1"
  }
}
is a bit repetitive
In my case, I would like a data source that uses the same backend configuration, but reads a different workspace. Being able to do something like:
data "terraform_remote_state" "other" {
  workspace = "other"
}
would be great.
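For contrast, here is a minimal sketch of what the same lookup requires today; every value in the `config` block simply repeats the backend block (the bucket/key/region values are placeholders matching the example at the top of this issue):

```hcl
# Today: the backend settings must be restated in full just to point
# at a different workspace (placeholder values shown).
data "terraform_remote_state" "other" {
  backend   = "s3"
  workspace = "other"

  config = {
    bucket = "tf-remote-state"   # duplicated from the backend block
    key    = "terraform.tfstate" # duplicated from the backend block
    region = "ap-southeast-1"    # duplicated from the backend block
  }
}
```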
I ran into a similar issue. In my case I'd like the terraform_remote_state data source to default to the remote backend's configured organization, but there doesn't seem to be a way to expose the remote backend's organization to the terraform_remote_state block, akin to terraform.workspace.
terraform {
  backend "remote" {}
}
data "terraform_remote_state" "main" {
  backend = "remote"

  config = {
    #organization = "..."
    workspaces = {
      name = "main-${terraform.workspace}"
    }
  }
}
Results in:
The given configuration is not valid for backend "remote": attribute "organization" is required.
Edit: Perhaps a generic backend.config.... lookup would be effective here.
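As a purely hypothetical illustration of that idea (no such `backend.config` symbol exists in Terraform today; this only sketches what the suggested lookup might look like):

```hcl
# Hypothetical: "backend.config" is not a real Terraform symbol.
# This sketches the suggested lookup against the current backend's
# own configuration, avoiding the hardcoded organization.
data "terraform_remote_state" "main" {
  backend = "remote"

  config = {
    organization = backend.config.organization # hypothetical lookup

    workspaces = {
      name = "main-${terraform.workspace}"
    }
  }
}
```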
hi, i would very much have this too :)
I've built a module that can do this: Invicton-Labs/backend-config/null
It retrieves all of the backend config values, and can optionally use them to retrieve the remote state (for the exact same config as the backend). You should be able to do something similar with it (get all of the backend config values using this module) and then modify only certain fields. Something like:
module "backend_config" {
  source = "Invicton-Labs/backend-config/null"
}
data "terraform_remote_state" "state" {
  backend   = module.backend_config.backend.type
  config    = module.backend_config.backend.config
  workspace = "myworkspace"
}
Or, if you want to use a different bucket but keep everything else the same:
data "terraform_remote_state" "state" {
  backend = module.backend_config.backend.type
  config = merge(module.backend_config.backend.config, {
    bucket = "mybucket" // due to the merge, this will overwrite the value from the backend
  })
  workspace = module.backend_config.workspace
}
The process of using the S3 backend with environments, for deployments referencing other remote state, leaves something to be desired. You end up configuring the backend twice for what seems like an obvious use case. I admit I could be mistaken about this being an obvious use case.
The use case I have in mind is one, or a few, buckets for an organization dedicated to Terraform state storage, let's say for a non-production AWS account.
If my ECS-Cluster Terraform needs to import lambda_SNS.tfstate in its respective environment, I effectively have to configure the backend twice, in two different ways. One requires configuration via tfvars, the other through -backend-config...for the identical values. I think this is unnecessary for the likely use case, which is someone importing the state file that matches their current Terraform environment. It also seems to me that S3 state storage is likely consolidated around a few purpose-driven buckets, rather than many buckets. Having a separate bucket for every single Terraform deployment doesn't seem natural.
I think terraform_remote_state could be simplified by having environment and bucket default to the current backend config values for those items. I am not advocating for the removal of environment and bucket, as it's important to be able to import backends from anywhere. This provides simplicity for what seems to me like a normal use case.
Say we have a file backends/dev-qa.tf with the following contents. Then the backend configuration & terraform_remote_state would look like:
Running the following commands would allow both our backend and terraform_remote_state to respect the appropriate bucket and env configuration, without additional variable configurations duplicating the same values ("my-infra-tfstate" and "dev-vpc2").
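The file contents and commands referenced above are not preserved in this copy of the thread; a plausible shape, assuming the "my-infra-tfstate" bucket and "dev-vpc2" environment names quoted above, might be the following (a hypothetical reconstruction, not the original comment's exact commands):

```shell
# Hypothetical reconstruction -- the original snippet is missing here.
# backends/dev-qa.tf would hold the shared backend settings, e.g.:
#
#   bucket = "my-infra-tfstate"
#   region = "ap-southeast-1"
#
# Partial backend configuration is then supplied once at init time:
terraform init -backend-config=backends/dev-qa.tf

# and the environment/workspace is selected rather than re-declared:
terraform workspace select dev-vpc2
terraform plan
```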