Closed: Xerkus closed this issue 1 year ago
Hi,
I am trying to implement a similar thing in my project. Have you found a solution for it?
This module creates an S3 bucket and DynamoDB table that can be reused across all Terraform root modules (directories). I'm not sure I understand how it's currently constrained.
Are you referring to the local file that's created by this module?
Honestly, that file is more of an example backend file; it's confusing, and it could probably be turned into an output instead of a local file.
The generated backend file is a convenience and is deprecated. We are not going to enhance it.
We recommend you use the workspace_key_prefix = <root module name> setting to store the state for each root module in the same backend. You can add this manually to copies of the generated backend configuration file or write a script to do it.
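For example, a copy of the generated backend configuration for a hypothetical root module named networking might look roughly like this (bucket, table, and region values are placeholders, not what this module generates verbatim):

```hcl
# Hypothetical copy of the generated backend config for a "networking"
# root module; bucket, table, and region values are placeholders.
terraform {
  backend "s3" {
    region               = "us-east-2"
    bucket               = "example-terraform-state"
    key                  = "terraform.tfstate"
    dynamodb_table       = "example-terraform-state-lock"
    encrypt              = true
    workspace_key_prefix = "networking"
  }
}
```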
I'm unsure I understand how it's currently constrained.
Sorry, I missed your comment. It is not constrained; that was rather my lack of understanding of Terraform at the time. This module does not do everything I needed, but it also does not prevent adding that on top.
backend file that can probably be turned into an output instead of a local file

That was the approach I eventually took.
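A minimal sketch of that output-based approach, assuming a templatefile-rendered backend config (the template path and variable names here are illustrative, not this module's actual internals):

```hcl
# Sketch only: expose the rendered backend configuration as an output
# instead of writing it to a local file. Template path and variable
# names are illustrative.
output "terraform_backend_config" {
  description = "Rendered S3 backend configuration to copy into root modules"
  value = templatefile("${path.module}/templates/terraform.tf.tpl", {
    region         = var.region
    bucket         = local.bucket_name
    dynamodb_table = local.dynamodb_table_name
    encrypt        = true
  })
}
```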
We recommend you use the workspace_key_prefix = <root module name> setting to store the state for each root module in the same backend.
This does not apply to the default workspace and won't have any effect there. Dynamically changing the workspace prefix to switch between root modules is risky IMO, considering the state file key would be the same between prefixes. It also won't work well when the backend config is used across different repositories.
What I asked for in this issue and what I actually needed turned out to be somewhat different. It was definitely not just rendering another config file.
I needed consistent state file naming within the same bucket, and I needed to provide granular access to those state files to reflect different permission boundaries. E.g. a web service application module configuring its own ECR does not need access to the state of the module deploying Nomad, or to the state of another service.
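For illustration, granular access of that kind might be expressed with a per-prefix IAM policy roughly like the following (bucket name, ARNs, and the web-service prefix are hypothetical):

```hcl
# Hypothetical policy granting one service's pipeline access only to its
# own state objects in the shared bucket; names and ARNs are placeholders.
data "aws_iam_policy_document" "web_service_state" {
  statement {
    actions   = ["s3:GetObject", "s3:PutObject"]
    resources = ["arn:aws:s3:::example-terraform-state/web-service/*"]
  }

  statement {
    actions   = ["s3:ListBucket"]
    resources = ["arn:aws:s3:::example-terraform-state"]

    condition {
      test     = "StringLike"
      variable = "s3:prefix"
      values   = ["web-service/*"]
    }
  }
}
```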
To solve it I created a local module that:
@Nuru do you think this revised improvement would be in scope? Should I open a new issue and provide an initial submodule implementation? Since my country invaded its neighbors last year, I no longer manage any infrastructure and as such don't use Terraform. I would be dumping this on you to maintain without using it myself.
@Xerkus Thank you very, very much for your suggestion about using a different backend for each deployment. It has inspired conversation among our architecture team.
You may have misunderstood my suggestion about workspace_key_prefix. We recommend a separate workspace_key_prefix for each root module (what Cloud Posse calls "components"), and then a separate workspace under that prefix for each deployment of the component, never using the default workspace. So you might have workspace_key_prefix = "nomad_cluster" and then, under that one backend, have workspaces like xerkus-uw2-na and/or xerkus-uw2-na-client.
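As a sketch of that layout (bucket, table, and region are placeholders), the nomad_cluster root module's backend would look like this, with each deployment getting its own workspace, e.g. via `terraform workspace new xerkus-uw2-na`:

```hcl
# Sketch of the recommended layout; bucket, table, and region are placeholders.
# With this configuration, the workspace "xerkus-uw2-na" stores its state at
# s3://example-terraform-state/nomad_cluster/xerkus-uw2-na/terraform.tfstate
terraform {
  backend "s3" {
    region               = "us-west-2"
    bucket               = "example-terraform-state"
    key                  = "terraform.tfstate"
    dynamodb_table       = "example-terraform-state-lock"
    encrypt              = true
    workspace_key_prefix = "nomad_cluster"
  }
}
```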
We are considering your idea of dropping workspaces and instead using a separate backend for every deployment, each with its own key but all in the same S3 bucket. It does seem like it might make access control easier.
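A sketch of that alternative, using the same placeholder names: one backend block per deployment, each with a distinct key and no workspaces:

```hcl
# Hypothetical per-deployment backend: one block like this per deployment,
# each with its own key, all in the same bucket and lock table.
terraform {
  backend "s3" {
    region         = "us-west-2"
    bucket         = "example-terraform-state"
    key            = "nomad_cluster/xerkus-uw2-na/terraform.tfstate"
    dynamodb_table = "example-terraform-state-lock"
    encrypt        = true
  }
}
```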
However, in any case, this module, terraform-aws-tfstate-backend, is going to limit itself to deploying an S3 bucket and DynamoDB table (and possibly replicating them), and become agnostic about how you store state in the S3 bucket. We will not be adding anything like your proposal to this module.
Cloud Posse customers use Atmos to generate backend configurations, and you are welcome to use it, too (it is free and open source), or you can use a Terraform module as you have done. To the extent we want to adopt or support something like your proposal, we will do that by adding such capability to Atmos, so no need to do further work on this PR or to open a new one. We will take it from here. Thank you for offering.
@Xerkus I suppose you are talking about IAM roles for different slices of TF state. As @nitrocode mentioned, this module creates an S3 bucket and a DynamoDB table, which can be used in many different situations, including splitting TF state into different subfolders in the bucket. But having different IAM permissions for those S3 folders/subfolders is definitely not what this module does currently. @Nuru what do you think about this?
Describe the Feature
The Terraform S3 backend allows multiple state files to be stored in the same S3 bucket with the same DynamoDB table. I would like this module to provide a convenience feature that generates multiple Terraform backend config files at once, with different values for different slices of the infrastructure.
Expected Behavior
Accept a list of options for additional backend configurations, and render a backend config file for each one as an output and/or a local file.
Use Case
Hashicorp recommends splitting Terraform config into separate root modules to manage logically grouped slices of infrastructure independently. E.g. a slice managing infrastructure-wide concerns like networking and Vault and Consul clusters would be separate from the infrastructure for one application, which would in turn be separate from the infrastructure for another application.
For such slices of the infrastructure it would be preferable to use the same S3 bucket and lock table. I think it makes sense to manage the backends for those slices within the same module.
Describe Ideal Solution
An additional input for the module, probably looking something like this:
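For instance (a hypothetical shape; the variable name and attributes are illustrative only):

```hcl
# Hypothetical shape of the proposed input; the variable name and
# attributes are illustrative only.
variable "additional_backend_configs" {
  type = list(object({
    name = string # identifies the root module / rendered file
    key  = string # state file key within the shared bucket
  }))
  default     = []
  description = "Additional backend config files to render for other root modules"
}
```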
Alternatives Considered
A template file resource of my own, duplicating the behavior of "terraform_backend_config" in this module, could do the same.
Probably a better approach than the one I suggested would be to extract the backend config template into a submodule of this module, to allow independent backend file generation. That approach would take more effort, but I think it would also be better from a maintenance perspective.
Additional Context
Sample HCL for how this feature could be used:
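A hypothetical usage sketch: the additional_backend_configs attribute is the proposed input sketched above, and all values here are placeholders rather than the module's actual interface.

```hcl
# Hypothetical usage; additional_backend_configs is the proposed input
# above, and all values are placeholders.
module "terraform_state_backend" {
  source    = "cloudposse/tfstate-backend/aws"
  namespace = "example"
  stage     = "prod"
  name      = "terraform"

  additional_backend_configs = [
    { name = "networking", key = "networking/terraform.tfstate" },
    { name = "web-service", key = "web-service/terraform.tfstate" },
  ]
}
```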