cloudposse / terraform-aws-tfstate-backend

Terraform module that provisions an S3 bucket to store the `terraform.tfstate` file and a DynamoDB table to lock the state file, preventing concurrent modifications and state corruption.
https://cloudposse.com/accelerate
Apache License 2.0

Add support for multiple terraform backend config files #90

Closed Xerkus closed 1 year ago

Xerkus commented 3 years ago

Describe the Feature

The Terraform S3 backend allows multiple state files to be stored in the same S3 bucket and locked with the same DynamoDB table. I would like this module to provide a convenience feature that generates multiple Terraform backend config files at once, with different values for different slices of the infrastructure.

Expected Behavior

Accept a list of options for additional backend config files, and render those config files as outputs and/or local files.

Use Case

Hashicorp recommends splitting Terraform config into separate root modules to manage logically grouped slices of infrastructure independently. E.g. a slice managing infrastructure-wide concerns like networking, Vault, and Consul clusters would be separate from the infrastructure for one application, which in turn would be separate from the infrastructure for another application.

For such slices of the infrastructure it would be preferable to use the same S3 bucket and lock table. I think it makes sense to manage the backends for those slices within the same module.
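For illustration (bucket, table, and key names below are placeholders of mine, not produced by this module), two such slices would share the bucket and lock table and differ only in the state file key:

```terraform
# networking/backend.tf -- the infrastructure-wide slice
terraform {
  backend "s3" {
    region         = "us-west-2"
    bucket         = "example-tfstate-bucket"
    key            = "networking.tfstate"
    dynamodb_table = "example-tfstate-lock"
    encrypt        = true
  }
}
# an application slice would use the same bucket and table,
# differing only in e.g. key = "app-one.tfstate"
```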

Describe Ideal Solution

An additional input for the module that would probably look something like this:

 terraform_backend_extra_configs = [
  {
    # required. Can uniqueness be validated between all values?
    # using context for the default key value probably better not to be supported 
    terraform_state_file = "alternate.tfstate"

    # terraform version, region, bucket, dynamodb and encrypt values are same as for "terraform_backend_config"

    # controls local file output, creates file if path not empty
    terraform_backend_config_file_path = "../alternate-path"
    terraform_backend_config_file_name = "backend.tf"

    # omitted values should default to vars used by current "terraform_backend_config" template
    # role_arn = ""
    # profile = ""
    # namespace = ""
    # stage = ""
    # environment = ""
    # name = ""

    # optionally specify namespace, stage, environment and name via context.
    context = module.alternate_backend_label.context
  }
]

Alternatives Considered

A template file resource of my own that duplicates the behavior of "terraform_backend_config" in this module could do the same.
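As a rough sketch of that alternative (resource name, path, and values are illustrative and not part of this module; the module output names follow the ones used elsewhere in this thread), it could be as simple as rendering the backend block with a local_file resource:

```terraform
# Render an extra backend file outside the module, reusing its outputs.
resource "local_file" "alternate_backend" {
  filename = "../alternate-path/backend.tf"
  content  = <<-EOT
    terraform {
      backend "s3" {
        region         = "us-west-2"
        bucket         = "${module.terraform_state_backend.s3_bucket_id}"
        key            = "alternate.tfstate"
        dynamodb_table = "${module.terraform_state_backend.dynamodb_table_name}"
        encrypt        = true
      }
    }
  EOT
}
```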

Probably a better approach than the one I suggested would be to extract the backend config template into a submodule of this module, allowing independent backend file generation. That approach would take more effort, but I think it would also be better from a maintenance perspective.

Additional Context

Sample HCL for how this feature could be used:

module "terraform_state_backend" {
  source = "cloudposse/tfstate-backend/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version     = "x.x.x"
  context = module.this.context

  terraform_backend_config_file_path = "."
  terraform_backend_config_file_name = "backend.tf"
  force_destroy                      = false

  terraform_backend_extra_configs = [
    {
      # required. Can uniqueness be validated between all values?
      terraform_state_file = "${module.eg_app_dev_tfstate_backend_label.id}.tfstate"

      # terraform version, region, bucket, dynamodb and encrypt values are same as for "terraform_backend_config"

      # controls local file output, creates file if path not empty
      terraform_backend_config_file_path = "../app/dev"
      terraform_backend_config_file_name = "backend.tf"

      # omitted values default to vars used by current "terraform_backend_config" template
      # role_arn = ""
      # profile = ""
      # namespace = ""
      # stage = ""
      # environment = ""
      # name = ""
      role_arn = aws_iam_role.eg_app_dev_backend.arn

      # optionally specify namespace, stage, environment and name via context?
      context = module.eg_app_dev_backend_label.context
    }
  ]
}

module "eg_app_dev_backend_label" {
  source  = "cloudposse/label/null"
  # version     = "x.x.x"

  environment = "dev"

  context = module.this.context
}

module "eg_app_dev_tfstate_backend_label" {
  source  = "cloudposse/label/null"
  # version     = "x.x.x"

  delimiter = "/"

  context = module.eg_app_dev_backend_label.context
}

resource "aws_iam_role" "eg_app_dev_backend" {
  assume_role_policy = ""
}

resource "aws_iam_policy" "eg_app_dev_backend" {
  name        = module.eg_app_dev_backend_label.id
  description = "Grants access to Terraform S3 backend store bucket and DynamoDB locking table"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = "s3:ListBucket",
        Resource = module.terraform_state_backend.s3_bucket_arn
      },
      {
        Effect   = "Allow"
        Action   = ["s3:GetObject", "s3:PutObject"]
        Resource = "${module.terraform_state_backend.s3_bucket_arn}/${module.eg_app_dev_tfstate_backend_label.id}.tfstate"
      },
      {
        Effect = "Allow"
        Action = [
          "dynamodb:GetItem",
          "dynamodb:PutItem",
          "dynamodb:DeleteItem"
        ]
        Resource = module.terraform_state_backend.dynamodb_table_arn
      },
    ]
  })
  tags = module.eg_app_dev_backend_label.tags
}

resource "aws_iam_role_policy_attachment" "eg_app_dev_backend" {
  policy_arn = aws_iam_policy.eg_app_dev_backend.arn
  role = aws_iam_role.eg_app_dev_backend.id
}
SwapnaP83 commented 2 years ago

Hi,

I am trying to implement a similar thing in my project. Have you got any solution for it?

nitrocode commented 2 years ago

This module creates an S3 bucket and DynamoDB table that can be reused across all Terraform root modules (directories). I'm not sure I understand how it's currently constrained.

Are you referring to the local file that's created by this module?

https://github.com/cloudposse/terraform-aws-tfstate-backend/blob/107da1504b7e7fd32a536cfae59602d67d654b39/main.tf#L275-L280

Honestly, this file is more of an example backend file; it could probably be turned into an output instead of a local file, because it's confusing.
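A minimal sketch of that idea (assuming the rendered config lives in a local such as `local.terraform_backend_config`, which is an assumption on my part, not a confirmed internal of this module):

```terraform
# Hypothetical: expose the rendered backend config instead of writing it to disk
output "terraform_backend_config" {
  description = "Rendered example backend configuration for this bucket and lock table"
  value       = local.terraform_backend_config # assumed local holding the rendered template
}
```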

Nuru commented 1 year ago

The generated backend file is a convenience and is deprecated. We are not going to enhance it.

We recommend you use the workspace_key_prefix=<root module name> setting to store the state for each root module in the same backend. You can add this manually to copies of the generated backend configuration file or write a script to do it.
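For example (values here are placeholders, not generated by the module), a copy of the backend configuration for a "networking" root module might add the prefix like this:

```terraform
terraform {
  backend "s3" {
    region               = "us-west-2"
    bucket               = "example-tfstate-bucket"
    key                  = "terraform.tfstate"
    workspace_key_prefix = "networking" # <root module name>
    dynamodb_table       = "example-tfstate-lock"
    encrypt              = true
  }
}
```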

Xerkus commented 1 year ago

I'm unsure I understand how it's currently constrained.

Sorry, I missed your comment. It is not constrained; that was rather my lack of understanding of Terraform at the time. This module does not do everything I needed, but it also does not prevent adding that on top.

backend file that can probably be turned into an output instead of a local file

That was the approach I took eventually.

We recommend you use the workspace_key_prefix=<root module name> setting to store the state for each root module in the same backend.

This does not apply to the default workspace and has no effect there. Dynamically changing the workspace prefix to switch between root modules is risky IMO, considering the state file key would be the same across prefixes. It also won't work too well when the backend config is used in different repositories.
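To spell out why (this is the standard S3 backend key layout, not something this module controls): with `key = "terraform.tfstate"` and `workspace_key_prefix = "networking"`, state ends up at:

```terraform
# default workspace:  terraform.tfstate                   (workspace_key_prefix is ignored)
# workspace "dev":    networking/dev/terraform.tfstate
# workspace "prod":   networking/prod/terraform.tfstate
```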


What I wanted in this issue and what I really wanted turned out to be somewhat different. It was definitely not just rendering another config file.

I needed consistent state file naming to use within the same bucket, and I needed to provide granular access to those state files to reflect different permission boundaries. E.g. a web service application module configuring its own ECR does not need access to the state of the module deploying Nomad, or to the state of other services.

To solve it I created a local module that generates a backend config file and IAM policies/role scoped to a single state file (see the excerpt below):

Excerpt from tfstate submodule PoC

This is a local PoC module I used to achieve what I need. It turned out a bit too fine-grained but worked pretty well.

```terraform
// modules/tfstate-backend-s3-extra/main.tf
locals {
  terraform_backend_config_template = coalesce(
    var.terraform_backend_config_file_template,
    "${path.module}/templates/terraform.tf.tpl"
  )
  terraform_state_file = coalesce(var.terraform_state_file, "${module.tfstate_key_label.id_full}.tfstate")
  terraform_backend_config = templatefile(
    local.terraform_backend_config_template,
    {
      region         = var.s3_bucket_region
      bucket         = var.s3_bucket_name
      dynamodb_table = var.dynamodb_table
      encrypt        = var.encrypt
      # for now one of the two must be set
      role_arn             = coalesce(var.role_arn, one(aws_iam_role.terraform_backend[*].arn))
      profile              = var.profile
      terraform_state_file = local.terraform_state_file
      namespace            = module.this.namespace
      environment          = module.this.environment
      stage                = module.this.stage
      name                 = module.this.name
    }
  )
}

module "tfstate_key_label" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  delimiter = "/"

  context = module.this.context
}

data "aws_iam_policy_document" "tfstate_full" {
  statement {
    effect    = "Allow"
    actions   = ["s3:ListBucket"]
    resources = [var.s3_bucket_arn]
  }
  statement {
    effect  = "Allow"
    actions = ["s3:GetObject", "s3:PutObject"]
    resources = [
      "${var.s3_bucket_arn}/${local.terraform_state_file}",
      "${var.s3_bucket_arn}/env:/*/${local.terraform_state_file}"
    ]
  }
  statement {
    effect = "Allow"
    actions = [
      "dynamodb:GetItem",
      "dynamodb:PutItem",
      "dynamodb:DeleteItem"
    ]
    resources = [var.dynamodb_table_arn]
  }
}

module "tfstate_policy_full_label" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  context    = module.this.context
  attributes = concat(module.this.attributes, ["full"])
}

resource "aws_iam_policy" "tfstate_full" {
  name        = module.tfstate_policy_full_label.id
  description = "Grants access to a specific state file in a Terraform S3 backend store bucket and DynamoDB locking table"
  policy      = data.aws_iam_policy_document.tfstate_full.json
  tags        = module.this.tags
}

data "aws_iam_policy_document" "tfstate_read" {
  statement {
    effect    = "Allow"
    actions   = ["s3:ListBucket"]
    resources = [var.s3_bucket_arn]
  }
  statement {
    effect  = "Allow"
    actions = ["s3:GetObject"]
    resources = [
      "${var.s3_bucket_arn}/${local.terraform_state_file}",
      "${var.s3_bucket_arn}/env:/*/${local.terraform_state_file}"
    ]
  }
  statement {
    effect = "Allow"
    actions = [
      "dynamodb:GetItem",
      "dynamodb:PutItem",
      "dynamodb:DeleteItem"
    ]
    resources = [var.dynamodb_table_arn]
  }
}

module "tfstate_policy_read_label" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  context    = module.this.context
  attributes = concat(module.this.attributes, ["read"])
}

resource "aws_iam_policy" "tfstate_read" {
  name        = module.tfstate_policy_read_label.id
  description = "Grants access to a specific state file in a Terraform S3 backend store bucket and DynamoDB locking table"
  policy      = data.aws_iam_policy_document.tfstate_read.json
  tags        = module.this.tags
}

data "aws_caller_identity" "current" {}

data "aws_iam_policy_document" "assume_role" {
  statement {
    actions = ["sts:AssumeRole"]
    effect  = "Allow"
    principals {
      identifiers = [data.aws_caller_identity.current.account_id]
      type        = "AWS"
    }
  }
}

resource "aws_iam_role" "terraform_backend" {
  count = var.role_enabled ? 1 : 0

  name               = module.this.id
  assume_role_policy = data.aws_iam_policy_document.assume_role.json
  tags               = module.this.tags
}

resource "aws_iam_role_policy_attachment" "terraform_backend" {
  count = var.role_enabled ? 1 : 0

  role       = aws_iam_role.terraform_backend[0].name
  policy_arn = aws_iam_policy.tfstate_full.arn
}
```

```terraform
locals {
  backends_extra = [
    {
      namespace   = "xerkus"
      environment = "gbl"
      stage       = "na"
      name        = "aws_oidc"
      attributes  = []
    },
    {
      namespace   = "xerkus"
      environment = "uw2"
      stage       = "na"
      name        = "network"
      attributes  = []
    },
    {
      namespace   = "xerkus"
      environment = "uw2"
      stage       = "na"
      name        = "nomad_cluster"
      attributes  = ["server"]
    },
    {
      namespace   = "xerkus"
      environment = "uw2"
      stage       = "na"
      name        = "nomad_cluster"
      attributes  = ["client"]
    },
    {
      namespace   = "xerkus"
      environment = "uw2"
      stage       = "na"
      name        = "app_sample"
      attributes  = ["ecr"]
    },
    {
      namespace   = "xerkus"
      environment = "uw2"
      stage       = "na"
      name        = "app_sample"
      attributes  = ["vault"]
    }
  ]
  backends_extra_map = { for label in module.backend_label : label.id => label.context }
}

module "backend_label" {
  for_each = { for i, v in local.backends_extra : i => v }

  source  = "cloudposse/label/null"
  version = "0.25.0"

  namespace   = each.value.namespace
  environment = each.value.environment
  stage       = each.value.stage
  name        = each.value.name
  attributes  = each.value.attributes
}

module "terraform_backend_extra" {
  for_each = local.backends_extra_map

  source = "../../modules/tfstate-backend-s3-extra"

  namespace   = each.value.namespace
  environment = each.value.environment
  stage       = each.value.stage
  name        = each.value.name
  attributes  = concat(each.value.attributes, ["tfstate"])

  s3_bucket_region   = var.aws_region
  s3_bucket_name     = module.tfstate_backend_aws.s3_bucket_id
  s3_bucket_arn      = module.tfstate_backend_aws.s3_bucket_arn
  dynamodb_table     = module.tfstate_backend_aws.dynamodb_table_name
  dynamodb_table_arn = module.tfstate_backend_aws.dynamodb_table_arn

  role_enabled = true
}
```

That produced a list of backend config files in the output, each with content like this, which could be used with the default or a named workspace:

```terraform
terraform {
  backend "s3" {
    region         = "us-west-2"
    bucket         = "bucket-used-for-tfstate"
    key            = "xerkus/uw2/na/nomadcluster/server/tfstate.tfstate"
    dynamodb_table = "xerkus-gbl-na-tfbackend-lock"
    profile        = ""
    role_arn       = "arn:aws:iam::123456:role/xerkus-uw2-na-nomadcluster-server-tfstate"
    encrypt        = "true"
  }
}
```

@Nuru do you think this revised improvement would be in scope? Should I open a new issue and provide an initial submodule implementation? Since my country invaded its neighbors last year, I no longer manage anything and as such don't use Terraform. I would be dumping this on you to maintain without using it myself.

Nuru commented 1 year ago

@Xerkus Thank you very, very much for your suggestion about using a different backend for each deployment. It has inspired conversation among our architecture team.

You may have misunderstood my suggestion about workspace_key_prefix. We recommend a separate workspace_key_prefix for each root module (what Cloud Posse calls "components") and then a separate workspace under that prefix for each deployment of the component, and never using the default workspace. So you might have workspace_key_prefix = "nomad_cluster" and then under that one backend, have workspaces like xerkus-uw2-na and/or xerkus-uw2-na-client.
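For illustration (assuming key = "terraform.tfstate"; the prefix and workspace names are taken from the comment above), that layout yields one state object per deployment under the component's prefix:

```terraform
# workspace "xerkus-uw2-na":         nomad_cluster/xerkus-uw2-na/terraform.tfstate
# workspace "xerkus-uw2-na-client":  nomad_cluster/xerkus-uw2-na-client/terraform.tfstate
```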

We are considering your idea of dropping workspaces and instead using a separate backend for every deployment, each with its own key but all in the same S3 bucket. It does seem like it might make access control easier.

However, in any case, this module, terraform-aws-tfstate-backend, is going to limit itself to deploying an S3 bucket and DynamoDB table (and possibly replicating them), and become agnostic about how you store state in the S3 bucket. We will not be adding anything like your proposal to this module.

Cloud Posse customers use Atmos to generate backend configurations, and you are welcome to use it, too (it is free and open source), or you can use a Terraform module as you have done. To the extent we want to adopt or support something like your proposal, we will do that by adding such capability to Atmos, so no need to do further work on this PR or to open a new one. We will take it from here. Thank you for offering.

aknysh commented 1 year ago

@Xerkus I suppose you are talking about IAM roles for different slices of TF state. As @nitrocode mentioned, this module creates an S3 bucket and a DynamoDB table, which can be used in many different situations, including splitting TF state into different subfolders in the bucket. But having different IAM permissions for those S3 folders/subfolders is definitely not what this module currently does. @Nuru what do you think about this?