Open gwright99 opened 6 months ago
Aggregating ideas for how to do so:
Another idea: use Terraform workspaces
This approach will:
Although the state files would be separate, the project files are not segregated the way git branches can diverge. As per this Spacelift article on managing variables in a Terraform workspace, there would likely be multiple tfvars files:
vars_dev.tfvars
vars_test.tfvars
vars_stage.tfvars
vars_prod.tfvars
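As an illustration (the variable names and values here are hypothetical, not taken from the project), each file would pin the environment-specific values:

```hcl
# vars_dev.tfvars — hypothetical example values for the dev environment
instance_type = "t2.micro"
environment   = "dev"
```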
These files could be specifically used during deployment:
# Example - using a specific tfvars file
$ terraform apply -var-file=vars_dev.tfvars
This could be baked into the Makefile solution we use:
...
apply-dev:
	@terraform apply -var-file=vars_dev.tfvars

apply-test:
	@terraform apply -var-file=vars_test.tfvars
...

# OR resolve the workspace dynamically. Note the doubled `$$`: Make would
# otherwise expand `$(terraform workspace show)` as a (empty) Make variable
# instead of passing the command substitution through to the shell.
apply-agnostic:
	@terraform apply -var-file=vars_$$(terraform workspace show).tfvars
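The Makefile above assumes the workspaces already exist; they would be created and selected with the standard `terraform workspace` subcommands (a sketch of the flow, not tied to this repo):

```shell
# Create one workspace per environment (one-time setup)
terraform workspace new dev
terraform workspace new test

# Switch to the target environment, then deploy
terraform workspace select dev
make apply-agnostic   # resolves to vars_dev.tfvars via `terraform workspace show`
```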
Given how the project is structured (_exclusive focus on terraform.tfvars and related secrets in SSM, with all project files checked into git_), this approach could work pretty well.
TF Objects could be modified to behave differently based on env (i.e. workspace) or include an environment descriptor in their tags. Examples:
locals {
  instance_type = terraform.workspace == "prod" ? "t2.large" : "t2.micro"
}

variable "name_tag" {
  type        = string
  description = "Name of the EC2 instance"
  default     = "EC2"
}

resource "aws_instance" "my_vm" {
  ami           = var.ami             # Ubuntu AMI
  instance_type = local.instance_type # workspace-aware local defined above
  tags = {
    Name = format("%s_%s", var.name_tag, terraform.workspace)
  }
}
Per Terraform education materials on workspaces, introducing them into the workflow increases the risk of human error (e.g. applying while the wrong workspace is selected).
Workspaces are generally meant to be temporary, but this approach would use them as a permanent fixture.
This approach may not be the best solution to manage multiple staging environments if organizations want these environments to be strictly separate (rather than a multi-environment-spanning monorepo).
The more reading I do about Workspaces, the more I see them referred to as "a bad idea that will blow up in your face". An alternative, more physically-separated solution could use folders to segregate environments:
/
├── environments/
│   ├── staging/
│   │   ├── provider.tf
│   │   ├── variables.tf
│   │   └── main.tf
│   └── prod/
│       ├── providers.tf
│       ├── variables.tf
│       └── app.tf
└── modules/
    ├── ec2/
    ├── vpc/
    │   ├── main.tf
    │   └── variables.tf
    └── application
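In this layout, shared logic lives under modules/ and each environment directory instantiates it with its own values. A minimal sketch of what environments/staging/main.tf might contain (the module inputs shown are hypothetical):

```hcl
# environments/staging/main.tf — hypothetical sketch
module "vpc" {
  source     = "../../modules/vpc"
  cidr_block = "10.1.0.0/16" # staging-specific value
}
```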
Workspaces are definitely not suitable for production. The state for all the environments lives in the same backend, so it's a recipe for disaster if the state is broken in ANY of the environments.
This is the proper way to do it per Terraform best practices. I'm working on a POC Google Cloud Terraform deployment at the moment and I'm following that pattern.
This solution was originally designed with the idea that customers with multiple environments would run a dedicated installer repo for each environment (i.e. a repo for DEV, and another for PROD).
This approach worked for some customers, but did not align with other implementations where the client team wanted to manage all of their deployments out of a single repo.
We should consider retrofitting the project structure to allow 2+ deployments to be run out of a single repo instance.
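One way to sketch that retrofit (a hedged example; the bucket, key, and region values are made up, and the project may use a different backend) is to give each environment directory its own backend configuration, so 2+ deployments share one repo but never share state:

```hcl
# environments/prod/providers.tf — hypothetical S3 backend, one state key per environment
terraform {
  backend "s3" {
    bucket = "example-terraform-state"
    key    = "prod/terraform.tfstate"
    region = "us-east-1"
  }
}
```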