hashicorp / terraform

Terraform enables you to safely and predictably create, change, and improve infrastructure. It is a source-available tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned.
https://www.terraform.io/

Feature: Conditionally load tfvars/tf file based on Workspace #15966

Open atkinchris opened 6 years ago

atkinchris commented 6 years ago

Feature Request

Allow Terraform to conditionally load a .tfvars or .tf file, based on the current workspace.

Use Case

When working with infrastructure that has multiple environments (e.g. "staging", "production"), workspaces can be used to isolate the state for different environments. Often, different variables are needed per workspace. It would be useful if Terraform could conditionally include or load a variables file, depending on the workspace.

For example:

application/
|-- main.tf // Always included
|-- staging.tfvars // Only included when workspace === staging
|-- production.tfvars // Only included when workspace === production

Other Thoughts

Conditionally loading a file would be flexible, but possibly too magical. Conditionally loading parts of a .tf/.tfvars file based on workspace, or being able to specify different default values per workspace within a variable, could be more explicit.

apparentlymart commented 6 years ago

Hi @atkinchris! Thanks for this suggestion.

We have plans to add per-workspace variables as a backend feature. This means that for the local backend it would look for variables at terraform.d/workspace-name.tfvars (alongside the local states) but in the S3 backend (for example) it could look for variable definitions on S3, keeping the record of the variables in the same place as the record of which workspaces exist. This would also allow more advanced, Terraform-aware backends (such as the one for Terraform Enterprise) to support centralized management of variables.

We were planning to prototype this some more before actually implementing it, since we want to make sure the user experience makes sense here. With the variables stored in the backend we'd probably add a local command to update them from the CLI so that it's not necessary to interact directly with the underlying data store.

At this time we are not planning to support separate configuration files per workspace, since that raises some tricky questions about workflow and architecture. Instead, we plan to make the configuration language more expressive so that it can support more flexible dynamic behavior based on variables, which would then allow you to use the variables-per-workspace feature to activate or deactivate certain behaviors without coupling the configuration directly to specific workspaces.

These items are currently in early planning stages and so no implementation work has yet been done and the details may shift along the way, but this is a direction we'd like to go to make it easier to use workspaces to model differences between environments and other similar use-cases.

atkinchris commented 6 years ago

Awesome, look forward to seeing how workspaces evolve.

We'll keep loading the workspace specific variables with -var-file=staging.tfvars.

b-dean commented 6 years ago

@apparentlymart is there another github issue that is related to these plans? Something we could subscribe to?

I'm interested in this because we currently have a directory in our repo with env/<short account nickname>-<workspace>.tfvars files, and it's a little bit of a pain to have to remember to mention them all the time when doing plans, etc. (Although it's immediately obvious when you forget it on a plan and nothing looks like you expect, it could be dangerous to forget it on apply.)

If these were kept in some backend-specific location, that would be great!

et304383 commented 6 years ago

We just want to reference a different VPC CIDR block based on the workspace. Is there any other workaround that could get us going today?

apparentlymart commented 6 years ago

A few common workarounds I've heard about are:

1. Create a map in a named local value whose keys are workspace names and whose values are the values that should vary per workspace. Then use another named local value to index that map with terraform.workspace to get the appropriate value for the current workspace.
2. Pass a workspace-specific variables file explicitly on the command line, e.g. terraform plan -var-file=staging.tfvars.
3. Use a data source to look up the per-environment object (such as the VPC) by naming convention, rather than configuring its attributes directly in variables.
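For the VPC CIDR question above, the first workaround might look like this (a minimal sketch in the 0.11-era syntax used elsewhere in this thread; the workspace names and CIDR values are illustrative):

locals {
  # Per-workspace values, keyed by workspace name
  vpc_cidrs = {
    staging    = "10.0.0.0/16"
    production = "10.1.0.0/16"
  }

  # Index the map with the current workspace
  vpc_cidr = "${local.vpc_cidrs[terraform.workspace]}"
}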

et304383 commented 6 years ago

@apparentlymart thanks. I think option one is best. Option 3 doesn't work, as we create the VPC with Terraform in the same workspace.

james-lawrence commented 6 years ago

@apparentlymart what is the estimated timeline for this functionality? Could it be stripped down to just the tfvars and not the dynamic behaviour based on variables? It sounds like you have a pretty solid understanding of how the tfvars being loaded for a particular workspace is going to work.

apparentlymart commented 6 years ago

Hi @james-lawrence,

In general we can't comment on schedules and timelines because we work iteratively, and thus there simply isn't a defined schedule for when things get done beyond our current phase of work.

However, we tend to prefer to split up the work by what subsystem it relates to in order to reduce context-switching, since non-trivial changes to Terraform Core tend to require lots of context. For example, in 0.11 the work was focused on the module and provider configuration subsystems because that allowed the team to reload all the context on how modules are loaded, how providers are inherited between modules, etc and thus produce a holistic design.

The work I described above belongs to the "backends" subsystem, so my guess (though definitely subject to change along the way) is that we'd try to bundle this work up with other planned changes for backends, such as the ability to run certain operations on a remote system, ability to retrieve outputs without disclosing the whole state, etc. Unfortunately all I can say right now is that we're not planning to look at this right now, since our current focus is on the configuration language usability and work is already in progress in that area which we want to finish (or, at least, reach a good stopping point) before switching context to backends.

non7top commented 6 years ago

That becomes quite hard to manage when you are dealing with multiple AWS accounts and Terraform workspaces.

ura718 commented 6 years ago

Can anyone explain the difference between the terraform.tfvars and variables.tf files, and when to use one over the other? Do you need both, or is just one good enough?

non7top commented 6 years ago

A variables .tf file has definitions and default values; a .tfvars file has overriding values if needed. You can have a single .tf file and several .tfvars files, each defining a different environment.
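For example (a minimal sketch; the variable name and values are illustrative):

variables.tf

variable "instance_count" {
  description = "Number of app servers"
  default     = 1
}

staging.tfvars

instance_count = 3

Running terraform apply -var-file=staging.tfvars overrides the default of 1 with 3; without the flag, the default applies.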

matti commented 6 years ago

Yet another workaround (based on @apparentlymart's first workaround) that allows you to have workspace variables in different files (easier to diff). When you add new workspaces you only need to a) add the file and b) add it to the list in the merge. This is horrible, but works.

workspace1.tf

locals {
  workspace1 = {
    workspace1 = {
      project_name = "project1"
      region_name  = "europe-west1"
    }
  }
}

workspace2.tf

locals {
  workspace2 = {
    workspace2 = {
      project_name = "project2"
      region_name  = "europe-west2"
    }
  }
}

main.tf

locals {
  workspaces = "${merge(local.workspace1, local.workspace2)}"
  workspace  = "${local.workspaces[terraform.workspace]}"
}

output "project_name" {
  value = "${local.workspace["project_name"]}"
}

output "region_name" {
  value = "${local.workspace["region_name"]}"
}
mhfs commented 6 years ago

Taking @matti's strategy a little further, I like having default values and only customizing per workspace as needed. Here's an example:

locals {
  defaults = {
    project_name = "project-default"
    region_name  = "region-default"
  }
}

locals {
  staging = {
    staging = {
      project_name = "project-staging"
    }
  }
}

locals {
  production = {
    production = {
      region_name  = "region-production"
    }
  }
}

locals {
  workspaces = "${merge(local.staging, local.production)}"
  workspace  = "${merge(local.defaults, local.workspaces[terraform.workspace])}"
}

output "workspace" {
  value = "${terraform.workspace}"
}

output "project_name" {
  value = "${local.workspace["project_name"]}"
}

output "region_name" {
  value = "${local.workspace["region_name"]}"
}

When in workspace staging it outputs:

project_name = project-staging
region_name = region-default
workspace = staging

When on workspace production it outputs:

project_name = project-default
region_name = region-production
workspace = production
tilgovi commented 6 years ago

I've been thinking about using Terraform in automation and doing something like -var-file $TF_WORKSPACE.tfvars.

farman022 commented 6 years ago

Can someone please give an example/template of "Terraform to conditionally load a .tfvars or .tf file, based on the current workspace"? Even the old way works for me. I just want to run multiple infrastructures from a single directory.

landon9720 commented 6 years ago

@farman022 Just use the -var-file command line option to point to your workspace-specific vars file.

bborysenko commented 6 years ago

Like @mhfs's strategy, but with one merge:

locals {

  env = {
    defaults = {
      project_name = "project_default"
      region_name = "region-default"
    }

    staging = {
      project_name = "project-staging"
    }

    production = {
      region_name = "region-production"
    }
  }

  workspace = "${merge(local.env["defaults"], local.env[terraform.workspace])}"
}

output "workspace" {
  value = "${terraform.workspace}"
}

output "project_name" {
  value = "${local.workspace["project_name"]}"
}

output "region_name" {
  value = "${local.workspace["region_name"]}"
}
menego commented 6 years ago

locals {

 context_variables = {
    dev = {
        pippo = "pippo-123"
    }
    prod = {
        pippo = "pippo-456"
    }
  }

  pippo = "${lookup(local.context_variables[terraform.workspace], "pippo")}"
}

output "LOCALS" {
  value = "${local.pippo}"
}
ahsannaseem commented 5 years ago

Is this feature added in v0.11.7? I tried creating terraform.d with qa.tfvars and prod.tfvars, then selected workspace qa. On plan/apply it seems that it is not detecting qa.tfvars.

mildwonkey commented 5 years ago

No, this hasn't been added yet (current version is v0.11.8).

While we try to follow up with issues like this in GitHub, sometimes things get lost in the shuffle - you can always check the Changelog for updates.

hussfelt commented 5 years ago

This is a resource that I have used a couple of times as a reference to set up a Makefile wrapping terraform; maybe some of you will find it useful: https://github.com/pgporada/terraform-makefile

gudata commented 5 years ago

My first thought was that workspaces are great for managing environments, but then I found in the docs that they are not recommended. Is this still valid, or is the context different?

In particular, organizations commonly want to create a strong separation between multiple deployments of the same infrastructure serving different development stages (e.g. staging vs. production) or different internal teams. In this case, the backend used for each deployment often belongs to that deployment, with different credentials and access controls. Named workspaces are not a suitable isolation mechanism for this scenario.

https://www.terraform.io/docs/state/workspaces.html

beingamarnath commented 5 years ago

As @gudata pointed out, I was also under the impression that managing multiple environments with workspaces is ideal. But after going through the document I'm totally confused now. How do we manage multiple environments with multi-region deployment with Terraform OSS? How do we structure our Terraform modules and tfvars for multiple environments?

atkinchris commented 5 years ago

Is this still valid, or is the context different?

@gudata & @beingamarnath - workspaces are still a very valid way to manage multiple environments in many scenarios.

However, workspaces share a backend. If you require isolation between environments, either as a process requirement or as a consequence of having your environments in different accounts, you may not be able to share a backend. In this case, workspaces are not suitable, as you would need to init the backend for each environment.

How do we structure our Terraform modules and tfvars for multiple environments?

@beingamarnath - Terraform provides workspaces as a mechanism to share code between multiple environments, without having to have multiple backends. It does not provide, or dictate, a structure or convention for your tfvars or variable files. This is the essence of this issue - to introduce convention for common use cases.

non7top commented 5 years ago

Just in case someone is not aware, there is a wrapper tool called Atlantis which greatly helps with managing different environments with Terraform.

ndobbs commented 5 years ago

Does anyone have any updates on where this feature is? I know 0.12 is a huge priority, but this functionality would be very useful for anyone using workspaces. There is the potential for human error by passing the wrong -var-file from a completely different workspace.

agrrh commented 5 years ago

@ndobbs We're using the following construction:

terraform plan -var-file="var/$(terraform workspace show).tfvars"

It's a handy way to avoid errors. Of course, it's only suitable when each workspace possesses a dedicated .tfvars file.

JustinGrote commented 5 years ago

If you can't trust people to remember to add the -var-file argument per @agrrh's method, here's a way to "bake it in" that will always work regardless. This is as close to a "native" implementation as it's going to get, since it doesn't require any wrappers or special inputs to terraform.

The catch is that you have to write your "tfvars" as a simple JSON file and reference the resulting "tfvars" as local.tfenv.variable, but the benefits are that it works even if the file isn't there, and it lets you set intelligent defaults that are selectively overridden by your vars via merge().

In 0.12 you could theoretically configure entire objects this way, since you can do maps of objects with the new type system, but I haven't done any significant testing around that.

main.tf

locals {
  default_settings = {
    numServers = 5
    numDatabases = 2
  }

  tfsettingsfile = "tfenv-${terraform.workspace}.json"
  #Workaround for https://github.com/hashicorp/terraform/issues/21395
  tfsettingsfilecontent = fileexists(local.tfsettingsfile) ? file(local.tfsettingsfile) : "{}"
  tfenvsettings = jsondecode(local.tfsettingsfilecontent)
  tfenv = merge(local.default_settings, local.tfenvsettings)
}

output "my_tf_env" {
  value = local.tfenv
}

tfenv-dev.json

{
    "numServers" : 2
}

Running Terraform

>terraform workspace show
default

>terraform apply --auto-approve

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

my_tf_env = {
  "numDatabases" = 2
  "numServers" = 5
}

>terraform workspace select dev
Switched to workspace "dev".

>terraform apply --auto-approve

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

my_tf_env = {
  "numDatabases" = 2
  "numServers" = 2
}
elimeitner commented 5 years ago

You can write your vars as a JSON file under your workspace:

main.tf.json

{
  "myvarname": "myvalue"
}

and then call it like this:

locals {
  vars = jsondecode(file("./terraform.tfstate.d/${terraform.workspace}/main.tf.json"))
}

module "mymodule" { source = "../../mysource/mysource" myvar = local.vars.myvarname }

JustinGrote commented 4 years ago

As of 0.12.2, you can also use YAML for this purpose, which is generally better for config settings. While you can get crazy with a lot of nested maps and lists, I generally recommend keeping it flat, "ini-style", if at all possible.

Here is my "terragrunt"-style setup, typically used with the free app.terraform.io, but it works with any backend that supports workspaces (including local). Requires no wrappers!


main.tf

locals {
  #You have to initialize any settings you plan to use to avoid a "This object does not have an attribute named" error. You can also use conditionals, but this is generally easier
  default_tfsettings = {
    server_name = "mydefaultservername"
    number_of_servers = 1
    additional_setting_i_didnt_override = true
  }

  tfsettingsfile = "./environments/${terraform.workspace}/tfsettings.yaml"
  tfsettingsfilecontent = fileexists(local.tfsettingsfile) ? file(local.tfsettingsfile) : "NoTFSettingsFileFound: true"
  tfworkspacesettings = yamldecode(local.tfsettingsfilecontent)
  tfsettings = merge(local.default_tfsettings, local.tfworkspacesettings)
}

output "servername" {
    value = local.tfsettings.server_name
}

output "numberOfServers" {
    value = local.tfsettings.number_of_servers
}

output "additional_setting_i_didnt_override" {
    value = local.tfsettings.additional_setting_i_didnt_override
}

environments/test/tfsettings.yaml

server_name: testserver

environments/production/tfsettings.yaml

server_name: productionserver
number_of_servers: 200

Result

[screenshot: TFYamlSettingsDemo]

robins35 commented 4 years ago

I also would like a good way to manage environments. Right now I have it split into two directories, staging/ and production/, but the modules are all identical. It's just the vars.tf file that differs between those environments.

I was thinking of just passing in an environment and then using maps to get my variables that way, but that seems like a very easy way to clobber the production stack if someone accidentally puts the wrong env in there.

Also, how do we differentiate between the different providers.tf files? I don't want to accidentally apply the staging changes to production because it's using the google.json file specified in providers.tf. I need a way to explicitly include variable files, and the providers.tf file, rather than all this implicit magic.
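One possible pattern (a sketch only, building on the tfsettings locals above; the per-workspace credentials layout and the project_name setting are hypothetical) is to key the provider configuration off the workspace as well, so no separate providers.tf per environment is needed:

provider "google" {
  # Hypothetical layout: one service-account key file per workspace
  credentials = file("./credentials/${terraform.workspace}.json")
  project     = local.tfsettings.project_name
}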

maxgio92 commented 4 years ago

Hi @robins35, could you provide some examples of your use case? Thanks

JustinGrote commented 4 years ago

@robins35 have a look at my workaround above to use terraform workspaces and have a per workspace variable file.


robins35 commented 4 years ago

@JustinGrote I used your technique and set my project up that way.

I'm running into a problem however. This is a trimmed version of my setup: https://gist.github.com/robins35/e9b40dc8d34ee3fcb11a3f70a65b37fb

To access the data in tfsettings.yaml in main.tf, I have to do local.tfsettings.gce_project_name, whereas before I only had to do var.gce_project_name.

Is there a way to merge all of the settings in tfsettings.yaml into the main variable scope that's available in main.tf? Is there any way for me to just do var.gce_project_name, without having to declare every variable in vars.tf like this?

variable gce_project_name {
  default = "${local.tfsettings.gce_project_name}"
}

Also, I noticed you said your setup emulates Terragrunt; is Terragrunt also a valid way to solve this issue?

Another thing I was wondering: what if I have slight differences in my main.tf file between environments? Is there any way to conditionally change those setups without rewriting the whole module?

Thanks.

JustinGrote commented 4 years ago

Terragrunt (and other wrappers) will import them as variables. You can also make a very simple wrapper using PowerShell or bash that looks at your workspace and conditionally loads a tfvars file with a command line argument.

My solution requires no wrappers at all, which is essential in some environments. Instead of the locals block, you could create a module that does the same thing and has inputs and outputs, but even in that case your outputs would have to map to your inputs, so you'd still have to define those "defaults".

For the "slight changes", we follow the GitLab Flow branching strategy and keep our terraforms for each environment on separate branches, which cherry-pick from master. https://about.gitlab.com/2014/09/29/gitlab-flow/

philomory commented 4 years ago

One thing that would be nice would be if terraform.workspace was recognized as not really a variable, so at least you could do something like

variable "foo" {
  default = {
    test = "example1"
    prod = "example2"
  }[terraform.workspace]
}

Right now if you try that you get an error, "Variables may not be used here".

The advantage of this style (if it worked) would be that it allows you to specify a map of different defaults for different workspaces, but the variable value is always a string (or whatever value is appropriate), so if you do want to override it on the command line, you can just specify a final value instead of a map whose key is your current workspace name.
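In the meantime, something close is achievable in 0.12 by pairing a nullable variable with a workspace-keyed map of defaults (a sketch; the names and values are illustrative):

variable "foo" {
  type    = string
  default = null # unset means "use the per-workspace default below"
}

locals {
  foo_defaults = {
    test = "example1"
    prod = "example2"
  }
  foo = var.foo != null ? var.foo : local.foo_defaults[terraform.workspace]
}

terraform apply -var="foo=example3" then supplies a final value, while a plain run falls back to the per-workspace map.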

JustinGrote commented 4 years ago

I believe there are plans in a future release to allow per-workspace tfvars files; my solution is a "right now" workaround.


sushilchaudhari commented 4 years ago
  • Create a map in a named local value whose keys are workspace names and whose values are the values that should vary per workspace. Then use another named local value to index that map with terraform.workspace to get the appropriate value for the current workspace.

Hi @apparentlymart, I am trying to use the above workaround of creating a map per workspace. I have multiple variables which users can input based on the environment.

My variables.tf looks like this:

variable "vpc_cidr" {
  description = "CIDR for VPC"
  type = map
  default     = {
    qa      = "10.0.0.0/16"
    demo    = "10.0.0.0/16"
    beta    = "10.0.0.0/16"
    prod    = "10.0.0.0/16"
    default = "10.1.0.0/16"
  }
}

variable "app_subnets" {
  description = "CIDR for app subnets"
  type = map
  default     = {
    qa      = ["10.0.1.0/24","10.0.2.0/24"]
    demo    = ["10.0.1.0/24","10.0.2.0/24"]
    beta    = ["10.0.1.0/24","10.0.2.0/24"]
    prod    = ["10.0.1.0/24","10.0.2.0/24"]
    default = ["10.1.101.0/24","10.1.102.0/24"]
  }
}

I am using the code below to index that map with terraform.workspace and get the appropriate value for the current workspace.

cidr        = var.vpc_cidr[terraform.workspace]
app_subnets = var.app_subnets[terraform.workspace]

It works great when I add these maps in variables.tf. But when I try to add the respective values in terraform.tfvars, terraform gives an error.

variables.tf

variable "vpc_cidr" {
  description = "CIDR for VPC"
  type = map
  default     = {
    default = "10.1.0.0/16"
  }
}

variable "app_subnets" {
  description = "CIDR for app subnets"
  type = map
  default     = {
    default = ["10.1.101.0/24","10.1.102.0/24"]
  }
}

terraform.tfvars

app_subnets = {
  qa      = ["10.0.1.0/24","10.0.2.0/24"]
  demo    = ["10.0.1.0/24","10.0.2.0/24"]
  beta    = ["10.0.1.0/24","10.0.2.0/24"]
  prod    = ["10.0.1.0/24","10.0.2.0/24"]
}

When I run terraform plan for the default workspace, the error which I am getting is:

var.app_subnets is map of tuple with 4 elements
The given key does not identify an element in this collection value.

My observation: shouldn't it look at the default value of the variable mentioned in variables.tf instead of throwing an error?
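For what it's worth: a value set in terraform.tfvars replaces the declared default entirely rather than merging with it, which is why the default key disappears. A hedged sketch of a workaround is lookup() with an explicit fallback (the fallback values here just repeat the defaults from variables.tf):

cidr        = lookup(var.vpc_cidr, terraform.workspace, "10.1.0.0/16")
app_subnets = lookup(var.app_subnets, terraform.workspace, ["10.1.101.0/24", "10.1.102.0/24"])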

ajaquith commented 4 years ago

@JustinGrote Nice solution. I pushed it a little further to load the defaults from files as well, and tweaked it slightly to make it more Ansible-like so that both tools can share the YAML-formatted variables. From my variables.tf:

locals {
   all_ipv4         = "0.0.0.0/0"
   default_file     = "./env_vars/default/main.yml"
   default_content  = fileexists(local.default_file) ? file(local.default_file) : "NoSettingsFileFound: true"
   default_vars     = yamldecode(local.default_content)
   env_file         = "./env_vars/${terraform.workspace}/main.yml"
   env_content      = fileexists(local.env_file) ? file(local.env_file) : "NoSettingsFileFound: true"
   env_vars         = yamldecode(local.env_content)
   vars             = merge(local.default_vars, local.env_vars)
}

In my main.tf, variables are referenced as, for example, local.vars.ansible_user, which looks nice and intuitive. YAML configs are stored in /env_vars/default/main.yml and /env_vars/${terraform.workspace}/main.yml.

On the Ansible side, I do something like this:

- include_vars:
    file: "env_vars/default/main.yml"
- set_fact:
    ec2_environment: "{{ lookup('file', '{{ playbook_dir }}/.terraform/environment') }}"
- include_vars:
    file: "env_vars/{{ ec2_environment }}/main.yml"

...where the ec2_environment variable is read out of Terraform's local config. This allows Ansible to inherit defaults in the same way as Terraform.

Anyway, thanks for the tip!

yeswps commented 4 years ago

Not sure if a native feature has been developed. I personally like to keep the tfvars file as flat as possible; adding an additional map makes the entire codebase harder to manage.

I've created a simple bash script which detects the current workspace name, then looks for the corresponding tfvars file to apply:

#!/bin/bash
workspace=$(terraform workspace show)
echo "Current workspace is $workspace"
tfvars_file="$workspace.tfvars"
if test -f "$tfvars_file"; then
    echo "Found $tfvars_file, applying..."
    terraform apply -var-file="$tfvars_file"
else
    echo "Cannot find $tfvars_file, will not apply"
fi
JustinGrote commented 4 years ago

@yeswps wrappers are easy; the problem is that now you need everyone on the team to know about them and use them. This issue is about doing this at scale across multiple developers, who are probably just going to run terraform and not be aware of a wrapper script.

EDIT: Also wrappers don't work in Terraform Cloud/Enterprise :)

davisford commented 4 years ago

As of 0.12.2, you can also use YAML for this purpose, which is generally better for config settings. While you can get crazy with a lot of nested maps and lists, I generally recommend keeping it flat, "ini-style", if at all possible.

@JustinGrote this is pretty slick, but I'm running into another issue.

I'm using the TF VPC and EKS modules.

For most of these items, overriding is fairly simple with a YAML file. But where it becomes problematic for me is when I want to specify additional_security_group_ids in the EKS worker_groups. Example of the TF:

    {
      instance_type        = "t3.medium"
      asg_max_size         = 10
      asg_desired_capacity = 5
      autoscaling_enabled  = true
      tags = [{
        key                 = "app"
        value               = "api"
        propagate_at_launch = true
      }]
      additional_security_group_ids = [aws_security_group.api_worker_group.id]
    }

Now, I'll look at the definition of api_worker_group:

resource "aws_security_group" "api_worker_group" {
  name   = "${local.cluster_name}-api-wg"
  vpc_id = module.vpc.vpc_id
}

Well, that's an AWS resource that circularly references the VPC ID, which I don't have in the YAML. The problem with the YAML or JSON approach is that it can't reference other parts of the TF vars.

It gets worse when I want to specify the workers_additional_policies array in the EKS module. The policies I have for one workspace are quite an extensive list of Terraform AWS resources. I'm not seeing an easy way to transpose that to YAML aside from filling out literally dozens and dozens of individual fields, thus also causing my default_tfsettings definition to explode.

EDIT: so I can transpose worker groups to YAML like this:

worker_groups:
    - instance_type: "t3.medium"
      asg_max_size: 10
      asg_desired_capacity: 5
      autoscaling_enabled: true
      tags:
        - key: "app"
          value: "api"
          propagate_at_launch: true

But I have no way to define additional_security_group_ids in the YAML. Perhaps I can do a merge here?
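A sketch of how that merge could work (assuming worker_groups is defined in the YAML as above and decoded into the tfsettings local from the earlier pattern): a for expression can layer the HCL-only attributes onto each YAML-decoded map.

locals {
  # Add the security group ID, known only in HCL, to every worker group from YAML
  worker_groups = [
    for wg in local.tfsettings.worker_groups : merge(wg, {
      additional_security_group_ids = [aws_security_group.api_worker_group.id]
    })
  ]
}

The merged local.worker_groups can then be passed to the EKS module in place of a literal list.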

jakubgs commented 4 years ago

Pretty sure this has already been answered sufficiently in https://github.com/hashicorp/terraform/issues/15966#issuecomment-381168714. I don't really see why this is still open.

tristanmorgan commented 4 years ago

Related and slightly hacky: in Terraform Cloud, set an environment variable (not a Terraform var) called TF_CLI_ARGS to '-var-file=your-environment-name.tfvars' and the plan and apply will use it.

dcow commented 4 years ago

Pretty sure this has already been answered sufficiently in #15966 (comment). I don't really see why this is still open.

Not really. You should be able to pin a var-file to a workspace in such a way as to avoid entering a big list of blessed workspaces in your terraform.

Ideal workflow:

$ terraform workspace new foo -var-file ephemeral/tfvars

Then any future use of -var-file (if needed) in commands like plan or apply simply merges the other vars in on top.

The main point is that if you follow a workspace equals environment philosophy, things should be totally and completely isolated. You shouldn't have things concerning other workspaces poisoning your terraform.

fewbits commented 4 years ago

It would be nice to have a feature for initializing variable values per environment, like:

variable "my_variable" {
  type = string

  default = "generic_value"
  workspace.dev = "specific_value_for_dev_workspace"
}

I started using Terraform recently, and when I saw default inside of the variable syntax in the docs, I had the impression it was related to the workspace.

fewbits commented 4 years ago

locals {

  context_variables = {
    dev = {
      pippo = "pippo-123"
    }
    prod = {
      pippo = "pippo-456"
    }
  }

  pippo = "${lookup(local.context_variables[terraform.workspace], "pippo")}"
}

output "LOCALS" {
  value = "${local.pippo}"
}

@menego, I liked this approach.

This way we can define variable values per workspace in the same file and also ensure that terraform plan will fail if there is no value defined for the current workspace. And as a bonus, it's also possible to force terraform to fail when running with the default workspace (demanding that the user choose a specific/valid workspace).

andrew-sumner commented 4 years ago

I ended up with something similar to @fewbits, but I keep the environment-specific variables in separate YAML files under an "env" folder, e.g.:

\env
    \dev.yaml
    \test.yaml
    \prod.yaml

An example of one of these files would be:

setting1: value1
setting2: value2

In variables.tf I load the variables using:

locals {
  env = merge(
    yamldecode(file("env/${terraform.workspace}.yaml")),
    { "environment_name" = terraform.workspace }
  )
}

NOTE: adding environment_name is optional; it's something I like to have available.

I can then reference them using local.env.setting1 and local.env.environment_name.

Assuming you don't create a file called default.yaml, you will get an error if someone tries to use the default workspace.

jeffmccollum commented 4 years ago

Related and slightly hacky: in Terraform Cloud, set an environment variable (not a Terraform var) called TF_CLI_ARGS to '-var-file=your-environment-name.tfvars' and the plan and apply will use it.

With Terraform Enterprise, using this will error. Use TF_CLI_ARGS_plan instead, as this will only apply the -var-file to plan, instead of to every terraform command.

dinvlad commented 4 years ago

Could someone confirm whether the above solution using a YAML/JSON file basically prevents us from using variable declarations for these values? I.e., all environment-specific values are accessed through locals now, while only "shared" values can still be accessed via variables?