hashicorp / terraform

Terraform enables you to safely and predictably create, change, and improve infrastructure. It is a source-available tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned.
https://www.terraform.io/

Feature: Conditionally load tfvars/tf file based on Workspace #15966

Open atkinchris opened 7 years ago

atkinchris commented 7 years ago

Feature Request

Terraform to conditionally load a .tfvars or .tf file, based on the current workspace.

Use Case

When working with infrastructure that has multiple environments (e.g. "staging", "production"), workspaces can be used to isolate the state for different environments. Often, different variables are needed per workspace. It would be useful if Terraform could conditionally include or load a variables file, depending on the workspace.

For example:

application/
|-- main.tf // Always included
|-- staging.tfvars // Only included when workspace === staging
|-- production.tfvars // Only included when workspace === production

Other Thoughts

Conditionally loading a file would be flexible, but possibly too magical. Conditionally loading parts of a .tf/.tfvars file based on workspace, or being able to specify different default values per workspace within a variable, could be more explicit.

andrew-sumner commented 4 years ago

@dinvlad Yes, that is the case. You could put some logic in to use a variable value if it exists, but that defeats the purpose of environment-specific variable files.

dinvlad commented 4 years ago

I see, thanks. FWIW, I've preferred to use a simple linking trick to get the benefits of .tfvars files:

ln -sf "env/${PROJECT}.tfvars" "terraform.tfvars"

as part of a custom terraform-init.sh script that also initializes the backend bucket in the same cloud ${PROJECT}.

This way, the two are linked together (so a developer can't inadvertently mix them), and the only caveat is we have to ask the team to use this custom init script instead of a standard terraform init. But it avoids the need to use workspaces (obviously, this only works when we have one environment per project).
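
As a rough sketch, such a wrapper might look like the following; the script name is from the comment, but the backend type and bucket naming are assumptions, not necessarily what dinvlad actually uses:

#!/usr/bin/env bash
# terraform-init.sh - hypothetical wrapper; backend type and bucket name are assumptions
set -euo pipefail

: "${PROJECT:?set PROJECT to the target project/environment}"

# Link the environment's tfvars so Terraform auto-loads it as terraform.tfvars
ln -sf "env/${PROJECT}.tfvars" "terraform.tfvars"

# Initialize the backend in the same ${PROJECT} so state and variables can't be mixed
terraform init -backend-config="bucket=tf-state-${PROJECT}"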

johnstrickler commented 4 years ago

@dinvlad Can you expand further on your linking trick?

epomatti commented 4 years ago

I've seen options to decode and merge local variables with .yaml and .json files.

But is it possible to decode or merge a .tfvars file?

epomatti commented 4 years ago

Since I'm using Terraform Cloud I had to use this variable TFC_WORKSPACE_NAME

variable "TFC_WORKSPACE_NAME" {
  type = string
}

locals {
  env = merge(
    yamldecode(file("env/${var.TFC_WORKSPACE_NAME}.yaml"))
  )
}

resource "azurerm_resource_group" "group" {
  name     = local.env.group
  location = local.env.location
}
pecigonzalo commented 4 years ago

@dinvlad Can you expand further on your linking trick?

Terraform automatically loads terraform.tfvars or any $NAME.auto.tfvars file, so you can use a symlink from a var file to a "linked" file locally with one of those names on initialization and avoid having to pass -var-file=path.
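
For example, one way that symlink might look, deriving the name from the current workspace (the env/ layout is an assumption):

ln -sf "env/$(terraform workspace show).tfvars" "workspace.auto.tfvars"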


While this (I particularly use this one or the TF_CLI_ARGS one) and some of the others are really clever, they break a lot of functionality.

Using maps in vars

While this is fully valid functionality and keeps all the new TF12 types etc., it produces code that looks a lot more complex, as all values now have to be a map and all assignments have to be a lookup in the map.
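
For illustration, a minimal sketch of this pattern (the variable name and values here are made up):

variable "instance_type" {
  type = map(string)
  default = {
    staging    = "t3.small"
    production = "t3.large"
  }
}

locals {
  # every assignment becomes a lookup keyed by the current workspace
  instance_type = var.instance_type[terraform.workspace]
}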

Using YAML/JSON and loading to locals

This is great, as it avoids a lot of what I mentioned for the maps in vars, but now you can't take advantage of type checking, var definitions, etc. This is really exploiting locals to get vars, which, while great, is IMO a hacky solution.

Symlink/TF_CLI_ARGS/other scripting

This is the current solution I use, as it has netted me the best results, but as said, it's hard to sync across all devs, since everyone needs this wrapper or bootstrap script. You can use dotenv to automatically assign TF_CLI_ARGS, but you have no easy/clean way to tell it you're on workspace X when you switch, without some script magic that, again, everyone has to have. The problem with TF_CLI_ARGS is that it is a bit broken: you have to set args for each command you want them on, instead of setting them top-level in TF_CLI_ARGS, because otherwise some commands break. You can use zsh or other shell hooks to automatically set those based on the output of terraform workspace show, but again you have to sync all your devs and CI on that script.
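
For reference, a sketch of the per-command variant being described (file layout is an assumption):

# TF_CLI_ARGS would apply to every command (and breaks some of them),
# so the extra args are set per command instead:
export TF_CLI_ARGS_plan="-var-file=$(terraform workspace show).tfvars"
export TF_CLI_ARGS_apply="-var-file=$(terraform workspace show).tfvars"
# note: these are evaluated when exported, so they must be re-run
# (or hooked into the shell) after `terraform workspace select`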


This has been open for a while and there have been a couple of good solutions presented here that would simplify all workflows. IMO, something like https://github.com/hashicorp/terraform/issues/15966#issuecomment-582095800 would be perfect, or even something like what is done with $NAME.auto.tfvars, like having $WORKSPACE_NAME.workspace.tfvars.

cvemula1 commented 4 years ago

I have existing prod infrastructure which has one variable.tf file, and I'm trying to separate out another dev environment which will use the same TF modules as prod but with different variable files. Now I have:

* dev.tfvars
* prod.tfvars

I'm trying to run for DEV:

terraform apply -input=false $DEV_PLAN -var-file="dev.tfvars"

For PROD:

terraform apply -input=false $PLAN -var-file="prod.tfvars"

The plan looks good, but I'm worried about the single state file, which is defined with an S3 backend.

If I run the dev apply, will it affect my existing state file in the S3 bucket, which could cause errors during prod deployment?

maxgio92 commented 4 years ago

@cvemula1 that's a bit out of scope... Anyway, if I understood your point, I think you should segregate envs with different workspaces, or with explicitly different resources in the same workspace. https://www.terraform.io/docs/state/workspaces.html

fewbits commented 4 years ago

Now I'm using this approach:

variables.tf:

variable "azure_location" {
  type = map
  default = {
    "dev" = "East US 2"
    "qa" = "East US 2"
    "prod" = "Brazil South"
  }
}

resource-groups.tf:

resource "azurerm_resource_group" "my-resource-group" {
  name     = "MY-RESOURCE-GROUP"
  location = var.azure_location[terraform.workspace]
}

This way, when I execute terraform workspace select prod, I get the variables associated with terraform.workspace => prod.

I don't know if this is the best approach, though.

chrisfowles commented 4 years ago

@fewbits - I've done a similar pattern in the past. I think it's probably better to use locals instead of variable defaults for this though - unless you really explicitly want to be able to override it from outside the module.
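
A minimal sketch of that locals variant, reusing the values from the example above:

locals {
  azure_location = {
    dev  = "East US 2"
    qa   = "East US 2"
    prod = "Brazil South"
  }
}

resource "azurerm_resource_group" "my-resource-group" {
  name     = "MY-RESOURCE-GROUP"
  location = local.azure_location[terraform.workspace]
}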

kylewin commented 4 years ago

Hi @fewbits, how can you specify the workspace in the terraform block when using your pattern? (I'm using Terraform Cloud.)

terraform {
  required_version = ">= 0.13.0"

  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "my org"

    workspaces {
      name = "dainfra-dev"  # <<< I mean here: how do I dynamically load the workspace name here when using your pattern?
    }
  }
}

From the Terraform Cloud docs, I can use prefix = "dainfra-" here to apply the code to all three dainfra envs: dev, stg, prod.

But the interpolation of terraform.workspace will always return "default", so I cannot use your pattern.
sereinity commented 4 years ago

@kyledakid in all my TF projects I put:

locals {
  workspace = var.workspace != "" ? var.workspace : terraform.workspace
}

For my terraform cloud remote runs I define a variable workspace with the current workspace name in it. And I always refer to the workspace with local.workspace.

With this, I can reuse my code for remote and local runs.
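
A sketch of what that assumes (the empty default is an assumption, so local runs fall back to terraform.workspace):

variable "workspace" {
  description = "Set in Terraform Cloud to the workspace name; leave empty for local runs"
  type        = string
  default     = ""
}

locals {
  workspace = var.workspace != "" ? var.workspace : terraform.workspace
}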

kylewin commented 4 years ago

Yes, exactly. I found this somewhere on Medium after posting the above question to you. Thank you @sereinity!

fewbits commented 4 years ago

Hi @fewbits, how can you specify the workspace in the terraform block when using your pattern? (I'm using Terraform Cloud.)

terraform {
  required_version = ">= 0.13.0"

  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "my org"

    workspaces {
      name = "dainfra-dev"  # <<< I mean here: how do I dynamically load the workspace name here when using your pattern?
    }
  }
}

From the Terraform Cloud docs, I can use prefix = "dainfra-" here to apply the code to all three dainfra envs: dev, stg, prod.

But the interpolation of terraform.workspace will always return "default", so I cannot use your pattern.

Hi @kyledakid. I do not use Terraform Cloud (I just use Terraform CLI commands in a CI/CD pipeline).

@sereinity, nice hint.

github-usr-name commented 3 years ago

Avoiding the need to modify main.tf just for a new workspace (and therefore allowing non-SCM'd local workspaces by completely decoupling the workspace settings from the source code):

main.tf

locals {
  workspace_yaml_file = "env/${terraform.workspace}.yaml"
  cluster = {
    nodes = (
      coalescelist(
        var.nodes,
        fileexists(local.workspace_yaml_file)
        ? yamldecode(file(local.workspace_yaml_file))
        : []
      )
    ),
    ssh_authorized_key = var.ssh_public_key_cicd,
    // ....
  }
}

If an explicit -var switch is used to set nodes, then that value is selected by the coalescelist function; if not, it will look for a file matching the workspace/environment name and attempt to decode it as YAML. If that fails, an empty array is returned, which triggers an error from coalescelist. This could obviously be tweaked to use whatever data type you need.

I don't think I've seen a previous solution which completely decouples the workspaces from main.tf - for example, @mhfs's otherwise great solution and @bborysenko's extension of it both require main.tf having knowledge of the available workspaces (e.g., workspaces = "${merge(local.staging, local.production)}")

github-usr-name commented 3 years ago

locals {
  workspace = var.workspace != "" ? var.workspace : terraform.workspace
}

Nice tip @sereinity , I'm stealing it :+1:

joakimhellum commented 3 years ago

@epomatti

But is it possible to decode or merge a .tfvars file?

There is a feature request for a tfvarsdecode function here: https://github.com/hashicorp/terraform/issues/25584

There is also an experimental "tfvars" provider in the registry that should allow this:

provider "tfvars" {}

data "tfvars_file" "example" {
  filename = "${terraform.workspace}.tfvars"
}

output "variables" {
  value = data.tfvars_file.example.variables
}

We have been experimenting with this for a while, but not sure if it's a pattern we really want to use. The only advantage we found so far is that by having the tfvars_file data source (or a future tfvarsdecode function), we can simplify our root modules by not having any variables, since we make the variables part of the configuration and not something the developers need to specify for each run. But in most cases we could achieve the same with data-only modules, which is less of a hack.
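
A rough sketch of the data-only-module alternative mentioned here (module layout, attribute names, and values are illustrative assumptions):

# modules/env/main.tf - hypothetical data-only module
locals {
  settings = {
    staging    = { location = "East US 2" }
    production = { location = "Brazil South" }
  }
}

output "settings" {
  value = local.settings[terraform.workspace]
}

# root module
module "env" {
  source = "./modules/env"
}

# reference values as module.env.settings.location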

For us the biggest challenge is that Terraform Cloud workspace is not the same as a Terraform OSS workspace. We hope any new workspace features means less difference between the two.

soostdijck commented 3 years ago

Is there any news about a proper implementation yet? I'm still using my workaround, which is OK, but the IDEs of those who use one do not understand it.

imarkvisser commented 3 years ago

This would be a great feature.

infogulch commented 3 years ago

The tfvars_file provider is a pretty neat solution (well, assuming Hashicorp doesn't do anything with this themselves; I don't have my hopes up). The source repo innovationnorway/terraform-provider-tfvars was archived by the original authors after publishing in Jan 2021, but I think with just one enhancement it would be perfect:

The provider should accept as arguments any number of variable blocks that have the same arguments as the standard variable declaration. While loading the file, it should validate that the type of each value matches the type declared in the data block, and support other things like validations, defaults, etc. With this you'd virtually be able to cut/paste normal variable declarations into the tfvars_file data block for any variables that are workspace-specific. You'd still have to set one variable per workspace: the name of the environment to look up the file from, but that's much better than N. E.g.:

Before:

variable "subnet" {
    description = "Subnet space of this environment, e.g. 10.1.0.0/16"
    type = string
}

# use it like var.subnet

After:

data "tfvars_file" "env" {
  filename = "../variables/${var.environment_name}.tfvars"  

  variable {
      name = "subnet"
      description = "Subnet space of this environment, e.g. 10.1.0.0/16"
      type = string
  }
}

# use it like data.tfvars_file.env.variables.subnet
joakimhellum commented 3 years ago

@infogulch We experimented with the innovationnorway/tfvars provider for a while, but it felt like too much of a hack in the end, since it only solved the issue of workspace variables, when we really needed "workspaces as code" (see Re-imagining Terraform Workspaces). While I no longer work for that organization, I'm happy to implement the features you suggested and continue experimenting with the tfvars provider. Personally I would prefer to see a tfvarsdecode function added to Terraform.
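
For context, a sketch of roughly how such a function might be used; note that tfvarsdecode is only a proposal (#25584) and does not exist in Terraform today:

locals {
  # hypothetical: tfvarsdecode is not a real Terraform function yet
  env = tfvarsdecode(file("${terraform.workspace}.tfvars"))
}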

infogulch commented 3 years ago

@joakimhellum Thanks for the reference to that blog post, it was an interesting read. I may just be failing to see it, but I don't understand what the workspaces idea offers that couldn't be replicated by invoking a module declared in a sibling directory; with the benefit that it doesn't introduce any new constructs into the language and seems to already handle some of the edge cases that this design ran into. Maybe I don't understand what you mean by "workspaces as code" and how invoking a separate module is deficient for your use-case.

I think the idea of a tfvarsdecode function is neat, but my concern with it is the same as with the initial implementation of tfvars_file provider: That there's no way to declare/restrict the types of values that are returned. Not that parsing itself could cause problems, but passing arbitrary types to expressions might cause unexpected results. Maybe I'm just being a paranoid ninny but I generally prefer very strict checks on all inputs to programs I write (HCL2 is turing complete and definitely counts as a program). 🐘

I would be happy to collaborate with you to review/test enhancements to the implementation of tfvars_file as I have time. Of course, my organization is still evaluating different strategies to solve this problem so I can't guarantee a user. 😄 Maybe a more complete example could help advance the conversation on this issue.

soostdijck commented 3 years ago

For me one of the main issues is that using -var-file will be forgotten/unknown by my colleagues, and they'll come asking me why it's broken. It makes more sense to simply specify the var file to be loaded from inside the TF code. It saves typing and just makes sense.

I'm not sure why this is not just added to the roadmap by hashicorp, or how we can get it on there.

bpoland commented 3 years ago

We ended up using yaml variables files that get loaded into a local variable, like this:

locals {
  env = yamldecode(file("env.${terraform.workspace}.yaml"))
}

A little less robust, since you can't enforce the presence of specific values in the YAML file, but it does work. You can also merge together several YAML files (e.g. if you want global defaults with the option to override for a specific module/directory), as sketched below.
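
A sketch of that merge variant (the file names are assumptions; later arguments win on conflicting keys):

locals {
  env = merge(
    yamldecode(file("env.defaults.yaml")),
    yamldecode(file("env.${terraform.workspace}.yaml"))
  )
}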

arash-bizcover commented 3 years ago

You guys just need Terragrunt. What HashiCorp develops in 3 years, those guys pack and push in a week.

rokcarl commented 3 years ago

I use the following: terraform plan -var-file "$(terraform workspace show).tfvars". This means I can use the same command, no matter the workspace selected.

But it is kinda sad that after 4 years this is still not implemented. I mean, "How hard can it be?" 🙂

edantes-1845 commented 2 years ago

I use the following: terraform plan -var-file "$(terraform workspace show).tfvars". This means I can use the same command, no matter the workspace selected.

But it is kinda sad that after 4 years this is still not implemented. I mean, "How hard can it be?" 🙂

I use it too. It is a good choice.

infogulch commented 2 years ago

terraform plan -var-file "$(terraform workspace show).tfvars"

This is a great idea for most. Ironically this is the one solution that can't work for Terraform Enterprise customers because the terraform cli is invoked by the TFE node. Funny when you pay only for it to be worse 🤦

rauerhans commented 2 years ago

terraform plan -var-file "$(terraform workspace show).tfvars"

Very clever, but it's possible to get it wrong if you forget to derive the TF workspace automatically. Still, after going through the whole thread here, I'll roll with this, thanks!

pecigonzalo commented 2 years ago

terraform plan -var-file "$(terraform workspace show).tfvars"

This is a great idea for most. Ironically this is the one solution that can't work for Terraform Enterprise customers because the terraform cli is invoked by the TFE node. Funny when you pay only for it to be worse 🤦

@infogulch What if you do -var-file "foo.tfvars" via TF_CLI_ARGS for each workspace?

terraform plan -var-file "$(terraform workspace show).tfvars"

Very clever, but it's possible to get it wrong if you forget to derive the TF workspace automatically. Still, after going through the whole thread here, I'll roll with this, thanks!

@rauerhans could you elaborate?

rokcarl commented 2 years ago

It could be dangerous if you think you're on dev, run this command, but you're actually on the prod workspace so you apply to production. That's why I have Oh My Zsh and it always shows me which workspace I'm on before running any Terraform command.

raman-nbg commented 2 years ago

This is my second day of writing Terraform for a multi-staging setup and I think I should switch to a different tool. It looks like there is no clean solution available for using different tfvars files per workspace (with Terraform Cloud). The workarounds described here only apply to running/applying TF locally.

Why isn't there any option in the TF Cloud UI where I can specify which tfvars files should be used? This seems so simple...

matti commented 2 years ago

@raman-nbg yes, do it before you have a massive set of Terraform written. I wish I were you.

thomas-riccardi commented 2 years ago

@raman-nbg in TFC we ended up using -var-file (from https://github.com/hashicorp/terraform/issues/15966#issuecomment-366012578) with the TF_CLI_ARGS env var (https://www.terraform.io/cli/config/environment-variables#tf_cli_args-and-tf_cli_args_name): TF_CLI_ARGS_plan=-var-file=staging.env.tfvars. It works well enough.

(TF_CLI_ARGS_plan instead of TF_CLI_ARGS for TFC, as it does a plan, saved to a file, then an apply from that file)

Something similar is documented here: https://support.hashicorp.com/hc/en-us/articles/4416764686611-Using-Terraform-Variable-Definition-Files-in-Terraform-Cloud-Enterprise

paololazzari commented 2 years ago

Will this feature ever be added? It's been almost 5 years since this issue was opened

ghost commented 2 years ago

Will this feature ever be added? It's been almost 5 years since this issue was opened

I don't think so, haha

yukari1414 commented 2 years ago

still waiting... 😂

crw commented 2 years ago

This issue is not currently prioritized. It does rank highly on our list of most requested issues, however that does not guarantee it will be addressed in the near future. Thanks for your interest and your patience.

oniGino commented 2 years ago

Here is yet another workaround structure:

locals {
  workspaces = {
    workspace1 = {
      key1 = "value1"
      key2 = "value2"
    }
    workspace2 = {
      key1 = "foo1"
      key2 = "foo2"
    }
  }

  ws = local.workspaces[terraform.workspace]
}

Now all workspace-specific values can be referenced as local.ws.key1, local.ws.key2, or local.ws["key1"].

As an added bonus, you get an error when trying to run in a workspace that isn't defined in locals.

github-usr-name commented 2 years ago

> This issue is not currently prioritized. It does rank highly on our list of most requested issues

^^^ I do not think this word means what you think. The issue is _clearly_ a high priority for your customers.
github-usr-name commented 2 years ago

@oniGino Reasonable approach, though without a bit of juggling it has the disadvantage of coupling the settings for all possible environments into a single file. I tend to use this pattern quite a lot in various languages - it's essentially a poor man's DI ;)

philomory commented 2 years ago

@github-usr-name Although I do not work for Hashicorp, I can almost guarantee you that they know exactly what "prioritized" means; in this context, "not currently prioritized", as in, "we have not assigned this issue a priority in our backlog/work queue".

briceburg commented 2 years ago

Somehow inventing a whole new language (HCL v1) over adopting YAML/JSON or CUE/jsonnet took "priority" over sensible features like this. I find it strange that golang-friendly devs would not want to create conventions around such a common feature; the language itself preaches idiom and "readability"... my sad $0.02

matti commented 2 years ago

Not currently prioritized = this issue does not affect terraform cloud


Bessonov commented 2 years ago

Somehow inventing a whole new language (HCL v1) over adopting YAML/JSON or CUE/jsonnet took "priority" over sensible features like this. I find it strange that golang-friendly devs would not want to create conventions around such a common feature; the language itself preaches idiom and "readability"... my sad $0.02

Your comment is off-topic, because this issue has nothing to do with the configuration language, but with how per-environment deviations can be introduced.

AFAIK HCL is used in multiple HashiCorp products, and therefore on its own it makes perfect sense. But I'm in the cohort that says a declarative language is a bad idea for infrastructure management or for any dynamic task. Of course, there have been changes since Terraform 0.12 which made Terraform usable for most use cases.

Back to your comment. Your suggestions would make it even worse. YAML/JSON/jsonnet are more dysfunctional than HCL. And CUE was introduced at the end of 2018, long after HCL was used in production worldwide. I never used CUE (and don't plan to), but at first glance there is no real benefit for HashiCorp and(!) the community, just a bunch of disadvantages.

Therefore, if switching to another language, the best choice would probably be a general-purpose language, like Pulumi did.

nitrocode commented 1 year ago

It would be very nice if this was built into Terraform.

Note that terraform.workspace is unavailable in variable validation blocks, so those cannot be used for this.

Assumptions

If there are consistent workspace names such as `ue1-prod`, `ue1-dev`, etc. and inputs such as:

# ue1-prod.tfvars
short_region = "ue1"
env          = "prod"

# ue1-dev.tfvars
short_region = "ue1"
env          = "dev"

terraform workspace new ue1-dev
terraform workspace new ue1-prod
terraform workspace select ue1-dev

Option 1: consistent workspaces with a local check

main.tf:

variable "short_region" {
  type = string
}

variable "env" {
  type = string
}

locals {
  check_workspace = {
    (terraform.workspace) = "some-good-value-doesn't-matter"
  }["${var.short_region}-${var.env}"]
}

If you selected the `ue1-prod` workspace and used `ue1-dev.tfvars` by mistake, you would pass in `dev` for `env`, and the `check_workspace` map would only contain the key `ue1-prod`, so the lookup for `ue1-dev` would fail. The lookup only succeeds (and becomes an unused local) when the workspace matches the naming convention implied by the inputs.

Returns

$ terraform plan -var-file="ue1-prod.tfvars"

│ Error: Invalid index
│
│   on main.tf line 12, in locals:
│   12:   }["${var.short_region}-${var.env}"]
│     ├────────────────
│     │ terraform.workspace is "ue1-dev"
│     │ var.env is "prod"
│     │ var.short_region is "ue1"
│
│ The given key does not identify an element in this collection value.

Option 2: consistent workspaces with a null_resource check

resource "null_resource" "workspace_check" {
  lifecycle {
    precondition {
      condition     = contains(split("-", terraform.workspace), var.short_region)
      error_message = "The selected workspace \"${terraform.workspace}\" does not have the correct short_region \"${var.short_region}\""
    }

    precondition {
      condition     = contains(split("-", terraform.workspace), var.env)
      error_message = "The selected workspace \"${terraform.workspace}\" does not have the correct env \"${var.env}\""
    }
  }
}

Returns

$ terraform plan -var-file="ue1-prod.tfvars"

│ Error: Resource precondition failed
│
│   on main.tf line 16, in resource "null_resource" "workspace_check":
│   16:       condition     = contains(split("-", terraform.workspace), var.env)
│     ├────────────────
│     │ terraform.workspace is "ue1-dev"
│     │ var.env is "prod"
│
│ The selected workspace "ue1-dev" does not have the correct env "prod"

Option 3: terraform wrapper (shell script or atmos)

We hit a similar problem with clients and developed a tool called [atmos](https://github.com/cloudposse/atmos) to get around this limitation.

1. define tfvars via yaml (we call it a stack)
2. define a root terraform module (we call it a component)
3. run `atmos terraform plan example --stack uw2-dev`
4. deep merge `uw2-dev.yaml` and then generate a tfvars file
5. create or select a workspace (which is derived from the yaml stack), i.e. `uw2-dev`
6. run the terraform plan

# stacks/uw2-dev.yaml
components:
  terraform:
    example:
      vars:
        # override the value of var.hello
        hello: world

# components/terraform/example/main.tf
variable "hello" {
  default = "hello"
}

output "hello" {
  value = var.hello
}

$ brew install atmos
$ wget https://raw.githubusercontent.com/cloudposse/atmos/master/atmos.yaml
$ atmos terraform plan example --stack uw2-dev

The `atmos` command will then create the tfvars JSON in `terraform/components/example/uw2-dev-example.tfvars.json`:

{
  "hello": "world"
}

The `atmos` command will then run the following:

cd components/terraform/example
terraform init

# if the workspace doesn't exist
terraform workspace new uw2-dev
# if the workspace exists
terraform workspace select uw2-dev

# finally
terraform plan -var-file uw2-dev-example.tfvars.json

This returns no error, since the mismatch is prevented as long as the terraform wrapper is used exclusively.

iateadonut commented 1 year ago

Can I +1 this feature request?

I have a workspace 'production' and I was really surprised when I dropped a terraform.tfvars file into terraform.tfstate.d/production/ and it didn't automatically read from it.

greyvugrin commented 1 year ago

This doesn't solve the direct terraform integration part, but this script makes it easier to not schlep around -var-file=$(terraform workspace show).tfvars for every command in the meantime.

https://gist.github.com/greyvugrin/d7e43b4834796101c6c328718a1b7250

# Replaces stuff like:
# - terraform plan -var-file=$(terraform workspace show).tfvars
# - terraform import -var-file=$(terraform workspace show).tfvars aws_s3_bucket.bucket MY_BUCKET
# with:
# - ./tf.sh dev plan
# - ./tf.sh dev import aws_s3_bucket.bucket MY_BUCKET
michael-mcmasters commented 9 months ago

I was able to do this with Terraform Cloud by adding an environment variable to the workspace: Key: TF_CLI_ARGS Value: -var-file "dev.tfvars"

Now when I run terraform apply it turns into terraform apply -var-file dev.tfvars. It will do this for all terraform commands. (See here: https://developer.hashicorp.com/terraform/cli/config/environment-variables#tf_cli_args-and-tf_cli_args_name)

It would be better if we could apply this in the code itself though.

DavidGamba commented 9 months ago

One hard reason to support conditionally loading tf files is that moved blocks don't allow variables. That means that one can't do refactors in a CI environment, only locally, and the moved blocks can't be committed to version control.
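
For illustration, a minimal moved block (the addresses are made up); the from/to arguments only accept static resource addresses, so they cannot be switched per workspace:

moved {
  from = aws_instance.app
  to   = aws_instance.app_server
}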

As for working with multiple workspaces, I have a tool that automatically sets TF_DATA_DIR and TF_WORKSPACE to allow you to work with multiple workspaces in different terminals bt