Closed mbarr-hrp closed 5 months ago
Hi @mbarr-hrp, thanks for reaching out. Do I understand correctly that you'd like to use a real S3 backend and run terraform/tflocal to switch back and forth between a mocked local workspace and live dev/stage/prod workspaces while your state file is stored remotely? If so, it's not trivial, as tflocal does not support workspace separation out of the box. The assumption is that users run tflocal locally and that the state file is not stored remotely, at most in LocalStack for testing.
So unfortunately there is no documented resource for such a use case, but it could make a good blog post; I'll keep it in mind! 🙏
Everything from this point down is only my opinion on how to resolve your situation, so apologies in advance for the long read 🙂
By leveraging Terraform's override-file behaviour (the same mechanism tflocal uses), you can always override tflocal's generated configuration by defining the desired resources, backends, and outputs yourself. You can find more information in the official Terraform docs here. Relevant parts:
If more than one override file defines the same top-level block, the overriding effect is compounded, with later blocks taking precedence over earlier blocks. Overrides are processed in order first by filename (in lexicographical order) and then by position in each file.
The presence of a block defining a backend (either cloud or backend) in an override file always takes precedence over a block defining a backend in the original configuration. That is, if a cloud block is set within the original configuration and a backend block is set in the override file, Terraform will use the backend block specified in the override file upon merging. Similarly, if a backend block is set within the original configuration and a cloud block is set in the override file, Terraform will use the cloud block specified in the override file upon merging.
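As a concrete sketch of that mechanism, you could generate an override file from a shell snippet; the file name `backend_override.tf`, bucket, key, and region below are illustrative assumptions, not values from this thread:

```shell
# Write an override file so plain terraform targets the real S3 backend.
# Per the docs quoted above, a backend block in an *_override.tf file
# takes precedence over the backend in the original configuration.
cat > backend_override.tf <<'EOF'
terraform {
  backend "s3" {
    bucket = "my-real-state-bucket"        # assumed bucket name
    key    = "env/dev/terraform.tfstate"   # assumed state key
    region = "eu-central-1"                # assumed region
  }
}
EOF
```

Deleting the override file (and re-running init) would drop you back to tflocal's defaults.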
Regarding an easy switch between tflocal and terraform depending on the workspace in use, I'd control it with a small shell script that, based on the current workspace, sets the environment variables and invokes the right binary.
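A minimal sketch of that dispatch, assuming the convention (my invention, not from this thread) that a workspace literally named `local` is the mocked LocalStack one:

```shell
# Map the current workspace name to the binary that should handle it.
tf_bin_for() {
  case "$1" in
    local) echo tflocal ;;   # mocked LocalStack workspace
    *)     echo terraform ;; # real dev/stage/prod workspaces
  esac
}

# Usage sketch: query the current workspace, then dispatch, e.g.
#   ws="$(terraform workspace show)"
#   "$(tf_bin_for "$ws")" plan
# In the tflocal branch you could also export dummy AWS credentials
# (AWS_ACCESS_KEY_ID=test etc.), which LocalStack accepts.
```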
Hope this helps; if you have any questions, please let us know.
The answer turned out to be to just run tflocal init -reconfigure, or terraform init -reconfigure, as needed.
Is it possible to use S3 backend storage with both a local workspace and other workspaces that are real, shared storage on AWS S3?
The goal here is to easily test something locally, then test / deploy to "real" dev/staging/prod.
It looks like Terraform detects the change in endpoints as a backend change (the backend info is stored in .terraform/terraform.tfstate), and even if you use only the single default workspace, it won't let you switch back and forth between tflocal and terraform (with no overrides).
Is there any kind of documentation / posts on suggested developer workflows?