Closed aoggz closed 4 years ago
It appears that Runway only supports Terraform backends of type `s3`, currently.
There shouldn't be anything Runway is doing that prevents the use of any backend. Yes, the `terraform_backend_config` config option is focused around using the s3 backend (given the values it supports), but the `backend-*.tfvars` files have no restrictions. When one of these files is found, it is the same as using `terraform init -reconfigure -backend-config=backend-*.tfvars`.

Have you tried the configuration described here https://www.terraform.io/docs/backends/types/remote.html#using-cli-input but placing the contents of the described `backend.hcl` file into your `backend-*.tfvars` file? What is Terraform saying the issue is when trying to use it this way?
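For illustration, a minimal sketch of what such a `backend-*.tfvars` file might contain for the `remote` backend; the hostname and organization values below are placeholders, not taken from this thread:

```hcl
# backend-prod.tfvars -- hypothetical example values
hostname     = "app.terraform.io"   # TFE/Terraform Cloud hostname
organization = "example-org"        # your organization's name
```

Nested blocks like `workspaces` are a separate question, since `-backend-config=key=value` style CLI arguments only cover flat attributes.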
As for supporting terragrunt, here is our stance as of April of this year: https://github.com/onicagroup/runway/issues/226#issuecomment-613021171
So it looked like your suggestion to use `backend-*.tfvars` worked for getting the backend parameters correct, but Runway failed shortly thereafter:

[runway] infrastructure:workspace currently set to default; switching to my-awesome-workspace...
workspaces not supported

Terraform Enterprise/Cloud workspaces do not allow you to switch workspaces: only `default` is permitted/relevant in the given cloud workspace.

That said, there are other issues that I'm realizing. We've leaned heavily on using Runway's variables & interpolation in favor of using `*.tfvars` or other module-specific variable syntax. Terraform Cloud/Enterprise does not support specifying those variables via the CLI. They must be specified via `*.auto.tfvars` files or set in the Cloud/Enterprise workspace manually (or at least before you try to use the given workspace for a `plan`/`apply`).
While I was able to overcome the reported shortcoming (thank you!), it looks like there's more to it than I initially realized. At least as far as that output I shared above goes.
The ideal use case for my team is:

1. The `terraform_backend_config` config option is updated to support the new types required for the `remote` backend.
2. `terraform init` is run for backends of type `remote`.
3. A `*.auto.tfvars` file is created based on the parameters I provide in the `runway.yml`/`runway.variables.yml` files, where `*` is the Runway environment/Terraform Cloud workspace name.

This is pretty opinionated, I know.
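As a rough sketch of that use case (the module path, parameter names, and values below are hypothetical, not part of any released Runway schema):

```yaml
# runway.yml -- hypothetical sketch of the requested feature
deployments:
  - modules:
      - path: infrastructure.tf
        options:
          terraform_backend_config:
            # new values that would be needed for the remote backend
            hostname: app.terraform.io
            organization: example-org
        parameters:
          # values that would be written to <env>.auto.tfvars
          instance_count: 2
    regions:
      - us-east-1
```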
Thanks for helping flesh out what the feature could look like. Admittedly, I am not a Terraform expert (or much of a user) but this seems like a fairly sane approach.
Is this something that you'd be willing to accept a PR for? I could take a stab at it!
PRs are always welcome for bug fixes, planned/approved features, etc. For this one, it may be better to let me take care of it. I have a few thoughts on removing the restrictions on values passed to `terraform_backend_config` to open it up for future needs rather than just supporting the `remote` backend. The other items I will likely turn into toggleable options to allow their use at the user's discretion.

I should be able to start on this later today and will let you know when I have a usable build if you would like to beta test it.
Example implementation using a working POC of this feature: https://github.com/ITProKyle/terraform-cloud-test. It still has a ways to go before it's ready for release or production use, but it's in a place where deploy/destroy/plan are all functional and can be tested.
A few notes on some of the above points:

> `terraform_backend_config` config option is updated to support the new types required for the `remote` backend

`hostname` and `organization` are working here, but I'm not seeing a way to supply `workspaces` via the `-backend-config` CLI option. I'm open to any ideas on how to achieve this, but I'm only seeing it being passed via a file in the docs.
> `*.auto.tfvars` file is created based on the parameters I provide in the `runway.yml` file, where `*` is the Runway environment/Terraform Cloud workspace name

This is working and requires a new option to be enabled. I opted to go with `runway-parameters` for the file name in place of the environment/workspace name, since Terraform isn't concerned with what the files are named; it loads all `*.auto.tfvars` files. This file is also deleted after the module finishes.
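For example, assuming a module with hypothetical parameters such as `region: us-east-1` and `instance_count: 2` defined in `runway.yml`, the generated file would follow Terraform's standard JSON variable format, roughly:

```json
{
  "region": "us-east-1",
  "instance_count": 2
}
```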
> Terraform Enterprise/Cloud workspaces do not allow you to switch workspaces

From my testing, switching workspaces is supported when initiating locally, but the cloud runners always see the workspace as being `default`. Also, `default` cannot be used from the CLI. Using different workspaces is achieved by using this in the backend config:

```hcl
workspaces {
  prefix = "terraform-cloud-test-"
}
```

Then, Runway's standard workspace management can be used. Using this, the Runway deploy environment `lab` is equivalent to `terraform-cloud-test-lab` in Terraform Cloud.
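Put together, a complete backend block for this setup might look like the following sketch; the hostname and organization are placeholder values:

```hcl
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"  # placeholder
    organization = "example-org"       # placeholder

    workspaces {
      # Runway workspace "lab" maps to "terraform-cloud-test-lab"
      prefix = "terraform-cloud-test-"
    }
  }
}
```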
Hey @ITProKyle. Thanks for working on this so quickly!

On passing `workspaces` through the CLI: can we provide a complex object in the runway configuration? That could work on the Runway side. I'm with you on the Terraform side of that equation... I've yet to see anything in any documentation indicating how this could be passed to Terraform.
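For example (purely a sketch of the idea, not a supported schema), the Runway side could accept something like:

```yaml
# Hypothetical: nested object under terraform_backend_config
options:
  terraform_backend_config:
    hostname: app.terraform.io      # placeholder
    organization: example-org       # placeholder
    workspaces:
      prefix: terraform-cloud-test-
```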
I tried out your POC repo with a few different backend configurations and ran into some issues. For now, I just started by keeping the configuration within the Terraform module itself to see if Runway can work with the new CLI paradigm required for the `remote` backend.

**`main.tf` using `workspaces.prefix`**

Below is the output that I get. I only have one remote workspace (in TFE) configured that matches the pattern, yet Terraform still presents an interactive prompt as part of the `init` command to select the remote workspace.
➜ pipenv run runway plan
Configured modules
1: api
2: infrastructure
Enter number of module to run (or "all") [all]: 2
[runway] deploy environment "aoggz" is explicitly defined in the environment
[runway] if not correct, update the value or unset it to fall back to the name of the current git branch or parent directory
[runway]
[runway]
[runway] deployment_1:processing deployment (in progress)
[runway] deployment_1:processing regions sequentially...
[runway]
[runway] deployment_1.infrastructure:processing module in us-east-1 (in progress)
[runway] infrastructure:init (in progress)
[runway] backend tfvars file not found -- looking for one of: backend-aoggz-us-east-1.tfvars, backend-aoggz.tfvars, backend-us-east-1.tfvars, backend.tfvars
Initializing modules...
...
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
The currently selected workspace (default) does not exist.
This is expected behavior when the selected workspace did not have an
existing non-empty state. Please enter a number to select a workspace:
1. aoggz
Enter a value:
After I select the workspace, `runway` does appear to do its thing as expected: it creates the `runway-parameters.auto.tfvars.json` file and starts to refresh state. I didn't get further than that due to an issue with missing AWS credentials, but I'm guessing that's because I don't know how to use `pipenv`.

**`main.tf` using `workspaces.name`**

Runway errs when it tries to determine if the current workspace matches the expected one.
➜ pipenv run runway plan
Configured modules
1: api
2: infrastructure
Enter number of module to run (or "all") [all]: 2
[runway] deploy environment "aoggz" is explicitly defined in the environment
[runway] if not correct, update the value or unset it to fall back to the name of the current git branch or parent directory
[runway]
[runway]
[runway] deployment_1:processing deployment (in progress)
[runway] deployment_1:processing regions sequentially...
[runway]
[runway] deployment_1.infrastructure:processing module in us-east-1 (in progress)
[runway] infrastructure:init (in progress)
[runway] backend tfvars file not found -- looking for one of: backend-aoggz-us-east-1.tfvars, backend-aoggz.tfvars, backend-us-east-1.tfvars, backend.tfvars
Initializing modules...
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
The following providers do not have any version constraints in configuration,
so the latest version was installed.
To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.
* provider.archive: version = "~> 1.3"
* provider.null: version = "~> 2.1"
* provider.random: version = "~> 2.3"
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
workspaces not supported
Traceback (most recent call last):
File "/Users/aoggz/.local/share/virtualenvs/dir-XUMrL-mE/bin/runway", line 33, in <module>
sys.exit(load_entry_point('runway', 'console_scripts', 'runway')())
File "/Users/aoggz/.local/share/virtualenvs/dir-XUMrL-mE/lib/python3.8/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/Users/aoggz/.local/share/virtualenvs/dir-XUMrL-mE/lib/python3.8/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/Users/aoggz/.local/share/virtualenvs/dir-XUMrL-mE/src/runway/runway/_cli/main.py", line 34, in invoke
return super(_CliGroup, self).invoke(ctx)
File "/Users/aoggz/.local/share/virtualenvs/dir-XUMrL-mE/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/Users/aoggz/.local/share/virtualenvs/dir-XUMrL-mE/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/Users/aoggz/.local/share/virtualenvs/dir-XUMrL-mE/lib/python3.8/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/Users/aoggz/.local/share/virtualenvs/dir-XUMrL-mE/lib/python3.8/site-packages/click/decorators.py", line 21, in new_func
return f(get_current_context(), *args, **kwargs)
File "/Users/aoggz/.local/share/virtualenvs/dir-XUMrL-mE/src/runway/runway/_cli/commands/_plan.py", line 28, in plan
Runway(ctx.obj.runway_config,
File "/Users/aoggz/.local/share/virtualenvs/dir-XUMrL-mE/src/runway/runway/core/__init__.py", line 106, in plan
self.__run_action('plan', deployments if deployments is not None else
File "/Users/aoggz/.local/share/virtualenvs/dir-XUMrL-mE/src/runway/runway/core/__init__.py", line 192, in __run_action
components.Deployment.run_list(action=action,
File "/Users/aoggz/.local/share/virtualenvs/dir-XUMrL-mE/src/runway/runway/core/components/_deployment.py", line 335, in run_list
cls(context=context,
File "/Users/aoggz/.local/share/virtualenvs/dir-XUMrL-mE/src/runway/runway/core/components/_deployment.py", line 198, in plan
return self.__sync('plan')
File "/Users/aoggz/.local/share/virtualenvs/dir-XUMrL-mE/src/runway/runway/core/components/_deployment.py", line 298, in __sync
self.run(action, region)
File "/Users/aoggz/.local/share/virtualenvs/dir-XUMrL-mE/src/runway/runway/core/components/_deployment.py", line 218, in run
Module.run_list(action=action,
File "/Users/aoggz/.local/share/virtualenvs/dir-XUMrL-mE/src/runway/runway/core/components/_module.py", line 303, in run_list
cls(context=context,
File "/Users/aoggz/.local/share/virtualenvs/dir-XUMrL-mE/src/runway/runway/core/components/_module.py", line 175, in plan
return self.run('plan')
File "/Users/aoggz/.local/share/virtualenvs/dir-XUMrL-mE/src/runway/runway/core/components/_module.py", line 204, in run
inst[action]()
File "/Users/aoggz/.local/share/virtualenvs/dir-XUMrL-mE/src/runway/runway/module/terraform.py", line 429, in plan
self.run('plan')
File "/Users/aoggz/.local/share/virtualenvs/dir-XUMrL-mE/src/runway/runway/module/terraform.py", line 412, in run
re.M).search(self.terraform_workspace_list()):
File "/Users/aoggz/.local/share/virtualenvs/dir-XUMrL-mE/src/runway/runway/module/terraform.py", line 339, in terraform_workspace_list
workspaces = subprocess.check_output(
File "/usr/local/opt/python@3.8/Frameworks/Python.framework/Versions/3.8/lib/python3.8/subprocess.py", line 411, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File "/usr/local/opt/python@3.8/Frameworks/Python.framework/Versions/3.8/lib/python3.8/subprocess.py", line 512, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['/Users/aoggz/.tfenv/versions/0.12.24/terraform', 'workspace', 'list']' returned non-zero exit status 1.
Thanks for the feedback.

> can we provide a complex object in the runway configuration ... I'm with you on the Terraform side of that equation

Yes, there are no restrictions on the data types that can be defined in the Runway config file. Some fields we do type validation on after parsing, but that does not include the backend config option. How to properly provide it to the Terraform CLI is the blocker.
> I only have one remote workspace (in TFE) configured that matches the pattern, yet terraform still presents an interactive prompt as part of the `init` command to select the remote workspace.

Thanks, I had not noticed this because I had a persistent `.terraform` directory in the project I am working out of. When that is present, the prompt does not appear. It looks like I can potentially get around this by setting `TF_WORKSPACE=<workspace>` in the environment, but I'll have to see how that impacts the existing functionality (e.g. when the workspace does not exist yet and needs to be created).

edit: setting this env var breaks `workspaces.name`
> I didn't get further than that due to an issue with missing AWS credentials

This is a limitation of using Terraform Cloud/Enterprise. The same way that `-var-file` can't be used, environment variables also do not carry over, so they would need to be set up as environment variables on the Terraform Cloud/Enterprise side. Another option would be to pass them as variables into the Terraform module, but to do this they would need to be written to the auto.tfvars file. While that is already possible if they are stored as environment variables, by using the `env` lookup to populate values in `parameters`, this pattern would result in those secrets being written as plain text to both the file and almost certainly stdout as a log message (depending on the log level being used).
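A sketch of that (discouraged) pattern, with hypothetical variable and parameter names, is below; the values would end up in plain text in the generated file:

```yaml
# runway.yml -- NOT recommended: secrets would be written in plain text
deployments:
  - modules:
      - path: infrastructure.tf
        parameters:
          aws_access_key_id: ${env AWS_ACCESS_KEY_ID}
          aws_secret_access_key: ${env AWS_SECRET_ACCESS_KEY}
```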
> Configuring backend in `main.tf` using `workspaces.prefix`: Runway errs when it tries to determine if the current workspace matches the expected one

Actually, this is happening when Runway is trying to figure out whether it can just switch to the needed workspace or whether the workspace needs to be created.
I hadn't started much testing along the lines of using `workspaces.name` but did add the `terraform_workspace` option in preparation for working with it. The option allows for explicitly defining the Terraform workspace to use. I ran a few tests to try it and found:

- `TF_WORKSPACE` will break this
- `default`

I am not completely sold on the use of `workspaces.name` alongside Runway since it effectively locks you into one workspace. Using `workspaces.prefix` functions in a much more predictable way in the context of Runway. That's not to say I don't think it should be supported if possible, but the configuration options should be biased toward `workspaces.prefix`.
I updated https://github.com/ITProKyle/terraform-cloud-test to include example uses of both `workspaces.name` and `workspaces.prefix`. I also updated the lock file to reference a new commit that should now autodetect when a remote backend is being used and deal with it appropriately depending on the use of `name` or `prefix`.

Through a bit of digging, I was able to find this issue: https://github.com/hashicorp/terraform/issues/21830. It would appear that not being able to specify `workspaces` is a bug in v0.12; it had worked in v0.11. The issue has been open for over a year at this point, so there's no telling if it will be fixed, but hopefully it will be fixed for v0.13. I'm not sure whether adding support for providing it as a CLI argument for v0.11 only is worthwhile, but I can do it if it would be useful.
Is your feature request related to a problem? Please describe.
It appears that runway only supports terraform backends of type s3, currently. My organization has acquired Terraform Enterprise, and we are working on migrating our Terraform state storage from S3 to Terraform Enterprise. Since we heavily use runway, this lack of support is causing us trouble.

Describe the solution you'd like
I've thought of a few potential solutions to this problem:

1. Update the `terraform_backend_config` property (and the backend config file feature) to support remote backend properties.
2. `terraform init`. I don't think this is inline with the mission of `runway`, but I'm guessing it would be a quicker/easier change.
3. If, instead of `backend[-*].tfvars` files, runway used a templatized `backend[-*].hcl` file, this feature request likely wouldn't be required, and further flux in the backend space wouldn't require rework in runway (disclaimer: I assume this is the case, as I've not done a deep dive on this area of the runway codebase).
4. Support `terragrunt` as a module type that runway can deploy.

Describe alternatives you've considered
A few things I've tried to get around this limitation:

1. The `backend-config` arg to terraform (despite the warning in the docs)
2. A `backend[-*].tfvars` file with the best guesses at appropriate variable names
3. `terraform init` for terraform modules before running `runway deploy`