dashaun opened this issue 5 years ago
@dashaun something that's always annoyed me about the HashiCorp HCL syntax is that you can't tell at a glance whether something is an array. In this case it looks like `workspaces` is not an array (despite the 's'), but an object with either a `name` or a `prefix` to specify multiple workspaces. Try:
```yaml
backend_config:
  hostname: app.terraform.io
  organization: xxx
  token: {{terraform_token}}
  workspaces:
    name: azure-dashaun-cloud # note there's no leading `-`
```
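For comparison, the `prefix` form looks like this (a sketch; `azure-dashaun-` is just an illustrative prefix value, not from the original config):

```yaml
backend_config:
  hostname: app.terraform.io
  organization: xxx
  token: {{terraform_token}}
  workspaces:
    # selects all remote workspaces whose names start with this prefix
    prefix: azure-dashaun-
```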
```yaml
backend_config:
  hostname: app.terraform.io
  organization: dashaun
  token: {{environment_terraform_token}}
  workspaces:
    name: azure-dashaun-cloud
```
Same result:
```
2019/06/28 16:39:14 terraform init command failed.
Error: exit status 1
Output: Initializing modules...
- module.infra
Getting source "../modules/infra"
- module.ops_manager
Getting source "../modules/ops_manager"
- module.pas
Getting source "../modules/pas"
- module.certs
Getting source "../modules/certs"
- module.isolation_segment
Getting source "../modules/isolation_segment"

Initializing the backend...

Error configuring the backend "remote": 1 error occurred:
* workspaces: should be a list
```
Looking at the Terraform code, `workspaces` really doesn't look like a list: https://github.com/hashicorp/terraform/blob/445df6b1321a009166c3b5a380883af6ddd0a5b9/backend/remote/backend.go#L121. You can try setting `source.env.TF_LOG: DEBUG` in your pipeline to get more verbose log output.
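In pipeline terms, that suggestion lands under the resource's `source.env` block, roughly like this (a sketch; the resource name and the rest of the `source` block are placeholders):

```yaml
resources:
  - name: terraform
    type: terraform
    source:
      backend_type: remote
      env:
        TF_LOG: DEBUG # enables verbose Terraform logging in the resource's output
```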
```
2019/06/28 21:06:23 terraform init command failed.
Error: exit status 1
Output: 2019/06/28 21:06:23 [INFO] Terraform version: 0.11.14
2019/06/28 21:06:23 [INFO] Go runtime version: go1.12.4
2019/06/28 21:06:23 [INFO] CLI args: []string{"/usr/local/bin/terraform", "init", "-input=false", "-get=true", "-backend=true", "-backend-config=/tmp/build/put/terraforming/terraforming-pas/resource_backend_config.json"}
2019/06/28 21:06:23 [DEBUG] Attempting to open CLI config file: /root/.terraformrc
2019/06/28 21:06:23 [DEBUG] File doesn't exist, but doesn't need to. Ignoring.
2019/06/28 21:06:23 [INFO] CLI command args: []string{"init", "-input=false", "-get=true", "-backend=true", "-backend-config=/tmp/build/put/terraforming/terraforming-pas/resource_backend_config.json"}
2019/06/28 21:06:23 [DEBUG] command: loading backend config file: /tmp/build/put/terraforming/terraforming-pas
Initializing modules...
- module.infra
Getting source "../modules/infra"
2019/06/28 21:06:23 [DEBUG] found "file:///tmp/build/put/terraforming/modules/infra" in ".terraform/modules/0f814bfabdf2cdd95d6d102ccff04b3a": true
2019/06/28 21:06:23 [TRACE] "file:///tmp/build/put/terraforming/modules/infra" stored in ".terraform/modules/0f814bfabdf2cdd95d6d102ccff04b3a"
- module.ops_manager
2019/06/28 21:06:23 [DEBUG] fetching module from file:///tmp/build/put/terraforming/modules/ops_manager
2019/06/28 21:06:23 [DEBUG] fetching "file:///tmp/build/put/terraforming/modules/ops_manager" with key "1.ops_manager;../modules/ops_manager"
Getting source "../modules/ops_manager"
2019/06/28 21:06:23 [DEBUG] found "file:///tmp/build/put/terraforming/modules/ops_manager" in ".terraform/modules/79729aee041ff4a4deb1a7babd4c86d2": true
2019/06/28 21:06:23 [DEBUG] fetching module from file:///tmp/build/put/terraforming/modules/pas
2019/06/28 21:06:23 [DEBUG] fetching "file:///tmp/build/put/terraforming/modules/pas" with key "1.pas;../modules/pas"
- module.pas
2019/06/28 21:06:23 [DEBUG] found "file:///tmp/build/put/terraforming/modules/pas" in ".terraform/modules/91e0a55085cb9beb24278dad147d7230": true
2019/06/28 21:06:23 [TRACE] "file:///tmp/build/put/terraforming/modules/pas" stored in ".terraform/modules/91e0a55085cb9beb24278dad147d7230"
Getting source "../modules/pas"
- module.certs
Getting source "../modules/certs"
2019/06/28 21:06:23 [DEBUG] found "file:///tmp/build/put/terraforming/modules/certs" in ".terraform/modules/7b1a0b61714bb5ea34fece79d9b7f06e": true
2019/06/28 21:06:23 [TRACE] "file:///tmp/build/put/terraforming/modules/certs" stored in ".terraform/modules/7b1a0b61714bb5ea34fece79d9b7f06e"
- module.isolation_segment
2019/06/28 21:06:23 [DEBUG] found "file:///tmp/build/put/terraforming/modules/isolation_segment" in ".terraform/modules/43d113ae07f4d22ef5a2f931bda6e905": true
2019/06/28 21:06:23 [TRACE] "file:///tmp/build/put/terraforming/modules/isolation_segment" stored in ".terraform/modules/43d113ae07f4d22ef5a2f931bda6e905"
Getting source "../modules/isolation_segment"
2019/06/28 21:06:23 [DEBUG] command: adding extra backend config from CLI

Initializing the backend...
2019/06/28 21:06:23 [DEBUG] command: no data state file found for backend config
2019/06/28 21:06:23 [DEBUG] New state was assigned lineage "97e6d3b2-7ba6-f1e8-c073-b7ad1b77b6ad"
2019/06/28 21:06:23 [DEBUG] plugin: waiting for all plugin processes to complete...

Error configuring the backend "remote": 1 error occurred:
* workspaces: should be a list
```
This is a new workspace, so that's the output that I would expect.
After a bit more digging I think you'll have to upgrade to Terraform 0.12+ to avoid the issue. Under the hood the terraform-resource converts the YAML in your pipeline config to JSON, which is passed to `terraform init -backend-config=config.json`. Unfortunately it looks like Terraform before version 0.12 didn't support lists of maps in JSON syntax for some reason: https://github.com/hashicorp/terraform/issues/19454. Upgrading to a later tag of the resource should fix it; I'm not seeing another workaround unfortunately.
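To illustrate the conversion (a sketch; the generated filename and exact field layout are assumptions for illustration), a `workspaces` block written as a YAML list becomes a list of maps in the generated JSON backend config, which is the shape pre-0.12 Terraform could not decode:

```yaml
# Pipeline YAML (what you write):
workspaces:
  - name: azure-dashaun-cloud
---
# Roughly the JSON the resource hands to `terraform init -backend-config=...`
# (JSON is a subset of YAML; Terraform < 0.12 rejects this list-of-maps
# shape in JSON syntax, while 0.12+ accepts it):
{"workspaces": [{"name": "azure-dashaun-cloud"}]}
```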
I've upgraded to 0.12.3. The `source.env.TF_LOG: DEBUG` doesn't appear to work, so the only output I get is:
```
▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼ Terraform Apply ▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼
▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ Terraform Apply ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲
Failed To Run Terraform Apply!
2019/07/01 02:42:39 Apply Error: Error running `workspace list`: exit status 1, Output:
```
It's the same result with the dash and without.
@dashaun well, we got a little farther at least. What happens if you run `terraform workspace list` on your local machine with the same config? Looking at the resource code, it seems to swallow the error here, which makes it harder to see what's going on. I'll try to update the resource to at least print the error when I get a chance.
Not sure if this helps, but I also ran across this issue with a project that was using terraform-resource version `latest`. When the pipeline was last run, about 50 days ago, it succeeded, but it stopped working yesterday when we attempted to run the pipeline, with a similar error to the one above. I did a little digging and found that starting with version 0.12.2 the pipeline fails with the same error stated above, but with version 0.12.1 it succeeds.
My apologies, that would be version 0.11.14, so yes, I agree with @ljfranklin that something is different after upgrading to 0.12.x of Terraform. I tested again and 0.12.1 through 0.12.4 result in the same error.
@dashaun if you're still having trouble with this, I just pushed this commit to the `latest` image which prints STDERR when the `workspace list` command fails. It might give more helpful output now. Another user hit a similar issue: https://github.com/ljfranklin/terraform-resource/issues/97.
Not the OP, but I'm struggling to get the remote backend with Terraform Cloud working at all. There seems to be an issue upstream, with a hack:
https://github.com/hashicorp/terraform/issues/21393

```shell
echo '1' | TF_WORKSPACE=$non_existing_workspace_suffix terraform init
```
I've tried various combinations, with just setting the workspace as a name vs prefix.
```yaml
- name: terraform
  type: terraform
  source:
    backend_type: remote
    backend_config:
      hostname: ((terraform_endpoint))
      organization: ((terraform_org))
      token: ((vault:kv/rft.terraform-token))
      workspaces:
        - prefix: ((terraform_workspace_prefix))-
    vars:
      creds: ((vault:aws/sts/ec2_admin))
    env:
      TF_IN_AUTOMATION: "true"
      TF_INPUT: "false"
      TF_LOG: "((tf_log_level))"
```
```yaml
- name: terraform-plan
  plan:
    - get: pull-request
      trigger: true
      passed: [set-pipeline]
    - put: terraform
      params:
        action: plan
        plan_only: true
        terraform_source: pull-request
        # generate_random_name: true
        env_name: one
        delete_on_failure: true
  on_failure:
    put: pull-request
    params:
      path: pull-request
      status: failure
```
What I'd really like to be doing is `generate_random_name: true` and then creating the workspace on the fly, but for testing purposes I've made it static.
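For reference, that dynamic setup would swap the static `env_name` for the resource's `generate_random_name` param, roughly like this (a sketch, not a config I've verified against this backend):

```yaml
- put: terraform
  params:
    action: plan
    plan_only: true
    terraform_source: pull-request
    generate_random_name: true # create a uniquely named workspace per run
    delete_on_failure: true
```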
```
2020/04/21 09:10:20 terraform init command failed.
Error: exit status 1
Output:
Initializing the backend...

Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.

The currently selected workspace (default) does not exist.
This is expected behavior when the selected workspace did not have an
existing non-empty state. Please enter a number to select a workspace:

1. one
2. rft-training-allan-test
3. two
4. yolo-

Enter a value:

Error: Failed to select workspace: input not a valid number
```
Adding `TF_WORKSPACE: one`:
```yaml
- name: terraform-plan
  plan:
    - get: pull-request
      trigger: true
      passed: [set-pipeline]
    - put: terraform
      params:
        action: plan
        plan_only: true
        terraform_source: pull-request
        # generate_random_name: true
        env_name: one
        delete_on_failure: true
      env:
        TF_WORKSPACE: one
```
Results in:
```
▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼ Terraform Plan ▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼
▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ Terraform Plan ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲
Failed To Run Terraform Plan!
2020/04/21 09:14:38 Plan Error: Error running `workspace select`: exit status 1, Output:
The selected workspace is currently overridden using the TF_WORKSPACE
environment variable.

To select a new workspace, either update this environment variable or unset
it and then run this command again.
```
@allandegnan The resource assumes a workspace named `default` already exists. Normally this workspace is always present; from the Terraform docs: 'Terraform starts with a single workspace named "default". This workspace is special both because it is the default and also because it cannot ever be deleted.' Did you somehow manually delete the `default` workspace? If so, I would try manually pushing an empty `default` workspace to your backend and removing the `TF_WORKSPACE` variable from your pipeline.
I had the same thought but couldn't get it to work either.
```
adegnan@laptop:~/Projects/blah/allan-test (ad/addingVersion)$ terraform workspace new default
default workspace not supported
You can create a new workspace with the "workspace new" command.
adegnan@laptop:~/Projects/blah/allan-test (ad/addingVersion)$ terraform workspace new prefix-allan-test-default
Created and switched to workspace "prefix-allan-test-default"!

You're now on a new, empty workspace. Workspaces isolate their state,
so if you run "terraform plan" Terraform will not see any existing state
for this configuration.
```
Returns:
```
2020/04/21 13:00:40 terraform init command failed.
Error: exit status 1
Output:
Initializing the backend...

Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.

Error loading state: default workspace not supported
You can create a new workspace with the "workspace new" command
```
I might have misunderstood the docs, but I think the magic `default` workspace only applies to the local backend and not to remote. In any event, I also added "default" via the GUI in app.terraform.io, and that didn't help either.
So I forked the repo and made a small hacky change:-
https://github.com/ljfranklin/terraform-resource/compare/master...secureweb:bypassInitSelection
Unfortunately, my plan action errored with the following:
```
▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼ Terraform Plan ▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼
Error: Saving a generated plan is currently not supported

The "remote" backend does not support saving the generated execution plan
locally at this time.

Error: Run variables are currently not supported

The "remote" backend does not support setting run variables at this time.
Currently the only to way to pass variables to the remote backend is by
creating a '*.auto.tfvars' variables file. This file will automatically be
loaded by the "remote" backend when the workspace is configured to use
Terraform v0.10.0 or later.

Additionally you can also set variables on the workspace in the web UI:
https://app.terraform.io/app/secureweb/prefix-allan-test-one/variables

▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ Terraform Plan ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲
Failed To Run Terraform Plan!
2020/04/21 13:34:43 Plan Error: Failed to run Terraform command: exit status 1
Errors are:
```
Manually setting the backend to local in Terraform Cloud (I don't really want to do this, because it sort of negates part of the point of using TFE and means I can't generate workspaces on the fly, but whatever, it'll work for "now"):
```
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_instance.example will be created
  + resource "aws_instance" "example" {
      + ami                         = "ami-7ad7c21e"
      + arn                         = (known after apply)
      + associate_public_ip_address = (known after apply)
      + availability_zone           = (known after apply)
      <snip>
      + iops        = (known after apply)
      + kms_key_id  = (known after apply)
      + volume_id   = (known after apply)
      + volume_size = (known after apply)
      + volume_type = (known after apply)
    }
}

Plan: 1 to add, 0 to change, 0 to destroy.
▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ Terraform Plan ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲
Failed To Run Terraform Plan!
2020/04/21 13:58:29 Plan Error: Failed to run Terraform command: exit status 1
```
That's with `TF_LOG=trace`, which isn't super helpful.
But looking in terraform cloud, I can see the following:
```
prefix-allan-test-one-plan
Terraform v0.12.24
Configuring remote state backend...
Initializing Terraform configuration...
Setup failed: Failed terraform init (exit 1): <nil>
Output:
2020/04/21 13:58:26 [DEBUG] Using modified User-Agent: Terraform/0.12.24 TFC/b6160e7930

Initializing the backend...

Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
- Checking for available provider plugins...

Provider "stateful" not available for installation.

A provider named "stateful" could not be found in the Terraform Registry.

This may result from mistyping the provider name, or the given provider may
be a third-party provider that cannot be installed automatically.

In the latter case, the plugin must be installed manually by locating and
downloading a suitable distribution package and placing the plugin's executable
file in the following directory:
  terraform.d/plugins/linux_amd64

Terraform detects necessary plugins by inspecting the configuration and state.
To view the provider versions requested by each module, run
"terraform providers".

Error: no provider exists with the given name
```
Okay...?
Okay...? I changed `prefix-allan-test-one-plan` to local execution mode in Terraform Cloud.
The job passes and the terraform resource `get` succeeds, but when I try to refer to the resource elsewhere it doesn't return anything.
@allan-degnan-rft @allandegnan I don't have any experience using the Terraform Enterprise tooling, but I'd be open to a PR that fixes the issues you described. So far it sounds like you'd need to fix the following:

- let users override the `default` workspace string in their pipeline config
- write variables to `*.auto.tfvars` files instead of using the `--var-file` flag
- figure out how to get `terraform apply` to run with a specific plan version?
- skip persisting the planfile: the `stateful` provider is an implementation detail of how the resource persists the generated planfile to the configured Terraform backend storage. You'd also have to make sure the `get` function conditionally skips the planfile download.

Understood.
For the record, hacking got me to part 3 (I could have sworn I already posted it), which generates this from Cloud:
https://app.terraform.io/app/secureweb/prefix-allan-test-one/runs/run-TxR75eeRoTBxdmXD
That said, according to the documentation plans are only speculative, which essentially means I'd need to lock, plan, run any tests I want against it, apply (with a new plan), and unlock, hoping that my lock did the job.
:(
Guess you'd also have to implement the locking API calls in the resource. Not great. I'm happy to talk through any implementation ideas but does seem like a fair bit of work to support the Enterprise flow in the resource unfortunately.
Actually, thinking about it, we don't need to implement locking; it's probably just a documentation change.
Enterprise Workflow:
Hopefully I'll have time for a PR, unsure at the moment, but I figure at least discussing the problem helps anyone else driving by too. Will get back to you. :)
I tried to map the `backend_config` to one that works:
Then I get this response:
And if I add another line to `workspaces` like this:
I get this response:
This feels like a bug, but I'm not convinced.
I might have been staring at this same issue for too long.