Running terraform init under the folder examples/complete produces errors as well
...
Error getting plugins: 11 problems:
- output "role_name": must use splat syntax to access aws_iam_role.default attribute "name", because it has "count" set; use aws_iam_role.default.*.name to obtain a list of the attributes across all instances
- output "role_name": must use splat syntax to access aws_iam_role.default attribute "name", because it has "count" set; use aws_iam_role.default.*.name to obtain a list of the attributes across all instances
- module "jenkins": missing required argument "loadbalancer_certificate_arn"
- module "vpc": "region" is not a valid argument
- module "vpc": "availability_zones" is not a valid argument
- module 'jenkins': "private_subnet_ids" is not a valid output for module "vpc"
- module 'jenkins': "public_subnet_ids" is not a valid output for module "vpc"
- module "vpc": "region" is not a valid argument
- module "vpc": "availability_zones" is not a valid argument
- module 'jenkins': "private_subnet_ids" is not a valid output for module "vpc"
- module 'jenkins': "public_subnet_ids" is not a valid output for module "vpc"
Terraform version is 0.11.1 (latest)
Hey @ivan-pinatti! Sorry for the delay. Due to the holidays, we were a bit short-staffed. @aknysh will be taking a look at this. The module is definitely functioning on 0.11.1 (we've deployed it at several client sites); however, you are correct that the documentation needs updating. We'll try to take care of that this week.
@ivan-pinatti @osterman I'm going to fix all the issues and docs. In summary, the issues fall into these two categories:
We removed subnets from https://github.com/cloudposse/terraform-aws-vpc and separated it into two modules:
https://github.com/cloudposse/terraform-aws-vpc
https://github.com/cloudposse/terraform-aws-dynamic-subnets
But the Jenkins example uses the old version of https://github.com/cloudposse/terraform-aws-vpc that outputs the private and public subnets.
I'll update the examples and README to use the latest versions of the modules.
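Roughly, the new wiring in the example will look like this sketch (illustrative only; the exact inputs and outputs come from the modules' READMEs):
module "vpc" {
  source = "git::https://github.com/cloudposse/terraform-aws-vpc.git?ref=master"
  # namespace, stage, name, cidr_block and other inputs omitted for brevity
}
module "subnets" {
  source = "git::https://github.com/cloudposse/terraform-aws-dynamic-subnets.git?ref=master"
  vpc_id = "${module.vpc.vpc_id}"
  # igw_id, cidr_block, availability_zones and other inputs omitted for brevity
}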
Regarding output "role_name": must use splat syntax to access aws_iam_role.default attribute "name"
Terraform made it a fatal error in 0.11.0
to reference a resource with count=0 even if that resource is not used.
https://github.com/hashicorp/terraform/issues/16726
But then switched it to warning in 0.11.1
https://github.com/hashicorp/terraform/blob/v0.11.1/CHANGELOG.md
Since the new feature was already rolled out, and since development for 0.12 depends on the foundational changes that enabled it, we decided to compromise with an opt-out mechanism in 0.11.1, so those with configurations containing problematic output expressions have a means to use 0.11.1 without first fixing all modules. We understand that this is not the most ideal migration path -- if we could do this over again we would've introduced the warning in one of the 0.10 point releases -- but we hope that this compromise is acceptable so that we can continue to make progress towards 0.12.
To enable that behavior, you need to set the environment variable TF_WARN_OUTPUT_ERRORS=1.
They also mentioned they will make it an error again in 0.12, so it's a temporary fix anyway.
But if they fix ternaries (to short-circuit and not evaluate both branches at the same time), we can work around the issue very easily: https://github.com/hashicorp/hil/issues/50
I'm going to look at the issue in more detail, but when you use 0.11.1, please set TF_WARN_OUTPUT_ERRORS=1.
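For reference, the kind of output change the splat warning asks for looks roughly like this (a common TF 0.11 idiom; join() collapses the zero- or one-element list back into a single string):
output "role_name" {
  # aws_iam_role.default has "count" set, so reference the attribute via the splat syntax
  value = "${join("", aws_iam_role.default.*.name)}"
}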
Hi @osterman and @aknysh,
Thanks for being so detail-oriented, I really appreciate it.
Once you have it fixed, let me know and I will try to use it with the current Terraform version.
Cheers!
Hi @ivan-pinatti
We updated the README and added new examples that reflect the latest versions of the modules.
A new tag, 0.2.9, was created on the master branch.
Please test and let us know if you have any questions.
Don't forget to set the environment variable TF_WARN_OUTPUT_ERRORS=1 if you are using TF 0.11.1
Hi @aknysh / @osterman,
It is not working yet.
Right out of the box, a simple terraform init using the new_vpc_new_subnets example, without any changes, produces the error below
Initializing modules...
- module.jenkins
Getting source "../../"
- module.vpc
Getting source "git::https://github.com/cloudposse/terraform-aws-vpc.git?ref=master"
- module.subnets
Getting source "git::https://github.com/cloudposse/terraform-aws-dynamic-subnets.git?ref=master"
- module.jenkins.elastic_beanstalk_application
Getting source "git::https://github.com/cloudposse/terraform-aws-elastic-beanstalk-application.git?ref=tags/0.1.4"
- module.jenkins.elastic_beanstalk_environment
Getting source "git::https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment.git?ref=tags/0.3.3"
- module.jenkins.ecr
Getting source "git::https://github.com/cloudposse/terraform-aws-ecr.git?ref=tags/0.2.2"
- module.jenkins.efs
Getting source "git::https://github.com/cloudposse/terraform-aws-efs.git?ref=tags/0.3.3"
- module.jenkins.efs_backup
Getting source "git::https://github.com/cloudposse/terraform-aws-efs-backup.git?ref=tags/0.3.9"
- module.jenkins.cicd
Getting source "git::https://github.com/cloudposse/terraform-aws-cicd.git?ref=tags/0.5.1"
- module.jenkins.label_slaves
Getting source "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.3.1"
- module.jenkins.elastic_beanstalk_application.label
Getting source "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.3.1"
- module.jenkins.elastic_beanstalk_environment.label
Getting source "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.3.1"
- module.jenkins.elastic_beanstalk_environment.tld
Getting source "git::https://github.com/cloudposse/terraform-aws-route53-cluster-hostname.git?ref=tags/0.1.1"
- module.jenkins.ecr.label
Getting source "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.3.1"
- module.jenkins.efs.label
Getting source "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.3.1"
- module.jenkins.efs.dns
Getting source "git::https://github.com/cloudposse/terraform-aws-route53-cluster-hostname.git?ref=tags/0.1.1"
- module.jenkins.efs_backup.sns_label
Getting source "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.3.1"
- module.jenkins.efs_backup.datapipeline_label
Getting source "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.3.1"
- module.jenkins.efs_backup.resource_role_label
Getting source "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.3.1"
- module.jenkins.efs_backup.role_label
Getting source "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.3.1"
- module.jenkins.efs_backup.label
Getting source "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.3.1"
- module.jenkins.efs_backup.logs_label
Getting source "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.3.1"
- module.jenkins.efs_backup.backups_label
Getting source "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.3.1"
- module.jenkins.cicd.label
Getting source "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.3.1"
- module.jenkins.cicd.build
Getting source "git::https://github.com/cloudposse/terraform-aws-codebuild.git?ref=tags/0.6.1"
- module.jenkins.cicd.build.label
Getting source "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.3.1"
- module.vpc.label
Getting source "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.2.1"
- module.subnets.private_label
Getting source "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.2.2"
- module.subnets.private_subnet_label
Getting source "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.2.2"
- module.subnets.public_subnet_label
Getting source "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.2.2"
- module.subnets.public_label
Getting source "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.2.2"
The currently running version of Terraform doesn't meet the
version requirements explicitly specified by the configuration.
Please use the required version or update the configuration.
Note that version requirements are usually set for a reason, so
we recommend verifying with whoever set the version requirements
prior to making any manual changes.
Module: module.subnets
Required version: ~> 0.10.2
Current version: 0.11.1
It looked like a simple constraint issue, so I downloaded the subnets module and changed the required version from
required_version = "~> 0.10.2"
to
required_version = ">= 0.10.2"
After this change, terraform init started to work as expected.
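For anyone following along, the constraint lives in the module's terraform block; after the change it reads (a sketch, the module file may contain other settings):
terraform {
  required_version = ">= 0.10.2"
}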
Then I filled in the variables in main.tf and tried the next step, terraform plan, which threw the following error
Error: Error refreshing state: 1 error(s) occurred:
* module.subnets.provider.aws: No valid credential sources found for AWS Provider.
Please see https://terraform.io/docs/providers/aws/index.html for more information on
providing credentials for the AWS Provider
This wasn't supposed to happen, because the provider should be inherited from the root module. Even so, I manually inserted the provider into the subnets module just to check whether it would work; however, it didn't, and threw a new error
Error: Error running plan: 4 error(s) occurred:
* module.subnets.aws_network_acl.public: aws_network_acl.public: value of 'count' cannot be computed
* module.subnets.aws_route_table_association.public_default: aws_route_table_association.public_default: value of 'count' cannot be computed
* module.subnets.aws_network_acl.private: aws_network_acl.private: value of 'count' cannot be computed
* module.subnets.aws_route_table.public: aws_route_table.public: value of 'count' cannot be computed
I'm using the latest version, 0.11.1. Another important piece of info: I've set TF_WARN_OUTPUT_ERRORS=1 as asked.
Cheers,
BTW, I couldn't re-open this issue.
@ivan-pinatti let me know if everything's good now.
Thanks!
Hi @osterman,
Still not working.
I only fixed the first issue; there are others that still need to be investigated.
Hi @osterman and @aknysh,
Did you have time to test it?
We still have to fix:
I will try to work on item number one tomorrow.
@osterman could you please re-open the issue?
Cheers!
@aknysh what's the latest?
Issue 1. Provider (AWS) is not being inherited by modules
This becomes an issue if you place the keys into the provider like this:
provider "aws" {
region = "${var.region}"
access_key = "XXXXXXXXXXXXX"
secret_key = "XXXXXXXXXXXXX"
}
At the same time, the module terraform-aws-dynamic-subnets has this code:
provider "aws" {
region = "${var.region}"
}
which throws the error:
module.subnets.provider.aws: No valid credential sources found for AWS Provider.
Two possible fixes:
1. Put the AWS credentials into the ~/.aws/credentials file (tested, it works)
2. Remove provider "aws" from the terraform-aws-dynamic-subnets module (it's not needed anyway; I'll create a PR for that)
Issue 2. Subnets module throwing multiple count errors:
* module.subnets.aws_network_acl.public: aws_network_acl.public: value of 'count' cannot be computed
* module.subnets.aws_route_table_association.public_default: aws_route_table_association.public_default: value of 'count' cannot be computed
* module.subnets.aws_network_acl.private: aws_network_acl.private: value of 'count' cannot be computed
* module.subnets.aws_route_table.public: aws_route_table.public: value of 'count' cannot be computed
This started to happen after we separated terraform-aws-dynamic-subnets from terraform-aws-vpc (and probably with the new TF versions). TF does not know the outputs from terraform-aws-vpc (which are used in count in the terraform-aws-dynamic-subnets module) before that module is actually created. This is a known issue:
I'm looking into it now.
A workaround could be to target terraform-aws-vpc first to create it, then terraform-aws-dynamic-subnets, and then terraform-aws-jenkins. But this is not pretty.
terraform plan -target=module.vpc -out=tfplan
terraform apply tfplan
terraform plan -target=module.subnets -out=tfplan
terraform apply tfplan
terraform plan -target=module.jenkins -out=tfplan
terraform apply tfplan
Or, create the VPC and subnets in different modules, and use their attributes in terraform-aws-jenkins (https://github.com/cloudposse/terraform-aws-jenkins/blob/master/examples/existing_vpc_existing_subnets/main.tf). This is not pretty either for many use cases.
This issue is different from the one where setting TF_WARN_OUTPUT_ERRORS=1 would help; that setting hides errors (generates warnings instead) when outputs use resources that will never be created because of conditional logic.
I'm testing it right now; issue 1 appears to be fixed.
Issue 2 is still present. I will try the workaround and think of some elegant solution; my initial thought would be to create a Makefile.
We are almost there. The steps I took, so you can reproduce it:
I've created the following Makefile:
export TF_WARN_OUTPUT_ERRORS=1

# define phony targets
.PHONY: all vpc subnets jenkins

# define behavior for both make and make all
all: jenkins

vpc:
	terraform plan -target=module.vpc -out=tfplan
	terraform apply tfplan

subnets: vpc
	terraform plan -target=module.subnets -out=tfplan
	terraform apply tfplan

jenkins: subnets
	terraform plan -target=module.jenkins -out=tfplan
	terraform apply tfplan

clean:
	terraform destroy -target=module.jenkins -force || true
	terraform destroy -target=module.subnets -force || true
	terraform destroy -target=module.vpc -force || true
The first two stages, vpc and subnets, were created OK. The jenkins stage is throwing the following errors:
Error: Error applying plan:
3 error(s) occurred:
* module.jenkins.module.efs_backup.aws_cloudformation_stack.datapipeline: 1 error(s) occurred:
* aws_cloudformation_stack.datapipeline: ValidationError: Stack:arn:aws:cloudformation:us-east-1:123456789123:stack/cp-prod-jenkins-efs-backup-datapipeline/e44a8010-f7d2-11e7-9f23-500c2854b635 is in ROLLBACK_COMPLETE state and can not be updated.
status code: 400, request id: e9150e57-f7d7-11e7-be3e-bb8eb095f767
* module.jenkins.module.elastic_beanstalk_environment.aws_elastic_beanstalk_environment.default: 1 error(s) occurred:
* aws_elastic_beanstalk_environment.default: InvalidParameterValue: Environment named cp-prod-jenkins-eb-env is in an invalid state for this operation. Must be Ready.
status code: 400, request id: 97851631-77aa-4245-9c59-881c27d34d87
* module.jenkins.module.cicd.module.build.aws_codebuild_project.default: 1 error(s) occurred:
* aws_codebuild_project.default: [ERROR] Error creating CodeBuild project: InvalidParameter: 1 validation error(s) found.
- missing required field, CreateProjectInput.Environment.EnvironmentVariables[4].Value.
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
I could see that several resources were created properly, including EFS, the R53 entry, Elastic Beanstalk, and so on... The ones that weren't created at all:
@ivan-pinatti thank you for testing the module and for the Makefile.
For now it looks like the best solution to the problem, because currently TF can't handle count values that depend on resources that have not been created yet.
Regarding CodePipeline/CodeBuild, what Jenkins repo are you deploying?
Can you please try to deploy this branch https://github.com/cloudposse/jenkins/tree/update-docker-add-groovy
The master branch was not updated to be used with terraform-aws-jenkins (for our internal reasons); we tested everything with the update-docker-add-groovy branch. I believe CodePipeline/CodeBuild was not created because of that.
For this, please just change from
variable "github_branch" {
type = "string"
default = "master"
}
to
variable "github_branch" {
type = "string"
default = "update-docker-add-groovy"
}
Regarding DataPipeline, it's managed by CloudFormation code in https://github.com/cloudposse/terraform-aws-efs-backup/blob/master/templates/datapipeline.yml. If any issue occurs during apply (like with CodePipeline/CodeBuild), the pipelines will be in an invalid state and will not be re-created (since they are not managed by TF directly). This is another issue that we have to look into. For now, can you please go to the AWS Console and manually delete those two pipelines?
Once you change the Jenkins branch and delete the pipelines, can you apply again?
Thanks for your help.
Keeping track:
Changed
I tried again with the modifications; the outcome was:
The CloudFormation stack with the description "AWS Elastic Beanstalk environment (Name: 'cp-prod-jenkins-eb-env' Id: 'e-anbgibe8xs')" stayed in the CREATE_IN_PROGRESS state until the Terraform time limit was reached (20 min). The resource that was holding up the creation was AWSEBInstanceLaunchWaitCondition.
And the following error occurred:
Error: Error applying plan:
2 error(s) occurred:
* module.jenkins.module.elastic_beanstalk_environment.aws_elastic_beanstalk_environment.default: 1 error(s) occurred:
* aws_elastic_beanstalk_environment.default: Error waiting for Elastic Beanstalk Environment (e-anbgibe8xs) to become ready: timeout while waiting for state to become 'Ready' (last state: 'Launching', timeout: 20m0s)
* module.jenkins.module.cicd.module.build.aws_codebuild_project.default: 1 error(s) occurred:
* aws_codebuild_project.default: [ERROR] Error creating CodeBuild project: InvalidParameter: 1 validation error(s) found.
- missing required field, CreateProjectInput.Environment.EnvironmentVariables[4].Value.
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
Just to let you know, my idea is to use it with Bitbucket. For now I'm just trying to make it work as-is, and later on I will start making modifications; perhaps, for better flexibility, I will propose some PRs to integrate with AWS CodeCommit and look into a sync solution with GitHub/Bitbucket. That way it could be easily integrated with any major Git repository.
@ivan-pinatti thanks again. We are seeing the same issues as you with the latest TF and module versions. We are testing everything now, and will let you know ASAP when everything is working again. And we'd like to work with you on integrating with CodeCommit and Bitbucket.
@aknysh sounds awesome! Thanks for your support, I really appreciate it.
Meanwhile, I'm analyzing your code to better understand how you architected the solution and how to best implement the new Bitbucket feature. Please correct me if I'm wrong, but we must work on the Terraform CI/CD module (https://github.com/cloudposse/terraform-aws-cicd?ref=tags/0.5.1), and I see two options:
Anyhow, I think we should open a new thread to discuss it further, as it is another topic. Let me hear your thoughts, and if you already know the path to follow, we can start the issue or create the fork right away.
@ivan-pinatti this PR fixes the remaining issues: https://github.com/cloudposse/terraform-aws-jenkins/pull/16; a single-phase terraform plan now works and make is not necessary
@ivan-pinatti
We merged the last PR into the master branch.
All examples were tested, and a single-phase terraform plan and terraform apply work.
After all the resources get created, CodePipeline executes, builds the Docker image with Jenkins (using https://github.com/cloudposse/jenkins), stores it in the ECR repo, and then deploys it to Elastic Beanstalk.
Jenkins starts on Elastic Beanstalk.
Please test again, and let us know if there are any issues.
We merged the test branch https://github.com/cloudposse/jenkins/tree/update-docker-add-groovy into the master branch; you can use the master branch now: https://github.com/cloudposse/jenkins/releases/tag/0.1.0
After you test it, let's start another thread for CodeCommit and Bitbucket integrations. Yes, it will involve changing https://github.com/cloudposse/terraform-aws-cicd module.
Thanks
@aknysh / @osterman, sorry guys, but it still didn't work.
I'm using the new VPC example, and what I did was:
After 20 min (the timeout limit) it threw the error below:
Error: Error applying plan:
2 error(s) occurred:
* module.jenkins.module.cicd.module.build.aws_codebuild_project.default: 1 error(s) occurred:
* aws_codebuild_project.default: [ERROR] Error creating CodeBuild project: InvalidParameter: 1 validation error(s) found.
- missing required field, CreateProjectInput.Environment.EnvironmentVariables[4].Value.
* module.jenkins.module.elastic_beanstalk_environment.aws_elastic_beanstalk_environment.default: 1 error(s) occurred:
* aws_elastic_beanstalk_environment.default: Error waiting for Elastic Beanstalk Environment (e-piuy4fht9u) to become ready: timeout while waiting for state to become 'Ready' (last state: 'Launching', timeout: 20m0s)
It looks like the issue is related to the Elastic Beanstalk health check. It couldn't validate /login because CodeBuild hadn't deployed yet, and it wasn't deployed because there is a wait condition in the CloudFormation stack from Beanstalk. My first impression is that it is a circular dependency.
I could also see that DataPipeline was correctly deployed this time; however, CodeBuild still wasn't.
@ivan-pinatti
Strange, we deployed it many times (cold and warm deployments) and did not see the issue (although I see what you mean regarding the /login health check URL).
The Elastic Beanstalk environment is usually created in 3-5 minutes max with the sample Docker app, and after that we are able to click on the URL and see the sample site.
At this time, CodePipeline would build and deploy the Jenkins image to EB.
Even if the sample site does not have the /login URL, it does not prevent the EB environment from being created and started.
A 20-minute timeout on EB usually means something is wrong with the VPC or subnets; we experienced it a few times when we had a wrong configuration.
Can you please check a few things:
Did you disable the NAT Gateways on private subnets? They need to be enabled because the EC2 server (Jenkins master) is in a private subnet, but it needs to be able to access the Internet. https://github.com/cloudposse/terraform-aws-jenkins/blob/master/examples/new_vpc_new_subnets/main.tf#L73
Did you deploy the master branch of https://github.com/cloudposse/jenkins repo?
Can you please destroy everything and try to plan/apply again?
Thanks
@aknysh,
Yes, I disabled the NAT gateway.
I had to disable it because it was trying to create one for each AZ, and since I'm trying to deploy in us-east-1, which has 6 AZs, it was hitting my EIP limit of 5. Could we add an option to select how many AZs to use? 6 is overkill.
I will re-run with NAT enabled, deploying into us-west-2 as the default, and will let you know in a few minutes whether it worked.
@ivan-pinatti thanks, let us know how it went
To restrict the number of subnets/NAT gateways, you can do it now without modifying any code. It's all controlled by the var.availability_zones variable.
https://github.com/cloudposse/terraform-aws-dynamic-subnets/blob/master/private.tf#L19 https://github.com/cloudposse/terraform-aws-dynamic-subnets/blob/master/public.tf#L19 https://github.com/cloudposse/terraform-aws-dynamic-subnets/blob/master/nat.tf#L15 https://github.com/cloudposse/terraform-aws-dynamic-subnets/blob/master/nat.tf#L2
So, for example, by providing just one AZ, only one public subnet, one private subnet, and one NAT gateway will be created.
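To make that concrete, here is a minimal sketch of pinning the subnets module to a single AZ (the other required inputs of the module are omitted; see its README):
module "subnets" {
  source             = "git::https://github.com/cloudposse/terraform-aws-dynamic-subnets.git?ref=master"
  availability_zones = ["us-east-1a"]
  # one AZ means one public subnet, one private subnet, and one NAT gateway
  # vpc_id, igw_id, cidr_block and the other required inputs are omitted for brevity
}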
@aknysh,
It worked!
Important notes:
These should be highlighted in the README.
I think we can consider this done now. o/
Besides, I will try to manually declare my AZs instead of using the data source, and I will let you know my results so we can add that to the README too.
data "aws_availability_zones" "available" {}
@ivan-pinatti glad it finally worked for you :) thanks a lot for your help and for testing.
Regarding NAT gateways, they are mandatory because we place the EC2 servers into private subnets (which is a good practice). If we placed them into public subnets (for any other reason), then NATs would not be required.
https://github.com/cloudposse/terraform-aws-jenkins/blob/master/main.tf#L40
Since NATs/subnets are created in a different module (terraform-aws-dynamic-subnets in the example, but they could be created in any other module, or even manually if needed), this feature is not directly related to the terraform-aws-jenkins module. But you are correct, we need to reflect it in the README (especially taking into account that we show that in the examples). We'll fix that.
Will close the issue for now. If you have any improvements or suggestions, please open new issues or PRs. Thanks again
Thank you guys for everything @aknysh and @osterman.
I just deployed into US-EAST-1 by changing the two entries in the example that were using the data source; basically, I changed these lines:
https://github.com/cloudposse/terraform-aws-jenkins/blob/master/examples/new_vpc_new_subnets/main.tf#L17 https://github.com/cloudposse/terraform-aws-jenkins/blob/master/examples/new_vpc_new_subnets/main.tf#L65
to
availability_zones = ["${data.aws_availability_zones.available.names[0]}","${data.aws_availability_zones.available.names[1]}"]
Tomorrow I will try to use the slice function for a more elegant solution, and then I will do a PR on the examples with an option to choose how many AZs to use.
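A rough sketch of the slice() approach (not tested yet; it takes the first two AZ names returned by the data source):
data "aws_availability_zones" "available" {}
availability_zones = ["${slice(data.aws_availability_zones.available.names, 0, 2)}"]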
Thanks, @ivan-pinatti! Glad we finally got all those kinks worked out. Let us know if you run into any other issues.
I have found the cause of the cryptic error messages like the following:
* missing required field, CreateProjectInput.Environment.EnvironmentVariables[3].Value
In my case, this happened while using a Terraform CodeBuild template, but it can occur with other products that make the same assumption. Terraform includes all possible settings for a template in the plan, but the optional ones have empty ("") values. When the plan is converted into an AWS CLI JSON-format request, all lines with "" values are omitted. This is correct behavior in 95% of cases, but if the template includes user-defined environment variables with default empty string values like the following:
"environmentVariables": [
{
"name": "SomeVarName,
"value": "",
"type": "PLAINTEXT"
}
],
then this practice produces an invalid setting (the required "value" field is dropped):
"environmentVariables": [
{
"name": "SomeVarName,
"type": "PLAINTEXT"
}
],
which causes AWS to issue the error message:
Parameter validation failed:
Missing required parameter in environment.environmentVariables[3]: "value"
The workaround, until HashiCorp can fix this problem in Terraform, is to include some non-empty placeholder value, like "-", that the application treats as the equivalent of empty.
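In Terraform terms, the workaround looks something like this inside the aws_codebuild_project environment block (the variable name here is made up):
environment_variable {
  name  = "SOME_OPTIONAL_VAR" # hypothetical variable name
  value = "-"                 # non-empty placeholder the application treats as empty
}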
@rogerbrandtdev thanks for reporting what you uncovered!
Hi,
I've just cloned the repo to test it and I'm following the doc; however, it is asking for more variables than are described.
My steps were:
It asked for the variables mentioned in the doc, but then started asking for ones that are not; one of them is the private subnet, as follows:
If you could update the doc and also provide a terraform.tfvars file with pre-filled variables, it would be easier to use and understand. I took a similar approach, on a smaller scale, here:
https://github.com/therefore-ca/terraform-aws-r53/blob/master/terraform.tfvars
Thanks,