I saw this same issue, also related to a nested module, on 0.7.0. I was able to do a terraform destroy to accomplish what I needed. I realize this won't work for everybody, though.
@tphummel how did you figure out which resource needs to be destroyed?
@eyalzek it worked in my case because I'm doing a PoC with a dedicated statefile; I wanted it to destroy everything the statefile knows about. In real life this probably won't work for people. The destroy command is pretty terrifying because you don't see the plan ahead of time.
Thinking about workarounds: could you use the new terraform state mv command to get everything you want to destroy into a new statefile, then run terraform destroy -state=./new.tfstate?
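A rough sketch of that two-step workaround (module.staging is a placeholder address, and this assumes the -state/-state-out flags of terraform state mv):

# move the targeted module into its own state file
terraform state mv -state=terraform.tfstate -state-out=new.tfstate module.staging module.staging
# then destroy only what the new state file knows about
terraform destroy -state=./new.tfstate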
@eyalzek as pointed out here, you can work around this by setting a default value for the variables in the error output.
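For example, a minimal sketch of that workaround (the variable name and default value are placeholders):

variable "region" {
  # giving the variable a default works around the error about it being unset
  default = "us-east-1"
}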
Setting a default value for region did the trick. I'm not sure I like setting a default region, though; what has changed?
Agreed that this is very confusing. I don't believe it has anything to do with destroy. Nested modules do seem to make this both more likely and a bigger problem, though. And setting a default value has not helped in my case.
I also have this issue -- similar case, migrating from 0.6.x to 0.7.0.
Well, upon further investigation on my end, there is definitely a difference between declaring variables using default and defualt. Withdrawing my comment...
Is anyone able to share a reproduction case? I'd like to fix this, but without a repro it'll be difficult.
My case is very complex; as said above, I tried reproducing it in a simpler case, which didn't work.
Having a default value for the problem variable didn't solve it for me; however, I referenced the actual resources inside the module (the ones that don't reference the variable) for deletion, and the plan/destroy worked fine:
terraform plan -destroy -target=module.my_module.resource_type.resource_name
The variable in question was a string used to filter an AWS AMI (var.initial_ami); the default value was similar to 'myami*':
data "aws_ami" "ami_latest" {
most_recent = true
owners = ["x"]
filter {
name = "name"
values = ["${var.initial_ami}"]
}
}
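For completeness, the matching variable declaration would look roughly like this (a sketch; only the default value is taken from the comment above):

variable "initial_ami" {
  default = "myami*"
}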
We are having this problem at Autodesk too. We have a module calling a module that calls a third module that generates a template data source. There is a group of 10 or so parameters that are passed all the way through to the template, each time binding to the same name. The third module works properly when used in isolation, but not when called as a nested module.
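A minimal sketch of that shape (module names and the single db_host parameter are invented for illustration; TF 0.7-era interpolation syntax):

# root module
module "outer" {
  source  = "./outer"
  db_host = "db.example.com"
}

# outer/main.tf
variable "db_host" {}

module "inner" {
  source  = "./inner"
  db_host = "${var.db_host}"
}

# inner/main.tf -- works in isolation, fails when nested as above
variable "db_host" {}

data "template_file" "config" {
  template = "${file("${path.module}/config.tpl")}"

  vars {
    db_host = "${var.db_host}"
  }
}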
+1 for this, having the same issue in a very similar case. Setting defaults is not possible for us, because one of the affected variables is region, which is used to set the AWS provider's region; falling back to a default value there means Terraform can't find/destroy any of your resources unless they were built in the default region.
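Concretely, the pattern that makes a default dangerous looks something like this (a sketch; names are placeholders):

provider "aws" {
  # if var.region silently falls back to a default, Terraform looks for
  # (and would destroy) resources in the wrong region
  region = "${var.region}"
}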
This issue has a good reproducible example: #8146
@mitchellh I think the example here is either related or the exact same bug: https://github.com/hashicorp/terraform/issues/8146#issue-170714381
Yep, we're having this at MadGlory too. It's the same issue that the Autodesk team reported.
This issue seems to be resolved for me as of v0.7.7.
Great to hear @AirbornePorcine! I'm going to try the repro case in #8146, centralize there, and close this as a dup!
Terraform Version
TF v0.7.0
Affected Resource(s)
Hard to pinpoint a resource, might be a nested module issue.
Terraform Configuration Files
I tried to reproduce this in a simple case, but I didn't manage to. Our project isn't huge, but it's quite big and it's tricky to share the config files here. We have a top-level module that uses a directory (src, for that matter) as its source, with all the configuration. Besides that, we have another directory (instance_src) with configuration for creating an EC2 instance. Inside the src directory we have multiple modules that use instance_src as their source (a rough sketch of the layout is included below).
Basically, while trying to adapt our codebase to TF 0.7, I tackled many errors which I tried to fix one after the other, mostly converting variables to lists where needed and updating old elements(split()) statements. Now when I'm trying to plan one of the top-level modules (e.g. tf plan -target=module.staging), I'm getting errors about variables not being set. The first issue here is that the error is thrown for each of our environments (top-level modules) even though they are separated; one of them is even in a separate state file, but that still doesn't help.
I've been trying to figure out where this error is coming from, to no avail; the variable is definitely declared and it has a default set. The weirdest part is that when running with TF_LOG=DEBUG, I'm getting a lot of errors in the output mentioning the server_ids output, which comes from the nested module (instance_src). I have made sure that var.count exists and is declared, and I have made sure all the variable types are set correctly. I find this output very confusing.
I realize this isn't the most informational issue out there, but I will gladly supply any config/information required if you could lead me in the right direction of what is needed.
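A minimal sketch of the layout described above (the directory names, var.count, and the server_ids output follow the description; all other details are illustrative):

# main.tf (top level)
module "staging" {
  source = "./src"
  count  = "2"
}

# src/main.tf -- one of several modules using instance_src as a source
variable "count" {}

module "instances" {
  source = "../instance_src"
  count  = "${var.count}"
}

# instance_src/main.tf
variable "count" {}

resource "aws_instance" "server" {
  count         = "${var.count}"
  ami           = "ami-123456"
  instance_type = "t2.micro"
}

output "server_ids" {
  value = ["${aws_instance.server.*.id}"]
}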
References
https://github.com/hashicorp/terraform/issues/7378