Closed in4mer closed 5 years ago
Hi @in4mer,
This is a couple of issues combined, and basically the same situation as #16762.
If reading the data source has to be delayed because it depends on a computed value, the `inputs` and `outputs` maps are also going to be computed. Indexing into a computed map should probably just yield an unknown result, but currently it's not allowed at all.
Also note that if `efs_enabled` is empty, then the `efs_mount_target_dns_name` interpolation is going to have problems because it references a resource with a count of 0. This will be taken care of when we get short-circuit conditional evaluation.
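As a rough sketch of the short-circuit problem (hypothetical resource names, not from the original configuration): without short-circuit evaluation, both arms of a conditional are evaluated eagerly, so a reference to a zero-count resource in the *unselected* arm can still fail.

```hcl
# Hypothetical illustration (0.12-style syntax) of a conditional whose
# unused arm references a resource with count = 0.
variable "efs_enabled" {
  default = false
}

resource "aws_efs_file_system" "efs" {
  count = var.efs_enabled ? 1 : 0
}

locals {
  # Without short-circuit (lazy) evaluation, the reference to the
  # zero-count resource is evaluated even when efs_enabled is false,
  # and fails. With short-circuit evaluation, only the selected arm
  # ("none" here) is ever evaluated.
  dns_name = var.efs_enabled ? aws_efs_file_system.efs[0].dns_name : "none"
}
```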
Well, couple that with the inability to `count = 0` a module, and you have a pretty incredibly inflexible system.
This is also a structural workaround for not being able to put hashes into a ternary.
Then add in having a type-unsafe declarative language. The end result of everything is unvarnished frustration.
Thank you for commenting and linking to the first bug.
Updating this with the current 0.12 status, so we can tackle this in the new codebase.
A modified example of this issue using the new syntax and features would look like:
```hcl
variable "enable_efs" {
  default = true
}

resource "test_resource" "efs" {
  count        = var.enable_efs ? 1 : 0
  required     = "name"
  required_map = { "a" = "b" }
}

data "null_data_source" "efs_ctag0" {
  count = var.enable_efs ? 1 : 0
  inputs = {
    efs_partition             = var.enable_efs
    efs_mount_target_dns_name = var.enable_efs ? "none" : test_resource.efs.computed_from_required
  }
}

data "null_data_source" "efs_ctag1" {
  count = var.enable_efs ? 1 : 0
  inputs = {
    keys   = var.enable_efs ? "efs_enabled" : join(",", keys(data.null_data_source.efs_ctag0.outputs))
    values = var.enable_efs ? "no" : join(",", values(data.null_data_source.efs_ctag0.outputs))
  }
}

data "null_data_source" "efs_tags" {
  count  = var.enable_efs ? 1 : 0
  inputs = zipmap(split(",", data.null_data_source.efs_ctag1.outputs["keys"]), split(",", data.null_data_source.efs_ctag1.outputs["values"]))
}
```
Which now results in:

```
configuration for data.null_data_source.efs_tags[0] still contains unknown values during apply
```

(The split+zipmap functions do not affect the outcome, but I'll leave them in there to verify the interpolation function behavior of unknown values in this case as well.)
Oh, this does seem to be resolved, but the error isn't very helpful in this case. Since the referenced data sources have a `count` defined, indexing the instances fixes the error: e.g. `data.null_data_source.efs_ctag1[0].outputs["keys"]` and `data.null_data_source.efs_ctag1[0].outputs["values"]`.
```
Error: Unsupported attribute

  on missing-attribute-inputs.tf line 29, in data "null_data_source" "efs_tags":
  29:   inputs = zipmap(split(",", data.null_data_source.efs_ctag1.outputs["keys"]), split(",", data.null_data_source.efs_ctag1.outputs["values"]))
    |----------------
    | data.null_data_source.efs_ctag1 is tuple with 1 element

This value does not have any attributes.
```
This requirement for the index to be present is new in Terraform 0.12: it's one of the situations where we had to accept some incompatibility in order to make room for a new feature, which in this case is the ability to treat the resource itself as a tuple of objects when `count` is set.
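To illustrate that new semantics (a sketch of my own, using a hypothetical resource rather than the issue's example): with `count` set, the resource name refers to the whole tuple, so instances must be selected by index or with the splat operator.

```hcl
# Hypothetical sketch of 0.12 count-as-tuple semantics.
resource "test_resource" "example" {
  count    = 2
  required = "name-${count.index}"
}

locals {
  first = test_resource.example[0].required # index a single instance
  all   = test_resource.example[*].required # splat: list of all instances
  # test_resource.example.required would fail: the resource itself is a
  # tuple of objects, and a tuple "does not have any attributes".
}
```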
This is one of the situations that the configuration upgrade tool should be able to detect and fix automatically. The fix to apply here is to index the resource using `count.index`:
```hcl
data "null_data_source" "efs_tags" {
  count  = var.enable_efs ? 1 : 0
  inputs = zipmap(split(",", data.null_data_source.efs_ctag1[count.index].outputs["keys"]), split(",", data.null_data_source.efs_ctag1[count.index].outputs["values"]))
}
```
Although it won't be ready in time for Terraform 0.12, the planned `for_each` feature (in #19291) will allow this to be written more succinctly once implemented:
```hcl
# Won't work in 0.12.0, but planned for a subsequent release
data "null_data_source" "efs_tags" {
  for_each = data.null_data_source.efs_ctag1
  inputs   = zipmap(split(",", each.value.outputs["keys"]), split(",", each.value.outputs["values"]))
}
```
... though indeed other 0.12 features should be able to tidy this example up considerably by removing the `null_data_source` usage altogether in favor of local values and `for` expressions.
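A sketch of what that tidier form might look like (my own assumption about the intended rewrite, not shown in the thread), replacing the chained data sources with `locals`:

```hcl
# Hypothetical rewrite using local values and a for expression instead of
# the chained null_data_source blocks.
variable "enable_efs" {
  default = true
}

locals {
  base_tags = var.enable_efs ? { efs_enabled = "yes" } : {}

  # A for expression transforms the map directly; no data source and no
  # join/split round-trip through comma-separated strings is needed.
  efs_tags = { for k, v in local.base_tags : k => v }
}
```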
Since this is now fixed in master and ready for inclusion in the forthcoming v0.12.0 release, I'm going to close it. We'll continue to use #19291 to track the `for_each` feature, which has had groundwork laid during the v0.12 cycle but needs some additional post-release work to make it fully usable.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Apologies in advance if this is a null provider bug; this seems more like a core functionality issue to me.
### Terraform Version

### Terraform Configuration Files

### Debug Output

I have a logfile but can't figure out how to encrypt it using your pgp key. The TF docs are not sufficient.

### Crash Output

No crash

### Expected Behavior

Normal completion

### Actual Behavior

### Steps to Reproduce

I have been unable to create a sanitary, simplified test case which exhibits this behavior. I am also unable to include the entirety of the TF in question, or the log, due to insufficient encryption instructions.

### Important Factoids

### References