Open sgran opened 3 years ago
This seems to be an issue with managing Terraform module dependencies, where the version value appears to be modified during the rancher2_cluster execution. Are the module.launch_template tasks finished before the rancher2_cluster tasks start?
Yes. The dependency graph (DAG) implies they must be, in order for the configuration to be evaluated correctly.
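For context, here is a minimal sketch of the kind of cross-module reference that creates that ordering (the module and output names are assumptions, not the original configuration):

module "launch_template" {
  source = "./modules/launch_template"
  # ...
}

resource "rancher2_cluster" "eks" {
  name = "example"

  eks_config_v2 {
    # ...
    node_groups {
      name = "ng-1"

      launch_template {
        # These references create an implicit dependency, so Terraform
        # evaluates module.launch_template before rancher2_cluster.eks.
        id      = module.launch_template.id
        version = module.launch_template.latest_version
      }
    }
  }
}

Because the version comes from another resource, it is unknown at plan time whenever the launch template itself is being changed in the same run.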
Hi, I'm running into this issue as well. I'm creating node groups with a launch template. My code has the same nested block as above, using eks_config_v2.
When I update the launch template, the plan output gives me ~ version = 1 -> 0, where I would expect to see version = (known after apply).
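For illustration only (resource paths and values are placeholders), the reported plan diff looks roughly like this:

  ~ launch_template {
      ~ version = 1 -> 0                    # what the plan actually shows
    }

whereas the expected diff, with the new version not yet known, would be:

  ~ launch_template {
      ~ version = 1 -> (known after apply)
    }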
Confirming that this is still an issue; we have run into it as well. Any change that causes an update to the launch_template produces an error like the one below.
terraform {
  required_version = ">= 1.5"

  required_providers {
    rancher2 = {
      source  = "rancher/rancher2"
      version = "3.2.0"
    }
  }
}
Initializing provider plugins...
- Finding hashicorp/null versions matching ">= 2.1.0"...
- Finding hashicorp/template versions matching "~> 2.2.0"...
- Finding rancher/rancher2 versions matching "3.2.0"...
- Finding hashicorp/aws versions matching ">= 2.0.0"...
- Finding hashicorp/tls versions matching ">= 2.0.0"...
- Finding hashicorp/local versions matching ">= 1.3.0"...
- Installing rancher/rancher2 v3.2.0...
- Installed rancher/rancher2 v3.2.0 (signed by a HashiCorp partner, key ID 2EEB0F9AD44A135C)
- Installing hashicorp/aws v5.48.0...
- Installed hashicorp/aws v5.48.0 (signed by HashiCorp)
- Installing hashicorp/tls v4.0.5...
- Installed hashicorp/tls v4.0.5 (signed by HashiCorp)
- Installing hashicorp/local v2.5.1...
- Installed hashicorp/local v2.5.1 (signed by HashiCorp)
- Installing hashicorp/null v3.2.2...
- Installed hashicorp/null v3.2.2 (signed by HashiCorp)
- Installing hashicorp/template v2.2.0...
- Installed hashicorp/template v2.2.0 (signed by HashiCorp)
resource "rancher2_cluster" "eks" {
name = var.cluster_name
description = var.cluster_description
eks_config_v2 {
[...]
dynamic "node_groups" {
for_each = var.node
content {
name = node_groups.value.name
desired_size = node_groups.value.desired_size
max_size = node_groups.value.max_size
min_size = node_groups.value.min_size
launch_template {
id = aws_launch_template.eks_launch_template[node_groups.key].id
version = aws_launch_template.eks_launch_template[node_groups.key].latest_version
} # lt
} # content
} # dynamic
} # eks_config_v2
} # resource
resource "aws_launch_template" "eks_launch_template" {
for_each = var.node
name = "${local.prefix_for_names}_${each.value.name}_lt"
[...]
image_id = each.value.ami_id
instance_type = each.value.instance_type
# using a custom AMI, so we're providing b64'd `user_data`
user_data = base64encode(data.template_file.user_data.rendered)
update_default_version = true
[...]
}
node = {
  node_group_1 = {
    desired_size  = 1
    min_size      = 1
    max_size      = 6
    instance_type = "m5.xlarge"
    name          = "node_group_1"
    ami_id        = "ami-xxxxxxxxxxx"
    [...]
  }
  node_group_2 = {
    desired_size  = 1
    min_size      = 1
    max_size      = 3
    instance_type = "t3.small"
    name          = "node_group_2"
    ami_id        = "ami-xxxxxxxxxx"
    [...]
  }
}
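For completeness, a plausible declaration for that map (a sketch only; the actual object type in the module may differ and include the attributes elided above):

variable "node" {
  type = map(object({
    name          = string
    desired_size  = number
    min_size      = number
    max_size      = number
    instance_type = string
    ami_id        = string
  }))
}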
│ Error: Provider produced inconsistent final plan
│
│ When expanding the plan for module.rancher-eks-cluster.rancher2_cluster.eks
│ to include new values learned so far during apply, provider
│ "registry.terraform.io/rancher/rancher2" produced an invalid new value for
│ .eks_config_v2[0].node_groups[1].launch_template[0].id: was
│ cty.StringVal(""), but now cty.StringVal("lt-xxxxxxxxxxxxxxxx").
│
│ This is a bug in the provider, which should be reported in the provider's
│ own issue tracker.
╵
╷
│ Error: Provider produced inconsistent final plan
│
│ When expanding the plan for module.rancher-eks-cluster.rancher2_cluster.eks
│ to include new values learned so far during apply, provider
│ "registry.terraform.io/rancher/rancher2" produced an invalid new value for
│ .eks_config_v2[0].node_groups[1].launch_template[0].version: was
│ cty.NumberIntVal(0), but now cty.NumberIntVal(1).
│
│ This is a bug in the provider, which should be reported in the provider's
│ own issue tracker.
╵
╷
│ Error: Provider produced inconsistent final plan
│
│ When expanding the plan for module.eks[0].rancher2_cluster.this to include
│ new values learned so far during apply, provider
│ "registry.terraform.io/rancher/rancher2" produced an invalid new value for
│ .eks_config_v2[0].node_groups[0].launch_template[0].version: was
│ cty.NumberIntVal(0), but now cty.NumberIntVal(4).
│
│ This is a bug in the provider, which should be reported in the provider's
│ own issue tracker.
╵
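As a side note, one generic way to step around the inconsistent-plan error (not a fix for the provider bug) is to apply the launch template change on its own first, so that its new id and latest_version are already known when the cluster resource is planned:

terraform apply -target='aws_launch_template.eks_launch_template'
terraform apply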
Given a rancher2_cluster with a nested launch_template block like the one above, this can be triggered by changing the launch_template parameters in the external aws_launch_template resource.