Closed: lordmuffin closed this issue 4 years ago.
I'm running into the same thing on Terraform v0.13.4. I've yet to find a workaround that doesn't involve removing the worker_groups_launch_template completely.
There is probably something wrong in your inputs. Can you please share your worker_groups_launch_template variable, correctly formatted?
@barryib The second map is what I've attempted to add; adding it produced the error posted by @lordmuffin:
worker_groups_launch_template = [
  {
    instance_type                        = var.eks_workers.instance_type
    asg_max_size                         = var.eks_workers.asg_max_size
    cpu_credits                          = "unlimited"
    root_encrypted                       = true
    max_instance_lifetime                = 604800
    iam_instance_profile_name            = aws_iam_instance_profile.worker_profile.name
    pre_userdata                         = data.template_file.userdata.rendered
    metadata_http_tokens                 = "required"
    metadata_http_put_response_hop_limit = 1
    tags = [
      {
        "key"                 = "k8s.io/cluster-autoscaler/enabled"
        "value"               = "true"
        "propagate_at_launch" = true
      },
      {
        "key"                 = "k8s.io/cluster-autoscaler/${var.application}"
        "value"               = "owned"
        "propagate_at_launch" = true
      }
    ]
  },
  {
    instance_type                        = var.eks_workers.instance_type
    asg_max_size                         = var.eks_workers.asg_max_size
    cpu_credits                          = "unlimited"
    root_encrypted                       = true
    max_instance_lifetime                = 604800
    iam_instance_profile_name            = aws_iam_instance_profile.worker_profile.name
    pre_userdata                         = data.template_file.userdata.rendered
    metadata_http_tokens                 = "required"
    metadata_http_put_response_hop_limit = 1
    tags = [
      {
        "key"                 = "k8s.io/cluster-autoscaler/enabled"
        "value"               = "true"
        "propagate_at_launch" = true
      },
      {
        "key"                 = "k8s.io/cluster-autoscaler/${var.application}"
        "value"               = "owned"
        "propagate_at_launch" = true
      }
    ]
  }
]
Can you please tell me your Terraform and provider versions?
> @barryib The second map is what I've attempted to add; adding it produced the error posted by @lordmuffin
So in your case, you wanted to expand your worker groups from 1 to 2? Did you change the order of the maps?
Yep that's right! I expanded from 1 to 2 by just appending the second map to the list, so the order didn't change.
Here is the version info:
Terraform v0.13.4
+ provider registry.terraform.io/hashicorp/aws v3.9.0
+ provider registry.terraform.io/hashicorp/helm v1.3.1
+ provider registry.terraform.io/hashicorp/kubernetes v1.13.2
+ provider registry.terraform.io/hashicorp/local v1.4.0
+ provider registry.terraform.io/hashicorp/null v2.1.2
+ provider registry.terraform.io/hashicorp/random v2.3.0
+ provider registry.terraform.io/hashicorp/template v2.1.2
Thanks for the quick responses and for helping out, it's super appreciated!
Are you getting this error from the latest version of this module, v13.0.0? If not, can you please test with that version as well?
Thanks again for your help.
no problem!
I tested with the latest version (v13.0.0) and I got the same result.
OK. I'll do some tests with that release.
BTW, where is the data.template_file.userdata.rendered in pre_userdata = data.template_file.userdata.rendered coming from? Are you trying to access this resource inside the module? If yes, I think you're creating a race condition where you depend on something that doesn't exist yet. Please remove it and test again. Furthermore, the module's data.template_file.userdata is created for worker groups with launch configurations, while in your case you're defining worker groups with launch templates (worker_groups_launch_template = []).
If you want to generate your pre_userdata dynamically, please use another template, because pre_userdata is only one part of your generated userdata.
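For illustration, a minimal sketch of such a dedicated template, here rendered with Terraform's built-in templatefile() function (the file path and the region variable are assumptions for the example, not from this thread):

data "aws_region" "current" {}

locals {
  # A user-maintained template, kept separate from the module's internal
  # userdata template.
  pre_userdata = templatefile("${path.module}/templates/pre_userdata.sh.tpl", {
    region = data.aws_region.current.name
  })
}

The result can then be passed as pre_userdata = local.pre_userdata in a worker group map.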
The pre_userdata = data.template_file.userdata.rendered is coming from a data source that we defined:
data "template_file" "userdata" {
template = file("${path.module}/templates/pre_userdata.sh.tpl")
vars = {
region = data.aws_region.current.name
}
}
I tried to remove pre_userdata and still ran into the same error as before.
Unfortunately, I was unable to reproduce this issue. I tried a lot of combinations from examples/launch_templates, expanding from 1 to 2 and 3 worker groups without any error.
Here are my Terraform and provider versions:
Terraform v0.13.4
+ provider registry.terraform.io/hashicorp/aws v3.9.0
+ provider registry.terraform.io/hashicorp/kubernetes v1.13.2
+ provider registry.terraform.io/hashicorp/local v1.4.0
+ provider registry.terraform.io/hashicorp/null v2.1.2
+ provider registry.terraform.io/hashicorp/random v2.3.0
+ provider registry.terraform.io/hashicorp/template v2.2.0
terraform-aws-eks version: 13.0.0
Please share all your code, and the exact error message.
I think I've found the issue, in my code at least. I had a depends_on set in the module call for the instance profile I was referencing:
depends_on = [aws_iam_instance_profile.worker_profile]
After removing that, I was able to plan and apply without error.
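For reference, a minimal sketch of the change (the module inputs here are illustrative, not the actual configuration from this thread):

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "13.0.0"

  cluster_name = var.cluster_name
  subnets      = var.subnets
  vpc_id       = var.vpc_id

  worker_groups_launch_template = local.worker_groups

  # Removed: this explicit module-level dependency is what triggered the
  # "Invalid index" error when the worker group list was expanded.
  # depends_on = [aws_iam_instance_profile.worker_profile]
}

Note that the worker group maps above already reference aws_iam_instance_profile.worker_profile.name through iam_instance_profile_name, which gives Terraform the same ordering implicitly, so the explicit depends_on was redundant.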
Ok. Thanks for your feedback. Closing this issue.
I encountered something similar while using the module. If you have multiple worker groups defined via worker_groups and you go from 2 to 3, you see an error:
Error: Invalid index
on ...../workers.tf line 193, in resource "aws_launch_configuration" "workers":
193: user_data_base64 = base64encode(data.template_file.userdata.*.rendered[count.index])
|----------------
| count.index is 2
| data.template_file.userdata is tuple with 2 elements
The given key does not identify an element in this collection value
I did have a depends_on attribute defined for this EKS module; when I removed it, the plan and apply worked fine. It's definitely worth investigating why this is the case (one plausible mechanism is sketched below).
I am using Terraform v0.13.5 and v13.0.0 of the terraform-aws-modules/eks/aws module. I tried the latest version of the module and still saw the error while planning and applying.
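One plausible mechanism (my reading; not confirmed in this thread): when depends_on is set on a module call, Terraform defers reading every data source inside that module, so a count-indexed splat over those data sources can be evaluated against the tuple length recorded from the previous run. Roughly simplified from the module's workers.tf (names abbreviated, not the module's exact code):

locals {
  worker_group_count = length(var.worker_groups)
}

data "template_file" "userdata" {
  count    = local.worker_group_count
  template = file("${path.module}/templates/userdata.sh.tpl")
}

resource "aws_launch_configuration" "workers" {
  count = local.worker_group_count

  # With a module-level depends_on, the data source tuple can still have
  # its old length (2) while count has already grown to 3, so
  # count.index = 2 no longer identifies an element.
  user_data_base64 = base64encode(data.template_file.userdata.*.rendered[count.index])
}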
I am encountering the same issue and I do not have any depends_on attribute defined. I am trying to increase worker groups from 3 to 4. Can anyone please advise? Thanks in advance.
My deployment had this when calling this EKS module:
depends_on = [
  aws_iam_instance_profile.node_profile
]
So after removing this part, I was able to expand my node groups from 4 to 5. In short: the initial deployment required the depends_on, while expanding the node groups required removing it.
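A sketch of one way to avoid that catch-22, assuming the instance profile is the only reason for the explicit dependency: reference the profile's attribute directly inside each worker group map, which creates an implicit dependency for the initial deployment without a module-level depends_on (names below are illustrative).

worker_groups = [
  {
    name          = "workers"
    instance_type = "m5.large"
    asg_max_size  = 3

    # Referencing the resource attribute is enough for Terraform to order
    # the instance profile before the worker group; no depends_on needed.
    iam_instance_profile_name = aws_iam_instance_profile.node_profile.name
  }
]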
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
I have issues
I have issues with expanding my existing code to allow more than 2 worker group launch templates. It results in an index error. I'm not entirely sure if it's a bug, but after reviewing the module, I believe it is.
I'm submitting a...
What is the current behavior?
We expanded this Terraform code from 2 to 3 maps:
worker_groups_launch_template = [
  {
    name                      = "kiam"
    instance_type             = "m5.large"
    asg_max_size              = 1
    subnets                   = flatten(data.terraform_remote_state.vpc.outputs.eks_private_subnet_ids)
    pre_userdata              = file("${path.root}/kiamserver.userdata")
    kubelet_extra_args        = "--node-labels=node.kubernetes.io/role=kiam"
    iam_instance_profile_name = aws_iam_instance_profile.server-node-instanceprofile.name
  },
  {
    name                      = "spot-1"
    spot_price                = "0.096"
    instance_type             = "m5.large"
    asg_desired_capacity      = 3
    asg_min_size              = 3
    asg_max_size              = 5
    subnets                   = flatten(data.terraform_remote_state.vpc.outputs.eks_private_subnet_ids)
    pre_userdata              = file("${path.root}/kiamagent.userdata")
    kubelet_extra_args        = "--node-labels=node.kubernetes.io/role=worker --node-labels=node.kubernetes.io/lifecycle=spot"
    suspended_processes       = ["AZRebalance"]
    iam_instance_profile_name = aws_iam_instance_profile.server-node-instanceprofile.name
  },
  {
    name                      = "spot-public-1"
    spot_price                = "0.096"
    instance_type             = "m5.large"
    asg_desired_capacity      = 3
    asg_min_size              = 3
    asg_max_size              = 5
    subnets                   = flatten(data.terraform_remote_state.vpc.outputs.eks_public_subnet_ids)
    kubelet_extra_args        = "--node-labels=node.kubernetes.io/role=worker --node-labels=node.kubernetes.io/lifecycle=spot"
    suspended_processes       = ["AZRebalance"]
    iam_instance_profile_name = aws_iam_instance_profile.server-node-instanceprofile.name
  }
]
This results in:
Error: Invalid index
on .terraform/modules/runner-eks/workers_launch_template.tf line 273, in resource "aws_launch_template" "workers_launch_template":
273: data.template_file.launch_template_userdata.*.rendered[count.index],
|----------------
| count.index is 2
| data.template_file.launch_template_userdata is tuple with 2 elements
The given key does not identify an element in this collection value.
If this is a bug, how to reproduce? Please include a code sample if relevant.
What's the expected behavior?
I expected the module to simply create another launch template.
Are you able to fix this problem and submit a PR? Link here if you have already.
Environment details
Any other relevant info
I am looking for any insights here that would lead to an answer. We decided to use this module over rolling our own because of all the features, and it's proven very useful! Kudos to the team.