mmoreno43 opened this issue 4 weeks ago · Status: Open
We can probably add something to paper over this, but it looks like the issue is that you are providing tags without values:
SNOW_Asset_ID = ""
App_Type = ""
Actually, I take that back - we could do something if these values were passed directly to the module, but they are provided via default_tags on the provider, which we don't have access to. You'll need to ensure you provide values for the associated keys supplied with your tags.
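As a rough sketch of the fix being described (the tag keys are taken from the report above; the values and region are placeholders), the provider-level default_tags would need non-empty values:

```hcl
# Sketch only: tag keys come from the report above, values are placeholders.
# Empty-string values in default_tags propagate to every taggable resource,
# including the module's CloudFormation telemetry stack, which is where the
# apply fails.
provider "aws" {
  region = "us-east-1" # placeholder region, for illustration only

  default_tags {
    tags = {
      SNOW_Asset_ID = "ASSET-0001234" # was "" in the failing configuration
      App_Type      = "eks"           # was "" in the failing configuration
    }
  }
}
```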
Could we make this a true/false input that controls whether this is deployed at all? We create other AWS resources that are not affected by this; only the CloudFormation stack is.
Or at least include a random suffix after the stack name. We invoke this module multiple times in our EKS module to sequence different groups of addons that need to be deployed to a cluster in a specific order, and v1.18.0 breaks that for us because there is no way to stop the stack name from matching the cluster name. We don't mind sharing telemetry, but I don't think it is a safe assumption that users only invoke this module once per cluster name.
edit: Sorry, I guess I didn't read the initial issue closely enough. We're having a similar problem, but with a name conflict on the telemetry CloudFormation stack name. I'm happy to open a separate issue for that if it would be helpful.
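For illustration only, something like the following would cover both asks. The module source and inputs mirror the module's documented usage; the two commented-out inputs are hypothetical and do not exist in v1.18.0:

```hcl
# Hypothetical sketch: the two commented-out inputs below do not exist in
# v1.18.0 of terraform-aws-eks-blueprints-addons; they only illustrate the
# requested per-invocation control over the telemetry stack when the module
# is called more than once per cluster.
module "addons_group_one" {
  source  = "aws-ia/eks-blueprints-addons/aws"
  version = "1.18.0"

  cluster_name      = module.eks.cluster_name
  cluster_endpoint  = module.eks.cluster_endpoint
  cluster_version   = module.eks.cluster_version
  oidc_provider_arn = module.eks.oidc_provider_arn

  # Requested: skip the telemetry stack entirely ...
  # create_usage_telemetry = false
  # ... or give it a name unique to this invocation
  # usage_telemetry_stack_name = "${module.eks.cluster_name}-addons-group-one"
}
```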
Description
Hello, the latest release (1.18.0) has broken my Terraform apply. It seems as though it does not like the tags I have in place in my provider's default_tags.
Versions
Module version [Required]: 1.18.0
Terraform version: 1.6.4
Provider version(s): aws v5.73.0, null v3.2.3, external v2.3.4, local v2.5.2, time v0.12.1, kubernetes v2.33.0, helm v2.16.1, tls v4.0.6, cloudinit v2.3.5
Reproduction Code [Required]
main.tf
providers.tf
terraform.tfvars
Steps to reproduce the behavior:
terraform init
terraform plan
terraform apply
Expected behavior
I expect that Terraform will successfully apply "module.eks_blueprints_addons.aws_cloudformation_stack.usage_telemetry[0]".
Actual behavior
Terraform fails during apply
Terminal Output Screenshot(s)
Additional context
We have been using this module in our environment for a while across multiple clusters. Would it be possible to add a flag so that we can disable this when we don't want it, so that we aren't pinned to version 1.17.0 and can still receive future updates?
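Until such a flag exists, a minimal sketch of the pinning workaround implied above (assuming the module is consumed from the Terraform Registry) would be:

```hcl
# Minimal sketch of the workaround implied above: stay on the last release
# that does not create the telemetry CloudFormation stack.
module "eks_blueprints_addons" {
  source  = "aws-ia/eks-blueprints-addons/aws"
  version = "1.17.0" # pinned until an opt-out flag is available

  # ... cluster and addon inputs unchanged ...
}
```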