Closed: gwohletz closed this issue 2 years ago.
Correction: only some of the resources actually required setting the tags to a non-empty value; the re-updating of aws_autoscaling_group was simply occurring as a side effect of the aws_launch_template needing to be updated.
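(For context, the coupling looks roughly like the sketch below; the resource and subnet names are placeholders, not lines from our actual kubernetes.tf. Because the autoscaling group references the launch template's latest_version, any change to the aws_launch_template also surfaces as an update on the aws_autoscaling_group.)

resource "aws_autoscaling_group" "master-us-west-2-a-masters-XXX" {
  # Sketch with placeholder names/sizes: the version reference below is what
  # makes a launch-template-only tag change ripple into the ASG on each plan.
  name     = "master-us-west-2-a.masters.XXX"
  max_size = 1
  min_size = 1
  launch_template {
    id      = aws_launch_template.master-us-west-2-a-masters-XXX.id
    version = aws_launch_template.master-us-west-2-a-masters-XXX.latest_version
  }
  vpc_zone_identifier = [aws_subnet.us-west-2a-XXX.id]
}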
Also filed this against the Terraform AWS provider: https://github.com/hashicorp/terraform-provider-aws/issues/20371
Further correction: the empty tags that showed up in "aws_iam_instance_profile" and "aws_iam_role" were not auto-generated by kops but rather were the result of a cloudLabels directive in our cluster yaml, which I have since removed.
The following tags with blank values do appear in (and cause problems for) aws_launch_template resource blocks.

Example of a problematic aws_launch_template resource block:
resource "aws_launch_template" "master-us-west-2-a-masters-XXX" {
block_device_mappings {
device_name = "/dev/sda1"
ebs {
delete_on_termination = true
encrypted = true
kms_key_id = "XXX"
volume_size = 64
volume_type = "gp2"
}
}
iam_instance_profile {
name = aws_iam_instance_profile.masters-XXX.id
}
image_id = "ami-XXX"
instance_type = "c5a.xlarge"
key_name = aws_key_pair.XXX.id
lifecycle {
create_before_destroy = true
}
metadata_options {
http_endpoint = "enabled"
http_put_response_hop_limit = 1
http_tokens = "optional"
}
monitoring {
enabled = false
}
name = "master-us-west-2-a.masters.XXX"
network_interfaces {
associate_public_ip_address = false
delete_on_termination = true
security_groups = [aws_security_group.masters-XXX.id, "sg-XXX"]
}
tag_specifications {
resource_type = "instance"
tags = {
"KubernetesCluster" = "XXX"
"Name" = "master-us-west-2-a.masters.XXX"
"environment" = "PROD"
"k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/kops-controller-pki" = ""
"k8s.io/cluster-autoscaler/node-template/label/kubernetes.io/role" = "master"
"k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/control-plane" = ""
"k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/master" = ""
"k8s.io/cluster-autoscaler/node-template/label/node.kubernetes.io/exclude-from-external-load-balancers" = ""
"k8s.io/role/master" = "1"
"kops.k8s.io/instancegroup" = "master-us-west-2-a"
"kubernetes.io/cluster/kube-us-west-2.XXX" = "owned"
}
}
tag_specifications {
resource_type = "volume"
tags = {
"KubernetesCluster" = "XXX"
"Name" = "master-us-west-2-a.masters.XXX"
"environment" = "PROD"
"k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/kops-controller-pki" = ""
"k8s.io/cluster-autoscaler/node-template/label/kubernetes.io/role" = "master"
"k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/control-plane" = ""
"k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/master" = ""
"k8s.io/cluster-autoscaler/node-template/label/node.kubernetes.io/exclude-from-external-load-balancers" = ""
"k8s.io/role/master" = "1"
"kops.k8s.io/instancegroup" = "master-us-west-2-a"
"kubernetes.io/cluster/kube-us-west-2.XXX" = "owned"
}
}
tags = {
"KubernetesCluster" = "XXX"
"Name" = "master-us-west-2-a.masters.XXX"
"environment" = "PROD"
"k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/kops-controller-pki" = ""
"k8s.io/cluster-autoscaler/node-template/label/kubernetes.io/role" = "master"
"k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/control-plane" = ""
"k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/master" = ""
"k8s.io/cluster-autoscaler/node-template/label/node.kubernetes.io/exclude-from-external-load-balancers" = ""
"k8s.io/role/master" = "1"
"kops.k8s.io/instancegroup" = "master-us-west-2-a"
"kubernetes.io/cluster/XXX" = "owned"
}
user_data = filebase64("${path.module}/data/aws_launch_template_master-us-west-2-a.masters.XXX_user_data")
}
Example of the same resource block manually edited to eliminate the blank tag values that were causing trouble:
resource "aws_launch_template" "master-us-west-2-a-masters-XXX" {
block_device_mappings {
device_name = "/dev/sda1"
ebs {
delete_on_termination = true
encrypted = true
kms_key_id = "XXX"
volume_size = 64
volume_type = "gp2"
}
}
iam_instance_profile {
name = aws_iam_instance_profile.masters-XXX.id
}
image_id = "ami-XXX"
instance_type = "c5a.xlarge"
key_name = aws_key_pair.XXX.id
lifecycle {
create_before_destroy = true
}
metadata_options {
http_endpoint = "enabled"
http_put_response_hop_limit = 1
http_tokens = "optional"
}
monitoring {
enabled = false
}
name = "master-us-west-2-a.masters.XXX"
network_interfaces {
associate_public_ip_address = false
delete_on_termination = true
security_groups = [aws_security_group.masters-XXX.id, "sg-XXX"]
}
tag_specifications {
resource_type = "instance"
tags = {
"KubernetesCluster" = "XXX"
"Name" = "master-us-west-2-a.masters.XXX"
"environment" = "PROD"
"k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/kops-controller-pki" = ""
"k8s.io/cluster-autoscaler/node-template/label/kubernetes.io/role" = "master"
"k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/control-plane" = ""
"k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/master" = ""
"k8s.io/cluster-autoscaler/node-template/label/node.kubernetes.io/exclude-from-external-load-balancers" = ""
"k8s.io/role/master" = "1"
"kops.k8s.io/instancegroup" = "master-us-west-2-a"
"kubernetes.io/cluster/kube-us-west-2.XXX" = "owned"
}
}
tag_specifications {
resource_type = "volume"
tags = {
"KubernetesCluster" = "XXX"
"Name" = "master-us-west-2-a.masters.XXX"
"environment" = "PROD"
"k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/kops-controller-pki" = ""
"k8s.io/cluster-autoscaler/node-template/label/kubernetes.io/role" = "master"
"k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/control-plane" = ""
"k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/master" = ""
"k8s.io/cluster-autoscaler/node-template/label/node.kubernetes.io/exclude-from-external-load-balancers" = ""
"k8s.io/role/master" = "1"
"kops.k8s.io/instancegroup" = "master-us-west-2-a"
"kubernetes.io/cluster/kube-us-west-2.XXX" = "owned"
}
}
tags = {
"KubernetesCluster" = "XXX"
"Name" = "master-us-west-2-a.masters.XXX"
"environment" = "PROD"
"k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/kops-controller-pki" = "1"
"k8s.io/cluster-autoscaler/node-template/label/kubernetes.io/role" = "master"
"k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/control-plane" = "1"
"k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/master" = "1"
"k8s.io/cluster-autoscaler/node-template/label/node.kubernetes.io/exclude-from-external-load-balancers" = "1"
"k8s.io/role/master" = "1"
"kops.k8s.io/instancegroup" = "master-us-west-2-a"
"kubernetes.io/cluster/XXX" = "owned"
}
user_data = filebase64("${path.module}/data/aws_launch_template_master-us-west-2-a.masters.XXX_user_data")
}
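For clarity, the only lines that differ between the two blocks are the four empty-valued labels in the top-level tags map, for example:

  # kops-generated value (terraform wants to "update" this tag on every plan)
  "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/master" = ""
  # hand-edited value (plan is empty after one apply)
  "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/master" = "1"

The tag_specifications maps were left untouched, since the tags propagated to instances and EBS volumes appear to tolerate empty values (see point 9 below).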
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
After a period of inactivity, lifecycle/stale is applied
After further inactivity once lifecycle/stale was applied, lifecycle/rotten is applied
After further inactivity once lifecycle/rotten was applied, the issue is closed

You can:
Mark this issue or PR as fresh with /remove-lifecycle stale
Mark this issue or PR as rotten with /lifecycle rotten
Close this issue or PR with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
After a period of inactivity, lifecycle/stale is applied
After further inactivity once lifecycle/stale was applied, lifecycle/rotten is applied
After further inactivity once lifecycle/rotten was applied, the issue is closed

You can:
Mark this issue or PR as fresh with /remove-lifecycle rotten
Close this issue or PR with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
After a period of inactivity, lifecycle/stale is applied
After further inactivity once lifecycle/stale was applied, lifecycle/rotten is applied
After further inactivity once lifecycle/rotten was applied, the issue is closed

You can:
Reopen this issue or PR with /reopen
Mark this issue or PR as fresh with /remove-lifecycle rotten

Please send feedback to sig-contributor-experience at kubernetes/community.

/close
@k8s-triage-robot: Closing this issue.
/kind bug
1. What kops version are you running? The command kops version will display this information.
kops 21.0

2. What Kubernetes version are you running? kubectl version will print the version if a cluster is running or provide the Kubernetes version specified as a kops flag.
kubernetes 1.17.17

3. What cloud provider are you using?
aws

4. What commands did you run? What is the simplest way to reproduce this issue?
update cluster XXXX --create-kube-config=false --target=terraform --out=XXXX

5. What happened after the commands executed?
The resulting terraform contains tags in the aws_launch_template (and other sections) with empty strings as the values; this causes terraform (version 0.12.31 and aws provider 3.51.0) to endlessly think it needs to update the launch template's tags, no matter how many times you plan/apply.

6. What did you expect to happen?
Update things one time and then have an empty terraform plan.

7. Please provide your cluster manifest. Execute kops get --name my.example.com -o yaml to display your cluster manifest. You may want to remove your cluster name and other sensitive information.

8. Please run the commands with most verbose logging by adding the -v 10 flag. Paste the logs into this report, or in a gist and provide the gist link here.
Plans continue to say the following for each launch template; note that it is only the tags with empty values that it thinks need an update.

9. Anything else do we need to know?
If I hand edit kubernetes.tf and set these tags to have a value of "1" instead of "" things work correctly (after applying the plan, subsequent "terraform plan" operations show no changes required). Some resources, such as ebs volumes, seem to support tags with empty values; others do not. Tags on aws_iam_role, aws_iam_instance_profile, aws_autoscaling_group and aws_launch_template had to be changed from "" to "1" in order to make tf stop trying to change them on every run.