terraform-aws-modules / terraform-aws-eks

Terraform module to create Amazon Elastic Kubernetes Service (EKS) resources πŸ‡ΊπŸ‡¦
https://registry.terraform.io/modules/terraform-aws-modules/eks/aws
Apache License 2.0

Cycle error on destroy when updating from 12.0 to 12.2 #950

Closed: jaimehrubiks closed this issue 2 years ago

jaimehrubiks commented 4 years ago

I have issues

I see the following issue on destroy:

Error: Cycle: module.eks.aws_security_group_rule.cluster_https_worker_ingress[0] (destroy), module.eks.aws_security_group.workers[0] (destroy), module.eks.aws_eks_cluster.this[0] (destroy)

Explanation

I'm not using any security group variables apart from "worker_additional_security_group_ids". The issue seems to have appeared after upgrading the module to 12.2 from 12.1 or 12.0 (probably 12.1, but I'm not sure; I had ~> 12.0 set and don't remember when I last ran init).

Workaround

I downgraded to 12.1 and deleted the cluster successfully. Sadly, I have a few other clusters I don't plan to delete, and I'm worried they will hit further issues in the future.

Any thoughts?

milesarmstrong commented 4 years ago

We're seeing this too. I'm thinking it's this change: https://github.com/terraform-aws-modules/terraform-aws-eks/pull/933?

dpiddockcmp commented 4 years ago

Annoying that you have to say "yes" to the destroy in order to trigger the error; it makes debugging the module really time consuming. It doesn't appear to happen with Terraform 0.12.9 (the module's minimum version) but does with 0.12.28 (the latest), so this is a Terraform bug introduced at some point. I wouldn't hold out much hope of it being addressed in 0.12, with 0.13 so close to release.

This is related to Terraform storing dependency information in the state file. That information is not updated when an apply does not modify the resource itself, so module.eks.aws_security_group.workers[0] still thinks it depends on module.eks.aws_eks_cluster.this[0] even though it no longer does in the Terraform config:

cat terraform.tfstate | jq '.resources[] | select(.type=="aws_security_group" and .name=="workers").instances[0].dependencies'
[
  "module.eks.aws_cloudwatch_log_group.this",
  "module.eks.aws_eks_cluster.this",
  "module.eks.aws_iam_role.cluster",
  "module.eks.aws_iam_role_policy_attachment.cluster_AmazonEKSClusterPolicy",
  "module.eks.aws_iam_role_policy_attachment.cluster_AmazonEKSServicePolicy",
  "module.eks.aws_security_group.cluster",
  "module.vpc.aws_subnet.private",
  "module.vpc.aws_vpc.this",
  "module.vpc.aws_vpc_ipv4_cidr_block_association.this",
  "random_string.suffix"
]

Compare to the dependencies recorded for the workers security group when it is created under 12.2.0:

cat terraform.tfstate | jq '.resources[] | select(.type=="aws_security_group" and .name=="workers").instances[0].dependencies'
[
  "module.vpc.aws_vpc.this",
  "random_string.suffix"
]
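
To see which resources in your own state still carry a stale edge, something like the following should work as a rough check (it assumes the cluster lives at the default module.eks address; adjust to match your configuration):

terraform state pull | jq '.resources[]
  | select([.instances[0].dependencies[]?] | any(. == "module.eks.aws_eks_cluster.this"))
  | {module, type, name}'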

The only current solution is to break the cycle as Terraform sees it: manually delete the cluster_https_worker_ingress rule and then drop it from the state file:

# Look up the rule's group IDs so they can be plugged into the next command
terraform state show module.eks.aws_security_group_rule.cluster_https_worker_ingress[0]
# Delete the rule out-of-band so AWS matches what Terraform will expect
aws ec2 revoke-security-group-ingress --group-id $security_group_id --source-group $source_security_group_id --port 443 --protocol tcp
# Drop the rule from the state so Terraform no longer tries to destroy it
terraform state rm module.eks.aws_security_group_rule.cluster_https_worker_ingress
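
The two shell variables above come from the state show output. As a rough sketch (assuming jq is available and the default module.eks path), they can also be read straight out of the state:

security_group_id=$(terraform state pull | jq -r '.resources[] | select(.type == "aws_security_group_rule" and .name == "cluster_https_worker_ingress") | .instances[0].attributes.security_group_id')
source_security_group_id=$(terraform state pull | jq -r '.resources[] | select(.type == "aws_security_group_rule" and .name == "cluster_https_worker_ingress") | .instances[0].attributes.source_security_group_id')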

I'll see if I can create a minimal working example and raise a Terraform issue.

mhulscher commented 4 years ago

Dealing with this as well. Not trying to judge here, but I am surprised that this was not caught by CI/CD.

dpiddockcmp commented 4 years ago

What CI/CD? πŸ˜… We're an unfunded open source community project. Bear in mind that a single EKS create/destroy cycle takes in the region of 40 minutes, and an upgrade can easily add another 30.

I did test the upgrade and destroy, but I happened to use Terraform 0.12.9 to make sure it still works with the minimum supported version. I didn't check for bugs in the latest version.

imw commented 4 years ago

Ran into this as well, though it appears to me that the interaction of terraform-aws-eks 12.2 and TF >= 0.12.20 may not be the entire story. As long as I'm running with AWS provider ~> 2.70, I am able to successfully update the cluster. With AWS provider 3.0.0, updating the cluster runs into the cycle issue noted above. I've not yet had the opportunity to try simply deleting; I'll update when I have the chance.

Update: Destruction appears to work as well, with TF==0.12.29, terraform-aws-eks==12.2, AWS provider==2.70

mhulscher commented 4 years ago

Did some testing. Given:

  1. install terraform-aws-eks 12.1
  2. upgrade to terraform-aws-eks 12.2
  3. destroy

Steps 1 and 2 work without issues.

Step 3 (destroy) throws the cycle error:

Error: Cycle: module.eks.module.eks.aws_eks_cluster.this[0] (destroy), module.eks.module.eks.aws_security_group_rule.cluster_https_worker_ingress[0] (destroy), module.eks.module.eks.aws_security_group.workers[0] (destroy)

dpiddockcmp commented 4 years ago

Yes, this is caused by a bug present in Terraform since 0.12.15.

Terraform never prunes dependency information from the state file, which leads to drift between the state and the configuration. Unfortunately we've surfaced this bug by rearranging some of the dependencies in the module. More details in the Terraform issue: https://github.com/hashicorp/terraform/issues/25611
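
For anyone who would rather not delete the rule at all, here is a rough sketch of pruning the stale dependency edge from the state directly (take a backup first; the addresses assume the default module.eks path and should be adjusted to your configuration):

terraform state pull > state.json
cp state.json state.json.backup
# drop the stale edge from the workers security group and bump the serial so the push is accepted
jq '(.resources[] | select(.type == "aws_security_group" and .name == "workers") | .instances[0].dependencies) |= map(select(. != "module.eks.aws_eks_cluster.this")) | .serial += 1' state.json > state.fixed.json
terraform state push state.fixed.json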

tnimni commented 3 years ago

@dpiddock assuming I want to upgrade to module version 13.0.0 and also upgrade to Terraform 0.13, should I do something in advance to avoid this?

I don't want to have to downgrade to 0.12 whenever I need to delete a cluster, or have to do it manually. I'm not sure downgrading to 0.12 is even possible.

Thank you.

ingluife commented 3 years ago

I'm having the same issue here!

But I hit the error when I run terraform apply. When I answer yes to apply the plan, I get this message:

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

Error: Cycle: module.eks_cluster.aws_launch_template.workers_launch_template[0], module.eks_cluster.random_pet.workers_launch_template[0], module.eks_cluster.random_pet.workers_launch_template[0] (destroy deposed c67826c4), module.eks_cluster.aws_launch_template.workers_launch_template[1] (destroy), module.eks_cluster.aws_iam_instance_profile.workers_launch_template[1] (destroy)

More context:

Module definition:

module "eks_cluster" {
  source = "terraform-aws-modules/eks/aws"
  version = "12.1.0"

  .....

  cluster_name                    = data.terraform_remote_state.reg.outputs.network.dashname
  cluster_version                 = "1.17"
  subnets                         = [for subnet in data.terraform_remote_state.reg.outputs.network.subnets.public : subnet.id]
  vpc_id                          = data.terraform_remote_state.reg.outputs.network.vpc.id
  enable_irsa                     = true
  kubeconfig_name                 = data.terraform_remote_state.reg.outputs.network.dashname
  cluster_endpoint_private_access = true

  ....

  worker_groups_launch_template = [
    {
      name                    = "spot01",
      override_instance_types = ["m5dn.xlarge"]
      spot_instance_pools     = 2
      asg_max_size            = 3
      asg_desired_capacity    = 3
      spot_price              = "0.11"
      kubelet_extra_args      = "--node-labels=kubernetes.io/lifecycle=spot"
      public_ip               = true
      key_name                = "KeyName"  # updated to change the key pair name
      root_volume_size        = 30

      ....
    }
  ]
}

Thanks!

barryib commented 3 years ago

We recently fixed a lot of cycle-error issues. Can you please test this with the latest version of the module? See the changelog for more info: https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/CHANGELOG.md

ingluife commented 3 years ago

Hi @barryib, thanks for your answer. I have just tested with the latest module version, v13.2.1, and got the same error.

...
Initializing modules...
Downloading terraform-aws-modules/eks/aws 13.2.1 for eks_cluster...
- eks_cluster in .terraform/modules/eks_cluster
- eks_cluster.fargate in .terraform/modules/eks_cluster/modules/fargate
- eks_cluster.node_groups in .terraform/modules/eks_cluster/modules/node_groups

Initializing the backend...

Initializing provider plugins...

The following providers do not have any version constraints in configuration,
so the latest version was installed.
...

After applying the plan I got:

...
Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

Error: Cycle: module.eks_cluster.local.auth_launch_template_worker_roles, module.eks_cluster.local.configmap_roles, module.eks_cluster.kubernetes_config_map.aws_auth[0], module.eks_cluster.aws_launch_template.workers_launch_template[0], module.eks_cluster.aws_autoscaling_group.workers_launch_template[0], module.eks_cluster.random_pet.workers_launch_template[0] (destroy deposed 3c546aca), module.eks_cluster.aws_launch_template.workers_launch_template[1] (destroy), module.eks_cluster.aws_iam_instance_profile.workers_launch_template[1] (destroy), module.eks_cluster.random_pet.workers_launch_template[0]

barryib commented 3 years ago

@ingluife Can you please share your plan output? Why does Terraform want to destroy module.eks_cluster.aws_iam_instance_profile.workers_launch_template and module.eks_cluster.aws_launch_template.workers_launch_template? Did you change the order of your worker groups in var.workers_launch_templates? Or remove something from that list?

ingluife commented 3 years ago

Sure @barryib!

Did you change the order of your worker groups in var.workers_launch_templates?

Answer: No, I didn't.

Or remove something from that list?

Answer: Yes, I changed the key_name on worker_groups_launch_template and updated the ingress on one of the security groups in additional_security_group_ids.

Terraform Plan:

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
  ~ update in-place
  - destroy
+/- create replacement and then destroy

Terraform will perform the following actions:

  # aws_security_group.admin_ssh will be updated in-place
  ~ resource "aws_security_group" "admin_ssh" {
        arn                    = "arn:aws:ec2:XXXXXXXXX:...."
        description            = "nodes that can be connected to via ssh by administrators"
        egress                 = []
        id                     = "sg-ddddddd"
      ~ ingress                = [
            {
                cidr_blocks      = [
                    "x.x.x.x/32",
                ]
                description      = "Ingress1 "
                from_port        = 22
                ipv6_cidr_blocks = []
                prefix_list_ids  = []
                protocol         = "tcp"
                security_groups  = []
                self             = false
                to_port          = 22
            },
            {
                cidr_blocks      = [
                    "104.33.212.83/32",
                ]
                description      = "Ingress2"
                from_port        = 22
                ipv6_cidr_blocks = []
                prefix_list_ids  = []
                protocol         = "tcp"
                security_groups  = []
                self             = false
                to_port          = 22
            },
            {
                cidr_blocks      = [
                    "x.x.x.x/32",
                ]
                description      = "Ingress3"
                from_port        = 22
                ipv6_cidr_blocks = []
                prefix_list_ids  = []
                protocol         = "tcp"
                security_groups  = []
                self             = false
                to_port          = 22
            },
            {
                cidr_blocks      = [
                    "x.x.x.x/32",
                ]
                description      = "Ingress4"
                from_port        = 22
                ipv6_cidr_blocks = []
                prefix_list_ids  = []
                protocol         = "tcp"
                security_groups  = []
                self             = false
                to_port          = 22
            },
            {
                cidr_blocks      = [
                    "x.x.x.x/13",
                ]
                description      = "Ingress5"
                from_port        = 22
                ipv6_cidr_blocks = []
                prefix_list_ids  = []
                protocol         = "tcp"
                security_groups  = []
                self             = false
                to_port          = 22
            },
            {
                cidr_blocks      = [
                    "x.x.x.x/32",
                ]
                description      = "Ingress5"
                from_port        = 22
                ipv6_cidr_blocks = []
                prefix_list_ids  = []
                protocol         = "tcp"
                security_groups  = []
                self             = false
                to_port          = 22
            },
            {
                cidr_blocks      = [
                    "x.x.x.x/14",
                ]
                description      = "Ingress6"
                from_port        = 22
                ipv6_cidr_blocks = []
                prefix_list_ids  = []
                protocol         = "tcp"
                security_groups  = []
                self             = false
                to_port          = 22
            },
            {
                cidr_blocks      = [
                    "x.x.x.x/32",
                ]
                description      = "Ingress7"
                from_port        = 22
                ipv6_cidr_blocks = []
                prefix_list_ids  = []
                protocol         = "tcp"
                security_groups  = []
                self             = false
                to_port          = 22
            },
          - {
              - cidr_blocks      = [
                  - "x.x.x.x/32",
                ]
              - description      = "Ingress8"
              - from_port        = 22
              - ipv6_cidr_blocks = []
              - prefix_list_ids  = []
              - protocol         = "tcp"
              - security_groups  = []
              - self             = false
              - to_port          = 22
            },
          + {
              + cidr_blocks      = [
                  + "x.x.x.x/32",
                ]
              + description      = "Ingress9"
              + from_port        = 22
              + ipv6_cidr_blocks = []
              + prefix_list_ids  = []
              + protocol         = "tcp"
              + security_groups  = []
              + self             = false
              + to_port          = 22
            },
            {
                cidr_blocks      = [
                    "x.x.x.x/13",
                ]
                description      = "Ingress10"
                from_port        = 22
                ipv6_cidr_blocks = []
                prefix_list_ids  = []
                protocol         = "tcp"
                security_groups  = []
                self             = false
                to_port          = 22
            },
            {
                cidr_blocks      = [
                    "x.x.x.x/12",
                ]
                description      = "Ingress11"
                from_port        = 22
                ipv6_cidr_blocks = []
                prefix_list_ids  = []
                protocol         = "tcp"
                security_groups  = []
                self             = false
                to_port          = 22
            },
        ]
        name                   = "terraform-20xxxxxx"
        owner_id               = "xxxxxxxxxxxxxxxxx"
        revoke_rules_on_delete = false
        tags                   = {}
        vpc_id                 = "vpc-xxxxxxxxxxx"
    }

  # module.eks_cluster.aws_autoscaling_group.workers_launch_template[0] will be updated in-place
  ~ resource "aws_autoscaling_group" "workers_launch_template" {
        arn                       = "arn:aws:autoscaling:us-west-2:xxxxxxxxxxxxxxxxxxxx"
        availability_zones        = [
            "us-west-2a",
            "us-west-2b",
        ]
        default_cooldown          = 300
        desired_capacity          = 3
        enabled_metrics           = []
        force_delete              = false
        health_check_grace_period = 300
        health_check_type         = "EC2"
        id                        = "dev-uxxxxxxxxxx"
        load_balancers            = []
        max_instance_lifetime     = 0
        max_size                  = 3
        metrics_granularity       = "1Minute"
        min_size                  = 1
        name                      = "dev-uxxxxxxxx"
        name_prefix               = "dev-usw2-spot01"
        protect_from_scale_in     = false
        service_linked_role_arn   = "arn:aws:iam::xxxxxxxxxxxxxxxx:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling"
        suspended_processes       = [
            "AZRebalance",
        ]
      ~ tags                      = [
          - {
              - "key"                 = "Name"
              - "propagate_at_launch" = "true"
              - "value"               = "dev-usw2-spot01-eks_asg"
            },
          - {
              - "key"                 = "kubernetes.io/cluster/xxxxxxxxxx"
              - "propagate_at_launch" = "true"
              - "value"               = "owned"
            },
        ]
        target_group_arns         = []
        termination_policies      = []
        vpc_zone_identifier       = [
            "subnet-xxxxxxxxx",
            "subnet-xxxxxxxxxx",
        ]
        wait_for_capacity_timeout = "10m"

        mixed_instances_policy {
            instances_distribution {
                on_demand_allocation_strategy            = "prioritized"
                on_demand_base_capacity                  = 0
                on_demand_percentage_above_base_capacity = 0
                spot_allocation_strategy                 = "lowest-price"
                spot_instance_pools                      = 2
            }

            launch_template {
                launch_template_specification {
                    launch_template_id   = "lt-0d1xxxxx55dfb92cxxxxxxxxxx"
                    launch_template_name = "dev-uxxxxxxxxxxxxxx"
                    version              = "$Latest"
                }

                override {
                    instance_type = "m5dn.xlarge"
                }
            }
        }

      + tag {
          + key                 = "Name"
          + propagate_at_launch = true
          + value               = "dev-usw2-spot01-eks_asg"
        }
      + tag {
          + key                 = "kubernetes.io/cluster/xxxxxxxxxxxx"
          + propagate_at_launch = true
          + value               = "owned"
        }
    }

  # module.eks_cluster.aws_autoscaling_group.workers_launch_template[1] will be destroyed
  - resource "aws_autoscaling_group" "workers_launch_template" {
      - arn                       = "arn:aws:autoscaling:us-west-2:xxxxxxxxxxxxxxxx:autoScalingGroup:xxxxxxxxxxxxxxxxx:autoScalingGroupName/dev-uxxxxxxxxxxx" -> null
      - availability_zones        = [
          - "us-west-2a",
          - "us-west-2b",
        ] -> null
      - default_cooldown          = 300 -> null
      - desired_capacity          = 3 -> null
      - enabled_metrics           = [] -> null
      - force_delete              = false -> null
      - health_check_grace_period = 300 -> null
      - health_check_type         = "EC2" -> null
      - id                        = "dev-usw2-mxxxxxxxxxx" -> null
      - load_balancers            = [] -> null
      - max_instance_lifetime     = 0 -> null
      - max_size                  = 3 -> null
      - metrics_granularity       = "1Minute" -> null
      - min_size                  = 1 -> null
      - name                      = "dev-usw2-mxxxxxxxxxxxxxxx" -> null
      - name_prefix               = "dev-usw2-main01" -> null
      - protect_from_scale_in     = false -> null
      - service_linked_role_arn   = "arn:aws:iam::xxxxxxxxxxxxxxxx:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling" -> null
      - suspended_processes       = [] -> null
      - tags                      = [
          - {
              - "key"                 = "Name"
              - "propagate_at_launch" = "true"
              - "value"               = "dev-usw2-main01-eks_asg"
            },
          - {
              - "key"                 = "kubernetes.io/cluster/xxxxxxxxxx"
              - "propagate_at_launch" = "true"
              - "value"               = "owned"
            },
        ] -> null
      - target_group_arns         = [] -> null
      - termination_policies      = [] -> null
      - vpc_zone_identifier       = [
          - "subnet-xxxxxxxxxxxxxx",
          - "subnet-zxxxxxxxxxxxxxxxxx",
        ] -> null
      - wait_for_capacity_timeout = "10m" -> null

      - launch_template {
          - id      = "lt-xxxxxxx" -> null
          - name    = "dev-usw2-mxxxxxxxxxxxx" -> null
          - version = "$Latest" -> null
        }
    }

  # module.eks_cluster.aws_iam_instance_profile.workers_launch_template[1] will be destroyed
  - resource "aws_iam_instance_profile" "workers_launch_template" {
      - arn         = "arn:aws:iam::xxxxxxxxxxx:instance-profile/dev-uxxxxxxxxxx" -> null
      - create_date = "2020-07-16T21:34:08Z" -> null
      - id          = "dev-uxxxxxxxxxxx" -> null
      - name        = "dev-uxxxxxxxxxxxxxxxxxxxx" -> null
      - name_prefix = "dev-usw2" -> null
      - path        = "/" -> null
      - role        = "dev-uxxxxxxxxxxxxxxxxxxxxx" -> null
      - unique_id   = "AIPXXXXXXXXXXXXXX" -> null
    }

  # module.eks_cluster.aws_iam_policy.cluster_elb_sl_role_creation[0] will be created
  + resource "aws_iam_policy" "cluster_elb_sl_role_creation" {
      + arn         = (known after apply)
      + description = "Permissions for EKS to create AWSServiceRoleForElasticLoadBalancing service-linked role"
      + id          = (known after apply)
      + name        = (known after apply)
      + name_prefix = "dev-uxxxxxxxxx-elb-sl-role-creation"
      + path        = "/"
      + policy      = jsonencode(
            {
              + Statement = [
                  + {
                      + Action   = [
                          + "ec2:DescribeInternetGateways",
                          + "ec2:DescribeAccountAttributes",
                        ]
                      + Effect   = "Allow"
                      + Resource = "*"
                      + Sid      = ""
                    },
                ]
              + Version   = "2012-10-17"
            }
        )
    }

  # module.eks_cluster.aws_iam_role_policy_attachment.cluster_AmazonEKSVPCResourceControllerPolicy[0] will be created
  + resource "aws_iam_role_policy_attachment" "cluster_AmazonEKSVPCResourceControllerPolicy" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::xxxxxxx"
      + role       = "dev-uxxxxxxxxxxxxxxxxxxxx"
    }

  # module.eks_cluster.aws_iam_role_policy_attachment.cluster_elb_sl_role_creation[0] will be created
  + resource "aws_iam_role_policy_attachment" "cluster_elb_sl_role_creation" {
      + id         = (known after apply)
      + policy_arn = (known after apply)
      + role       = "dev-uxxxxxxxxxxxxxxxx"
    }

  # module.eks_cluster.aws_launch_template.workers_launch_template[0] will be updated in-place
  ~ resource "aws_launch_template" "workers_launch_template" {
        arn                     = "arn:aws:ec2:us-west-2:xxxxxxxxxxx:launch-template/lt-xxxxxxxxxxxxxxxxxx"
        default_version         = 1
        disable_api_termination = false
        ebs_optimized           = "true"
        id                      = "lt-xxxxxxxxxxxxx"
      ~ image_id                = "ami-0e7c64dcd14089a77" -> "ami-0865553b27df49930"
        instance_type           = "m4.large"
        key_name                = "KeyName"
      ~ latest_version          = 3 -> (known after apply)
        name                    = "dev-uxxxxxxxx-sxxxxxxxxxxxxxx"
        name_prefix             = "dev-uxxxxxxx-spot01"
        security_group_names    = []
        tags                    = {}
        user_data               = "XXXXXmluL2Jhc2ggLxxxxxxxxxxxxxxxx"
        vpc_security_group_ids  = []

      + block_device_mappings {
          + device_name = "/dev/xvda"

          + ebs {
              + delete_on_termination = "true"
              + encrypted             = "false"
              + iops                  = 0
              + volume_size           = 30
              + volume_type           = "gp2"
            }
        }

        credit_specification {
            cpu_credits = "standard"
        }

        iam_instance_profile {
            name = "dev-usxxxxxxxxxxxxx"
        }

      + metadata_options {
          + http_endpoint = "enabled"
          + http_tokens   = "optional"
        }

        monitoring {
            enabled = true
        }

        network_interfaces {
            associate_public_ip_address = "true"
            delete_on_termination       = "true"
            device_index                = 0
            ipv4_address_count          = 0
            ipv4_addresses              = []
            ipv6_address_count          = 0
            ipv6_addresses              = []
            security_groups             = [
                "sg-01b4be739xxxxxxxxxx",
                "sg-05aexxxxx",
                "sg-xxxxef2a59xxxxxxxxxx",
            ]
        }

        tag_specifications {
            resource_type = "volume"
            tags          = {
                "Name" = "dev-usxx"
            }
        }
        tag_specifications {
            resource_type = "instance"
            tags          = {
                "Name" = "dev-usxxxx"
            }
        }
    }

  # module.eks_cluster.aws_launch_template.workers_launch_template[1] will be destroyed
  - resource "aws_launch_template" "workers_launch_template" {
      - arn                     = "arn:aws:ec2:us-west-2:xxxxxxxxxxxxx:launch-template/lt-xxxxxxxxxxxxxxxxx" -> null
      - default_version         = 1 -> null
      - disable_api_termination = false -> null
      - ebs_optimized           = "true" -> null
      - id                      = "lt-0dexxxxxxxx" -> null
      - image_id                = "ami-010e52511bbeb82e7" -> null
      - instance_type           = "m5dn.xlarge" -> null
      - key_name                = "oldKeyName" -> null
      - latest_version          = 1 -> null
      - name                    = "dev-usxxxxxxxxxxxxxx" -> null
      - name_prefix             = "dev-usssssssssaa-main01" -> null
      - security_group_names    = [] -> null
      - tags                    = {} -> null
      - user_data               = "IyEvYmluL2Jhc2ggLXhlCgojIEFsbG93IHVzZXIgxxxxxxxxxx" -> null
      - vpc_security_group_ids  = [] -> null

      - block_device_mappings {
          - device_name = "/dev/xvda" -> null

          - ebs {
              - delete_on_termination = "true" -> null
              - encrypted             = "false" -> null
              - iops                  = 0 -> null
              - volume_size           = 30 -> null
              - volume_type           = "gp2" -> null
            }
        }

      - credit_specification {
          - cpu_credits = "standard" -> null
        }

      - iam_instance_profile {
          - name = "dev-usxxxxxxxxxx" -> null
        }

      - monitoring {
          - enabled = true -> null
        }

      - network_interfaces {
          - associate_public_ip_address = "true" -> null
          - delete_on_termination       = "true" -> null
          - device_index                = 0 -> null
          - ipv4_address_count          = 0 -> null
          - ipv4_addresses              = [] -> null
          - ipv6_address_count          = 0 -> null
          - ipv6_addresses              = [] -> null
          - security_groups             = [
              - "sg-01b4be73xxxxx",
              - "sg-xx0ab294axxx",
              - "sg-09xxxxxxxxxxxxx",
            ] -> null
        }

      - tag_specifications {
          - resource_type = "volume" -> null
          - tags          = {
              - "Name" = "dev-ussssssssss-main01-eks_asg"
            } -> null
        }
      - tag_specifications {
          - resource_type = "instance" -> null
          - tags          = {
              - "Name" = "dev-usasassssa-main01-eks_asg"
            } -> null
        }
    }

  # module.eks_cluster.kubernetes_config_map.aws_auth[0] will be updated in-place
  ~ resource "kubernetes_config_map" "aws_auth" {
        binary_data = {}
        data        = {
            "mapAccounts" = jsonencode([])
            "mapRoles"    = <<~EOT
                - "groups":
                  - "system:bootstrappers"
                  - "system:nodes"
                  "rolearn": "arn:aws:iam::xxxxxxxxx:role/dev-uxxxxxxxxx"
                  "username": "system:node:{{EC2PrivateDNSName}}"
                - "groups":
                  - "system:masters"
                  "rolearn": "arn:aws:iam::xxxxxxxxxxx:role/role_name"
                  "username": "dev-cxxxxxxxxxxxx"
                - "groups":
                  - "system:masters"
                  "rolearn": "arn:aws:iam::xxxxxxxxxxxx:role/role_name"
                  "username": "dev-cxxxxxxxxxxxxx"
            EOT
            "mapUsers"    = jsonencode([])
        }
        id          = "kube-system/aws-auth"

      ~ metadata {
            annotations      = {}
            generation       = 0
          ~ labels           = {
              + "app.kubernetes.io/managed-by" = "Terraform"
              + "terraform.io/module"          = "terraform-aws-modules.eks.aws"
            }
            name             = "aws-auth"
            namespace        = "kube-system"
            resource_version = "5271277"
            self_link        = "/api/v1/namespaces/kube-system/configmaps/aws-auth"
            uid              = "819b7bxxxxxxxxx-xxxxxxx"
        }
    }

  # module.eks_cluster.local_file.kubeconfig[0] will be created
  + resource "local_file" "kubeconfig" {
      + content              = <<~EOT
            apiVersion: v1
            preferences: {}
            kind: Config

            clusters:
            - cluster:
                server: https://xxxxxxxxxxxxx.xxxxx7.xxxxxxxxxxxxxxx.eks.amazonaws.com
                certificate-authority-data: LSxxxxxXXXXXXXXXXXXXXXXXXXXX
              name: xxxxxxxxxxxxxxx

            contexts:
            - context:
                cluster: xxxxxxxxx
                user: xxxxxxxxx
              name: xxxxxxxxxxx

            current-context: xxxxxxxxxx

            users:
            - name: xxxxxxxxxxxxxxxxxxx
              user:
                exec:
                  apiVersion: client.authentication.k8s.io/v1alpha1
                  command: aws-iam-authenticator
                  args:
                    - "token"
                    - "-i"
                    - "xxxxxxxxxxxxxxx"
                  env:
                    - name: AWS_PROFILE
                      value: zxXZXXXXXXXXXXXX

        EOT
      + directory_permission = "0755"
      + file_permission      = "0644"
      + filename             = "./kubeconfig_xxxxxxxxxxx"
      + id                   = (known after apply)
    }

  # module.eks_cluster.random_pet.workers_launch_template[0] must be replaced
+/- resource "random_pet" "workers_launch_template" {
      ~ id        = "closing-shad" -> (known after apply)
      ~ keepers   = {
          - "lt_name" = "dev-uxxxxxxxxxx"
        } -> (known after apply) # forces replacement
        length    = 2
        separator = "-"
    }

  # module.eks_cluster.random_pet.workers_launch_template[1] will be destroyed
  - resource "random_pet" "workers_launch_template" {
      - id        = "united-cicada" -> null
      - keepers   = {
          - "lt_name" = "dev-uxxxxxxxxxxxxxxx"
        } -> null
      - length    = 2 -> null
      - separator = "-" -> null
    }

.........

Plan: 5 to add, 4 to change, 6 to destroy.

Thanks!

barryib commented 3 years ago

Or remove something from that list?

Answer: Yes, I changed the key_name on worker_groups_launch_template and updated the ingress on one of the security groups in additional_security_group_ids.

How did you remove a worker group from var.worker_groups_launch_template? I can see that you had 2 worker groups.

I'm trying to understand the cycle error. For now, I can't tell from your plan why module.eks_cluster.random_pet.workers_launch_template[0] depends on module.eks_cluster.aws_iam_instance_profile.workers_launch_template[1] in your error https://github.com/terraform-aws-modules/terraform-aws-eks/issues/950#issuecomment-727234278. It doesn't make sense to me right now.

I'm trying to reproduce it.

stale[bot] commented 3 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

vbakayev commented 3 years ago

I have faced the same issue. With version 12.2 and later (tested with 14.0 too) I run into the cycle error when destroying the Kubernetes infrastructure. My current workaround is to pin the version to 12.1.0 to unblock resource removal.

sebas-w commented 3 years ago

I wanted to add that I'm receiving this error too, but I can get around it by simply running:

terraform state rm "module.eks.aws_security_group_rule.cluster_https_worker_ingress[0]"

There is no need to even remove the SG rule in AWS; it'll get removed when the SG is deleted.

endre-synnes commented 3 years ago

I wanted to add that I'm receiving this error too, but I can get around it by simply running:

terraform state rm "module.eks.aws_security_group_rule.cluster_https_worker_ingress[0]"

There is no need to even remove the SG rule in AWS; it'll get removed when the SG is deleted.

Thank you! πŸ˜„ I have been looking for a solution to this problem all day πŸŽ‰

daroga0002 commented 3 years ago

Can anybody test whether this issue still exists in the current version of the module?

@vbakayev @ingluife

ingluife commented 3 years ago

@daroga0002 I'm working with these combinations:

1.)

     - Module: 13.0.0
     - Terraform:  v0.12.29
     - Provider AWS: v3.37.0

2.)

     - Module: 13.1.0
     - Terraform:  v0.12.31
     - Provider AWS: v3.55.0

and it's working perfectly.

stale[bot] commented 2 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale[bot] commented 2 years ago

This issue has been automatically closed because it has not had recent activity since being marked as stale.

mh-abanginwar commented 2 years ago

Same issue

β”‚ Error: Cycle: module.banyan.module.aks.output.aks_client_key (expand), module.banyan.module.aks.output.aks_client_certificate (expand), module.banyan.module.aks.output.aks_cluster_ca_certificate (expand), module.banyan.module.aks.var.resource_group_name (expand), module.banyan.module.flux.kubectl_manifest.sync["source.toolkit.fluxcd.io/v1beta1/gitrepository/flux-system/flux-system"] (destroy), module.banyan.module.aks.azurerm_kubernetes_cluster.azure-aks-cluster, module.banyan.module.aks.output.aks_host (expand), module.banyan.provider["registry.terraform.io/gavinbunney/kubectl"], module.banyan.module.flux.kubectl_manifest.sync["kustomize.toolkit.fluxcd.io/v1beta1/kustomization/flux-system/flux-system"] (destroy), module.banyan.module.aks.azurerm_kubernetes_cluster.azure-aks-cluster (destroy), module.banyan.module.resource-group.azurerm_resource_group.resource-group, module.banyan.module.resource-group.output.resource_group_name (expand)

Any update?

saidev680 commented 2 years ago

I have the same issue. Any new update on this?

bryantbiggs commented 2 years ago

No new updates will come here; the module has undergone significant changes since this issue was filed and closed.

github-actions[bot] commented 1 year ago

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.