terraform-aws-modules / terraform-aws-eks

Terraform module to create Amazon Elastic Kubernetes Service (EKS) resources 🇺🇦
https://registry.terraform.io/modules/terraform-aws-modules/eks/aws
Apache License 2.0

Module attempting to change desired_size of managed node_group #681

Closed: davidalger closed this issue 4 years ago

davidalger commented 4 years ago

I have issues

I'm submitting a...

What is the current behavior?

Deployed an EKS cluster using an AWS Managed Node Group, support for which was added in v8.0.0 of this module. The cluster-autoscaler is deployed into this managed node group and has scaled it up from 1 to 2 nodes. When running a plan, Terraform reports the following change to be made, which would revert the scale-up:

  # module.eks.module.node_groups.aws_eks_node_group.workers["0"] will be updated in-place
  ~ resource "aws_eks_node_group" "workers" {
        ami_type        = "AL2_x86_64"
        arn             = "<redacted>"
        cluster_name    = "<redacted>"
        disk_size       = 20
        id              = "<redacted>:<redacted>-0-evolving-mongoose"
        instance_types  = [
            "t3.medium",
        ]
        labels          = {}
        node_group_name = "<redacted>-0-evolving-mongoose"
        node_role_arn   = "arn:aws:iam::<redacted>:role/<redacted>20200110165639082800000001"
        release_version = "1.14.7-20190927"
        resources       = [
            {
                autoscaling_groups              = [
                    {
                        name = "<redacted>"
                    },
                ]
                remote_access_security_group_id = ""
            },
        ]
        status          = "ACTIVE"
        subnet_ids      = [
            "subnet-<redacted>",
        ]
        tags            = {
            "tf-workspace" = "<redacted>"
        }
        version         = "1.14"

      ~ scaling_config {
          ~ desired_size = 2 -> 1
            max_size     = 5
            min_size     = 1
        }
    }

If this is a bug, how to reproduce? Please include a code sample if relevant.

What's the expected behavior?

There should be a lifecycle block on the aws_eks_node_group resource that ignores changes to desired_size, similar to the one currently applied to worker_groups:

  lifecycle {
    create_before_destroy = true
    ignore_changes        = [desired_capacity]
  }

The aws_eks_node_group resource is missing an equivalent ignore_changes rule, so Terraform keeps trying to reset the node count back to the configured value.
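A minimal sketch of what such a fix could look like (variable names here are illustrative, not the module's actual inputs; note that the aws_eks_node_group schema calls the attribute desired_size, whereas autoscaling groups use desired_capacity):

```hcl
resource "aws_eks_node_group" "workers" {
  # ... cluster_name, node_role_arn, subnet_ids, etc. as in the module ...

  scaling_config {
    desired_size = var.desired_capacity  # hypothetical variable name
    max_size     = var.max_capacity
    min_size     = var.min_capacity
  }

  lifecycle {
    create_before_destroy = true
    # Let the cluster-autoscaler manage the live node count without
    # Terraform reverting it on the next apply.
    ignore_changes = [scaling_config[0].desired_size]
  }
}
```

With this in place, desired_size is only honored at creation time; subsequent plans no longer show the `desired_size = 2 -> 1` diff from the output above.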

Are you able to fix this problem and submit a PR? Link here if you have already.

https://github.com/terraform-aws-modules/terraform-aws-eks/pull/691

Environment details

Any other relevant info

github-actions[bot] commented 1 year ago

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.