terraform-aws-modules / terraform-aws-eks

Terraform module to create Amazon Elastic Kubernetes Service (EKS) resources πŸ‡ΊπŸ‡¦
https://registry.terraform.io/modules/terraform-aws-modules/eks/aws
Apache License 2.0

EKS Node Group should ignore_changes on status #3150

Closed · evankanderson closed this 3 weeks ago

evankanderson commented 3 weeks ago

Description

It seems that Terraform/OpenTofu reports changes to the status of an EKS node group as drift, even though this is state managed on the AWS side that Terraform can't directly affect:

Note: Objects have changed outside of OpenTofu

OpenTofu detected the following changes made outside of OpenTofu since the
last "tofu apply" which may have affected this plan:

# module.sandbox.module.eks.module.eks_managed_node_group["spot"].aws_eks_node_group.this[0] has changed
~ resource "aws_eks_node_group" "this" {
     id                     = "sandbox-eks:spot-k8s-nodes-20240125181518727500000013"
   ~ status                 = "DEGRADED" -> "ACTIVE"
     tags                   = {
         "Name" = "spot-k8s-nodes"
     }
     # (15 unchanged attributes hidden)

     # (4 unchanged blocks hidden)
 }

(This is part of a much larger configuration, but we sometimes see this diff when one Terraform run happens while spot instances are being replaced and the next plan runs once the node group is back to full strength.)
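For reference, the change the issue title asks for would look roughly like the sketch below, inside the module's own aws_eks_node_group resource. This is an untested sketch: lifecycle meta-arguments must be literal, so this cannot be passed in through module inputs, and since status is a computed attribute decided by the provider alone, Terraform may warn that ignoring it is redundant.

resource "aws_eks_node_group" "this" {
  # ... existing arguments unchanged ...

  lifecycle {
    # Hypothetical: ask Terraform to ignore the AWS-managed status
    # attribute so DEGRADED -> ACTIVE flips don't surface as drift.
    ignore_changes = [status]
  }
}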

Versions

Reproduction Code [Required]

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "20.24.0"

  cluster_name    = "${var.cluster_name}-eks"
  cluster_version = var.eks_version

  vpc_id                         = module.vpc.vpc_id
  subnet_ids                     = module.vpc.private_subnets
  cluster_endpoint_public_access = true

  # Allow the default policy so users other than the cluster creator can
  # describe the cluster's KMS key
  kms_key_enable_default_policy = true

  eks_managed_node_groups = {
    spot = {
      name   = var.k8s_node_params.name
      create = var.k8s_node_params.use_spot

      instance_types = var.k8s_node_params.instance_types
      capacity_type  = "SPOT"

      min_size     = var.k8s_node_params.min_size
      max_size     = var.k8s_node_params.max_size
      desired_size = var.k8s_node_params.desired_size
    }
  }
}

Steps to reproduce the behavior:

We run terraform plan on pull requests and terraform apply on merge, from GitHub Actions.

Expected behavior

No reported diffs even when EKS node groups using spot instances are being cycled.

Actual behavior

We sometimes see the above diff, with no real way to avoid it.

Terminal Output Screenshot(s)

See above

Additional context

I suspect this mostly happens with spot instances, since on-demand / reserved instances won't disappear out from under the node group in the same way.
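A possible mitigation on our side (an untested sketch, not something this module exposes) would be a refresh-only apply before planning, so that AWS-side status flips are absorbed into state first:

# Untested sketch: fold AWS-side drift (such as the status flip) into
# state without proposing any infrastructure changes, so the next plan
# starts from a clean baseline.
tofu apply -refresh-only -auto-approve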

bryantbiggs commented 3 weeks ago

This is normal Terraform behavior; nothing that we can control here.