terraform-aws-modules / terraform-aws-alb

Terraform module to create AWS Application/Network Load Balancer (ALB/NLB) resources πŸ‡ΊπŸ‡¦
https://registry.terraform.io/modules/terraform-aws-modules/alb/aws
Apache License 2.0

Target Group Forces Replacement on Instance ID Change, Provider Inconsistent Plan File - target group should remain while target group attachment should be replaced #303

Closed: larahroth closed this issue 1 year ago

larahroth commented 1 year ago

Description

Please provide a clear and concise description of the issue you are encountering, along with a reproduction of your configuration (see the examples/* directory for references that you can copy, paste, and tailor to match your configuration if you are unable to share your exact setup). The reproduction MUST be executable by running terraform init && terraform apply without any further changes.

If your request is for a new feature, please use the Feature request template.

⚠️ Note

Before you submit an issue, please perform the following first:

  1. Remove the local .terraform directory (ONLY if state is stored remotely, which is hopefully the best practice you are following!): rm -rf .terraform/
  2. Re-initialize the project root to pull down modules: terraform init
  3. Re-attempt your terraform plan or apply and check if the issue still persists

Versions

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4"
    }
  }
}

provider "aws" {
  region = var.region

  default_tags {
    tags = local.common_tags
  }
}

provider "aws" {
  alias  = "us-east-1"
  region = "us-east-1"

  default_tags {
    tags = local.common_tags
  }
}

provider "aws" {
  alias  = "us-west-2"
  region = "us-west-2"

  default_tags {
    tags = local.common_tags
  }
}
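
local.common_tags is not shown in the issue; for illustration, a hypothetical definition might look like the sketch below (all tag values are placeholders). Note that the comments later in this thread point at computed values inside default_tags as the likely culprit, so a fully static map is the safer shape.

locals {
  common_tags = {
    AppCode     = "myapp"     # placeholder, assumed
    Environment = "dev"       # placeholder, assumed
    ManagedBy   = "terraform" # placeholder, assumed
  }
}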

Reproduction Code [Required]

Module LB

module "lb" {
  source  = "terraform-aws-modules/alb/aws" # https://github.com/terraform-aws-modules/terraform-aws-alb
  version = "8.7.0"

  create_security_group = false

  name = local.name

  load_balancer_type = var.type
  internal           = var.internal

  target_groups = var.target_groups

  https_listeners      = var.https_listeners
  https_listener_rules = var.https_listener_rules

  http_tcp_listeners      = var.http_tcp_listeners
  http_tcp_listener_rules = var.http_tcp_listener_rules

  enable_deletion_protection                  = var.enable_deletion_protection
  enable_http2                                = var.enable_http2
  enable_cross_zone_load_balancing            = var.enable_cross_zone_load_balancing
  enable_tls_version_and_cipher_suite_headers = var.enable_tls_version_and_cipher_suite_headers
  enable_xff_client_port                      = var.enable_xff_client_port

  vpc_id          = var.vpc_id
  security_groups = var.security_group_ids
  subnets         = var.subnet_ids

  drop_invalid_header_fields = var.drop_invalid_header_fields
  preserve_host_header       = var.preserve_host_header

  extra_ssl_certs             = var.extra_ssl_certs
  listener_ssl_policy_default = var.listener_ssl_policy_default

  access_logs = var.log_enabled ? {
    bucket  = var.log_bucket_name
    prefix  = "${var.app_code}/${var.index + 1}"
    enabled = var.log_enabled
  } : {}

  idle_timeout = var.idle_timeout
}

Project Calling LB Module

/*
  External Application Load Balancer
  Task List:
    - s3 lb log bucket in baseline repo
    - add dns to both public and private hosted zones
*/

module "lb" {
  source = "<module library>?ref=feat/load-balancer"

  app_code     = var.app_code
  app_function = null
  environment  = var.env

  internal = false
  type     = "application"

  security_group_ids = [aws_security_group.sg_lb.id]
  vpc_id             = var.vpc_id
  subnet_ids         = var.public_subnet_ids

  enable_deletion_protection = false
  enable_http2               = false # for now

  log_bucket_name            = null  # TODO: Make s3 lb log bucket in baseline repo
  log_enabled                = false # for now
  drop_invalid_header_fields = true

  target_groups = [
    {
      backend_port     = 443
      backend_protocol = "HTTPS"
      name             = "${var.app_code}-443-https"
      target_type      = "instance"
      ip_address_type  = "ipv4"

      targets = {
        ec2-instance = {
          target_id = module.ec2_linux[0].instance.id
          port      = 443
        }
      }
    },
    {
      backend_port     = 80
      backend_protocol = "HTTP"
      name             = "${var.app_code}-80-http"
      target_type      = "instance"

      targets = {
        (var.app_code) = {
          target_id = module.ec2_linux[0].instance.id
          port      = 80
        }
      }
    }
  ]

  https_listeners = [
    {
      port            = 443
      protocol        = "HTTPS"
      action_type     = "forward"
      certificate_arn = module.acm.acm_certificate_arn
      ssl_policy      = "ELBSecurityPolicy-TLS-1-2-2017-01"
    }
  ]

  http_tcp_listeners = [
    {
      port               = 80
      protocol           = "HTTP"
      action_type        = "forward"
      target_group_index = 1
    }
  ]
}

resource "aws_wafv2_web_acl_association" "this" {
  resource_arn = module.lb.lb.lb_arn
  web_acl_arn  = aws_wafv2_web_acl.public.arn
}

EC2 Instance

#tfsec:ignore:aws-ec2-enable-at-rest-encryption
module "ec2_linux" {
  source   = "<module library>?ref=v0.0.0-ec2-linux"
  for_each = { for key, value in var.instance_info : key => value }

  ami_account_id    = each.value.ami_account_id
  ami_id            = each.value.ami_id
  availability_zone = each.value.availability_zone
  extra_drives      = each.value.extra_drives
  index             = each.key
  instance_type     = each.value.instance_type
  subnet_id         = each.value.subnet_id
  volume_size       = each.value.volume_size

  app_code     = var.app_code
  app_function = null
  environment  = var.env
  kms_key_arn  = aws_kms_key.encryption.arn

  dns = {}

  instance_profile_name = aws_iam_instance_profile.ec2_linux.name
  security_group_ids    = [aws_security_group.sg_ec2.id]
  sns_topic_arns        = [aws_sns_topic.alerts.arn]

  user_data = templatefile("${path.module}/scripts/ec2-linux-user-data.sh.tftpl", {})
}

Steps to reproduce the behavior:

Update the AMI passed in for the EC2 instance so that the instance is rebuilt.
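
For illustration, a hypothetical var.instance_info entry matching the module inputs above; changing ami_id is what triggers the instance rebuild (all values are placeholders):

instance_info = {
  "0" = {
    ami_account_id    = "111111111111"          # placeholder
    ami_id            = "ami-0abc1234def567890" # changing this forces instance replacement
    availability_zone = "us-east-1a"
    extra_drives      = []
    instance_type     = "t3.micro"
    subnet_id         = "subnet-0123456789abcdef0"
    volume_size       = 50
  }
}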

Expected behavior

This should replace only the aws_lb_target_group_attachment resource. Replacing the EC2 instance should not require the target group to be rebuilt, just the target group attachment.
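
For context, in the raw provider resources the instance ID only appears on aws_lb_target_group_attachment, never on aws_lb_target_group itself, which is why only the attachment should need replacing. A minimal sketch with placeholder values:

resource "aws_lb_target_group" "example" {
  name        = "example-443-https"
  port        = 443
  protocol    = "HTTPS"
  target_type = "instance"
  vpc_id      = "vpc-0123456789abcdef0" # placeholder
}

resource "aws_lb_target_group_attachment" "example" {
  target_group_arn = aws_lb_target_group.example.arn # stable across instance rebuilds
  target_id        = "i-0123456789abcdef0"           # the only reference to the instance
  port             = 443
}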

Actual behavior

This wants to replace the aws_lb_target_group itself as well as the aws_lb_target_group_attachment. Even if we were OK with this, the Name tag is throwing an error: Provider produced inconsistent final plan.

Terminal Output Screenshot(s)

aws_lb_target_group_attachment

# module.ec2_linux_east.module.lb.module.lb.aws_lb_target_group_attachment.this["0.ec2-instance"] must be replaced
-/+ resource "aws_lb_target_group_attachment" "this" {
      ~ id               = "arn:aws:elasticloadbalancing:us-east-1:<account-id>:targetgroup/<app code>-443-https/<id>" -> (known after apply)
      ~ target_group_arn = "arn:aws:elasticloadbalancing:us-east-1:<account-id>:targetgroup/<app code>-443-https/<id>" -> (known after apply) # forces replacement
      ~ target_id        = "<id>" -> (known after apply) # forces replacement
        # (1 unchanged attribute hidden)
    }

aws_lb_target_group

 # module.ec2_linux_east.module.lb.module.lb.aws_lb_target_group.main[1] must be replaced
+/- resource "aws_lb_target_group" "main" {
      ~ arn                                = "arn:aws:elasticloadbalancing:us-east-1:<account-id>:targetgroup/<app code>-80-http/<id>" -> (known after apply)
      ~ arn_suffix                         = "targetgroup/<app code>-80-http/<id>" -> (known after apply)
      ~ connection_termination             = false -> (known after apply)
      ~ deregistration_delay               = "300" -> (known after apply)
      ~ id                                 = "arn:aws:elasticloadbalancing:us-east-1:<account-id>:targetgroup/<app code>-80-http/<id>" -> (known after apply)
      ~ ip_address_type                    = "ipv4" -> (known after apply) # forces replacement
      ~ lambda_multi_value_headers_enabled = false -> (known after apply)
      ~ load_balancing_algorithm_type      = "round_robin" -> (known after apply)
      ~ load_balancing_cross_zone_enabled  = "use_load_balancer_configuration" -> (known after apply)
        name                               = "<app code>-80-http"
      ~ port                               = 80 -> (known after apply) # forces replacement
      + preserve_client_ip                 = (known after apply)
      ~ protocol                           = "HTTP" -> (known after apply) # forces replacement
      ~ protocol_version                   = "HTTP1" -> (known after apply)
      ~ proxy_protocol_v2                  = false -> (known after apply)
      ~ slow_start                         = 0 -> (known after apply)
      ~ tags                               = {
          - "Name" = "<app code>-80-http"
        } -> (known after apply)
      ~ tags_all                           = {
          - "Name"      = "<app code>-80-http" -> null
            # (2 unchanged elements hidden)
        }
      ~ target_type                        = "instance" -> (known after apply) # forces replacement
        # (1 unchanged attribute hidden)

      - health_check {
          - enabled             = true -> null
          - healthy_threshold   = 5 -> null
          - interval            = 30 -> null
          - matcher             = "200" -> null
          - path                = "/" -> null
          - port                = "traffic-port" -> null
          - protocol            = "HTTP" -> null
          - timeout             = 5 -> null
          - unhealthy_threshold = 2 -> null
        }

      - stickiness {
          - cookie_duration = 86400 -> null
          - enabled         = false -> null
          - type            = "lb_cookie" -> null
        }

      - target_failover {}
    }

Error on Apply

β•·
β”‚ Error: Provider produced inconsistent final plan
β”‚
β”‚ When expanding the plan for
β”‚ module.ec2_linux_east.module.lb.module.lb.aws_lb_target_group.main[1] to
β”‚ include new values learned so far during apply, provider
β”‚ "registry.terraform.io/hashicorp/aws" produced an invalid new value for
β”‚ .tags_all: new element "Name" has appeared.
β”‚
β”‚ This is a bug in the provider, which should be reported in the provider's
β”‚ own issue tracker.
β•΅
β•·
β”‚ Error: Provider produced inconsistent final plan
β”‚
β”‚ When expanding the plan for
β”‚ module.ec2_linux_east.module.lb.module.lb.aws_lb_target_group.main[0] to
β”‚ include new values learned so far during apply, provider
β”‚ "registry.terraform.io/hashicorp/aws" produced an invalid new value for
β”‚ .tags_all: new element "Name" has appeared.
β”‚
β”‚ This is a bug in the provider, which should be reported in the provider's
β”‚ own issue tracker.
β•΅

Additional context

We have tried explicitly setting the fields that are marked as forcing replacement to eliminate the target group substitution, but this did not work. We have also tried changing versions of Terraform and the AWS provider.
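
For illustration, the pinning attempt looked roughly like the sketch below: every attribute the plan flags as "forces replacement" is set explicitly on the HTTP target group entry (per the above, this still did not prevent the replacement).

target_groups = [
  {
    name             = "${var.app_code}-80-http"
    backend_port     = 80
    backend_protocol = "HTTP"
    target_type      = "instance"
    ip_address_type  = "ipv4"

    targets = {
      (var.app_code) = {
        target_id = module.ec2_linux[0].instance.id
        port      = 80
      }
    }
  }
]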

igoratencompass commented 1 year ago

Since the error indicates this is related to provider tags, have you tried removing default_tags from the providers?

Looking at the documentation about default tags for the AWS provider (https://www.hashicorp.com/blog/default-tags-in-the-terraform-aws-provider), it seems you cannot use a dynamic variable in the default_tags block the way you do.
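
Following that suggestion, a minimal sketch of a provider block with a static map in place of local.common_tags (tag values are placeholders):

provider "aws" {
  region = "us-east-1"

  default_tags {
    tags = {
      Environment = "dev"       # placeholder
      ManagedBy   = "terraform" # placeholder
    }
  }
}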

github-actions[bot] commented 1 year ago

This issue has been automatically marked as stale because it has been open for 30 days with no activity. Remove the stale label or comment, or this issue will be closed in 10 days.

github-actions[bot] commented 1 year ago

This issue was automatically closed because it remained stale for 10 days.

github-actions[bot] commented 1 year ago

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.