hashicorp / terraform-provider-aws

The AWS Provider enables Terraform to manage AWS resources.
https://registry.terraform.io/providers/hashicorp/aws
Mozilla Public License 2.0

[Bug]: Provider bug when migrating DocumentDB resources from Terraform 1.3 to Terraform 1.8 #38986

Open AnastasiaKnt opened 3 weeks ago

AnastasiaKnt commented 3 weeks ago

Terraform Core Version

~> 1.8

AWS Provider Version

~> 5.0

Affected Resource(s)

aws_docdb_subnet_group and aws_docdb_cluster_parameter_group

Expected Behavior

Upgrading a service that uses our DocumentDB module from Terraform ~> 1.3 to ~> 1.8, with the AWS provider kept at ~> 5.0, should be transparent.

Actual Behavior

The pipeline of the service using the DocumentDB resources fails with the error shown below.

Relevant Error/Panic Output Snippet

Error: Provider produced inconsistent final plan
│ 
│ When expanding the plan for aws_docdb_cluster_parameter_group.this to
│ include new values learned so far during apply, provider
│ "registry.terraform.io/hashicorp/aws" produced an invalid new value for
│ .tags_all: new element "module_version" has appeared.
│ 
│ This is a bug in the provider, which should be reported in the provider's
│ own issue tracker.
╵

Note that I get multiple errors, one for each tag, not just for module_version.

The way we declare our tags:

In locals.tf:

locals {
  shared_tags = {
    application     = var.application
    app_environment = var.environment
    owner           = # value redacted
    module_name     = # value redacted
    module_version  = trimspace(file("${path.module}/.semver"))
    updated_at      = formatdate("YYYY-MM-DD hh:mm:ss ZZZ", timestamp())
    ci_job_url      = var.ci_job_url
    managed_by      = "terraform"
    vcs_url         = var.ci_project_url
  }
}

In providers.tf:

provider "aws" {
  default_tags {
    tags = local.default_tags
  }
}
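
A detail that may matter here: updated_at is computed from timestamp(), which is re-evaluated at apply time and therefore differs from the value the provider saw during plan. Below is a minimal sketch of a plan-stable variant, assuming that instability is part of the trigger; plantimestamp() requires Terraform >= 1.5, and this is only a hypothesis, not a confirmed fix:

locals {
  shared_tags = {
    application     = var.application
    app_environment = var.environment
    # plantimestamp() (Terraform >= 1.5) is captured once when the plan is
    # created and stays identical through apply, unlike timestamp(), which
    # is re-evaluated during apply.
    updated_at      = formatdate("YYYY-MM-DD hh:mm:ss ZZZ", plantimestamp())
    managed_by      = "terraform"
  }
}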

Terraform Configuration Files

resource "random_id" "snapshot_identifier_suffix" {
  keepers = {
    application     = var.application
    app_environment = var.environment
  }

  byte_length = 6
}

resource "random_id" "db_identifier" {
  keepers = {
    application     = var.application
    app_environment = var.environment
  }

  byte_length = 8
}

resource "aws_docdb_cluster" "this" {
  cluster_identifier              = local.cluster_identifier_prefix
  engine                          = var.engine
  engine_version                  = var.engine_version
  master_username                 = var.master_username
  master_password                 = var.master_password
  backup_retention_period         = var.backup_retention_period
  preferred_backup_window         = var.backup_window
  final_snapshot_identifier       = local.final_snapshot_identifier
  availability_zones              = var.availability_zones
  db_subnet_group_name            = aws_docdb_subnet_group.this.id
  vpc_security_group_ids          = [aws_security_group.this.id]
  db_cluster_parameter_group_name = aws_docdb_cluster_parameter_group.this.id
  enabled_cloudwatch_logs_exports = ["audit", "profiler"]
  storage_encrypted               = true
  deletion_protection             = local.is_a_protected_environment ? true : var.deletion_protection
  skip_final_snapshot             = local.skip_final_snapshot
  kms_key_id                      = # value redacted
  tags                            = local.default_tags
}

resource "aws_docdb_cluster_instance" "these" {
  count                       = var.cluster_instances
  identifier                  = # value redacted
  instance_class              = var.instance_size
  cluster_identifier          = aws_docdb_cluster.this.id
  enable_performance_insights = var.enable_performance_insights
  ca_cert_identifier          = var.ca_cert_identifier
  tags                        = local.default_tags
}

data "aws_docdb_engine_version" "this" {
  version = var.engine_version
}

resource "aws_docdb_cluster_parameter_group" "this" {
  family      = data.aws_docdb_engine_version.this.parameter_group_family
  name        = local.cluster_identifier_prefix
  description = "${var.application} ${var.environment} parameter group"

  parameter {
    name  = "audit_logs"
    value = local.audit_logging_enabled
  }

  parameter {
    name  = "profiler"
    value = local.profiler_enabled
  }

  tags = local.default_tags
}

Steps to Reproduce

Change the required Terraform version in versions.tf from 1.3 to 1.8 and re-deploy a service that uses the DocumentDB module; a sketch of the change follows.
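
For illustration, a minimal versions.tf of the shape described (a sketch; the exact file in our module may differ):

terraform {
  # Changing this constraint from "~> 1.3" to "~> 1.8" is the only
  # change needed to hit the error.
  required_version = "~> 1.8"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}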

Debug Output

No response

Panic Output

No response

Important Factoids

I tried downgrading the provider version as far as 5.10.0, but the same error appears. When rolling back to Terraform 1.3 the error disappears. (A sketch of such a pin is below.)
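
A sketch of a pin for the downgrade test:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      # pinned exactly for the downgrade test; the error still occurs here
      version = "5.10.0"
    }
  }
}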

References

No response

Would you like to implement a fix?

None
