hashicorp / terraform-provider-aws

The AWS Provider enables Terraform to manage AWS resources.
https://registry.terraform.io/providers/hashicorp/aws
Mozilla Public License 2.0

[Bug]: Subsequent apply forces global cluster recreation when source cluster's database_name is specified #28187

Open rutujachaudhari124 opened 1 year ago

rutujachaudhari124 commented 1 year ago

Terraform Core Version

0.12.31

AWS Provider Version

3.70.0

Affected Resource(s)

aws_rds_global_cluster

Expected Behavior

Subsequent apply should show no changes and no force replacement.

Actual Behavior

The subsequent apply shows a diff for the database_name parameter: the value is inherited from the source cluster, while the applied configuration does not set this parameter, so Terraform plans to remove it and forces replacement.

-/+ resource "aws_rds_global_cluster" "this" {
      ~ arn                          = "************" -> (known after apply)
      - database_name                = "database04" -> null # forces replacement
        deletion_protection          = false
      ~ engine                       = "aurora" -> (known after apply)
      ~ engine_version               = "5.6.mysql_aurora.1.22.5" -> (known after apply)
        force_destroy                = true
        global_cluster_identifier    = "***************"
      ~ global_cluster_members       = [
          - {
              - db_cluster_arn = "******************"
              - is_writer      = true
            },
        ] -> (known after apply)
      ~ global_cluster_resource_id   = "**********" -> (known after apply)
      ~ id                           = "*************" -> (known after apply)
        source_db_cluster_identifier = "****************"
      ~ storage_encrypted            = true -> (known after apply)
    }

Plan: 1 to add, 0 to change, 1 to destroy.

Relevant Error/Panic Output Snippet

No error occurs during terraform apply, but a forced replacement is planned for the database_name parameter:
database_name                = "database04" -> null # forces replacement

Terraform Configuration Files


resource "aws_rds_cluster" "primary" {
    apply_immediately                   = true
    availability_zones                  = ["ap-south-1a", "ap-south-1b", "ap-south-1c"]
    backtrack_window                    = 0
    backup_retention_period             = 1
    cluster_identifier                  = "****************"
    copy_tags_to_snapshot               = true
    database_name                       = "database04"
    deletion_protection                 = false
    enable_http_endpoint                = false
    enabled_cloudwatch_logs_exports     = ["error", "slowquery"]
    engine_mode                         = "provisioned"
    engine_version                      = "5.6.mysql_aurora.1.22.5"
    engine                              = "aurora"
    final_snapshot_identifier           = "******************"
    iam_database_authentication_enabled = false
    master_password                     = "*******"
    master_username                     = "dbadmin"
    port                                = 5432
    preferred_backup_window             = "03:30-05:00"
    preferred_maintenance_window        = "sun:19:00-mon:00:00"
    skip_final_snapshot                 = true
    storage_encrypted                   = false

}

data "aws_rds_cluster" "clusterName" {
  cluster_identifier = "****************"

}

resource "aws_rds_global_cluster" "example" {
  source_db_cluster_identifier = data.aws_rds_cluster.clusterName.arn
    global_cluster_identifier = "******************"
    deletion_protection       = false
    force_destroy             = true
}

Steps to Reproduce

Run terraform apply, then run terraform plan (or apply) again.

Debug Output

No response

Panic Output

No response

Important Factoids

No response

References

A similar issue exists for the storage_encrypted parameter of this same resource: https://github.com/hashicorp/terraform-provider-aws/issues/15177

Would you like to implement a fix?

None

justinretzolk commented 1 year ago

Hey @rutujachaudhari124 👋 Thank you for taking the time to raise this! So that we have the information needed to look into this, can you supply a sample Terraform configuration as well?

Kristin0 commented 1 year ago

Have you tried specifying database_name explicitly?

aws_rds_global_cluster = {
  ***************** = {
    global_cluster_identifier    = "*************"
    deletion_protection          = false
    database_name                = "database04"
    force_destroy                = true
    source_db_cluster_identifier = "*************"
  }
}
rutujachaudhari124 commented 1 year ago

> Have you tried specifying database_name explicitly?

@Kristin0 We tested that case by setting database_name to "database04", but we got an error saying that database_name must not be specified because it is inherited from the source cluster. The error refers to the following parameter:

db_name

rutujachaudhari124 commented 1 year ago

@justinretzolk Updated the terraform configuration.

Kristin0 commented 1 year ago

@justinretzolk any updates?

vitali-miadzvedzki commented 1 year ago

Looking into the similar issue mentioned here (#15177), it seems the database_name parameter needs to have Computed: true in its schema. The regional cluster resource has it (https://github.com/hashicorp/terraform-provider-aws/blob/v4.46.0/internal/service/rds/cluster.go#L122), while the global cluster resource does not (https://github.com/hashicorp/terraform-provider-aws/blob/v4.46.0/internal/service/rds/global_cluster.go#L48).
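For illustration, a minimal sketch of what that schema change might look like, assuming the global cluster resource still uses the Terraform Plugin SDK v2 schema style seen in cluster.go; the helper function name here is hypothetical and only shows the attribute settings, not the provider's actual code:

package rds

import (
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// globalClusterDatabaseNameSchema (hypothetical helper) declares database_name
// the way aws_rds_cluster does: Optional so it can be set explicitly, Computed
// so a value inherited from the source cluster does not diff against an unset
// configuration value, and ForceNew because the attribute cannot be changed in
// place (which is why the plan shows a forced replacement today).
func globalClusterDatabaseNameSchema() *schema.Schema {
	return &schema.Schema{
		Type:     schema.TypeString,
		Optional: true,
		Computed: true,
		ForceNew: true,
	}
}

With Computed set, a plan where the configuration omits database_name would keep the value read from the API instead of proposing database_name = null and forcing replacement.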

Kristin0 commented 1 year ago

same issue with these versions:

Terraform Core Version
v1.3.3

AWS Provider Version
v4.46.0

ollieanwyll commented 1 year ago

Having this same issue. Has anyone had any luck?

CGandhi0 commented 1 year ago

We are having a similar issue with an Aurora global cluster for Postgres built from a snapshot. With is_promoted_global_cluster = true, the global cluster gets set up correctly with a database_name identical to the regional cluster's, but on a subsequent apply, instead of detecting no changes to the infrastructure, Terraform sees database_name changing from its current value to null, which forces replacement of the global cluster.

-/+ resource "aws_rds_global_cluster" "global_cluster_promoted" {
      ~ arn                          = "arn:aws:rds::******:global-cluster:****" -> (known after apply)
      - database_name                = "*****" -> null # forces replacement
      ~ engine                       = "aurora-postgresql" -> (known after apply)
      ~ engine_version               = "13.6" -> (known after apply)
      ~ engine_version_actual        = "13.6" -> (known after apply)
      ~ global_cluster_members       = [
          - {
              - db_cluster_arn = "arn:aws:rds:us-east-1:*******:cluster:********"
              - is_writer      = false
            },
          - {
              - db_cluster_arn = "arn:aws:rds:us-west-2:*******:cluster:********"
              - is_writer      = true
            },
        ] -> (known after apply)
      ~ global_cluster_resource_id   = "cluster-3d909f38d460f47e" -> (known after apply)
      ~ id                           = "********" -> (known after apply)
      ~ storage_encrypted            = true -> (known after apply)
        # (4 unchanged attributes hidden)
    }

Plan: 1 to add, 0 to change, 1 to destroy.

If you try to apply, it fails with the following error:

Error: error removing RDS DB Cluster (arn:aws:rds:us-west-2:******:cluster:********) from Global Cluster (******): InvalidParameterValue: Can not remove writer cluster when there are other clusters
│   status code: 400, request id: 1c891562-958e-4ae0-a7ce-3a6cc0dcdb22

If you don't have a secondary cluster (or you destroy the secondary cluster) and run apply, it will still recreate the global cluster, but in that case apply doesn't fail and completes successfully. However, re-running apply still shows the same plan saying the global cluster needs to be recreated.

CGandhi0 commented 1 year ago

Any updates? This is blocking all subsequent updates to the project, which includes a global cluster spread across multiple regions.

CGandhi0 commented 1 year ago

@justinretzolk - any idea when this will get addressed? It is blocking all updates to the project that include other resources.

cstclair-tunein commented 1 year ago

+1 to this. We are currently experiencing this bug.

justinretzolk commented 1 year ago

Hey y'all 👋 Thank you for checking in on this! Unfortunately I can't provide an ETA on when this will be looked into, due to the potential for shifting priorities. We prioritize by the count of 👍 reactions and a few other things (more information is in our prioritization guide if you're interested).

mnebot commented 1 year ago

Hi all,

I faced the same issue and used this workaround; hope it works for you too:

resource "aws_rds_global_cluster" "example" {
  ...
  lifecycle {
    ignore_changes = [database_name]
  }
}

CGandhi0 commented 1 year ago

I used ignore_changes on database_name as a workaround to mitigate the issue. Forgot to mention it here.