hashicorp / terraform-provider-aws

The AWS Provider enables Terraform to manage AWS resources.
https://registry.terraform.io/providers/hashicorp/aws
Mozilla Public License 2.0
9.83k stars · 9.17k forks

[Bug]: Aurora postgres read replica creation for RDS postgres source #32253

Open yogesh2580 opened 1 year ago

yogesh2580 commented 1 year ago

Terraform Core Version

1.3.2

AWS Provider Version

4.67.0

Affected Resource(s)

We are trying to create an Aurora read replica of an RDS PostgreSQL instance (link to the AWS Blog post). We are not passing a `master_username` value to the `aws_rds_cluster` resource; the value is populated from the source RDS DB instance. However, the read replica is in sync with the source DB instance.

module.secondry.aws_rds_cluster.datafabric[0] will be created
    + resource "aws_rds_cluster" "datafabric" {
        + allocated_storage                   = (known after apply)
        + allow_major_version_upgrade         = false
        + apply_immediately                   = false
        + arn                                 = (known after apply)
        + availability_zones                  = [
            + "us-east-1d",
            + "us-east-1e",
            + "us-east-1f",
          ]
        + backtrack_window                    = 0
        + backup_retention_period             = 7
        + cluster_identifier                  = "datafabric-p-8061"
        + cluster_identifier_prefix           = (known after apply)
        + cluster_members                     = (known after apply)
        + cluster_resource_id                 = (known after apply)
        + copy_tags_to_snapshot               = true
        + database_name                       = (known after apply)
        + db_cluster_parameter_group_name     = "datafabric-p-8061"
        + db_subnet_group_name                = "datafabric-p-8061"
        + deletion_protection                 = true
        + enable_global_write_forwarding      = false
        + enable_http_endpoint                = false
        + enabled_cloudwatch_logs_exports     = [
            + "postgresql",
          ]
        + endpoint                            = (known after apply)
        + engine                              = "aurora-postgresql"
        + engine_mode                         = "provisioned"
        + engine_version                      = "12.13"
        + engine_version_actual               = (known after apply)
        + final_snapshot_identifier           = "datafabric-p-8061-final"
        + hosted_zone_id                      = (known after apply)
        + iam_database_authentication_enabled = false
        + iam_roles                           = (known after apply)
        + id                                  = (known after apply)
        + kms_key_id                          = (known after apply)
        + master_user_secret                  = (known after apply)
        + master_user_secret_kms_key_id       = (known after apply)
        + master_username                     = (known after apply)
        + network_type                        = (known after apply)
        + port                                = 5432
        + preferred_backup_window             = "02:00-03:00"
        + preferred_maintenance_window        = "sun:03:00-sun:04:00"
        + reader_endpoint                     = (known after apply)
        + replication_source_identifier       = "arn:aws:rds:us-east-1:598693051713:db:pdfrb-8061"
        + skip_final_snapshot                 = false
        + source_region                       = "us-east-1"
        + storage_encrypted                   = true
        + storage_type                        = (known after apply)
        + tags                                = {}
        }

Expected Behavior

terraform apply should succeed and create the Aurora read replica of the source RDS DB instance.

Actual Behavior

module.secondry.aws_rds_cluster_instance.cluster_instances[1]: Still creating... [31m30s elapsed]
  module.secondry.aws_rds_cluster_instance.cluster_instances[1]: Still creating... [31m40s elapsed]
  module.secondry.aws_rds_cluster_instance.cluster_instances[1]: Still creating... [31m50s elapsed]
  module.secondry.aws_rds_cluster_instance.cluster_instances[1]: Still creating... [32m0s elapsed]
  module.secondry.aws_rds_cluster_instance.cluster_instances[1]: Still creating... [32m10s elapsed]
 module.secondry.aws_rds_cluster_instance.cluster_instances[1]: Creation complete after 32m14s [id=datafabric-p-8061-1]
 ā•·
 ā”‚ Error: creating RDS Cluster (datafabric-p-8061) Instance (datafabric-p-8061-0): InvalidParameterValue: Creation of an Aurora Replica in a cluster which is already replicating from an RDS for PostgreSQL master is not allowed.
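The error message itself points at an AWS-side restriction: while an Aurora cluster is still replicating from an RDS for PostgreSQL source, AWS rejects the creation of additional Aurora replicas in that cluster, so only a single instance can exist until the cluster is promoted. A minimal sketch of a guard for this (the variable names `instance_count` and `instance_class` are hypothetical, not taken from the reporter's module):

```hcl
# Sketch only: the error above shows AWS refuses extra Aurora replicas
# while the cluster replicates from an RDS for PostgreSQL master, so
# cap the instance count at one whenever a replication source is set.
resource "aws_rds_cluster_instance" "cluster_instances" {
  count              = var.replication_source_identifier != null ? 1 : var.instance_count
  identifier         = "${local.cluster_identifier}-${count.index}"
  cluster_identifier = aws_rds_cluster.datafabric[0].id
  instance_class     = var.instance_class
  engine             = aws_rds_cluster.datafabric[0].engine
}
```

With a guard like this, the second instance that triggered the `InvalidParameterValue` error would not be planned until the cluster stops replicating from the source.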

Relevant Error/Panic Output Snippet

No response

Terraform Configuration Files

resource "aws_rds_cluster" "datafabric" {
  count                               = 1
  cluster_identifier                  = local.cluster_identifier
  engine                              = var.engine
  engine_mode                         = var.engine_mode
  engine_version                      = var.engine_version
  db_subnet_group_name                = aws_db_subnet_group.cluster_sbn_grp[0].name
  db_cluster_parameter_group_name     = aws_rds_cluster_parameter_group.cluster_parameter_group[0].name
  availability_zones                  = var.availability_zones != null ? var.availability_zones : random_shuffle.az.result
  database_name                       = var.is_secondary_cluster == false ? var.database_name : null
  master_username = (
    var.replication_source_identifier == null && var.snapshot_identifier == null && var.is_secondary_cluster == false
    ? var.master_username
    : null
  )
  master_password = (
    var.replication_source_identifier == null && var.snapshot_identifier == null && var.is_secondary_cluster == false
    ? random_password.password.result
    : var.snapshot_identifier != null ? var.master_password : null
  )
  backup_retention_period             = var.backup_retention_period
  preferred_backup_window             = var.preferred_backup_window
  copy_tags_to_snapshot               = true
  final_snapshot_identifier           = "${local.cluster_identifier}-final"
  skip_final_snapshot                 = false
  snapshot_identifier                 = var.snapshot_identifier
  storage_encrypted                   = true
  tags                                = module.common.base_tags
  vpc_security_group_ids              = local.security_group_id_list
  allow_major_version_upgrade         = var.allow_major_version_upgrade
  backtrack_window                    = var.backtrack_window
  apply_immediately                   = var.apply_immediately
  deletion_protection                 = var.deletion_protection
  enable_http_endpoint                = var.enable_http_endpoint
  enabled_cloudwatch_logs_exports     = var.engine_mode == "serverless" ? null : var.enabled_cloudwatch_logs_exports
  port                                = var.port
  kms_key_id                          = var.kms_key_id
  replication_source_identifier       = var.replication_source_identifier
  source_region                       = var.source_region
  global_cluster_identifier           = var.global_cluster_identifier
  iam_database_authentication_enabled = var.enable_iam_authentication
  preferred_maintenance_window        = var.preferred_maintenance_window

  dynamic "scaling_configuration" {
    for_each = var.engine_mode == "serverless" ? [1] : []
    content {
      auto_pause               = var.auto_pause
      max_capacity             = var.max_capacity
      min_capacity             = var.min_capacity
      seconds_until_auto_pause = var.seconds_until_auto_pause
      timeout_action           = var.timeout_action
    }
  }

  dynamic "serverlessv2_scaling_configuration" {
    for_each = var.engine_mode == "provisioned" && var.enable_serverlessv2_scaling ? [1] : []
    content {
      min_capacity = var.provisioned_mode_min_capacity
      max_capacity = var.provisioned_mode_max_capacity
    }
  }

  dynamic "restore_to_point_in_time" {
    for_each = var.source_cluster_identifier == null ? [] : [1]
    content {
      source_cluster_identifier  = var.source_cluster_identifier
      restore_type               = var.restore_type
      use_latest_restorable_time = var.use_latest_restorable_time
      restore_to_time            = var.restore_to_time
    }
  }

  dynamic "s3_import" {
    for_each = var.bucket_name == null ? [] : [1]
    content {
      source_engine         = var.source_engine
      source_engine_version = var.source_engine_version
      bucket_name           = var.bucket_name
      bucket_prefix         = var.bucket_prefix
      ingestion_role        = var.ingestion_role
    }
  }

  lifecycle {
    ignore_changes = [global_cluster_identifier, replication_source_identifier]
  }
}
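Once replication from the RDS source has caught up, the cluster has to be promoted before further reader instances can be added. Assuming the standard AWS CLI promotion workflow (this step is not part of the reporter's module), it would look like:

```shell
# Promote the Aurora cluster so it stops replicating from the RDS for
# PostgreSQL source; additional reader instances can only be created
# after promotion. The identifier matches the cluster in the plan above.
aws rds promote-read-replica-db-cluster \
    --db-cluster-identifier datafabric-p-8061
```

Since the module already sets `ignore_changes = [replication_source_identifier]`, promoting the cluster outside Terraform should not cause a diff on the next plan.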

Steps to Reproduce

terraform init
terraform plan
terraform apply

Debug Output

No response

Panic Output

No response

Important Factoids

No response

References

No response

Would you like to implement a fix?

None

github-actions[bot] commented 1 year ago

Community Note