hashicorp / terraform-provider-aws

The AWS Provider enables Terraform to manage AWS resources.
https://registry.terraform.io/providers/hashicorp/aws
Mozilla Public License 2.0

aws_db_instance read replica 'cannot elect new source database for replication' #16054

jrobison-sb closed this issue 2 weeks ago

jrobison-sb commented 4 years ago

Terraform CLI and Terraform AWS Provider Version

$ terraform -v
Terraform v0.13.5
+ provider registry.terraform.io/-/aws v3.13.0

Affected Resource(s)

- aws_db_instance

Terraform Configuration Files

resource "aws_db_instance" "primary" {
  identifier_prefix = "some-prefix-"
  snapshot_identifier = "some-new-snapshot-which-will-recreate-this-resource"
  ...
}

resource "aws_db_instance" "read_replica" {
  replicate_source_db          = aws_db_instance.primary.identifier
  ...
}

Expected Behavior

When I change the snapshot_identifier of the primary RDS instance, Terraform destroys and recreates that resource, as expected.

I would expect the read_replica to be destroyed and recreated as well.

Actual Behavior

The read_replica isn't destroyed and recreated; instead, Terraform attempts to modify its replicate_source_db attribute in place, which fails with the error cannot elect new source database for replication.

The only way to unblock this is to taint the read replica, after which both resources are recreated on the next apply.

Steps to Reproduce

  1. Create a primary RDS instance and a read replica, as shown in the HCL above.
  2. Change the snapshot_identifier of the primary.
  3. Run an apply; updating the read replica will fail with the error cannot elect new source database for replication.

WoodProgrammer commented 3 years ago

Hi, did you find a solution?

jrobison-sb commented 3 years ago

@WoodProgrammer I didn't find a solution. My workaround was to manually taint the replica; on the next apply, both DB instances were recreated.
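
For anyone finding this later, the taint step is just the following, using the resource address from the original report:

$ terraform taint aws_db_instance.read_replica
$ terraform apply

On Terraform 0.15.2 and later, terraform apply -replace=aws_db_instance.read_replica does the same thing in a single step.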

WoodProgrammer commented 3 years ago

Hi, when you mention tainting, do you mean adding an ignore_changes lifecycle block? For example:

  lifecycle {
    ignore_changes = [
      replicate_source_db,
    ]
  }

jrobison-sb commented 3 years ago

@WoodProgrammer if your use case matches mine (where you are replacing the source RDS instance and so you also want the replica to be replaced), then tainting the replica is all that should be needed.

dvasiljevic-humanity commented 3 years ago

This works for me:

resource "aws_db_instance" "dbname" {
  ...
  identifier             = "dbname-${substr(md5(var.snapshot_id),0,8)}"
  ...
}

resource "aws_db_instance" "dbname-replica" {
  ...
  count = var.replica_instances
  replicate_source_db      = aws_db_instance.dbname.identifier
  identifier                            = "${aws_db_instance.dbname.identifier}-replica-0${count.index}"
  ...
  }

Hope it helps.

panaut0lordv commented 2 years ago

Personally, I chose aws_db_instance.this.resource_id, since it's actually unique across replacements of the main DB; I take a sha1 of it, then a substr, and use the result as a suffix in the identifier.
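
A minimal sketch of that approach, assuming a primary named aws_db_instance.this; the "dbname-replica-" prefix is illustrative:

resource "aws_db_instance" "read_replica" {
  replicate_source_db = aws_db_instance.this.identifier

  # resource_id is the RDS DbiResourceId; it changes whenever the primary
  # is replaced, so a suffix derived from it forces the replica to be
  # replaced as well.
  identifier = "dbname-replica-${substr(sha1(aws_db_instance.this.resource_id), 0, 8)}"
  ...
}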

nwsparks commented 2 years ago

I haven't tested this, but replace_triggered_by (added in Terraform 1.2) may help here:

  lifecycle {
    replace_triggered_by = [
      aws_db_instance.rds.id
    ]
  }
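
Untested as noted above, but a fuller sketch of how that might be wired up, using the resource names from the original report:

resource "aws_db_instance" "read_replica" {
  replicate_source_db = aws_db_instance.primary.identifier
  ...

  lifecycle {
    # The primary's id changes when it is replaced, which forces this
    # replica to be replaced at the same time.
    replace_triggered_by = [
      aws_db_instance.primary.id
    ]
  }
}
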
jrobison-sb commented 2 years ago

I ended up working around this myself by deriving the replica's identifier from the primary's. When the primary gets replaced, it gets a new identifier, which then forces the replica to be replaced as well:

resource "aws_db_instance" "primary" {
  identifier_prefix   = "some-prefix-"
  snapshot_identifier = "some-new-snapshot-which-will-recreate-this-resource"
  ...
}

resource "aws_db_instance" "read_replica" {
  identifier          = aws_db_instance.primary.identifier
  replicate_source_db = aws_db_instance.primary.identifier
  ...
}
github-actions[bot] commented 1 month ago

Marking this issue as stale due to inactivity. This helps our maintainers find and focus on the active issues. If this issue receives no comments in the next 30 days it will automatically be closed. Maintainers can also remove the stale label.

If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thank you!