Expected Behavior
After a successful restoration of the DB to a point in time, deleting the nested block restore_to_point_in_time should not cause the AWS RDS DB to be replaced again.
If the first point isn't feasible or logical and the block has to stay, then changing its parameters should trigger the restoration procedure again.
Resolving either problem/use case would effectively resolve both. If the 1st point is resolved, we can (dynamically) delete the block from the configuration without triggering another replacement. If the 2nd point is resolved, we can leave the restore nested block in place with the last configuration and simply enter different parameter values when another restoration is needed.
Actual Behavior
After a successful restoration to a point in time, conditionally setting the nested block restore_to_point_in_time back to [] causes the plan to replace the database. When I compare this behavior with restoration from a (classic) snapshot (parameter snapshot_identifier), there is no such issue: once we set snapshot_identifier to null, no replacement is triggered; the change is just noted in the state file, which is OK. I find this inconsistent, because for AWS this is a one-time action, after which the DB is not associated with the original snapshot in any way.
The second issue is that if we want to restore the DB again, say to a different time (or from a different DB identifier), Terraform recognizes the configuration changes but does not trigger a DB restore; it just writes them into the state file. This is because the nested block from the previous restoration is still in the state file. The only workaround is to first restore using the snapshot_identifier parameter while simultaneously deleting the restore_to_point_in_time nested block, and only after all that set the nested block again, now with a different time or identifier.
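A possible interim mitigation for the first issue (an untested assumption on my part, not a documented fix) is the lifecycle ignore_changes meta-argument, which would stop Terraform from planning a replacement when the restore block is later removed:

```hcl
# Hypothetical workaround sketch -- not an endorsed fix. Once the
# point-in-time restore has completed, ignore subsequent changes to the
# block so that deleting it no longer forces a replacement.
resource "aws_db_instance" "default" {
  # ... existing configuration ...

  lifecycle {
    ignore_changes = [restore_to_point_in_time]
  }
}
```

Note that this trades one problem for the other: ignored changes also can't trigger a new restoration, so it only illustrates the tension between the two use cases rather than resolving both.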
Relevant Error/Panic Output Snippet
No response
Terraform Configuration Files
main.tf:
resource "aws_db_instance" "default" {
  tags = merge(
    {
      "dtit:sec:InfoSecClass" = var.InfoSecClass
      "CreatedBy"             = "onepm-robot"
      "Name"                  = var.db_name_tag
    },
    local.schedule_tag,
  )

  # Allocated storage for the database instance.
  allocated_storage = var.allocated_storage[var.cicd_env]
  # Autoscaling enabled up to 2 TB.
  max_allocated_storage = var.max_allocated_storage[var.cicd_env]
  # Added storage type - A.L. 18.09.2023
  storage_type = var.storage_type

  # Database engine and engine version.
  engine         = var.engine[var.cicd_env]
  engine_version = var.engine_version

  # Instance class and license model.
  # instance_class = var.instance_class
  instance_class = var.instance_class[var.cicd_env]
  license_model  = var.license_model[var.cicd_env]

  db_name = var.db_name
  # Credentials for the database instance.
  username = var.username
  password = var.password

  parameter_group_name = local.parameter_group

  # Enable DB logging.
  enabled_cloudwatch_logs_exports = ["alert", "audit", "listener", "trace"]

  ## BACKUP & SNAPSHOT SETTINGS ##
  # Automated backups
  backup_retention_period  = local.bu_retention_period
  delete_automated_backups = false

  # Final snapshot
  skip_final_snapshot       = false
  final_snapshot_identifier = "final-snap-${random_string.random-final-snap-id.result}-${formatdate("DD-MMM-YYYY-hh-mm-ss", timestamp())}"

  # Conditionally restore from snapshot (condition defined in local variables):
  snapshot_identifier = local.snapshot_identifier

  # Conditionally restore the DB to a specific point in time (conditions defined in local variables):
  dynamic "restore_to_point_in_time" {
    for_each = var.recover_to_point_in_time["source_db_id"] != null ? var.recover_to_point_in_time[*] : []
    content {
      source_db_instance_identifier = restore_to_point_in_time.value.source_db_id
      restore_time                  = restore_to_point_in_time.value.timestamp
    }
  }

  ### Rest of the configuration skipped ###
}
variable "recover_to_point_in_time" {
  type = object({
    source_db_id = any
    timestamp    = any
  })
}
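The local values referenced in the configuration (local.snapshot_identifier and the condition driving the dynamic block) are not shown; a minimal sketch of how such wiring might look, where var.snapshot_id and the example tfvars values are my own illustrative assumptions rather than the actual files, is:

```hcl
# Hypothetical wiring -- not the reporter's actual files.

variable "snapshot_id" {
  type    = string
  default = ""
}

locals {
  # Restore from a classic snapshot only when one is explicitly requested;
  # a null value means no snapshot restore is attempted.
  snapshot_identifier = var.snapshot_id != "" ? var.snapshot_id : null
}

# terraform.tfvars (example values that would enable a point-in-time restore):
# recover_to_point_in_time = {
#   source_db_id = "source-db-instance-id"
#   timestamp    = "2023-11-15T09:00:00Z"
# }
```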
Steps to Reproduce
Use case #1:
1. Create an AWS Oracle DB instance using aws_db_instance.
2. Set the nested block restore_to_point_in_time (PITR), either dynamically or manually.
3. Recover the database from a so-called continuous backup to any time in the past.
4. Delete the nested PITR block.
5. The plan will announce replacement of the DB instance.
Use case #2:
1. Create an AWS Oracle DB instance using aws_db_instance.
2. Set the nested block restore_to_point_in_time (PITR), either dynamically or manually.
3. Recover the database from a so-called continuous backup to any time in the past.
4. Say you need to recover again, now to a different time, so set the parameters of the PITR block to a different time and/or a different instance identifier.
5. The plan will announce an in-place update of the DB resource, and this time it won't trigger the restoration.
Terraform Core Version
1.6.3
AWS Provider Version
5.26.0
Affected Resource(s)
aws_db_instance
Debug Output
TF_debug_output.txt
Panic Output
No response
Important Factoids
Using Oracle SE2 19 in the test environment and Oracle EE 19 in prod. env.
References
No response
Would you like to implement a fix?
None