Open · rocketnova opened this issue 1 year ago
Task 1
The database is mostly used for short-term data persistence rather than long-term storage, which means our actual allocated storage doesn't need to be that large. The recommendation is to set max_allocated_storage to 2 GiB and to configure autoscaling to further reduce costs. I believe the minimum value for allocated_storage is 1 GiB, which matches our current configuration for the Aurora database.
[Screenshot: current storage allocation]
If the need arises, we could also set up an automated job (cron, Lambda, etc.) that drops data after one week.
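Something like this minimal Terraform sketch could wire that up, assuming an EventBridge schedule triggering a cleanup Lambda (the resource names and the Lambda itself are hypothetical; the actual delete logic would live in the function code):

```hcl
# Hypothetical sketch: run a cleanup Lambda nightly via EventBridge.
# aws_lambda_function.data_cleanup is assumed to exist and to run
# something like: DELETE FROM <table> WHERE created_at < now() - interval '7 days'
resource "aws_cloudwatch_event_rule" "data_cleanup" {
  name                = "drop-stale-data"
  schedule_expression = "cron(0 4 * * ? *)" # 04:00 UTC daily
}

resource "aws_cloudwatch_event_target" "data_cleanup" {
  rule = aws_cloudwatch_event_rule.data_cleanup.name
  arn  = aws_lambda_function.data_cleanup.arn
}

resource "aws_lambda_permission" "allow_eventbridge" {
  statement_id  = "AllowExecutionFromEventBridge"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.data_cleanup.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.data_cleanup.arn
}
```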
Excellent! Thank you.
> The recommendation is to set max_allocated_storage to 2 GiB and to configure autoscaling to further reduce costs. I believe the minimum value for allocated_storage is 1 GiB, which matches our current configuration for the Aurora database.
Does this mean that we should change this line to the following?
```hcl
allocated_storage     = 1
max_allocated_storage = 2
```
@rocketnova I think so? Nothing in the documents says whether the values for max_allocated_storage should be the actual amount of storage or capacity units.
@aplybeah I think it must be? The Terraform argument reference says allocated_storage is in GiB:

> allocated_storage - (Required unless a snapshot_identifier or replicate_source_db is provided) The allocated storage in gibibytes.
And then the storage autoscaling section says:

> To enable Storage Autoscaling with instances that support the feature, define the max_allocated_storage argument higher than the allocated_storage argument. Terraform will automatically hide differences with the allocated_storage argument value if autoscaling occurs.

```hcl
resource "aws_db_instance" "example" {
  # ... other configuration ...
  allocated_storage     = 50
  max_allocated_storage = 100
}
```
Which implies to me that the example sets the base allocated_storage to 50 GiB and max_allocated_storage to 100 GiB, but I could be wrong!
When you test out the implementation, could you try the following and see if it works as expected/hoped?
```hcl
allocated_storage     = 1
max_allocated_storage = 2
```
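On the full resource, I'd expect it to look roughly like this minimal sketch (the engine, instance class, and resource name here are placeholder assumptions, not our actual configuration):

```hcl
resource "aws_db_instance" "participant_portal" {
  # ... other configuration ...
  engine                = "postgres"     # assumption
  instance_class        = "db.t4g.micro" # assumption
  allocated_storage     = 1 # base storage in GiB
  max_allocated_storage = 2 # autoscaling ceiling in GiB
}
```

If RDS rejects 1 GiB for our engine, the apply should tell us.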
Sure! I'll give that a try.
Task 2
Following the request in this comment: https://github.com/navapbc/wic-participant-recertification-portal/pull/116#discussion_r1237377080, I think the only thing we need to do is set retention periods in the database and possibly make sure a bucket exists for the snapshots:
```hcl
# something like this
backup_retention_period = 7
backup_window           = "02:00-03:00" # a random time, but note it's in UTC
```
I assume these would be arguments on the aws_db_instance resource? If so, how do we tell it what S3 bucket to use?
@rocketnova I don't think the S3 bucket applies in our case; I misread something when I added that comment. But yes, these are arguments on aws_db_instance.
@aplybeah Oh I see. What does the backup restore process look like if we use backup_retention_period?
@rocketnova Every day there is a backup window of about 30 minutes. We can either define when we want this backup to happen or use AWS's default time. During this window there may be increased latency, etc. (nothing we need to be too concerned about), and the database creates a snapshot of the entire instance.
@aplybeah Are we talking about these RDS automated backups? If so, is this the correct restoration process?
@rocketnova Yes we are, and that is the correct process.
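For the restore itself, my understanding is that you spin up a new instance from a snapshot, which in Terraform would look something like this sketch (the identifier values are hypothetical; automated snapshot IDs take the form "rds:&lt;instance&gt;-&lt;timestamp&gt;"):

```hcl
# Hypothetical restore: create a new instance from an automated snapshot.
resource "aws_db_instance" "restored" {
  identifier          = "participant-portal-restored" # assumption
  instance_class      = "db.t4g.micro"                # assumption
  snapshot_identifier = "rds:participant-portal-2023-06-22-02-00" # hypothetical ID
  # ... other configuration ...
}
```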
Context
We discovered that AWS RDS Aurora v2 is more expensive than plain AWS RDS. On PRP, Aurora costs ~$150/mo even with very little load, while on MWDP, RDS costs ~$15/mo. A ~$135/mo difference is not significant on large projects, but it is quite costly for our long-lived demo projects.
In #116, we got really far on creating an RDS drop-in alternative to the RDS Aurora module, but the PR was never merged because two comments were not resolved.
Tasks
- [ ] Decide what allocated_storage should be for the participant portal
- [ ] Decide what the backup configuration should be for the participant portal