sokada1221 closed this issue 4 years ago
Thanks!
Considering the disk space issue, would a number-based retention policy be better? e.g. keep the latest 3 backups.
And, IMHO, if an upstream store is configured, the backup data can be deleted immediately once uploaded. That way the retention policy and the delete-after-upload behavior (treating the PV as a temporary store) can be handled as two separate issues.
Good catch! Yes, number-based retention would be better.
And OK, I'll open another issue for the following:
If that takes too much implementation effort, we can work around it by deleting immediately upon successful upload to CEPH/S3/GCP.
@aylei Logged another ticket for the short-term solution, and updated this ticket to focus on the retention part.
We are now building a backup controller that I think will be able to address these concerns.
This issue is stale because it has been open for 60 days with no activity. Remove the stale label or comment, or this will be closed in 15 days.
Feature Request
Is your feature request related to a problem? Please describe:
The PV for the scheduled backup job eventually becomes full. Currently, manual intervention is needed to purge the old data.
Describe the feature you'd like:
Implement a retention policy feature where the scheduled backup automatically purges backups beyond the configured retention count. e.g. if 3 is configured, at most 3 backups are kept on disk, and each additional successful backup replaces the oldest one.
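The pruning described above can be sketched roughly as follows. This is a minimal illustration, not the operator's actual implementation; it assumes each backup lives in its own subdirectory whose name sorts chronologically (e.g. timestamped names), and the function name `prune_backups` is hypothetical:

```python
import os
import shutil

def prune_backups(backup_dir, retain=3):
    """Keep only the `retain` newest backup subdirectories in backup_dir.

    Assumes subdirectory names sort chronologically, e.g.
    'backup-20190801-120000', so a lexicographic sort gives oldest first.
    """
    entries = sorted(
        e for e in os.listdir(backup_dir)
        if os.path.isdir(os.path.join(backup_dir, e))
    )
    # Everything except the newest `retain` entries is stale.
    stale = entries[:-retain] if retain > 0 else entries
    for name in stale:
        shutil.rmtree(os.path.join(backup_dir, name))
    # Return what is left, oldest first.
    return sorted(
        e for e in os.listdir(backup_dir)
        if os.path.isdir(os.path.join(backup_dir, e))
    )
```

With `retain=3`, running this after each successful backup keeps the PV bounded to the three most recent backups.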
Describe alternatives you've considered:
https://github.com/pingcap/tidb-operator/issues/841
Teachability, Documentation, Adoption, Migration Strategy:
For example, a new configuration like the following can be added:
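The concrete config example was not captured in this thread; the following is a hypothetical sketch of what such a field could look like on the scheduled-backup spec. The field name `maxBackups` and the surrounding structure are illustrative assumptions, not the operator's confirmed API:

```yaml
# Hypothetical sketch only -- field names are illustrative.
apiVersion: pingcap.com/v1alpha1
kind: BackupSchedule
spec:
  schedule: "0 0 * * *"   # cron schedule for the backup job
  maxBackups: 3           # keep at most 3 backups; the oldest is purged first
```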