[Open] AndrewBedscastle opened this issue 6 years ago
It is a good idea @AndrewBedscastle, and has been discussed before. Since this runs in an infinite loop, it could prune old backups as part of that loop.
On the other hand, it makes the tool do "one more thing", which may not be desired.
I don't object though, since it will always be optional. Would you open a PR for it, including tests?
@AndrewBedscastle I had the same idea, but a simple `find` + `rm` solves that problem for most cases.
I guess (never tested) it should also work from /scripts.d/pre-backup
or /scripts.d/post-backup.
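A minimal sketch of such a prune step, assuming it is dropped into the post-backup hook directory; the directory, archive pattern, and retention period below are illustrative, not part of the tool:

```shell
#!/bin/bash
# Hypothetical prune hook, e.g. /scripts.d/post-backup/prune.sh.
# Deletes backup archives older than a given number of days.
prune_backups() {
  # Defaults are illustrative assumptions, not the tool's real settings.
  local dir="${1:-/backup}" days="${2:-14}"
  # -mtime +N matches files modified more than N*24h ago;
  # -delete removes each match, -print logs what was removed.
  find "$dir" -type f -name '*.tgz' -mtime +"$days" -print -delete
}
```

Run on every backup cycle, this keeps roughly the last `days` worth of archives without any support needed in the tool itself.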
Workaround for S3 users:
```yaml
Resources:
  BackupServiceS3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: backup
      VersioningConfiguration:
        Status: Enabled
      LifecycleConfiguration:
        Rules:
          # clean up old unfinished uploads
          - AbortIncompleteMultipartUpload:
              DaysAfterInitiation: 7
            Status: Enabled
          # hold removed files for 14 days
          - NoncurrentVersionExpirationInDays: 14
            Status: Enabled
```
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: backup
data:
  target.sh: |
    #!/bin/bash
    echo "backup.tgz"
```

[...]

```yaml
volumeMounts:
  - name: backup
    mountPath: /scripts.d/target.sh
    subPath: target.sh
```
Now the backup file is overwritten on every backup run, old versions expire after 14 days, and versions that have not yet expired can still be retrieved through the console or the API.
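For example, the retained versions can be listed and fetched with the AWS CLI (the bucket name and object key below match the illustrative template above, not anything real):

```shell
# List all versions of the backup object (bucket/key are illustrative)
aws s3api list-object-versions --bucket backup --prefix backup.tgz

# Download one specific noncurrent version by its version ID
aws s3api get-object --bucket backup --key backup.tgz \
  --version-id <VersionId> backup-restored.tgz
```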
There is a quite nice utility to implement this: cronicle. At least for file backups this should work. It requires Python.
What's the progress of this feature?
No one has offered a PR yet.
PR?
I put a pull request in to limit the number of days a backup is kept.
Hi, it would be nice to be able to specify the number of backups to keep; the oldest backups should be removed automatically.
Best regards, Andreas