This might apply to docker-gitlab as well, if you were to take over the backup of repositories. Again, small steps first, but the space savings will be significant for anyone who does backups, except perhaps the smallest of users.
I believe a more formal backup setup probably belongs outside of the docker container. I've combined all of my services (incl. gitlab, nextcloud) into a single incremental backup using duplicity. Without significantly more configuration, I get the more-efficient incremental backups I wanted, plus some other options (e.g., encryption, a remote backup host, etc.).
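In case it helps, here is a minimal sketch of that kind of duplicity setup. The paths, target URL, and retention settings below are placeholders, not my actual configuration:

```sh
#!/bin/sh
# Sketch: encrypted, incremental backups of several service volumes to a
# remote host with duplicity. All paths, the sftp URL, and the retention
# count are placeholders.

export PASSPHRASE="..."   # or use --encrypt-key <GPG key ID> instead

duplicity \
    --full-if-older-than 1W \
    --include /srv/gitlab \
    --include /srv/nextcloud \
    --exclude '**' \
    / sftp://backup@backuphost//srv/backups/services

# Prune old chains so the remote host does not grow without bound.
duplicity remove-all-but-n-full 3 --force \
    sftp://backup@backuphost//srv/backups/services
```

`--full-if-older-than` makes duplicity start a new full chain periodically and otherwise append incrementals, so the full/incremental decision is handled for you.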
Do you have any interest in adding incremental backups to the backup methodology? Especially with nextcloud, backups get large and expensive, and incremental backups are widely seen as a preferred option.
Since the current method uses `tar`, the solution could either:

- pass `--listed-incremental=<some_state_file>`, then use `--level=0` for full backups and `--level=1` for incremental backups (relative to the most recent level-0 backup); or
- use the `tar`-provided `backup` and `restore` scripts (perhaps overkill for this simple configuration).

I'm proposing simple logic that would do something like:
- `BACKUP_FULL=604800` (seconds in a week)

Since the backup file you output contains several tar files, this logic could be applied to the `ocdata.tar` file itself, and optionally to `config.tar.gz` (though this has very little benefit due to its small file size). A sketch of how this could look is below.
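For illustration only (untested; the directory names, the `ocdata.snar` state file, and the `BACKUP_FULL` default are mine, not the image's actual layout), the logic could look roughly like this:

```sh
#!/bin/sh
# Sketch: full/incremental dumps of the Nextcloud data directory using
# GNU tar's --listed-incremental support. Paths and variable names are
# illustrative placeholders.

BACKUP_FULL="${BACKUP_FULL:-604800}"      # seconds between full dumps; 604800 = one week
BACKUP_DIR="${BACKUP_DIR:-/backups}"
OCDATA_DIR="${OCDATA_DIR:-/var/www/nextcloud/data}"
SNAPSHOT="${BACKUP_DIR}/ocdata.snar"      # tar's incremental state file
STAMP="$(date +%Y%m%d-%H%M%S)"

# The snapshot file is only (re)written during a full dump, so its mtime
# tells us when the last level-0 dump happened.
now=$(date +%s)
last_full=$(stat -c %Y "${SNAPSHOT}" 2>/dev/null || echo 0)

if [ $(( now - last_full )) -ge "${BACKUP_FULL}" ]; then
    # Level-0 (full) dump: remove the snapshot file so tar starts fresh.
    rm -f "${SNAPSHOT}"
    tar --create --listed-incremental="${SNAPSHOT}" \
        --file="${BACKUP_DIR}/ocdata-${STAMP}-full.tar" "${OCDATA_DIR}"
else
    # Level-1 (incremental) dump: work on a copy of the snapshot so every
    # incremental stays relative to the level-0 dump rather than to the
    # previous incremental.
    cp "${SNAPSHOT}" "${SNAPSHOT}.level1"
    tar --create --listed-incremental="${SNAPSHOT}.level1" \
        --file="${BACKUP_DIR}/ocdata-${STAMP}-incr.tar" "${OCDATA_DIR}"
fi
```

Copying the snapshot file before each incremental keeps every incremental relative to the most recent level-0 dump, so a restore is just "extract the full archive, then extract the latest incremental".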
A future extension to this would be to save the `diff` for `database.sql` (`.gz`) in the incrementals. (Not suggested at this time.)
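If that extension ever becomes interesting, a rough sketch (filenames are placeholders) could be as simple as diffing the current dump against the dump taken at the last full backup:

```sh
# Hypothetical: store only the change relative to the SQL dump taken at
# the last full backup. Filenames are placeholders.
gzip -dc /backups/database-full.sql.gz > /tmp/database-full.sql
diff -u /tmp/database-full.sql /tmp/database-current.sql \
    | gzip > /backups/database-incr.diff.gz

# To restore, apply the stored diff back onto the full dump:
#   gzip -dc /backups/database-incr.diff.gz | patch /tmp/database-full.sql
```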
References:

- `tar` incremental dumps: https://www.gnu.org/software/tar/manual/html_node/Incremental-Dumps.html
- `backup` and `restore` script configuration: https://www.gnu.org/software/tar/manual/html_node/Backup-Parameters.html