Oh, look at that. I guess just doing a Docker Hub build with the Dockerfile set to Dockerfile.template
works. :) https://hub.docker.com/r/stuckj/duplicacy. I'll give this a test now just to make sure it's working as expected.
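For reference (and ignoring whatever extra the Docker Hub build hooks may do), I believe the rough local equivalent is just pointing a plain docker build at the template file; the tag name here is arbitrary:

docker build -f Dockerfile.template -t stuckj/duplicacy:test .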
Yep, worked as expected. I just did this:

docker run -it \
  -v "${HOME}/test-src":/data \
  -v "${PWD}/test-dest":/backup-dir \
  -v "${PWD}/pre-backup.sh":/pre-backup.sh \
  -e SNAPSHOT_ID="id" \
  -e STORAGE_URL="/backup-dir" \
  -e RUN_JOB_IMMEDIATELY="yes" \
  -e PRE_BACKUP_SCRIPT="/pre-backup.sh" \
  -e BACKUP_CRON="0 1 * * *" \
  stuckj/duplicacy

...with an empty test-dest directory and a test-src directory containing some test files to back up. The pre-backup.sh script I used is just this:
#!/bin/sh
echo "YAY, I ran before the backup!"
The output verifies the pre-backup.sh script ran before the backup. Specifically, this part of the output below:
...
========== Run backup job at Thu Apr 8 03:59:27 UTC 2021 ==========
Run pre backup script
YAY, I ran before the backup!
Repository set to /data
...
Full output:
Unable to find image 'stuckj/duplicacy:latest' locally
latest: Pulling from stuckj/duplicacy
ca3cd42a7c95: Already exists
a6125919561a: Pull complete
0e69123f963c: Pull complete
9492ec35d371: Pull complete
aa5c19b7d875: Pull complete
336007bc5678: Pull complete
Digest: sha256:518f1e0baa0d171af15eadb3c7ac48b66df9d02cd376d87200b3166817d2a3fb
Status: Downloaded newer image for stuckj/duplicacy:latest
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 10-setupssmtp: executing...
[cont-init.d] 10-setupssmtp: exited 0.
[cont-init.d] 20-init: executing...
/data will be backed up to /backup-dir with id id
[cont-init.d] 20-init: exited 0.
[cont-init.d] 60-createcron: executing...
[cont-init.d] 60-createcron: exited 0.
[cont-init.d] 99-backupimmediately: executing...
========== Run backup job at Thu Apr 8 03:59:27 UTC 2021 ==========
Run pre backup script
YAY, I ran before the backup!
Repository set to /data
Storage set to /backup-dir
No previous backup found
Indexing /data
Parsing filter file /config/filters
Loaded 0 include/exclude pattern(s)
Packed from-powerpoint/Creche Photo Show_1076.pptx (5656825)
Packed from-powerpoint/Creche Photo Show_1124.pptx (475844)
Packed from-powerpoint/Creche Photo Show_1214.pptx (5643265)
Packed from-powerpoint/Creche Photo Show_1217.pptx (97787)
Packed from-powerpoint/Creche Photo Show_1218.pptx (130077)
Packed from-powerpoint/Creche Photo Show_1228.pptx (119149)
Packed from-powerpoint/Creche Photo Show_1229.pptx (114011)
Packed from-powerpoint/Creche Photo Show_1230.pptx (99314)
Packed from-powerpoint/Creche Photo Show_1231.pptx (144486)
Packed from-powerpoint/Creche Photo Show_1232.pptx (176891)
Packed from-powerpoint/Creche Photo Show_1233.pptx (64571)
Packed from-powerpoint/Creche Photo Show_1234.pptx (121786)
Packed from-powerpoint/Creche Photo Show_1235.pptx (120272)
Packed from-powerpoint/Creche Photo Show_1236.pptx (126696)
Packed from-powerpoint/Creche Photo Show_1237.pptx (111960)
Packed from-powerpoint/Creche Photo Show_1238.pptx (104833)
Packed from-powerpoint/Creche Photo Show_1239.pptx (135596)
Packed from-powerpoint/Creche Photo Show_815.pptx (16165)
Backup for /data at revision 1 completed
Backup COMPLETED, duration 00:00:00
[cont-init.d] 99-backupimmediately: exited 0.
[cont-init.d] done.
[services.d] starting services
crond[288]: crond (busybox 1.32.1) started, log level 8
[services.d] done.
^C[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] waiting for services.
s6-svwait: fatal: timed out
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.
Actually, as I think about this more, I think I'll modify the PR tomorrow to have the process fail if the pre-backup script fails. At least in my use case that would make sense, since not being able to mount a snapshot would make the backup pointless (and not failing wouldn't alert me to the problem).
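The change is basically just checking the script's exit code before kicking off the backup. A rough sketch of the shape of it (this isn't the actual s6 job script, and the final duplicacy invocation is elided):

#!/bin/sh
# Sketch only: the real logic lives in the image's backup job script.
if [ -n "${PRE_BACKUP_SCRIPT}" ]; then
    echo "Run pre backup script"
    if ! "${PRE_BACKUP_SCRIPT}"; then
        echo "Pre-backup script failed, aborting backup" >&2
        exit 1
    fi
fi
# ...normal duplicacy backup invocation continues here...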
Thanks for the effort! I'll wait.
No problem. :) I've made that change now and tested with a non-failing PRE_BACKUP_SCRIPT, a failing PRE_BACKUP_SCRIPT (which now will not run the backup), and no PRE_BACKUP_SCRIPT. All cases look like they're working for me.
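For the failing case, a script that just exits non-zero is all it takes, e.g.:

#!/bin/sh
echo "Simulating a pre-backup failure" >&2
exit 1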
Merged to both the master branch (edge Docker tag) and the release branch (latest Docker tag).
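In docker pull terms, that mapping should be (assuming the published image name matches this repo):

docker pull azinchen/duplicacy:edge    # tracks master
docker pull azinchen/duplicacy:latest  # tracks release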
I've added a small feature to run a pre-backup script before running the duplicacy backup. I don't have it prevent the backup from running upon failure, but could add that. My use case for this was to be able to mount a ZFS snapshot (the latest one for a dataset) to a fixed mount point before running duplicacy on that mount point. That way duplicacy will only see the changes in the latest snapshot, and the snapshot gives an immutable view of the filesystem, avoiding problems with files changing while duplicacy is backing up. This is helpful for backing up database files without needing to shut down the DB (part of the pre-snapshot process handles quiescing the DB first).
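A sketch of what such a pre-backup script might look like; the dataset and mount point names below are placeholders, not my actual setup:

#!/bin/sh
# Sketch of the ZFS pre-backup flow; dataset and mount point are placeholders.
DATASET="tank/data"
MOUNTPOINT="/data"

# Find the most recent snapshot of the dataset.
LATEST=$(zfs list -H -t snapshot -o name -S creation "${DATASET}" | head -n 1)
[ -n "${LATEST}" ] || { echo "No snapshot found for ${DATASET}" >&2; exit 1; }

# Mount the snapshot (read-only by nature) at the fixed path duplicacy backs up.
umount "${MOUNTPOINT}" 2>/dev/null || true
mount -t zfs "${LATEST}" "${MOUNTPOINT}"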
~I had planned to test this before sending you the PR, but I'm not quite sure how you build this. :) I'm familiar with basic docker builds, but haven't used multi-arch builds or s6-overlay. It looked like all the s6 stuff is set up in Docker build hooks (which I'm also unfamiliar with), and I couldn't get them to work locally. But if you have any advice (or can point me to some build resources) I'd love to give it a whirl first too to make sure it's working well. :)~
Figured it out. I've tested and made sure this is working.
Here's the PR. Let me know if you have any questions or if you don't want to merge it. I'm fine to just use my fork, but would prefer to use your code so I get any updates you make more easily. It's a pretty simple change. https://github.com/azinchen/duplicacy/pull/9