A little Rust tool to do S3 backups of my sanoid ZFS snapshots (although any snapshotting system should work).

S3 snapshots are cheap, especially if, as in my case, the machine being backed up is itself a backup of something else, so you can take advantage of S3 Glacier Deep Archive. (NB: this increases recovery time, but decreases the price very considerably.) I also quite like binary dumps in dumb blob storage. Backing up with something like rsync.net is more elegant; I would argue this is less complex.

On AWS S3 recovery times: a Deep Archive restore takes up to 12 hours. I know from experience that if I lose the backup of my backups, waiting 12 hours to recover my data is the least of my problems. More information is on the S3 pricing page.
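For reference, restoring from Deep Archive is a two-step process: you first request a restore, then download the object once it completes. A sketch with the AWS CLI (bucket and key names here are placeholders, not anything this tool creates):

```shell
# Request a temporary restore of an archived object, kept available for 7 days.
# The "Standard" tier is the one with the ~12-hour completion time for Deep Archive.
aws s3api restore-object \
  --bucket my-backup-bucket \
  --key tank-data-full.zfs \
  --restore-request '{"Days":7,"GlacierJobParameters":{"Tier":"Standard"}}'

# Once the restore completes, download as usual:
aws s3 cp s3://my-backup-bucket/tank-data-full.zfs ./tank-data-full.zfs
```

This requires AWS credentials and an existing bucket, so treat it as an illustration of the workflow rather than a copy-paste recipe.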
This tool creates two categories of backup, "full" and "incremental", based on two ZFS snapshots. The tool identifies these snapshots by matching their names against two regular expressions, one per category, which can be changed in the application configuration file.

This program will not create these ZFS snapshots for you. You must design or deploy automation that creates these snapshots and replaces them over time.
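To illustrate the idea (the pattern below is an example, not necessarily the tool's actual default), selecting snapshots by name with a regex works like this:

```shell
# Given a list of snapshot names, a pattern such as 'monthly$' selects
# only the snapshot used for full backups in this hypothetical setup.
printf '%s\n' 'tank/data@monthly' 'tank/data@daily' | grep -E 'monthly$'
```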
For reference, ZFS snapshots can be managed with the following `zfs` subcommands:

```shell
# Note: Replace the "*-name" strings with the proper values.

# Create snapshots:
zfs snapshot pool-name/dataset-name@monthly
zfs snapshot pool-name/dataset-name@daily

# List snapshots:
zfs list -t snapshot

# Delete a snapshot:
zfs destroy pool-name/dataset-name@snapshot-name
```
After enabling and testing your ZFS snapshot automation, you can set up and configure the application as follows:

1. Run `zfs_to_glacier generateconfig` to get a sample config.yaml.
2. Run `zfs_to_glacier generatecloudformation` to create an AWS CloudFormation template. This will be used to create the AWS resources required by the tool.
3. Set `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`.
4. Set `AWS_REGION` to whichever region you uploaded the CloudFormation template to (the region also appears in the endpoint URL). For example: `export AWS_REGION="eu-west-3"`.
5. Run `zfs_to_glacier sync`. You can run `zfs_to_glacier sync -v -n` to see what it would snapshot in incremental mode and in full mode.

`zfs_to_glacier` will keep encrypted data encrypted, read warnings below!
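Putting the steps together, a first session might look like this (the credential values are placeholders, and whether the generate subcommands write to stdout or to a file is not shown here, so check the command output):

```shell
# Hypothetical first run of zfs_to_glacier.
zfs_to_glacier generateconfig          # get the sample config.yaml
zfs_to_glacier generatecloudformation  # get the CloudFormation template

export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="..."
export AWS_REGION="eu-west-3"

zfs_to_glacier sync -v -n   # dry run: show what would be snapshotted
zfs_to_glacier sync         # real run
```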
When sending files, the tool confirms both the MD5 checksum of each individual part of a file sent and that the `zfs` command exits with status 0. I don't think it's possible to send corrupted data this way. That said, if the `zfs` command exits with status code 0 but does not produce the required output, this app would of course happily upload a corrupted snapshot.

I would recommend taking great care when dealing with something as critical as backups. I rely on this tool personally, but it comes with zero guarantees.
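If you want an extra sanity check, you can compute the MD5 of a stream yourself and compare it against what was uploaded (for simple, non-multipart S3 uploads the object's ETag is its MD5). A sketch, with dummy bytes standing in for a real `zfs send` stream:

```shell
# Compute the MD5 of a byte stream.
# In practice you would pipe `zfs send pool-name/dataset-name@snapshot-name`
# into md5sum instead of printf.
printf 'example stream bytes' | md5sum | cut -d ' ' -f 1
```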
For example, a wrapper script that reports the result of each run to healthchecks.io:

```shell
export CHECK_URL="https://hc-ping.com/<URL_FROM_HEALTHCHECKS.IO>"
export AWS_SECRET_ACCESS_KEY="<AWS_SECRET_KEY>"
export AWS_ACCESS_KEY_ID="<AWS_ACCESS_KEY>"
export AWS_REGION="<AWS_REGION, for example eu-west-3>"

url=$CHECK_URL
curl -fsS --retry 3 -X GET "$url/start"

nice zfs_to_glacier sync -v &> backup.log
if [ $? -ne 0 ]; then
    url=$url/fail
    curl -fsS --retry 3 -X POST --data-raw "$(tail -n 20 backup.log)" "$url"
    exit 1
fi
curl -fsS --retry 3 -X POST --data-raw "$(tail -n 20 backup.log)" "$url"
```
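You might schedule a script like that from cron; a sketch, where the path and schedule are assumptions to adapt to your setup:

```shell
# crontab -e: run the backup wrapper at 03:00 every Sunday
0 3 * * 0 /usr/local/bin/zfs_backup.sh
```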
If you want to build from source rather than downloading a release:

```shell
# Install rust from your package manager, or use: curl https://sh.rustup.rs -sSf | sh
cargo build --release
# The release executable is in target/release
```