Closed: marshalleq closed this issue 3 years ago.
If even one destination fails, the src will not be cleaned up ... check the log.
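To check the log concretely, a minimal sketch, assuming znapzend runs as a systemd service named znapzend and logs to syslog (its default); the file path in the second command is only a placeholder for whatever --logto points at:

```sh
# Recent znapzend log output under systemd (unit name assumed).
journalctl -u znapzend -n 200

# Or, if znapzend was started with --logto=<file>, search that file instead:
grep -i error /var/log/znapzend.log   # path is an example, not a default
```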
With the newest znapzend (after PR #506 was merged) this issue should mostly go away: once you have had one successful sync to all destinations, znapzend records the latest known common snapshot(s), which it will not delete (so that later incremental syncs remain possible), but it can then "safely" remove the other regular snapshots from the source even if one of the destinations is offline, full, etc.
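To check whether such a common snapshot exists here, a sketch using the dataset names from the report below (note the day-first tsformat means snapshot names do not sort chronologically, so this relies on zfs's creation-time ordering instead):

```sh
# Snapshot names present on the destination, one per line.
zfs list -H -d 1 -t snapshot -o name data/zfsbackups/virtual_machines/game_server \
  | sed 's/.*@//' > /tmp/dst-snaps.txt

# Source snapshots in creation order; the last one that also exists on the
# destination is the newest common snapshot znapzend can resume from.
zfs list -H -d 1 -t snapshot -o name -s creation INTEL1TB/virtual_machines/game_server \
  | sed 's/.*@//' \
  | grep -F -x -f /tmp/dst-snaps.txt \
  | tail -n 1
```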
You should certainly look into why it reports the error. For about the past year, the master-branch code has printed a summary of failed replications at the end of a send, just before it refuses to clean up the source because of those errors. Then fix whatever is wrong on the destination.
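One way to surface that summary is a one-off foreground run (these flags exist in current znapzend; --noaction only simulates, so drop it for a real send):

```sh
# Run this one plan once, in the foreground, with verbose output.
znapzend --debug --noaction --runonce=INTEL1TB/virtual_machines/game_server
```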
The next thing I wanted to suggest was using different backup schedules, so that if only one subtree is problematic (e.g. backups of rootfs and zoneroot boot environments per #503), the others can still sync and clean up properly; a sketch follows below. But it seems you already have that, and the problem is somehow shared...
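For reference, a split setup would look roughly like this; other_vm is a made-up example dataset, and because each plan is independent, one failing subtree no longer blocks cleanup of the rest:

```sh
# A separate, non-recursive plan per subtree (hypothetical dataset names).
znapzendzetup create --tsformat='%d-%m-%Y-%H:%M:%S' \
  SRC '2h=>10min,1d=>1h' INTEL1TB/virtual_machines/other_vm \
  DST:a '7d=>1d,30d=>1w,1y=>1m' data/zfsbackups/virtual_machines/other_vm
```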
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
I use the following command. The destination is permanently online (it is on the same host).
But I have hundreds of 10-minute snapshots on the src where there should only be 2 hours' worth. It currently shows snapshots from 29 Jan to 5 Feb, at 10-minute intervals across all of those days, relentlessly and counting. I've stopped the daemon and even restarted the server, deleted the old snapshots, and edited the command as per below, and still no luck. I assume the command below should work?
```sh
znapzendzetup edit --recursive --mbuffersize=1G --tsformat='%d-%m-%Y-%H:%M:%S' \
  SRC '2h=>10min,1d=>1h' INTEL1TB/virtual_machines/game_server \
  DST:a '7d=>1d,30d=>1w,1y=>1m' data/zfsbackups/virtual_machines/game_server
```
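One way to confirm the edit actually took is to dump the stored configuration back out:

```sh
# Show the plan znapzend has on record for this dataset.
znapzendzetup list INTEL1TB/virtual_machines/game_server
```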
A couple of lines from zfs list:

```
INTEL1TB/virtual_machines/game_server@05-02-2020-10:50:00  86.3M      -  73.6G  -
INTEL1TB/virtual_machines/game_server@05-02-2020-11:00:00   187M      -  73.6G  -
INTEL1TB/virtual_machines/game_server@05-02-2020-11:10:00   102M      -  73.6G  -
INTEL1TB/virtual_machines/game_server@05-02-2020-11:20:00  29.9M      -  73.6G  -
INTEL1TB/virtual_machines/game_server@05-02-2020-11:30:00  85.1M      -  73.6G  -
INTEL1TB/virtual_machines/game_server@05-02-2020-11:40:00  19.9M      -  73.6G  -
INTEL1TB/virtual_machines/game_server@05-02-2020-11:50:00  34.3M      -  73.5G  -
```
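A quick way to gauge how big the backlog is:

```sh
# Count the snapshots piled up on the source dataset.
zfs list -H -d 1 -t snapshot -o name INTEL1TB/virtual_machines/game_server | wc -l
```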
All my other backup schedules appear to suffer from the same problem. This means I keep running out of disk space.
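Until the retention cleanup works again, a manual sketch like this can reclaim space; it keeps the newest 12 snapshots (two hours of 10-minute intervals, matching the 2h=>10min plan) and assumes the newest snapshot common with the destination is among those kept. It is destructive and uses GNU head/xargs, so run it without the final xargs stage first and review the list:

```sh
# Destroy all but the 12 newest snapshots of the dataset (review before running!).
zfs list -H -d 1 -t snapshot -o name -s creation INTEL1TB/virtual_machines/game_server \
  | head -n -12 \
  | xargs -r -n 1 zfs destroy
```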