Closed michaelfranzl closed 11 months ago
The full failing example script is:
```shell
zfs create tank1/mydata
zfs create tank1/backup_internal
zfs create tank1/backup_external
zfs set autobackup:site1=true tank1/mydata
zfs set autobackup:site2=true tank1/backup_internal
zfs_autobackup site1 tank1/backup_internal --clear-mountpoint --set-properties readonly=on --exclude-received
zfs_autobackup site2 tank1/backup_external --clear-mountpoint --set-properties readonly=on --exclude-received
touch /mnt/tank1/mydata/test20230512T0834.txt
zfs_autobackup site1 tank1/backup_internal --clear-mountpoint --set-properties readonly=on --exclude-received
# ! [Target] STDERR > cannot receive incremental stream: destination tank1/backup_internal/tank1/mydata has been modified
```
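To double-check whether ZFS itself considers the destination modified, the `written` property and `zfs diff` can help. This is only a sketch; the snapshot name below is a made-up placeholder, so substitute a real one from `zfs list -t snapshot`:

```shell
# Bytes written to the destination since its newest snapshot;
# a nonzero value means ZFS considers the dataset modified.
zfs get -H -o value written tank1/backup_internal/tank1/mydata

# File-level changes on the destination since a given snapshot
# (snapshot name is a placeholder, not taken from the script above).
zfs diff tank1/backup_internal/tank1/mydata@site1-SNAPDATE
```

If `written` is 0 and the diff is empty, the "has been modified" error would point at something more subtle than a plain file change on the target.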
(It is inconsequential that all of this is tested on just one pool, `tank1`.)
I'm sure that this can be coded into a failing unit test.
Thanks,
Will try this out asap.
Also: Can you try with 3.2 beta1?
What you said about the rollback function is correct: it rolls back only to the latest snapshot, not to older snapshots, because rolling back further would destroy all snapshots after that point. (too dangerous)
Still you could try it, in case something else has changed the data.
Normally, two snapshots with no changes in between SHOULD point to the same data. So in that case the site1/site2 snapshots in tank1/backup_external should point to the same data.
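One way to verify that claim is to diff the two snapshots directly. A sketch; the dataset path and snapshot names below are placeholders for the actual received dataset and its `site1-<date>`/`site2-<date>` snapshots:

```shell
# Empty output means the two snapshots reference identical data.
# Placeholder dataset path and snapshot names; substitute the real ones
# from 'zfs list -t snapshot -r tank1/backup_external'.
zfs diff tank1/backup_external/DATASET@site1-SNAPDATE \
         tank1/backup_external/DATASET@site2-SNAPDATE
```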
Note that transferring encrypted data in raw mode might also cause this problem, see #219
Feel free to reopen it if you have more info.
For anyone with this problem: Look at https://github.com/psy0rz/zfs_autobackup/wiki/Mounting
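In case that link moves, the usual idea (my paraphrase, a sketch, not a substitute for the wiki page) is to keep target datasets unmounted and read-only, so that nothing — not even atime updates from browsing the backup — dirties them between runs:

```shell
# Keep the backup target unmounted by default and read-only when mounted,
# so casual access cannot modify it between incremental receives.
# zfs-autobackup's --clear-mountpoint / --set-properties readonly=on flags
# aim at the same effect for newly received datasets.
zfs set canmount=noauto tank1/backup_internal
zfs set readonly=on tank1/backup_internal
```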
I'm following this documented use case (emphasis added). I can reliably reproduce an error with it, which makes me think that either this is a bug or I misunderstand the usage of the tool.
This is a very advantageous workflow, because the secondary backup will not impact the read performance of the productive pool.
I'm only deviating from this workflow in that the second zfs-autobackup run is not remote but local (transferring to a local pool called `backup_external`, which is on an external drive that can be disconnected later for off-site storage), but I believe it would make no difference if it were remote. I have the following dataset labels:
My goal is to back up `mypool/mydata` to `backup_internal`, and then immediately after, to back up `backup_internal` to `backup_external`. So I run the following script:
It succeeds the first time.
But if there is then a regular change to a file in `mypool/mydata` and I run this script again, the first command (the backup of `site1` to `backup_internal`) fails with the error shown in the script above. I can rule out that this dataset was modified, since I'm running this in a controlled testing environment.
The `--rollback` option was sometimes suggested in other issues here, but it does not help in this case because it would roll back to the latest snapshot, which now has a different backup name (`@site2-<date>` instead of `@site1-<date>`). This means that during the `site1` run, it tries to roll back to the `site2` snapshot. This looks wrong to me. (implementation)

But since I'm not using the `--rollback` argument, I think that this is a separate issue. Am I doing something wrong, or is this a bug? How can I achieve this workflow?
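To illustrate the interleaving described above (a hypothetical listing; the snapshot names are placeholders, real names carry actual timestamps): after both runs, the newest snapshot on the intermediate target belongs to site2, so a plain `zfs rollback` can only reach that one, and reaching the older site1 snapshot would require `-r`, which destroys the site2 snapshot:

```shell
# Hypothetical listing: the newest snapshot on the intermediate target
# belongs to site2, so a plain 'zfs rollback' can only reach that one.
zfs list -t snapshot -o name -s creation tank1/backup_internal/tank1/mydata

# Rolling back past the site2 snapshot would require -r, which destroys it
# (shown commented out because it is destructive):
# zfs rollback -r tank1/backup_internal/tank1/mydata@site1-SNAPDATE
```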
Thanks for your support.