sfatula closed this issue 1 year ago
Yes, this is the way it's supposed to be: https://github.com/psy0rz/zfs_autobackup/blob/ee1d17b6ff942d6a2d151c3bdaa664643cd80413/zfs_autobackup/ZfsAutobackup.py#L233
Between the two mbuffers there is "room" for the custom pipes and compression step.
We could optimize the extra mbuffer away in cases like this, but that means more code, more tests, and more possible bugs.
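To make the "room between the two mbuffers" concrete, here is a rough sketch of how the send pipeline is composed; the variable names and the middle pipe steps are illustrative assumptions, not the actual zfs_autobackup internals:

```shell
# Conceptual pipeline layout (illustrative, not the real implementation):
#
#   zfs send ...
#     | mbuffer (source-side buffer, sized by --buffer)
#     | <custom send pipes>      # optional user-supplied pipe commands
#     | <compression>            # optional compression step
#     | mbuffer (transfer buffer, carries the --rate limit)
#
# When no custom pipes or compression are configured, the middle steps
# collapse and the two mbuffers end up directly adjacent, which is the
# "double mbuffer" seen in ps.
SEND_CMD='zfs send -L -e -c --raw -p tank/Data@backup-20230802183017'
BUFFER_CMD='mbuffer -q -s128k -m100M'
RATE_CMD='mbuffer -q -s128k -m16M -R400k'
PIPELINE="$SEND_CMD | $BUFFER_CMD | $RATE_CMD"
echo "$PIPELINE"
```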
autobackup-venv/bin/python -m zfs_autobackup.ZfsAutobackup -v -d --no-progress --clear-refreservation --exclude-received --keep-source=10,1d1w,1w1m,1m3m --keep-target=10,1d1w,1w1m,1m3m --allow-empty --clear-mountpoint --rollback --destroy-missing=30d --zfs-compressed --ssh-config /mnt/tank/Scripts/Keys/config --ssh-target backup --rate=400k --buffer=100M backup Backup/Replicate
This yields the following in ps:
/bin/sh -c zfs send -L -e -c --raw -p tank/Data@backup-20230802183017 | mbuffer -q -s128k -m100M | mbuffer -q -s128k -m16M -R400k
Note the double mbuffer; I just want to make sure that is correct, as it seems odd and superfluous. Also, the second mbuffer, which carries the rate limit, ignores the increased buffer size (-m16M instead of -m100M). Wouldn't a single mbuffer be preferable?
The receiving side looks OK: ssh -F /mnt/tank/Scripts/Keys/config backup mbuffer -q -s128k -m100M | zfs recv -u -x refreservation -o canmount=noauto -v -s Backup/Replicate/tank/Data
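For comparison, the single-mbuffer form the question suggests would look roughly like the string below. This is purely a hypothetical illustration (built as a string so it can be shown without running zfs), not a command zfs_autobackup actually emits; one mbuffer can carry both a buffer size (-m) and a rate limit (-R):

```shell
# Hypothetical merged form: one mbuffer with both the enlarged buffer
# and the rate limit. Illustration only.
MERGED='zfs send -L -e -c --raw -p tank/Data@backup-20230802183017 | mbuffer -q -s128k -m100M -R400k'
echo "$MERGED"
```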