Finished #3 for now, updating the task list in the OP.
After the initial replication completes, the dataset exists on the backup server, and I can log in via SMB, but it is not automatically mounted, so I can't browse the contents (even read-only):
pi@nas02:/ssdpool/backup/jupiter $ zfs get mounted ssdpool/backup/jupiter
NAME PROPERTY VALUE SOURCE
ssdpool/backup/jupiter mounted no -
pi@nas02:/ssdpool/backup/jupiter $ zfs get mountpoint ssdpool/backup/jupiter
NAME PROPERTY VALUE SOURCE
ssdpool/backup/jupiter mountpoint /ssdpool/backup/jupiter default
pi@nas02:/ssdpool/backup/jupiter $ zfs get canmount ssdpool/backup/jupiter
NAME PROPERTY VALUE SOURCE
ssdpool/backup/jupiter canmount on default
To mount it, then:
sudo zfs mount ssdpool/backup/jupiter
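If the recursive replication ends up creating more child datasets, mounting them one at a time gets tedious. A quick way to see what's mounted and mount everything at once (just a sketch, not something from the run above):

# Show mount state for everything under the backup dataset
zfs list -o name,mounted,canmount,mountpoint -r ssdpool/backup

# Mount every ZFS filesystem that isn't mounted yet
sudo zfs mount -a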
For syncoid / sanoid snapshot pruning on the Pi (the target), I would like it to basically stay in sync with the source (HL15). I was looking around, and all the older posts mentioned it's not something syncoid can manage, and people had all sorts of solutions, from configuring sanoid on the target to prune snapshots, to building their own ZFS snapshot management scripts they'd run via cron.
The danger with some of those techniques is you could easily get out of sync with the source.
Luckily, it looks like https://github.com/jimsalterjrs/sanoid/pull/523 was merged last year, adding the --delete-target-snapshots option. Hopefully that's present in the Debian 12 apt package version of syncoid :D
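A quick way to check whether the packaged syncoid even knows about that flag (a small sketch; since /usr/sbin/syncoid is just a Perl script, grepping it is good enough):

# Installed package version (syncoid ships in the sanoid package on Debian)
dpkg -s sanoid | grep '^Version'

# Count how many times the flag appears in the installed script (0 = not supported)
grep -c 'delete-target-snapshots' /usr/sbin/syncoid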
I was checking on sanoid on the HL15 and found:
jgeerling@nas01:~$ sudo sanoid --monitor-snapshots
FATAL: cannot load /etc/sanoid/sanoid.conf - please create a valid local config file before running sanoid! at /usr/sbin/sanoid line 818.
jgeerling@nas01:~$ sudo sanoid --monitor-snapshots
CRIT: hddpool/jupiter's newest daily snapshot is 2d 12h 26m 48s old (should be < 1d 8h 0m 0s), CRIT: hddpool/jupiter's newest hourly snapshot is 2d 12h 26m 48s old (should be < 6h 0m 0s)
The template had some formatting issues due to Ansible's YAML formatting, it seems: the INI section headers ended up on the same line as some comments. Oops. I fixed that, and now it looks like snapshots may work correctly again. I'm going to wait an hour and see if all the snapshots are in sync between the HL15 and the Pi.
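For illustration, the breakage was roughly like this (a hypothetical reconstruction of the template output; the retention values below are just placeholders):

# Broken: the section header got folded onto the end of a comment line,
# so the parser only sees a comment and the section never starts:
# retention template for the main datasets [template_production]

# Fixed: comment and section header each on their own line:
# retention template for the main datasets
[template_production]
        hourly = 36
        daily = 30
        monthly = 3
        autosnap = yes
        autoprune = yes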
Huh...
pi@nas02:/ssdpool $ /usr/sbin/syncoid --sshkey=/home/pi/.ssh/id_rsa_zfs --recursive --no-privilege-elevation --delete-target-snapshots pi@nas01.mmoffice.net:hddpool/jupiter ssdpool/backup/jupiter
Unknown option: delete-target-snapshots
Looks like Debian has version 2.1.0:
pi@nas02:/ssdpool $ syncoid -v
/usr/sbin/syncoid version 2.1.0
(Getopt::Long::GetOptions version 2.52; Perl version 5.36.0)
And even 2.2.0 is slightly older than the version in Git, meaning https://github.com/jimsalterjrs/sanoid/pull/523 isn't included in any stable release yet. D'oh! I'll have to configure sanoid on the target I guess?
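If it comes to that, a prune-only sanoid config on the Pi would be a pretty small sketch like the one below. The retention numbers are placeholders (not from this issue); the important bits are autosnap = no so the target never takes its own snapshots, and autoprune = yes so old replicated snapshots get cleaned up. sanoid still has to run periodically on the Pi (cron or a systemd timer) for the pruning to actually happen.

# /etc/sanoid/sanoid.conf on the target (sketch; values are assumptions)
[ssdpool/backup/jupiter]
        use_template = backup
        recursive = yes

[template_backup]
        autosnap = no
        autoprune = yes
        hourly = 36
        daily = 30
        monthly = 3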
Everything seems to be working well now, I'll have to monitor things for the next few days to see if hourlies are cleaned up after a day...
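A quick way to spot-check that (just a sketch): list the hourly snapshots on the target with their creation times, oldest first, and make sure anything older than the retention window eventually disappears.

zfs list -t snapshot -o name,creation -s creation ssdpool/backup/jupiter | grep hourly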
pi@nas02:/ssdpool $ crontab -l
#Ansible: Nightly syncoid replication task.
13 7 * * * /usr/sbin/syncoid --sshkey=/home/pi/.ssh/id_rsa_zfs --recursive --no-privilege-elevation pi@nas01.mmoffice.net:hddpool/jupiter ssdpool/backup/jupiter
pi@nas02:/ssdpool $ zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
ssdpool/backup/jupiter@autosnap_2024-04-27_03:15:00_monthly 0B - 10.9T -
ssdpool/backup/jupiter@autosnap_2024-04-27_03:15:00_daily 0B - 10.9T -
ssdpool/backup/jupiter@autosnap_2024-04-27_03:15:00_hourly 0B - 10.9T -
ssdpool/backup/jupiter@syncoid_nas02_2024-04-29:09:09:59-GMT-05:00 0B - 10.9T -
pi@nas02:/ssdpool $ /usr/sbin/syncoid --sshkey=/home/pi/.ssh/id_rsa_zfs --recursive --no-privilege-elevation pi@nas01.mmoffice.net:hddpool/jupiter ssdpool/backup/jupiter
Sending incremental hddpool/jupiter@syncoid_nas02_2024-04-29:09:09:59-GMT-05:00 ... syncoid_nas02_2024-04-29:11:03:27-GMT-05:00 (~ 650.7 MB):
651MiB 0:00:05 [ 126MiB/s] [========================================================================>] 100%
pi@nas02:/ssdpool $ zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
ssdpool/backup/jupiter@autosnap_2024-04-27_03:15:00_monthly 0B - 10.9T -
ssdpool/backup/jupiter@autosnap_2024-04-27_03:15:00_daily 0B - 10.9T -
ssdpool/backup/jupiter@autosnap_2024-04-27_03:15:00_hourly 0B - 10.9T -
ssdpool/backup/jupiter@autosnap_2024-04-29_15:45:01_daily 0B - 10.9T -
ssdpool/backup/jupiter@autosnap_2024-04-29_15:45:01_hourly 0B - 10.9T -
ssdpool/backup/jupiter@autosnap_2024-04-29_16:00:01_hourly 0B - 10.9T -
ssdpool/backup/jupiter@syncoid_nas02_2024-04-29:11:03:27-GMT-05:00 0B - 10.9T -
This seems to be working well enough for now. Broke out monitoring to #12.
I would like to configure my Raspberry Pi 5 NAS (running on a 2.5 Gbps network connection) as a replication target for ZFS, so I have two local copies of all my important data.
It currently has 4x 8 TB SSDs (32TB raw capacity, 24TB in RAIDZ1), so I hope that's enough to replicate the dataset. Right now:
(And I'm considering setting up a script to transcode all my old ProRes RAW footage down to H.265, which would probably save another 10TB or so! It would take months to transcode, lol.)
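A quick sanity check that the ~24TB usable will actually hold the dataset (a sketch using the hostnames from this setup):

# On the HL15: how much space the source dataset references
zfs list -o name,used,refer hddpool/jupiter

# On the Pi: how much space is left in the target pool
zfs list -o name,avail ssdpool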
Task list
- pi user permissions into Ansible playbook (see TODO in replication.yml)
- Monitor snapshot replication with sudo sanoid --monitor-snapshots (HL15) / zfs list -t snapshot (Pi) - I broke this out to #12