mddeff opened 1 year ago
Correct. If you ask for no sync snap and your source has no snapshots, it's impossible to replicate. Literally impossible, since replication is based on snapshots.
I generally recommend not excluding empty parent datasets from your snapshot policy. The empty snapshots don't really cost you anything, and they keep you from having problems like this.
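To see that an empty parent's snapshots cost essentially nothing, a quick check like the following works (dataset names reuse the `tank2` example from this thread; the snapshot name is made up):

```shell
# Recursive snapshot covers the empty parent and all children
zfs snapshot -r tank2@sanoid_check
# The parent's snapshot should show USED of (near) 0B, since it holds no data
zfs list -t snapshot -o name,used -r tank2
```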
Yep, totally understood. It's not so much a space thing, rather, I'm creating the snapshots at the child dataset level because they have different uses (and subsequently have different retention policies).
For instance, `tank2/vms` and `tank2/iot` have different needs for retention, so the policies are different.
Would it be better to:
A) Set a general Sanoid snapshot policy for `tank2` that is the union of the policies for `vms` and `iot`, and then have the delta policies for each of those child sets? (This feels prone to error, having two different policies creating the set of snapshots for a single dataset, but I could be overthinking it.)
B) Point Syncoid at `tank2/vms` and `tank2/iot` separately?
C) Just do what I did: create blank datasets on the target and let Syncoid take it from there?
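For what it's worth, option B can be sketched as two independent syncoid invocations, one per child tree, so each carries its own snapshots and no empty parent ever needs to be replicated (the destination path here is an assumption based on the layout described later in this thread):

```shell
# Replicate each child dataset tree separately; each has its own
# sanoid snapshots, so --no-sync-snap works without an empty parent.
syncoid --recursive --no-sync-snap tank2/vms root@dest:dozer0/ze-fs01/dozer1/tank2/vms
syncoid --recursive --no-sync-snap tank2/iot root@dest:dozer0/ze-fs01/dozer1/tank2/iot
```

The trade-off is that the `tank2` parent on the destination still has to exist (or be created by hand) before the first run.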
When I had originally read the `--recursive` option, my brain auto-completed a `mkdir -p` type behavior. Is there any reason why that behavior would be unwanted? Effectively just creating blank datasets on the target to support the tree structure of target datasets with actual snapshots to be transferred. Maybe a new flag?
We have the same issue. We have one system, storage0, that all sorts of systems sync to. This then replicates to a second system, storage1. But since we want to keep all snapshot creation local to the machines sending, and all snapshot removal local to the machines receiving (so that a snapshot bug cannot propagate automatically), there are no automatic snapshots taken on storage0. Syncing storage0 to storage1 then runs into this issue, since we cannot simply sync the (unsnapshotted) root dataset.
`mkdir -p` / `zfs create -p` behavior would be much easier to deal with.
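For reference, `zfs create -p` already gives exactly this semantics for the dataset hierarchy itself, which is roughly what the requested syncoid behavior would do on the target (pool and dataset names below are placeholders, not from this thread):

```shell
# -p creates any missing ancestor datasets as plain, empty datasets,
# analogous to mkdir -p creating intermediate directories
zfs create -p tank/parent/child/grandchild
# All three levels now exist; the ancestors hold no data or snapshots
zfs list -r tank/parent
```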
As the title says, this is likely a PEBKAC/ID10T issue, but I haven't been able to figure it out, so I'm sending up a flare. Redirect me as appropriate.
Source:
(Inb4 CentOS 8 is dead and I'm running it on a storage array; it's on the to-do list. And I'm sure my drastically different ZFS versions aren't the best either.)
All of those datasets are created, populated, and managed by local (to src) syncoid using my autosyncoid script.
Dest:
On dest, I pre-created the `dozer0/ze-fs01/dozer1` dataset and then ran:

It successfully creates `dozer0/ze-fs01/dozer1/fast` and then syncs `dozer1/fast1/*` to `dozer0/ze-fs01/dozer1/fast/*`, recursively creating all necessary child datasets. Then, when it gets to `dozer1/tank2`, it barfs:

So I manually created all of the child datasets on dest and then re-ran the same command, and now it seems to be working (ish).
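A hypothetical reconstruction of that manual workaround, using the dataset names from this thread (the exact child list is an assumption):

```shell
# Pre-create the snapshotless parent on the destination so recursive
# syncoid runs have somewhere to receive the snapshotted children
zfs create -p dozer0/ze-fs01/dozer1/tank2
# Repeat for each child dataset that actually carries snapshots, e.g.:
# zfs create dozer0/ze-fs01/dozer1/tank2/vms
# zfs create dozer0/ze-fs01/dozer1/tank2/iot
```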
While `fast1` has snapshots (everything under there has the same retention policy, so I'm having sanoid just snap the whole dataset recursively), the `tank*` datasets do not, as they all have mixed usage (snapshots only occur in the child datasets).

It looks like when a dataset has no snapshots (combined with my use of `--no-sync-snap`), syncoid doesn't sync it, but then it also doesn't get created on the target system to support creation of child datasets that do have snapshots. Is this behavior expected, or have I found an edge case?
As always, thank you to Jim and the {san,sync,find}oid contributors that enable enterprise-grade storage/backup for the FOSS community!