DragonQ opened this issue 1 year ago
For anyone with the same issue, I was able to work around this manually by finding the oldest snapshot (the first one shown when running `zfs list -t snapshot source/backups`), then running the following command:
sudo zfs send -p -w source/backups@pyznap_oldest_snapshot | pv | ssh backup-nas sudo zfs recv backups/backups
Pyznap then works as normal after removing the `dest_auto_create` option. It would be nice if this manual step wasn't necessary for encrypted datasets though.
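
In case it helps, the whole manual step boils down to something like this rough sketch (dataset names and the `backup-nas` host are just the ones from my setup, and sorting by creation time is one way of grabbing the oldest snapshot; adjust as needed):

```sh
# Grab the oldest snapshot of the source dataset (sorted by creation time).
OLDEST=$(zfs list -H -t snapshot -o name -s creation source/backups | head -n 1)

# Raw-send it so the data stays encrypted in transit and on the target;
# the receive creates backups/backups on the destination pool.
sudo zfs send -p -w "$OLDEST" | pv | ssh backup-nas sudo zfs recv backups/backups
```

After that initial full send, pyznap can do its usual incremental sends, since both sides now share a snapshot.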
Sorry, I totally missed this question. I never used encrypted datasets, so I'm actually not too familiar with how they work with zfs send/recv. Would it also work if you just create the dataset and then use pyznap without the `dest_auto_create` option? Or does there need to be a snapshot on the dataset before it works?
If I manually create a destination dataset and then run `sudo pyznap send -s source/test1 -d ssh:22:root@backup-nas:backups/test1 -w`, I get the same error as above.
Hm, no idea what exactly is causing this. I've never worked with encrypted datasets, so I have no experience with how exactly they behave with zfs send/recv.
I had the same problem, so I submitted a PR for this specific case: #101.
With a raw send you should not create the destination beforehand, since the receive will always complain that the destination doesn't match the source.
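
To illustrate, a minimal sketch of the manual equivalent (dataset and host names borrowed from the commands earlier in this thread, with `some_snapshot` as a placeholder):

```sh
# Do NOT pre-create backups/test1 on the target: the raw (-w) receive creates
# the dataset itself and preserves the encryption properties, so no key ever
# needs to be loaded on backup-nas. "some_snapshot" is just a placeholder.
sudo zfs send -w source/test1@some_snapshot | ssh backup-nas sudo zfs recv backups/test1
```

That's also why the receive complains when the destination was created by hand first.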
What's the correct way to send an encrypted dataset to a remote pool with no existing dataset? I'm using these options:
When I run `pyznap send`, I get this error:

Is there a "proper" way to do this without manually creating a dataset on the destination pool?