JBaptisme33 opened 1 year ago
Anyone have an idea? :)
I have the same problem. I believe the backup implementation is buggy, as some of my full backups seem too small to be complete. I also suspect this is because there is no checksum verification of the backups sent over the network: if the connection is interrupted, the driver assumes the backup completed successfully.
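One rough way to check for a truncated upload is to compare the estimated size of the snapshot send stream against the object actually stored in MinIO. A sketch, assuming a hypothetical pool name, MinIO alias, and bucket:

```sh
# Estimate the size of the full send stream for a snapshot (dry run).
# Pool and snapshot names are hypothetical; adjust to your setup.
zfs send -nvP zfspv-pool/pvc-1ab8af82-181d-40eb-975c-5150b0be5586@snap1

# List the size of the corresponding backup object in MinIO via the mc client.
mc ls --recursive myminio/velero-backups/ | grep pvc-1ab8af82

# If the stored object is much smaller than the estimated stream size,
# the upload was likely interrupted.
```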
What steps did you take and what happened:
I have two servers running k3s with ZFS on a local partition. I've configured cross-replication between these two servers with Velero + MinIO. Each server runs a MinIO instance (the minio namespace/pod is excluded from the Velero backup), and each day the Velero instance on each server sends a backup to the other (manifest backup + volume snapshot backup).
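For context, the per-server setup looks roughly like this (a sketch: the flags follow the Velero CLI, but the URL, bucket, and schedule names are hypothetical):

```sh
# Backup storage location pointing at the *other* server's MinIO instance.
velero backup-location create remote-minio \
  --provider aws \
  --bucket velero-backups \
  --config region=minio,s3ForcePathStyle=true,s3Url=http://server2.example:9000

# Daily backup of everything except the minio namespace, with volume snapshots.
velero schedule create daily-cross-backup \
  --schedule "0 2 * * *" \
  --ttl 240h \
  --exclude-namespaces minio \
  --storage-location remote-minio \
  --snapshot-volumes
```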
In each backup location, for a given PVC, I have ZFS backups like the following (which seem to represent an incremental ZFS volume):
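To illustrate why this matters for restore (the listing above comes from the plugin; the dataset names below are hypothetical): an incremental ZFS stream contains only the delta against its base snapshot, so it can only be received onto a dataset that already holds that base.

```sh
# Full stream: self-contained, restorable on its own.
zfs send zfspv-pool/pvc-xxx@snap1 > full.zfs

# Incremental stream: only the delta between @snap1 and @snap2; it can be
# received only on a dataset that already has @snap1.
zfs send -i zfspv-pool/pvc-xxx@snap1 zfspv-pool/pvc-xxx@snap2 > incr.zfs
```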
What did you expect to happen:
I need to restore a PVC that was destroyed by mistake on server 1 (the ZFS volume on server 1 was removed because the reclaim policy was set to Delete). So I tried to restore the PVC from the remote MinIO backup via the Velero tool. The restore example is as follows:
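For reference, a Velero restore of a single namespace looks roughly like this (the backup and namespace names are hypothetical):

```sh
velero restore create restore-pvc \
  --from-backup daily-cross-backup-20240101020000 \
  --include-namespaces my-app
```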
It seems the restore is impossible because of the incremental ZFS snapshots?
First error when trying to restore the PVC from the remote MinIO instance:
So I created a zvol called pvc-1ab8af82-181d-40eb-975c-5150b0be5586. After that, I got an error regarding the incremental snapshot:
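This matches the incremental-chain behaviour sketched above: the freshly created zvol has no snapshots, so there is no base for the incremental stream to apply to. A hypothetical check (pool name assumed):

```sh
# The manually created zvol has no snapshots, so an incremental stream
# has no base snapshot to be received on top of.
zfs list -t snapshot zfspv-pool/pvc-1ab8af82-181d-40eb-975c-5150b0be5586
# -> no datasets available
```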
For info, incrBackupCount was configured to "10" and the Velero backup TTL was configured to 240h.
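For reference, incrBackupCount is set in the VolumeSnapshotLocation of the OpenEBS velero-plugin; a sketch along the lines of the plugin docs (bucket, URL, and resource names are hypothetical). One thing worth noting: with daily backups, incrBackupCount: "10", and a 240h (10-day) TTL, the full backup that anchors a chain could expire and be garbage-collected while incrementals that depend on it are still retained, which would leave those incrementals unrestorable.

```sh
kubectl apply -f - <<'EOF'
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: zfspv-remote
  namespace: velero
spec:
  provider: openebs.io/zfspv-blockstore
  config:
    bucket: velero-backups            # hypothetical bucket
    prefix: zfs
    namespace: openebs                # namespace where zfs-localpv runs
    incrBackupCount: "10"             # incrementals between two full backups
    provider: aws
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://server2.example:9000
EOF
```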
Is my setup broken by design? Is it possible to build a cross-replication setup with only zfs-localpv, MinIO, and Velero? Do I need to restore some other resources, such as the zfsvolumes.zfs.openebs.io manifests, before restoring my complete backup?
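On that last point, checking whether the ZFSVolume custom resources are present, and restoring them explicitly, might look like this (resource names follow zfs-localpv defaults; the backup name is hypothetical):

```sh
# List the ZFSVolume CRs; by default they live in the namespace where
# the zfs-localpv driver runs (often "openebs").
kubectl get zfsvolumes.zfs.openebs.io -n openebs

# Restore just those resources from the backup before the full restore.
velero restore create restore-zfsvolumes \
  --from-backup daily-cross-backup-20240101020000 \
  --include-resources zfsvolumes.zfs.openebs.io
```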
Thanks