gorantornqvist opened this issue 3 years ago
Any update on this? I think it's a pretty valid use case; the only workaround I can think of right now is to split the workloads by namespace and let two different backups take care of each.
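A rough sketch of that split, assuming the snapshot-capable and non-snapshot PVCs live in separate namespaces (all names below are placeholders):

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: team-a-csi                  # placeholder name
  namespace: velero
spec:
  includedNamespaces:
    - team-a                        # namespace whose PVCs support CSI snapshots
  snapshotVolumes: true             # let the snapshotter plugin handle PV data
---
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: team-b-restic               # placeholder name
  namespace: velero
spec:
  includedNamespaces:
    - team-b                        # namespace whose PVCs cannot be snapshotted
  snapshotVolumes: false            # skip volume snapshots for this backup
  defaultVolumesToRestic: true      # back up all pod volumes with restic instead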
To my knowledge, this is not supported in the current or upcoming release of Velero, but of course we will consider this need in our future development process. I think it's better for our PM to review this issue first; then we can decide on a plan and make a schedule for it. Our PM is still on vacation at the moment, so please stay tuned for updates.
A workaround I have gotten to work is to run your backups with defaultVolumesToRestic: true and then specifically opt the pods whose volumes use a CSI-snapshot-capable SC out of restic via annotations.
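Concretely, that opt-out is the backup.velero.io/backup-volumes-excludes annotation; a minimal sketch with placeholder names:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-csi-volume                  # placeholder name
  annotations:
    # exclude this volume from restic so the CSI snapshot path handles it
    backup.velero.io/backup-volumes-excludes: data
spec:
  containers:
    - name: app
      image: nginx                            # placeholder image
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: csi-capable-pvc            # placeholder PVC on a snapshot-capable SC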
Any update on this? It would be nice to snapshot Rook CephFS with CSI (the snapshots are stored inside the cluster by default, so they are not durable) and also back up with restic for disaster recovery or a planned migration.
Restoring from a snapshot is fast, whereas restoring from restic is slow. The use case would be to restore from CSI if something happens while the cluster still exists, and do a full restore from restic if something catastrophic happens.
The workaround from @felfa01 doesn't solve the problem, because with defaultVolumesToRestic: true every volume (including the istio sidecar) is backed up, which messes up the entire restore. It would be a pain to manually annotate all of them compared to annotating a few volumes with the opt-in approach.
Slightly off topic, but is there an existing solution, e.g. a controller, that automatically annotates pods for restic backup?
@ghilman27 There is still no progress on mixing several tools in a single backup or restore. As for what exists today, I think the opt-in approach can be used as a workaround. This is the document link: Using opt-in pod volume backup
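For reference, the opt-in approach is just the backup.velero.io/backup-volumes annotation on the pod template, listing the volumes restic should pick up; a sketch with placeholder names:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-app                            # placeholder name
spec:
  selector:
    matchLabels:
      app: legacy-app
  template:
    metadata:
      labels:
        app: legacy-app
      annotations:
        # opt-in: restic backs up only the volumes named here
        backup.velero.io/backup-volumes: data
    spec:
      containers:
        - name: app
          image: nginx                        # placeholder image
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: economy-pvc            # placeholder PVC on a non-snapshot SC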
As discussed in the issue above, the opt-in way does not seem to resolve use cases where you want to mix CSI for local DR and restic for remote DR.
My use case: I want to leverage CSI snapshots to have a local copy, but I also want an exported copy of the volume via restic in case the cloud provider has a major outage (the restic repository is on another cloud provider).
I was thinking of a flag in the schedule spec to force the type of snapshot you want to use (and have two different schedules, one for local and one for remote DR):
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: csi-snapshots
  namespace: velero
  labels:
    app: velero
spec:
  schedule: "17 2 * * *"
  template:
    metadata:
      labels:
        app: velero
    # back up nothing, just use CSI snapshots
    includeClusterResources: false
    includedResources: []
    excludedResources: ['*']
    snapshotVolumes: true
    snapshotPlugin: restic    # proposed new field to force which snapshot mechanism is used
    ttl: 26h0m0s
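The companion schedule for the remote restic copy could already be expressed with today's fields; a rough sketch (name, cron, and ttl are placeholders):

apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: restic-remote-dr              # placeholder name
  namespace: velero
  labels:
    app: velero
spec:
  schedule: "17 4 * * *"              # placeholder cron, offset from the CSI schedule
  template:
    snapshotVolumes: false            # skip CSI/native snapshots in this backup
    defaultVolumesToRestic: true      # push file-level copies to the restic repository
    ttl: 168h0m0s                     # placeholder retention for the remote copy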
I think it is absolutely required that Velero backups can do both: snapshots of snapshot-capable volumes and file system backups of the volumes that snapshots can't be made of.
Wouldn't it be as simple as having an option that allows for additional snapshots here? https://github.com/vmware-tanzu/velero/blob/main/pkg/backup/item_backupper.go#L470
What steps did you take and what happened:
I am trying to back up a namespace containing a pod with a PVC using a CSI-snapshot-capable storage class, and another pod using a storage class that doesn't provide snapshots.
My VolumeSnapshotClass uses driver: csi.trident.netapp.io and has the label velero.io/csi-volumesnapshot-class: "true".
All storage classes are for provisioner csi.trident.netapp.io; the CSI-snapshot-capable SC has parameters.snapshots: "true" set, and the SCs that don't allow snapshots (Trident economy driver) have parameters.snapshots: "false".
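For context, the storage classes look roughly like this (names are placeholders, and any Trident parameters other than snapshots are omitted because they vary per environment):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: trident-standard               # placeholder name
provisioner: csi.trident.netapp.io
parameters:
  snapshots: "true"                    # volumes from this class can be snapshotted
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: trident-economy                # placeholder name
provisioner: csi.trident.netapp.io
parameters:
  snapshots: "false"                   # economy driver, no snapshot support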
Can you please clarify the correct backup procedure for this scenario? I couldn't find anything in the docs about this.
What did you expect to happen:
The PVC using a CSI-snapshot-capable SC should be backed up using snapshots, and the PVC using an SC that doesn't allow snapshots should be backed up with restic.
Environment:
- Velero version (use velero version): v1.6.3-konveyor
- Velero features (use velero client config get features): NOT SET
- Kubernetes version (use kubectl version): v1.21.1+a620f50
- OS (e.g. from /etc/os-release): CoreOS