@Lirt what about this PR? I've also managed to make the plugin work with both Cinder and Manila volumes.
Yes, I noticed it. I am leaving this as the last MR to review/test, as it's the most challenging and will take some time.
Thank you for the implementation. I will keep you posted.
@Lirt I just noticed that Velero supports an optional config map. So I added the Manila CSI driver name there.
You can put arbitrary configuration into every BSL or VSL.
```yaml
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: my-backup-in-cloud1
  namespace: velero
spec:
  accessMode: ReadWrite
  config:
    cloud: cloud1
    # optional region
    region: fra1
    driver: ...
  default: false
  objectStorage:
    bucket: velero-backup-cloud1
  provider: community.openstack.org/openstack
```
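For context, Velero hands the `spec.config` map above to the plugin as plain strings via its `Init` method. A minimal sketch of reading the `driver` key, assuming a hypothetical `BlockStore` type (the real plugin's types and defaults may differ):

```go
package plugin

import "fmt"

// BlockStore is an illustrative stand-in for the plugin's store type.
type BlockStore struct {
	driver string
}

// Init receives the BSL/VSL spec.config as a map[string]string.
// The "driver" key mirrors the YAML example above.
func (b *BlockStore) Init(config map[string]string) error {
	b.driver = config["driver"]
	if b.driver == "" {
		return fmt.Errorf("no CSI driver name configured")
	}
	return nil
}
```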
@kayrus I'd like to know what your view is about waiting for status (`WaitForStatus()`) in the `CreateSnapshot()` function.
The way Velero works, there is only one reconciliation loop for each VSL. This means that if you have a lot of big volumes to back up every day and you wait for each snapshot to finish (which could take some time), the backup process would be very slow.
That's why `WaitForStatus()` was used only in the restoration process (it's necessary to wait until the volume is created from the snapshot).
What I don't know exactly, but I think is your point, is that `CreateSnapshot()` should wait until the snapshot is finished, because if it fails later and the function doesn't return an error, Velero will evaluate the backup as finished successfully while the snapshot has silently failed. Honestly, I don't remember if I ever reproduced this scenario, but based on this code it looks like we should return an error to evaluate the snapshot as failed (meaning there is no additional mechanism that would later check whether the snapshot finished successfully).
So if that's the case and you agree it's safer to wait for the snapshot to finish, we can keep the code you suggested. I would additionally like to make the timeout configurable (if you want, I can add a commit to your MR for it), as I am pretty sure there will be users for whom the default timeout is not sufficient.
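For illustration, a minimal sketch of the kind of status polling being discussed, assuming gophercloud's blockstorage v3 `snapshots` package (the helper name, target statuses, poll interval, and timeout handling are assumptions, not the plugin's actual code):

```go
package plugin

import (
	"fmt"
	"time"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/blockstorage/v3/snapshots"
)

// waitForSnapshotStatus polls the snapshot until it reaches wantStatus,
// enters an error state, or the timeout elapses.
func waitForSnapshotStatus(client *gophercloud.ServiceClient, id, wantStatus string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		snap, err := snapshots.Get(client, id).Extract()
		if err != nil {
			return err
		}
		switch snap.Status {
		case wantStatus:
			return nil
		case "error":
			// returning an error here is what lets Velero mark the backup
			// as failed instead of silently succeeding
			return fmt.Errorf("snapshot %s entered error state", id)
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("timeout waiting for snapshot %s to reach status %q", id, wantStatus)
}
```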
> This means that if you have a lot of big volumes to back up every day and you wait for each snapshot to finish (which could take some time), the backup process would be very slow.
Our use case is to put a DB into a special mode and flush all the data to disk. Once the snapshot is finished, the DB switches back into normal mode. If we don't wait for the snapshot to finish, the snapshot data will be unusable.
> I would additionally like to make the timeout configurable
This is done in PR #81.
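For reference, a configurable timeout could be read from the VSL config map roughly like the sketch below; the `snapshotTimeout` key and the default value are assumptions here, see PR #81 for the actual implementation:

```go
package plugin

import "time"

// parseSnapshotTimeout reads a timeout from the VSL spec.config map.
// The key name and default are illustrative only.
func parseSnapshotTimeout(config map[string]string) (time.Duration, error) {
	const defaultTimeout = 5 * time.Minute
	raw, ok := config["snapshotTimeout"]
	if !ok || raw == "" {
		return defaultTimeout, nil
	}
	return time.ParseDuration(raw) // e.g. "30m", "1h"
}
```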
@Lirt I apologize for pushing you, but will you have a chance to review this PR this week?
Hi, I have already reviewed the code a few times. There is still one small "nit" comment from me about changing one log to `warn` severity.
The last thing I would propose to do differently is the integration test. As you can see, the existing integration test doesn't do any Cinder backup (I wasn't able to make it work in the GitHub CI on my first try). And likewise your test is not actually testing Manila :smile:, because we have no existing PVC in the `kind` cluster.
So in the end the best option would be a single integration test job that tests both Cinder and Manila in one run (it can still be split into 2 jobs later if that makes sense).
But for now I'd propose to have one integration test job - so it would be great if you merge the contents of the Manila integration test into the original one. As far as I can see, the only difference between those 2 tests is that you set up Manila with DevStack.
After that we can merge this. I will make the integration test work with both plugins later on.
Edit: My mistake... I didn't publish the review. It's there now.
@Lirt what do you think about adding a new `community.openstack.org/openstack-cinder` alias to cinder, with possible further deprecation of the old `community.openstack.org/openstack`?
> @Lirt what do you think about adding a new `community.openstack.org/openstack-cinder` alias to cinder, with possible further deprecation of the old `community.openstack.org/openstack`?
Makes sense :+1:
Let's make this (alias to `cinder`) a new PR :) I'm looking forward to seeing this PR merged and to focusing on new enhancements.
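For illustration, registering the new name alongside the legacy one with Velero's plugin framework could look roughly like the sketch below (`newCinderSnapshotter` is a hypothetical initializer name, not the plugin's actual one):

```go
package main

import (
	"github.com/sirupsen/logrus"
	"github.com/vmware-tanzu/velero/pkg/plugin/framework"
)

// newCinderSnapshotter is a placeholder initializer; the real plugin
// would return its VolumeSnapshotter implementation here.
func newCinderSnapshotter(logger logrus.FieldLogger) (interface{}, error) {
	return nil, nil
}

func main() {
	framework.NewServer().
		// keep the legacy name for backward compatibility
		RegisterVolumeSnapshotter("community.openstack.org/openstack", newCinderSnapshotter).
		// proposed explicit cinder alias
		RegisterVolumeSnapshotter("community.openstack.org/openstack-cinder", newCinderSnapshotter).
		Serve()
}
```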
Resolves #62
Original work was done by @yIKX5 and @chrislinan; I adjusted it so it could be merged.
cc @syy6