Open muxuelanKK opened 3 months ago
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: (no approvals yet). Once this PR has been reviewed and has the lgtm label, please assign awels for approval. For more information see the Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
Hi @muxuelanKK. Thanks for your PR.
PRs from untrusted users cannot be marked as trusted with /ok-to-test in this repo, meaning untrusted PR authors can never trigger tests themselves. Collaborators can still trigger tests on the PR using /test all.
I understand the commands that are listed here.
Hi @muxuelanKK! This is interesting. I wonder how you came up with the need for this PR. Have you noticed issues with restoring a VM with hotplug because the hotplug pod exists?
I just happened to encounter this phenomenon. I think we don't need to back up the VMI and its pods (e.g. the launcher pod and the hp-volume pod); we only need to back up the VM and its volumes, because KubeVirt itself will recreate the VMI and the launcher pod from the VM when restoring.
Thanks @muxuelan for your observation. As you stated, there is no need to restore the virt-launcher pod and the hotplug pod, since they will be created when the VM starts.
Regarding the virt-launcher, we already deal with this in pod_restore_item_action: we decided to still back up this pod and just skip its restore.
Regarding the hotplug pod, we did miss skipping it too. Although it is tested and does not actually seem to cause any failure when restored, I agree it's better to skip its restore. So I would suggest just changing this line: https://github.com/kubevirt/kubevirt-velero-plugin/blob/main/pkg/plugin/pod_restore_item_action.go#L45 to:
LabelSelector: "kubevirt.io in (virt-launcher, hotplug-disk)",
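For context, here is a rough sketch of how the restore item action could look with that selector. The type name, receiver, and resource list are illustrative assumptions (the real ones live in pod_restore_item_action.go); the point is how widening the label selector routes the hotplug-disk pods through the same skip-on-restore path:

```go
package plugin

import (
	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
)

// PodRestoreItemAction is an illustrative stand-in for the plugin's existing
// restore item action; type and field names here are assumptions.
type PodRestoreItemAction struct{}

// AppliesTo selects the pods this action handles. Widening the label
// selector to cover both virt-launcher and hotplug-disk pods means the
// hp-volume attachment pods are matched (and skipped) on restore as well.
func (p *PodRestoreItemAction) AppliesTo() (velero.ResourceSelector, error) {
	return velero.ResourceSelector{
		IncludedResources: []string{"pods"},
		LabelSelector:     "kubevirt.io in (virt-launcher, hotplug-disk)",
	}, nil
}

// Execute keeps the matched pod in the backup but marks it so Velero does
// not recreate it on restore; KubeVirt recreates these pods itself when
// the restored VM is started.
func (p *PodRestoreItemAction) Execute(
	input *velero.RestoreItemActionExecuteInput,
) (*velero.RestoreItemActionExecuteOutput, error) {
	return velero.NewRestoreItemActionExecuteOutput(input.Item).WithoutRestore(), nil
}
```

With that selector the hp-volume pods are still included in the backup, but on restore they are skipped and KubeVirt recreates them when the VM and its hotplugged volumes come back up.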
We prefer to keep in the backup everything that was in the namespace/cluster at the backup time.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
/lifecycle stale
skip backup of hotplug-volume pod