esteban-ee opened this issue 1 year ago
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
/close
@kubevirt-bot: Closing this issue.
/reopen
@mhenriks: Reopened this issue.
/remove-lifecycle rotten
@mhenriks @alromeros This would be a critical enhancement for supporting restore to an alternate namespace. Our current understanding is that kubevirt-velero-plugin skips VMI restore when the VMI is owned by a VM. In the VM we deployed on a KubeVirt OCP cluster, the MAC address is in the VM spec and the VMI carries the firmware UUID. When restoring vm-1 to a new namespace ns-2, the Velero restore might fail because the original vm-1 in ns-1 is still using macaddress-1. Could we expect kubevirt-velero-plugin to remove the MAC address from the VM spec in vmbackupitemaction or vmrestoreitemaction? And since kubevirt-velero-plugin skips the VMI if it is owned by a VM, will restoring a VM provision a VMI with a new firmware UUID?
@30787
Could we expect kubevirt-velero-plugin to remove the MAC address from the VM spec in vmbackupitemaction or vmrestoreitemaction?
I think the default behavior should be to preserve the MAC, in case the VM is being moved to another namespace, but we can support a label on the Restore
resource to clear the MAC. Does that work for you?
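To make that concrete, here is a minimal sketch (not the plugin's actual code) of how a restore-time helper could strip a pinned MAC from a KubeVirt VirtualMachine when an opt-in label is present on the Restore. The label name velero.kubevirt.io/clear-mac-address and the helper clearMACAddresses are illustrative assumptions; only the spec.template.spec.domain.devices.interfaces[].macAddress path comes from the KubeVirt VM schema.

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// Hypothetical opt-in label a user could set on the Velero Restore object;
// the real label name would be chosen by the plugin.
const clearMACLabel = "velero.kubevirt.io/clear-mac-address"

// clearMACAddresses removes spec.template.spec.domain.devices.interfaces[].macAddress
// from an unstructured KubeVirt VirtualMachine so the restored VM gets a freshly
// generated MAC instead of the one captured in the backup.
func clearMACAddresses(vm *unstructured.Unstructured) error {
	ifaces, found, err := unstructured.NestedSlice(vm.Object,
		"spec", "template", "spec", "domain", "devices", "interfaces")
	if err != nil || !found {
		return err // nothing to do if the VM defines no interfaces
	}
	for _, i := range ifaces {
		if iface, ok := i.(map[string]interface{}); ok {
			delete(iface, "macAddress")
		}
	}
	return unstructured.SetNestedSlice(vm.Object, ifaces,
		"spec", "template", "spec", "domain", "devices", "interfaces")
}

func main() {
	// Minimal VM with a single interface carrying a pinned MAC.
	vm := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "kubevirt.io/v1",
		"kind":       "VirtualMachine",
		"spec": map[string]interface{}{
			"template": map[string]interface{}{
				"spec": map[string]interface{}{
					"domain": map[string]interface{}{
						"devices": map[string]interface{}{
							"interfaces": []interface{}{
								map[string]interface{}{"name": "default", "macAddress": "02:00:00:00:00:01"},
							},
						},
					},
				},
			},
		},
	}}

	// In the plugin this would run inside the VM restore item action only when
	// the Restore object carries clearMACLabel.
	if err := clearMACAddresses(vm); err != nil {
		panic(err)
	}
	fmt.Println(vm.Object)
}
```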
Since kubevirt-velero-plugin skips the VMI if it is owned by a VM, will restoring a VM provision a VMI with a new firmware UUID?
If the firmware UUID is not specified in the VM, it is calculated by hashing the VM name. I think hashing the VM UID (or namespace+name) would be better, but I'm not sure this is something we can change at this point [1].
We could also support generating a unique firmware UUID at restore time if that is important to you.
[1] https://github.com/kubevirt/kubevirt/pull/12885#issuecomment-2369245917
cc @alromeros ^
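For background on why a firmware UUID hashed from the VM name alone collides across namespaces, here is a minimal sketch of the general technique (a deterministic name-based, version-5 UUID), assuming a placeholder namespace UUID. This is not KubeVirt's actual implementation; firmwareUUIDNamespace and firmwareUUIDFromName are hypothetical names.

```go
package main

import (
	"fmt"

	"github.com/google/uuid"
)

// Placeholder namespace for the derivation; KubeVirt uses its own fixed
// namespace UUID, which is not reproduced here.
var firmwareUUIDNamespace = uuid.MustParse("6ba7b810-9dad-11d1-80b4-00c04fd430c8")

// firmwareUUIDFromName derives a deterministic version-5 (SHA-1) UUID from a
// string, so the same input always yields the same firmware UUID.
func firmwareUUIDFromName(input string) uuid.UUID {
	return uuid.NewSHA1(firmwareUUIDNamespace, []byte(input))
}

func main() {
	// Hashing only the VM name: vm-1 restored to ns-2 gets the same firmware
	// UUID as the original vm-1 still running in ns-1.
	fmt.Println(firmwareUUIDFromName("vm-1"))

	// Hashing namespace+name keeps the UUID stable per namespace while
	// avoiding cross-namespace collisions.
	fmt.Println(firmwareUUIDFromName("ns-1/vm-1"))
	fmt.Println(firmwareUUIDFromName("ns-2/vm-1"))
}
```

Generating a brand-new firmware UUID at restore time, as mentioned above, would avoid the collision entirely, at the cost of the UUID no longer being deterministic across restores.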
Hey @mhenriks, I agree with the proposed implementation. @30787, we can get quite flexible with how we handle VM backups and restores, as long as the new behavior remains optional and we manage it through labels or annotations on the backup and restore objects. I'm happy to work on implementing this if you're good with these details.
@mhenriks @alromeros Thank you. Looking forward to this enhancement.
Is this a BUG REPORT or FEATURE REQUEST?:
What happened:
When a virtual machine is restored to an alternate namespace, it is restored with the same MAC address as the original virtual machine. This results in a MAC address conflict if the original virtual machine is still running in the original namespace.
What you expected to happen:
Provide a way to blank the MAC address on restore.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
The issue was resolved by updating the plugin to clear the MAC addresses in the restore item action.
Environment:
CDI version (use kubectl get deployments cdi-deployment -o yaml):
Kubernetes version (use kubectl version):