What you expected to happen:
VirtualMachineClone has several bad behaviors on restore.
First Behavior
Clone a VM using a VirtualMachineClone object (the OpenShift Virtualization 4.15 UI plugin creates VirtualMachineClone objects; the 4.14 UI does not)
Do a backup using Velero and kubevirt plugin
Delete the original namespace and all VMs
Restore to the original namespace
Velero reports the restore as failed: it cannot create the VirtualMachineClone because the source VM does not exist yet.
This is caused by Velero's default restore order, which falls back to alphabetical for resources beyond the built-in priority list, so VirtualMachineClone objects always restore before the VirtualMachine objects they reference. At minimum, documentation is needed to mention that the restore order has to be changed for VirtualMachineClones to be restored successfully.
Setting the Velero restore order resolved this, but that should not be a required step for dealing with this specific object; otherwise, Velero will always report the restore as failed. The VMs did at least get restored, because Velero does not stop the whole restore on a single object error.
There is a similar issue with VirtualMachineInstanceMigration objects: they always restore before VirtualMachine objects.
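As a workaround, the restore order can be extended so VirtualMachine objects come back before the objects that reference them. A minimal sketch, assuming Velero runs as a deployment named velero in the velero namespace; note that the flag replaces Velero's entire priority list, so the built-in defaults (abbreviated here; see `velero server --help` for the full value) must be kept ahead of the KubeVirt resources:

```shell
# Sketch: add a --restore-resource-priorities flag to the Velero server so
# virtualmachines restore before virtualmachineclones and
# virtualmachineinstancemigrations. Adjust namespace/deployment names and
# the default priority list to match your installation.
kubectl -n velero patch deployment velero --type=json -p='[
  {"op": "add",
   "path": "/spec/template/spec/containers/0/args/-",
   "value": "--restore-resource-priorities=customresourcedefinitions,namespaces,persistentvolumes,persistentvolumeclaims,secrets,configmaps,virtualmachines,virtualmachineclones,virtualmachineinstancemigrations"}
]'
```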
Second behavior
Set the Velero restore order to "virtualmachines,virtualmachineclones"
Clone a VM using a VirtualMachineClone object (the OpenShift Virtualization 4.15 UI plugin creates VirtualMachineClone objects; the 4.14 UI does not)
Delete the clone.
Do a backup using Velero and kubevirt plugin
Delete the original namespace and all VMs
Restore to the original namespace
The deleted clone comes back.
This happens because the status field recording that the original clone succeeded is ignored: creating a VirtualMachineClone object triggers the clone operation again regardless of the status field contents.
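The status mismatch can be observed directly after the restore. A sketch, assuming a clone named my-clone (a hypothetical name) and that the clone.kubevirt.io API reports progress in status.phase:

```shell
# The backed-up object had a terminal phase (Succeeded), but the restored
# copy starts a new clone, so the phase cycles through the in-progress
# values again instead of staying Succeeded.
kubectl get virtualmachineclone my-clone -n <namespace> -o jsonpath='{.status.phase}'
```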
Third Behavior
Set the Velero restore order to "virtualmachines,virtualmachineclones"
Clone a VM using a VirtualMachineClone object (the OpenShift Virtualization 4.15 UI plugin creates VirtualMachineClone objects; the 4.14 UI does not)
Do a backup using Velero and kubevirt plugin
Delete the original namespace and all VMs
Restore to the original namespace
The clone triggers again even though the original status was successful, creating a VirtualMachineSnapshot of the original VM. The clone never finishes, leaving behind a VirtualMachineSnapshot of unclear origin.
Again, this behavior is caused by ignoring the original VirtualMachineClone status on creation: restoring the object re-triggers a clone process that had already completed successfully.
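The stuck clone and the leftover snapshot can be seen side by side after the restore; a sketch, assuming the standard resource names from the clone and snapshot APIs:

```shell
# After the restore, a VirtualMachineSnapshot of the source VM appears even
# though no new clone was requested, and the VirtualMachineClone never
# reaches a Succeeded phase.
kubectl get virtualmachineclones,virtualmachinesnapshots -n <namespace>
```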
What happened:
In the third behavior, the clone's status stays stuck in progress forever.
How to reproduce it (as minimally and precisely as possible): See above; steps for each behavior are included.
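For reference, the backup and restore steps above correspond roughly to the following commands (a sketch; the backup name and namespace are hypothetical, and the kubevirt Velero plugin is assumed to be installed):

```shell
# Back up the namespace containing the VMs and the VirtualMachineClone objects.
velero backup create vm-backup --include-namespaces my-vms

# Delete the namespace and everything in it, then restore from the backup.
kubectl delete namespace my-vms
velero restore create --from-backup vm-backup
```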
Additional context:
Environment:
- KubeVirt version (virtctl version): OpenShift Virtualization 4.15 / KubeVirt 1.1
- Kubernetes version (kubectl version):
- OS (uname -a): Red Hat 9 kernel, doesn't matter