cniket opened this issue 2 months ago
This looks very similar to a bug that was recently fixed: https://github.com/kubevirt/containerized-data-importer/pull/3385
Is there any chance you can try with a version that includes that change? It looks like it was backported to CDI releases 1.58, 1.59, and 1.60.
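To check which CDI version is actually running before retrying, one quick way (assuming the CDI custom resource is named `cdi`, the default) is:

```sh
kubectl get cdi cdi -o jsonpath='{.status.observedVersion}'
```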
Hello @mrnold,
Thanks a lot for your reply and for pointing to the relevant PR.
I will try one of those newer CDI releases and share the results.
Hello @mrnold,
The migration completed successfully, without those errors, with CDI v1.58. Thanks a lot for your help.
However, I am facing a new issue now: after the DV migration completes, the VM comes up in the 'Running' state, but when I try to open its console using `kubectl virt vnc <vm-name>`, I get a Windows blue screen error, as in the attached screenshot. (Not sure if this is the right place to report this issue.)
JFI: secureBoot is set to 'false' in the VM template:
```yaml
firmware:
  bootloader:
    efi:
      secureBoot: false
```
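For reference, this is roughly where that snippet sits in a full VirtualMachine spec (a minimal sketch; the VM name, memory sizing, and omitted devices are placeholders, not taken from the actual template):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: winserver2k19          # placeholder name
spec:
  runStrategy: Always
  template:
    spec:
      domain:
        firmware:
          bootloader:
            efi:
              secureBoot: false   # Secure Boot disabled; SMM is only required when this is true
        devices: {}               # disks/interfaces omitted for brevity
        memory:
          guest: 8Gi              # placeholder sizing
```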
What could be causing this boot issue post-migration?
Long shot, but maybe @lyarwood or @vsibirsk are able to help.
EDIT: It looks like people have hit this before:
https://gist.github.com/Francesco149/dc156cfd9ecfc3659469315c45fa0f96
https://bugzilla.redhat.com/show_bug.cgi?id=1908421
Usually that means something didn't work right in the VM conversion step: either the VirtIO drivers weren't installed, or the virtual hardware configuration was not quite what the guest expected. Forklift is probably the right place to continue the discussion.
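If missing VirtIO drivers turn out to be the culprit, one common approach is to attach the virtio-win driver ISO to the VM as a CD-ROM and install the drivers from inside the guest. A sketch, assuming the standard KubeVirt virtio container disk image (the disk/volume names here are placeholders):

```yaml
# Excerpt from spec.template.spec of the VirtualMachine
domain:
  devices:
    disks:
      - name: virtio-drivers        # placeholder name
        cdrom:
          bus: sata                 # expose the driver ISO as a SATA CD-ROM
volumes:
  - name: virtio-drivers
    containerDisk:
      image: quay.io/kubevirt/virtio-container-disk   # assumed image path
```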
What happened:
I have a DV with a vddk source (`spec.source.vddk`) that I am using to migrate a Windows VM from VMware to KubeVirt. If the Windows VM is powered on before starting the migration, the (warm) migration doesn't start and the corresponding importer pod fails. I am attaching the file importer-plan-winserver2k19-warm-powered-on.log, which contains the corresponding errors. If the Windows VM is powered off before starting the migration, the (cold) migration completes without any issue. For Linux VMs, both warm and cold migration work without any issue. I tried the migration with vSphere administrator privileges (full access), but I still get the same errors.
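For reference, a VDDK-sourced DV has roughly the following shape, following the linked CDI doc. This is a minimal sketch: the name, backing file, URL, UUID, thumbprint, secret, init image, and storage size below are all placeholders, not values from my environment.

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: vddk-dv   # placeholder
spec:
  source:
    vddk:
      backingFile: "[datastore1] winserver2k19/winserver2k19.vmdk"  # path to the source VMDK, placeholder
      url: "https://vcenter.example.com"                            # vCenter endpoint, placeholder
      uuid: "52260566-b032-36cb-55b1-79bf29e30490"                  # source VM UUID, placeholder
      thumbprint: "20:6C:8A:5D:44:40:B3:79:4B:28:EA:76:13:60:90:6E:49:D9:D9:A3"  # vCenter TLS fingerprint, placeholder
      secretRef: vddk-credentials                                   # secret with vCenter credentials, placeholder
      initImageURL: registry.example.com/vddk-init:latest           # image containing the VDDK library, placeholder
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 32Gi
```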
What you expected to happen:
Both cold and warm migration should work for Windows VMs, just as they do for Linux VMs.
How to reproduce it (as minimally and precisely as possible):
A DV with `spec.source.vddk` is auto-created, similar to https://github.com/kubevirt/containerized-data-importer/blob/main/doc/datavolumes.md#vddk-data-volume, after creating a Forklift Migration CR (see the sketch after this paragraph). The resulting DV's disk import fails for the Windows VM if the VM is powered on, while the same Windows VM's cold migration works, and for Linux VMs both cold and warm migration work as explained earlier.
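For completeness, the Forklift Migration CR that triggers the migration is essentially a reference to an existing migration Plan. A minimal sketch (the names and namespace are placeholders):

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: plan-winserver2k19-migration   # placeholder
  namespace: konveyor-forklift         # placeholder
spec:
  plan:
    name: plan-winserver2k19           # placeholder: an existing Plan CR
    namespace: konveyor-forklift
```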
Environment:
- CDI version (use `kubectl get deployments cdi-deployment -o yaml`):
- Kubernetes version (use `kubectl version`):
- Kernel (e.g. `uname -a`): N/A