Hello,

After creating a `plan` CR and then a `migration` CR for a 'Windows 2k19' VM, a DV gets spun up and disk migration from VMware starts. After the DV completes successfully, a VM is created and listed in the 'Running' state.

After that, when I try to get console access to the VM using the command `kubectl virt vnc <vm-name>`, I get a Windows blue screen with the error 'Stop code: INACCESSIBLE BOOT DEVICE', as shown below;
The `storagemap` uses Rook Ceph storage, and the corresponding `pv` and `pvc` are created and bound properly (verified as sketched below). The VM manifest already has `secureBoot` set to 'false'. Below is the full manifest:

Using `:latest` images of `forklift-operator` and `forklift-controller`.

kubevirt version: v1.0.0
cdi version: v1.58.0
k8s cluster version:

Can someone please suggest what needs to be checked or debugged? Why might the boot disk not be available after the migration to KubeVirt?
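For reference, the DV/PVC/PV binding described above can be confirmed with standard kubectl queries; a minimal check, assuming the migration ran in the `default` namespace (the namespace is an assumption, adjust as needed):

```bash
# Confirm the DataVolume import finished and its PVC/PV are bound
kubectl get dv -n default       # DV phase should be Succeeded
kubectl get pvc -n default      # PVC should be Bound to a rook-ceph-backed PV
kubectl get pv                  # PV status should be Bound

# Confirm the migrated VM and VMI state
kubectl get vm,vmi -n default   # VMI should be Running
```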
Hi @cniket, please try out my PR; I hope it will help you.
I'm not sure if I'll merge that one in, as it is a bit hacky, but I also stumbled on that issue.
Need to find a proper solution.
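For context on why the boot disk might be "unavailable": 'INACCESSIBLE BOOT DEVICE' on a Windows guest freshly migrated to KubeVirt is commonly caused by the boot disk being attached on a virtio bus while the guest has no virtio storage drivers installed, so Windows cannot see its own boot volume. One way to test that theory is to switch the disk bus to `sata` and restart; a minimal sketch, assuming the boot disk is the first entry in the VM's disks list (the index 0 and the use of `kubectl patch` here are assumptions, not something confirmed in this thread):

```bash
# If the Windows guest lacks virtio storage drivers, a virtio boot disk is
# invisible to it and boot fails with INACCESSIBLE BOOT DEVICE.
# As a test, switch the boot disk's bus from virtio to sata
# (disk index 0 is an assumption; check the VM spec first):
kubectl patch vm <vm-name> --type=json \
  -p '[{"op": "replace", "path": "/spec/template/spec/domain/devices/disks/0/disk/bus", "value": "sata"}]'

# Restart the VM so the device change takes effect
kubectl virt restart <vm-name>
```

If the VM boots with `sata`, installing the virtio drivers inside the guest and then switching the bus back to `virtio` is the usual longer-term fix.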