kubev2v / forklift


Unable to boot and start 'windows' vm post migration to kubevirt. #1029

Open · cniket opened this issue 3 weeks ago

cniket commented 3 weeks ago

Hello,

After creating a Plan CR and then a Migration CR for a 'Windows 2k19' (Windows Server 2019) VM, a DataVolume (DV) is spun up and disk migration from VMware starts. Once the DV completes successfully, a VM is created and listed in the 'Running' state. However, when I try to get console access to the VM with kubectl virt vnc <vm-name>, I get a Windows blue screen with the error 'Stop code: INACCESSIBLE BOOT DEVICE', as shown below:

[Screenshot 2024-09-12 12:42:02: Windows blue screen showing 'Stop code: INACCESSIBLE BOOT DEVICE']

The StorageMap uses Rook Ceph storage, and the corresponding PV/PVC are created and bound properly:

❯ kg pvc -n virtualmachines |grep w195
plan-w195-warm-vm-4427-jklkt             Bound    pvc-xxxxxxxx-8012-4f35-xxxxx-7dd56xxxxxx   16Gi          RWX            ceph-block     <unset>                 129m    Block
❯
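As a sanity check that the warm import itself finished before cutover, the DataVolume created by the plan (its name is assumed here to match the PVC above) can be inspected; its phase should be Succeeded at 100% progress:

❯ kubectl get dv -n virtualmachines | grep w195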

The VM manifest already has secureBoot set to false. Below is the full manifest:

❯ k get vm w195 -n virtualmachines -oyaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  annotations:
    kubemacpool.io/transaction-timestamp: "2024-09-12T09:03:38.109524776Z"
    kubevirt.io/latest-observed-api-version: v1
    kubevirt.io/storage-observed-api-version: v1
  creationTimestamp: "2024-09-12T07:07:04Z"
  finalizers:
  - kubevirt.io/virtualMachineControllerFinalize
  generation: 5
  labels:
    migration: xxxxxxxx-9a13-xxxxxx-acdb-xxxxxxxxx
    plan: xxxxxx-xxxxx-4bc1-xxxxx-babdxxxxxx
    vmID: vm-4427
  name: w195
  namespace: virtualmachines
  resourceVersion: "130711651"
  uid: xxxxxx-42b7-xxxxxx-4f01bxxxxxxx
spec:
  running: true
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: w195
    spec:
      architecture: amd64
      domain:
        clock:
          timezone: UTC
        cpu:
          cores: 1
          sockets: 2
        devices:
          disks:
          - disk:
              bus: virtio
            name: vol-0
          inputs:
          - bus: virtio
            name: tablet
            type: tablet
          interfaces:
          - macAddress: xx:xx:xx:xx:xx:xx
            masquerade: {}
            model: virtio
            name: net-0
        firmware:
          bootloader:
            efi:
              secureBoot: false
          serial: xxxxxx-38dc-xxxx-1532-898582ac2c29
        machine:
          type: q35
        memory:
          guest: 4Gi
        resources:
          requests:
            memory: 4Gi
      networks:
      - name: net-0
        pod: {}
      volumes:
      - name: vol-0
        persistentVolumeClaim:
          claimName: plan-w195-warm-vm-4427-jklkt
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2024-09-12T09:03:40Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: null
    status: "True"
    type: LiveMigratable
  created: true
  desiredGeneration: 5
  observedGeneration: 5
  printableStatus: Running
  ready: true
  volumeSnapshotStatuses:
  - enabled: false
    name: vol-0
    reason: 'No VolumeSnapshotClass: Volume snapshots are not configured for this
      StorageClass [ceph-block] [vol-0]'
❯ 
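One detail that stands out in the manifest above is that the migrated boot disk is attached with bus: virtio. A Windows Server 2019 guest imported from VMware typically does not have the virtio storage drivers installed, and a Windows guest that cannot see its boot disk on the configured bus fails with exactly this INACCESSIBLE BOOT DEVICE stop code. As a rough workaround sketch (assuming missing virtio drivers are the cause, and not necessarily the proper fix), the boot disk can be switched to the sata bus, which Windows can boot from with its in-box drivers, and the VM restarted:

❯ kubectl patch vm w195 -n virtualmachines --type=json \
    -p '[{"op": "replace", "path": "/spec/template/spec/domain/devices/disks/0/disk/bus", "value": "sata"}]'
❯ kubectl virt restart w195 -n virtualmachines

If the guest boots after this change, installing the virtio-win drivers inside the guest and then moving the disk back to bus: virtio restores the better-performing configuration.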

Using :latest images of forklift-operator and forklift-controller.
KubeVirt version: v1.0.0
CDI version: v1.58.0
k8s cluster version:

❯ k version
Client Version: v1.30.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.2
❯

Can someone please suggest what needs to be checked or debugged? Why might the boot disk not be available after migration to KubeVirt?

mnecas commented 2 weeks ago

Hi @cniket, please try out my PR; I hope it will help you. I'm not sure if I'll merge it in as it is a bit hacky, but I also stumbled on that issue. I need to find a proper solution.