I need to use NVMe-over-Fabrics (NVMe-oF) devices within my KubeVirt VMs. My PVCs are backed by NVMe-oF devices, but inside the VMs these devices are presented as SCSI devices (e.g., /dev/sda, /dev/sdb).

I expect them to appear as NVMe devices inside the KubeVirt VMs.

To reproduce: a CSI driver and storage backend that support NVMe-oF are required. Use that storage class for the root disk and any additional disks.
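For context, the bus a disk is exposed on inside the guest is set per disk in the VMI spec (`spec.domain.devices.disks[].disk.bus`), independent of the transport backing the PVC. A minimal sketch with hypothetical names (`example-vmi`, `my-nvmeof-pvc`):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: example-vmi        # hypothetical name
spec:
  domain:
    devices:
      disks:
      - name: rootdisk
        disk:
          bus: scsi        # guest sees /dev/sdX; bus: virtio would give /dev/vdX
  volumes:
  - name: rootdisk
    persistentVolumeClaim:
      claimName: my-nvmeof-pvc   # hypothetical NVMe-oF-backed PVC
```

As far as I can tell, the supported bus values here are virtio, sata, scsi, and usb, so even an NVMe-oF-backed PVC is presented to the guest over one of these emulated buses rather than as a native NVMe device.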
Environment:
KubeVirt version (use virtctl version): 1.1
Kubernetes version (use kubectl version): 1.28
VM or VMI specifications:
$ k get vmi aisaacs-agent-1 -o yaml

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  annotations:
    kubevirt.io/latest-observed-api-version: v1
    kubevirt.io/storage-observed-api-version: v1
    kubevirt.io/vm-generation: "1"
    prometheus.io/port: "9100"
    prometheus.io/scrape: "true"
  creationTimestamp: "2024-07-11T05:13:03Z"
  finalizers:
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: null
    message: 'cannot migrate VMI: PVC dv-aisaacs-agent-1-root-1720674303 is not shared,
      live migration requires that all PVCs must be shared (using ReadWriteMany access
      mode)'
    reason: DisksNotLiveMigratable
    status: "False"
    type: LiveMigratable
```
Kernel version (use uname -a): 6.8