bashtheshell / IOMMU-nested-pve

How to configure IOMMU device for nested Proxmox hypervisor (PVE) VM - PCIe Passthrough
MIT License

Working on Intel but not AMD #3

Open mikeyo opened 7 months ago

mikeyo commented 7 months ago

I followed the guide on my Intel 11900K build and nested pve works perfectly passing through my GPU. However, I also tried on my X570 3950x AMD build but the GPU is not exposed to the L1 PVE.

Output from L0 PVE

```
dmesg | grep -e DMAR -e IOMMU -e AMD-Vi
[    0.417329] AMD-Vi: Using global IVHD EFR:0x0, EFR2:0x0
[    1.001980] pci 0000:00:00.2: AMD-Vi: IOMMU performance counters supported
[    1.003440] AMD-Vi: Extended features (0x58f77ef22294a5a, 0x0): PPR NX GT IA PC GA_vAPIC
[    1.003446] AMD-Vi: Interrupt remapping enabled
[    1.005649] perf/amd_iommu: Detected AMD IOMMU #0 (2 banks, 4 counters/bank).
```

IOMMU is enabled on L0.

Do I need to change any qemu args to make this work on AMD?
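(For context, on AMD hosts the relevant switch is usually the L0 kernel command line rather than QEMU arguments. A minimal sketch, assuming a GRUB-booted L0 PVE host; `amd_iommu` is on by default on recent kernels, so `iommu=pt` is often the only parameter actually needed:)

```shell
# Sketch: enable/confirm IOMMU on the AMD L0 host via kernel parameters.
# In /etc/default/grub, the cmdline would look something like:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"
# Then apply the change and reboot:
#   update-grub
#   reboot
```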

bashtheshell commented 7 months ago

Hi. Hm, it's been a while since I played with nested virtualization, and frankly, I haven't tested on an AMD machine, as I wrote the guide long before I got one. That said, I believe it should still work, since the issue you're facing isn't Proxmox-specific. You're probably only missing a step or two, given that you got it working on the other (Intel) build.

Mainly, the guide was only pointing out the fact that Proxmox didn't offer a native way to allow users to set up passthrough properly in a nested setup, and I happened to find a workaround by moving the raw command-line arguments around. Granted, this isn't a practical setup that one can use outside of homelabs, and I don't blame Proxmox.
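(For readers landing here: the workaround boils down to passing raw QEMU arguments through the VM config. The fragment below is only a rough illustration, not the guide's exact config; the VM ID `100` and the PCI address `0000:0a:00.0` are placeholders you'd replace with your own values:)

```shell
# Hypothetical /etc/pve/qemu-server/100.conf fragment for the L1 PVE VM,
# handing a PCI device to the guest via a raw vfio-pci argument:
#   machine: q35
#   cpu: host
#   args: -device vfio-pci,host=0000:0a:00.0
```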

I see you enabled IOMMU for L0 PVE, but what about L1 PVE?
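(The same checks you ran on L0 can be repeated inside the L1 PVE guest; no AMD-Vi/DMAR lines or no IOMMU groups there usually means the L1 kernel wasn't booted with IOMMU enabled. A sketch of the diagnostics, output will vary per machine:)

```shell
# Run inside the L1 PVE guest to confirm its kernel sees an IOMMU:
#   dmesg | grep -e DMAR -e IOMMU -e AMD-Vi
# List the IOMMU groups visible to L1 (empty output = no usable groups):
#   find /sys/kernel/iommu_groups/ -type l
```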

Perhaps the Arch Wiki's PCI passthrough page can give you some insights into the problems you're facing.