Open LDprg opened 1 year ago
No direct implementation yet. Right now you need to use the extra args for the VM's quickemu config, and you will have to add the whole qemu PCI passthrough argument for the configured graphics card, like this example:
-device vfio-pci,host=0000:09:00.0,romfile=/usr/share/kvm/vfio_rx570.rom -device vfio-pci,host=0000:09:00.1
or without a rom file
-device vfio-pci,host=0000:09:00.0 -device vfio-pci,host=0000:09:00.1
The idea, once things are more finalized, is to have a config file you can just drop into the quickemu VM folder; quickemu reads it and moves on from there.
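As a rough sketch of how those extra args might sit alongside a VM config today (the variable name and exact mechanism here are assumptions, not a documented quickemu feature; check how your quickemu/qqX version passes extra args, and note the PCI addresses and romfile path are system-specific):

```shell
# Hypothetical VM .conf fragment -- "extra_args" is an assumed name,
# not a confirmed quickemu option; qqX refers to this as ExtraArgs.
guest_os="linux"
disk_img="fedora/disk.qcow2"
# Appended verbatim to the qemu command line; addresses/romfile are examples:
extra_args="-device vfio-pci,host=0000:09:00.0,romfile=/usr/share/kvm/vfio_rx570.rom -device vfio-pci,host=0000:09:00.1"
```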
oh thx
Picking up from your discussion on Discord yesterday:
you will have to add the whole qemu pci passthrough argument for the configured graphic card like this example
-device vfio-pci,host=0000:09:00.0,romfile=/usr/share/kvm/vfio_rx570.rom -device vfio-pci,host=0000:09:00.1
or without a rom file
-device vfio-pci,host=0000:09:00.0 -device vfio-pci,host=0000:09:00.1
From the qqX settings file:
Also guide on customising quickemu output here
Not using a discrete GPU myself, so some feedback from someone would be useful here.
@LDprg does this help?
@HikariKnight is this host=0000:09:00.0 constant or variable?
@HikariKnight is this host=0000:09:00.0 constant or variable?
@TuxVinyards it is a variable; 0000:09:00.0 is the PCI address of the GPU (different for each system) in my example. The one ending with .0 is the GPU, where if you need a romfile you should provide one (however, people should test without one first). .1 and higher are other devices linked to the GPU (like the audio controller, a USB controller on the card if it has any, serial, etc.).
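To make the address layout concrete, here is a small bash sketch (using the example address from above) that splits a PCI address into its domain:bus:device.function parts; the function digit is what distinguishes the GPU (.0) from its companion devices (.1 and up):

```shell
#!/usr/bin/env bash
# Split a PCI address of the form domain:bus:device.function.
addr="0000:09:00.0"
IFS=':.' read -r domain bus device function <<< "$addr"
echo "domain=$domain bus=$bus device=$device function=$function"
# -> domain=0000 bus=09 device=00 function=0
```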
@HikariKnight is this host=0000:09:00.0 constant or variable?
@TuxVinyards it is a variable, 0000:09:00.0 is the PCI address of the GPU (different for each system)
As long as no cards or devices get swapped, that would stay fairly constant for each system, I imagine?
Plugging in a USB device wouldn't affect this ?? Just PCI.
I do have a very old machine that has a 1G Nvidia and a basic on-board GPU. Maybe I could test on that ... ? It's not really any good for coding on. Very slow.
@HikariKnight is this host=0000:09:00.0 constant or variable?
@TuxVinyards it is a variable, 0000:09:00.0 is the PCI address of the GPU (different for each system)
As long as no cards or devices get swapped, that would stay fairly constant for each system, I imagine?
Plugging in a USB device wouldn't affect this ?? Just PCI.
I do have a very old machine that has a 1G Nvidia and a basic on-board GPU. Maybe I could test on that ... ? It's not really any good for coding on. Very slow.
Yeah, it's only a variable in the sense that if you replace the card the value will change, depending on which slot you use, and that it is different for each system. For example, on one system my RX6600XT is on PCI address 0000:0b:00.X, but on my main system it is on 0000:44:00.X, and the address 0000:0b:00.X is taken up by a PCI dummy device on .0, the encryption controller on .2, and a USB controller on .3, while .1 does not exist.
I do not believe BIOS updates can change these addresses, but I do know that a BIOS update can change which IOMMU groups PCI addresses go into.
And yes, plugging in a USB device would not change anything, as it would go through the USB controller(s), which would be on their own PCI address, let's say 0000:0c:00.3 as an example. The device would only display a vendorID:deviceID through lsusb, and the individual USB device would never show up through lspci or ls-iommu.
I do have a very old machine that has a 1G Nvidia and a basic on-board GPU. Maybe I could test on that ... ? It's not really any good for coding on. Very slow.
Anything older than a 10 series is very hit and miss, due to some cards lacking UEFI firmware; some have UEFI firmware but will cause other issues. I have a 750 Ti I tried to pass through on my server. It did pass through, but if anything tried to do anything more intense than displaying the desktop on it, the system and host would crawl to a halt. I replaced it with an RX 570 8GB (with the vendor-reset kernel module) and it has worked fine since (although the RX 570 is now in my HTPC and the server has an RX6600XT now).
👍 I was just having a quick look at lspci.
I notice you use cpuinfo. It's not on my machine, and lots of others. There are also a lot of other dependencies I noticed that get dragged into the build, like Bubble Tea.
I try to avoid dependencies where possible... but that's me.
If you can find free mental time at some point 😂 and can post up some proof-of-concept snapshots of passthrough working with qqX via ExtraArgs, then I will implement something.
I reckon that I should be able to grep lspci output easily enough to find the host parameters and do some kind of auto-detect at my end of things.
I am already committed on some other things right now though. Want to get qqX new release out, linking it up with a community quickemu release ... I think you saw that on Discord. 🚀
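That auto-detect grep could look something like this sketch (run against canned sample text here so the example is self-contained; real detection would pipe lspci -D directly, and the device names below are invented):

```shell
#!/usr/bin/env bash
# Sketch: extract the host= parameter for the GPU from lspci output.
# The sample stands in for `lspci -D`, which prints full
# domain:bus:device.function addresses in the first field.
sample='0000:09:00.0 VGA compatible controller: Advanced Micro Devices [AMD/ATI] Ellesmere [Radeon RX 570]
0000:09:00.1 Audio device: Advanced Micro Devices [AMD/ATI] Ellesmere HDMI Audio'
echo "$sample" | grep -i "vga" | awk '{print $1}'
# -> 0000:09:00.0
```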
@TuxVinyards bubbletea is only there for the handy logging module (I was going to use the toolkit for making the UI, but it was overkill for what I needed). The end binary is self-contained with no dependencies (other than ls-iommu, which it downloads the latest version of automatically) if you build it as described in the readme. I am the same: I want the end result to have as few dependencies as possible for the user; build dependencies do not matter as much.
Also, if you want something that is more tailored for this than lspci, then look at ls-iommu, which is what quickpassthrough uses in the background and is, in theory, its only dependency. It lets you find the GPUs (skipping 3D controllers, as they are not functional for passthrough) using ls-iommu -g. You can then list everything in a specific group using -i num, so ls-iommu -i 16 would list everything in IOMMU group 16. If you need to list the graphics cards and everything in their respective IOMMU groups, then you can just do ls-iommu -gr or ls-iommu -g -r.
Need to also list the kernel driver used? Append -k to the args!
Need the PCI address for the GPU in IOMMU group 16, because that is the only card isolated in its own group? ls-iommu -i 16 --pciaddr
Need the vendor and device IDs for adding the arguments to bind the GPU to the vfio driver? ls-iommu -i 16 --id
Need just the PCI address for the GPU, not the other devices on it? ls-iommu -g -i 16 --pciaddr
There are lots of other things in it too, but I made it primarily just to get the information that mattered for passthrough, while outputting in the same format as most ls-iommu.sh scripts.
However, if you want to autodetect which card is being used for GPU passthrough, check which card is bound to the vfio-pci kernel driver using lspci -vk | grep -iP "(amdgpu|nvidia|vga|vfio-pci|nouveau)" | grep -i -B1 "vfio-pci". This will list all devices that are used by the vfio-pci driver; however, it is not as reliable as ls-iommu -i 16 --pciaddr, as it might miss a device that does not need to be bound to vfio-pci, like the USB controller in some GPUs.
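Illustrated against canned lspci -vk text (device names invented for the example), that pipeline behaves like this:

```shell
#!/usr/bin/env bash
# Demonstrates the grep pipeline above on sample `lspci -vk` output.
# grep -B1 prints the device line immediately before each matching
# "Kernel driver in use: vfio-pci" line, so only the vfio-bound
# device and its driver line survive.
sample='09:00.0 VGA compatible controller: AMD Ellesmere [Radeon RX 570]
	Kernel driver in use: vfio-pci
09:00.1 Audio device: AMD Ellesmere HDMI Audio
	Kernel driver in use: snd_hda_intel'
echo "$sample" | grep -iP "(amdgpu|nvidia|vga|vfio-pci|nouveau)" | grep -i -B1 "vfio-pci"
# Prints the 09:00.0 VGA line plus its vfio-pci driver line;
# the snd_hda_intel-bound audio device is filtered out.
```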
Also, no stress about implementing anything; things are hectic for me too these days.
I am confused. Is it possible to use this with quickemu, and if so, how?