This guide walks you through the process of using GPU passthrough via libvirt/virt-manager on systems that have only one GPU.
For hosting news and information about VFIO passthrough, and for the libvirt/qemu hook helper in this guide.
For providing the vfio-pci-bind tool. The tool is no longer used in this guide, but it was previously, and the author still deserves thanks.
For the Nvidia ROM patcher, which made it possible to pass the boot GPU to the VM without GPU BIOS problems. Patching the ROM is no longer required, but I never would have written this guide without the original work, so the credit stays.
For diagnosing, developing, and testing methods to successfully rebind the EFI-Framebuffer when passing the video card back to the host OS.
For instructions on manually editing the vBIOS hex for use with VFIO passthrough.
A guide that is no doubt better than mine. I learned a few things from his implementation that helped me out. This guide depends on libvirt at its base, whereas his includes implementations that do not.
You are completely responsible for your hardware and software. This guide makes no guarantees that the process will work for you, will not void the warranty on various parts, or will not break your computer in some way. Everything from here on out is at your own risk.
Historically, VFIO passthrough has been built around a very specific hardware model, i.e.:
I personally, as well as some of you out there, might not have those things available. Maybe you've got a Mini-ITX build with no iGPU. Or maybe you're poor like me, and can't shell out for new computer components without some financial planning beforehand.
Whatever your reason, VFIO is still possible, but with caveats. Here are some advantages and disadvantages of this model.
This setup model is a lot like dual booting, without actually rebooting.
For my personal use case, this model is worth it, and it might be for you too!
This guide is going to assume a few things:
I am not going to cover the basic setup of VFIO passthrough here. There are a lot of guides out there that cover the process from beginning to end.
What I will say is that using the Arch Wiki is your best bet.
Follow the instructions found here
Skip the "Isolating the GPU" section. We are not going to do that in this method, as we still want the host to have access to the GPU. I will cover this again in the procedure section.
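If you want a quick sanity check that IOMMU is working before continuing, this common snippet (adapted from the Arch Wiki's PCI passthrough article) prints your IOMMU groups. Ideally your GPU and its audio function share a group with nothing else but PCI bridges:
#!/bin/bash
shopt -s nullglob
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU Group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"
    done
done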
With all this ready, let's move on to how to actually do this.
Using libvirt hooks will allow us to automatically run scripts before the VM is started and after the VM has stopped.
Using the instructions here to install the base scripts, you'll find a directory structure that now looks like this:
/etc/libvirt/hooks
├── qemu   <- The script that does the magic
└── qemu.d
    └── {VM Name}
        ├── prepare
        │   └── begin
        │       └── start.sh
        └── release
            └── end
                └── revert.sh
Anything in the directory /etc/libvirt/hooks/qemu.d/{VM Name}/prepare/begin
will run when starting your VM
Anything in the directory /etc/libvirt/hooks/qemu.d/{VM Name}/release/end
will run when your VM is stopped
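If the hook install didn't create these directories for you, you can make them by hand. This sketch uses a hypothetical VM named win10; substitute whatever virsh list --all shows for your VM:
# Hypothetical VM name "win10" -- substitute your own
sudo mkdir -p /etc/libvirt/hooks/qemu.d/win10/prepare/begin
sudo mkdir -p /etc/libvirt/hooks/qemu.d/win10/release/end
# The qemu hook dispatcher (and your scripts) must be executable
sudo chmod +x /etc/libvirt/hooks/qemu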
I've made my start script at /etc/libvirt/hooks/qemu.d/{VM Name}/prepare/begin/start.sh:
#!/bin/bash
# Helpful to read output when debugging
set -x
# Stop display manager
systemctl stop display-manager.service
## Uncomment the following line if you use GDM
#killall gdm-x-session
# Unbind VTconsoles
echo 0 > /sys/class/vtconsole/vtcon0/bind
echo 0 > /sys/class/vtconsole/vtcon1/bind
# Unbind EFI-Framebuffer
echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind
# Avoid a race condition by waiting 2 seconds. This can be calibrated to be shorter or longer if required for your system
sleep 2
# Unbind the GPU from the display driver (replace these PCI addresses with your own GPU's)
virsh nodedev-detach pci_0000_0c_00_0
virsh nodedev-detach pci_0000_0c_00_1
# Load VFIO Kernel Module
modprobe vfio-pci
NOTE: Gnome/GDM users: you have to uncomment the killall gdm-x-session line
in order for the script to work properly. Killing GDM does not destroy all user sessions the way other display managers do.
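Note that the PCI addresses in my script (0c:00.x) are specific to my machine. To find yours and translate them into virsh's pci_ naming scheme:
# Find the GPU and its HDMI audio function; note the bus addresses (e.g. 0c:00.0)
lspci -nn | grep -iE "vga|nvidia"
# An lspci address of 0c:00.0 becomes pci_0000_0c_00_0 for virsh
virsh nodedev-list --cap pci | grep 0c_00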
My stop script is /etc/libvirt/hooks/qemu.d/{VM Name}/release/end/revert.sh:
#!/bin/bash
set -x
# Re-Bind GPU to Nvidia Driver
virsh nodedev-reattach pci_0000_0c_00_1
virsh nodedev-reattach pci_0000_0c_00_0
# Reload nvidia modules
modprobe nvidia
modprobe nvidia_modeset
modprobe nvidia_uvm
modprobe nvidia_drm
# Rebind VT consoles
echo 1 > /sys/class/vtconsole/vtcon0/bind
# Some machines might have more than 1 virtual console. Add a line for each corresponding VTConsole
#echo 1 > /sys/class/vtconsole/vtcon1/bind
# Querying GPU info appears to nudge the driver awake so the framebuffer rebind below succeeds
nvidia-xconfig --query-gpu-info > /dev/null 2>&1
# Re-bind EFI-Framebuffer
echo "efi-framebuffer.0" > /sys/bus/platform/drivers/efi-framebuffer/bind
# Restart Display Manager
systemctl start display-manager.service
When running the VM, the scripts should now automatically stop your display manager, unbind your GPU from all drivers currently using it, and pass control over to libvirt. Libvirt handles binding the card to VFIO-PCI automatically.
When the VM is stopped, libvirt will also handle removing the card from VFIO-PCI. The stop script will then rebind the card to the Nvidia driver and SHOULD rebind your VT consoles and EFI-Framebuffer.
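If you'd like to sanity-check the scripts without libvirt in the loop, you can run them by hand. A sketch, again using the hypothetical win10 name; do this from a TTY or over SSH, since the start script kills your graphical session and unbinds the console:
sudo bash /etc/libvirt/hooks/qemu.d/win10/prepare/begin/start.sh
# The screen goes dark here; from SSH (or typing blind), bring everything back:
sudo bash /etc/libvirt/hooks/qemu.d/win10/release/end/revert.sh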
First of all, if you ask for help and then tell me you skipped some step... I'm gonna be a little annoyed. So before moving on to troubleshooting, and DEFINITELY before asking for help, make sure you've followed ALL of the steps of this guide. They are all here for a reason.
Logs can be found under /var/log/libvirt/qemu/[VM name].log
sudo virsh start {vmname}
sudo virsh list
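If the VM refuses to start or the hooks fail, these two are the first places I'd look (win10 again being a hypothetical VM name):
# QEMU's log for the VM itself
sudo tail -f /var/log/libvirt/qemu/win10.log
# Errors from the hook scripts end up in libvirtd's journal
sudo journalctl -f -u libvirtd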
Check out the ArchWiki entry for tips on audio. I've used Pulseaudio passthrough in the past, but am currently using a Scream IVSHMEM device on the VM.
Either of these will require a user systemd service. You can keep user systemd services running by enabling linger for your user account like so:
sudo loginctl enable-linger {username}
This will keep services running even when your account is not logged in. I do not know the security implications of this. My assumption is that it's not a great idea, but oh well.
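For reference, here is a minimal sketch of such a user service for the Scream receiver. It assumes the receiver binary lives at /usr/bin/scream and that the IVSHMEM device is mapped at /dev/shm/scream-ivshmem; check both paths on your own system:
mkdir -p ~/.config/systemd/user
cat > ~/.config/systemd/user/scream.service <<'EOF'
[Unit]
Description=Scream IVSHMEM audio receiver

[Service]
# -m points the receiver at the shared memory file exposed by the IVSHMEM device
ExecStart=/usr/bin/scream -m /dev/shm/scream-ivshmem
Restart=always

[Install]
WantedBy=default.target
EOF
systemctl --user enable --now scream.service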
Here are a few things I do to make managing the host easier.
Let me know your success and failure stories.