Joly0 opened this issue 10 months ago
@Husky110 It depends on the game.
I've been able to play Transport Fever 2 on Ubuntu 20.04 over RDP, and said Ubuntu 20.04 was running in an LXC container.
I've also been able to partition my 3090, inside a Windows 10 VM running on Proxmox, into 4 Hyper-V VMs and play more games over Parsec.
(Roblox failed to load though.)
But there are also other games like Cities Skylines 2 that you can't even install on Linux, despite said improvements to Proton.
So....it really depends.
@alpha754293 - Cities Skylines is playable on Linux... See https://www.protondb.com/app/949230 ;) ProtonUp-Qt is a must-have tho.
I still see your point. Just wanted to add to the discussion. :)
@Husky110 Cities Skylines 1 -- yes.
Cities Skylines 2 -- no.
@alpha754293 - Please check provided link. :)
I've also noticed this, unfortunately; I've tried every trick in the book to mask the fact that it's a VM, but with no luck :-/
Mostly EAC games such as Rust.
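For reference, the usual starting point is hiding the hypervisor CPUID bit via QEMU CPU flags, which is what a later comment in this thread does as well; a sketch, and by no means a reliable anti-cheat bypass, since EAC checks far more than this:
-cpu host,-hypervisor,kvm=off,hv_vendor_id=whatever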
@Husky110 I did check the link.
Here is the screenshot from my 5950X/3090 system (that I am testing dockur/windows on), where I have installed an Ubuntu 20.04 LXC container, passed my 3090 through, and installed Steam.
It doesn't give me the option to install it.
I'll have to do some more research to find out how to enable Proton (there is a "Proton Experimental.desktop" icon on my desktop, but it opens as a text file when I double-click it).
@alpha754293 - Right click on the game -> Properties -> Compatibility (works for every game in Linux) is where you wanna go. :) I suggest you install ProtonUp-Qt, since it sometimes needs a specific GE version.
@Husky110 Thank you. I appreciate your help and advice. I will have to try that.
@Husky110 So I tried what you suggested and, contrary to the reports on ProtonDB, I was not able to get Cities Skylines 2 working in Ubuntu 20.04. :(
Screenshot of Paradox Launcher failing to load
That's a bummer.
(And yes, I did install ProtonUp-Qt, and installed Proton Experimental, GE-Proton8-5, and GE-Proton9-1, and they all failed.)
I even tried the instructions that someone else had reported and that failed as well.
Thank you for trying to help though. Your help is greatly appreciated.
@alpha754293 - This will be my last comment on Cities, since this thread is about passing the GPU to QEMU...
Maybe you should try the solution suggested by cali (see screenshot):
Launch-Options can be set when you right-click your game in Steam.
AND NOW BACK TO THE TOPIC AT HAND!
@Husky110 Agreed and already tried that. (I tried all of the commands, one at a time, from that page, since I was motivated to want to get it to work.)
ok so I've been put here I guess, but is there a way to pass through an Nvidia GPU so I can do GPU stuff?
So far I have had no luck... I ended up just installing QEMU on the host. I am using an AMD GPU that's too old to use Proton (like one model too old), but it's decent enough to play most games at 30-60 fps on lowest settings.
The following is how I got my GPU running on HOST > QEMU. You need to do the host configuration a second time, but inside the Docker container.
Here is what I have learned so far, assuming you are doing GPU passthrough and not using a commercial/datacenter GPU (those can apparently be split between virtual machines, similar to how you assign CPUs).
On the HOST, here is a guide to what I had to do to set up GPU passthrough to QEMU; setting it up in Docker would basically just be doing the setup twice (once on the host and once inside the Docker container).
This assumes that you have already enabled CPU virtualization and IOMMU in your GRUB command line, if required. (Ask ChatGPT.)
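For reference, enabling IOMMU usually means something like this in /etc/default/grub, followed by sudo update-grub and a reboot (a sketch; intel_iommu=on shows up in later comments in this thread, and amd_iommu is the AMD-side counterpart):
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on iommu=pt"    # Intel CPUs
# GRUB_CMDLINE_LINUX_DEFAULT="quiet splash amd_iommu=on iommu=pt"   # AMD CPUs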
Notes: The method to load the VFIO drivers in my case was blacklisting. This will however remove any display output (unless you have a second graphics adapter, you will not be able to use a monitor and will have to connect remotely or via TTY). If, for example, you are running an integrated GPU alongside a dedicated GPU that relies on the same kernel driver, you will need to force the VFIO drivers to load instead. This goes beyond what I currently understand, though (as my GPU does not 'reset' properly once it has been initialized).
Run lspci -nnk | grep -A 7 VGA
on the host (the computer running Docker). This runs lspci -nnk, then searches the output for VGA, printing the VGA line plus the 7 following lines. Record the first set of numbers; in my case these are 02:00.0. However, I also have an audio device (HDMI), so I need to make note of 02:00.1 as well.
Record "Kernel driver in use: *****" In my case this was radeon.
Blacklist the driver: this step is different depending on your OS; I highly recommend asking ChatGPT: "What file do I need to modify to blacklist a driver on [insert host OS and version]?" Note: blacklisting is not required in all cases; depending on the exact hardware it is possible to unbind/bind the drivers without blacklisting.
You will need to write a simple script to load the VFIO drivers (again, this is going to be different for every OS; ask ChatGPT for this information). On Ubuntu Server it is:
sudo modprobe vfio
sudo modprobe vfio-pci
echo "0000:xx:xx.x" > /sys/bus/pci/devices/0000:xx:xx.x/driver/unbind
echo "0000:xx:xx.x" > /sys/bus/pci/drivers/vfio-pci/bind
The first two lines ensure that the vfio drivers are loaded
The following lines unbind the drivers from the device, then bind them with vfio-pci
Replace the x's with the numbers obtained in step 1, for each device on the bus (GPU + HDMI audio in most cases).
You need to do this for each GPU you want to pass into the system.
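Pulling the steps above together, here's a minimal sketch of such a script, assuming the example IDs from step 1 (02:00.0 GPU + 02:00.1 HDMI audio); the driver_override lines follow the approach used later in this thread, since a plain bind can fail if vfio-pci does not yet claim the device ID:
#!/bin/bash
# Sketch: rebind a GPU (+ its HDMI audio function) to vfio-pci.
# Substitute your own PCI addresses from `lspci -nnk | grep -A 7 VGA`.
modprobe vfio
modprobe vfio-pci
for dev in 0000:02:00.0 0000:02:00.1; do
    # Release the device from its current driver (radeon in the example), if bound
    if [ -e "/sys/bus/pci/devices/$dev/driver" ]; then
        echo "$dev" > "/sys/bus/pci/devices/$dev/driver/unbind"
    fi
    # Ask the kernel to bind this device to vfio-pci on the next probe
    echo vfio-pci > "/sys/bus/pci/devices/$dev/driver_override"
    echo "$dev" > /sys/bus/pci/drivers_probe
done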
Finally, reboot. If done correctly (depending on whether the drivers are blacklisted or not), your screen should either load for a split second and then go dark, OR it will show most of the boot sequence and then go dark when the VFIO drivers load. At this point you will need to switch over to a remote shell.
IF at this point you still have display output, then the vfio-pci drivers are not loaded, or you have two GPUs.
Run lspci -nnk | grep -A 7 VGA
again; this will confirm whether the device has the vfio-pci driver loaded.
At this point you need to modify QEMU's launch parameters; simply add -device vfio-pci,host=XX:XX.X,multifunction=on
If you are running a Windows guest, you will additionally need to mount one of the Windows VirtIO driver installer ISOs.
After installing the VirtIO drivers from the ISO on the Windows VM, you should be able to see your graphics card in Device Manager under Display adapters. At this point it should be fine to install the GPU drivers you would normally install on Windows.
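As a rough sketch of what the resulting host-side invocation can look like once both steps are done (the disk image and ISO filenames here are placeholders of mine, not from this guide):
qemu-system-x86_64 \
  -enable-kvm -machine q35 -cpu host -smp 4 -m 8G \
  -device vfio-pci,host=02:00.0,multifunction=on \
  -device vfio-pci,host=02:00.1 \
  -drive file=win10.qcow2,if=virtio \
  -cdrom virtio-win.iso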
Now that it has been done outside of a Docker image, you can start writing a Dockerfile that pulls from dockur/windows and modifies the startup scripts (in this case you probably want to replace the display.sh script, where you would need to adjust the QEMU launch parameters).
Note: I am unsure how to modify the underlying Linux host kernel for dockur/windows to blacklist certain drivers. So what kept happening for me is that the GPU passthrough would work, but then the Docker Linux would take over my GPU. This caused my GPU to need a full reset, as once it has been 'touched' by Linux its driver state becomes immutable.
So to follow up: this is not for the easily deterred. I barely managed to get GPU passthrough working Host > QEMU. By adding Docker, there's an entire extra layer of configuration you need to deal with: Host > Docker > QEMU. It's not impossible, though; it will just require a good chunk of time, especially if it's your first time doing it.
But if you successfully get Host > QEMU GPU passthrough working, it's not much further to get Host > Docker > QEMU GPU passthrough going.
ok so I've been put here I guess, but is there a way to pass through an Nvidia GPU so I can do GPU stuff?
Short answer: No, not with this.
Slightly longer answer:
If you want a Windows VM with Nvidia GPU passthrough, you can, 100%, do that.
(I have my 3090 running in a Windows 10 VM, which runs on top of Proxmox 7.4-17.)
But that was set up without this Docker-based VM; removing the Docker layer of complexity actually made passing the GPU through a LOT easier.
how?
@progamer562 Depending on what hypervisor you're using, you can google the instructions for GPU passthrough.
nvm, going old school
@progamer562 ok
Okay - since this discussion is still ongoing, I've put my ChatGPT subscription to use and asked it for help. Maybe someone (like @alpha754293, @jasonmbrown, or someone else with more time than me) finds it helpful: https://chat.openai.com/share/fd59f580-4d36-4c85-8645-d0e4a450ceaa Edit: The Ubuntu 22.04 base image is used since I've already figured out that you can replace the original Debian trixie with Ubuntu 22.04 here.
@Husky110 Thank you for tagging me.
I took a look at the answer that ChatGPT produced.
It calls the vfio-pci device/kernel module/driver and wants to pass that through to the QEMU VM, except that in my deployment notes for how to pass a GPU through to an LXC container, my Proxmox host has blacklisted the vfio-pci kernel module/driver, and as such, I can't pass it through to the LXC container and then on to the QEMU VM.
If someone else is able to test this, that would be great!
Hmm, I am a newbie in terms of Docker and virtual machines. I read all of the comments + the links provided. I'm facing the same issue: the Intel driver and OpenGL modules are installed, but the GPU doesn't appear in Device Manager.
Hi all,
I managed yesterday to have a successful GPU passthrough (Yay).
As mentioned before, switching the kernel module used by the GPU from i915 to vfio-pci was the key.
On my system (Debian, kernel 6.6.13) with an Intel Arc A380: in the BIOS, enable IOMMU and the VT-d / VT-x virtualization options.
Edit /etc/default/grub and add intel_iommu=on to GRUB_CMDLINE_LINUX_DEFAULT, run sudo update-grub to update GRUB, and restart.
I wanted to be able to switch the GPU between host and VM, and therefore decided to have a script instead of putting options in the modprobe loads. You can find how to pass a PCI device to vfio-pci in other links.
sudo lspci gives you the list of PCI devices. My GPU is listed as 03:00.0, its audio device as 04:00.0. It's important to pass both or you'll end up with some issues later on.
As root:
To detach from i915 to vfio-pci:
modprobe vfio vfio_pci
Then for both 0000:03:00.0 and 0000:04:00.0 in my case:
echo %s > /sys/bus/pci/devices/%s/driver/unbind
echo vfio-pci > /sys/bus/pci/devices/%s/driver_override
echo %s > /sys/bus/pci/drivers_probe
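With the %s placeholders filled in for the 03:00.0 GPU above, that is (repeat the same three lines for 0000:04:00.0):
echo 0000:03:00.0 > /sys/bus/pci/devices/0000:03:00.0/driver/unbind
echo vfio-pci > /sys/bus/pci/devices/0000:03:00.0/driver_override
echo 0000:03:00.0 > /sys/bus/pci/drivers_probe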
From now on, lspci -v | grep -A 15 " VGA " should give you vfio-pci as the driver in use.
My docker-compose file is as follows:
version: "3"
services:
windows:
image: dockurr/windows
build: .
container_name: windows
privileged: true
environment:
VERSION: "win11"
DEBUG: Y
RAM_SIZE: "16G"
CPU_CORES: "14"
ARGUMENTS: "-device vfio-pci,host=03:00.0,multifunction=on -device vfio-pci,host=04:00.0,multifunction=on"
devices:
- /dev/kvm
- /dev/vfio/1
group_add:
- "105"
volumes:
- ./storage:/storage
cap_add:
- NET_ADMIN
ports:
- 8006:8006
- 3389:3389/tcp
- 3389:3389/udp
stop_grace_period: 2m
restart: on-failure
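Note that the /dev/vfio/1 device and the "105" in group_add are specific to this machine. To find your own values, something like this should work (a sketch using the standard sysfs paths):
# IOMMU group of the GPU -> the N in /dev/vfio/N
readlink /sys/bus/pci/devices/0000:03:00.0/iommu_group
# numeric owning group of the device nodes -> candidate value for group_add
ls -ln /dev/kvm /dev/vfio/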
I was then able to install Intel GPU drivers within Windows with no issue.
I'm not an expert, so don't hesitate to comment on or correct this as needed.
@Xav-v If you are not an expert, then it's even more impressive that you got this working! I'm sure this will be very useful for other people, as they can follow your steps now. Thanks!
I'm glad you got it working. I really wish there was a tiny script that could generate the correct VFIO passthrough for noobs, but sadly it wouldn't work for everyone... (like me with my stupid AMD GPU and its inability to reset itself).
Hello, I've been following this issue for a while; I couldn't really participate in the discussion as my knowledge on the matter is very limited.
@Xav-v's tutorial did the trick for me as well, so I successfully managed passthrough on my dual-GPU setup, using my bench Nvidia 1050 Ti (ancient, I know) for the Docker side itself.
My configuration is almost identical to his. I also tried to configure looking-glass w/ IddSampleDriver, but had no success: it is supposedly failing at configuring the SPICE server for the IVSHMEM device, and I am not entirely sure whether it has to be configured as a plain file volume or as a device.
One thing I know for sure is that the file is correctly initialized, with the following procedure:
touch /dev/shm/looking-glass
Also, it is mandatory to have privileged: true, otherwise the container would fail on me with RLIMIT_MEMLOCK messages.
services:
  windows:
    image: dockurr/windows:latest
    container_name: W11-Core
    privileged: true
    environment:
      VERSION: "win11"
      RAM_SIZE: "12G"
      CPU_CORES: "4"
      DEVICE2: "/dev/sda"
      ARGUMENTS: >
        -device vfio-pci,host=23:00.0,multifunction=on
        -device vfio-pci,host=23:00.1,multifunction=on
        -device ivshmem-plain,memdev=ivshmem,bus=pcie.0
        -object memory-backend-file,id=ivshmem,share=on,mem-path=/dev/shm/looking-glass,size=32M
    devices:
      - /dev/kvm
      - /dev/sda
      - /dev/vfio/22
      - /dev/vfio/vfio
      - /dev/shm/looking-glass
    # volumes:
    #   - /dev/shm/looking-glass:/dev/shm/looking-glass
    cap_add:
      - NET_ADMIN
    ports:
      - 8006:8006
      - 3389:3389/tcp
      - 3389:3389/udp
    stop_grace_period: 2m
    restart: on-failure
As I said, I suppose we are missing a SPICE server configuration; this is how virt-manager would do it, and we'd probably disable the default VNC display as suggested in this issue. So far I've had no luck, as (supposedly) the Docker instance is missing the required QEMU module.
ARGUMENTS: >
  -device vfio-pci,host=23:00.0,multifunction=on
  -device vfio-pci,host=23:00.1,multifunction=on
  -device ivshmem-plain,memdev=ivshmem,bus=pcie.0
  -object memory-backend-file,id=ivshmem,share=on,mem-path=/dev/shm/looking-glass,size=32M
  -spice port=5900
[+] Running 1/0
✔ Container W11-Core Created
Attaching to W11-Core
W11-Core | ❯ Starting Windows for Docker v2.08...
W11-Core | ❯ For support visit https://github.com/dockur/windows
W11-Core |
W11-Core | ❯ Booting Windows using QEMU emulator version 8.2.1 ...
W11-Core | ❯ ERROR: qemu-system-x86_64: -spice 5900: There is no option group 'spice'
W11-Core | qemu-system-x86_64: -spice 5900: Perhaps you want to install qemu-system-modules-spice package?
W11-Core exited with code 0
Just in case someone else needs them: an older gist on GPU passthrough, and a guide to IddSampleDriver + Looking Glass.
Last but not least, I would like to thank both the project owner and all participants!
Good news everyone, I did in fact manage to make looking-glass work as intended!
Of course, there is still something missing (such as audio, and the clipboard not being sync'd), but it is only a matter of configuration at this point.
My intuition was, in fact, correct: the qemu-system-modules-spice package was missing, so I had to slightly modify the image by adding the Debian testing repository (and thus the package).
FROM dockurr/windows:latest
# Add testing repository
RUN echo "deb http://deb.debian.org/debian/ testing main" >> /etc/apt/sources.list.d/sid.list
RUN echo -e "Package: *\nPin: release n=trixie\nPin-Priority: 350" | tee -a /etc/apt/preferences.d/preferences > /dev/null
RUN apt-get update && \
apt-get --no-install-recommends -y install \
qemu-system-modules-spice
ENTRYPOINT ["/usr/bin/tini", "-s", "/run/entry.sh"]
I then built the new image via
docker buildx build -t windows-spice --file spice-support.dockerfile .
I then found the looking-glass documentation, which told me all I needed to know to configure the passthrough the way I wanted.
By default the looking-glass host on Windows uses port 5900; I'm not going to change that, but you are required to expose that port (and I did, on port 60400: 60400:5900).
As a matter of fact, you should NOT disable the display, as that disables all displays; you could theoretically pass -vga none as an additional argument, though.
One major difference from yesterday is that I decided to set up the IVSHMEM with the KVMFR module, as suggested by the documentation itself:
Please be aware that as a result you will not be able to take advantage of your GPUs ability to access memory via it’s hardware DMA engine if you use this method.
For Arch Linux there's an AUR package available: looking-glass-module-dkms
# Configure KVMFR (IVSHMEM) with 32MB (ideal for 1920x1080)
modprobe kvmfr static_size_mb=32
modprobe kvmfr
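If you want kvmfr to come back with the same size after a reboot, the standard modprobe.d / modules-load.d mechanism should do it (my own addition, so treat it as an assumption rather than part of the guide):
# /etc/modprobe.d/kvmfr.conf
options kvmfr static_size_mb=32
# /etc/modules-load.d/kvmfr.conf
kvmfr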
My full docker compose .yaml file configuration ahead!
services:
  windows:
    image: windows-spice
    container_name: W11-Core
    privileged: true
    environment:
      VERSION: "win11"
      RAM_SIZE: "12G"
      CPU_CORES: "4"
      DEVICE2: "/dev/sda"
      ARGUMENTS: >
        -device vfio-pci,host=23:00.0,multifunction=on
        -device vfio-pci,host=23:00.1,multifunction=on
        -device ivshmem-plain,id=shmem0,memdev=looking-glass
        -object memory-backend-file,id=looking-glass,mem-path=/dev/kvmfr0,size=32M,share=yes
        -device virtio-mouse-pci
        -device virtio-keyboard-pci
        -device virtio-serial-pci
        -spice addr=0.0.0.0,port=5900,disable-ticketing
        -device virtio-serial-pci
        -chardev spicevmc,id=vdagent,name=vdagent
        -device virtserialport,chardev=vdagent,name=com.redhat.spice.0
    devices:
      - /dev/kvm
      - /dev/sda
      - /dev/vfio/22
      - /dev/vfio/vfio
      - /dev/kvmfr0
    cap_add:
      - NET_ADMIN
    ports:
      - 60400:5900
      - 8006:8006
      - 3389:3389/tcp
      - 3389:3389/udp
    stop_grace_period: 2m
    restart: on-failure
Of course, IddSampleDriver and looking-glass require just the right configuration as well.
I'd suggest installing IddSampleDriver at C:\IddSampleDriver\, then configuring C:\IddSampleDriver\option.txt with only the right resolution (for some reason it defaults to 640x480, which is unusable with a virtio mouse); my primary monitor is 1920x1080@144hz, thus:
1
1920, 1080, 144
You'd probably want to configure looking-glass on the Windows host as well; by default it should be installed at C:\Program Files\Looking Glass (host), so add a looking-glass-client.ini.
Have a look here for the available configuration options; as I am using an Nvidia card (1050 Ti) for the passthrough, I have enabled the nvfbc interface.
[app]
capture=nvfbc
It might be fine with the default configuration.
You then HAVE to configure looking-glass for the Linux client itself; it has to match the docker compose .yaml configuration. Again, have a look at the official documentation, as my configuration may not work for you (e.g. I use right Ctrl to toggle capture mode, which locks the mouse/keyboard).
[app]
shmFile=/dev/kvmfr0
[win]
title=WizariMachine
size=1920x1080
keepAspect=yes
borderless=yes
fullScreen=no
showFPS=yes
[input]
ignoreWindowsKeys=no
escapeKey=97
mouseSmoothing=no
mouseSens=1
[wayland]
warpSupport=yes
fractionScale=yes
[spice]
port=60400
Run the container, then run looking-glass-client from your Linux host; at this point you should see your Windows machine.
Finally, connect via VNC like you normally would and change which one is the primary display (or disable the default altogether).
I am also going to attach some screenshots where you can clearly see I am on Linux (Wayland, Hyprland, a plain and simple ags bar on top). I tested both FurMark (for the video capabilities) and GZDoom/YouTube (mouse, keyboard and display latency); I'd say there is no noticeable latency at all.
EDIT 1: nvfbc is only supported on "professional grade GPUs", so I suppose it is automatically falling back to dxgi then?
EDIT 2: I've lately been busy with studies, but I figured out a way to also enable audio via PulseAudio/PipeWire; as always, I am not an expert. I'm not sure if -audio spice would somehow work by itself, but I found that passing the native PulseAudio unix socket as a volume (on Arch /run/user/1000/pulse/native; mount it anywhere you please, e.g. /tmp/pa), then configuring it MANUALLY (audiodev + device QEMU arguments instead of -audio), just works.
Of course, I'm not taking full credit: I had a look at the QEMU documentation, this very forum post which explained how to set up a PulseAudio socket (which I totally skipped, giving it the native socket instead xD), and this StackOverflow thread.
TL;DR: add these lines to ARGUMENTS (configuration above):
-device ich9-intel-hda,addr=1f.1
-audiodev pa,id=snd0,server=unix:/tmp/pa
-device hda-output,audiodev=snd0
Also mount the PipeWire/PulseAudio socket as a Docker volume:
volumes:
  - /run/user/1000/pulse/native:/tmp/pa
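If the guest stays silent, a quick sanity check is to confirm the socket actually landed inside the container and matches the host's PulseAudio server (my own suggestion; W11-Core is the container name from the compose file above):
docker exec W11-Core ls -l /tmp/pa    # socket visible inside the container?
pactl info | grep 'Server String'     # host socket path, should match the volume source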
I'd say only clipboard sharing is missing.
@jasonmbrown - I'm using this script for that; of course, adapt it to your needs (the array):
#!/bin/bash
#vfio-pci or i915
array=( '0000:03:00.0' '0000:04:00.0' )
while getopts t: flag
do
case "${flag}" in
t) type=${OPTARG};;
esac
done
modprobe vfio
modprobe vfio_pci
modprobe i915
for pcid in "${array[@]}"
do
echo "Switching pcids $pcid to $type"
echo $pcid > "/sys/bus/pci/devices/$pcid/driver/unbind"
echo $type > "/sys/bus/pci/devices/$pcid/driver_override"
echo $pcid > "/sys/bus/pci/drivers_probe"
done
You have to call this script with either -t vfio-pci or -t i915.
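For example, to hand the GPU to the VM before starting the container and reclaim it afterwards (assuming the script is saved as gpu-switch.sh; the filename is my own, not part of the script above):
sudo ./gpu-switch.sh -t vfio-pci    # detach from the host, ready for passthrough
# ...start the container, use the VM...
sudo ./gpu-switch.sh -t i915        # give the GPU back to the host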
Hello,
I have been following this issue as a silent reader since it was created, and want to thank everyone who has provided so much information on this topic.
I'd like to throw another layer into the pit regarding passing GPUs to Windows running inside Docker via this project.
I would be highly interested in any information about not doing a full GPU passthrough, but instead splitting a GPU into vGPUs using the https://github.com/mbilker/vgpu_unlock-rs project (a detailed tutorial on how to do this with a Proxmox server can be found at https://gitlab.com/polloloco/vgpu-proxmox) and then passing a vGPU to a specific Windows Docker container.
Maybe someone has already tried this. It works like a charm on Proxmox with Windows VMs using, for example, enterprise GPUs like the Tesla M40 or Tesla P4.
Thanks in advance
Hi, new to this thread and having a go at the config to get an NVIDIA card passed through to a Docker image (dockur/windows) and have it show up in the nested VM. I have the card showing up in nvidia-smi in the Docker container and am about to do the passthrough from there to the Windows 11 VM. I did this by installing the NVIDIA container tools on the host, then passing through the GPU using Portainer and/or command-line switches in the docker run command (I don't use compose), then installing the NVIDIA drivers and the NVIDIA container toolkit in the Docker container.
I just wanted to ask, as my server is headless: do I really need to add in vfio-pci and/or looking-glass on the Docker image? From the perspective of the Docker image, it is the only thing using the card... so can't I just forward the PCI device?
There are other Docker images using it for other purposes, but the Windows image will be the only one using it for 'display'.
Hi @kroese, The previous discussions have been quite technical. While some users have reportedly been successful in passing through their GPUs to Dockerized Windows containers, the process seems complex for those who are not Docker experts. Is there a plan to simplify GPU passthrough in the future? Ideally, users like myself could easily enable it by adding just a few lines of configuration to the docker-compose.yml file.
Would it be possible to create a video teaching how to do "GPU Passthrough"?
Here is a screenshot of my configuration. I've got passthrough of a 1660 SUPER working, plus passthrough of a NIC, and I've also hidden the CPU virtualization from the guest. This is my screenshot. The ARGUMENTS variable is as follows: -device vfio-pci,host=01:00.0,multifunction=on -device vfio-pci,host=01:00.1,multifunction=on -device vfio-pci,host=01:00.2,multifunction=on -device vfio-pci,host=88:00.0,multifunction=on -device usb-host,vendorid=0x0557,productid=0x2419 -cpu host,-hypervisor,+kvm_pv_unhalt,+kvm_pv_eoi,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed,kvm=off,hv_vendor_id=intel
Adjust the specific IDs to match your own hardware.
#!/bin/bash
# Names of the Docker containers to control
DOCKER_CONTAINERS=("qbittorrent" "nas-tools" "transmission" "xiaoyaliu" "MoviePilot")
# Current hour (24-hour clock) and current minute
CURRENT_HOUR=$(date +"%H")
CURRENT_MINUTE=$(date +"%M")
# Log file path
LOG_FILE="/mnt/user/domains/docker_control.log"
# Write a message to the log file and to stdout
log() {
  echo "[$(date +"%Y-%m-%d %H:%M:%S")] $1" >> "$LOG_FILE"
  echo "[$(date +"%Y-%m-%d %H:%M:%S")] $1"
}
# Rotate the log once it exceeds 50 MB
cleanup_logs() {
  log_size=$(du -m "$LOG_FILE" | cut -f1)
  max_log_size=50
  if [ "$log_size" -gt "$max_log_size" ]; then
    mv "$LOG_FILE" "/mnt/user/domains/dockercontrol$(date +"%Y%m%d%H%M%S").log"
    touch "$LOG_FILE"  # start a fresh log file
    log "Log file exceeded 50 MB; rotated."
  fi
}
log "Script execution started."
# Between 00:00 and 07:59:59 start the containers, otherwise stop them
if [ "$CURRENT_HOUR" -ge 0 ] && [ "$CURRENT_HOUR" -lt 8 ] && [ "$CURRENT_MINUTE" -lt 60 ]; then
  log "Current time is between 00:00 and 07:59:59; starting Docker containers..."
  for CONTAINER in "${DOCKER_CONTAINERS[@]}"; do
    log "Starting container $CONTAINER..."
    if docker start "$CONTAINER" >> "$LOG_FILE" 2>&1; then
      log "Container $CONTAINER started successfully."
      sleep 5  # wait 5 seconds
    else
      log "Container $CONTAINER failed to start; see the log for details."
    fi
  done
else
  log "Current time is not between 00:00 and 07:59:59; stopping Docker containers..."
  for CONTAINER in "${DOCKER_CONTAINERS[@]}"; do
    log "Stopping container $CONTAINER..."
    if docker stop "$CONTAINER" >> "$LOG_FILE" 2>&1; then
      log "Container $CONTAINER stopped successfully."
      sleep 5  # wait 5 seconds
    else
      log "Container $CONTAINER failed to stop; see the log for details."
    fi
  done
fi
log "Script execution finished."
cleanup_logs
This is my scheduled Docker start/stop script. I'm currently writing code to do this job with something like the virsh shutdown command instead.
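For what it's worth, with this project a plain docker stop should already behave much like virsh shutdown: judging by the compose files above, the container gets a stop_grace_period in which to power the guest down cleanly. So something like this may be all that's needed (an assumption on my part, with "windows" standing in for your container name):
docker stop -t 120 windows    # request a clean guest shutdown, hard-stop after 120s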
Do I need to have two video cards? Or can I pass it through to Windows with only one video card (the host's)?
Thanks!
Could you share an email address so I can ask how you got passthrough working under Unraid? I tried it as well, but I got errors.
Hello All,
I'm not sure if anyone is still curious how to pass a GPU through to the VM directly on an UnRaid system but, if you are, I have a quick-hit guide listed below.
NOTES: This is an UnRaid setup w/ NVIDIA | I have 2 GPUs on bare metal (1080 & 3060) & am DEDICATING one (3060) to the Windows inside Docker | Mileage may vary.
On the UnRaid terminal as root:
lspci -nnk | grep -i -A 3 'VGA'
Output:
03:00.0 VGA compatible controller [0300]: NVIDIA Corporation GA106 [GeForce RTX 3060] [10de:2503] (rev a1)
  Subsystem: eVga.com. Corp. GA106 [GeForce RTX 3060] [3842:3657]
  Kernel driver in use: nvidia
  Kernel modules: nvidia_drm, nvidia
03:00.1 Audio device [0403]: NVIDIA Corporation GA106 High Definition Audio Controller [10de:228e] (rev a1)
  Subsystem: eVga.com. Corp. Device [3842:3657]
81:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP104 [GeForce GTX 1080] [10de:1b80] (rev a1)
  Subsystem: Gigabyte Technology Co., Ltd GP104 [GeForce GTX 1080] [1458:3702]
  Kernel driver in use: nvidia
  Kernel modules: nvidia_drm, nvidia
Make note of the device you want to add to the VM; in my case it's:
03:00.0 VGA compatible controller [0300]: NVIDIA Corporation GA106 [GeForce RTX 3060] [10de:2503] (rev a1)
03:00.1 Audio device [0403]: NVIDIA Corporation GA106 High Definition Audio Controller [10de:228e] (rev a1)
UnRaid Docker setup: how to add the 3 device types & 1 variable.
The variable is, as you might expect, well... variable. Change the code below based on your system output above; in my case it's built like this:
-device vfio-pci,host=03:00.0,multifunction=on -device vfio-pci,host=03:00.1,multifunction=on
If I wanted to use the 1080, it'd be built like this:
-device vfio-pci,host=81:00.0,multifunction=on
etc.
Save the Docker container; it will set up successfully but NOT start successfully - this is expected! You should see an error in the logs stating that it can't access the VFIO device etc.
On the UnRaid terminal as root:
lspci -nnk | grep -i -A 3 'VGA'
NOTE: the kernel driver in use is still nvidia:
Kernel driver in use: nvidia
Time to unbind NVIDIA & bind to VFIO-PCI.
Based on the output above, my GPU video device ID is 03:00.0 & vendor ID is 10de:2503:
echo "0000:03:00.0" > /sys/bus/pci/devices/0000:03:00.0/driver/unbind
echo "10de 2503" > /sys/bus/pci/drivers/vfio-pci/new_id
OR
echo "0000:03:00.0" > /sys/bus/pci/drivers/vfio-pci/bind - updated command with Unraid 6.12.13
Based on the output above, my GPU audio device ID is 03:00.1 & vendor ID is 10de:228e:
echo "0000:03:00.1" > /sys/bus/pci/devices/0000:03:00.1/driver/unbind
echo "10de 228e" > /sys/bus/pci/drivers/vfio-pci/new_id
OR
echo "0000:03:00.1" > /sys/bus/pci/drivers/vfio-pci/bind - updated command with Unraid 6.12.13
NOTE: the kernel driver in use should now be vfio-pci:
Kernel driver in use: vfio-pci
Before: / After: (screenshots)
Start the Docker container & see if it boots. Let it run through the install; once you hit the desktop, type device manager in the start menu - you should see your GPU in there. Add the device drivers, reboot, and the device should now show up in Task Manager as a dedicated GPU.
Device Manager: (screenshot)
Task Manager: (screenshot)
ENDING NOTES:
The changes made in the unbind-NVIDIA & bind-to-VFIO-PCI section stay in effect until a reboot of the host (UnRaid); after a reboot you will need to redo that section. You can however run a script on startup or on demand to help automate the process (see the sketch below). I can add that onto this if enough people ask for it. Hope this helps & I didn't miss anything :)
ALSO HUGE THANK YOU FOR THIS PROJECT, IT'S EXACTLY WHAT I NEEDED!!!!!
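As a sketch of what such a startup script could look like in the User Scripts plugin (the PCI addresses and vendor:device IDs are the ones from my output above; adjust them for your own card):
#!/bin/bash
# Re-do the unbind/bind after every UnRaid reboot ("At Startup of Array")
modprobe vfio-pci
for dev in 0000:03:00.0 0000:03:00.1; do
    if [ -e "/sys/bus/pci/devices/$dev/driver" ]; then
        echo "$dev" > "/sys/bus/pci/devices/$dev/driver/unbind"
    fi
done
echo "10de 2503" > /sys/bus/pci/drivers/vfio-pci/new_id    # 3060 video
echo "10de 228e" > /sys/bus/pci/drivers/vfio-pci/new_id    # 3060 HDMI audio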
I assume that you were running the Docker container from a Linux host machine, not a Windows host machine, right? :)
Hey man, would you mind sharing your full docker-compose? I'm on an AMD 5825U.
To those interested in this: I've written a script that automatically does this.
To those interested in this: I've written a script that automatically binds and unbinds (#845). It's still a work in progress, so testers would be helpful. The current version needs to be run in User Scripts {with modifications}, as I still need to find a way to run the script pre-start and post-stop of the container.
You will still need to set up the variables, except the arguments.
Once I have a GPU for my server I can test further.
environment:
  GPU: "Y"
devices:
Having an issue with my headless server running this Docker container with an Intel HD 530 GPU. VFIO is working and enabled. KVM is working and enabled. Using Docker with the compose plugin.
00:02.0 VGA compatible controller: Intel Corporation HD Graphics 530 (rev 06) (prog-if 00 [VGA controller])
Kernel driver in use: vfio-pci
However, it restarts with an error. This is my docker compose file:
services:
  windows:
    image: dockurr/windows
    container_name: windows
    privileged: true
    environment:
      VERSION: "https://www.microsoft.com/legitwindows.iso"
      DEBUG: Y
      DISK_SIZE: "64G"
      RAM_SIZE: "4G"
      CPU_CORES: "2"
      USERNAME: "NOSNOOPING"
      PASSWORD: "NOSNOOPING"
      REGION: "abc"
      KEYBOARD: "abc"
      GPU: "Y"
      ARGUEMENTS: >
        -device vfio-pci,host=00:02.0,multifunction=on
    devices:
      - /dev/kvm
      - /dev/vfio/
    cap_add:
      - NET_ADMIN
    ports:
      - 8006:8006
      - 3389:3389/tcp
      - 3389:3389/udp
    stop_grace_period: 2m
    volumes:
      - /home/user/docker/windows/data:/storage
      - /home/user/docker/windows/shared:/data
    restart: always
windows | -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pcie.0,addr=0x1c
windows | ❯ Booting Windows using QEMU v9.1.1...
windows | ❯ ERROR: qemu-system-x86_64: egl: no drm render node available
windows | qemu-system-x86_64: egl: render node init failed
My 530 iGPU is the only device in IOMMU group 0, too. Running Debian Bookworm. Docker running privileged.
My GRUB command line:
ro net.ifnames=0 consoleblank=0 intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction initcall_blacklist=sysfb_init video=simplefb:off video=vesafb:off video=efifb:off video=vesa:off disable_vga=1 vfio_iommu_type1.allow_unsafe_interrupts=1 kvm.ignore_msrs=1 modprobe.blacklist=i915
dmesg shows IOMMU enabled and working, with devices assigned to the specified groups.
And finally, some information:
IOMMU Group 0:
00:02.0 VGA compatible controller [0300]: Intel Corporation HD Graphics 530 [8086:1912] (rev 06)
IOMMU Group 1:
00:00.0 Host bridge [0600]: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor Host Bridge/DRAM Registers [8086:191f] (rev 07)
IOMMU Group 2:
00:14.0 USB controller [0c03]: Intel Corporation 100 Series/C230 Series Chipset Family USB 3.0 xHCI Controller [8086:a12f] (rev 31)
00:14.2 Signal processing controller [1180]: Intel Corporation 100 Series/C230 Series Chipset Family Thermal Subsystem [8086:a131] (rev 31)
IOMMU Group 3:
00:17.0 SATA controller [0106]: Intel Corporation Q170/Q150/B150/H170/H110/Z170/CM236 Chipset SATA Controller [AHCI Mode] [8086:a102] (rev 31)
IOMMU Group 4:
00:1f.0 ISA bridge [0601]: Intel Corporation Q150 Chipset LPC/eSPI Controller [8086:a147] (rev 31)
00:1f.2 Memory controller [0580]: Intel Corporation 100 Series/C230 Series Chipset Family Power Management Controller [8086:a121] (rev 31)
00:1f.3 Audio device [0403]: Intel Corporation 100 Series/C230 Series Chipset Family HD Audio Controller [8086:a170] (rev 31)
00:1f.4 SMBus [0c05]: Intel Corporation 100 Series/C230 Series Chipset Family SMBus [8086:a123] (rev 31)
IOMMU Group 5:
00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet Connection (2) I219-LM [8086:15b7] (rev 31)
Modules loaded:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
Hey, I would like to know if this container is capable of passing a GPU through to the VM inside the container. I have looked into the upstream Docker container qemus/qemu-docker, which seems to have some logic for GPU passthrough, though some documentation for this here would be great, if it is possible.
I also tried to connect to the container using Virtual Machine Manager, but unfortunately I wasn't able to connect to it. Any idea why?