Closed: bernharl closed this issue 2 years ago
Hi,
I am not exactly sure.
AFAIK tools like bumblebee set some environment variables.
If those variables are set in the container environment, too, it might work (x11docker option `--env`).
Does x11docker already install the NVIDIA driver in the container?
I found it out yesterday. The container does install the Nvidia driver, yes.
What I did was set the same environment variables as this script (originally part of the nvidia-prime arch linux package): https://github.com/archlinux/svntogit-packages/blob/packages/nvidia-prime/trunk/prime-run
Setting these variables in the container makes applications run on the Nvidia card.
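For reference, the prime-run wrapper linked above boils down to three environment variables. The `export` lines below are taken from that script; the x11docker call is only a sketch of forwarding them with `--env` (the image name and `glxinfo` command are just examples):

```shell
# The three variables set by the prime-run wrapper (from the nvidia-prime package):
export __NV_PRIME_RENDER_OFFLOAD=1        # ask GLX/EGL to offload rendering to the NVIDIA GPU
export __GLX_VENDOR_LIBRARY_NAME=nvidia   # select the NVIDIA GLX vendor library
export __VK_LAYER_NV_optimus=NVIDIA_only  # restrict Vulkan to the NVIDIA device

# Sketch: pass the same variables into the container via x11docker's --env option.
# Guarded so the snippet is harmless where x11docker is not installed;
# x11docker/check is used here only as an example image.
if command -v x11docker >/dev/null 2>&1; then
  x11docker --gpu \
    --env __NV_PRIME_RENDER_OFFLOAD=1 \
    --env __GLX_VENDOR_LIBRARY_NAME=nvidia \
    --env __VK_LAYER_NV_optimus=NVIDIA_only \
    x11docker/check glxinfo
fi
```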
By the way: With Nvidia 490 supporting GBM as well as Gnome 41 having support for Nvidia's GBM path, x11docker now works with Wayland and Xwayland, even on Nvidia!
Thank you for the feedback! Great that you already found a solution.
x11docker could set these variables if present in the environment and option `--gpu` is set. So `prime-run x11docker --gpu [...]` would work immediately.
> By the way: With Nvidia 490 supporting GBM as well as Gnome 41 having support for Nvidia's GBM path, x11docker now works with Wayland and Xwayland, even on Nvidia!
Interesting news. However, does this only work with Gnome >= 41 and Nvidia >= 490? Does it not work for other desktop environments?
The latest commit adds `prime-run` support. Running `prime-run x11docker --gpu [...]` should work now. Please update and run a test.
> Interesting news. However, does this only work with Gnome >= 41 and Nvidia >= 490? Does it not work for other desktop environments?
I bet it works on any Wayland compositor that uses Nvidia's GBM implementation. Right now I think that is KDE and Gnome (unsure about Sway and other wlroots-based ones).
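A rough way to check whether a given system can use Nvidia's GBM path is to look for the GBM backend library that recent Nvidia drivers install, and for the session type. The library path varies by distribution, so treat this as a diagnostic sketch, not an authoritative test:

```shell
# Look for Nvidia's GBM backend library (installed by drivers with GBM support;
# the location differs between distributions, so several globs are tried).
found=$(ls /usr/lib*/gbm/nvidia-drm_gbm.so /usr/lib/*/gbm/nvidia-drm_gbm.so 2>/dev/null | head -n1)
echo "nvidia GBM backend: ${found:-not found}"

# Wayland vs X11 session (unset outside a graphical session)
echo "session type: ${XDG_SESSION_TYPE:-unknown}"
```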
> The latest commit adds `prime-run` support. Running `prime-run x11docker --gpu [...]` should work now. Please update and run a test.
This works, thank you!
One thing though: Is there any way to bypass the fact that you need to have the same username inside the container as on the host for the GPU to work? This happens for both my integrated GPU and my discrete GPU. When not using prime-run, a different username leads to software rendering (llvmpipe), while with prime-run nothing works at all.
> One thing though: Is there any way to bypass the fact that you need to have the same username inside the container as on the host for the GPU to work?
The container user chosen with option `--user` should make no difference. Only with options `--weston`, `--kwin` and `--hostwayland` must the container user be the same as the host user. This is because the Wayland socket resides in `XDG_RUNTIME_DIR`, which is owned by the host user. X sockets (including those of Xwayland) reside in `/tmp/.X11-unix` and are not bound to specific users.
In which setup does `--gpu` not work for you (command example)?
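The ownership difference described above is easy to see on the host. A quick diagnostic, using the standard locations mentioned (`XDG_RUNTIME_DIR` for Wayland, `/tmp/.X11-unix` for X):

```shell
# Wayland socket directory: per-user, owned by the host user
rundir="${XDG_RUNTIME_DIR:-/run/user/$(id -u)}"
ls -l "$rundir"/wayland-* 2>/dev/null || echo "no wayland socket in $rundir"

# X socket directory: shared, world-writable with sticky bit, not bound to one user
ls -ld /tmp/.X11-unix 2>/dev/null || echo "no X socket directory"
```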
I have added a check for NVIDIA>=470.x and Xwayland>=21.1.2 that should support Wayland and Xwayland setups with NVIDIA cards.
> One thing though: Is there any way to bypass the fact that you need to have the same username inside the container as on the host for the GPU to work?
This should not happen and I'd still be interested to hear more about this. Can you give me an example?
> > One thing though: Is there any way to bypass the fact that you need to have the same username inside the container as on the host for the GPU to work?
>
> The container user chosen with option `--user` should make no difference. Only with options `--weston`, `--kwin` and `--hostwayland` must the container user be the same as the host user. This is because the Wayland socket resides in `XDG_RUNTIME_DIR`, which is owned by the host user. X sockets (including those of Xwayland) reside in `/tmp/.X11-unix` and are not bound to specific users. In which setup does `--gpu` not work for you (command example)?
Sorry for the late reply, totally forgot about this!
My command is:
x11docker --network=host -i -g --share=ros --env TERM=${TERM} --init --hostdisplay --group-add={video,render} --sudouser <image>
Thank you for responding!
> My command is: `x11docker --network=host -i -g --share=ros --env TERM=${TERM} --init --hostdisplay --group-add={video,render} --sudouser`
I've tested this with:
x11docker --network=host -i -g --env TERM=${TERM} --init --hostdisplay --group-add={video,render} --sudouser x11docker/check bash
GPU acceleration works well here. Is this really an example where GPU does not work for you?
Side notes:
- You do not need to set options `--init` and `--group-add={video,render}`; x11docker sets them automatically.
- `--network=host` should only be used if there is an urgent need to do so.

> GPU acceleration works well here. Is this really an example where GPU does not work for you?
No, you are right. I meant to have `--user=RETAIN`, not `--sudouser`.
> You do not need to set options `--init` and `--group-add={video,render}`. x11docker sets them automatically.
Thank you for the tip.
> `--network=host` should only be used if there is an urgent need to do so.
I know. I'm not using containers to actually containerize applications; I'm using them to run programs that would otherwise be troublesome to run on my distro, and they need access to all ports on my system. In my mind, using host networking is no more dangerous than running the program outside a container, which is what I would do if I could.
> No, you are right. I meant to have `--user=RETAIN`, not `--sudouser`.
Thank you, I found a bug! Though, it caused X access to fail independently of `--gpu`. It is fixed now in the v7.0.x releases; `--gpu --user=RETAIN` works now.
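To verify the fix end to end, the OpenGL renderer string reported inside the container shows which device actually renders. A guarded sketch (the `x11docker/check` image and `glxinfo` command are examples; `glxinfo` must exist in whatever image is used):

```shell
# Guarded so this is a no-op where x11docker is not installed.
if command -v x11docker >/dev/null 2>&1; then
  # Without prime-run: expect the integrated GPU (llvmpipe would mean software rendering)
  x11docker --gpu --user=RETAIN x11docker/check glxinfo | grep "OpenGL renderer"
  # With prime-run: expect the NVIDIA GPU
  prime-run x11docker --gpu --user=RETAIN x11docker/check glxinfo | grep "OpenGL renderer"
else
  echo "x11docker not installed; skipping live check"
fi
```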
Hi,
I have a laptop with an integrated Intel GPU and a discrete Nvidia GPU. I have set up the Nvidia driver to work with x11docker, but how do I make applications in a container actually use the Nvidia card and not the Intel one?