keeema opened this issue 3 months ago
Which version do you use? Qt 4/5/6 or OSMesa?
Qt5
Today I also tried OSMesa. nvidia-smi still doesn't show any GPU usage, but it generates snapshots in the same time as on my Mac M1 (no difference whether I use AKS with or without a GPU). So it doesn't make sense to me that it still doesn't use the GPUs even when I use a node with GPUs, but the performance is OK for me now.
OSMesa uses software rendering, so the OSMesa version is working as designed.
The Qt version uses the QOffscreenSurface class for rendering. It should use hardware acceleration.
Could you please check hardware acceleration with another app to validate the environment?
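For example, a small standalone Qt 5 program (a minimal sketch for illustration, not part of LDView) can report which OpenGL implementation an offscreen context actually resolves to. A GL_RENDERER string mentioning NVIDIA would confirm hardware acceleration; "llvmpipe" or "softpipe" means the container is falling back to Mesa software rendering:

```cpp
#include <QGuiApplication>
#include <QOffscreenSurface>
#include <QOpenGLContext>
#include <QOpenGLFunctions>
#include <QDebug>

int main(int argc, char **argv)
{
    // Run with e.g. "-platform eglfs" or "-platform offscreen" to match the headless setup.
    QGuiApplication app(argc, argv);

    QOpenGLContext ctx;
    if (!ctx.create()) {
        qWarning() << "Failed to create an OpenGL context";
        return 1;
    }

    QOffscreenSurface surface;
    surface.setFormat(ctx.format());
    surface.create();

    if (!ctx.makeCurrent(&surface)) {
        qWarning() << "makeCurrent() failed";
        return 1;
    }

    // Print which driver/renderer the context ended up on.
    QOpenGLFunctions *f = ctx.functions();
    qDebug() << "GL_VENDOR:  " << reinterpret_cast<const char *>(f->glGetString(GL_VENDOR));
    qDebug() << "GL_RENDERER:" << reinterpret_cast<const char *>(f->glGetString(GL_RENDERER));
    return 0;
}
```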
The nvidia/cuda Docker image checks on start whether an NVIDIA GPU is available and reports it after the pod is started. Checked on Mac - problem reported. Checked in AKS - started successfully without a problem.
I guess you run LDView in headless mode in Docker.
What Qt Platform (QPA) do you use? It should be EGL.
Please refer to https://doc.qt.io/qt-5/embedded-linux.html and https://forum.qt.io/topic/124174/qopenglfunctions-in-docker-headless-opengl/7?_=1725005495720&lang=en-US
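As an aside, a minimal sketch (assuming a plain Qt 5 test program, not LDView itself) of forcing the EGL platform plugin programmatically; exporting QT_QPA_PLATFORM=eglfs or passing -platform eglfs on the command line has the same effect:

```cpp
#include <QGuiApplication>
#include <QtGlobal>

int main(int argc, char **argv)
{
    // Select the EGL full-screen platform plugin; this must happen before
    // QGuiApplication is constructed. Equivalent to QT_QPA_PLATFORM=eglfs
    // in the environment or "-platform eglfs" on the command line.
    qputenv("QT_QPA_PLATFORM", "eglfs");

    QGuiApplication app(argc, argv);
    // ... create an OpenGL context and render offscreen here ...
    return 0;
}
```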
@keeema Is the problem still there? Please provide more detail to be able to recreate.
@pbartfai sorry for not responding for such a long time - too much work. Right now I have good results with Mesa and there were no further requirements to speed it up, so I haven't tried it. But to help find the possible issue I will give it a try.
The list of Qt platform plugins contains eglfs:
ls /usr/lib/x86_64-linux-gnu/qt5/plugins/platforms/
libqeglfs.so libqlinuxfb.so libqminimal.so libqminimalegl.so libqoffscreen.so libqvnc.so libqwayland-egl.so libqwayland-generic.so libqwayland-xcomposite-egl.so libqwayland-xcomposite-glx.so libqxcb.so
But I am not sure how to tell whether it is being used. QT_QPA_PLATFORM is not set.
Should I define it through the environment variable, or is there some CLI parameter in LDView that should be used for this purpose?
You can add a command-line argument to LDView: LDView -platform eglfs ...
OK, I'll try it as soon as I have something new to develop in the part that uses LDView. Thanks for your patience.
Is your feature request related to a problem? Please describe.
I am trying to run LDView from a Python script in the nvidia/cuda:12.5.0-runtime-ubuntu22.04 Docker image on an NVIDIA Tesla T4 in AKS. I have a working container, but
watch -n0.25 nvidia-smi
shows no processes on the GPUs and rendering is slow.
Describe the solution you'd like
LDView is able to use the GPUs in the CUDA container.
Describe alternatives you've considered