lewisgdljr opened 3 years ago
I have seen the same behavior for very simple apps like glxgears. Now try the benchmark with openarena. As the 3D application becomes more complex, you begin to see the GPU improvement.
On my machine, openarena doesn't seem to work. It just crashed my whole WSL2 distro, with the message "[process exited with code 1]". I had two consoles open and they both exited with that message. With GALLIUM_DRIVER=llvmpipe, it does nothing at all - blank window that doesn't appear to accept input.
I tried glmark2. It also displays a blank window with the hardware driver, and gives a glmark2 score of 86. With llvmpipe, the glmark2 score is 213. This appears to be an average of fps for a bunch of shader tests. While this isn't as dramatic a difference, it's still significant. This is also intended to be a benchmarking application for OpenGL hardware.
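For anyone wanting to reproduce the glmark2 comparison, a minimal sketch along these lines should work (assuming glmark2 is installed; the awk pattern matches glmark2's usual "glmark2 Score: NNN" summary line, and "d3d12" is the WSLg hardware driver mentioned later in this thread):

```shell
#!/bin/sh
# Run glmark2 under each Gallium driver and pull out the final score.
for drv in d3d12 llvmpipe; do
  score=$(GALLIUM_DRIVER="$drv" glmark2 2>/dev/null \
            | awk '/glmark2 Score/ { print $NF }')
  echo "$drv: ${score:-no score reported}"
done
```

Each full glmark2 run takes a few minutes, so expect the script to sit quietly for a while per driver.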
UPDATE: I tried the OpenArena benchmark via phoronix-test-suite. The GPU renderer averaged 14.0 fps @ 1024x768, although the screen was almost entirely white the whole time (only a few UI elements were displayed, the rest was white). llvmpipe did 8.6 fps at the same resolution, and it did display the actual scenes being rendered. So not nearly as large a difference, and this time there was a slight edge for the GPU in terms of speed. That said, the fact that the GPU didn't display the rendering results means it's still unusable for graphics.
Do new intel drivers change anything? https://downloadcenter.intel.com/download/30579/Intel-Graphics-Windows-DCH-Drivers
That's the driver I've been running for a week or so, since it came out. Although I didn't benchmark using older drivers, I don't remember noticing any difference when I installed those.
(Edited to note that it's only been out a week. I was misreading the text in the updater to think I installed them on the ninth, although they were apparently compiled and signed on the ninth but released on the 14th.)
Same problem for Intel UHD630. Mesa is compiled with the d3d12 config. When running any GUI program, the window is black, and glxgears runs at only 15 FPS.
Interestingly, on Impish with the latest Intel drivers I get a black glxgears window at up to 90 fps, as described by the OP. But when I upgrade Mesa to the kisak PPA, or even the oibaf PPA, it drops further, to 15 and 10 fps respectively.
Does anyone know how to solve this problem?
Environment
Hardware-accelerated OpenGL doesn't display anything but a black screen in some apps (like glxgears), but even when it doesn't show anything, the rendering speed maxes out at 72-75 frames/sec. With GALLIUM_DRIVER=llvmpipe, I get a rendering speed of around 1100-1200+ frames/sec.
Steps to reproduce
Run "glxgears" from the prompt, using the kisak-mesa PPA version of Mesa under Ubuntu 21.04. For comparison, also run "GALLIUM_DRIVER=llvmpipe glxgears". This is one of the simplest programs that shows the rendering rate.
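The comparison above can be scripted; here's a rough sketch (assuming glxgears from mesa-utils and a working display; "d3d12" is the WSLg hardware driver named elsewhere in this thread, and the awk extraction relies on glxgears' usual "N frames in 5.0 seconds = X FPS" output format):

```shell
#!/bin/sh
# Run glxgears for ~15s under each Gallium driver and report the FPS figures.
for drv in d3d12 llvmpipe; do
  echo "GALLIUM_DRIVER=$drv:"
  GALLIUM_DRIVER="$drv" timeout 15 glxgears 2>/dev/null \
    | awk '/frames in/ { print "  " $(NF-1) " FPS" }'
done
```

Each driver prints an FPS line roughly every 5 seconds, so you get about three samples per run.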
Expected behavior
Hardware GPU rendering should be at least a little faster than software rendering, if nothing else because it isn't competing with other software on the machine for CPU cycles. But it's not - it's a LOT slower. An order of magnitude slower.
Actual behavior
The software renderer blows the GPU out of the water. I know that there are overheads for using the GPU in WSLg, but an almost 15x speed difference seems a little extreme, and that's dividing the lowest software rendering speed by the highest GPU speed. Other apps that do show the results of rendering make that even more obvious. For a simple example from the same package, the glxheads demo using the hardware renderer shows that it's obviously drawing a triangle, and moving and rotating it. With GALLIUM_DRIVER=llvmpipe, it's hard to tell it's a triangle because it's moving so fast.
This same issue has occurred in previous Windows 11 builds (22000.51 and 22000.61), as well as the last two or so Windows 10 dev builds I've used (21390 and 21387). The GPU doesn't seem as slow when it's being used for rendering in native Windows applications. And the demos used here display only a small number of polygons and don't tax the hardware in the slightest, so they should come close to maximizing throughput.