**ro-i** opened this issue 3 years ago
I am on nvidia (proprietary driver) and also noticed that glx tends to be more laggy, but in another way: GPU-accelerated terminals (alacritty, kitty) take ~1.5x more time to start compared to xrender:
Probably related: #641

> Probably related: #641
This; specifically, there have been changes since v8.2 that affect floating window performance (I noticed this very clearly while doing my testing).
@ro-i If you try the current git branch (next), I think you will observe much less floating window lag.
@MahouShoujoMivutilde Which version of picom are you using? The release version (8.2) or the current branch? Also, since you're on NVIDIA, I think you'll benefit from both the current branch and the flag testing I'm doing.
@kwand I am on latest `next`, yes.
Driver version 460.56, from nvidia-all; kernel is linux-tkg 5.10.y LTS with MuQSS and not much else (I found this combination of versions avoids the random freezes that have supposedly "been fixed" but in reality just became harder to trigger and still happen sometimes on the latest drivers). Also, my GPU is quite old: a GTX 970.
So I tried your patch, but sadly there seems to be no significant difference.

Also, you guys have been talking about triple buffering, so I think it may be worth noting that I have it enabled in my Xorg config:
`/etc/X11/xorg.conf.d/20-nvidia.conf`:

```
Section "Screen"
    Identifier "Screen0"
    # for vsync without compositor
    Option "metamodes" "nvidia-auto-select +0+0 { ForceCompositionPipeline=On, ForceFullCompositionPipeline = On }"
    Option "TripleBuffer" "On"
    Option "AllowIndirectGLXProtocol" "Off"
    # Option "ConnectToAcpid" "Off"
EndSection
```
> Also, you guys have been talking about triple buffering, so I think it may be worth noting that I have it enabled in my Xorg config.

I've been talking about *disabling* triple buffering. The fact that you have it enabled might conflict with the patch, which tries to turn it off for picom.
There's no real reason to have triple buffering enabled for a compositor, unless you want to prioritize smoothness at the cost of responsiveness.
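To make the smoothness-vs-responsiveness trade-off concrete, here is a back-of-the-envelope sketch (my own illustrative model, not a measurement from this thread): with vsync, a double-buffered swapchain holds at most one finished frame waiting for the next vblank, while triple buffering can hold two, so worst-case queueing latency grows by roughly one extra frame time.

```python
# Rough worst-case latency added by buffer queueing at a fixed refresh
# rate. Illustrative model only; real compositors vary.

def queue_latency_ms(refresh_hz: float, queued_frames: int) -> float:
    """Worst-case ms a finished frame can wait in the queue before display."""
    frame_time = 1000.0 / refresh_hz
    return queued_frames * frame_time

# Double buffering: up to 1 queued frame; triple buffering: up to 2.
double = queue_latency_ms(60, 1)   # ~16.7 ms
triple = queue_latency_ms(60, 2)   # ~33.3 ms
print(f"double: {double:.1f} ms, triple: {triple:.1f} ms")
```

The extra buffer smooths out missed deadlines (fewer stutters) at the cost of that additional frame of potential delay, which is the trade-off described above.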
Though, I will note the difference with my patch is only slight. There is another method I tried (mentioned in the first post of the PR), which may or may not benefit you, but I've abandoned it for now since it reintroduces a bunch of old glitches and would require a major rewrite of the code for just 5-10 ms less latency. Don't get me wrong: that amount of latency reduction is still significant; it's just that the rewrite is so much more significant that I can't really justify it (at least for my implementation).
Also, I don't really believe your GPU is old enough to be the problem here. I think it's likely not even boosting while rendering for picom; at full clocks it should perform almost on par with newer GPUs.
> `Option "metamodes" "nvidia-auto-select +0+0 { ForceCompositionPipeline=On, ForceFullCompositionPipeline = On }"`

Also, I just noticed that you have this line in your Xorg config. Is this turned on? (i.e. ForceCompositionPipeline and ForceFullCompositionPipeline; I don't really understand the syntax of this config file, unfortunately.)

If it is, I would turn it off while testing: I've noticed increased lag when it's on, and it's a bit superfluous since picom is already trying to do the same thing. The comment you have above seems to say the same thing: "for vsync without compositor".
I still can't quite explain why GLX takes 1.5x longer, though, even in theory. My best guess (without evidence) is that GLX may be more graphically demanding than xrender. If that is the case, I would try re-running the test with "Prefer Maximum Performance" enabled, as shown here:

This should force the GPU to run at its maximum boost clock.
> unless you want to prioritize smoothness at the cost of responsiveness.

> Though, I will note the difference with my patch is only slight

Doesn't hurt to try it anyway. And thank you for trying to come up with improvements :+1:
> it's just that the rewrite is many times more significant that I can't really justify it

Yeah, makes sense.
> Also, I just noticed that you have this line in your xorg config. Is this turned on?

Yes, but it doesn't actually add any significant latency (see above); in fact, the biggest increase in latency from the Xorg config came from disabling TripleBuffer.

> The comment you have above seems to say the same thing: "for vsync without compositor"

I wrote that myself, and that's why all my tests are with `--no-vsync` ;)

Theoretically, having vsync at the driver level should give less latency compared to `picom --vsync`, but I haven't tested that yet. However, at the moment, the fastest runs were with `ForceCompositionPipeline=On`.

> I would try re-performing the test with "Prefer Maximum Performance" enabled, as shown here

I just tried that, and the results are the same. Even with the default setting, the P-state switches to 4 fast enough not to matter.
> I wrote that myself, and that's why all my tests are with `--no-vsync` ;)

Well, this was a complete oversight of mine. I didn't notice that at all! I can see why my patch probably has no effect, then, since I believe it only works when vsync is enabled.

Thank you for doing the testing; I'm currently unable to do much due to time constraints. The results are all quite interesting, and unfortunately, I have no idea how to explain them.

(Actually, it's possible I was mistaken that enabling triple buffering in the Xorg conf forces picom to use triple buffering as well. To reiterate, this does not really matter in your case since you disabled vsync, but it's possible you're seeing gains because of enabling/disabling triple buffering for alacritty.)
> Theoretically, having vsync at the driver level should give less latency compared to `picom --vsync`, but I haven't tested that yet.
Would love to see the results for this as well, whenever you have the time, since my PR mainly improves vsync inside picom.
@kwand Okay, I tested a whole bunch of things.

So what I noticed:

- […] (with `--no-vsync`) as fast as xrender while keeping ffcp and triple buffering. Nice!
- `--vsync` doesn't actually work (there is still tearing), and latency stays the same.
- […] `--vsync` (137ms).
- […] `--vsync` and ffcp).
- `--vsync` (patched vs unpatched is about 10ms difference!); xrender and ffcp don't care.

Note: If you use a tiling window manager, it is important to launch alacritty as a floating window to minimize the margin of error.
Here is a notebook selecting rows with various options to get a feel for latency.
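The notebook itself isn't reproduced here, but the kind of row selection it performs can be sketched in plain Python. The option names and latency numbers below are made up for illustration, not the thread's actual measurements:

```python
from statistics import mean

# Hypothetical typometer-style samples: (backend, vsync flag, ffcp on?, latency ms)
runs = [
    ("glx",     "--no-vsync", True,  55.0),
    ("glx",     "--no-vsync", True,  57.0),
    ("glx",     "--vsync",    True,  137.0),
    ("xrender", "--no-vsync", True,  54.0),
    ("xrender", "--no-vsync", False, 56.0),
]

def avg_latency(backend: str, vsync: str) -> float:
    """Mean latency over all runs matching the given backend and vsync flag."""
    samples = [ms for b, v, _ffcp, ms in runs if b == backend and v == vsync]
    return mean(samples)

print(avg_latency("glx", "--no-vsync"))  # 56.0
```

Selecting rows by option combination like this makes it easy to compare, say, `glx --vsync` against `xrender --no-vsync` across many samples instead of eyeballing single runs.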
Also, if you notice that after a reboot everything is slow again, remember that

> The NVIDIA X driver does not preserve values set with nvidia-settings between runs of the X server.

(from the nvidia-settings man page). So don't forget to add `nvidia-settings --load-config-only` to your autostart. I don't yet know how to set allow flipping = 0 and sync to vblank = 0 in xorg.conf.
@MahouShoujoMivutilde Very sorry for the late reply. This is not much of an update, but I just wanted to let you know that I have read your reply and actually switched to using settings that give the lowest latency (as per your results) a month ago.
The results seem to be right, as I do notice some latency improvement. But it still puzzles me why there's such a big difference and how we could improve picom's performance. (OK, maybe not a "huge" difference: 22-23ms seems to be just slightly more than one frame of latency worse than FFCP, assuming your display is 60Hz.)
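The arithmetic behind that estimate can be made explicit (a sketch assuming a 60Hz display, as stated above):

```python
# Convert a latency difference in milliseconds into display frames
# at a given refresh rate.

def ms_to_frames(latency_ms: float, refresh_hz: float = 60.0) -> float:
    frame_time_ms = 1000.0 / refresh_hz  # ~16.7 ms per frame at 60 Hz
    return latency_ms / frame_time_ms

# A 22-23 ms gap is a bit more than one 60 Hz frame:
print(round(ms_to_frames(22.5), 2))  # 1.35
```

On a faster display the same millisecond gap corresponds to more frames, which is one reason re-running the tests at 160Hz (as mentioned below) could look different.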
I have yet to run your tests on my own machine, but I imagine I'll probably discover something once I do (as I'm now using a 160Hz display). I also have access to a laptop running AMD graphics now, so I want to do some investigation in that area to see whether this is an NVIDIA-specific problem or something inherent to picom.
Sorry, I've just been really short on time lately, though I would really love to fix this problem myself (I get pretty annoyed at the latency difference when I notice how fast everything is when I have to kill picom sometimes, and when comparing picom to Windows).

If anyone has the time, I think this is a pretty high-priority issue that could be looked into. Investigating how KDE handles their compositing vsync algorithm might also be worthwhile, since apparently they have a superior algorithm to picom's.
I haven't daily-driven Plasma on my main machine yet (running awesome-wm now), so I can't say much about their claims of latency improvement, though I do know they were quite infamous for terrible latency before that latency-improving update. Since many people are claiming the issue is fixed, I'd imagine it must have been quite a significant improvement (or maybe they became so used to terrible latency that any improvement looked subjectively better; I don't know).
Sorry to notify literally anyone associated with this thread, but just a question: is xrender better than the GLX backend on NVIDIA proprietary drivers? And what can I do to lower latency for both backends? I have disabled flipping, which made it much smoother, and I use:

```
Option "ForceFullCompositionPipeline" "on"
Option "AllowIndirectGLXProtocol" "off"
Option "TripleBuffer" "on"
```

For me, picom can take up to 20% CPU (I use animations), and the GPU sits at 70% with a 70W average.
Hi! :) This is not really a bug report, but rather a question out of curiosity. I wonder why, with my current configuration and setup, the `xrender` backend is less laggy than the `glx` backend. Both prevent tearing, but when I move a floating window around, it lags much more with the `glx` backend than with the `xrender` backend. Note: I am referring to the "experimental" backends! I would be really interested to learn why this could be the case, or whether I did something wrong in my config. I'm a long-term user of compton/picom, but most of the time I used the `intel` Xorg driver with the `TearFree` option and disabled `vsync` in the compositor. But a few months ago, I finally switched to the `modesetting` driver because there has been a bug in the `intel` driver that affected me (and because the `modesetting` driver is said to be more performant anyway).

### Platform
Fedora 34 (pre-release), kernel 5.11.14-300.fc34.x86_64
### GPU, drivers, and screen setup
Intel Corporation WhiskeyLake-U GT2 [UHD Graphics 620], modesetting driver for Xorg, external 1920x1080 monitor connected to laptop via DisplayPort (over USB-C). vainfo: Driver version: Intel i965 driver for Intel(R) Coffee Lake - 2.4.1
### `glxinfo -B`:

### Environment

i3 :)

### picom version

### `picom --diagnostics`:

### Configuration

`grep '^[^#]' .config/picom.conf`:

Thank you very much! :heart: