Closed: kwand closed this issue 3 years ago
Hmm, after taking a quick look through the code (using ripgrep plus some manual file viewing in VSCode), it appears that refresh-rate only matters if sw-opti is turned on, which it clearly isn't here (unless Manjaro's default installation ships with different settings that enable it somewhere).
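A search along these lines is enough to turn up the relevant references; the identifier spellings and source directory here are guesses at how the options appear in the code, not confirmed names:

# Guessing at identifier spellings; adjust the pattern/path as needed.
rg -n "refresh_rate|sw_opti" src/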
If this is correct, I'm not quite sure where the improved input lag came from, or whether it was just illusory. Unfortunately, I don't have a measuring device to test objectively. Though, if it was real and didn't come from changing refresh-rate, could it possibly have had anything to do with picom restarting every time the config is changed?
I'm clearly not familiar enough with the codebase to tell if any of this is true; I'm actually more confused now about what this has to do with the experimental backends (unless sw-opti being deprecated is related to the new backends; I'm not sure, as I haven't checked).
OK, I'm almost certain that what I experienced earlier was either an illusion or caused by picom restarting when the settings were changed (which somehow affected input lag) - my unfamiliarity with the code base is the only thing keeping me from saying this with certainty. This can be closed once confirmed (and I can open a separate issue if the below is worth exploring).
Went down a rabbit hole trying to figure out if there were other solutions to input lag; I didn't notice until looking more closely at the code and commits that glFinish() has already been in temporary use for NVIDIA since April. After some searching, I discovered these related issues from KDE [1], [2] and tried implementing the same change from [1] into picom, since KDE removed the usleep call for allegedly better performance by capping __GL_MaxFramesAllowed at 1.
In my patched picom, this seemed to solve the busy-waiting CPU usage problem as well and gives somewhat better smoothness, but it appeared to introduce somewhat more input lag? (I'm not quite sure - it's too subjective for me to tell at this point. I originally thought my patched version had better input lag.) KDE, though, seems to have changed their compositing strategy entirely with [2], as they deleted [1] in that commit.
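For anyone who wants to try the same cap without patching picom, a minimal sketch from the shell (assuming the proprietary NVIDIA driver, which reads this environment variable when the GL context is created):

# Cap the driver's frame queue at 1 pending frame for this picom instance only.
__GL_MaxFramesAllowed=1 picom --experimental-backends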
@yshui Just wondering if you've tried setting __GL_MaxFramesAllowed as an alternative fix? I no longer assume it's much better with regard to input lag, though I could do some longer testing (even though the results would admittedly be quite subjective).
Personally, after going through that rabbit hole, I've switched over to using ForceFullCompositionPipeline via the NVIDIA driver for now and disabling picom's vsync (just because I hadn't personally tried this setting yet. As I'm typing this right now, it seems 'somewhat' better?)
EDIT: I went back to trying my patched version again, and plan to use it for a bit (since ForceFullCompositionPipeline seems to have the added complication of needing to be set manually on every launch; allegedly, there are some issues when setting it in an Xorg conf file). Though, theoretically speaking (as I understand it now), a lower value for MaxFramesAllowed should give better input lag?
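For reference, one way to apply ForceFullCompositionPipeline per session (rather than in an Xorg conf file) is through nvidia-settings; the output name and mode below are placeholders, so substitute whatever xrandr reports on your setup:

# Hypothetical output name (DP-0); replace with your actual output and mode.
nvidia-settings --assign CurrentMetaMode="DP-0: nvidia-auto-select +0+0 {ForceFullCompositionPipeline=On}"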
Also, I suppose I should note that I've been using a very subjective test (and possibly a terrible idea to begin with) of typing fast to measure input lag - so it's very difficult for me to tell if it's truly picom input lag, application lag, or even keyboard lag.
Closed. I think this was mostly a fluke - though perhaps I did actually notice an improvement in input lag, but one caused by restarting picom or something related to the USLEEP flag.
Started a new issue to discuss using __GL_MaxFramesAllowed and other possible improvements to frame timing and input lag.
Platform
Manjaro Linux x86_64 (Kernel 4.19.167-1-MANJARO)
GPU, drivers, and screen setup
NVIDIA TU104 (GeForce RTX 2080 Rev. A), driver version 460.32.03
Environment
i3-gaps
picom version
Configuration:
Steps of reproduction
1. Run picom --experimental-backends
2. Change refresh-rate from 60 to 120 and then to 180 (preferably on a 60Hz monitor)
3. Compare input latency across the different refresh-rate settings.

Expected behavior
According to #173, the refresh-rate option should be ignored.

Current Behavior
Changing refresh-rate seems to affect the input latency: it is markedly improved (for NVIDIA) after switching from 60 to the higher setting of 180. (I'm currently using 180 to type this out, and it is a much, much better experience compared to the frequent typos/lags caused by my typing speed apparently being faster than the latency introduced by picom.)

Some other notes
The TL;DR is that I'm wondering whether #173 is accurate or not, as there seems to be a noticeable change in input lag - at least to my subjective eyes; though it's also possible I'm not running picom correctly; it should be started up by i3 when the config is read:
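(A typical i3 config startup line for picom looks something like the following; the flags here are an assumption for illustration, based on the command mentioned above, rather than the exact line in use.)

# In the i3 config - start/restart picom whenever i3 reloads its config.
exec_always --no-startup-id picom --experimental-backends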
According to #173, it seems that refresh-rate is slated for removal soon. I'm not sure if refresh-rate affecting input latency is related to a change in the code in the time that has passed since #173, or whether the option still affects the code in some ways (but not in other, already-removed ways).

I asked in #543 about improvements for NVIDIA drivers using the experimental backends, mainly due to this issue of input lag (it was pretty unbearable, but so was tearing without vsync turned on for picom), and it seems like there are still no improvements slated to be implemented soon for NVIDIA (other than #147, which now seems to have been pushed back?).
While I would be happy to see this option removed for the sake of improving picom, I would be very concerned if what remains of refresh-rate were removed before input lag changes are made, since this is the main 'improvement' that has made picom much more usable for me right now.