ishitatsuyuki / LatencyFleX

Vendor agnostic latency reduction middleware. An alternative to NVIDIA Reflex.
Apache License 2.0

LFX_MAX_FPS does not work #10

Open ryao opened 2 years ago

ryao commented 2 years ago

I have the following setup:

I am starting Apex Legends with LFX_MAX_FPS=140 PROTON_ENABLE_NVAPI=1 LFX=1 DXVK_HUD=compiler,scale=2 gamemoderun %command% -dev +fps_max=0. My dxvk.conf contains:

dxgi.nvapiHack = False
dxgi.maxFrameLatency = 1

I want to limit my FPS to 140 as per various recommendations for 144Hz gsync monitors, but when I do it via LFX_MAX_FPS in Apex Legends, my FPS often hits 144 and occasionally even 145, which is outside the gsync range and is said to disable gsync. Steam's FPS meter, Apex Legends' built-in FPS meter and DXVK's FPS meter all show FPS exceeding the value that I have set.

If I switch to DXVK_FRAME_RATE=140, then the frame rate is consistently limited to 140 FPS, but then I suspect that the DXVK limiter is causing LatencyFleX to misjudge how long it takes to render a frame.

I suspect that the frame rate limiter introduced in 7fed2864d5c6d4613b6aed2535e78965aa6b376f needs to be changed to do a better job of clamping the frame rate.
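
For reference, one way such clamping could look (a minimal sketch under my own assumptions, not LatencyFleX's actual code; only the LFX_MAX_FPS variable name is taken from above) is to treat the FPS cap as a floor on the target frame time and never go below it:

#include <algorithm>
#include <cstdint>
#include <cstdlib>

// Sketch only: derive a minimum frame time (ns) from LFX_MAX_FPS and make sure
// the effective target never drops below it.
uint64_t clamped_target_frame_time(uint64_t requested_ns) {
  const char *max_fps_env = std::getenv("LFX_MAX_FPS");
  if (max_fps_env == nullptr)
    return requested_ns;                                // no cap set, pass through
  const uint64_t max_fps = std::strtoull(max_fps_env, nullptr, 10);
  if (max_fps == 0)
    return requested_ns;                                // malformed value, ignore
  const uint64_t floor_ns = 1'000'000'000ULL / max_fps; // 140 FPS -> ~7.14 ms
  return std::max(requested_ns, floor_ns);              // a request of 0 becomes the floor
}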

ryao commented 2 years ago

Also, my GPU cannot maintain 140 FPS most of the time in Apex Legends. That might be contributing to the overshoot issue, where FPS exceeds the limiter setting in areas where my GPU can reach 140 FPS, although I am not 100% certain at this point.

ryao commented 2 years ago

I just set it to 120 and entered the firing range. I am seeing FPS up to 144 FPS according to the Steam overlay. The limiter seems to be completely broken. :/

ryao commented 2 years ago

PROTON_LOG=1 revealed the problem:

LatencyFleX: setting target frame time to 0

The game is requesting a target frame time of 0, which LatencyFleX will honor despite the environment variable being set. A small patch seems to fix this:

diff --git a/layer/latencyflex_layer.cpp b/layer/latencyflex_layer.cpp
index ffa5a64..a2a9483 100644
--- a/layer/latencyflex_layer.cpp
+++ b/layer/latencyflex_layer.cpp
@@ -486,6 +486,12 @@ extern "C" VK_LAYER_EXPORT void lfx_WaitAndBeginFrame() {

 extern "C" VK_LAYER_EXPORT void lfx_SetTargetFrameTime(uint64_t target_frame_time) {
   scoped_lock l(global_lock);
+  if (getenv("LFX_MAX_FPS") != NULL) {
+      std::cerr << "LatencyFleX: Asked to set target frame time to "
+                << target_frame_time << ", but LFX_MAX_FPS is set. Ignoring request."
+                << std::endl;
+      return;
+  }
   manager.target_frame_time = target_frame_time;
   std::cerr << "LatencyFleX: setting target frame time to " << manager.target_frame_time
             << std::endl;

However, setting LFX_MAX_FPS to 140 results in the frame rate actually being capped at 137, so the logic needs some adjustment for this to work properly. Still, this is a massive improvement over LFX_MAX_FPS not working at all.
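
For a sense of scale, and assuming (this is only an assumption) that the cap is converted to a frame time with plain integer division over nanoseconds, the rounding error from that conversion is nowhere near large enough to explain 140 turning into 137:

#include <cstdint>
#include <cstdio>

int main() {
  const uint64_t ns_per_s = 1'000'000'000ULL;
  const uint64_t cap_ns = ns_per_s / 140;  // 7,142,857 ns per frame; truncation loses <1 ns
  const uint64_t seen_ns = ns_per_s / 137; // 7,299,270 ns, what ~137 FPS implies
  std::printf("cap: %llu ns/frame, observed: %llu ns/frame, delta: %llu ns\n",
              (unsigned long long)cap_ns, (unsigned long long)seen_ns,
              (unsigned long long)(seen_ns - cap_ns));
  return 0;
}

The extra ~156 µs per frame has to come from somewhere other than unit conversion.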

ryao commented 2 years ago

This issue appears to also be present in Overwatch.

ishitatsuyuki commented 2 years ago

The frame limiter is intentionally undocumented due to the caveats (Reflex API also allows a frame limit to be communicated, and I don’t want to track both). Please just use +fps_max.

ryao commented 2 years ago

Reflex API also allows a frame limit to be communicated, and I don’t want to track both

If LFX_MAX_FPS is set, would ignoring the frame limit from the Reflex API avoid that problem? The patch I put inline in my other comment does exactly that, which allows LFX_MAX_FPS to work despite both Apex Legends and Overwatch using the Reflex API to disable the Reflex frame rate limiter.

I have spent a few hours testing that patch, and with the exception of the effective frame rate limit being slightly inaccurate (the limit seems to be 137 when I set 140), I have been rather happy with it. The inaccuracy seems like a bug that would also be present when a game sets a frame rate limit via the Reflex API.

Please just use +fps_max.

There are complaints online that "Apex's frame rate limit tool can produce inconsistent/unstable frametimes", so people seem to be looking into alternatives. For example:

https://www.reddit.com/r/linux_gaming/comments/tghdv0/any_differences_between_dxvk_frame_rate_mangohuds/

Wouldn't doing frame rate limiting in LatencyFleX be ideal, because:

  • There is no chance of an external frame rate limiter causing LatencyFleX's frame time projections to be inaccurate
  • LatencyFleX's implementation of frame rate limiting would place the sleep at a point before input processing, such that it would have lower input latency than other frame rate limiters
  • Doing it in latencyflex would avoid making additional system calls per frame to implement the feature.

If it is not ideal, do you mean that using a different frame rate limiter would not hurt input latency?
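
To illustrate the second bullet with a generic frame loop (a sketch of the general idea, not Apex or LatencyFleX code): a limiter that waits before input is sampled delays only stale work, whereas a presentation-side limiter waits after the input for that frame has already been read:

#include <chrono>
#include <thread>

// Hypothetical stand-ins for the real engine stages.
void sample_input() {}
void simulate_and_render() {}
void present_frame() {}

int main() {
  using clock_type = std::chrono::steady_clock;
  const auto frame_budget = std::chrono::nanoseconds(1'000'000'000 / 140);
  auto deadline = clock_type::now();
  for (int i = 0; i < 1000; ++i) {
    deadline += frame_budget;
    // (a) In-game / pre-input limiter: the wait happens *before* input is read,
    //     so the frame that reaches the screen carries fresh input.
    std::this_thread::sleep_until(deadline);
    sample_input();
    simulate_and_render();
    // (b) A driver-level limiter (DXVK_FRAME_RATE, MangoHud) effectively inserts
    //     its wait around presentation, *after* input was sampled, which adds
    //     input latency.
    present_frame();
  }
  return 0;
}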

ishitatsuyuki commented 2 years ago

If LFX_MAX_FPS is set, would ignoring the frame limit from the Reflex API avoid that problem? The patch I put inline in my other comment does exactly that.

Yes, you're right. I'll import that patch some time later.

There are complaints online that "Apex's frame rate limit tool can produce inconsistent/unstable frametimes"

IIRC, Source engine's fps_max limiter is generally considered to be accurate. I believe it's rather some other issue with frame time consistency on Linux that is causing "unstable frametimes" for them. Driver-level limiters add more latency and buffering, and therefore conceal this issue. (I suffer from this too, but I also play with fps_max on Windows and it works fine there.)

I have spent a few hours testing it and with the exception of it being slightly inaccurate (the limit seems to be 137 when I set 140), I have been rather happy with it. That seems to be a bug that would also be present when a game sets a frame rate limit via the Reflex API.

This is not a bug; it is intended behavior, a limitation required to determine the "bottleneck" parameter. Hopefully, with FreeSync, the difference should be unnoticeable.

Wouldn't doing frame rate limiting in LatencyFleX be ideal, because:

The general opinion is that it's neither worse nor better than an in-game frame limiter.

  • There is no chance of an external frame rate limiter causing LatencyFleX's frame time projections to be inaccurate

I thought about this, but in my experiments the in-game frame limiter didn't seem to affect LFX's operation much.

  • LatencyFleX's implementation of frame rate limiting would place the sleep at a point before input processing, such that it would have lower input latency than other frame rate limiters

The game's own limiter also sleeps before input processing. Driver-level limiters like the ones in DXVK and MangoHud will cause additional latency, though.

  • Doing it in latencyflex would avoid making additional system calls per frame to implement the feature.

It doesn't really matter, but there's another wrinkle: accurate timers don't exist on Windows, so games go to great lengths emulating them through spinning...
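
For the curious, the "emulating that through spinning" part usually looks roughly like the hybrid wait below (a generic sketch of the common technique, not code from any of the games or projects discussed here): sleep through most of the interval, then burn the last couple of milliseconds in a busy loop, because the OS sleep alone is only about millisecond-accurate:

#include <chrono>
#include <thread>

// Hybrid sleep-then-spin wait: coarse OS sleep for the bulk of the interval,
// then a busy loop for the final stretch where the OS timer is too imprecise.
void precise_wait_until(std::chrono::steady_clock::time_point deadline) {
  using namespace std::chrono;
  constexpr auto spin_margin = milliseconds(2); // assumed slack; tune per platform
  const auto coarse_deadline = deadline - spin_margin;
  if (steady_clock::now() < coarse_deadline)
    std::this_thread::sleep_until(coarse_deadline); // cheap, but imprecise
  while (steady_clock::now() < deadline) {
    // spin for the remaining ~2 ms to hit the deadline precisely
  }
}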

Calinou commented 2 years ago

IIRC, Source engine's fps_max limiter is generally considered to be accurate.

I don't think that's the case. For example, to target 145 FPS consistently, I've found that I need to use fps_max 150 in CS:GO. No other engine I know of requires this; an FPS limit of 145 does what I want.

user3287 commented 1 year ago

DO NOT use DXVK_FRAME_RATE=X (with the superglide script). I found DXVK_FRAME_RATE to lag my game every time the framerate changes substantially (e.g. coming out of an area where frames were dropping, or changing the FPS to 30 for a very short time in order to perform superglides more consistently). It takes quite a while for the framerate to catch up to the new situation, and overall I'm happier with the performance of the in-game frame limiter.