libsdl-org / SDL

Simple DirectMedia Layer
https://libsdl.org
zlib License

Adaptive Sync Detection #8772

Open AI221 opened 6 months ago

AI221 commented 6 months ago

Adaptive Sync detection is relevant to some game engines' internal workings. Suggested API: add "bool adaptive_sync_enabled" to the SDL_DisplayMode struct.

I understand that detecting this may be difficult, but hopefully it's something that can be coordinated with AMD and Nvidia. Perhaps an enum with "UNKNOWN" and true/false would be a better interface, since I'm not sure whether most drivers currently support this query. It would be a boon if SDL could take the hairiness out of the querying -- something SDL is strong at. SDL could then have backends per GPU driver.
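
For illustration, a rough sketch of what such an interface could look like (these names are hypothetical and not part of any existing SDL API):

```c
/* Hypothetical sketch of the suggested interface -- none of these names
 * exist in SDL today. A tri-state lets a backend report "cannot tell"
 * instead of guessing false. */
typedef enum SDL_AdaptiveSyncState
{
    SDL_ADAPTIVE_SYNC_UNKNOWN,   /* backend cannot query the driver */
    SDL_ADAPTIVE_SYNC_DISABLED,
    SDL_ADAPTIVE_SYNC_ENABLED
} SDL_AdaptiveSyncState;

/* e.g. as a new field in SDL_DisplayMode: */
/*     SDL_AdaptiveSyncState adaptive_sync; */
```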

Furthermore, it would be nice if we got events when the display mode changes -- since users may adjust the adaptive sync (or framerate!) of their monitors while running a game.

Thank you for developing this open-source library, and I hope you can see the value in this feature!

slouken commented 6 months ago

That seems like a good idea. Feel free to submit a PR that would be helpful to you.

AI221 commented 6 months ago

Thanks for the quick reply!

I'm not very familiar with the operating system / GPU driver side of things. Guess that's your fault ;)

Are you looking for a PR with the interface changes I mentioned? Like, perhaps just a stub that returns "UNKNOWN"? I suppose I could try to coordinate with AMD/Nvidia/Intel? I'm not affiliated with any company or anything.

Thanks

EDIT: /sys/kernel/debug/dri/0/DP-1/vrr_range (the connector name varies per setup) works on Linux, so on that platform we should definitely be able to avoid talking to GPU drivers directly. If I can find a Windows equivalent, then that looks like the best path forward. I'll submit a pull request with at least a Linux-only implementation.
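
For illustration, a minimal sketch of reading that file (the connector name in the path varies per setup, debugfs may not be compiled in or mounted, and reading it typically requires root):

```c
/* Minimal sketch: dump the debugfs VRR range file for one connector.
 * The path is an example; debugfs availability and the exact text format
 * depend on the kernel and driver. */
#include <stdio.h>

int main(void)
{
    char buf[256];
    FILE *f = fopen("/sys/kernel/debug/dri/0/DP-1/vrr_range", "r");
    if (!f) {
        perror("vrr_range");
        return 1;
    }
    size_t n = fread(buf, 1, sizeof(buf) - 1, f);
    buf[n] = '\0';
    fclose(f);
    printf("%s", buf); /* reported min/max refresh; zeros when VRR is off */
    return 0;
}
```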

Qwinci commented 6 months ago

Using a debugfs variable in SDL doesn't seem like a good idea, as the kernel may be compiled without debugfs support. It also looks like that file doesn't reflect the enable status of VRR; my guess is that it just shows that VRR is supported by the screen connected to that particular connector. Some VRR-related information (like whether a monitor is capable of it) can be checked in a vendor-agnostic way using e.g. libdrm on Linux, but I'm not sure it can get the enable status. I tried looking at the DRM CRTC property VRR_ENABLED, but it seems to always report 0? Unless it only reports 1 when a fullscreen application is running with VRR enabled -- that's something to test.
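
For reference, a minimal sketch of reading that VRR_ENABLED CRTC property via libdrm (it assumes the DRM device is already open and the right CRTC is known, which is itself part of the problem):

```c
/* Sketch only: reads the VRR_ENABLED property of a given DRM CRTC via
 * libdrm. Assumes fd is an open /dev/dri/cardN and crtc_id is known.
 * Returns 1/0, or -1 if the property is not found. */
#include <stdint.h>
#include <string.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

static int crtc_vrr_enabled(int fd, uint32_t crtc_id)
{
    int enabled = -1;
    drmModeObjectPropertiesPtr props =
        drmModeObjectGetProperties(fd, crtc_id, DRM_MODE_OBJECT_CRTC);
    if (!props) {
        return -1;
    }
    for (uint32_t i = 0; i < props->count_props; ++i) {
        drmModePropertyPtr prop = drmModeGetProperty(fd, props->props[i]);
        if (prop) {
            if (strcmp(prop->name, "VRR_ENABLED") == 0) {
                enabled = (int)props->prop_values[i];
            }
            drmModeFreeProperty(prop);
        }
    }
    drmModeFreeObjectProperties(props);
    return enabled;
}
```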

AI221 commented 6 months ago

Hmmm, true (it still works well for a stub implementation). As far as /sys/kernel/debug/dri/0/DP-1/vrr_range goes, it DOES report the enabled state for me -- it reports two zeros when VRR is not enabled. And using it made me realize the interface needs to expose min/max refresh rates, so that was a good thing.

~~I'm of the opinion that a file-based interface works best here, since we're just querying hardware info; something under /proc/ would be ideal. If we can't get libdrm working (I will investigate that after making a stub implementation), then a new /proc/ interface that is always available would be the next best thing. In that case, I would probably keep the debugfs implementation as a fallback where it's available. It's not much source code anyway, and kernel changes can take a while to propagate.~~

One thing to note -- do you happen to have an Nvidia card? I don't have access to one at the moment. I'm running on AMD, and from the Linux source code, both AMD and Intel have vrr_range debugfs implementations. You're saying /sys/kernel/debug/dri/0/DP-1/vrr_range doesn't work for you? It may simply be necessary to use NVAPI for Nvidia on all platforms. On Linux, however, it appears DRM can likely only be used to query AMD/Intel anyway.

Edit: If you run drm_info > /tmp/aa, toggle VRR, and then run drm_info > /tmp/bb, what does diff /tmp/aa /tmp/bb show? Mine looks like this: https://gist.github.com/AI221/4e7029e635075d2008f3102e514e57ea

Edit2: It looks like "vrr_enabled" is plumbed through DRM props, but min/max vfreq is not, so I'll make a kernel PR for that. It's not complicated. Then, as I understand it, the DRM props system means libdrm will immediately have that information available.

Edit3: I had forgotten that debugfs requires superuser access. Probably not appropriate to include, but again, it works nicely for testing.

Qwinci commented 6 months ago

> Hmmm, true (it still works well for a stub implementation). As far as /sys/kernel/debug/dri/0/DP-1/vrr_range goes, it DOES report the enabled state for me -- it reports two zeros when VRR is not enabled. [...]

Are you talking about disabling FreeSync in the monitor settings? I was talking about enabling variable refresh in software, e.g. with the "VariableRefresh" Xorg option (not sure what the equivalent is on Wayland). Regarding /sys/kernel/debug: I do have it (on AMD), but only because I have CONFIG_DEBUG_FS enabled; if I hadn't selected that kernel config option, I wouldn't have /sys/kernel/debug either.

AI221 commented 6 months ago

> Are you talking about disabling FreeSync in the monitor settings? I was talking about enabling variable refresh in software, e.g. with the "VariableRefresh" Xorg option [...]

The debugfs variable is based on DRM. From a friend's computer, I can see that the Nvidia driver under Linux does not use DRM (by default), and thus this debugfs variable does not appear on these setups.

The debugfs variable is tied to connector properties in Linux DRM, specifically monitor_range->min_vfreq. However, with Nvidia not supporting DRM, there is no kernel patch that could expose this variable on non-DRM Nvidia setups.

I'm trying to investigate on the Linux kernel side of things how exactly this should be exposed in all kernels, with or without debugfs enabled. Basically, it's a matter of trying to expose this monitor range variable to non-root applications.

At any rate, a separate implementation will be needed for Nvidia. NVAPI actually exposes the VRR min/max, so that would work for Nvidia on both Linux and Windows. That would leave only Intel & AMD on Windows out of the implementation, though they probably have their own APIs.

Can SDL3 include NVAPI without licensing issues?


To clear things up, we're trying to query the monitor's VRR state: whether adaptive sync is currently active, and its minimum/maximum refresh rates.

Qwinci commented 6 months ago

So we don't want to get the software enable status of VRR? Because FreeSync can be enabled on the monitor but disabled in software (as it usually is by default, unless you add Option "VariableRefresh" "true" to your Xorg config).

AI221 commented 5 months ago

> So we don't want to get the software enable status of VRR? Because FreeSync can be enabled on the monitor but disabled in software (as it usually is by default, unless you add Option "VariableRefresh" "true" to your Xorg config).

We want to know whether VRR is currently in effect on the device.

In other words: is VRR active right now? We don't care about why or why not; is our swapchain presenting with VRR?

- Xorg disabled, monitor enabled -> NO
- Xorg enabled, monitor disabled -> NO
- Xorg enabled, monitor enabled -> YES

Hopefully this answers your question!

~~EDIT: OK, I see what you're getting at now. DRM doesn't quite tell us whether variable refresh is enabled, which basically means we would need to query X11/Wayland. That's a pain, but ultimately probably doable.~~ Edit2: The VRR_ENABLED property in DRM already reports whether VRR is enabled (under both Wayland and X11).


After further investigation, it looks like querying the minimum refresh rate will have to be dropped from this patch. Fortunately, both Linux/DRM and Windows have pathways to query whether VRR is enabled, so I'll hopefully have this done in the coming weeks (I have other projects to work on as well).

Zamundaaa commented 5 months ago

Please don't try to use libdrm for anything outside of actually driving a display. A given display might not have a related drm connector, you might not know the related drm connector or match it wrongly, and on future versions of KWin it will always tell you adaptive sync is active (because it is, on a connector level), even when the refresh rate is entirely stable.

I don't know about Xorg, but on Wayland you can get this information properly through the presentation time protocol - the refresh value will be zero while adaptive sync is active.
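
A rough sketch of that feedback path, assuming the wp_presentation global is already bound and a wl_surface exists (registry setup and the event loop are omitted):

```c
/* Sketch of the Wayland presentation-time feedback path described above.
 * Requires the client-side header generated from presentation-time.xml. */
#include <stdio.h>
#include <wayland-client.h>
#include "presentation-time-client-protocol.h"

static void feedback_presented(void *data, struct wp_presentation_feedback *fb,
                               uint32_t tv_sec_hi, uint32_t tv_sec_lo,
                               uint32_t tv_nsec, uint32_t refresh,
                               uint32_t seq_hi, uint32_t seq_lo, uint32_t flags)
{
    /* refresh is the nominal refresh period in nanoseconds;
     * zero means the output is currently in variable-refresh mode. */
    printf("VRR active: %s\n", refresh == 0 ? "yes" : "no");
    wp_presentation_feedback_destroy(fb);
}

static void feedback_discarded(void *data, struct wp_presentation_feedback *fb)
{
    wp_presentation_feedback_destroy(fb);
}

static void feedback_sync_output(void *data, struct wp_presentation_feedback *fb,
                                 struct wl_output *output)
{
}

static const struct wp_presentation_feedback_listener feedback_listener = {
    .sync_output = feedback_sync_output,
    .presented = feedback_presented,
    .discarded = feedback_discarded,
};

/* Call once per frame, before committing the surface. */
static void request_feedback(struct wp_presentation *presentation,
                             struct wl_surface *surface)
{
    struct wp_presentation_feedback *fb =
        wp_presentation_feedback(presentation, surface);
    wp_presentation_feedback_add_listener(fb, &feedback_listener, NULL);
}
```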

AI221 commented 5 months ago

> Please don't try to use libdrm for anything outside of actually driving a display. [...] On Wayland you can get this information properly through the presentation time protocol - the refresh value will be zero while adaptive sync is active.

That sounds good; I will do that instead. I did mention above that the Nvidia drivers don't provide a DRM driver by default.

Anyway, yes I agree it's definitely better to go through Wayland. Thanks for pointing out a method to do so!

slouken commented 1 month ago

@Kontrabant, does it make sense to add an optional property on the SDL display for this?

Kontrabant commented 1 month ago

On Wayland, a presentation time feedback request needs to be submitted per-surface for each frame. It's not of much use if SDL isn't internally handling presentation, and of less use in a per-display sense as there is no way to query any of this before making a window and sending frames. Plus VRR could arbitrarily be enabled/disabled at any time, for any reason, by the desktop compositor, so that display property may not be accurate unless constantly polled.

I would imagine this is the case across platforms: just because a display has VRR enabled, it may not actually be active, which makes a display property for it unreliable.

AI221 commented 1 month ago

What we really need is a way to ask the OS whether VRR is being used for the entire swapchain or not. Basically, is the monitor being presented to with VRR or not?

And then a way for SDL to query this per display. A game could then periodically (say, once every 100 frames) poll this and adjust accordingly.

However, I've been unable to find suitable methods for all OSes. The closest I can find on Windows, for example, is querying whether "tearing" is supported (which I believe gets enabled along with VRR).
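
For reference, a minimal sketch of that Windows check via DXGI; note it only reports whether tearing-on-present is supported, not whether VRR is currently active (assumes dxgi1_5.h and linking against dxgi.lib/dxguid.lib):

```c
/* Sketch only: queries DXGI for tearing-on-present support, which is the
 * closest public signal mentioned above; it does NOT say whether VRR is
 * currently active. */
#include <dxgi1_5.h>
#include <stdio.h>

int main(void)
{
    IDXGIFactory5 *factory = NULL;
    BOOL allow_tearing = FALSE;

    if (SUCCEEDED(CreateDXGIFactory1(&IID_IDXGIFactory5, (void **)&factory))) {
        if (FAILED(factory->lpVtbl->CheckFeatureSupport(
                factory, DXGI_FEATURE_PRESENT_ALLOW_TEARING,
                &allow_tearing, sizeof(allow_tearing)))) {
            allow_tearing = FALSE;
        }
        factory->lpVtbl->Release(factory);
    }
    printf("Tearing (and thus likely VRR) supported: %s\n",
           allow_tearing ? "yes" : "no");
    return 0;
}
```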
