fholger / openvr_fsr

Add Image Upscaling via AMD FidelityFX SuperResolution or NVIDIA Image Scaling to SteamVR games

Suggestion: DLDSR support #135

Open thehhmdb opened 2 years ago

thehhmdb commented 2 years ago

Would it be possible to add DLDSR support?

https://www.techradar.com/uk/news/nvidias-ai-powered-dldsr-allows-rtx-gpus-to-make-your-games-look-better

fholger commented 2 years ago

So far, it's only been announced as a driver feature. Unless NVIDIA also publishes an SDK to manually integrate it into applications, there's nothing I can do.

thehhmdb commented 2 years ago

Thanks for the explanation.

That's a shame; it seems like it would be very useful for VR image quality and performance.

fholger commented 2 years ago

No, unfortunately, that's the DLSS SDK, which is not the same as DLDSR.

d0x360 commented 2 years ago

I know it can't be implemented into this project, but couldn't you use it by setting the resolution higher in either the SteamVR settings or in-game (some games have a resolution setting)?

From what I understand, it automatically works on whatever game is running. So it doesn't need explicit support in the game; it just requires the game to be running with its resolution set to whatever value you have checked in the nVidia control panel.

That should mean the game is already being upsampled using that method. FSR could still be applied afterwards in high-quality mode to get an even better final image.

I actually used it in God of War on PC. I normally play at 4K, but I set DLDSR to 2.25x (59% smoothness), set GOW to that resolution, and also ran DLSS in quality mode with no sharpening, outputting to a 65" LG C1. While I lost performance (the average went from 110 fps to 96, but I have VRR, so it's fine), the game looked better with an acceptable hit to performance.

I really can't wait for DLAA. I'm hoping it will just work for everything via the control panel, but it probably won't, lol.

fholger commented 2 years ago

No, that's not how it works. The driver looks for the image that is submitted to the monitor and then does its magic behind the scenes. But VR does not go through that route; it submits its images to the VR runtime instead, which does not trigger the DLDSR support in the driver.

d0x360 commented 2 years ago

Regardless of display output, the "image" goes through the same rendering pipeline, with each step doing its magic until it reaches the framebuffer. The framebuffer is the final image that gets pushed to the display. The GPU hardware doesn't care if it's a TV, monitor, or VR headset.

It should work, and if it doesn't, then it's because nVidia is blocking it from working. They are probably doing so because 99% of users don't understand the control panel, what each setting does, or how massively they could kill performance.

I had a friend have me remote into his PC with a 3080 Ti; he was getting an average of 8 fps in GTA Online "even if I put all the settings to low!" - him

He had enabled supersampling, aka DSR, with a native res of 4K, so the GPU was rendering every frame well above 3840x2160 and then downscaling it back.

fholger commented 2 years ago

Again, not how it works. As far as I know, DLDSR does not affect the rendering pipeline at all; it is a post-processing step that takes a rendered image and uses its AI magic to transform the image back down to the output resolution. But this post-processing step is triggered in the driver precisely when the game submits an image to the monitor's swapchain. That's how the driver knows the image is finished and can go through the DLDSR scaling. It is not triggered when the image is submitted to a VR runtime instead, and there is nothing I can do about that.
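
To illustrate the difference, here is a minimal sketch of the two submission paths, assuming a D3D11 title and the OpenVR API (the function names here are hypothetical; only `Present` and `IVRCompositor::Submit` are the actual entry points):

```cpp
#include <d3d11.h>
#include <openvr.h>

// Flat-screen path: the frame ends with a Present() on the monitor's
// swapchain. This is the call the driver can observe, and the point where
// a driver feature like DLDSR can post-process the finished image.
void presentFlat(IDXGISwapChain* swapchain) {
    swapchain->Present(1, 0);
}

// VR path: the frame never touches the monitor's swapchain. Each eye's
// texture goes straight to the VR compositor, so the driver-side trigger
// for DLDSR never fires.
void presentVR(ID3D11Texture2D* leftEye, ID3D11Texture2D* rightEye) {
    vr::Texture_t left  { leftEye,  vr::TextureType_DirectX, vr::ColorSpace_Auto };
    vr::Texture_t right { rightEye, vr::TextureType_DirectX, vr::ColorSpace_Auto };
    vr::VRCompositor()->Submit(vr::Eye_Left,  &left);
    vr::VRCompositor()->Submit(vr::Eye_Right, &right);
}
```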

d0x360 commented 2 years ago

If I can find a way to make it work, would you consider implementing it?

I ask because I spent about 40 hours working on TAA injection and got it working in a decent number of games, but I don't want to spend the time trying if it's for nothing. Although I might not need to try that hard, since an nVidia R&D lab is 20 minutes away from me and I know a few people who work there.

However, it's a long, tedious process, and even if the engine explicitly supports it, it's not always possible to get it working. There were also some instances where the game received an update and it completely broke the injection.

A shame, really, because it made for a really nice and clean image and only had a 1-2 fps performance hit at 18 samples, and 3-4 at 32 samples in the worst case.

Crimson-foxGITHUB commented 2 years ago

@d0x360 DLDSR isn't something that can be implemented outside of NVIDIA's driver, but, based on what I've seen of its results, you can get pretty similar results by using NIS and increasing the renderScale past 1. DLDSR is supersampling anyway, so I wouldn't be surprised if NVIDIA is using the image-sharpening algorithm from NIS with it, or at least some of its code.
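
That approach would look something like this in the mod's openvr_mod.cfg (I'm going from memory of the sample config shipped with the mod, so double-check the key names; the values are just illustrative, not tuned recommendations):

```json
{
    "fsr": {
        "enabled": true,
        "useNIS": true,
        "renderScale": 1.3,
        "sharpness": 0.6,
        "applyMIPBias": true,
        "debugMode": false
    }
}
```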

also, IMO, using ReShade's SMAA shader gives better image quality than TAA, injected or native.

d0x360 commented 2 years ago

I've started clean-room reverse engineering DLDSR. It's using AI reconstruction like DLSS, just without any temporal component.

This is something I absolutely could get working in basically any game, and if I'm right, it would be up to half the cost of regular supersampling; call it 30% less expensive without using tensor cores.

The nVidia implementation runs on the tensor cores like DLSS, but it should be doable on any GPU from the 1000 (maybe 900) series on the nVidia side, and GCN 3.0 onwards on the AMD side, with larger performance gains if you're using RDNA or RDNA2.

I can work on and test the 1000, 2000, & 3000 series from nVidia and the 200 series (290X), RDNA, & RDNA2 from AMD, but it's very unlikely that I could test on Intel Arc, because I have no plans to buy one when they're released at the end of Q2 2022. There is one laptop in South Korea that has an Arc GPU, but that's it for now.
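
For context on the baseline I'm comparing against: plain DSR-style supersampling renders at factor × factor times the native pixel count and then filters back down. A toy sketch of that downscale step (purely illustrative, not my implementation):

```cpp
#include <vector>

// Box-filter downsample of a grayscale image rendered at factor * native
// resolution. The filter itself is cheap; the cost of supersampling comes
// from rendering factor * factor times as many pixels in the first place.
std::vector<float> downsample(const std::vector<float>& hi,
                              int nativeW, int nativeH, int factor) {
    std::vector<float> out(static_cast<size_t>(nativeW) * nativeH);
    const int hiW = nativeW * factor;
    for (int y = 0; y < nativeH; ++y) {
        for (int x = 0; x < nativeW; ++x) {
            float sum = 0.0f;
            for (int sy = 0; sy < factor; ++sy)
                for (int sx = 0; sx < factor; ++sx)
                    sum += hi[(y * factor + sy) * hiW + (x * factor + sx)];
            out[static_cast<size_t>(y) * nativeW + x] = sum / (factor * factor);
        }
    }
    return out;
}
```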

As for TAA vs SMAA... I think it depends on the game. I again go back to Alien: Isolation, which ships with SMAA but can have TAA injected, and the image is significantly cleaner with TAA; unlike SMAA, it gets rid of all shimmer and jaggies in the game without making it a blurry mess or an over-sharpened mess.

There is already a mod on GitHub for that game to work in VR and another to inject TAA. Injection works flat or in VR, and you can judge for yourself, but I think if you saw it, you would agree, especially in VR.

I just want to help improve the VR experience any way I can. I've been told not to bother with TAA injection, but I've restarted working on it because a large number of games allow for it; it will just be something I use for myself, since I think a tool I've been working on can automate checking for compatibility, and then I can go from there myself.

As for ML supersampling like DLDSR... that shouldn't be a big deal at all, and if I'm right about compatibility across GPU architectures, then it should be fantastic... even better if the GPU has ML hardware like tensor cores or an FPGA. There's a reason I bring up an FPGA, but I can't really say why. I also can't say why this is easier than one might think, but we can thank AMD for that, and you will know why soon enough.

Anyways... all things considered, I think I'll keep what I've been working on to myself, at least for now. This isn't my project; I just wanted to help where possible. I understand why TAA injection isn't going to be part of it even at current compatibility levels, and I have no issue with that.

The supersampling at 30% of the normal cost but at higher quality than DSR is about 2 months from any kind of testing phase beyond my own, but so far it works flat and in VR; its upper bound is 2x. Anything past that starts causing issues... at least for now.
