doodlum opened this issue 1 month ago
I can’t really tell from the video, but ghosting in dark areas is frequently exposure related. You can try enabling auto-exposure, but it’s best if you pass a correct exposure texture.
If you haven’t already, give the DLSS Programming Guide a look; it’s full of good info.
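In case it helps, here is a rough sketch of both options on the Streamline side. This is written from memory of sl_dlss.h / sl.h, so treat the field and buffer-type names (useAutoExposure, kBufferTypeExposure, the Resource/ResourceTag initialization) as assumptions to check against your SDK version, and the variables (viewport, cmdList, exposureTexture, output dimensions) as placeholders from your own code:

```cpp
// Sketch only; Streamline names are from memory and may differ per SDK version.

// Option A: let DLSS estimate exposure itself.
sl::DLSSOptions dlssOptions{};
dlssOptions.mode = sl::DLSSMode::eMaxQuality;
dlssOptions.outputWidth = outputWidth;             // placeholder
dlssOptions.outputHeight = outputHeight;           // placeholder
dlssOptions.useAutoExposure = sl::Boolean::eTrue;  // assumption: field name per recent sl_dlss.h
slDLSSSetOptions(viewport, dlssOptions);

// Option B: tag a 1x1 R32_FLOAT texture holding the current exposure value.
sl::Resource exposureResource{ sl::ResourceType::eTex2d, exposureTexture /* your 1x1 texture */ };
sl::ResourceTag exposureTag{ &exposureResource, sl::kBufferTypeExposure,
                             sl::ResourceLifecycle::eValidUntilPresent };
slSetTag(viewport, &exposureTag, 1, cmdList);
```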
Thing is, in LDR mode we don't need exposure. The input matches what goes to the display. Could it be that it needs to be in linear space?
Checking the docs again, they say it should be in gamma space, so I don't understand where I'm going wrong. I force the exposure value to 1 to ensure that is not creating an issue.
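For reference, in case it does turn out to want linear input in this path, the standard sRGB transfer functions are below; this is just the IEC 61966-2-1 math, nothing DLSS-specific:

```cpp
#include <cmath>

// Standard per-channel sRGB <-> linear conversions over [0, 1].
// Only relevant if DLSS actually expected linear input here, which the docs say it does not for LDR.
float SrgbToLinear(float c)
{
    return (c <= 0.04045f) ? c / 12.92f : std::pow((c + 0.055f) / 1.055f, 2.4f);
}

float LinearToSrgb(float c)
{
    return (c <= 0.0031308f) ? c * 12.92f : 1.055f * std::pow(c, 1.0f / 2.4f) - 0.055f;
}
```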
Indeed, I completely overlooked the LDR detail because ghosting is so frequently exposure related.
My next guess would be jitter (section 3.7). There are some tips for investigating jitter in section 8. There's not much there, but I'll also mention the few SL-specific notes in https://github.com/NVIDIAGameWorks/Streamline/blob/main/docs/ProgrammingGuideDLSS.md#100-troubleshooting
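For comparison, a typical Halton(2,3) jitter generator looks roughly like the sketch below (plain math, nothing Streamline-specific). The things worth double-checking are the phase count, the centering around zero, and the sign convention between the value applied to the projection matrix and the one passed as sl::Constants::jitterOffset:

```cpp
#include <cmath>

// Radical inverse of `index` in the given base (van der Corput sequence).
float RadicalInverse(int index, int base)
{
    float result = 0.0f;
    float fraction = 1.0f / static_cast<float>(base);
    while (index > 0) {
        result += static_cast<float>(index % base) * fraction;
        index /= base;
        fraction /= static_cast<float>(base);
    }
    return result;
}

// 8-phase Halton(2,3) jitter in pixel units, centered on zero.
// Index 0 is skipped because the radical inverse of 0 is 0 in every base.
void GetHaltonJitter(int frameIndex, float& jitterX, float& jitterY)
{
    const int phase = (frameIndex % 8) + 1;
    jitterX = RadicalInverse(phase, 2) - 0.5f;
    jitterY = RadicalInverse(phase, 3) - 0.5f;
}
```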
That seems unlikely to me. We are using, IIRC, a Halton sequence with 8 phases. The movement of the mouth is much larger than a pixel, so the history buffer for jitter would not even apply in this case, as I understand it. It's 100% down to how reactive DLSS is to color data. With FSR 3.1 it is much more biased towards color changes, whereas DLSS seems overly confident, meaning it does not bias and instead relies on trained patterns.
This is why I think I need dedicated support from someone who understands what's going on here. It feels like DLSS needs some kind of trick or hack to work properly in this use case, but because the issues mainly appear in low-lighting environments, it feels like a bug.
At most, the jitters are wrong in that they are inverted (a quick way to test that is sketched at the end of this comment). The game already has TAA support, so we are merely feeding in the same information in the format required by the documentation.
FSR 3.1.1 even lets us customise how reactive it is (velocity factor), which stock DLSS does not offer. We might just need something like Preset C but even more reactive. Maybe forcing a color bias would work as a hack there.
We have a published file that can be used here if you want to see the debug buffers: https://www.nexusmods.com/skyrimspecialedition/mods/130669
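On the inversion point above, a minimal debug sketch of what I mean: flip the sign of the jitter we hand to Streamline at runtime and compare the ghosting. slSetConstants and jitterOffset are the real Streamline names as far as I know; everything else (the toggle, jitterX/jitterY, frameToken, viewport) is placeholder context from our own code:

```cpp
// Debug-only sketch: invert the jitter sign passed to Streamline and compare results.
static bool s_invertJitter = false;  // flip at runtime, e.g. from a debug menu

sl::Constants constants{};
constants.jitterOffset = { s_invertJitter ? -jitterX : jitterX,
                           s_invertJitter ? -jitterY : jitterY };
// ... camera matrices, mvecScale, cameraNear/Far, etc. filled in as before ...
slSetConstants(constants, *frameToken, viewport);
```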
Fixed this issue. For some reason DLSS was using Quality mode instead of DLAA internally, even though the resolutions were identical. Another bug?
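For anyone hitting the same thing, one way to rule that out (not necessarily how it was fixed here) is to request DLAA explicitly in the options instead of relying on matching render and output resolutions; a sketch, assuming your Streamline version exposes an eDLAA mode in sl_dlss.h:

```cpp
// Sketch: explicitly request DLAA rather than letting a 100% scale factor imply it.
// Enum and field names follow my reading of sl_dlss.h and may vary between releases.
sl::DLSSOptions dlssOptions{};
dlssOptions.mode = sl::DLSSMode::eDLAA;  // rather than eMaxQuality at native resolution
dlssOptions.outputWidth = renderWidth;   // with DLAA, render and output resolutions match
dlssOptions.outputHeight = renderHeight;
slDLSSSetOptions(viewport, dlssOptions);
```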
We are experiencing severe ghosting with all DLSS presets in dim lighting conditions.
We have tried all presets. DLSS is running in LDR mode. Motion vectors are unlikely to include perfect face deformation information. Using DLSS 3.7.20.0.
Source code: https://github.com/doodlum/enb-anti-aliasing/blob/main/src/Streamline.cpp
Motion vector format is correct: https://github.com/doodlum/skyrim-community-shaders/blob/dev/package/Shaders/Common/MotionBlur.hlsli
Video of the issue: https://streamable.com/nhuhm6
An email has already been sent to NVIDIA, but we had no response. No response on Discord either. 🤷 FSR 3.1 works completely fine; only DLSS has issues for us. DLSS-G works fine. We do not have an application ID either, given that NVIDIA never responded.