Closed · In-line closed this issue 9 months ago
Can you try again with https://github.com/Saancreed/dxvk-nvapi/commit/234eaabc85d25db1c1ba3c9c4df48cf04fed0ce7 applied and let me know if this gets you any further? I haven't rebased this in months, but at least the important part of the diff (the nvapi_gpu.cpp one) should still apply fairly easily.
I've been spoofing NvAPI_GPU_GetArchInfo in an effort to get LatencyFlex2 working. It does work, but it would be nice to have an option to spoof this externally, without needing to patch and recompile.
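For illustration, an external override like the one requested above could be sketched as an environment-variable lookup that replaces the architecture ID reported to the game. This is only a rough sketch, not the actual dxvk-nvapi implementation: the variable name `DXVK_NVAPI_GPU_ARCH` and the helper `resolveArchitecture` are assumptions for this example, and the architecture constants mirror a small subset of NVAPI's `NV_GPU_ARCHITECTURE_ID` values.

```cpp
#include <cstdint>
#include <cstdlib>
#include <string>
#include <unordered_map>

// Illustrative subset of NV_GPU_ARCHITECTURE_ID-style values; the real
// enum lives in nvapi.h and has many more entries.
constexpr uint32_t ARCH_GP100 = 0x130; // Pascal
constexpr uint32_t ARCH_TU100 = 0x160; // Turing
constexpr uint32_t ARCH_GA100 = 0x170; // Ampere

// Hypothetical helper: if the (assumed) DXVK_NVAPI_GPU_ARCH environment
// variable names a known architecture, return that ID; otherwise pass
// through the architecture that was actually detected.
uint32_t resolveArchitecture(uint32_t detectedArch) {
    static const std::unordered_map<std::string, uint32_t> byName = {
        {"Pascal", ARCH_GP100},
        {"Turing", ARCH_TU100},
        {"Ampere", ARCH_GA100},
    };
    const char* env = std::getenv("DXVK_NVAPI_GPU_ARCH");
    if (env != nullptr) {
        auto it = byName.find(env);
        if (it != byName.end())
            return it->second;
    }
    return detectedArch;
}
```

With no variable set, the detected architecture is returned unchanged; setting e.g. `DXVK_NVAPI_GPU_ARCH=Pascal` before launching the game would make the spoofed ID flow into whatever backs `NvAPI_GPU_GetArchInfo`, which is the kind of external knob being asked for here.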
Hi @In-line and @FakeMichau, does https://github.com/jp7677/dxvk-nvapi/pull/146 work for you? It is very similar to the commit Saancreed referred to.
An environment variable would be preferred, because spoofing Turing is useful at times, mainly with CyberFSR (a mod that replaces DLSS with FSR) in games that use nvapi to check the GPU, like Cyberpunk or Witcher 3. Otherwise a game-specific bypass is needed. But I understand this might be a niche use case outside the scope of this issue.
@FakeMichau Since both games you mentioned already support FSR directly, I would consider that indeed an esoteric scenario.
The PR here to spoof Pascal is ready; it would be cool if you could confirm that it solves the LFX issue.
As indicated by @ishitatsuyuki in https://github.com/ishitatsuyuki/LatencyFleX/issues/41#issuecomment-1747951932, there is currently no way to spoof the GPU generation, which is needed to mask an AMD GPU as an NVIDIA GPU in Streamline-based games to enable NVIDIA Reflex.
steam-292030.log