gkjohnson opened 5 months ago
Adding an SVGF implementation sounds great. Can you explain how this would work? Are you wanting to reproject the pathtraced colors as the camera moves? From what I understand SVGF won't work well with antialiasing, stochastic transparency, or depth of field effects, is that right? I guess I want to understand the scenarios in which this would and wouldn't work.
I will explain how the passes work, in the order the algorithm applies them, below. For SVGF, it is common practice to apply TAA after getting the final result, and the interaction with depth of field needs more investigation. Here's a scenario where it doesn't work well: SVGF introduces temporal blur, which means that even after a light source is turned off its light is still present for a while, and glossy highlights leave trails (ghosting).
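To illustrate the ghosting described above, here is a minimal single-pixel sketch of the temporal accumulation step, assuming a fixed blend factor (the full algorithm also reprojects with motion vectors and tracks per-pixel variance; the function name and parameters are illustrative, not the path tracer's API):

```javascript
// Exponential moving average used by temporal accumulation:
// a small `alpha` suppresses noise but also makes the history
// lag behind sudden lighting changes, producing ghosting.
function temporalAccumulate( history, sample, alpha = 0.2 ) {

	return history + alpha * ( sample - history );

}

// Light is on (radiance 1.0) for a while, then switches off (0.0):
let c = 1.0;
for ( let i = 0; i < 5; i ++ ) c = temporalAccumulate( c, 0.0 );
// After 5 frames c is still ~0.33, i.e. the stale light "ghosts".
```

With alpha = 0.2 the history only decays by a factor of 0.8 per frame, which is exactly the lingering-light behavior mentioned above.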
Can you elaborate on what exactly you need for the SVGF implementation? Do you need a pass with no textures and no specular or transparent surfaces? Right now the path tracer only supports outputting a single final image. The bsdfEval function evaluates the ray scattering and color contributions of the materials. It would probably be possible to output the specular and transmissive contributions separately from that function along with weights so they could be combined separately - or at least saved as separate textures.
I would like to start with a simplified version of the SVGF algorithm, similar to the one implemented in the following repository (https://github.com/TheVaffel/spatiotemporal-variance-guided-filtering).
The inputs are
I've attached a screenshot below that is not a simplified version, but should give you an idea.
Reference: Real Time Path Tracing and Denoising in Quake II RTX (https://www.youtube.com/watch?v=FewqoJjHR0A)
cc @gkjohnson
Thanks for the references! And sorry for the delay - I wanted to take a look at the video and make sure I generally understood the approach.
From the video and the diagram, though, it looks like transparent / transmissive surfaces aren't handled by SVGF at all, is that correct? I'm wondering what your plans are for these surfaces. Will they be ignored and remain noisy? I.e. only opaque surfaces will be denoised?
Use the rendered results of direct diffuse, indirect diffuse, and indirect specular (excluding the direct diffuse texture).
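If the texture (albedo) is excluded from the diffuse channels before filtering, it has to be multiplied back in afterwards. A hypothetical sketch of that recombination step, with illustrative names rather than the path tracer's actual API (shown per channel as scalars for simplicity):

```javascript
// Recombine the three denoised channels. The diffuse terms were
// demodulated (albedo divided out) before filtering so texture
// detail is not blurred; it is reapplied here. Specular is
// filtered and added back as-is.
function recombine( directDiffuse, indirectDiffuse, indirectSpecular, albedo ) {

	return ( directDiffuse + indirectDiffuse ) * albedo + indirectSpecular;

}
```

Filtering the demodulated signal is what lets the denoiser blur aggressively without smearing texture detail.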
It sounds like getting these textures would be the next step, which can be done via MRT. Does modifying the bsdfEval function to output a set of weights and an output color (or color premultiplied by the weight) from each lobe / sampling path (specular, diffuse, etc) sound reasonable?
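A rough sketch of that idea, with all names hypothetical: each lobe reports its color and sampling weight so the caller can write them to separate render targets, and the weighted sum still reconstructs the single combined value the path tracer outputs today, so the split is lossless:

```javascript
// Hypothetical per-lobe output (scalars stand in for RGB colors).
// In the real shader these would be written to separate MRT
// attachments instead of being summed immediately.
function evalLobes() {

	return [
		{ name: 'diffuse', color: 0.4, weight: 0.5 },
		{ name: 'specular', color: 0.9, weight: 0.3 },
		{ name: 'transmissive', color: 0.2, weight: 0.2 },
	];

}

// The current single-image output is just the weighted sum:
const combined = evalLobes().reduce( ( s, l ) => s + l.color * l.weight, 0 );
```

Keeping the weights alongside the colors also lets the denoiser filter each lobe with different parameters before recombining.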
Cannot wait to see this implemented 🤩
https://github.com/gkjohnson/three-gpu-pathtracer/issues/292#issuecomment-1913160472
cc @KihwanChoi12