nerfstudio-project / nerfacc

A General NeRF Acceleration Toolbox in PyTorch.
https://www.nerfacc.com/

Consideration behind the transmittance implementation #138

Closed SevenLJY closed 1 year ago

SevenLJY commented 1 year ago

Hi Ruilong,

Thanks for your great work!

I noticed that in render_transmittance.cu you implement the transmittance accumulation with a cumulative sum (`cumsum`) rather than the more common cumulative product (`cumprod`). I'm wondering whether there is a performance consideration behind this choice. Is it mainly aimed at speeding up the computation, or does it give better rendering quality? I'm curious because my own implementation with `cumprod` seems to produce less smooth results during training than the `rendering` function you provide, and I'm trying to debug my program. Thanks in advance!
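For concreteness, here is a minimal PyTorch sketch of the two formulations as I understand them (tensor shapes and function names are my own, not nerfacc's). Both compute the exclusive transmittance T_i = prod_{j<i} (1 - alpha_j) = exp(-sum_{j<i} sigma_j * delta_j); the first uses `cumprod` over (1 - alpha), the second uses `cumsum` over sigma * delta in log space:

```python
import torch

def transmittance_cumprod(sigmas, deltas):
    # alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - torch.exp(-sigmas * deltas)
    # Exclusive cumulative product: T_i = prod_{j<i} (1 - alpha_j), with T_0 = 1
    ones = torch.ones_like(alphas[..., :1])
    return torch.cumprod(torch.cat([ones, 1.0 - alphas], dim=-1), dim=-1)[..., :-1]

def transmittance_cumsum(sigmas, deltas):
    # Exclusive cumulative sum in log space: T_i = exp(-sum_{j<i} sigma_j * delta_j)
    zeros = torch.zeros_like(sigmas[..., :1])
    accum = torch.cumsum(torch.cat([zeros, sigmas * deltas], dim=-1), dim=-1)[..., :-1]
    return torch.exp(-accum)

# Hypothetical check that the two agree up to floating-point error
sigmas = torch.rand(4, 128) * 5.0        # per-sample densities (made-up values)
deltas = torch.full((4, 128), 0.01)      # per-sample step sizes (made-up values)
print(torch.allclose(transmittance_cumprod(sigmas, deltas),
                     transmittance_cumsum(sigmas, deltas), atol=1e-6))
```

In exact arithmetic these are identical, so any difference I see should only come from floating-point behavior, which is part of what I'm trying to rule out.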

Here is a peek at what I mean by less "smooth", shown as a depth visualization.

(Screenshot: depth visualization with your `rendering` function)

(Screenshot: depth visualization with my implementation)