NVlabs / nvdiffrecmc

Official code for the NeurIPS 2022 paper "Shape, Light, and Material Decomposition from Images using Monte Carlo Rendering and Denoising".

About "pdf_weight" #33

Open lijunnankman opened 1 year ago

lijunnankman commented 1 year ago

Hi, it's nice work. However, I'm confused about https://github.com/NVlabs/nvdiffrecmc/blob/main/render/optixutils/c_src/envsampling/kernel.cu#L180. Why is the numerator of `pdf_weight` equal to `params.cols.size(0) * params.cols.size(1)`? My understanding is that `pdf_weight` converts the probability density function p(u,v) in the uv coordinate system into the probability density function p(w_i) over directions.

JHnvidia commented 1 year ago

Hi @lijunnankman,

The `pdf_weight` ratio is a normalization between the area of the 2D sampling domain [w, h] and the unit sphere, plus an additional term that accounts for the area warp of the latlong mapping. Intuitively, the numerator and denominator are flipped because you'll end up dividing by the pdf. The sampling code is inspired by: https://cs184.eecs.berkeley.edu/sp18/article/25
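The normalization described above can be sketched as follows. This is a hedged illustration, not the repo's actual kernel: the function name and signature are hypothetical, and it assumes a latlong map of `h * w` texels with a discrete pdf that sums to 1, where the mapping Jacobian is dω = 2π² sin(θ) du dv.

```python
import numpy as np

def latlong_pdf_to_solid_angle(pdf_texel, h, w, v):
    """Convert a discrete texel pdf p(i,j) (summing to 1 over all h*w texels)
    into a pdf over directions on the unit sphere.

    v in (0, 1) is the vertical texture coordinate, so theta = v * pi.
    The latlong mapping has Jacobian d_omega = 2*pi^2*sin(theta) du dv,
    and the continuous uv pdf is p(u,v) = p(i,j) * w * h, hence:
        p(omega) = p(i,j) * w * h / (2*pi^2*sin(theta))
    The w*h factor is the same resolution product seen in the numerator
    of pdf_weight.
    """
    theta = v * np.pi
    return pdf_texel * (w * h) / (2.0 * np.pi**2 * np.sin(theta))
```

As a sanity check, a uniform texel pdf (1 / (w*h)) at the equator (v = 0.5) gives 1 / (2π²), and integrating that density over the sphere with dω = sin(θ) dθ dφ yields exactly 1.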

lijunnankman commented 1 year ago

Hi @JHnvidia, thanks for the quick reply. I understand it now from step 4 of https://cs184.eecs.berkeley.edu/sp18/article/25. But I have another question about https://github.com/NVlabs/nvdiffrecmc/blob/b3089bba5fa52ece34ceac26965ebe744f4405f8/render/light.py#L50C41-L50C41. Why is the pdf obtained by taking max() over the base channels? In step 1 of https://cs184.eecs.berkeley.edu/sp18/article/25, they only use E to compute the pdf (and don't explain how E is derived from the channels). Have you tried another way to compute the pdf?
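The two alternatives the question contrasts can be sketched side by side. This is an illustrative comparison only, with hypothetical helper names: reducing the RGB environment map with max() over channels (as in the linked `light.py` line) versus a common alternative, a luminance weighting.

```python
import numpy as np

def pdf_from_max(env):
    """env: (h, w, 3) array of radiance values.
    Max over channels is conservative: any texel that is bright in at
    least one channel gets a proportionally large sampling weight."""
    weights = env.max(axis=-1)
    return weights / weights.sum()

def pdf_from_luminance(env):
    """Alternative: weight texels by Rec. 709 luminance, which tracks
    perceived brightness but can under-sample saturated single-channel
    texels (e.g. a pure-blue light)."""
    lum = env @ np.array([0.2126, 0.7152, 0.0722])
    return lum / lum.sum()
```

Both reductions produce a valid normalized pdf; the max() variant avoids assigning near-zero probability to texels that are strong in only one color channel, which would otherwise inflate variance when that channel dominates the lighting.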