autonomousvision / differentiable_volumetric_rendering

This repository contains the code for the CVPR 2020 paper "Differentiable Volumetric Rendering: Learning Implicit 3D Representations without 3D Supervision"
http://www.cvlibs.net/publications/Niemeyer2020CVPR.pdf
MIT License

question about your paper #7

Closed zdw-qingdao closed 4 years ago

zdw-qingdao commented 4 years ago

I am confused by formulas (4) and (7): why is there an addition in the differentiation?

m-niemeyer commented 4 years ago

Hi @zdw-qingdao , thanks a lot for your interest in the project.

Regarding your questions: in (4), we have a sum because our loss function L is a sum of l1 losses over all sampled pixels u (see (3)). The gradient of L wrt. the network parameters then also becomes a sum over these individual per-pixel losses. In (7), the reason is different: if you differentiate f(p) wrt. the network parameters theta, you have to compute the total derivative, because both f and p are functions of theta. You therefore sum over all partial-derivative terms, in this case (df / dtheta) and (df / dp * dp / dtheta).
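To make the total-derivative point concrete, here is a toy scalar sketch (not code from this repository; `f`, `p_of_theta`, and the concrete formulas are made up for illustration). It checks that summing the two partial-derivative terms, as in eq. (7), matches a finite-difference derivative of the composed function:

```python
# Toy illustration of the total derivative in eq. (7):
# d/dtheta f(p(theta), theta) = df/dtheta + df/dp * dp/dtheta.
# Both f and p depend on theta, so both terms must be summed.

def f(p, theta):
    # toy function of the point p AND the parameters theta directly
    return theta * p

def p_of_theta(theta):
    # toy surface point that itself depends on the parameters
    return theta ** 2

def total_derivative(theta):
    df_dtheta = p_of_theta(theta)   # df/dtheta with p held fixed
    df_dp = theta                   # df/dp
    dp_dtheta = 2 * theta           # dp/dtheta
    return df_dtheta + df_dp * dp_dtheta  # sum of both terms

def numerical_derivative(theta, eps=1e-6):
    # central finite difference of the composed function g(theta) = f(p(theta), theta)
    g = lambda t: f(p_of_theta(t), t)
    return (g(theta + eps) - g(theta - eps)) / (2 * eps)

print(total_derivative(2.0))                  # analytic: 3 * theta^2 = 12.0
print(round(numerical_derivative(2.0), 4))    # 12.0
```

Here g(theta) = theta^3, so g'(theta) = 3 theta^2; dropping either partial term would give the wrong answer.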

neon5d commented 4 years ago

I was also very confused, especially by (5). I think eq. (5) looks like this:

d\hat{I}_u / dtheta = dt / dtheta + dt / dp * dp / dtheta

i.e. the same way as eq. (7).

m-niemeyer commented 4 years ago

In equation (5), the reason for the sum is the same as in (7). The predicted RGB color value for pixel u is \hat{I}_u; let's call it i_pred_u for simplicity here. We get the RGB color value by evaluating the texture field t at the predicted 3D surface point p, so i_pred_u = t(p). If you now want to calculate the gradient, you have to calculate the total derivative (see above), because both t and p depend on the network parameters theta.
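The same structure can be sketched for eq. (5) with toy scalar stand-ins (again illustrative only; `t`, `p_hat`, and their formulas are invented, not the paper's networks). The sketch also shows that keeping only the direct partial dt/dtheta, i.e. ignoring that the surface point depends on theta, gives a wrong gradient:

```python
# Toy stand-ins for eq. (5): i_pred_u = t(p_hat(theta), theta),
# where both the texture field t and the surface point p_hat depend on theta.
import math

def p_hat(theta):
    # toy predicted surface point as a function of the parameters
    return math.sin(theta)

def t(p, theta):
    # toy texture field: depends on the point p and on theta directly
    return theta * p + theta ** 2

def grad_total(theta):
    # total derivative per eq. (5): dt/dtheta + dt/dp * dp/dtheta
    dt_dtheta = p_hat(theta) + 2 * theta   # dt/dtheta with p held fixed
    dt_dp = theta                          # dt/dp
    dp_dtheta = math.cos(theta)            # dp_hat/dtheta
    return dt_dtheta + dt_dp * dp_dtheta

def grad_partial_only(theta):
    # what you'd get if you wrongly ignored p_hat's dependence on theta
    return p_hat(theta) + 2 * theta

theta, eps = 0.5, 1e-6
g = lambda th: t(p_hat(th), th)
num = (g(theta + eps) - g(theta - eps)) / (2 * eps)
print(abs(grad_total(theta) - num) < 1e-4)         # True: total derivative matches
print(abs(grad_partial_only(theta) - num) > 1e-2)  # True: the partial alone is wrong
```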

zdw-qingdao commented 4 years ago

Thank you!!!