jzengust / RGBD2Normal

Code for Deep Surface Normal Estimation with Hierarchical RGB-D Fusion (CVPR2019)

Would it be possible to replace gradient backpropagation (with the gradient calculated by an autograd function) with scalar loss backpropagation? #4

Open Alisa-de opened 3 years ago

Alisa-de commented 3 years ago

Hello, I am currently using your code for surface normal estimation. I was wondering: what is the benefit of calculating the gradient (df) in the loss function and calling output.backward(gradient=df) in the training process, instead of using the loss value and calling loss.backward()?

Btw, have you ever tried letting the depth branch refine the raw depth?

Thank you in advance for your help! Best regards, Alisa
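For context, here is a minimal sketch of the two backward styles being compared. The shapes and the toy function are illustrative assumptions, not code from this repo; for a scalar loss the two styles produce identical gradients when df equals dLoss/doutput:

```python
import torch

x = torch.randn(2, 3, requires_grad=True)

# Style 1: reduce to a scalar loss and call backward() with no arguments.
output = x * 2
loss = output.sum()
loss.backward()
grad_scalar = x.grad.clone()
x.grad = None

# Style 2: keep the non-scalar output and pass df = dLoss/doutput into
# backward(); autograd then computes the vector-Jacobian product df^T J.
output = x * 2
df = torch.ones_like(output)   # dLoss/doutput for loss = output.sum()
output.backward(gradient=df)

assert torch.allclose(grad_scalar, x.grad)  # identical gradients
```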

jzengust commented 3 years ago

Hi Alisa, backward(df) is intended to zero out the gradient of pixels in the masked region. I didn't really try to use the depth branch alone for raw depth refinement, but I believe it is also feasible to do so.

Best regards, Jin
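To illustrate Jin's point, a hedged sketch (toy shapes and a hand-derived df, not the repo's actual loss) of how a zeroed df suppresses gradients from masked pixels, together with the scalar-loss equivalent:

```python
import torch

x = torch.randn(1, 3, 4, 4, requires_grad=True)
output = torch.tanh(x)                          # stand-in for the predicted normals
target = torch.randn(1, 3, 4, 4)
mask = (torch.rand(1, 1, 4, 4) > 0.5).float()   # 1 = pixel has valid ground truth

# df plays the role of dLoss/doutput for a masked L2 loss; where the mask
# is 0, df is 0, so those pixels send no gradient back through the network.
df = (2.0 * (output - target) * mask).detach()
output.backward(gradient=df)

# Scalar-loss equivalent: fold the mask into the loss itself.
# ((output - target) ** 2 * mask).sum().backward() yields the same x.grad.
```

So a scalar masked loss can reproduce the same gradients; passing df just makes the masking explicit at the backward call rather than inside the loss.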

Alisa-de commented 3 years ago

Hi Jin, many thanks for your reply!

If I got it correctly, does that mean this backpropagation process (with backward(df)) is different from simply taking the scalar loss from the loss function and calling loss.backward() in the training code? That is, loss.backward() would not give the masked region zero gradients.

Sorry, I meant jointly doing surface normal estimation and depth refinement (or depth estimation). Given the relationship between depth and surface normals, these two tasks might be able to help each other during training. However, in combining these two tasks I ran into another issue. I would like to use output.backward(df) for backpropagation, so may I ask: in your opinion, would it make a difference whether I call output_d.backward(df) first or output_sn.backward(df) first?

Best regards, Alisa
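On the ordering question above: since PyTorch accumulates gradients additively in .grad, the two backward() calls should produce the same total gradient in either order. A minimal sketch, with output_d / output_sn as hypothetical depth and surface-normal heads sharing a backbone feature (retain_graph=True is needed on the first call because the graph is shared):

```python
import torch

x = torch.randn(4, requires_grad=True)

def run(first, second):
    x.grad = None
    feat = torch.relu(x)        # shared backbone feature
    heads = {"d": feat * 2.0,   # hypothetical depth head
             "sn": feat ** 2}   # hypothetical surface-normal head
    # The first backward must retain the shared graph for the second call.
    heads[first].backward(gradient=torch.ones_like(heads[first]),
                          retain_graph=True)
    heads[second].backward(gradient=torch.ones_like(heads[second]))
    return x.grad.clone()

# Gradients accumulate additively in x.grad, so the order is irrelevant.
assert torch.allclose(run("d", "sn"), run("sn", "d"))
```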