facebookresearch / robust-dynrf

An algorithm for reconstructing the radiance field of a dynamic scene from a casually-captured video.
MIT License
219 stars 11 forks

Problems with train.py #2

Open tb2-sy opened 1 year ago

tb2-sy commented 1 year ago

Nice work! While reading the code in train.py, I noticed that both the static and dynamic TensoRF models are forwarded four times, with each forward pass producing a different output. What is the purpose of this? It seems like it would greatly slow down training. Why not forward once and compute all the desired outputs?

alex04072000 commented 1 year ago

Thank you so much for your interest in our work! The four forward passes serve different purposes. The first renders the current frame for the single-frame losses, such as the RGB and monocular depth losses. The second renders a novel-view, novel-time result to regularize the density and the blending masks toward zero. The third and fourth render the neighboring frames for the pair-wise losses, such as the reprojection and disparity losses. We agree that the current implementation is not optimized for speed and could potentially be sped up with a refactoring.
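To make the structure of the training step concrete, here is a minimal toy sketch of the four passes described above. All function and variable names (`render`, `training_step`, `target`, etc.) are hypothetical stand-ins for illustration, not the repository's actual API, and the "losses" are placeholder arithmetic rather than the real loss terms.

```python
# Hypothetical sketch of a four-pass training step; names and losses
# are illustrative, not the actual robust-dynrf implementation.

def target(view, t):
    """Toy stand-in for the ground-truth supervision signal."""
    return view + t

def render(model, view, t):
    """Toy stand-in for one TensoRF forward pass.

    Returns a dict of rendered quantities; a real renderer would
    also produce depth, blending masks, etc.
    """
    rgb = model(view, t)
    density = abs(rgb)  # pretend density output for regularization
    return {"rgb": rgb, "density": density}

def training_step(model, view, t, novel_view, novel_t):
    # Pass 1: render the current frame for single-frame losses
    # (RGB reconstruction + monocular depth supervision).
    cur = render(model, view, t)
    rgb_loss = (cur["rgb"] - target(view, t)) ** 2

    # Pass 2: render a novel view / novel time to regularize the
    # density and blending masks toward zero.
    novel = render(model, novel_view, novel_t)
    reg_loss = novel["density"]

    # Passes 3 & 4: render the two neighboring frames for
    # pair-wise losses (reprojection, disparity).
    prev = render(model, view, t - 1)
    nxt = render(model, view, t + 1)
    pair_loss = abs(cur["rgb"] - prev["rgb"]) + abs(cur["rgb"] - nxt["rgb"])

    return rgb_loss + reg_loss + pair_loss

# Example usage with a toy "model":
model = lambda view, t: 0.1 * view + 0.2 * t
loss = training_step(model, view=1.0, t=2.0, novel_view=3.0, novel_t=4.0)
```

Because passes 3 and 4 depend on the same model as pass 1 but query different timestamps, batching the four passes into one forward would require reshaping the ray/time inputs and splitting the outputs afterward, which is the kind of refactoring mentioned above.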

tb2-sy commented 1 year ago

Thanks for your reply! I now understand the whole process and will explore some acceleration solutions. Thank you!