autonomousvision / differentiable_volumetric_rendering

This repository contains the code for the CVPR 2020 paper "Differentiable Volumetric Rendering: Learning Implicit 3D Representations without 3D Supervision"
http://www.cvlibs.net/publications/Niemeyer2020CVPR.pdf
MIT License

Can I turn off depth_from_visual_hull? #18

Closed: wenzhengchen closed this issue 4 years ago

wenzhengchen commented 4 years ago

https://github.com/autonomousvision/differentiable_volumetric_rendering/blob/1ea03f36ebcfb8995c06f9ddb83a00593ae1d5f4/configs/single_view_reconstruction/multi_view_supervision/ours_rgb.yaml#L9

Hi, I am training single-view image prediction from multi-view supervision on the NMR dataset. I notice that it needs to load the visual hull depth. I wonder how important it is. Can I turn it off and train purely from images?
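For reference, this is the kind of change being asked about. The key name `depth_from_visual_hull` is taken from the issue title and the linked line of `ours_rgb.yaml`; its exact nesting in that file is an assumption:

```yaml
# Hypothetical sketch only: disable loading of the visual hull depth.
# The exact location of this key inside
# configs/single_view_reconstruction/multi_view_supervision/ours_rgb.yaml
# is assumed, not verified.
data:
  depth_from_visual_hull: false
```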

Thank you!

m-niemeyer commented 4 years ago

Hi @wenzhengchen , thanks a lot for your interest in our project!

Yes, in our single-view reconstruction experiments with L_rgb, we use the depth of the visual hull for the Occupancy Loss. Please have a look at Section 3.4 (Training, Occupancy Loss) of our paper: "In the single-view reconstruction experiments, we instead use the first point on the ray which lies inside all object masks (depth of the visual hull). If we have additional depth supervision, we use the ground truth depth for the occupancy loss."

This loss encourages occupancy along a ray when we predict no surface point there but the GT mask says the ray is occupied. By using the visual hull depth, training basically starts from a better estimate; note that the L_rgb loss then improves further over the visual hull (see e.g. Fig. 7). We didn't do an ablation study on this. I suspect that training might take longer if you disable it, and that results might also degrade, but we have no comparison. To give a complete answer: for the multi-view reconstruction experiments (where we train a single model per object), we do not use the visual hull depth.
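As an illustration only (not the repository's actual implementation), the occupancy loss described above can be sketched as a cross-entropy term toward "occupied" at the visual-hull depth point, applied on rays where the GT mask is occupied but no surface was predicted. All names below are hypothetical:

```python
import math

def occupancy_loss(occ_pred, occupied_no_surface):
    """Hypothetical sketch of the occupancy loss described above.

    occ_pred: predicted occupancy probabilities at the supervision point
        on each ray (e.g. the visual-hull depth point).
    occupied_no_surface: parallel list of booleans, True where the GT mask
        is occupied but the network predicted no surface on that ray.
    Returns the mean of -log(p) over the supervised rays, i.e. a
    cross-entropy toward target occupancy 1.
    """
    eps = 1e-8  # numerical guard against log(0)
    terms = [-math.log(p + eps)
             for p, m in zip(occ_pred, occupied_no_surface) if m]
    # average over supervised rays; 0.0 if no ray needs supervision
    return sum(terms) / max(len(terms), 1)
```

Disabling the visual-hull depth would amount to losing this extra per-ray supervision signal, which is why training may be slower or weaker without it.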

I hope this helps. Good luck with your research!

wenzhengchen commented 4 years ago

Thank you so much! It really helps :)