NVlabs / neuralangelo

Official implementation of "Neuralangelo: High-Fidelity Neural Surface Reconstruction" (CVPR 2023)
https://research.nvidia.com/labs/dir/neuralangelo/

How to add mask supervision #58

Closed tkl2020 closed 1 year ago

tkl2020 commented 1 year ago

Thanks to the authors for open-sourcing such exciting code. I would like to ask how to add mask supervision. I tried NeuS's mask supervision and a white background, but neither worked. How can I solve this?

chenhsuanlin commented 1 year ago

Hi @tkl2020, while mask supervision is of less interest for our use cases (and thus we have not investigated it), I would be curious to learn how it fails, as a BCE loss on the opacity (as done in NeuS) seems like a reasonable loss to me.
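For reference, a minimal sketch of that idea, assuming the renderer exposes an accumulated per-ray opacity in [0, 1]; the function and tensor names are illustrative, not Neuralangelo's actual API:

```python
import torch.nn.functional as F

def mask_loss(opacity, mask_gt, eps=1e-3):
    """BCE between the rendered per-ray opacity (accumulated volume-rendering
    weights, in [0, 1]) and the binary ground-truth mask for the same rays."""
    opacity = opacity.clamp(eps, 1.0 - eps)  # keep the log terms finite
    return F.binary_cross_entropy(opacity, mask_gt.float())
```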

tkl2020 commented 1 year ago

Thanks for the reply. Over the past two days I have been trying to reconstruct specific objects by adding mask supervision. Here is what I tried:

  1. Use a white background: trained for 50k iterations, but the mesh from extract_mesh has 0 vertices and faces.
  2. Add the mask loss from NeuS: first set the background's enabled flag to False and set the background num_samples to 0. If I understand correctly, opacity_all in Neuralangelo can replace weights_sum in NeuS when computing the loss between the predicted mask and the ground truth (see the sketch after this list). However, the mask loss did not converge, and the obtained mesh was completely inconsistent with the object.
  3. Extract only the image pixels inside the mask for training; the resulting mesh does not match the real object.
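For context, a hedged sketch of how attempt 2 could be wired into a combined objective. `output["opacity_all"]` is the accumulated per-ray opacity named above; every other key, name, and weight is a placeholder rather than Neuralangelo's actual interface:

```python
import torch.nn.functional as F

def total_loss(output, batch, w_eik=0.1, w_mask=0.1):
    """Hypothetical combined objective with a NeuS-style mask term.
    All dictionary keys and loss weights here are illustrative placeholders."""
    loss_rgb = F.l1_loss(output["rgb"], batch["image"])
    # Eikonal regularizer on the SDF gradients, as in NeuS/Neuralangelo.
    loss_eik = ((output["gradient"].norm(dim=-1) - 1.0) ** 2).mean()
    # BCE between accumulated opacity and the binary object mask.
    opacity = output["opacity_all"].clamp(1e-3, 1.0 - 1e-3)
    loss_mask = F.binary_cross_entropy(opacity, batch["mask"].float())
    return loss_rgb + w_eik * loss_eik + w_mask * loss_mask
```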
tkl2020 commented 1 year ago

I look forward to hearing from you.

chenhsuanlin commented 1 year ago

Could you share some of the visualizations, including the rendered images at initialization? You can also set model.background.enabled=False, which should disable everything related to the background (see also #55) and simplify the problem.
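For example, in a YAML experiment config this could look like the following; only the dotted key model.background.enabled comes from this thread, and the surrounding layout is a sketch:

```yaml
model:
    background:
        enabled: False   # turn off the background model entirely
```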

tkl2020 commented 1 year ago

Yes, I'm happy to share the visualizations from my side. This time I trained on a different batch of data, and the mask loss did converge, but the resulting mesh contains redundant geometry. I suspect this is a problem with the bounding-sphere settings. I see that the sphere is currently determined manually with Blender and a Jupyter notebook; is there a fully automated way to determine the bounding-sphere region of the masked object?

(screenshots attached)

chenhsuanlin commented 1 year ago

There's currently no straightforward way to estimate the bounding sphere to precisely fit the object/scene of interest, without knowing beforehand what the user wants to reconstruct. If you have any suggestions, please do let us know!
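One possible heuristic, sketched purely for illustration and not part of the repository: back-project the COLMAP sparse points, keep those whose projections fall inside the object masks in most views, and fit a padded sphere around them. The camera convention, data layout, and all names below are assumptions.

```python
import numpy as np

def bounding_sphere_from_masked_points(points, cameras, masks, min_ratio=0.8, pad=1.1):
    """Fit a bounding sphere to sparse points that land inside the object mask
    in at least `min_ratio` of the views where they project into the image.

    points:  (N, 3) sparse points in world coordinates.
    cameras: list of (K, R, t) tuples with x_cam = R @ x_world + t.
    masks:   list of (H, W) boolean arrays, one per camera, True on the object.
    """
    votes = np.zeros(len(points))
    counts = np.zeros(len(points))
    for (K, R, t), mask in zip(cameras, masks):
        cam = points @ R.T + t                              # world -> camera
        in_front = cam[:, 2] > 1e-6
        uvw = cam @ K.T
        uv = uvw[:, :2] / np.clip(uvw[:, 2:3], 1e-6, None)  # pixel coordinates
        H, W = mask.shape
        valid = in_front & (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
        counts += valid
        idx = np.where(valid)[0]
        votes[idx] += mask[uv[idx, 1].astype(int), uv[idx, 0].astype(int)]
    keep = points[(counts > 0) & (votes / np.maximum(counts, 1) >= min_ratio)]
    center = keep.mean(axis=0)
    radius = pad * np.linalg.norm(keep - center, axis=1).max()
    return center, radius
```

The returned center and radius could then be used to re-normalize the scene so that the masked object fills the unit sphere, but this is only one possible heuristic and has not been validated against the repository's preprocessing.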

mli0603 commented 1 year ago

Hi @tkl2020

During the development of Neuralangelo, I experimented with NeuS's mask-loss setup and it worked. As @chenhsuanlin said, we eventually moved away from using masks since they are not of interest for our use cases.

tkl2020 commented 1 year ago

Thank you to the authors' team for the patient explanations. I have since resolved the issue and successfully reconstructed the masked objects.

chenhsuanlin commented 1 year ago

Great! Closing as complete. If you would like to contribute support for masked objects, please feel free to submit a PR.

SWWdz commented 1 year ago

I'm also interested in mask supervision. How can I use a mask to supervise my reconstruction? Could you describe the process in detail? Thanks!