ZhouWeikun closed this issue 1 year ago
Hi! The sparsity loss is proposed in SparseNeuS and is used to remove unnecessary occupancy in the few-shot setting.
I tried training with this loss but found no obvious difference in the dense-capture setting (e.g., NeRF-Synthetic).
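For reference, here is a minimal sketch of this term in PyTorch, following the formulation in the snippet quoted later in this thread; the function name and argument names are illustrative, not the exact code in the repo:

```python
import torch

def sparsity_loss(sdf_samples: torch.Tensor, sparsity_scale: float) -> torch.Tensor:
    # exp(-s * |sdf|) is ~1 for samples near the zero level set and decays
    # towards 0 far from it, so minimizing the mean pushes randomly sampled
    # points away from the surface and suppresses spurious "floater" geometry.
    return torch.exp(-sparsity_scale * sdf_samples.abs()).mean()
```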
The opaque loss encourages the opacity to be either 0 or 1. It is used in my VMesh paper.
I found this loss improves surface quality for NeuS in some scenarios.
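For reference, a minimal sketch of the opaque term, assuming a plain binary-entropy implementation; the clamp and `eps` are illustrative details to keep the log terms finite, not necessarily what the repo does:

```python
import torch

def opaque_loss(opacity: torch.Tensor, eps: float = 1.0e-5) -> torch.Tensor:
    # Binary cross entropy of the opacity against itself is the binary entropy
    # -o*log(o) - (1-o)*log(1-o), which is zero exactly at o = 0 or o = 1, so
    # minimizing it pushes each ray towards fully opaque or fully transparent.
    o = opacity.clamp(eps, 1.0 - eps)  # keep log() finite at the boundaries
    return -(o * torch.log(o) + (1.0 - o) * torch.log(1.0 - o)).mean()
```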
Hope this helps!
Thanks for your reply!
First, thanks for your work. But I have some questions about the loss functions you used in NeuS:
1. `loss_sparsity = torch.exp(-self.config.system.loss.sparsity_scale * out['sdf_samples'].abs()).mean()`
2. `loss_opaque = binary_cross_entropy(opacity, opacity)`
Could you give more info about them, or point me to related papers?