There was no particular decision-making process behind the loss weights. We didn't tune them, so a better selection might improve results. The normal loss weight was set to 0 for ShapeNet because it caused training to diverge in some initial experiments; to be honest, we never revisited this, but we probably should!
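To make the weighting concrete, here is a minimal sketch of how per-term weights combine into a total loss. The weight values are purely illustrative (not the ones from the paper or config); the only point carried over from above is that a weight of 0.0 disables the normal term entirely.

```python
# Hypothetical sketch of a weighted loss combination.
# The weight values below are illustrative, not tuned.
def total_loss(chamfer, normal, edge,
               w_chamfer=1.0, w_normal=0.0, w_edge=0.2):
    # w_normal=0.0 zeroes out the normal loss, matching the
    # ShapeNet setup described above.
    return w_chamfer * chamfer + w_normal * normal + w_edge * edge
```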
Laplacian smoothing didn't seem to add anything compared to the edge length regularizer. Both are regularizers that enforce smoothness of the resulting mesh.
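To illustrate why the two regularizers overlap, here is a toy NumPy sketch of both on a tiny two-triangle mesh. This is not the PyTorch3D implementation (which is batched and differentiable); it just shows that both terms penalize irregular geometry, which is why they end up somewhat redundant.

```python
import numpy as np

# Tiny two-triangle mesh: a unit square split along one diagonal.
verts = np.array([[0., 0., 0.],
                  [1., 0., 0.],
                  [0., 1., 0.],
                  [1., 1., 0.]])
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]

def edge_length_loss(verts, edges):
    # Mean squared edge length: penalizes long edges, pushing
    # vertices toward a uniform distribution over the surface.
    sq = [np.sum((verts[i] - verts[j]) ** 2) for i, j in edges]
    return float(np.mean(sq))

def laplacian_loss(verts, edges):
    # Uniform Laplacian smoothing: distance of each vertex from the
    # centroid of its neighbors; penalizes spiky, non-smooth meshes.
    nbrs = {i: [] for i in range(len(verts))}
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    deltas = [np.linalg.norm(verts[nbrs[v]].mean(axis=0) - verts[v])
              for v in nbrs]
    return float(np.mean(deltas))
```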
At the time of the Mesh R-CNN paper, the mesh renderer wasn't in PyTorch3D. Mesh R-CNN is a supervised approach: the chamfer loss compares the ground truth shape against the predicted shape, whereas the mesh renderer would enforce 2D silhouette consistency. We haven't tried adding it on top of the chamfer loss, but we have a tech report coming soon where we train unsupervised models using the renderer (and no chamfer loss, of course).
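For reference, here is a minimal NumPy sketch of a (squared) chamfer distance between two point clouds, i.e. the supervised 3D loss mentioned above. PyTorch3D's `chamfer_distance` is the batched, differentiable version of this idea; the sketch below is just the bidirectional nearest-neighbor formulation.

```python
import numpy as np

def chamfer(p, q):
    # Pairwise squared distances between every point in p and q.
    d = ((p[:, None, :] - q[None, :, :]) ** 2).sum(-1)
    # Sum of the two nearest-neighbor terms: p -> q and q -> p.
    return float(d.min(axis=1).mean() + d.min(axis=0).mean())
```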
I can see from voxmesh_R50.yaml that for ShapeNet you are using the Chamfer and Edge losses only, while the Normal loss weight is set to 0.0. Why did you choose not to add the mesh_laplacian_smoothing loss to the mix? Thank you!