Can anyone share their opinion to help clear up my confusion? My understanding is that the main idea behind DeepSDF is to learn latent features that "embed" the underlying shape information of a mesh, supervised by its SDF, and that these features can be optimized at test time. However, in the reconstruct.py file, I noticed that DeepSDF directly uses the preprocessed ShapeNet ground-truth SDF samples from the test split (gt val) as gt_sdf to supervise the optimization of the latent features during inference.
My question is whether it is accurate to say that DeepSDF uses the ground-truth test meshes from ShapeNet when evaluating its performance. The procedure appears to be: test GT meshes -> test GT SDFs -> supervision for latent optimization during inference. If that is the case, shouldn't any compared method also be given the ground-truth test meshes for the comparison to be fair?
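To make the procedure I am asking about concrete, here is a minimal sketch of DeepSDF-style test-time latent optimization. This is not the actual reconstruct.py code; the tiny decoder, tensor shapes, and hyperparameters are placeholders I made up for illustration. The point is the last block: the decoder weights stay frozen, and the latent code is fit against SDF samples that, in the real pipeline, are derived from the ground-truth test meshes.

```python
import torch

torch.manual_seed(0)

# Stand-in for the pretrained DeepSDF decoder: maps (latent, xyz) -> SDF.
# The real decoder is an 8-layer MLP; this small one is just for illustration.
LATENT_DIM = 64
decoder = torch.nn.Sequential(
    torch.nn.Linear(LATENT_DIM + 3, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 1),
)
for p in decoder.parameters():
    p.requires_grad_(False)  # decoder weights are frozen at test time

# "Ground-truth" SDF samples for one test shape. In the real pipeline these
# come from the preprocessed ShapeNet test meshes -- exactly the supervision
# signal the question is about.
xyz = torch.rand(1024, 3) * 2 - 1
gt_sdf = torch.rand(1024, 1) * 0.2 - 0.1

# Only the latent code is optimized during inference.
latent = torch.zeros(1, LATENT_DIM, requires_grad=True)
opt = torch.optim.Adam([latent], lr=1e-2)

losses = []
for _ in range(100):
    opt.zero_grad()
    inp = torch.cat([latent.expand(xyz.shape[0], -1), xyz], dim=1)
    pred = decoder(inp)
    # DeepSDF uses a clamped L1 loss on SDF values; 0.1 is the paper's delta.
    loss = torch.nn.functional.l1_loss(
        pred.clamp(-0.1, 0.1), gt_sdf.clamp(-0.1, 0.1)
    )
    loss.backward()
    opt.step()
    losses.append(loss.item())

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

So the fairness question is really about this loop: the loss being minimized at inference is computed against SDF samples of the test shape itself, rather than, say, a partial observation of it.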