Thanks for releasing this great code. I have two questions regarding the evaluation.
Your evaluation code seems to consider only one image (000.jpg) from each test model, although there are 20 rendered images per model. Is this the standard evaluation protocol, or does it make no difference when all images are used?
After running eval_meshes.py on your pretrained model, I got a much lower Chamfer distance (0.02) than reported in the paper (0.21). Is this due to the scaling applied during preprocessing, where points are normalized into (-0.5, 0.5)? Moreover, Table 1 of the paper reports "Chamfer-L1", but your code actually computes the L2 distance. Could you please clarify?
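For reference, here is a small NumPy sketch (my own, not the repository's evaluation code) of the two conventions as I understand them: "Chamfer-L1" averages Euclidean nearest-neighbor distances, while "Chamfer-L2" averages the squared distances. It also shows why scaling matters: rescaling both point sets by a factor s scales the L1 variant by s but the L2 variant by s², which could account for a gap like 0.02 vs. 0.21.

```python
import numpy as np

def chamfer_distances(p, q):
    """Symmetric Chamfer distance between point sets p (N,3) and q (M,3).

    Returns (chamfer_l1, chamfer_l2):
      - chamfer_l1: mean of Euclidean nearest-neighbor distances
      - chamfer_l2: mean of *squared* nearest-neighbor distances
    (My naming assumption; the paper/code may use a different convention.)
    """
    # Pairwise Euclidean distance matrix, shape (N, M)
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    nn_pq = d.min(axis=1)  # for each point in p, distance to nearest in q
    nn_qp = d.min(axis=0)  # for each point in q, distance to nearest in p
    chamfer_l1 = 0.5 * (nn_pq.mean() + nn_qp.mean())
    chamfer_l2 = 0.5 * ((nn_pq ** 2).mean() + (nn_qp ** 2).mean())
    return chamfer_l1, chamfer_l2

rng = np.random.default_rng(0)
p = rng.uniform(-0.5, 0.5, size=(200, 3))  # points normalized into (-0.5, 0.5)
q = rng.uniform(-0.5, 0.5, size=(200, 3))
l1, l2 = chamfer_distances(p, q)
# Rescaling both clouds by s multiplies l1 by s and l2 by s**2
s = 2.0
l1_scaled, l2_scaled = chamfer_distances(s * p, s * q)
print(l1, l2, l1_scaled, l2_scaled)
```

So depending on which convention the code uses and which scale the point clouds are in, the numbers are not directly comparable to the table.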