This repo is not the original repo for the paper. Please see the original TensorFlow 1 repo here: https://github.com/czq142857/implicit-decoder
Also, the weights provided in this repo and in the other TensorFlow 1 repo (https://github.com/czq142857/IM-NET) were trained on all 13 categories with a single model, whereas the original paper trained a separate model for each individual category.
Another thing is that Chamfer Distance (CD) can be evaluated under different settings: the per-point distance can be L1 or L2, and the shapes can also be normalized in different ways. I do not expect the numbers reported in different papers to be consistent, even for the same testing method.
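For illustration, here is a minimal sketch of how those choices move the number. The normalization helpers and the L1-vs-squared-L2 variants below are common conventions I am assuming for the example, not the exact settings used by any particular paper:

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer(a, b, metric="l2_sq"):
    """Symmetric Chamfer Distance between point sets a (N,3) and b (M,3)."""
    d_ab, _ = cKDTree(b).query(a)  # for each point in a, distance to nearest point in b
    d_ba, _ = cKDTree(a).query(b)  # and vice versa
    if metric == "l1":
        return d_ab.mean() + d_ba.mean()
    return (d_ab ** 2).mean() + (d_ba ** 2).mean()  # squared-L2 convention

def to_unit_sphere(p):
    """Center at the origin and scale so the farthest point has radius 1."""
    p = p - p.mean(axis=0)
    return p / np.linalg.norm(p, axis=1).max()

def to_unit_cube(p):
    """Translate to the positive octant and scale the longest side to 1."""
    p = p - p.min(axis=0)
    return p / np.ptp(p, axis=0).max()

# Two noisy samplings of the "same" shape: the reported CD depends heavily
# on both the metric and the normalization applied to each shape.
rng = np.random.default_rng(0)
a = rng.random((10000, 3))
b = a + 0.01 * rng.standard_normal((10000, 3))
for norm in (lambda p: p, to_unit_sphere, to_unit_cube):
    print(chamfer(norm(a), norm(b), "l1"), chamfer(norm(a), norm(b), "l2_sq"))
```

With the same two point sets, the L1 and squared-L2 variants can easily differ by an order of magnitude, and each normalization rescales the result again, so a reported "x10^3" number is only comparable once these conventions are fixed.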
Hello,
First of all, thank you for open-sourcing the code and for the amazing work! I am attempting to decode the trained latent vectors you provide (and also to generate latent vectors from pre-generated voxels) in order to reconstruct shapes by passing them through the decoder. In my experiments I only consider the Chair class of ShapeNet. However, I observe some imperfect reconstructions, similar to the one attached (the attached reconstruction is for model ce2ff5c3a103b2c17ad11050da24bb12).

I also attempted to compute the Chamfer Distance using the implementation here, sampling 10,000 points on the mesh, and obtained a score of 6.1 (multiplied by 10^3). Isn't this higher than the value reported in the paper? Is the evaluation done differently from DeepSDF's computation of Chamfer Distance? I also notice a similar difference from the CD reported in Table 1 of the PQ-NET paper (https://arxiv.org/pdf/1911.10949.pdf). I understand that PQ-NET is evaluated on the PartNet dataset while IM-NET uses ShapeNet, but isn't the difference still large? A rough sketch of my evaluation is included below.

Finally, I observe similarly imperfect reconstructions for relatively complicated chairs, such as those with wheels underneath. Is this due to a shortage of such examples in the training data? Thank you in advance for your time and clarification.
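For reference, here is roughly how I compute the score. This is only a sketch: the file paths are hypothetical, and I am assuming the squared-L2 convention with 10,000 surface samples per mesh, drawn with trimesh:

```python
import numpy as np
import trimesh
from scipy.spatial import cKDTree

def chamfer_l2_sq(a, b):
    """Symmetric Chamfer Distance with squared Euclidean distances."""
    d_ab, _ = cKDTree(b).query(a)
    d_ba, _ = cKDTree(a).query(b)
    return (d_ab ** 2).mean() + (d_ba ** 2).mean()

# Hypothetical paths to the ground-truth and reconstructed meshes for
# ShapeNet chair ce2ff5c3a103b2c17ad11050da24bb12.
gt = trimesh.load("gt/ce2ff5c3a103b2c17ad11050da24bb12.obj", force="mesh")
rec = trimesh.load("recon/ce2ff5c3a103b2c17ad11050da24bb12.obj", force="mesh")

# 10,000 points sampled uniformly on each surface.
pts_gt, _ = trimesh.sample.sample_surface(gt, 10000)
pts_rec, _ = trimesh.sample.sample_surface(rec, 10000)

print("CD x 10^3:", 1e3 * chamfer_l2_sq(np.asarray(pts_gt), np.asarray(pts_rec)))
```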
Regards, Ramana