Hi @Gao-JT ,
Thanks for your interest in our work.
Please note that classification on ModelNet40 is an experiment with large randomness due to the simplicity of the dataset (this is not caused by the seed, since we do set the seed). If you run the same code (not limited to our model, but also other models such as DGCNN, PointNet, etc.) several times, you will get different results, and some SOTA methods may have a larger variance in their ModelNet40 results than ours.
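For context, this is a minimal sketch of the kind of seeding we mean (the function name is illustrative, not the exact code in our repo):

```python
# Minimal sketch of the usual PyTorch seeding setup; illustrative only.
import random
import numpy as np
import torch

def set_seed(seed: int = 0):
    random.seed(seed)                 # Python RNG (e.g. shuffling)
    np.random.seed(seed)              # NumPy RNG (e.g. point sampling / augmentation)
    torch.manual_seed(seed)           # CPU RNG
    torch.cuda.manual_seed_all(seed)  # GPU RNGs
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
    # Even with all of the above, some CUDA kernels are non-deterministic
    # (e.g. atomic adds in scatter/gather ops), so runs can still differ.
```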
Also, the voting strategy itself introduces randomness, and the results without this post-processing factor (i.e., voting) better reflect the performance gained purely from the model design. Thus it is quite normal that you cannot reproduce the best result exactly.
What we can guarantee is that testing the pre-trained model released via the link in our README gives 93.6% accuracy, and 93.9% accuracy after voting if everything goes right.
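For reference, evaluating the released checkpoint looks roughly like the sketch below (the model builder, checkpoint path, and data loader names are hypothetical placeholders, not the exact ones in our repo):

```python
# Rough sketch of evaluating a released classification checkpoint; names are placeholders.
import torch

model = build_model()  # hypothetical: construct the PAConv model with the DGCNN backbone
state = torch.load("checkpoints/modelnet40_cls.pth", map_location="cpu")  # illustrative path
model.load_state_dict(state)
model.eval()

correct, total = 0, 0
with torch.no_grad():
    for points, labels in test_loader:  # hypothetical ModelNet40 test loader
        logits = model(points)
        correct += (logits.argmax(dim=1) == labels).sum().item()
        total += labels.numel()
print(f"overall accuracy: {100.0 * correct / total:.2f}%")
```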
Hopefully this is helpful to you.
Regards, Mino
Dears:
Thanks for your excellent work! I am trying to reproduce the results for 3D object classification on ModelNet40 with the provided code, but the best result I can reproduce using DGCNN as the backbone is only 93.07. Do you know what might be wrong? Is it the randomness of training that causes this gap? If so, could you provide a random seed for training?
Looking forward to hearing from you. Thanks for your excellent work again!
Bro, have you figured out why yet? I got results similar to yours: with DGCNN as the backbone, the classification result is around 93.1...
Hi there,
Here is an explanation of the classification results.
a. For classification on ModelNet40, if you train our model from scratch, the variance is about ±0.5%, so 93.1% is quite normal. In our own reproductions, some SOTA methods have an even larger variance on ModelNet40, and we follow them in reporting the highest result (see the sketch after this list for how such a spread is typically summarized). For instance, you can find a larger variance across reproductions by different people in one of the issues of DGCNN. In our experiments, when we replace our PAConv with pure MLP backbones, we get much lower results than those reported in the papers of the selected backbones, yet we still quote the highest results listed in their papers.
b. Since this is common across different SOTA methods, it is worth emphasizing that the variance is mostly caused by the simplicity of the dataset (pure CAD models with a limited number of categories and samples, which makes it very easy to overfit). If you reproduce our code on the more complex datasets in the part_seg or scene_seg tasks, the results will be stable.
c. For the classification task, I would recommend verifying your models on ScanObjectNN, a real-world classification dataset, where the results are more stable than on ModelNet40.
d. By the way, we can confirm that testing the pre-trained model released via the link in our README gives 93.6% accuracy.
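As mentioned in (a), what matters is the spread over repeated runs; a trivial sketch of how to summarize it (the accuracy values here are made-up placeholders, not our actual runs):

```python
# Illustrative only: summarizing the overall accuracy (%) of repeated training runs.
import statistics

accuracies = [93.1, 93.4, 92.9, 93.6, 93.3]  # placeholder values, not real results
mean = statistics.mean(accuracies)
std = statistics.pstdev(accuracies)
print(f"{mean:.2f} +/- {std:.2f}  (best run: {max(accuracies):.2f})")
# Papers on ModelNet40 commonly report the best run, which is why a single
# reproduction can land ~0.5% below the published number.
```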
About voting: the voting strategy during testing also has a variance of about ±0.5% (we report results both w/ and w/o voting in our paper). Our voting code is similar to RSCNN's, whose released model gets 92.4 w/o voting, while the result reported in their paper is 93.6 w/ voting. By eliminating this post-processing factor, the results without voting better reflect the performance gained purely from the model design and show the effectiveness of our PAConv.
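For concreteness, test-time voting usually amounts to averaging predictions over several randomly augmented passes, roughly like the sketch below (the augmentation and vote count are assumptions, not necessarily what our code does):

```python
# Rough sketch of test-time voting for point-cloud classification; illustrative only.
import torch

def vote_predict(model, points, num_votes: int = 10):
    """points: (B, N, 3) tensor; returns voted class predictions."""
    model.eval()
    logits_sum = 0
    with torch.no_grad():
        for _ in range(num_votes):
            # Random per-sample scaling as an example augmentation.
            scale = torch.empty(points.size(0), 1, 1).uniform_(0.8, 1.2)
            logits_sum = logits_sum + model(points * scale)
    return logits_sum.argmax(dim=1)
```

Because each vote uses a random augmentation, the voted accuracy itself fluctuates from run to run, which is why we also report the w/o-voting numbers.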
Possible adjustments: in your experiments, I recommend adjusting the batch size, the number of GPUs, the number of training epochs, the learning rate, etc., and running the training several times, which may give you a better result.
Hope this is helpful to you guys!
Thanks, Mino
OK, thanks.