Closed cslxiao closed 4 years ago
I also encountered this problem. I used the TensorFlow code and repeated the experiment three times; the highest accuracy was only 91%, which is a large gap from the 92.9% reported in the paper. Is something wrong there? I hope to get your reply, thank you very much!
Hi,
Thank you for your question. The PyTorch code is consistent with what we report in the paper. That said, we did not update the TensorFlow code to reflect the new results.
I ran the PyTorch code with the script
python main.py --exp_name=dgcnn_1024 --model=dgcnn --num_points=1024 --k=20 --use_sgd=True
and I always get results slightly worse than those reported in the paper. I used the best test results observed during training. In particular, for average accuracy (mean class accuracy), the gap from the reported numbers is larger. Are there any special settings or tricks for running the code? Thanks in advance.
Can you tell me how large the gap is on your side? I re-ran the code and got the same numbers (even better).
I got 92.3 for test accuracy and 88.9 for test average accuracy.
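For readers comparing these two numbers: overall accuracy and mean class accuracy can diverge considerably on a class-imbalanced benchmark like ModelNet40, because the former weights every sample equally while the latter averages per-class recalls, so rare classes count as much as common ones. A minimal sketch of the distinction (this helper is illustrative, not taken from the DGCNN repository):

```python
import numpy as np

def overall_and_mean_class_acc(y_true, y_pred, num_classes):
    """Return (overall accuracy, mean per-class accuracy).

    Overall accuracy counts every sample equally; mean class accuracy
    averages the recall of each class, so a poorly predicted rare class
    pulls it down more than it pulls down overall accuracy.
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    overall = float((y_true == y_pred).mean())
    per_class = []
    for c in range(num_classes):
        mask = y_true == c
        if mask.any():  # skip classes absent from the test split
            per_class.append(float((y_pred[mask] == c).mean()))
    return overall, float(np.mean(per_class))

# Toy example: class 1 is rare and always misclassified, so mean class
# accuracy drops well below overall accuracy.
y_true = [0, 0, 0, 0, 1]
y_pred = [0, 0, 0, 0, 0]
print(overall_and_mean_class_acc(y_true, y_pred, num_classes=2))  # (0.8, 0.5)
```

This is why run-to-run variance shows up more strongly in the mean class accuracy: a few flipped predictions in a small class move it by several points.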
Hi @cslxiao, we have released pretrained models and evaluation details here: https://github.com/princeton-vl/SimpleView. You might find them useful.