wd-hub closed this issue 5 years ago
Sorry for the delayed response. I found that data shuffling didn't work with the latest version of PyTorch, which is now fixed. I also uploaded multi-view images for the full set of ModelNet40 (https://data.airc.aist.go.jp/kanezaki.asako/data/modelnet40v2png_ori4.tar). The performance may still be worse than the numbers reported in our paper, because those were obtained with the Caffe library. Please see https://github.com/kanezaki/rotationnet for more details. Thanks!
Do you think the performance difference between Caffe and PyTorch is caused by the pre-trained model? As far as I know, the pre-trained AlexNet provided in PyTorch is not as good as the one provided in Caffe.
Yes, it could be. Also, I didn't tune the hyperparameters (learning rate, batch size, number of epochs, etc.) for the PyTorch code, which might be another reason.
I tried to train the network with the released PyTorch code, using the recommended setting, i.e., 3-1. Case (2), with the default configuration. The final classification accuracy @1 is 87.75 after 1000 epochs, which is much worse than the 96.39 reported in the paper with AlexNet.
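For context, "accuracy @1" here is top-1 classification accuracy: the fraction of samples whose highest-scoring class matches the ground truth. A minimal sketch in plain PyTorch (not the repo's exact evaluation code):

```python
import torch

def accuracy_at_1(logits, targets):
    # logits: (N, num_classes) raw scores; targets: (N,) ground-truth labels.
    # Top-1 accuracy: percentage of rows whose argmax equals the target.
    preds = logits.argmax(dim=1)
    return (preds == targets).float().mean().item() * 100.0

# Toy check: 3 of 4 predictions are correct.
logits = torch.tensor([[2.0, 0.1], [0.3, 1.2], [0.9, 0.2], [0.1, 0.8]])
targets = torch.tensor([0, 1, 0, 0])
print(accuracy_at_1(logits, targets))  # 75.0
```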
I'm wondering why I get such a different result. What do your test results look like?
Thanks!