Open sailor-z opened 5 years ago
Same question. And without data augmentation, the highest accuracy I got is 89.5%.
I realize this issue is quite old, but for anyone running into the same problem I was until recently: I was finally able to reproduce the results of the paper in my own implementation by making the following modifications compared to what is stated in the paper:

- Weight the transform-regularization loss as `0.001 * tf.nn.l2_loss(...)`, but note that `tf.nn.l2_loss` has a sneaky factor of 1/2 that isn't included in the paper.
- Regularize `I - A.T @ A` rather than `I - A @ A.T` (since the output of the network is used as `x @ A`, it is really an `A.T`). I doubt that makes a large difference.
- Use a dropout keep probability of `0.7` as in the code (the paper states 0.5).
- Most importantly, use exactly the data provided here, and don't shuffle the point order. If you want to train with 1024 points (as in the paper/code here), use the first 1024. The order is significant, and the first 1024 points are more evenly spread out than a random sample of the 2048 provided. See the histograms below. All the shuffling in the code affects the batch ordering, not the point ordering. PointNet itself is order-invariant, so shuffling after slicing makes no difference, but if you slice after shuffling your input set will be different and performance will suffer.
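For reference, here is a minimal NumPy sketch of the regularizer as the TF code effectively computes it. The function name and the NumPy translation are mine, not from the repo; the point is that `tf.nn.l2_loss(t)` is `sum(t**2) / 2`, so the effective weight is `0.001 / 2` relative to the paper's `||I - A A^T||^2` formula:

```python
import numpy as np

def transform_regularizer(A, weight=0.001):
    """Feature-transform regularizer, matching what the TF code computes.

    A: (K, K) matrix predicted by the T-Net, applied to features as x @ A.
    tf.nn.l2_loss(t) = sum(t**2) / 2 -- note the factor of 1/2 that is
    not written in the paper's regularizer.
    """
    K = A.shape[0]
    diff = np.eye(K) - A.T @ A   # A.T @ A, not A @ A.T
    return weight * np.sum(diff ** 2) / 2.0

# An orthogonal transform (here the identity) incurs zero penalty.
print(transform_regularizer(np.eye(64)))
```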
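On the point-ordering issue, a small sketch (names and structure are mine) of the safe sampling order, slice first, then shuffle:

```python
import numpy as np

def sample_points(cloud, n=1024, rng=None):
    """cloud: (N, 3) array as stored in the provided HDF5 files, where the
    first points appear to cover the shape more evenly than a random subset.

    Slice the first n points, *then* optionally shuffle: PointNet is
    order-invariant, so the shuffle is harmless, but shuffling before
    slicing would select a different (less evenly spread) subset.
    """
    pts = cloud[:n]
    if rng is not None:
        pts = rng.permutation(pts, axis=0)  # same set, different order
    return pts
```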
Thank you so much for your excellent work!
I have a question about the data augmentation. I implemented a PyTorch version of PointNet. Without data augmentation, the classification accuracy on ModelNet40 reaches 89.2%, the same as the result in your experiments. But the performance drops to 87.5% when data augmentation is added. The data-augmentation code I use is the released version from your "provider.py" file.