liruihui closed this issue 5 years ago.
I met the same issue. Could I see your test code for ModelNet40? @liruihui @Yochengliu
This is my test code, based on the paper.
@liruihui The 'scale' in Table 4 of @Yochengliu's paper should refer to multi-scale grouping (different from MSG in PointNet++), not data processing. So you should retrain the ModelNet40 model with multi-scale grouping and check whether the performance reaches 92.9%. Then you could test the results with ten votes.
They conduct the testing with random scaling and average the predictions. I just used the released code with a single scale, but after voting the performance drops. That should not be the case.
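To make sure we mean the same thing, this is roughly how I understand the voting procedure (a minimal sketch; `model`, `test_loader`, and the scale range [0.8, 1.25] are my own assumptions, not taken from the released code):

```python
import numpy as np
import torch

NUM_VOTES = 10  # number of voting passes per test cloud

def vote_evaluate(model, test_loader, device="cuda"):
    """Average logits over several randomly scaled copies of each point cloud."""
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for points, labels in test_loader:            # points: (B, N, 3)
            points, labels = points.to(device), labels.to(device)
            logits_sum = 0.0
            for _ in range(NUM_VOTES):
                scale = np.random.uniform(0.8, 1.25)  # random isotropic scaling per vote
                logits_sum = logits_sum + model(points * scale)
            preds = (logits_sum / NUM_VOTES).argmax(dim=1)
            correct += (preds == labels.view(-1)).sum().item()
            total += labels.numel()
    return correct / total
```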
@liruihui I do know that, but I think a model trained with MSG would do better than your model trained without MSG when running ten votes with random scaling.
@Hlxwk I do not think the MSG is that decisive. Even if you replace RS-CNN with plain shared MLPs, it still works very well.
@liruihui Maybe you are right. I am also curious about the ten votes, but I met the same accuracy drop when running ten-vote tests. I hope @Yochengliu could share his ten-vote code.
With the released code, I also got 92.2%.
As for the scale part, it seems to mean extracting RS features from neighborhoods of different scales, roughly like the sketch below.
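Just to illustrate what I mean (a simplified sketch of grouping at multiple radii, with mean pooling standing in for the real RS-Conv feature extractor; the radii are my own guesses, not the paper's settings):

```python
import torch

RADII = [0.1, 0.2, 0.4]  # assumed grouping radii, not the paper's configuration

def multi_scale_group(xyz, centroids):
    """Group neighbors at several radii and concatenate a per-scale feature.

    xyz: (B, N, 3) input points, centroids: (B, M, 3) query points.
    """
    dists = torch.cdist(centroids, xyz)                    # (B, M, N) pairwise distances
    per_scale = []
    for radius in RADII:
        mask = (dists <= radius).float().unsqueeze(-1)     # (B, M, N, 1) in-ball indicator
        neighbors = xyz.unsqueeze(1) * mask                 # zero out points outside the ball
        count = mask.sum(dim=2).clamp(min=1)                # (B, M, 1) in-ball point count
        per_scale.append(neighbors.sum(dim=2) / count)      # per-scale feature, (B, M, 3)
    return torch.cat(per_scale, dim=-1)                     # (B, M, 3 * len(RADII))
```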
For the ten votes, I also think it is not reliable. Actually, multi-vote testing has been implemented in both PointNet and PointNet++, and I tried it. In my experience, sometimes ten-vote testing boosts the accuracy a little, and sometimes it lowers it. It depends on the "robustness" of your trained model.
Hi all, we have uploaded the voting script. Please have a look and try it. Hope it is helpful.
Thanks for sharing. @Yochengliu Could you release your multi-scale classification code? I cannot find any configuration details about it.
@liruihui Could you please release the multi-scale classification code, if you implemented it yourself using the configurations provided in the arXiv version of RS-CNN?
I got a best validation accuracy of 92.1777%, but when I test the model with voting with random scaling as in the paper, the accuracy drops to 91.1%.
Could you share your voting code? @Yochengliu
Thanks.