ethanhe42 closed this issue 6 years ago
Hi Yihui, There may be some fluctuation in training performance from time to time. That's the number we got in our experiments; it may take a few trials to reach the same ones.
PS: to get the same evaluation results, it's recommended to use the evaluation script, which evaluates test shapes in all rotations instead of only a single default rotation per shape. The test set has only 2,468 shapes, so evaluating without rotations will be very unstable.
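A minimal sketch of this rotation-voting idea, assuming a generic `classify` function standing in for the trained model's forward pass (the rotation axis and angle spacing here are assumptions, not the repo's exact code):

```python
import numpy as np

def rotate_y(points, angle):
    """Rotate an (N, 3) point cloud around the up (Y) axis."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])
    return points @ R.T

def vote_predict(classify, points, num_votes=12):
    """Average class scores over `num_votes` evenly spaced rotations
    of the shape, then take the argmax as the voted prediction.

    `classify` is any function mapping an (N, 3) cloud to a
    (num_classes,) score vector -- a stand-in for the trained model.
    """
    scores = np.zeros_like(classify(points))
    for k in range(num_votes):
        angle = 2.0 * np.pi * k / num_votes
        scores += classify(rotate_y(points, angle))
    return int(np.argmax(scores / num_votes))
```

Averaging scores over rotated copies smooths out the per-rotation variance that makes a single-rotation evaluation unstable on a small test set.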
Closing due to no further conversation.
Hi @charlesq34, Thanks for the code! Great work! A couple of questions:
Hi @Tgaaly,
It has been a while since I checked the repo's issues; sorry for the delay. First of all, thanks for your interest!
There is some variance in the accuracies, so it's more stable to evaluate on several rotated versions of the point clouds. As I remember, the accuracy on the test set during training can fluctuate from around 88.6% to 89.1%. I think I used evaluate.py with num_votes=12 to get the final accuracy number.
The best model was trained with Adam. Both BN momentum and the learning rate have decays; I used 20 epochs for the step size for both of the decays.
Hope it helps.
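For reference, a tiny sketch of the staircase exponential decay both schedules presumably use (the base value and decay rate below are placeholders, not necessarily the repo's defaults):

```python
def staircase_decay(base_value, decay_rate, decay_step, samples_seen):
    """Staircase exponential decay: the value is multiplied by
    `decay_rate` once every `decay_step` samples seen, analogous to
    tf.train.exponential_decay with staircase=True."""
    return base_value * decay_rate ** (samples_seen // decay_step)

# e.g. a 0.001 learning rate decayed by 0.7 every 200,000 samples:
lr_start = staircase_decay(0.001, 0.7, 200000, 0)        # 0.001
lr_later = staircase_decay(0.001, 0.7, 200000, 400000)   # two decays applied
```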
@charlesq34 In train.py for the point_cls model, I found the decay_step seems out of range, but you mentioned to @Tgaaly:
> I used 20 epochs for the step size for both of the decays.
```python
parser.add_argument('--decay_step', type=int, default=200000, help='Decay step for lr decay [default: 200000]')
```
which one is correct? Thanks.
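One way both numbers can be right: in many training scripts, decay_step is measured in samples seen rather than epochs, so with the ModelNet40 training split (roughly 9,840 shapes, an assumed figure) 200,000 samples works out to about 20 epochs:

```python
# Assumption: decay_step in train.py counts samples seen, not epochs.
decay_step_samples = 200000
train_size = 9840  # assumed size of the ModelNet40 training split
epochs_per_decay = decay_step_samples / train_size
print(round(epochs_per_decay, 1))  # prints 20.3
```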
After training, I got 88.65% overall accuracy and 85.62% average class accuracy. Why is this not consistent with the 89.2% and 86.2% reported in the paper?