heumchri closed this issue 3 years ago
The pretrained model is trained with more epochs and data augmentation, such as instance-level rotation and scale. Hence, it performs slightly better.
Thanks for your answer. The provided code applies the data augmentations (rotation, flip, scale, and a noisy-translation transform) only to the whole point cloud, and trains for 40 epochs.
So are those the settings to reproduce the model that achieves 65.9 mIoU?
Yes
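For reference, the global (whole-cloud) augmentations discussed above can be sketched roughly as follows. This is a minimal illustration, not the repository's actual implementation; the parameter ranges (scale in [0.95, 1.05], translation noise with sigma 0.1) are assumptions:

```python
import numpy as np

def augment_global(points, rng=None):
    """Whole-cloud augmentation sketch: random z-rotation, random flip,
    uniform scale, and a small noisy translation. `points` is (N, 3+)."""
    rng = rng or np.random.default_rng()
    pts = points.copy()

    # Random rotation about the z (up) axis.
    theta = rng.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    pts[:, :3] = pts[:, :3] @ rot.T

    # Random flip along x and/or y.
    if rng.random() < 0.5:
        pts[:, 0] = -pts[:, 0]
    if rng.random() < 0.5:
        pts[:, 1] = -pts[:, 1]

    # Uniform scale of the whole cloud (range is an assumed example).
    pts[:, :3] *= rng.uniform(0.95, 1.05)

    # Noisy translation applied to every point (sigma is an assumed example).
    pts[:, :3] += rng.normal(0.0, 0.1, size=3)
    return pts
```

Instance-level augmentation, which the author says the released checkpoint additionally used, would apply the same kind of transforms per object instance rather than once per scan.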
Hello again,
I have the same question for the nuScenes dataset. The provided weights achieve 74.7% on the validation set, while the paper reports 76.1%.
This time the pretrained weights perform worse than what is reported in the paper. Was there also a difference in training parameters?
Thank you.
Maybe it is a wrong checkpoint trained with fewer epochs. I will check and fix it soon. Thanks very much.
Thanks, have you found the cause?
Excuse me, when I test the pretrained model on KITTI sequence 08, the results are all 0. Have you encountered this problem before?
@GaloisWang Same here, except for the 'vegetation' label (30.61%) :(
Is there any update on this issue? I'm getting the same results as @DonghoonPark12
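One common cause of near-zero per-class IoU on SemanticKITTI (with a single surviving class, as above) is comparing predictions in the remapped train-label space against the raw label ids from the `.label` files. A minimal sketch of the remapping step, assuming the standard SemanticKITTI label encoding (lower 16 bits semantic id, upper 16 bits instance id); `LEARNING_MAP` below is only a tiny illustrative excerpt, the full map lives in `semantic-kitti.yaml`:

```python
import numpy as np

# Illustrative excerpt of SemanticKITTI's learning_map (raw id -> train id,
# 0 = ignored). The complete mapping is defined in semantic-kitti.yaml.
LEARNING_MAP = {0: 0, 10: 1, 40: 9, 70: 15}

def load_and_remap(label_path):
    """Read a SemanticKITTI .label file and remap raw semantic ids to
    train ids so they live in the same space as the model's predictions."""
    raw = np.fromfile(label_path, dtype=np.uint32)
    sem = raw & 0xFFFF  # lower 16 bits hold the semantic label
    lut = np.zeros(1 << 16, dtype=np.uint8)
    for raw_id, train_id in LEARNING_MAP.items():
        lut[raw_id] = train_id
    return lut[sem]
```

If the ground truth is left unmapped, almost no prediction can match, which would produce exactly the all-zero scores reported here.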
Hi,
in your paper you report a mIoU of 65.9 on the SemanticKITTI validation set. However, when I run demo_folder.py on the validation set with the pretrained weights you provide, it achieves a validation mIoU of 66.911.
How is this difference explained?
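Small mIoU discrepancies like this often come down to how the metric is computed (ignored class, per-scan vs. dataset-wide accumulation). A minimal sketch of the usual dataset-wide computation, accumulating one confusion matrix over all points and excluding the ignore label from both the mask and the mean; the function name and signature here are illustrative, not the repository's API:

```python
import numpy as np

def mean_iou(pred, gt, num_classes, ignore=0):
    """Mean IoU from flat integer label arrays.
    Points whose ground truth equals `ignore` are excluded entirely."""
    mask = gt != ignore
    # Dataset-wide confusion matrix: rows = ground truth, cols = prediction.
    conf = np.bincount(
        num_classes * gt[mask] + pred[mask],
        minlength=num_classes ** 2,
    ).reshape(num_classes, num_classes)
    inter = np.diag(conf)
    union = conf.sum(axis=0) + conf.sum(axis=1) - inter
    valid = union > 0  # skip classes absent from both pred and gt
    return (inter[valid] / union[valid]).mean()
```

Averaging per-scan mIoU values instead of accumulating a single confusion matrix, or including the ignore class, can shift the final number by a few tenths of a point.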