V2AI / Det3D

World's first general-purpose 3D object detection codebase.
https://arxiv.org/abs/1908.09492
Apache License 2.0

PointPillar's performance is not so good compared with published results #35

Closed Son-Goku-gpu closed 4 years ago

Son-Goku-gpu commented 4 years ago

Hi @poodarchu Thanks for your great code! I trained PointPillars with the default config, and the performance is as follows, which is similar to the results posted by @s-ryosky in #18.

[image: default]

For the car category, the published moderate-level mAP on the KITTI 3D test set is 74.99, while my trained model reaches only 75.66 on the KITTI val set, so it doesn't seem able to exceed the published result. As far as I know, other researchers can achieve about 77 on val with PointPillars, so I wonder if there is any problem in the configs. Can you publish your results? Thanks a lot!

s-ryosky commented 4 years ago

PointPillars' config has been updated recently. Which one did you use for training?

In the old config the voxel_size was set to [0.2, 0.2, 4.0], but for the published results it seems to have been [0.16, 0.16, 4.0]. Please use the smaller voxel size.
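
For reference, a minimal sketch of the relevant voxelization block, assuming the config follows Det3D's usual `voxel_generator = dict(...)` layout; the point-cloud range and voxel limits shown here are illustrative placeholders, not values taken from the repository:

```python
# Hypothetical excerpt from a PointPillars KITTI config; only the
# voxel_size change is the point of this sketch.
voxel_generator = dict(
    # [x_min, y_min, z_min, x_max, y_max, z_max] -- illustrative KITTI-style range
    range=[0, -39.68, -3, 69.12, 39.68, 1],
    # Smaller pillars, as in the published PointPillars setting,
    # instead of the older [0.2, 0.2, 4.0] value.
    voxel_size=[0.16, 0.16, 4.0],
    max_points_in_voxel=100,
    max_voxel_num=12000,
)
```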

poodarchu commented 4 years ago

Currently all versions of PyTorch's SyncBN implementation have bugs. According to my experiments, APEX's implementation seems able to achieve the same performance as the single-GPU version on some models. So I think training PointPillars on a single GPU might give a higher mAP.
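
In case it helps, a minimal sketch of the two SyncBN conversion paths being compared; the model here is a placeholder, and this only illustrates the conversion calls, not Det3D's own training setup:

```python
import torch
from torch import nn

# Placeholder model standing in for an already-built detector.
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16))

# Option 1: PyTorch's built-in SyncBatchNorm (the implementation the
# comment above reports problems with at the time).
model_pt = nn.SyncBatchNorm.convert_sync_batchnorm(model)

# Option 2: NVIDIA Apex's SyncBN conversion, reported closer to
# single-GPU accuracy on some models.
try:
    from apex.parallel import convert_syncbn_model
    model_apex = convert_syncbn_model(model)
except ImportError:
    model_apex = None  # Apex not installed

# Option 3: train on a single GPU and keep ordinary BatchNorm layers.
```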

muzi2045 commented 4 years ago

When training CBGS, are these loss values normal? It looks like the x, y velocity losses take a major role in the loss computation.

2020-01-07 11:27:01,209 - INFO - Epoch [5/20][5850/21350]   lr: 0.00060, eta: 4 days, 14:32:00, time: 1.150, data_time: 0.035, transfer_time: 0.048, forward_time: 0.367, loss_parse_time: 0.000 memory: 4948, 
2020-01-07 11:27:01,209 - INFO - task : ['car'], loss: 3.2919, cls_pos_loss: 0.3375, cls_neg_loss: 0.0345, dir_loss_reduced: 0.3267, cls_loss_reduced: 0.4064, loc_loss_reduced: 2.8202, loc_loss_elem: ['0.0188', '0.0287', '0.2736', '0.0352', '0.0346', '0.0624', '0.7662', '1.3994', '0.2013'], num_pos: 19.6800, num_neg: 31705.3800
poodarchu commented 4 years ago

When training CBGS, are these loss values normal? It looks like the x, y velocity losses take a major role in the loss computation.

2020-01-07 11:27:01,209 - INFO - Epoch [5/20][5850/21350] lr: 0.00060, eta: 4 days, 14:32:00, time: 1.150, data_time: 0.035, transfer_time: 0.048, forward_time: 0.367, loss_parse_time: 0.000 memory: 4948, 
2020-01-07 11:27:01,209 - INFO - task : ['car'], loss: 3.2919, cls_pos_loss: 0.3375, cls_neg_loss: 0.0345, dir_loss_reduced: 0.3267, cls_loss_reduced: 0.4064, loc_loss_reduced: 2.8202, loc_loss_elem: ['0.0188', '0.0287', '0.2736', '0.0352', '0.0346', '0.0624', '0.7662', '1.3994', '0.2013'], num_pos: 19.6800, num_neg: 31705.3800

It seems to be correct.
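
For context, the nine loc_loss_elem entries in the log above are the per-target regression losses; assuming the usual (x, y, z, w, l, h, vx, vy, yaw) target order of the nuScenes setup, the two large values (0.7662, 1.3994) are indeed the x/y velocity terms. Below is a minimal sketch of how those terms could be de-emphasized; the loss type, the code_weights key, and the specific weights are assumptions for illustration, not this repository's confirmed config:

```python
# Hypothetical down-weighting of the velocity regression targets in a
# CBGS-style config. Key names and the (x, y, z, w, l, h, vx, vy, yaw)
# ordering are assumptions about this codebase.
loss_bbox = dict(
    type="WeightedSmoothL1Loss",
    sigma=3.0,
    #             x    y    z    w    l    h    vx   vy   yaw
    code_weights=[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.2, 0.2, 1.0],
    codewise=True,
    loss_weight=1.0,
)
```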

Son-Goku-gpu commented 4 years ago

Thanks! @poodarchu @s-ryosky I did train the model with a single GPU, but I didn't update the voxel_size and range in the config as mentioned by @s-ryosky. I am not sure how much of an improvement this modification could bring, but I'll try it and share the results later.

Son-Goku-gpu commented 4 years ago

@poodarchu BTW, will you implement PointRCNN and release the code later?

poodarchu commented 4 years ago

@poodarchu BTW, will you implement PointRCNN and release the code later?

Yes. It will be released soon.

Son-Goku-gpu commented 4 years ago

@poodarchu Thanks! Looking forward...

poodarchu commented 4 years ago

@poodarchu Thanks! Looking forward...

Are you interested in reproducing other models, such as VoteNet, based on Det3D?

Son-Goku-gpu commented 4 years ago

@poodarchu I am doing research on 3D detection and want to implement some ideas based on Det3D. Maybe I will create some models based on VoteNet later, but I am not sure yet. If I need it, I will implement it based on Det3D and open a pull request.

poodarchu commented 4 years ago

@poodarchu I am doing research on 3D detection and want to implement some ideas based on Det3D. Maybe I will create some models based on VoteNet later, but I am not sure yet. If I need it, I will implement it based on Det3D and open a pull request.

Thanks.

abhigoku10 commented 4 years ago

@Son-Goku-gpu @poodarchu I found this paper a few months back, but there is no implementation; it has all-round functionality: https://arxiv.org/abs/1904.07537. Please share your views on it. @Son-Goku-gpu can you share your email ID? I have a few queries I would like to ask through mail, if you have no issue.

Son-Goku-gpu commented 4 years ago

@poodarchu As mentioned by @s-ryosky, after changing the range and voxel_size in the config file, I can achieve the following mAP with PointPillars: [image: default_range_voxelsize_reset] The results seem more reasonable. Thank you!

Son-Goku-gpu commented 4 years ago

@abhigoku10 I remember it's a workshop paper, and the results are so poor that I didn't spend much time on it. You may contact me at my email: 1143883958@qq.com.

GYE19970220 commented 3 years ago

I followed the newest instructions to train PointPillars, then ran python test.py ../examples/point_pillars/configs/kitti_point_pillars_mghead_syncbn.py epoch_100.pth --show to test. The results are as follows; there is a big difference from the 3D results above, and the result is exactly the same as the val result after the 100th epoch. Is there a problem with my command? Any help would be appreciated. @poodarchu [image]