traveller59 / second.pytorch

SECOND for KITTI/NuScenes object detection
MIT License
1.72k stars · 722 forks

How's the performance on 16-beam lidar data? #155

Open lucasjinreal opened 5 years ago

lucasjinreal commented 5 years ago

Are there any demonstration videos or GIFs showing detection results on 16-beam data?

muzi2045 commented 5 years ago

(GIF attachment: Peek 2019-04-12 18-38)

lucasjinreal commented 5 years ago

@muzi2045 Seems a little slow.

muzi2045 commented 5 years ago

That's a recording problem; the actual inference time is between 30 ms and 50 ms.

lucasjinreal commented 5 years ago

Then why does the recording look so choppy?

jeannotes commented 5 years ago

@muzi2045 How do you prepare the 16-beam lidar data? Do you convert it to a KITTI-like format?
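
For reference, a "KITTI-like format" here just means the flat binary layout KITTI uses for velodyne scans: consecutive float32 (x, y, z, intensity) records in a .bin file, which is what second.pytorch's KITTI pipeline reads. A minimal conversion sketch (the `load_vlp16_frame` helper is hypothetical, standing in for however you read your sensor):

```python
import numpy as np

def save_kitti_style_bin(points_xyzi, out_path):
    """Save an (N, 4) array of x, y, z, intensity as a KITTI-style .bin scan."""
    pts = np.asarray(points_xyzi, dtype=np.float32).reshape(-1, 4)
    pts.tofile(out_path)  # flat float32 binary, 4 values per point

# Hypothetical usage:
# cloud = load_vlp16_frame(...)             # (N, 4) from your 16-beam driver
# save_kitti_style_bin(cloud, "000000.bin")
```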

wyjforwjy commented 5 years ago

Do you still use the pretrained model for the 16-beam lidar? @muzi2045

muzi2045 commented 5 years ago

No, that's not the pretrained model released by the author.

wyjforwjy commented 5 years ago

Thanks!!

wyjforwjy commented 5 years ago

Did you pretrain the model on KITTI or on nuScenes? @muzi2045

muzi2045 commented 5 years ago

I trained on both datasets; the nuScenes model performs better.

wyjforwjy commented 5 years ago

Thank you very much!

turboxin commented 5 years ago

@muzi2045 Hi Muzi, for pretraining your 16-beam model with KITTI and nuScenes, did you use the original 64/32-beam data, or downsample it to 16 beams?

Thank you in advance!

muzi2045 commented 5 years ago

Trained with 64/32-beam lidar data, inference with 16-beam lidar data; no need to downsample. @turboxin

turboxin commented 5 years ago

@muzi2045 thank you very much!

turboxin commented 5 years ago

@muzi2045 Hello! You mentioned that the inference time is between 30 ms and 50 ms; may I ask what GPU you are using? Could you also provide some quantitative performance numbers for your results on 16-beam lidar? Thanks a lot!

muzi2045 commented 5 years ago

With a 1050 Ti, inference time is between 40 ms and 60 ms (without TensorRT speed-up); with a 1080 Ti, inference time is between 15 ms and 30 ms (without TensorRT).
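
Timings like these are only meaningful if the GPU is synchronized before reading the clock. A minimal timing sketch, where `net` and `example` are placeholders for your loaded model and preprocessed input:

```python
import time
import torch

def time_inference(net, example, warmup=10, iters=100):
    """Rough per-frame latency in milliseconds for a GPU forward pass."""
    with torch.no_grad():
        for _ in range(warmup):       # warm up CUDA kernels / cuDNN autotune
            net(example)
        torch.cuda.synchronize()      # drain queued GPU work before timing
        start = time.time()
        for _ in range(iters):
            net(example)
        torch.cuda.synchronize()      # wait for the last forward pass
    return (time.time() - start) / iters * 1000.0
```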

mmxiami commented 5 years ago

> Trained with 64/32-beam lidar data, inference with 16-beam lidar data; no need to downsample. @turboxin

How do I run the demo on my own dataset (32-beam data)? Could you help me? Thank you in advance!

khanln commented 5 years ago

@muzi2045 Hi, could you share your pretrained model? I have trained on the KITTI dataset and run inference on a Velodyne VLP-16, but the results don't seem good. Much appreciated.

dhellfeld commented 5 years ago

@muzi2045 Hi, could you show an example of how you converted the model to TensorRT?

muzi2045 commented 5 years ago

There are some problems in the PyTorch -> ONNX -> TensorRT path; I couldn't successfully convert this model to TensorRT to speed up inference. But you can refer to this repo, where the author seems to have converted it successfully: nutonomy_pointpillars. Good luck! @dhellfeld
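
For anyone retrying this path: the usual starting point is `torch.onnx.export` on the network, or on a dense sub-module such as the RPN head, since custom ops like SECOND's sparse convolutions generally have no standard ONNX mapping. A generic sketch, with `net`, the file name, and the input shape as placeholder assumptions:

```python
import torch

# `net` and the dummy input shape are placeholders for the dense sub-module
# you actually want to export (e.g. the RPN operating on BEV feature maps).
dummy_input = torch.randn(1, 64, 496, 432, device="cuda")

torch.onnx.export(
    net,                               # model (or sub-module) to trace
    dummy_input,                       # example input used for tracing
    "second_rpn.onnx",                 # output file (placeholder name)
    opset_version=11,
    input_names=["spatial_features"],
    output_names=["box_preds", "cls_preds"],
)
```

The resulting .onnx file can then be fed to TensorRT's ONNX parser (e.g. `trtexec --onnx=second_rpn.onnx`) to see whether all ops are supported.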

ryontang commented 3 years ago

@muzi2045 Hi, thanks for sharing. I am new to this area; could you give some advice on how to use SECOND to do the inference and visualization shown in your GIF?
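
For a quick point-cloud view similar to the GIF (without the predicted boxes), an Open3D snippet like the one below is one simple option; drawing boxes on top additionally means converting each detection into a line set. The file name is a placeholder for a KITTI-style .bin scan.

```python
import numpy as np
import open3d as o3d

# Load a KITTI-style scan: flat float32 (x, y, z, intensity) records.
points = np.fromfile("000000.bin", dtype=np.float32).reshape(-1, 4)

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points[:, :3])

# Optional: shade by intensity so the scan is easier to read.
intensity = points[:, 3] / max(float(points[:, 3].max()), 1e-6)
pcd.colors = o3d.utility.Vector3dVector(np.stack([intensity] * 3, axis=1))

o3d.visualization.draw_geometries([pcd])
```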