traveller59 / second.pytorch

SECOND for KITTI/NuScenes object detection
MIT License

0 mAP for Nuscenes eval? #266

Open deeptir18 opened 5 years ago

deeptir18 commented 5 years ago

Hi, I'm trying to train a model from scratch on the nuScenes data using nuscenes/all.pp.lowa.config (nothing changed). I may not have trained the model for long enough yet (only ~13,000 steps so far), but I'm a little suspicious of the results, because evaluation repeatedly reports 0 average precision at all threshold levels for all classes (it doesn't find any true-positive matches for any of the predictions).

The model does seem to identify boxes reasonably well when I manually inspect the inference output on one of the files -- do I just need to train for more steps before I see a non-zero average precision for any of the classes?

Also, which config file should be used with the provided nuScenes model checkpoint? I tried all the available configs, but none of them seem to work. I'd like to verify, if possible, that the evaluation works with a pretrained model.
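
For reference, the evaluation I'm running looks roughly like the command below (the paths are placeholders for my local setup; if I remember right, train.py also accepts a ckpt_path argument for pointing at a specific checkpoint file):

```bash
# run from the second/ directory of the repo
python ./pytorch/train.py evaluate \
    --config_path=./configs/nuscenes/all.pp.lowa.config \
    --model_dir=/path/to/model_dir
```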

Thank you!

qchenclaire commented 5 years ago

My mAP was also 0 when lr_max was 1.5e-4, but when I set it to 3e-3 it worked. Unfortunately, I still couldn't reproduce the author's PointPillars result, and my SECOND results are also far off.
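
For anyone looking for where to change it: the learning rate lives in the optimizer block of these protobuf-text configs, roughly like the sketch below (the surrounding values are just what my config had, so treat them as illustrative):

```
optimizer: {
  adam_optimizer: {
    learning_rate: {
      one_cycle: {
        lr_max: 3e-3        # raising this from 1.5e-4 is the change described above
        moms: [0.95, 0.85]
        div_factor: 10.0
        pct_start: 0.4
      }
    }
    weight_decay: 0.01
  }
  fixed_weight_decay: true
  use_moving_average: false
}
```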

HenryJunW commented 4 years ago

@qchenclaire How long does it take you to train the model in total? With the largea config I just started training from scratch, and the first 2 epochs took ~4 hours.

tjucwb commented 4 years ago

@HenryJunW It shouldn't take that long. Which config are you using? My results are far from the results on the leaderboard.

tjucwb commented 4 years ago

@HenryJunW Have you gotten the results yet? My AP@2.0 on the val set is only 49%, far from what the author reports.

vatsal-shah commented 4 years ago

> @HenryJunW Have you gotten the results yet? My AP@2.0 on the val set is only 49%, far from what the author reports.

@tjucwb Could you explain what the numbers AP@0.5, 1.0, 2.0, 4.0 represent?

triasamo1 commented 3 years ago

> @HenryJunW Have you gotten the results yet? My AP@2.0 on the val set is only 49%, far from what the author reports.

> @tjucwb Could you explain what the numbers AP@0.5, 1.0, 2.0, 4.0 represent?

This site really helps: https://blog.zenggyu.com/en/post/2018-12-16/an-introduction-to-evaluation-metrics-for-object-detection/
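
In short: for the nuScenes detection metric those numbers are not IoU thresholds but 2D center-distance thresholds in meters on the ground plane; a prediction counts as a true positive if its box center lies within that distance of an unmatched ground-truth center of the same class, and the reported mAP averages AP over the four thresholds. A minimal sketch of that matching rule (function and variable names are my own, not from the repo):

```python
import numpy as np

# nuScenes detection AP matches predictions to ground truth by BEV center
# distance rather than IoU. These are the four official thresholds in meters.
DIST_THRESHOLDS = [0.5, 1.0, 2.0, 4.0]

def count_true_positives(pred_centers, gt_centers, threshold):
    """Greedy matching by (x, y) center distance for a single class.

    pred_centers: (N, 2) array, assumed sorted by descending detection score.
    gt_centers:   (M, 2) array.
    """
    matched = np.zeros(len(gt_centers), dtype=bool)
    tp = 0
    for p in pred_centers:
        if len(gt_centers) == 0:
            break
        dists = np.linalg.norm(gt_centers - p, axis=1)
        dists[matched] = np.inf  # each ground-truth box can only be matched once
        j = int(np.argmin(dists))
        if dists[j] < threshold:
            matched[j] = True
            tp += 1
    return tp

# A prediction 1.5 m from the ground-truth center is a miss at AP@0.5 and
# AP@1.0 but a true positive at AP@2.0 and AP@4.0.
pred = np.array([[10.0, 5.0]])
gt = np.array([[11.5, 5.0]])
for t in DIST_THRESHOLDS:
    print(t, count_true_positives(pred, gt, t))
```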

HenryJunW commented 3 years ago

@tjucwb I am using https://github.com/traveller59/second.pytorch/blob/master/second/configs/nuscenes/all.pp.mhead.config. I forget the specific numbers for AP@2.0, but the mAP is 29.5. You can refer to our paper: https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123550409.pdf.