zhulf0804 / PointPillars

A Simple PointPillars PyTorch Implementation for 3D LiDAR(KITTI) Detection.

mAP calculation in evaluate.py #6

Closed · hongduc2307 closed this issue 2 years ago

hongduc2307 commented 2 years ago

I can confirm that this repo is really easy to follow, and PointPillars runs as expected.

I wonder why "11" is hard-coded here. I assumed len(score_thresholds) would always be 41, but that doesn't seem to hold. Can you please check again? https://github.com/zhulf0804/PointPillars/blob/main/evaluate.py#L241

zhulf0804 commented 2 years ago

Hello @hongduc2307,

AP is computed at 11 recall positions on the validation set here.

Because precisions are sampled at every 4th threshold, as in the following lines, "11" is hard-coded here. https://github.com/zhulf0804/PointPillars/blob/a0e8af7204168a1823d8679cf2379576b4795a27/evaluate.py#L238-L241
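
In other words, the 41 (precision, recall) samples are thinned to 11 by taking every 4th one. A minimal sketch of that averaging (paraphrasing the linked lines, not a copy of the repo's code):

```python
import numpy as np

# One precision value per score threshold, nominally 41 entries
# covering recall positions 0.0, 1/40, 2/40, ..., 1.0.
precisions = np.random.rand(41)

# Take every 4th sample -> recall positions 0.0, 0.1, ..., 1.0
# (11 points), then average: this is where the hard-coded 11 comes from.
mAP = precisions[::4].sum() / 11 * 100
print(mAP)
```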

For evaluation, we stay consistent with mmdet3d, as implemented in the following lines.

https://github.com/open-mmlab/mmdetection3d/blob/eb5a5a2d166d2d60dfbdffe63b925cb37a6541e3/mmdet3d/core/evaluation/kitti_utils/eval.py#L573-L577

hongduc2307 commented 2 years ago

@zhulf0804 Thank you for the quick reply.

Yes, I see your point, but when I print len(score_thresholds) it is not always 41, so the loop does not always run 11 times.

I see that score_thresholds comes from the following mmdet3d function. Its third argument, num_sample_pts=41, does not guarantee that len(score_thresholds) is 41.

https://github.com/open-mmlab/mmdetection3d/blob/eb5a5a2d166d2d60dfbdffe63b925cb37a6541e3/mmdet3d/core/evaluation/kitti_utils/eval.py#L10
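
For reference, a simplified sketch of that function's logic (condensed from the linked code; details may differ slightly): thresholds are emitted only while true-positive scores can still push recall toward the next sample point, so when the maximum achievable recall is below 1.0 the loop runs out of scores early and fewer than 41 thresholds come back.

```python
import numpy as np

def get_thresholds(scores, num_gt, num_sample_pts=41):
    # Confidence scores of the matched (true-positive) detections.
    scores = np.sort(scores)[::-1]
    current_recall = 0.0
    thresholds = []
    for i, score in enumerate(scores):
        # Recall reached if the score threshold is set at this score.
        l_recall = (i + 1) / num_gt
        r_recall = (i + 2) / num_gt if i < len(scores) - 1 else l_recall
        # Skip scores whose recall is not the closest to the current sample point.
        if (r_recall - current_recall) < (current_recall - l_recall) \
                and i < len(scores) - 1:
            continue
        thresholds.append(score)
        current_recall += 1 / (num_sample_pts - 1.0)
    # The loop ends when the true-positive scores are exhausted, so if the
    # best achievable recall is below 1.0, len(thresholds) < num_sample_pts.
    return thresholds

# Example: 30 matched detections out of 50 ground truths -> max recall 0.6.
print(len(get_thresholds(np.random.rand(30), num_gt=50)))  # 25, not 41
```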

zhulf0804 commented 2 years ago

Yes, you are right.

The P-R curve plots a (precision, recall) point at each specific score threshold. However, recall cannot reach 100% even with a very small score threshold. As in the figure below, the highest recall is only slightly above 0.8, so the counts are not always 41 and 11.

[figure: PR-Curve]
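
To put numbers on it (hypothetical values): with num_sample_pts=41 the recall axis is sampled in steps of 1/40 = 0.025, so a maximum achievable recall of about 0.8 yields roughly 0.8 / 0.025 + 1 = 33 thresholds, and taking every 4th of those gives only 9 loop iterations instead of 11.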

hongduc2307 commented 2 years ago

@zhulf0804 Yes, you're right.

Now it's clear that the hard-coded "11" should depend on len(score_thresholds), so the resulting AP will change a bit. Thank you so much for the kind answers. This repo is really great work.
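
For completeness, a minimal sketch of the change discussed above (hypothetical, not a committed fix): divide by the number of positions actually sampled rather than the fixed 11.

```python
import numpy as np

precisions = np.random.rand(33)  # e.g. only 33 thresholds were produced
sampled = precisions[::4]        # 9 samples here instead of 11
mAP = sampled.sum() / len(sampled) * 100
print(mAP)
```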