open-mmlab / OpenPCDet

OpenPCDet Toolbox for LiDAR-based 3D Object Detection.
Apache License 2.0

Where can I find the meaning of the evaluation metrics? #432

Closed · zhufeng888 closed this 2 years ago

zhufeng888 commented 3 years ago

How should I understand the following four evaluation indicators? ① Car AP@0.70, 0.70, 0.70 ② Car AP_R40@0.70, 0.70, 0.70 ③ Car AP@0.70, 0.50, 0.50 ④ Car AP_R40@0.70, 0.50, 0.50. The first one (Car AP@0.70, 0.70, 0.70) and the third one (Car AP@0.70, 0.50, 0.50) both start with the same number 0.70, yet the 3d AP of the former is 89.3476 while that of the latter is 96.2342.

```
2021-01-12 23:10:45,241 INFO * EPOCH 8369 EVALUATION ***
2021-01-12 23:17:41,700 INFO * Performance of EPOCH 8369 ***
2021-01-12 23:17:41,713 INFO Generate label finished(sec_per_example: 0.1105 second).
2021-01-12 23:17:41,713 INFO recall_roi_0.3: 0.968447
2021-01-12 23:17:41,713 INFO recall_rcnn_0.3: 0.968561
2021-01-12 23:17:41,713 INFO recall_roi_0.5: 0.928466
2021-01-12 23:17:41,713 INFO recall_rcnn_0.5: 0.934389
2021-01-12 23:17:41,713 INFO recall_roi_0.7: 0.717394
2021-01-12 23:17:41,713 INFO recall_rcnn_0.7: 0.759483
2021-01-12 23:17:41,716 INFO Average predicted number of objects(3769 samples): 9.230
2021-01-12 23:18:04,213 INFO Car AP@0.70, 0.70, 0.70:
bbox AP:96.2470, 89.4992, 89.2430
bev AP:90.0894, 87.9004, 87.4072
3d AP:89.3476, 83.6901, 78.7028
aos AP:96.22, 89.39, 89.07
Car AP_R40@0.70, 0.70, 0.70:
bbox AP:98.2662, 94.4210, 92.2765
bev AP:93.0239, 90.3255, 88.5319
3d AP:92.1047, 84.3605, 82.4830
aos AP:98.25, 94.26, 92.07
Car AP@0.70, 0.50, 0.50:
bbox AP:96.2470, 89.4992, 89.2430
bev AP:96.2810, 89.4982, 89.2886
3d AP:96.2342, 89.4774, 89.2535
aos AP:96.22, 89.39, 89.07
Car AP_R40@0.70, 0.50, 0.50:
bbox AP:98.2662, 94.4210, 92.2765
bev AP:98.2607, 94.5896, 94.4319
3d AP:98.2422, 94.5277, 94.3272
aos AP:98.25, 94.26, 92.07
Pedestrian AP@0.50, 0.50, 0.50:
bbox AP:73.1477, 68.0799, 64.3542
bev AP:65.1821, 59.4169, 54.5101
3d AP:63.1230, 54.8428, 51.7816
aos AP:67.84, 62.49, 58.73
Pedestrian AP_R40@0.50, 0.50, 0.50:
bbox AP:73.6837, 68.2715, 64.3622
bev AP:65.9365, 58.5166, 54.1258
3d AP:62.7110, 54.4902, 49.8798
aos AP:67.82, 62.17, 58.07
Pedestrian AP@0.50, 0.25, 0.25:
bbox AP:73.1477, 68.0799, 64.3542
bev AP:76.2555, 71.8445, 69.4931
3d AP:76.2398, 71.8001, 69.4345
aos AP:67.84, 62.49, 58.73
Pedestrian AP_R40@0.50, 0.25, 0.25:
bbox AP:73.6837, 68.2715, 64.3622
bev AP:78.2616, 73.1740, 69.9717
3d AP:78.2458, 73.0349, 69.8725
aos AP:67.82, 62.17, 58.07
Cyclist AP@0.50, 0.50, 0.50:
bbox AP:96.1222, 81.3613, 76.4936
bev AP:88.5292, 73.3251, 70.3690
3d AP:86.0637, 69.4789, 64.5046
aos AP:95.98, 81.07, 76.17
Cyclist AP_R40@0.50, 0.50, 0.50:
bbox AP:97.1514, 82.4180, 78.2196
bev AP:93.4584, 74.5322, 70.1025
3d AP:89.1011, 70.3809, 66.0168
aos AP:97.04, 82.12, 77.88
Cyclist AP@0.50, 0.25, 0.25:
bbox AP:96.1222, 81.3613, 76.4936
bev AP:95.0958, 78.2760, 73.3191
3d AP:95.0958, 78.2670, 73.3121
aos AP:95.98, 81.07, 76.17
Cyclist AP_R40@0.50, 0.25, 0.25:
bbox AP:97.1514, 82.4180, 78.2196
bev AP:96.2402, 79.1335, 75.8222
3d AP:96.2402, 79.1278, 75.7990
aos AP:97.04, 82.12, 77.88

2021-01-12 23:18:04,217 INFO Result is save to /home/hby/hdd/chenyanbin/OpenPCDet/output/kitti_models/pv_rcnn/default/eval/epoch_8369/val/default
2021-01-12 23:18:04,217 INFO ****Evaluation done.*****
```

triasamo1 commented 3 years ago

Hey man,

andraspalffy commented 3 years ago

Hello,

The eval printout could indeed be clearer. Here is my take on it. What determines the number of result lines?

You sent 60 lines. 12 of those are "titles". The remaining 48 are:

48 lines = 3 classes (Car, Pedestrian, Cyclist) × 4 metrics (bbox, bev, 3d, aos) × 2 calculation modes (AP or AP_R40) × 2 threshold sets

And each line has 3 columns for the 3 difficulty levels: Easy, Moderate, Hard.
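As a quick sanity check, the breakdown can be reproduced in a few lines of Python (illustrative only, not OpenPCDet code):

```python
# Every (threshold set, class, mode, metric) combination produces one
# result line; the three difficulties are columns within each line.
classes = ["Car", "Pedestrian", "Cyclist"]
metrics = ["bbox", "bev", "3d", "aos"]
modes = ["AP", "AP_R40"]              # 11-point vs 40-point recall sampling
threshold_sets = ["strict", "loose"]  # the two IoU threshold sets

lines = [(t, c, m, met) for t in threshold_sets
         for c in classes for m in modes for met in metrics]
print(len(lines))  # 48; each line holds Easy / Moderate / Hard columns
```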

The eval output is misleading here: the three numbers in a "title line" are the IoU thresholds used for the bbox, bev, and 3d metrics respectively (aos uses the bbox matching), not one threshold per difficulty. For example, take the "Car AP@0.70, 0.50, 0.50" block:

```
bbox AP:96.2470, 89.4992, 89.2430
bev AP:96.2810, 89.4982, 89.2886
3d AP:96.2342, 89.4774, 89.2535
aos AP:96.22, 89.39, 89.07
```

Here the 0.70 applies to the bbox line, and the two 0.50 values apply to the bev and 3d lines. That is why the 3d AP jumps from 89.3476 (at IoU 0.7) to 96.2342 (at IoU 0.5). You can check the exact thresholds in `kitti_object_eval_python/eval.py` (linked below). This is very confusing if you are not aware of it. See issue #307 too.
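A minimal sketch of the printing logic (simplified names, not the actual code from `kitti_object_eval_python/eval.py`), using the two Car threshold sets visible in the log above:

```python
# The title line shows one IoU threshold per metric (bbox, bev, 3d),
# not one per difficulty.
min_overlaps_car = [
    (0.70, 0.70, 0.70),  # threshold set 1: (bbox, bev, 3d), from the log above
    (0.70, 0.50, 0.50),  # threshold set 2: (bbox, bev, 3d)
]
for bbox_thr, bev_thr, d3_thr in min_overlaps_car:
    print("Car AP@{:.2f}, {:.2f}, {:.2f}:".format(bbox_thr, bev_thr, d3_thr))
    # ...followed by the bbox / bev / 3d / aos AP lines,
    # each with Easy / Moderate / Hard columns.
```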

Hope this helps!

usergxx commented 3 years ago

@paland3 I think I understand your reply, but I don't think it resolves the question. In the eval result, "Car AP@0.70, 0.70, 0.70" gives 3d AP 89.3476 while "Car AP@0.70, 0.50, 0.50" gives 3d AP 96.2342. The difficulty is Easy, the class is Car, the overlap threshold is 0.7, and the AP is computed over 11 recall points, so why are they different?

andraspalffy commented 3 years ago

I think I did answer this. Although both title lines start with 0.70, the 3d AP in the second block is not computed at a 0.7 threshold. See the thresholds here: https://github.com/open-mmlab/OpenPCDet/blob/0642cf06d0fd84f50cc4c6c01ea28edbc72ea810/pcdet/datasets/kitti/kitti_object_eval_python/eval.py#L639

The "Car AP@0.70, 0.70, 0.70:" and "Car AP@0.70, 0.50, 0.50" lines do not apply to the 3d AP lines. It is confusing indeed.

usergxx commented 3 years ago

@paland3 Thank you, I understand now.

curiousboy20 commented 2 years ago

@paland3 @usergxx Sorry, I don't understand this. Does "Car AP@0.70, 0.50, 0.50" mean that the IoU threshold is 0.7 along the x-axis, 0.5 along the y-axis, and 0.5 along the z-axis?

andraspalffy commented 2 years ago

No. IoU is calculated either in the camera image (2D bounding boxes), in BEV (2D boxes, top view), or in 3D (rectangular cuboids). The threshold is applied to the resulting IoU value, so there is no need for a separate threshold per axis. Please read my answer above carefully; it explains the factors that influence the number of results.
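To make the point concrete, here is an illustrative 2D IoU helper (not from OpenPCDet) showing that IoU collapses a pair of boxes into a single scalar, which is then compared against one threshold:

```python
def iou_2d(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # x-overlap
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # y-overlap
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

print(iou_2d((0, 0, 2, 2), (1, 0, 3, 2)))  # 0.333..., one scalar per pair
```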

github-actions[bot] commented 2 years ago

This issue is stale because it has been open for 30 days with no activity.

github-actions[bot] commented 2 years ago

This issue was closed because it has been inactive for 14 days since being marked as stale.

VsionQing commented 2 years ago

What does 'Average predicted number of objects' mean?

andraspalffy commented 2 years ago

I think it means the average number of predicted 3D bounding boxes per frame.
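A hedged sketch of how such a number would be computed; `detections_per_frame` is a hypothetical list of per-frame box counts, not an OpenPCDet variable:

```python
# One entry per evaluated frame (the log above used the 3769 KITTI val frames).
detections_per_frame = [9, 12, 7, 10]  # toy values
avg = sum(detections_per_frame) / len(detections_per_frame)
print(f"Average predicted number of objects({len(detections_per_frame)} samples): {avg:.3f}")
```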