repa1022 opened this issue 5 years ago
Car@0.70, 0.70, 0.70 means: evaluate car performance on easy, moderate and hard, using 0.7 (easy), 0.7 (moderate), 0.7 (hard) as the overlap thresholds. bbox means the overlap (intersection over union) is computed on 2D image boxes, bev means bird's-eye-view overlap, 3d means 3D overlap. The aos (average orientation similarity) results you can ignore.
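For anyone reading this later, here is a minimal sketch (not the repo's actual evaluation code) of how the three overlap types can be computed, assuming axis-aligned image boxes for bbox and rotated ground-plane boxes for bev/3d; the shapely usage is only for illustration.

```python
# Minimal sketch (not the repo's eval code) of the three overlap types,
# assuming axis-aligned image boxes for "bbox" and rotated ground-plane
# boxes (4 corners in order) for "bev" / "3d".
from shapely.geometry import Polygon

def bbox_iou(a, b):
    """Image-plane IoU for axis-aligned boxes (x1, y1, x2, y2)."""
    xx1, yy1 = max(a[0], b[0]), max(a[1], b[1])
    xx2, yy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, xx2 - xx1) * max(0.0, yy2 - yy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def bev_iou(corners_a, corners_b):
    """Bird's-eye-view IoU of two rotated boxes given as 4x2 corner lists."""
    pa, pb = Polygon(corners_a), Polygon(corners_b)
    inter = pa.intersection(pb).area
    return inter / (pa.area + pb.area - inter)

def iou_3d(corners_a, vmin_a, h_a, corners_b, vmin_b, h_b):
    """3D IoU: BEV intersection area times vertical overlap over union volume.

    vmin/h describe the box extent along a generic vertical axis.
    """
    pa, pb = Polygon(corners_a), Polygon(corners_b)
    inter_area = pa.intersection(pb).area
    v_overlap = max(0.0, min(vmin_a + h_a, vmin_b + h_b) - max(vmin_a, vmin_b))
    inter_vol = inter_area * v_overlap
    return inter_vol / (pa.area * h_a + pb.area * h_b - inter_vol)
```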
Ok, thank you! So Car@0.70, 0.50, 0.50 means 0.70 (easy), 0.50 (moderate) and 0.50 (hard) as IoU thresholds? I am a little bit confused, because the mAPs of Car at 0.70 (easy) in the two evaluation blocks (Car@0.70,0.50,0.50 and Car@0.70,0.70,0.70) aren't the same.
Hello @traveller59 and @repa1022, I have the same problem. The evaluation results under Car@0.7,0.7,0.7 and Car@0.7,0.5,0.5 are totally different. I understand that the 0.5 results differ from the 0.7 results, but why is even the 0.7 (easy) value different? Many thanks.
Have you figured out why they are different? I have the same doubt; could you explain how to interpret this? Thank you.
Hi @traveller59 and @repa1022, I know it's an old issue, but I was using this today and I got some results, but I'm not sure how to interpret them.
My first question: what exactly do you mean by easy, moderate and hard? Is this supposed to describe the size of the object, or some kind of occlusion (or lack thereof), and how exactly is it defined?
I also got two sets of results, namely the official evaluation and the COCO evaluation. What is the difference between them?
Hello @mnik17, easy, moderate and hard are the difficulty levels defined by the KITTI object detection benchmark; you can check their definitions on the KITTI website and in the devkit README file. Basically, they are based on the object's height in the image plane, its occlusion level and its truncation level. The official evaluation results are the ones produced by the KITTI evaluation script; COCO is another object detection benchmark, and its evaluation metrics are different from KITTI's. In my opinion, if you work mainly on KITTI, you can ignore the COCO results.
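For reference, here is a rough sketch of how a difficulty level could be assigned per ground-truth object, assuming the thresholds listed in the KITTI devkit README (minimum 2D box height in pixels, maximum occlusion level, maximum truncation); this is illustrative, not the official devkit code.

```python
# Thresholds as listed in the KITTI devkit README (assumption: unchanged).
MIN_HEIGHT = [40, 25, 25]            # easy, moderate, hard (pixels)
MAX_OCCLUSION = [0, 1, 2]            # 0 = fully visible, 1 = partly, 2 = largely occluded
MAX_TRUNCATION = [0.15, 0.30, 0.50]  # fraction of the object outside the image

def difficulty(bbox_height_px, occlusion, truncation):
    """Return 0/1/2 for easy/moderate/hard, or -1 if the object is ignored."""
    for level in range(3):
        if (bbox_height_px >= MIN_HEIGHT[level]
                and occlusion <= MAX_OCCLUSION[level]
                and truncation <= MAX_TRUNCATION[level]):
            return level
    return -1  # does not even meet the "hard" criteria -> ignored in evaluation

print(difficulty(45, 0, 0.1))   # 0 -> easy
print(difficulty(30, 1, 0.2))   # 1 -> moderate
print(difficulty(26, 2, 0.4))   # 2 -> hard
```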
Hello @pangsu0613, thank you for the fast reply. So basically, if I have a 0.7 threshold for an easy object, no detection with less overlap than that will be counted for this type of object?
@mnik17, 0.7 is the IoU threshold for the car class in the KITTI dataset (0.5 is used for pedestrian and cyclist). If the IoU between a ground truth and a predicted box is larger than 0.7, the prediction is treated as a true positive; if it is smaller than 0.7, it is treated as a false positive.
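To make that concrete, here is a toy sketch of the decision rule; the overlap numbers are made up for illustration and this is not the repo's actual matching code.

```python
# Each detection (sorted by score) is a true positive if its best overlap
# with a not-yet-matched ground-truth car is >= the class threshold,
# otherwise a false positive.
import numpy as np

# rows = detections (sorted by confidence), cols = ground-truth cars
ious = np.array([[0.85, 0.10],
                 [0.05, 0.62],
                 [0.72, 0.03]])

threshold = 0.7  # car class; 0.5 for pedestrian / cyclist
matched_gt = set()
for det_idx, overlaps in enumerate(ious):
    gt_idx = int(np.argmax(overlaps))
    if overlaps[gt_idx] >= threshold and gt_idx not in matched_gt:
        matched_gt.add(gt_idx)
        print(f"detection {det_idx}: TP (matched gt {gt_idx})")
    else:
        print(f"detection {det_idx}: FP")
```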
Ok, thank you very much @pangsu0613
Hello @pangsu0613, I have the same problem. The evaluation results under Car@0.7,0.7,0.7 and Car@0.7,0.5,0.5 are totally different. I understand that the 0.5 results differ from the 0.7 results, but why is even the 0.7 (easy) value different? Many thanks. Do you understand it now? Can you help me?
Hi @pangsu0613, my understanding is that the first 0.7 in Car@0.7,0.5,0.5 is a typo in the printout; it should be all 0.5. Basically, Car@0.7,0.7,0.7 shows the easy, moderate and hard results with the IoU threshold at 0.7, and Car@0.5,0.5,0.5 shows the easy, moderate and hard results with the IoU threshold at 0.5. Because 0.7 is larger than 0.5, it is a stricter (harsher) criterion, so the numbers under 0.7 are smaller than the numbers under 0.5 (of course, the comparison must be made within the same difficulty level: 0.7 easy vs 0.5 easy, 0.7 moderate vs 0.5 moderate, etc.).
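A tiny numerical illustration of why every number under 0.7 is at most the corresponding number under 0.5: raising the IoU threshold can only turn true positives into false positives, never the reverse. The overlap values below are made up.

```python
# Best IoU of each detection with a ground-truth car (hypothetical values).
overlaps = [0.82, 0.65, 0.55, 0.40]

for thr in (0.5, 0.7):
    tp = sum(o >= thr for o in overlaps)
    fp = len(overlaps) - tp
    print(f"threshold {thr}: TP={tp}, FP={fp}, precision={tp / len(overlaps):.2f}")
# threshold 0.5: TP=3, FP=1, precision=0.75
# threshold 0.7: TP=1, FP=3, precision=0.25
```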
Hi
Can someone help me understand the different evaluations? There is one "Car@0.70, 0.70, 0.70", one "Car@0.70, 0.50, 0.50" and also a "Car coco...". And which of the entries (bbox, bev, 3d, aos) describes the detection precision? I cannot find anything about that, and the values on the KITTI benchmark website are different.
Thank you!