Closed alessandro-montanari closed 6 years ago
Thank you! I had not started any implementation, but this could be the push I needed to finish it. I may have some time this week to work on it; I will close this issue once it is done.
Cool, thanks!
I just started using your library to evaluate a CNN for object detection in Keras. It works fine, but I noticed that it does not handle the case where the ground truth contains one or more bounding boxes while the model predicts none. In that case pred_bb
is an array of shape (0, 4) and the code crashes in the intersect
function with this error:
File "predict.py", line 172, in <module>
_main_(args)
File "predict.py", line 166, in _main_
mAP.evaluate(pred_bb, pred_classes, pred_conf, gt_bb, gt_classes)
File "../mean_average_precision/mean_average_precision/detection_map.py", line 42, in evaluate
self.evaluate_(accumulators, pred_bb, pred_classes, pred_conf, gt_bb, gt_classes, r, self.overlap_threshold)
File "../mean_average_precision/mean_average_precision/detection_map.py", line 49, in evaluate_
IoU = jaccard(pred_bb, gt_bb)
File "../mean_average_precision/mean_average_precision/bbox_utils.py", line 46, in jaccard
inter = intersect(box_a, box_b)
File "../mean_average_precision/mean_average_precision/bbox_utils.py", line 25, in intersect
inter = np.clip(diff_xy, a_min=0, a_max=np.max(diff_xy))
File "/Users/alessandro/Envs/tensor+kerasPy3/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 2272, in amax
out=out, **kwargs)
File "/Users/alessandro/Envs/tensor+kerasPy3/lib/python3.6/site-packages/numpy/core/_methods.py", line 26, in _amax
return umr_maximum(a, axis, None, out, keepdims)
ValueError: zero-size array to reduction operation maximum which has no identity
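The failure is a NumPy corner case rather than anything specific to the network: calling np.max on a zero-size array has no identity element. A minimal sketch reproducing the crash (diff_xy here just stands in for the empty intersection diffs computed inside intersect):

```python
import numpy as np

# Minimal reproduction: np.max (used to build the a_max bound for np.clip)
# raises on an empty array, matching the traceback above.
diff_xy = np.zeros((0, 4))  # no predicted boxes -> zero-size array
try:
    np.clip(diff_xy, a_min=0, a_max=np.max(diff_xy))
except ValueError as e:
    print(e)  # zero-size array to reduction operation maximum which has no identity
```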
At the moment I worked around the problem by initialising pred_bb
to an array of shape (1, 4) where all values are zero (and similarly for the class and confidence arrays). Is that correct?
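The workaround described above can be sketched as follows; the array names mirror the evaluate() call in the traceback, and the exact dtypes are assumptions:

```python
import numpy as np

# Workaround sketch: when the model produces no detections, substitute a
# single all-zero box with class 0 and confidence 0 so the shapes passed
# to evaluate() stay valid.
pred_bb = np.zeros((0, 4))             # model predicted no boxes
pred_classes = np.zeros((0,), dtype=int)
pred_conf = np.zeros((0,))

if pred_bb.shape[0] == 0:
    pred_bb = np.zeros((1, 4))         # one dummy box at the origin
    pred_classes = np.zeros((1,), dtype=int)
    pred_conf = np.zeros((1,))
```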
Another thing: I noticed that papers usually seem to report mAP as a value between 0 and 100. Is that just because they use percentages, or do they manipulate the metric somehow?
Thank you.
Thanks for the report, indeed the code does not handle this case! Good catch: I tested it mostly with an SSD network, which always outputs a fixed number of predictions, and completely forgot about methods that output a variable number of predictions. Adding a zero box with confidence 0 can work as a temporary solution, but it may slightly affect the PR curve (the first threshold is 0), and the dummy box will be counted as a false positive of class 0, which it is not. I will fix this later this week.
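A cleaner guard than the dummy box would be to make the IoU computation tolerate empty inputs. This is only a sketch of one possible fix, not the library's actual patch; `jaccard_safe` is a hypothetical name, and boxes are assumed to be in (xmin, ymin, xmax, ymax) format:

```python
import numpy as np

def jaccard_safe(box_a, box_b):
    """IoU matrix of shape (len(box_a), len(box_b)), safe for empty inputs."""
    # Early return avoids calling np.max/np.clip on a zero-size array; the
    # caller can then record every GT box as a false negative without
    # introducing a spurious class-0 false positive.
    if box_a.shape[0] == 0 or box_b.shape[0] == 0:
        return np.zeros((box_a.shape[0], box_b.shape[0]))
    # Pairwise intersection corners via broadcasting.
    max_xy = np.minimum(box_a[:, None, 2:], box_b[None, :, 2:])
    min_xy = np.maximum(box_a[:, None, :2], box_b[None, :, :2])
    inter_wh = np.clip(max_xy - min_xy, a_min=0, a_max=None)
    inter = inter_wh[..., 0] * inter_wh[..., 1]
    area_a = ((box_a[:, 2] - box_a[:, 0]) * (box_a[:, 3] - box_a[:, 1]))[:, None]
    area_b = ((box_b[:, 2] - box_b[:, 0]) * (box_b[:, 3] - box_b[:, 1]))[None, :]
    return inter / (area_a + area_b - inter)
```

Passing `a_max=None` to `np.clip` sidesteps the `np.max(diff_xy)` reduction that crashes on empty arrays in the first place.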
As for the 0 to 100 range, I don't see what it could be other than a percentage: precision and recall are both ratios (0 to 1), and average precision is the area under the precision-recall curve, so it is also bounded by 0 and 1. I am quite sure they just use percentages ;). If you still have doubts, link me one of the papers and I will take a closer look!
Just fixed the no-prediction bug (#3). If there is anything else, do not hesitate to ask!
Finally implemented the interpolated average precision.
It is now computed by default when plot is called, since VOC uses it.
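For reference, VOC-style interpolated AP replaces the precision at each recall level with the maximum precision at any equal or higher recall, then integrates the resulting step curve. A minimal sketch (assumed behaviour, not necessarily the repository's exact implementation; `interpolated_ap` is a hypothetical name):

```python
import numpy as np

def interpolated_ap(recall, precision):
    """Interpolated average precision from sorted recall/precision arrays."""
    # Pad the curve so it spans recall 0 to 1.
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    # Interpolation step: make precision non-increasing from right to left,
    # i.e. p[i] becomes the max precision at any recall >= r[i].
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # Integrate the step curve over the points where recall changes.
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))
```

For example, a detector reaching precision 1.0 at recall 0.5 and precision 0.5 at recall 1.0 gives an interpolated AP of 0.75.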
Hi! First of all, thank you for the code :) I see you modified the code to handle the no-prediction case, and I wanted to ask if, and how, we should pass the first three arrays to the evaluate function in that case.
Hi, nice work and thank you! I was wondering whether you were already working on adding the interpolated version of the metric. I plan to look into it in the coming days and thought you might have something to share.