MathGaron / mean_average_precision

A small and simple Python/NumPy utility to compute mean average precision (mAP) for detection tasks.
MIT License

Change how false/true positives/negatives are calculated #18

Closed · mrshurik closed this 6 years ago

mrshurik commented 6 years ago
  1. Only compute TP; derive FP = pred_count - TP and FN = gt_count - TP (see the sketch after this list)
  2. Move the IoU computation outside of the class/confidence loop => much faster
  3. Use a boolean mask of the IoU instead of the raw IoU values => faster
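
A minimal NumPy sketch of the three points above, with illustrative box data and an assumed 0.5 IoU threshold; this is a sketch of the scheme, not the repository's actual code:

```python
import numpy as np

def pairwise_iou(pred_boxes, gt_boxes):
    """Pairwise IoU between (P, 4) predictions and (G, 4) ground truths,
    boxes as [x1, y1, x2, y2]. Returns a (P, G) matrix."""
    x1 = np.maximum(pred_boxes[:, None, 0], gt_boxes[None, :, 0])
    y1 = np.maximum(pred_boxes[:, None, 1], gt_boxes[None, :, 1])
    x2 = np.minimum(pred_boxes[:, None, 2], gt_boxes[None, :, 2])
    y2 = np.minimum(pred_boxes[:, None, 3], gt_boxes[None, :, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_p = (pred_boxes[:, 2] - pred_boxes[:, 0]) * (pred_boxes[:, 3] - pred_boxes[:, 1])
    area_g = (gt_boxes[:, 2] - gt_boxes[:, 0]) * (gt_boxes[:, 3] - gt_boxes[:, 1])
    return inter / (area_p[:, None] + area_g[None, :] - inter)

# Toy data for a single class.
pred_boxes = np.array([[0, 0, 10, 10], [20, 20, 30, 30]], dtype=float)
gt_boxes = np.array([[1, 1, 10, 10]], dtype=float)

# IoU is computed once, outside any class/confidence loop, and reduced
# to a boolean match mask right away.
iou_mask = pairwise_iou(pred_boxes, gt_boxes) >= 0.5

# Only TP is counted explicitly; FP and FN fall out of the counts.
# (A full implementation would also enforce one-to-one matching.)
tp = int(iou_mask.any(axis=1).sum())  # predictions covering some ground truth
fp = len(pred_boxes) - tp             # unmatched predictions
fn = len(gt_boxes) - tp               # unmatched ground truths
print(tp, fp, fn)                     # 1 1 0
```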
mrshurik commented 6 years ago

You may not like this change, since it alters the definition of FP given in the README. I don't want a prediction of class z that lands on a ground truth of class x to count as a false positive for class x. The problem with that: 1) it is not a failure of class x but of class z, so it shouldn't affect the precision of class x. 2) If you count it, a single prediction can produce 2 FPs, one in class z and one in class x. Kind of wrong IMHO. So, it's up to you to accept it or not. Thanks for the code, anyway.
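
To make the double counting concrete, here is a hypothetical tally of that scenario (a sketch of the argument, not code from the PR):

```python
# Hypothetical toy scenario (names are illustrative, not this repo's API):
# a single class-z prediction overlaps a class-x ground truth with high IoU.
from collections import Counter

prediction_class, gt_class = "z", "x"

# Old accounting: the mismatched prediction is an FP for its own class z,
# AND the covered-but-unmatched ground truth adds an FP for class x,
# so one mistake yields two false positives.
old_fp = Counter({prediction_class: 1, gt_class: 1})

# Proposed accounting: one FP for class z (unmatched prediction) and one
# FN for class x (missed ground truth); class x's precision is untouched.
new_fp = Counter({prediction_class: 1})
new_fn = Counter({gt_class: 1})

print(sum(old_fp.values()), sum(new_fp.values()))  # 2 1
```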

MathGaron commented 6 years ago

Hi mrshurik,

I believe you are right, thanks for pointing that out: a class z prediction should indeed not affect the precision of class x. I will merge your code and update the tests and README.

Thanks for your contribution!