The AP is obtained as the AUC of the precision/recall curve. If you want a single recall and precision value, you need to set a threshold on your predictions. My guess is that this threshold is set too low, hence it gives you a perfect recall (you have spotted all instances) but a very low precision (lots of false positives). I would not recommend using that piece of code for recall/precision; it was commented out for a reason. The mAP metric makes more sense for spotting.
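To illustrate the difference, here is a minimal sketch using scikit-learn on made-up toy scores (this is not the repository's actual code): average precision summarizes the whole precision/recall curve with no threshold, while a single precision/recall pair depends entirely on where you threshold the scores.

```python
import numpy as np
from sklearn.metrics import average_precision_score, precision_score, recall_score

# Toy example for one class: ground-truth labels and predicted confidences.
y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1])
y_score = np.array([0.2, 0.4, 0.9, 0.3, 0.6, 0.5, 0.1, 0.7])

# AP needs no threshold: it is the area under the precision/recall curve.
ap = average_precision_score(y_true, y_score)
print(f"AP = {ap:.3f}")

# A single precision/recall pair only exists at a chosen threshold.
for threshold in (0.05, 0.65):
    y_pred = (y_score >= threshold).astype(int)
    p = precision_score(y_true, y_pred)
    r = recall_score(y_true, y_pred)
    print(f"threshold={threshold}: precision={p:.3f} recall={r:.3f}")

# With threshold=0.05 almost everything is predicted positive, so recall
# hits 1.0 while precision collapses -- the same pattern as the numbers
# reported below.
```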
Hope that helps!
Hi Silvio,
I was able to run your Minute classifier program and got reasonable performance in terms of mAP. I am also interested in seeing the precision and recall metrics for each of the "goal", "card", and "subs" events, so I enabled the precision and recall code in your Network.py.
What I got is very strange. On the test set, the precision is very low and the recall is 1 for every event:

auc: 0.623 (auc_PR_0: 0.974, auc_PR_1: 0.532, auc_PR_2: 0.657, auc_PR_3: 0.680)
precision: 0.251 (precision_0: 0.857, precision_1: 0.049, precision_2: 0.059, precision_3: 0.039)
recall: 0.623 (recall_0: 1.000, recall_1: 1.000, recall_2: 1.000, recall_3: 1.000)
Loss: 34.6, Accuracy: 0.645, mAP: 0.623
This is also the case for the train set and validation set.
I am just wondering if I missed anything in using the precision and recall metrics? Thanks.