Open shanalikhan opened 3 weeks ago
Hi! Thanks for your contribution! Great first issue!
I have used the following code
self.log('precision', map_results['precision'].mean().float().item(), on_step=True, on_epoch=True, prog_bar=True, logger=True)
self.log('recall', map_results['recall'].mean().float().item(), on_step=True, on_epoch=True, prog_bar=True, logger=True)
It's the overall score, and it comes out negative; how can it be negative? Also, since I'm fine-tuning the model for a binary (two-class) problem, I think the overall mean isn't really suitable here and I should take the mean over the two classes instead.
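For reference, a rough sketch of one possible explanation (not from this thread): COCO-style evaluation fills entries it cannot compute (e.g. a class with no ground truth at a given IoU threshold) with a -1 sentinel, so averaging the raw precision/recall tensors can pull the mean below zero. Masking those sentinels out, assuming the same LightningModule hook as the snippet above, would look roughly like this:
# Hypothetical sketch: keep only valid (non-sentinel) entries before averaging
precision = map_results['precision']
recall = map_results['recall']
self.log('precision', precision[precision > -1].mean().item(), on_step=True, on_epoch=True, prog_bar=True, logger=True)
self.log('recall', recall[recall > -1].mean().item(), on_step=True, on_epoch=True, prog_bar=True, logger=True)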
@shanalikhan is it the mean average recall you are looking for, e.g. the MAR value per class?
I assume so, because that is one of the more commonly used metrics within detection tasks. If this is the case, then you just need to set class_metrics=True
and then look at map_results["mar_100_per_class"],
which is the mean average recall at 100 detections per image (the maximum number of detections per class with the default settings), reported per class. Assuming that your classes are simply 0 and 1, then
map_results["mar_100_per_class"][0] # mar value for class 0
map_results["mar_100_per_class"][1] # mar value for class 1
@SkafteNicki Thanks for sharing the details. One quick question: why are the map_* values sometimes negative? Is it really possible to have a negative mAP? For example:
🚀 Feature
How do I find the recall of class 0 and class 1 for this code? Sorry, the documentation is not clear to me. I can set micro, but how do I identify the overall P and R by class name?
Motivation