Lightning-AI / torchmetrics

Machine learning metrics for distributed, scalable PyTorch applications.
https://lightning.ai/docs/torchmetrics/
Apache License 2.0

How to find recall for each class using MeanAveragePrecision #2821

Status: Open · shanalikhan opened this issue 3 weeks ago

shanalikhan commented 3 weeks ago

🚀 Feature

How can I find the recall of class 0 and class 1 with this code? Sorry, the documentation is not clear to me. I can set `average="micro"`, but how do I get the overall precision and recall broken down by class?

Motivation


import lightning as L
from torchvision import models
from torchmetrics.detection import MeanAveragePrecision


class CocoDNN(L.LightningModule):
    def __init__(self):
        super().__init__()
        self.model = models.detection.fasterrcnn_mobilenet_v3_large_fpn(weights="DEFAULT")
        self.metric = MeanAveragePrecision(
            iou_type="bbox",
            average="macro",
            class_metrics=True,
            iou_thresholds=[0.5, 0.75],
            extended_summary=True,
        )

    def training_step(self, batch, batch_idx):
        #### Some code here
        ...

    def validation_step(self, batch, batch_idx):
        imgs, annot = batch
        targets, preds = [], []
        for img_b, annot_b in zip(imgs, annot):
            if len(img_b) == 0:
                continue
            if len(annot_b) > 1:
                targets.extend(annot_b)
            else:
                targets.append(annot_b[0])

            # print(f"Annotated : {len(annot_b)} - {annot_b}")
            # In eval mode the torchvision detection model returns a list of
            # prediction dicts (boxes/labels/scores) rather than a loss dict.
            loss_dict = self.model(img_b, annot_b)

            # print(f"Predicted : {len(loss_dict)} - {loss_dict}")
            if len(loss_dict) > 1:
                preds.extend(loss_dict)
            else:
                preds.append(loss_dict[0])

        self.metric.update(preds, targets)
        map_results = self.metric.compute()
        print("RECALL")
        print(map_results['recall'])
        self.log('map_50', map_results['map_50'].float().item(), on_step=True, on_epoch=True, prog_bar=True, logger=True)
        self.log('map_75', map_results['map_75'].float().item(), on_step=True, on_epoch=True, prog_bar=True, logger=True)
        return map_results['map_75']
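
For reference, with extended_summary=True the dictionary returned by compute() also carries the raw COCO-style precision and recall tensors, so per-class numbers can be sliced out right after the compute() call above. A minimal sketch, assuming the tensor layout documented for extended_summary (precision: [T, R, K, A, M], recall: [T, K, A, M], with T = IoU thresholds, R = recall thresholds, K = observed classes, A = area ranges, M = max-detection settings) and assuming both classes appear in the targets so indices 0 and 1 line up with class ids 0 and 1; the exact indexing is worth double-checking against the installed torchmetrics version:

        recall = map_results["recall"]                 # shape [T, K, A, M]
        recall_class0 = recall[:, 0, 0, -1]            # class 0, "all" area range, largest max-detections setting
        recall_class1 = recall[:, 1, 0, -1]            # class 1
        print(recall_class0.mean(), recall_class1.mean())

        precision = map_results["precision"]           # shape [T, R, K, A, M]
        precision_class0 = precision[:, :, 0, 0, -1]   # class 0 over all IoU and recall thresholds
        print(precision_class0[precision_class0 > -1].mean())   # -1 marks entries that were never evaluated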

github-actions[bot] commented 3 weeks ago

Hi! Thanks for your contribution, great first issue!

shanalikhan commented 3 weeks ago

I have used the following code:

self.log('precision', map_results['precision'].mean().float().item(), on_step=True, on_epoch=True, prog_bar=True, logger=True)
self.log('recall', map_results['recall'].mean().float().item(), on_step=True, on_epoch=True, prog_bar=True, logger=True)

It gives an overall score, and it is negative. How can it be negative? Also, since I'm fine-tuning the model for a binary (two-class) problem, I don't think the overall mean is appropriate here; I should take the mean over the two classes instead.
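
A likely explanation, stated as an assumption since it comes from the pycocotools convention rather than anything specific to this code: entries that could not be evaluated (for example a class or area range with no ground truth in the accumulated data) are stored as -1, and a plain .mean() over the whole tensor pulls the average down and can push it below zero. Masking the sentinels out before averaging, roughly as sketched below, avoids that:

valid_precision = map_results['precision'][map_results['precision'] > -1]   # drop -1 "not evaluated" entries
valid_recall = map_results['recall'][map_results['recall'] > -1]
self.log('precision', valid_precision.mean().item(), on_step=True, on_epoch=True, prog_bar=True, logger=True)
self.log('recall', valid_recall.mean().item(), on_step=True, on_epoch=True, prog_bar=True, logger=True)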

SkafteNicki commented 2 weeks ago

@shanalikhan is it the mean average recall you are looking for, i.e. the MAR value per class? I assume so, because that is one of the more commonly used metrics in detection tasks. If so, you just need to set class_metrics=True and then look at map_results["mar_100_per_class"], which is the mean average recall at 100 detections per image (the maximum number of detections per class with default settings), reported per class. Assuming that your classes are simply 0 and 1, then

map_results["mar_100_per_class"][0]  # mar value for class 0
map_results["mar_100_per_class"][1]  # mar value for class 1
shanalikhan commented 2 weeks ago

@SkafteNicki Thanks for sharing the details. One quick question: why are the map_* values sometimes negative? Is it really possible to have a negative mAP? For example:

(screenshot: compute() output showing negative map_* values)