Lightning-AI / torchmetrics

Machine learning metrics for distributed, scalable PyTorch applications.
https://lightning.ai/docs/torchmetrics/
Apache License 2.0

How to find recall for each class using MeanAveragePrecision #2821

Open shanalikhan opened 1 day ago

shanalikhan commented 1 day ago

🚀 Feature

How do I find the recall of class 0 and class 1 with this code? Sorry, the documentation is not clear to me. I can set `average="micro"`, but how do I get the overall precision and recall broken down by class?

Motivation


import lightning as L
from torchvision import models
from torchmetrics.detection import MeanAveragePrecision


class CocoDNN(L.LightningModule):
    def __init__(self):
        super().__init__()
        self.model = models.detection.fasterrcnn_mobilenet_v3_large_fpn(weights="DEFAULT")
        self.metric = MeanAveragePrecision(
            iou_type="bbox",
            average="macro",
            class_metrics=True,
            iou_thresholds=[0.5, 0.75],
            extended_summary=True,
        )

    def training_step(self, batch, batch_idx):
        # Some code here
        ...

    def validation_step(self, batch, batch_idx):
        imgs, annot = batch
        targets, preds = [], []
        for img_b, annot_b in zip(imgs, annot):
            if len(img_b) == 0:
                continue
            if len(annot_b) > 1:
                targets.extend(annot_b)
            else:
                targets.append(annot_b[0])

            # print(f"Annotated : {len(annot_b)} - {annot_b}")
            loss_dict = self.model(img_b, annot_b)

            # print(f"Predicted : {len(loss_dict)} - {loss_dict}")
            if len(loss_dict) > 1:
                preds.extend(loss_dict)
            else:
                preds.append(loss_dict[0])

        self.metric.update(preds, targets)
        map_results = self.metric.compute()
        print("RECALL")
        print(map_results['recall'])
        self.log('map_50', map_results['map_50'].float().item(), on_step=True, on_epoch=True, prog_bar=True, logger=True)
        self.log('map_75', map_results['map_75'].float().item(), on_step=True, on_epoch=True, prog_bar=True, logger=True)
        return map_results['map_75']
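
For reference, my understanding is that with `class_metrics=True` the `compute()` dict also contains per-class entries (`map_per_class`, `mar_100_per_class`) alongside a `classes` tensor, and with `extended_summary=True` a raw `recall` tensor. A minimal sketch of what I am trying (the key names come from the docs; the axis order `[iou_threshold, class, area_range, max_detections]` is my assumption):

```python
map_results = self.metric.compute()

# Per-class summary metrics (returned when class_metrics=True).
# 'classes' lists the class ids in the same order as the *_per_class tensors.
for cls_id, cls_map, cls_mar in zip(
    map_results["classes"].tolist(),
    map_results["map_per_class"].tolist(),
    map_results["mar_100_per_class"].tolist(),
):
    print(f"class {cls_id}: mAP={cls_map:.3f}, mAR@100={cls_mar:.3f}")

# Raw recall tensor from extended_summary=True.
# Assumed shape: [num_iou_thresholds, num_classes, num_area_ranges, num_max_detections].
recall = map_results["recall"]
print(recall.shape)
```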

github-actions[bot] commented 1 day ago

Hi! Thanks for your contribution, great first issue!

shanalikhan commented 12 hours ago

I have used the following code

self.log('precision', map_results['precision'].mean().float().item(), on_step=True, on_epoch=True, prog_bar=True, logger=True)
self.log('recall', map_results['recall'].mean().float().item(), on_step=True, on_epoch=True, prog_bar=True, logger=True)

It's an overall score and it comes out negative. How can recall be negative? Also, since I'm fine-tuning the model on two classes, I don't think a mean over the whole tensor is suitable here; I should take the mean per class for the 2 classes instead.
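
If the COCO evaluator convention applies here, the raw `recall` tensor uses `-1` as a sentinel for (class, area range, max-detection) combinations that have no ground truth, so taking `.mean()` over the whole tensor pulls the average below zero. A minimal sketch of the per-class reduction I have in mind, assuming `map_results` comes from `self.metric.compute()` as above and the class axis is dimension 1 (that layout is my assumption):

```python
recall = map_results["recall"]  # assumed shape: [num_iou_thresholds, num_classes, num_areas, num_max_dets]
valid = recall > -1             # -1 marks entries with no ground truth; exclude them from the mean

# Mean recall per class over IoU thresholds, area ranges and max-detection settings.
for cls_idx in range(recall.shape[1]):
    cls_recall = recall[:, cls_idx][valid[:, cls_idx]]
    if cls_recall.numel() > 0:
        print(f"class index {cls_idx}: recall = {cls_recall.mean().item():.3f}")
    else:
        print(f"class index {cls_idx}: no ground truth seen")
```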