megvii-research / AnchorDETR

An official implementation of the Anchor DETR.

How to extract precision, recall and f1-score metrics #26

Closed jackonealll closed 2 years ago

jackonealll commented 2 years ago

Hello, thank you for sharing the code.

I would like to know how to extract precision, recall and f1-score metrics. I already have the AP and AR metrics.

I am trying to use the following code, but it gives me a multi-dimensional numpy array:

precision = coco_eval.eval['precision']
recall = coco_eval.eval['recall']

Can you help me?
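For context, a minimal sketch of what that array contains, assuming coco_eval is a pycocotools COCOeval instance on which evaluate() and accumulate() have already been run:

precision = coco_eval.eval['precision']
# shape [T, R, K, A, M]:
#   T = 10  IoU thresholds (0.50:0.05:0.95)
#   R = 101 recall thresholds (0.00:0.01:1.00)
#   K = number of categories
#   A = 4   area ranges (all, small, medium, large)
#   M = 3   maxDets settings (1, 10, 100)
print(precision.shape)  # e.g. (10, 101, K, 4, 3)

This is why plain indexing returns a multi-dimensional array rather than a single number.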

tangjiuqi097 commented 2 years ago

Hi, you can refer to the following code:

# Copyright (c) 2021 megvii-model. All Rights Reserved.
import torch

# Choose the IoU threshold to report at; COCO evaluates at 0.50:0.05:0.95.
iou_thr = 0.5
iou_thrs = [0.5 + i / 20. for i in range(10)]
iou_idx = iou_thrs.index(iou_thr)

# eval['precision'] has shape [T, R, K, A, M]: IoU thresholds x recall
# thresholds x classes x area ranges x maxDets settings.
precision = torch.tensor(coco_evaluator.coco_eval['bbox'].eval['precision'])
# Slice IoU=0.5, area range 'all' (index 0), maxDets=100 (index -1),
# then average the precision over classes at each recall threshold.
precision_list = precision[iou_idx, :, :, 0, -1].mean(-1)
# The 101 recall thresholds at which COCO evaluates precision (0.00:0.01:1.00).
recall_list = torch.linspace(0, 1.0, 101)
# F1 at every recall threshold; keep the best operating point.
f1_scores = 2 * precision_list * recall_list / (precision_list + recall_list)
f1_score, idx = f1_scores.max(0)
p, r = precision_list[idx], recall_list[idx]
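A possible extension (a sketch, not from the original answer): keeping the class axis instead of averaging over it gives a best F1 per class. The small epsilon guards the 0/0 case at recall 0, and pycocotools fills unavailable entries with -1, so categories absent from the evaluation set will show negative values here.

per_class_precision = precision[iou_idx, :, :, 0, -1]  # [101, K]
per_class_f1 = 2 * per_class_precision * recall_list[:, None] / (
    per_class_precision + recall_list[:, None] + 1e-8)
best_f1_per_class, _ = per_class_f1.max(0)  # one value per class
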
jackonealll commented 2 years ago

Thank you so much! It worked.

jackonealll commented 2 years ago

@tangjiuqi097 Why was the recall taken from a random list?

Why not use this approach instead?

recall_list = torch.tensor(coco_evaluator.coco_eval['bbox'].eval['recall'])

tangjiuqi097 commented 2 years ago

@jackonealll The recall_list is not a random list; it holds the recall value that each entry of the precision array corresponds to. COCO evaluates precision at 101 fixed recall thresholds (0.00:0.01:1.00), which is exactly what torch.linspace(0, 1.0, 101) reproduces. By contrast, coco_evaluator.coco_eval['bbox'].eval['recall'] stores only the maximum recall achieved per class and is not paired with the precision values. You can refer to the COCO API documentation for more detail.
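To make the distinction concrete, a short sketch under the same assumptions as above:

precision = coco_evaluator.coco_eval['bbox'].eval['precision']  # [T, R, K, A, M]
recall = coco_evaluator.coco_eval['bbox'].eval['recall']        # [T, K, A, M]
# precision has a recall-threshold axis R: precision[t, r, k, a, m] is the
# interpolated precision at recall threshold recThrs[r], and recThrs is
# exactly torch.linspace(0, 1.0, 101). eval['recall'] has no R axis; it only
# stores the maximum recall achieved per class, so it cannot be paired
# point-for-point with the precision curve.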

jackonealll commented 2 years ago

Thank you again!!