david8862 / keras-YOLOv3-model-set

end-to-end YOLOv4/v3/v2 object detection pipeline, implemented on tf.keras with different technologies
MIT License

Question about precision calculation #208

Closed ShihuaiXu closed 3 years ago

ShihuaiXu commented 3 years ago

Hi David, I have a question about how precision and recall are computed. Say there are 3 object classes, A, B, and C, and one test-set image contains exactly these three targets. In one detection run, A and B are both detected and pass the IoU threshold, while C is neither falsely detected nor detected at all; in other words, C has a ground truth but produces no TP and no FP. When computing precision, should the result be 2/3 ≈ 0.67 or 2/2 = 1? Your code computes 2/3, but since there is no false detection, I think C should not be counted in the denominator, so the result should be 2/2 = 1. I hope you can clear this up when you have time. The code in question:

```python
def get_mean_metric(metric_records, gt_classes_records):
    '''
    Calculate mean metric, but only count classes which have ground truth object

    Param
        metric_records: metric dict like:
            metric_records = {
                'aeroplane': 0.79,
                'bicycle': 0.79,
                ...
                'tvmonitor': 0.71,
            }
        gt_classes_records: ground truth class dict like:
            gt_classes_records = {
                'car': [
                    ['000001.jpg', '100,120,200,235'],
                    ['000002.jpg', '85,63,156,128'],
                    ...
                ],
                ...
            }
    Return
        mean_metric: float value of mean metric
    '''
    mean_metric = 0.0
    count = 0
    for (class_name, metric) in metric_records.items():
        # only count classes that actually have ground truth objects
        if (class_name in gt_classes_records) and (len(gt_classes_records[class_name]) != 0):
            mean_metric += metric
            count += 1
    mean_metric = (mean_metric / count) * 100 if count != 0 else 0.0
    return mean_metric
```
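To make the two readings concrete, here is a minimal sketch of the arithmetic. The per-class values are inferred from the scenario above, assuming the evaluator assigns precision 0.0 to a class like C that has ground truth but no detections (which is what would produce the 2/3 figure):

```python
# per-class precision inferred from the A/B/C scenario:
# A and B are detected correctly (precision 1.0 each); C has a
# ground-truth box but no detections, so its precision is taken as 0.0
class_precision = {'A': 1.0, 'B': 1.0, 'C': 0.0}

# counting C in the mean, since it has ground truth:
mean_with_c = sum(class_precision.values()) / 3                     # (1 + 1 + 0) / 3 ≈ 0.67

# excluding C, as the question proposes, since it produced no detections:
mean_without_c = (class_precision['A'] + class_precision['B']) / 2  # 2 / 2 = 1.0
```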
david8862 commented 3 years ago

@ShihuaiXu get_mean_metric() was originally added to compute mAP accurately: when taking the mean, it only counts classes that have ground truth. For precision/recall, common object detection metrics usually don't average across classes at all, so here the mean is simply computed the same way as for mAP, as a reference value.
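For illustration, a toy call showing that behavior (the class names and metric values below are made up; 'boat' has no entry in gt_classes_records, so it is excluded from the mean):

```python
metric_records = {'car': 0.90, 'person': 0.80, 'boat': 0.00}
gt_classes_records = {
    'car': [['000001.jpg', '100,120,200,235']],
    'person': [['000002.jpg', '85,63,156,128']],
    # no 'boat' key: the class has no ground truth, so it is skipped
}

# only 'car' and 'person' are counted: (0.90 + 0.80) / 2 * 100 = 85.0
print(get_mean_metric(metric_records, gt_classes_records))  # -> 85.0
```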

ShihuaiXu commented 3 years ago

Thanks, David.