OCR-D / ocrd_segment

OCR-D-compliant page segmentation

evaluate: explain/document metrics #57

Open bertsky opened 2 years ago

bertsky commented 2 years ago

If I understand correctly, the idea behind these metrics is taken from the "Rethinking Semantic Segmentation Evaluation" paper, but could you explain to me how I could obtain AP, TPs, FPs, FNs for an instance segmentation task?

Originally posted by @andreaceruti in https://github.com/cocodataset/cocoapi/issues/564#issuecomment-1064223428

bertsky commented 2 years ago

Yes, that paper lent the idea for the oversegmentation and undersegmentation measures – but only these two (not the others) – and I took the liberty of deviating from the exact definition of Zhang et al. 2021: https://github.com/OCR-D/ocrd_segment/blob/81923495648c346a84436fb7d08727d9c13eb88d/ocrd_segment/evaluate.py#L440-L444

So in my implementation, these measures are merely raw ratios, i.e. the share of regions in GT and DT that have been oversegmented (or undersegmented, respectively).
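
To make the raw-ratio reading concrete, here is a minimal sketch (not the actual evaluate.py code; the match-pair representation and the function name are assumptions for illustration): a GT region counts as oversegmented if it matched more than one DT region, and a DT region counts as undersegmented if it matched more than one GT region.

```python
# Hypothetical illustration of the raw-ratio reading (not ocrd_segment code):
# given a list of (gt_id, dt_id) match pairs, count how many GT regions were
# split across several DT regions, and how many DT regions cover several GT
# regions, each as a share of the respective total.
from collections import Counter

def over_under_segmentation(matches, num_gt, num_dt):
    """Return (oversegmentation, undersegmentation) as raw ratios."""
    gt_counts = Counter(gt for gt, _ in matches)
    dt_counts = Counter(dt for _, dt in matches)
    # share of GT regions that were split into several DT regions
    oversegmented = sum(1 for c in gt_counts.values() if c > 1) / num_gt
    # share of DT regions that merged several GT regions
    undersegmented = sum(1 for c in dt_counts.values() if c > 1) / num_dt
    return oversegmented, undersegmented

# e.g. GT region 0 was split into DT regions 0 and 1:
print(over_under_segmentation([(0, 0), (0, 1), (1, 2)], num_gt=2, num_dt=3))
# -> (0.5, 0.0)
```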

My notion of a match is somewhat arbitrary, but IMO more adequate than averaging over different IoU thresholds at various confidence thresholds (as COCO-style AP does).

(All area values under consideration are numbers of pixels in the polygon-masked segments, not just bounding box sizes.)
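
As a sketch of what such a pixel-area-based match criterion could look like (the exact definition is in the evaluate.py lines linked above; the 0.5 threshold and the `is_match` helper here are illustrative assumptions, not the tool's actual rule):

```python
import numpy as np

def is_match(gt_mask, dt_mask, threshold=0.5):
    """Hypothetical match criterion on polygon-rasterised boolean masks:
    the pair counts as a match if the pixel intersection covers more than
    `threshold` of either segment's area. Areas are pixel counts of the
    polygon masks, not bounding-box sizes."""
    inter = np.count_nonzero(gt_mask & dt_mask)
    return (inter > threshold * np.count_nonzero(gt_mask) or
            inter > threshold * np.count_nonzero(dt_mask))

gt = np.zeros((4, 4), bool); gt[:, :2] = True  # left half of the page
dt = np.zeros((4, 4), bool); dt[:, :3] = True  # left three quarters
print(is_match(gt, dt))  # True: the intersection covers all of GT
```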

So all in all, you get the following metrics here:

For each metric, there is a page-wise (or even segment-wise) measure and an aggregated one; the latter always uses micro-averaging over all (matching pairs in all) pages.
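
For example, micro-averaging a ratio metric means pooling numerators and denominators across all pages before dividing once, rather than averaging the per-page ratios (a generic sketch, not the tool's code):

```python
# Generic sketch of micro- vs. macro-averaging a per-page ratio
# (not taken from ocrd_segment): each page contributes a numerator
# (e.g. number of oversegmented GT regions) and a denominator
# (e.g. total number of GT regions on that page).
def micro_average(pages):
    """Pool counts over all pages, then divide once."""
    num = sum(n for n, _ in pages)
    den = sum(d for _, d in pages)
    return num / den

def macro_average(pages):
    """Average the per-page ratios (shown for contrast only)."""
    return sum(n / d for n, d in pages) / len(pages)

pages = [(1, 10), (5, 5)]    # (oversegmented, total) per page
print(micro_average(pages))  # 6/15 = 0.4
print(macro_average(pages))  # (0.1 + 1.0)/2 = 0.55
```

The difference matters when pages vary in segment count: micro-averaging weights every segment equally, whereas macro-averaging would weight every page equally.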