andreasveit / coco-text

COCO-Text API http://vision.cornell.edu/se3/coco-text/

Average Precision in evaluation script? #16

Open sravya8 opened 6 years ago

sravya8 commented 6 years ago

It seems like COCO-Text ICDAR17 is using VOC-style AP as an evaluation metric, so I'm curious why it is not supported in the evaluation API?

sravya8 commented 6 years ago

I see there is an offline evaluation script provided on the competition website, on the "My methods" page. Here is the snippet for the AP calculation; the comments are mine:

for n in range(len(confList)):  # Num predictions
    match = matchList[n]
    if match:
        correct += 1
        AP += float(correct)/(n + 1)  # rel(n) missing?

if numGtCare > 0:
    AP /= numGtCare

Is there a rel(n) term missing? Also, from the competition page it seems like the evaluation is based on VOC-style AP. In that case, shouldn't the script use interpolated precision over intervals of confidence?
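
For reference, here is a minimal sketch (not taken from the competition code) that contrasts the two AP variants the question refers to: the running-precision form with an explicit rel(n) term, and VOC2007-style 11-point interpolated AP. The names matchList and numGtCare mirror the snippet above; the function names and the toy example are hypothetical and only meant to make the question concrete.

def ap_with_rel(matchList, numGtCare):
    """Textbook AP: add precision@n only at ranks n where rel(n) == 1."""
    correct = 0
    AP = 0.0
    for n, match in enumerate(matchList):   # matchList sorted by confidence, descending
        if match:                            # rel(n) == 1
            correct += 1
            AP += float(correct) / (n + 1)   # precision at rank n+1
    return AP / numGtCare if numGtCare > 0 else 0.0


def ap_voc_11point(matchList, numGtCare):
    """VOC2007-style AP: interpolated precision averaged over 11 recall levels."""
    correct = 0
    precisions, recalls = [], []
    for n, match in enumerate(matchList):
        if match:
            correct += 1
        precisions.append(float(correct) / (n + 1))
        recalls.append(float(correct) / numGtCare if numGtCare > 0 else 0.0)

    AP = 0.0
    for t in [i / 10.0 for i in range(11)]:       # recall thresholds 0.0, 0.1, ..., 1.0
        p_at_t = [p for p, r in zip(precisions, recalls) if r >= t]
        AP += max(p_at_t) if p_at_t else 0.0      # interpolated precision at threshold t
    return AP / 11.0


if __name__ == "__main__":
    # Toy example: 5 detections sorted by confidence, 3 ground-truth boxes.
    matchList = [True, False, True, True, False]
    print(ap_with_rel(matchList, numGtCare=3))     # ~0.806
    print(ap_voc_11point(matchList, numGtCare=3))  # ~0.841

Note that the snippet from the script only accumulates precision at ranks where match is true, which is effectively the rel(n) weighting written implicitly; the interpolation over recall levels is the part that differs from the VOC definition.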