
calculating mAP by cocoApi and getting only 0.164 on coco with yolov3-tiny.weights #1230

Open hhh151671 opened 5 years ago

hhh151671 commented 5 years ago

I calculated the mAP with the COCO API, but got only 0.164 on COCO. Here is what I did:

./darknet detector valid cfg/coco.data cfg/yolov3-tiny.cfg /home/ubuntu/hua/darknet/yolov3-tiny.weights -out coco  -thresh .25

coco.data:

classes= 80
train  = /home/pjreddie/data/coco/trainvalno5k.txt
#valid  = /home/ubuntu/hua/test/val_data/train.txt
valid = /home/ubuntu/hua/darknet/data/coco/5k.txt
names = data/coco.names
backup = /home/pjreddie/backup/
eval=coco
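
Note that with eval=coco, darknet derives each image_id from the digits at the end of the image filename, so a mismatch between the valid list and the annotation file silently tanks the score. A minimal sanity check, assuming COCO-style filenames and the paths above:

from pycocotools.coco import COCO

# ids darknet will report: the digits after the last '_' in each filename,
# e.g. COCO_val2014_000000000139.jpg -> 139
with open('/home/ubuntu/hua/darknet/data/coco/5k.txt') as f:
    list_ids = {int(line.strip().rsplit('_', 1)[-1].split('.')[0])
                for line in f if line.strip()}

# ids present in the ground-truth annotations
ann_ids = set(COCO('/home/ubuntu/hua/instances_val2014.json').getImgIds())

print('images in 5k.txt missing from annotations:', len(list_ids - ann_ids))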

And here is the code for the mAP computation:

import json
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

annType = ['segm', 'bbox', 'keypoints'][1]  # evaluate bounding boxes
print('Running demo for *%s* results.' % annType)

# initialize the COCO ground-truth API
annFile = '/home/ubuntu/hua/instances_val2014.json'
cocoGt = COCO(annFile)

# initialize the COCO detections API from darknet's JSON output
resFile = '/home/ubuntu/hua/darknet/results/coco.json'
cocoDt = cocoGt.loadRes(resFile)

# restrict evaluation to the images that appear in the result file
with open(resFile) as f:
    dts = json.load(f)
imgIds = sorted({d['image_id'] for d in dts})
del dts

# run the evaluation
cocoEval = COCOeval(cocoGt, cocoDt, annType)
cocoEval.params.imgIds = imgIds
cocoEval.evaluate()
cocoEval.accumulate()
cocoEval.summarize()

Then I got this result:

Loading and preparing results...
DONE (t=6.88s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=42.91s).
Accumulating evaluation results...
DONE (t=5.95s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.083
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.164
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.077
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.038
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.229
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.107
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.172
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.185
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.011
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.142
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.401

It's very low. Is there anything wrong? @pjreddie
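
As an aside, the twelve numbers in that table are also exposed programmatically: after summarize(), pycocotools stores them in cocoEval.stats in printed order. A minimal sketch, reusing the cocoEval object from the script above:

# cocoEval.stats holds the 12 summary metrics in the order printed above
ap_all = cocoEval.stats[0]   # AP @ IoU=0.50:0.95, area=all
ap_50  = cocoEval.stats[1]   # AP @ IoU=0.50 (the 0.164 above)
ap_75  = cocoEval.stats[2]   # AP @ IoU=0.75
print('AP@0.50 = %.3f' % ap_50)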

muye5 commented 5 years ago

You can try it again with threshold=1e-2 or lower; I tried it with yolov3.weights on test-dev2017 and found that this parameter matters. But I cannot retrain a new model as good as yolov3.weights. Do you have any advice?
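
The reason is that COCO mAP integrates precision over the full range of confidences, so a valid run at -thresh .25 throws away the low-confidence detections the integration needs. A sketch of the re-run at a much lower threshold, assuming the valid subcommand honors -thresh as in the original command:

./darknet detector valid cfg/coco.data cfg/yolov3-tiny.cfg /home/ubuntu/hua/darknet/yolov3-tiny.weights -out coco -thresh .005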

wonchulSon commented 5 years ago

What is the name of the mAP computation code file? Did you write it yourself?

XinchaoCheng commented 5 years ago

Did you finally find the reason? Please help! I can only get 0.08@0.5 when testing on 5k.txt.

hujunchao commented 3 years ago

On COCO 2014, I only get 0.194@0.5 on val2014 with yolov3-tiny.weights:

 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.095
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.194
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.084
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.001
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.044
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.275
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.123
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.200
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.216
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.015
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.173
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.481
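
For anyone who wants to measure the threshold effect without re-running darknet, the detection JSON can be filtered by score before evaluation; a minimal sketch, assuming the annFile and resFile paths from the script above:

import json
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

annFile = '/home/ubuntu/hua/instances_val2014.json'   # paths from the script above
resFile = '/home/ubuntu/hua/darknet/results/coco.json'

with open(resFile) as f:
    dts = json.load(f)

# keep only detections at or above a confidence cutoff (simulates -thresh)
cutoff = 0.25
kept = [d for d in dts if d['score'] >= cutoff]
print('kept %d of %d detections at cutoff %.2f' % (len(kept), len(dts), cutoff))

cocoGt = COCO(annFile)
cocoDt = cocoGt.loadRes(kept)  # loadRes also accepts an in-memory list of dicts
cocoEval = COCOeval(cocoGt, cocoDt, 'bbox')
cocoEval.params.imgIds = sorted({d['image_id'] for d in kept})
cocoEval.evaluate()
cocoEval.accumulate()
cocoEval.summarize()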