CS-Jackson opened this issue 6 years ago
I got mAP: 0.409 too.
same here
I believe this commit changed the test accuracy: https://github.com/eriklindernoren/PyTorch-YOLOv3/commit/e9994d6a18f018e2c76985e038b669113aa44468
I got mAP 0.4648 with a confidence threshold of 0.2 (which I believe is the default in the original implementation).
Same here. How to fix it?
I notice that the AP of some classes is 0. What is the problem?
Same Here.
Same here. Running
python3.6 test.py --batch_size 10 --n_cpu 8
gave me an mAP of 0.4856844428591997
Same here. How to fix it?
I got mAP: 0.40963824871843535. How can I improve the accuracy?
The metric now being calculated seems to be the COCO mAP, not mAP@50. In the original tech report, the mAP is 33.0 for YOLOv3 608 × 608 with Darknet-53. In this code, however, images are resized to 416 × 416, and when I set --img_size to 608 I get an mAP of almost zero. I am not sure why.
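For context, the gap between the two numbers comes from how the thresholds are averaged. This is an illustrative sketch (not code from this repo): mAP@50 evaluates AP at a single IoU threshold of 0.50, while the COCO metric averages AP over the ten thresholds 0.50:0.05:0.95, which is much stricter and therefore much lower.

```python
# Illustrative sketch of COCO mAP vs. mAP@50 (not the repo's code).
import numpy as np

# The ten COCO IoU thresholds: 0.50, 0.55, ..., 0.95
iou_thresholds = np.arange(0.50, 1.00, 0.05)

def coco_map(ap_at):
    """ap_at: a callable mapping an IoU threshold to the dataset mAP
    at that threshold (hypothetical helper, stands in for a full
    evaluation pass). COCO mAP averages over all ten thresholds."""
    return float(np.mean([ap_at(t) for t in iou_thresholds]))

def map50(ap_at):
    """mAP@50 is simply the AP at a single IoU threshold of 0.50."""
    return float(ap_at(0.50))
```

Because AP falls steeply as the IoU threshold rises, the averaged COCO number is always well below mAP@50 for the same detector, which is consistent with ~0.33 COCO mAP vs. the ~0.55 mAP@50 reported for YOLOv3.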
Does anyone know how to improve this accuracy?
Same here...How to fix it? Thanks
Setting --img_size does not work here, because the DataLoader/Dataset is initialized with the wrong img_size. You can fix it yourself.
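The kind of fix meant above can be sketched like this (all names here are illustrative, not the repo's actual classes): the symptom matches a Dataset whose constructor hard-codes 416 instead of using the value from --img_size, so the flag never reaches the image loader.

```python
# Hypothetical sketch of forwarding --img_size into the Dataset.
class ListDatasetSketch:
    def __init__(self, paths, img_size=416):
        self.paths = paths
        # The fix: keep the caller's value instead of a hard-coded 416,
        # so a command-line --img_size 608 actually takes effect.
        self.img_size = img_size

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, i):
        # Stand-in for "load image i and letterbox it to img_size".
        return (self.paths[i], self.img_size, self.img_size)

# With the value forwarded, img_size=608 produces 608x608 inputs:
ds = ListDatasetSketch(["000001.jpg"], img_size=608)
```

If the constructor ignores the argument, every image is still resized to the default, which explains the near-zero mAP when evaluating 608-trained anchors on 416 inputs.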
Same here. With pytorch=0.4.1 and default test parameters, class '70' and class '78' got zero AP. Why?
same here!
Did anyone here resolve this issue?
how to resolve this problem?
The repo below tests to about 0.58 mAP on COCO using the original YOLOv3 weights: https://github.com/ultralytics/yolov3
If you run python3 test.py, you should see:
Image Total Precision Recall mAP
5000 5000 0.633 0.598 0.589
mAP Per Class:
person: 0.7397
bicycle: 0.4354
car: 0.4884
motorbike: 0.6372
aeroplane: 0.8263
bus: 0.7101
train: 0.7713
truck: 0.3599
boat: 0.3982
traffic light: 0.4359
fire hydrant: 0.7410
stop sign: 0.7251
parking meter: 0.4293
bench: 0.2846
bird: 0.4764
cat: 0.6460
dog: 0.5972
horse: 0.6855
sheep: 0.4297
cow: 0.4343
elephant: 0.8016
bear: 0.6418
zebra: 0.7726
giraffe: 0.8707
backpack: 0.2034
umbrella: 0.5101
handbag: 0.1676
tie: 0.5130
suitcase: 0.3754
frisbee: 0.6494
skis: 0.4402
snowboard: 0.5657
sports ball: 0.5956
kite: 0.5647
baseball bat: 0.5436
baseball glove: 0.5312
skateboard: 0.7109
surfboard: 0.6562
tennis racket: 0.7707
bottle: 0.3868
wine glass: 0.4738
cup: 0.4165
fork: 0.3319
knife: 0.2303
spoon: 0.2031
bowl: 0.3590
banana: 0.3034
apple: 0.1920
sandwich: 0.3489
orange: 0.2760
broccoli: 0.3100
carrot: 0.1926
hot dog: 0.4404
pizza: 0.5814
donut: 0.4284
cake: 0.4452
chair: 0.3541
sofa: 0.4362
pottedplant: 0.3704
bed: 0.5254
diningtable: 0.3670
toilet: 0.8059
tvmonitor: 0.6290
laptop: 0.6277
mouse: 0.6213
remote: 0.3764
keyboard: 0.5638
cell phone: 0.2963
microwave: 0.5795
oven: 0.4246
toaster: 0.0000
sink: 0.5452
refrigerator: 0.5449
book: 0.1520
clock: 0.6236
vase: 0.4339
scissors: 0.2896
teddy bear: 0.5438
hair drier: 0.0000
toothbrush: 0.2697
The mAP calculation function is wrong in the repo you pointed out; this has been brought up in https://github.com/ultralytics/yolov3/issues/7. It calculates an mAP per image and then averages those values, which can yield an mAP higher than the true value.
@houweidong Yes, I think the repo computes one mAP per image (the average of the APs for all classes present in that image), then averages the 5000 per-image mAPs to get the overall mAP.
What should the correct mAP method be? Maybe I can submit a PR.
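The standard (dataset-wide) procedure, in contrast to the per-image averaging described above, can be sketched as follows. Names here are illustrative: pool all detections of one class across the whole dataset, sort by confidence, mark each as TP or FP, compute that class's AP from the resulting precision-recall curve, and only then average the per-class APs.

```python
# Sketch of dataset-wide per-class AP, then mean over classes.
import numpy as np

def class_ap(scores, is_tp, n_gt):
    """AP for one class from detections pooled over the whole dataset.

    scores: confidence of each detection of this class (all images).
    is_tp:  1 if that detection matched an unclaimed ground truth, else 0.
    n_gt:   total ground-truth boxes of this class in the dataset.
    """
    order = np.argsort(-np.asarray(scores))       # sort by confidence, desc
    tp = np.asarray(is_tp, dtype=float)[order]
    fp = 1.0 - tp
    tp_cum, fp_cum = np.cumsum(tp), np.cumsum(fp)
    recall = tp_cum / max(n_gt, 1)
    precision = tp_cum / np.maximum(tp_cum + fp_cum, 1e-12)
    # All-point interpolated area under the precision-recall curve.
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    for i in range(len(p) - 2, -1, -1):           # make precision monotone
        p[i] = max(p[i], p[i + 1])
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))

def dataset_map(per_class):
    """per_class: {class_id: (scores, is_tp, n_gt)} pooled over all images."""
    return float(np.mean([class_ap(*v) for v in per_class.values()]))
```

The key difference from per-image averaging is that a class with few, hard examples cannot be inflated by easy images: every detection and every ground-truth box of a class competes in one global ranking.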
Hi, this should be resolved in the latest version. You can see the updated measurements in the README.
I got the mAP: 0.5145
> I got the mAP: 0.5145

me too..
Also, training for 70 epochs from pretrained weights only reaches 0.18 mAP :(
> I got the mAP: 0.5145

me too..
metooooo
> I got the mAP: 0.5145

Me too. What on earth is wrong?
I tried python test.py --weights_path weights/yolov3.weights, but got mAP: 0.409.