eriklindernoren / PyTorch-YOLOv3

Minimal PyTorch implementation of YOLOv3
GNU General Public License v3.0
7.34k stars 2.63k forks

Why did my test only achieve mAP: 0.409? #74

Open CS-Jackson opened 6 years ago

CS-Jackson commented 6 years ago

I ran python test.py --weights_path weights/yolov3.weights, but only got mAP: 0.409.

underfitting commented 6 years ago

I got mAP: 0.409 too.

ThatAIGeek commented 6 years ago

same here

ThatAIGeek commented 6 years ago

I believe this commit changed the test accuracy: https://github.com/eriklindernoren/PyTorch-YOLOv3/commit/e9994d6a18f018e2c76985e038b669113aa44468

ThatAIGeek commented 6 years ago

I got mAP 0.4648 with a confidence threshold of 0.2 (which I believe is the default in the original implementation).

AVK636 commented 6 years ago

Same here. How to fix it?

GNAYUOHZ commented 6 years ago

I noticed that the AP of some classes is 0. What is the problem?

WJtomcat commented 6 years ago

Same Here.

lolongcovas commented 6 years ago

Same here. Running

python3.6 test.py --batch_size 10 --n_cpu 8

gave me an mAP of 0.4856844428591997.

92ypli commented 6 years ago

Same here. How to fix it?

shayxurui commented 6 years ago

mAP: 0.40963824871843535. How can I improve the accuracy?

okdimok commented 6 years ago

The metric that is now calculated seems to be the COCO mAP, not mAP_50. In the original tech report, the mAP is 33.0 for YOLOv3 608 × 608 with Darknet-53. In this code, however, images are resized to 416 × 416, and when I set --img_size to 608 I get an mAP of almost zero. I am not sure why.
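For context on why the two metrics differ so much: mAP_50 counts a detection as correct if its IoU with a ground-truth box is at least 0.5, while COCO mAP averages AP over the stricter thresholds 0.50:0.95, so the same set of detections scores much lower. A minimal sketch (the box coordinates here are made-up numbers for illustration):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# A detection that overlaps the ground truth fairly well:
pred, gt = (10, 10, 50, 50), (15, 15, 55, 55)
iou = box_iou(pred, gt)   # ~0.62
print(iou >= 0.50)        # True  -> a hit under mAP_50
print(iou >= 0.75)        # False -> a miss at the stricter COCO thresholds
```

So a model whose boxes are roughly right but not tight will look fine under mAP_50 and noticeably worse under the COCO metric.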

perrywu1989 commented 6 years ago

anyone know how to improve this acc?

Dev2022 commented 6 years ago

Same here...How to fix it? Thanks

1243France commented 6 years ago

The metric that is now calculated seems to be the COCO mAP, not mAP_50. In the original tech report, the mAP is 33.0 for YOLOv3 608 × 608 with Darknet-53. In this code, however, images are resized to 416 × 416, and when I set --img_size to 608 I get an mAP of almost zero. I am not sure why.

Setting --img_size does not work here, because the Dataset/dataloader initialization handles img_size incorrectly; you can fix it yourself.
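If you do patch the dataset yourself, the key invariant is that test-time preprocessing pads the image to a square before resizing it to img_size, so boxes are not distorted and labels can be rescaled consistently. A rough sketch of that step with a hypothetical helper (nearest-neighbor resizing is used here only to keep the example dependency-free):

```python
import numpy as np

def pad_to_square_and_resize(img, img_size):
    """Zero-pad an HxWxC image to a square, then nearest-neighbor
    resize it to (img_size, img_size). Hypothetical helper, not the
    repo's actual function."""
    h, w, c = img.shape
    dim = max(h, w)
    pad_top = (dim - h) // 2
    pad_left = (dim - w) // 2
    canvas = np.zeros((dim, dim, c), dtype=img.dtype)
    canvas[pad_top:pad_top + h, pad_left:pad_left + w] = img
    # Nearest-neighbor sampling grid from the padded square
    ys = np.arange(img_size) * dim // img_size
    xs = np.arange(img_size) * dim // img_size
    return canvas[np.ix_(ys, xs)]

out = pad_to_square_and_resize(np.ones((300, 400, 3)), 608)
print(out.shape)  # (608, 608, 3)
```

Whatever img_size you pass on the command line has to reach this step; if the dataset constructor ignores the argument, the network is fed 416 × 416 tensors regardless of the flag.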

sloan96 commented 5 years ago

Same here. With pytorch=0.4.1 and the default test parameters, classes '70' and '78' got zero AP. Why?

Heath-zyl commented 5 years ago

same here!

normster commented 5 years ago

Did anyone here resolve this issue?

houweidong commented 5 years ago

how to resolve this problem?

fourth-archive commented 5 years ago

The repo below tests to about 0.58 mAP on COCO using the original YOLOv3 weights: https://github.com/ultralytics/yolov3

If you run python3 test.py you should see:

      Image      Total  Precision     Recall        mAP
       5000       5000      0.633      0.598      0.589

mAP Per Class:
         person: 0.7397
        bicycle: 0.4354
            car: 0.4884
      motorbike: 0.6372
      aeroplane: 0.8263
            bus: 0.7101
          train: 0.7713
          truck: 0.3599
           boat: 0.3982
  traffic light: 0.4359
   fire hydrant: 0.7410
      stop sign: 0.7251
  parking meter: 0.4293
          bench: 0.2846
           bird: 0.4764
            cat: 0.6460
            dog: 0.5972
          horse: 0.6855
          sheep: 0.4297
            cow: 0.4343
       elephant: 0.8016
           bear: 0.6418
          zebra: 0.7726
        giraffe: 0.8707
       backpack: 0.2034
       umbrella: 0.5101
        handbag: 0.1676
            tie: 0.5130
       suitcase: 0.3754
        frisbee: 0.6494
           skis: 0.4402
      snowboard: 0.5657
    sports ball: 0.5956
           kite: 0.5647
   baseball bat: 0.5436
 baseball glove: 0.5312
     skateboard: 0.7109
      surfboard: 0.6562
  tennis racket: 0.7707
         bottle: 0.3868
     wine glass: 0.4738
            cup: 0.4165
           fork: 0.3319
          knife: 0.2303
          spoon: 0.2031
           bowl: 0.3590
         banana: 0.3034
          apple: 0.1920
       sandwich: 0.3489
         orange: 0.2760
       broccoli: 0.3100
         carrot: 0.1926
        hot dog: 0.4404
          pizza: 0.5814
          donut: 0.4284
           cake: 0.4452
          chair: 0.3541
           sofa: 0.4362
    pottedplant: 0.3704
            bed: 0.5254
    diningtable: 0.3670
         toilet: 0.8059
      tvmonitor: 0.6290
         laptop: 0.6277
          mouse: 0.6213
         remote: 0.3764
       keyboard: 0.5638
     cell phone: 0.2963
      microwave: 0.5795
           oven: 0.4246
        toaster: 0.0000
           sink: 0.5452
   refrigerator: 0.5449
           book: 0.1520
          clock: 0.6236
           vase: 0.4339
       scissors: 0.2896
     teddy bear: 0.5438
     hair drier: 0.0000
     toothbrush: 0.2697

houweidong commented 5 years ago

The repo below tests to about 0.58 mAP on COCO using the original YOLOv3 weights: https://github.com/ultralytics/yolov3

The mAP calculation in the repo you pointed to is wrong; it was brought up in https://github.com/ultralytics/yolov3/issues/7. It calculates an mAP per image and then averages those values, which can yield an mAP higher than the true value.

fourth-archive commented 5 years ago

@houweidong yes, I think the repo computes one mAP per image (the average of the APs for all classes present in that image), then averages the 5000 per-image values to get the overall mAP.

What should the correct mAP method be? Maybe I can submit a PR.
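For reference, the standard method is to pool detections for each class across the entire validation set, rank them by confidence, and integrate one precision-recall curve per class; mAP is then the mean of those per-class APs. A sketch of the VOC-style "all points" AP (the tp flags are assumed to come from IoU ≥ 0.5 matching against not-yet-claimed ground-truth boxes):

```python
import numpy as np

def average_precision(tp, conf, n_gt):
    """AP for one class, with detections pooled over the whole dataset.
    tp[i] = 1 if detection i matched a ground-truth box, else 0."""
    conf = np.asarray(conf, dtype=float)
    order = np.argsort(-conf)                 # rank detections by confidence
    tp = np.asarray(tp, dtype=float)[order]
    fp = 1.0 - tp
    tp_cum, fp_cum = np.cumsum(tp), np.cumsum(fp)
    recall = tp_cum / n_gt
    precision = tp_cum / (tp_cum + fp_cum)
    # "All points" interpolation: make precision monotonically
    # non-increasing, then sum the area under the PR curve
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([0.0], precision, [0.0]))
    mpre = np.maximum.accumulate(mpre[::-1])[::-1]
    idx = np.where(mrec[1:] != mrec[:-1])[0]
    return float(np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1]))

# 4 pooled detections of one class, 3 ground-truth boxes in the whole set
ap = average_precision(tp=[1, 1, 0, 1], conf=[0.9, 0.8, 0.7, 0.6], n_gt=3)
print(round(ap, 4))  # 0.9167
```

The overall mAP is the mean of these per-class APs; classes with zero ground-truth instances are usually skipped rather than counted as zero. Averaging per image instead inflates the result, because easy images with few classes contribute perfect per-image scores.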

eriklindernoren commented 5 years ago

Hi, this should be resolved in the latest version. You can see the updated measurements in the README.

nanhui69 commented 5 years ago

I got the mAP: 0.5145

falex-ml commented 5 years ago

I got the mAP: 0.5145

me too..

falex-ml commented 5 years ago

Also, training for 70 epochs from the pretrained weights only reaches 0.18 mAP :(

soldier828 commented 4 years ago

I got the mAP: 0.5145

me too..

me too

densechen commented 4 years ago

I got the mAP: 0.5145

me, too. What on earth is wrong?

densechen commented 4 years ago

Refer to here: issue