AlexGrig opened this issue 5 years ago
Hi, thanks a lot for the repo )
I have encountered a problem: testing with different batch sizes gives different mAP. For instance:

- batch_size = 1: mAP = 0.5492690892808874, Class '0' (person) - AP: 0.742873972191351
- batch_size = 2: mAP = 0.5447464805100768, Class '0' (person) - AP: 0.7361206980964793
- batch_size = 8: mAP = 0.5145213961562642, Class '0' (person) - AP: 0.69071605170708
Actually, batch_size = 1 does not currently work, so I made pull request #163.
It seems to be a bug that the batch size influences the testing precision. Also, why are these numbers different from the original YOLOv3 results (and from the numbers in the README) even though the weights are exactly the same?
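One plausible mechanism (an assumption for illustration, not confirmed from this repo's code): if images in a batch are padded or resized to a common size chosen per batch, then batch composition changes each image's effective input resolution, so predictions and mAP drift with batch size. A toy sketch, with a hypothetical `pad_to_batch_max` helper:

```python
def pad_to_batch_max(sizes, batch_size):
    """Return the size each image is effectively processed at, assuming
    every image in a batch is padded to the largest image in that batch."""
    out = []
    for i in range(0, len(sizes), batch_size):
        batch = sizes[i:i + batch_size]
        out.extend([max(batch)] * len(batch))
    return out

sizes = [320, 416, 608, 352]
print(pad_to_batch_max(sizes, 1))  # [320, 416, 608, 352] -- each image at native size
print(pad_to_batch_max(sizes, 2))  # [416, 416, 608, 608]
print(pad_to_batch_max(sizes, 4))  # [608, 608, 608, 608]
```

With batch_size = 1 every image is evaluated at its own size, while larger batches force shared sizes, which would explain mAP changing monotonically with batch size.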
I tried batch_size = 1, 2, 4, 8, 16; however, the mAP was always 0.5145196513137332. I did not see the different results you got for different batch sizes.
I also tried different batch sizes; all gave 0.5145. Why can't I reach the mAP listed in the README?
Hi, you get the same output because of this line in test.py: https://github.com/eriklindernoren/PyTorch-YOLOv3/blob/47b7c912877ca69db35b8af3a38d6522681b3bb3/test.py#L98
The batch size is not propagated from the command line. If you change it to batch_size=opt.batch_size, then you'll probably see what I see.
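A minimal sketch of that fix (the `evaluate` function here is a stand-in; the repo's real call site and signature may differ): the batch size was hard-coded at the call, so the `--batch_size` flag was parsed but never used.

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--batch_size", type=int, default=8)
opt = parser.parse_args([])  # in test.py this parses the real CLI args

def evaluate(batch_size):
    # stand-in for the repo's evaluate(); just reports the value it received
    return batch_size

# before: evaluate(batch_size=8)        -- flag silently ignored
# after:  propagate the parsed flag
result = evaluate(batch_size=opt.batch_size)
print(result)  # 8 (the default; follows --batch_size once propagated)
```

With the hard-coded value, every run effectively used batch_size = 8, which is why different `--batch_size` values all reproduced the same 0.5145 mAP.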
A fix is available in issue https://github.com/eriklindernoren/PyTorch-YOLOv3/issues/243
Setting batch_size = 1 introduces a new problem; the solution is listed here: https://github.com/eriklindernoren/PyTorch-YOLOv3/pull/163/commits/83c3d8b2f440f78715fb674b3d318c63ffe3eb16
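A hypothetical illustration of a common batch_size = 1 pitfall (an assumption about the kind of bug the linked commit fixes, not a quote of it): squeezing the model output drops the batch axis when the batch size is 1, so downstream code that expects to loop over images loops over predictions instead.

```python
import numpy as np

out_b4 = np.zeros((4, 100, 85))  # (batch, predictions, attributes)
out_b1 = np.zeros((1, 100, 85))

print(out_b4.squeeze().shape)  # (4, 100, 85) -- unchanged
print(out_b1.squeeze().shape)  # (100, 85)   -- batch axis silently gone

# Safer: iterate over the batch axis explicitly instead of squeezing,
# so a batch of one behaves like any other batch.
for image_preds in out_b1:
    assert image_preds.shape == (100, 85)  # one iteration per image
```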
@AlexGrig hi, have you found the reason for this phenomenon? I have the same problem.