eriklindernoren / PyTorch-YOLOv3

Minimal PyTorch implementation of YOLOv3
GNU General Public License v3.0

I'm using yolov3.weights to test data/sample/; I found some wrong bboxes that differ from this project's results #821

Closed J-LINC closed 1 year ago

J-LINC commented 1 year ago

I am testing on data/sample using the weights trained on the COCO dataset, but the results I get are somewhat different from those shown by the author on the homepage. Why is that?

Screenshot 2023-03-29 12-18-00
Screenshot 2023-03-29 12-18-22
Screenshot 2023-03-29 12-18-38

Flova commented 1 year ago

What weights did you use? Did you train from scratch or did you use the official darknet yolov3 ones?

Flova commented 1 year ago

Also make sure that the image size is 608 and not 416 (default).

J-LINC commented 1 year ago

What weights did you use? Did you train from scratch or did you use the official darknet yolov3 ones?

I didn't train it myself; I'm using the official pretrained weights, yolov3.weights.

J-LINC commented 1 year ago

Also make sure that the image size is 608 and not 416 (default).

I was indeed using 416×416, and I'd like to know whether a model trained at 416×416 will predict better on 608×608 images, as suggested here.
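Since YOLOv3 is fully convolutional, the inference resolution can be changed independently of the training setup, as long as the image is padded to a square and resized to a multiple of the network stride. A minimal NumPy sketch of that preprocessing, assuming the common pad-then-resize approach (function names here are illustrative, not necessarily the repo's actual code):

```python
import numpy as np

def pad_to_square(img):
    """Pad the shorter side of an HxWxC image with zeros to make it square."""
    h, w = img.shape[:2]
    diff = abs(h - w)
    p1, p2 = diff // 2, diff - diff // 2
    if h <= w:
        pad = ((p1, p2), (0, 0), (0, 0))  # pad rows (top/bottom)
    else:
        pad = ((0, 0), (p1, p2), (0, 0))  # pad columns (left/right)
    return np.pad(img, pad, mode="constant")

def resize(img, size):
    """Nearest-neighbor resize to size x size via index mapping (no extra deps)."""
    h, w = img.shape[:2]
    ys = np.arange(size) * h // size
    xs = np.arange(size) * w // size
    return img[ys][:, xs]

# A 480x640 image becomes a 608x608 network input:
img = np.zeros((480, 640, 3), dtype=np.uint8)
out = resize(pad_to_square(img), 608)
print(out.shape)  # → (608, 608, 3)
```

Running at 608 gives the detector more effective resolution per grid cell, which typically helps on small objects at some cost in speed.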

Flova commented 1 year ago

I think the full-size v3 was trained at 608, IIRC.

J-LINC commented 1 year ago

I think the full-size v3 was trained at 608, IIRC.

I tried it and the results improved, but there are still some issues. Do you think this is normal?

Screenshot 2023-03-29 19-22-21

Screenshot 2023-03-29 19-22-38

Flova commented 1 year ago

I think the errors are at the level I would expect from the v3 model. But I'm wondering why the boxes are slightly different now. It could be a numerical difference due to newer library versions etc., as I can't think of any significant changes to this part of the code. I also just evaluated the weights on COCO and got an mAP of 0.57653, which is slightly better than the value in the README.
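For context on the mAP figure above: mAP averages per-class average precision, where each class's AP is the area under its precision-recall curve built from confidence-sorted detections. A minimal sketch of per-class AP with all-point interpolation (names and the toy data are illustrative, not the repo's evaluation code):

```python
import numpy as np

def average_precision(scores, is_tp, n_gt):
    """AP for one class: scores are detection confidences, is_tp marks
    true positives (IoU-matched to ground truth), n_gt is the number
    of ground-truth boxes for that class."""
    order = np.argsort(-np.asarray(scores))       # sort by confidence, descending
    tp = np.asarray(is_tp, dtype=float)[order]
    fp = 1.0 - tp
    tp_cum, fp_cum = np.cumsum(tp), np.cumsum(fp)
    recall = tp_cum / n_gt
    precision = tp_cum / (tp_cum + fp_cum)
    # Append sentinels and make precision monotonically decreasing from the right
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # Integrate the precision-recall curve where recall changes
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))

# Toy example: 3 detections (2 correct), 2 ground-truth boxes
ap = average_precision([0.9, 0.8, 0.7], [1, 0, 1], n_gt=2)
print(round(ap, 3))  # → 0.833
```

Small run-to-run differences in boxes (e.g. from library-version numerics) shift the PR curve slightly, which is consistent with an mAP a bit above or below the README's value.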

J-LINC commented 1 year ago

I think the errors are at the level I would expect from the v3 model. But I'm wondering why the boxes are slightly different now. It could be a numerical difference due to newer library versions etc., as I can't think of any significant changes to this part of the code. I also just evaluated the weights on COCO and got an mAP of 0.57653, which is slightly better than the value in the README.

Okay, thank you for your prompt response!