Closed: HeinZingerRyo closed this 1 day ago
I've converted the COCO-format annotations to YOLO format and put them in my_coco_dir/labels; the evaluation results with the YOLO labels are as follows:
They differ slightly from the results obtained with the COCO JSON annotations, but are still about 4% mAP lower than the reported 66.2% mAP.
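For reference, the COCO-to-YOLO label conversion I used is essentially the following sketch (function name and paths are mine, not from this repo; it assumes the standard COCO JSON layout, where boxes are [x_min, y_min, w, h] in pixels, while YOLO labels want normalized center coordinates):

```python
import json
from pathlib import Path

def coco_to_yolo(ann_file, out_dir):
    """Convert COCO-format bbox annotations to YOLO txt labels.

    COCO boxes are [x_min, y_min, w, h] in pixels; each YOLO label line is
    "class x_center y_center w h", all normalized to [0, 1].
    """
    data = json.loads(Path(ann_file).read_text())
    images = {img["id"]: img for img in data["images"]}
    # Remap COCO category ids (non-contiguous, 1-based) to 0-based indices.
    cat_ids = sorted(c["id"] for c in data["categories"])
    cat_map = {cid: i for i, cid in enumerate(cat_ids)}

    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for ann in data["annotations"]:
        img = images[ann["image_id"]]
        W, H = img["width"], img["height"]
        x, y, w, h = ann["bbox"]
        line = "{} {:.6f} {:.6f} {:.6f} {:.6f}\n".format(
            cat_map[ann["category_id"]],
            (x + w / 2) / W, (y + h / 2) / H, w / W, h / H)
        # One .txt per image, named after the image file, appended per box.
        label = out / (Path(img["file_name"]).stem + ".txt")
        with label.open("a") as f:
            f.write(line)
```

Note the category remapping: small mAP discrepancies between JSON- and txt-based evaluation often come from mismatched class indices, since COCO's 80 categories have non-contiguous ids up to 90.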
Sorry, I've got it now — I forgot to check the model size. It seems this result corresponds to the 23.1M-parameter model in Table 1. Could you kindly provide the pretrained weights for the larger models, if available?
Hi, I've downloaded your pretrained model weights 'best.pt' and ran test.py on the COCO 2017 validation set, but the performance differs somewhat from the value reported in your paper: mAP at IoU=0.50 is only 62.8%.
To reproduce: git clone this repo and set up the environment; download and extract the COCO 2017 validation set; generate val2017.txt listing all 5000 validation images; modify only the file paths in the code, e.g. model = YOLO('/path/to/weight/.pt'), and run test.py.
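The val2017.txt generation step above can be sketched as follows (function name is mine; it assumes the extracted images sit directly in the given directory as .jpg files, which is how COCO val2017 ships):

```python
from pathlib import Path

def write_image_list(images_dir, out_file):
    """Write one image path per line, the list format YOLO-style loaders expect."""
    paths = sorted(str(p) for p in Path(images_dir).glob("*.jpg"))
    Path(out_file).write_text("\n".join(paths) + "\n")
    return len(paths)
```

For COCO val2017 the returned count should be 5000; checking it is a quick way to confirm the extraction was complete.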
Here are the logs for this run: