THU-MIG / yolov10

YOLOv10: Real-Time End-to-End Object Detection
https://arxiv.org/abs/2405.14458
GNU Affero General Public License v3.0

Performance alignment for YOLOv10-S #223

Open cdy-for-grad opened 1 month ago

cdy-for-grad commented 1 month ago

Hi, this is a great project in the YOLO series. I tried the training setup on the COCO dataset; the result is below. The performance is 0.45365 (mAP50-95), but the published performance for this project is 46.3. How can I reach the reported performance?

The command we use:

    yolo detect train data=coco.yaml model=yolov10s.yaml epochs=500 batch=256 imgsz=640 device=0,1,2,3,4,5,6,7 --cache
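For reference, a roughly equivalent run through the Ultralytics Python API could look like the sketch below. This is an illustration rather than the repository's official recipe; it assumes the fork exposes YOLOv10-S through the usual YOLO class and the standard model.train() arguments.

    # Sketch: the same 8-GPU training run via the Ultralytics Python API
    from ultralytics import YOLO

    model = YOLO("yolov10s.yaml")  # build YOLOv10-S from its model config
    model.train(
        data="coco.yaml",                  # COCO dataset definition
        epochs=500,
        batch=256,
        imgsz=640,
        device=[0, 1, 2, 3, 4, 5, 6, 7],   # multi-GPU DDP training
        cache=True,                        # mirrors the --cache flag above
    )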

The result image is attached.

We also retrained the model without the cache option:

    yolo detect train data=coco.yaml model=yolov10s.yaml epochs=500 batch=256 imgsz=640 device=0,1,2,3,4,5,6,7

The result is similar to the cached run, and both look worse than the performance in your paper. Is there any other trick? (result image attached)

Thanks !

jameslahm commented 1 month ago

Thanks for your interest! There is no other trick. We previously released our training logs here. Could you please take a look at the differences between your training logs and ours? Also, did you make any changes to the codebase? Thanks.

cdy-for-grad commented 1 month ago

Thank you for your reply! First, I did not apply any changes to your codebase; I simply specified the COCO dataset and trained the model. Here is my training log:
YOLOV10s.csv

jameslahm commented 1 month ago

Thanks! Did you configure the environment according to requirements.txt? Also, would you mind sharing your checkpoint with us? Thanks.

cdy-for-grad commented 1 month ago

Yes, I installed the environment following the instructions in your project. The best checkpoint is YOLOV10_best.zip.

Thank you

jameslahm commented 1 month ago

Thanks! We noticed the following:

  1. There is a warning when evaluating your checkpoint, as below.
    WARNING ⚠️ YOLOV10_best.pt appears to require 'dill', which is not in ultralytics requirements.
    AutoInstall will run now for 'dill' but this feature will be removed in the future.
    Recommend fixes are to train a new model using the latest 'ultralytics' package or to run a command with an official YOLOv8 model, i.e. 'yolo predict model=yolov8n.pt'
    requirements: Ultralytics requirement ['dill'] not found, attempting AutoUpdate...

    Is there any difference between your codebase and this codebase?

  2. The APval result is 45.8 in our environment, as below.
    Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.458
    Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.627
    Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.498
    Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.271
    Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.505
    Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.623
    Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.360
    Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.603
    Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.672
    Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.479
    Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.728
    Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.833

    Is your code the latest version, or do you use the COCO API for evaluation?

Thanks.

cdy-for-grad commented 1 month ago

The validation process and its result are shown automatically after training; I did not add any extra steps. The dill package is included in my Docker image. Could it affect the final performance?

jameslahm commented 1 month ago

Thanks! The dill package should have no impact on the performance. Does your evaluation using the COCO API show an AP of 45.4?

cdy-for-grad commented 1 month ago

How can I use the COCO API to get the final performance? Should I use this?

    yolo val model=jameslahm/yolov10{n/s/m/b/l/x} data=coco.yaml batch=256

If so, I am wondering why the validation run after training is not aligned with the COCO API evaluation result.
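For anyone hitting the same gap: the built-in validator computes mAP with its own implementation, so its number can differ slightly from the official COCO API score (45.4 vs. 45.8 in this thread). Below is a minimal sketch of the COCO-API path, assuming validation was run with save_json=True so that a predictions.json file was written; the file paths are illustrative, not fixed by the repository.

    # Sketch: score the predictions.json from `yolo val ... save_json=True`
    # with the official COCO API (pycocotools).
    from pycocotools.coco import COCO
    from pycocotools.cocoeval import COCOeval

    anno = COCO("annotations/instances_val2017.json")        # ground truth
    pred = anno.loadRes("runs/detect/val/predictions.json")  # detections

    evaluator = COCOeval(anno, pred, iouType="bbox")
    evaluator.evaluate()
    evaluator.accumulate()
    evaluator.summarize()  # prints the AP/AR table; AP@[0.50:0.95] is first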