Open cdy-for-grad opened 1 month ago
Thanks for your interest! There is no other trick. We previously released our training logs here. Could you please take a look at the differences between your training log and ours? Also, did you make any changes to the codebase? Thanks.
Thank you for your reply! First, I did not make any changes to your codebase; I just specified the COCO dataset and trained the model. Here is my training log:
YOLOV10s.csv
Thanks! Did you configure the environment according to requirements.txt? Also, would you mind sharing your checkpoint with us? Thanks.
Yes, I installed the environment following the instructions in your project. The best checkpoint is YOLOV10_best.zip.
Thank you
Thanks! We notice that:
WARNING ⚠️ YOLOV10_best.pt appears to require 'dill', which is not in ultralytics requirements.
AutoInstall will run now for 'dill' but this feature will be removed in the future.
Recommend fixes are to train a new model using the latest 'ultralytics' package or to run a command with an official YOLOv8 model, i.e. 'yolo predict model=yolov8n.pt'
requirements: Ultralytics requirement ['dill'] not found, attempting AutoUpdate...
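As a side note, a checkpoint pickled with dill can usually be re-saved with the standard pickler so that stock ultralytics loads it without the AutoInstall step. A minimal sketch, assuming torch and dill are available (the file names and helper are placeholders, not part of either codebase):

```python
# Sketch (assumes torch and dill are installed): load a checkpoint that
# was pickled with dill, then re-save it with the default pickler.
import dill
import torch

def resave_without_dill(src: str, dst: str):
    # weights_only=False is needed when passing a custom pickle_module
    ckpt = torch.load(src, pickle_module=dill, map_location="cpu",
                      weights_only=False)
    torch.save(ckpt, dst)  # default pickler; dill no longer needed to load
    return ckpt

# e.g. resave_without_dill("YOLOV10_best.pt", "YOLOV10_best_clean.pt")
```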
Is there any difference between your codebase and this codebase?
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.458
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.627
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.498
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.271
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.505
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.623
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.360
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.603
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.672
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.479
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.728
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.833
Is your code the latest version, and do you use the COCO API for evaluation?
Thanks.
The validation process and its results are shown automatically after training; I did not add any extra steps. The dill package is included in our Docker image. Could it affect the final performance?
Thanks! The dill package should have no impact on performance. Does your evaluation with the COCO API show an AP of 45.4?
How can I use the COCO API to get the final performance? Like this?
yolo val model=jameslahm/yolov10{n/s/m/b/l/x} data=coco.yaml batch=256
If so, I am wondering why the validation run after training is not aligned with the COCO API evaluation result.
Hi, this is a great project in the YOLO series. I tried the training settings with the COCO dataset; the result is below. The performance is 0.45365 mAP50-95, while the published performance in this project is 46.3. How can I reach the published performance?
The script we use: yolo detect train data=coco.yaml model=yolov10s.yaml epochs=500 batch=256 imgsz=640 device=0,1,2,3,4,5,6,7 --cache
The result: ![image](https://github.com/THU-MIG/yolov10/assets/140789053/3da4ea70-f34f-4d3f-bd4f-ff695b411b40)
We also retrained the model without the cache option:
yolo detect train data=coco.yaml model=yolov10s.yaml epochs=500 batch=256 imgsz=640 device=0,1,2,3,4,5,6,7
The result is similar to the cached run, and both look worse than the performance in your paper. Is there any other trick? ![image](https://github.com/THU-MIG/yolov10/assets/140789053/4a99a28a-a4cd-45df-abff-a47076a92c32)
Thanks!
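On the small mismatch between the trainer's built-in mAP and the COCO API number: both average precision over recall and IoU thresholds, but implementations differ in the interpolation step, which is one commonly cited source of few-tenths-of-a-point discrepancies. A hedged sketch of the COCO-style 101-point interpolated AP (illustrative only, not the exact ultralytics or pycocotools code):

```python
import numpy as np

def coco_interpolated_ap(recall, precision):
    """101-point interpolated AP, as in the official COCO evaluator."""
    recall = np.asarray(recall, dtype=float)
    precision = np.asarray(precision, dtype=float)
    rec_thrs = np.linspace(0.0, 1.0, 101)  # COCO's fixed recall grid
    # precision envelope: make precision non-increasing along recall
    prec = np.maximum.accumulate(precision[::-1])[::-1]
    # at each recall threshold, take precision at the first recall >= it
    inds = np.searchsorted(recall, rec_thrs, side="left")
    q = np.zeros_like(rec_thrs)
    valid = inds < len(prec)
    q[valid] = prec[inds[valid]]
    return q.mean()

# a perfect detector: precision 1.0 at every recall level -> AP = 1.0
r = np.linspace(0.0, 1.0, 11)
print(coco_interpolated_ap(r, np.ones(11)))
```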