Open · SMDR1412 opened 2 months ago
👋 Hello @SMDR1412, thank you for your interest in Ultralytics 🚀! We understand the curiosity surrounding different model performances.
For a detailed analysis of your issue, it's helpful to provide more context. If this is a 🐛 Bug Report, please share a minimum reproducible example that clearly demonstrates the issue you're encountering.
If this involves custom training or model-specific ❓ questions, please provide detailed information such as datasets, configurations, and any training logs you have. Ensure you're following our Tips for Best Training Results.
Join our community for more interaction:
Ensure your `ultralytics` package and dependencies are up-to-date to eliminate version-related issues:

```bash
pip install -U ultralytics
```
Verify your setup in one of the following environments, each preloaded with necessary dependencies:
Check the CI status to ensure core functionalities operate correctly:
An Ultralytics engineer will review and assist you soon. Thank you for your patience!
Hey @SMDR1412, can I have your `yolo checks` output, please?
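For reference, a minimal way to produce that report from Python, assuming a standard `ultralytics` install (the CLI equivalent is `yolo checks`):

```python
# Print the environment report: OS, Python, torch/CUDA, and ultralytics versions.
import ultralytics

ultralytics.checks()
```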
@SMDR1412 Speed really depends on the architecture design and the device, not just the parameter and FLOP counts. :) And yes, YOLO11n is slightly slower than YOLOv8n in our tests as well; however, it comes with a +2.1 mAP improvement and a better speed-accuracy trade-off, so we're sticking with the design. :)
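For anyone who wants to reproduce this comparison on their own hardware, here is a rough latency sketch (the dummy input, warm-up count, and iteration count are illustrative choices, not an official benchmarking protocol):

```python
# Rough latency comparison: time raw predict() calls for both nano models.
import time

import numpy as np
from ultralytics import YOLO

image = np.zeros((640, 640, 3), dtype=np.uint8)  # dummy 640x640 frame

for weights in ("yolov8n.pt", "yolo11n.pt"):
    model = YOLO(weights)  # weights auto-download on first use
    for _ in range(5):  # warm-up: first calls pay one-time setup costs
        model.predict(image, verbose=False)
    start = time.perf_counter()
    for _ in range(50):
        model.predict(image, verbose=False)
    elapsed_ms = (time.perf_counter() - start) / 50 * 1000
    print(f"{weights}: {elapsed_ms:.1f} ms/image")
```

Results will vary with device (CPU vs. GPU), batch size, and export format, which is why parameter and FLOP counts alone don't predict the ranking.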
Search before asking
Question
I compared YOLOv8 and YOLO11 on a fire detection dataset and found that YOLO11n took longer than YOLOv8n for both training and inference. In theory, YOLO11n should be faster since it has fewer parameters and lower FLOPs, but in practice this was not the case. Why is this happening?
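To make the comparison concrete, the reported parameter and FLOP counts can be printed for each model (a small sketch using the `model.info()` helper; fewer parameters and FLOPs reduce raw compute, but memory access patterns, layer types, and per-device kernel efficiency also drive real latency):

```python
# Print layer, parameter, and GFLOP summaries for both nano models.
from ultralytics import YOLO

for weights in ("yolov8n.pt", "yolo11n.pt"):
    YOLO(weights).info()  # e.g. "... layers, ... parameters, ... GFLOPs"
```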
Additional