Closed andualemw1 closed 3 weeks ago
👋 Hello @andualemw1, thank you for your interest in YOLOv5 🚀! An Ultralytics engineer will assist you soon.
To get started with cropping images while retaining annotations, you might find our ⭐️ Tutorials helpful. You can explore guides for tasks such as Custom Data Training, where managing image sizes and annotations is discussed.
If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.
For custom training questions, provide as much information as possible, including dataset image examples and training logs. Verify you are following our Tips for Best Training Results.
Ensure you have Python>=3.8.0 with all requirements.txt installed, including PyTorch>=1.8. To get started, run:
git clone https://github.com/ultralytics/yolov5 # clone
cd yolov5
pip install -r requirements.txt # install
YOLOv5 can be run in a number of verified environments. All YOLOv5 GitHub Actions Continuous Integration (CI) tests verify correct operation of YOLOv5 training, validation, inference, export, and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
Explore our latest object detection model, YOLOv8 🚀! Designed for speed and accuracy, perfect for a wide range of tasks. Discover more in our YOLOv8 Docs and get started with:
pip install ultralytics
Feel free to provide further details to help us address your question!
@andualemw1 to crop your images from 1280x1280 to 640x640 while maintaining correct YOLO annotations, you'll need to adjust the annotations to match the new image dimensions. This involves recalculating the bounding box coordinates based on the new crop position. You can automate this process using a script to update the annotations accordingly. If you need further guidance, consider checking out image processing libraries like OpenCV or PIL for assistance with cropping and recalculating coordinates.
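As a sketch of that recalculation, the function below remaps one normalized YOLO box from the 1280x1280 source image into a 640x640 crop window (the function name, the `crop_xy` offset parameter, and the default sizes are illustrative assumptions, not part of YOLOv5 itself):

```python
def remap_yolo_box(box, src_size=1280, crop_xy=(0, 0), crop_size=640):
    """Remap a normalized YOLO box (cx, cy, w, h) from the source image
    into a square crop window; returns None if the box misses the crop."""
    cx, cy, w, h = box
    # normalized center/size -> absolute pixel corners in the source image
    x1, y1 = (cx - w / 2) * src_size, (cy - h / 2) * src_size
    x2, y2 = (cx + w / 2) * src_size, (cy + h / 2) * src_size
    # shift into crop coordinates and clip to the crop window
    ox, oy = crop_xy
    x1, x2 = max(x1 - ox, 0.0), min(x2 - ox, float(crop_size))
    y1, y2 = max(y1 - oy, 0.0), min(y2 - oy, float(crop_size))
    if x1 >= x2 or y1 >= y2:
        return None  # box lies entirely outside this crop
    # absolute corners -> normalized YOLO format relative to the crop
    return ((x1 + x2) / 2 / crop_size, (y1 + y2) / 2 / crop_size,
            (x2 - x1) / crop_size, (y2 - y1) / crop_size)
```

The pixel cropping itself (e.g. with OpenCV array slicing or PIL's `Image.crop`) happens separately; this only rewrites the label line. A box straddling the crop edge is clipped to the window, and boxes that fall fully outside are dropped from the new label file.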
Thank you so much for your reply and support; this has been my quest for quite a while. I will do accordingly.
You're welcome! If you have any more questions as you proceed, feel free to ask.
Hello, sorry for coming back again. I am always confused about how to set up YOLOv5 benchmarking. Is it correct to use a confidence threshold of 0.25 on the test dataset for both a custom model and the standard YOLO model?
python val.py --data
thank you in advance!
Yes, your approach to benchmarking with `val.py` and a confidence threshold of 0.25 is correct. The `--conf-thres 0.25` flag sets the minimum confidence for detections, which is standard for YOLOv5 evaluation. Ensure you're using the same command and thresholds consistently across both custom and standard models for fair comparisons. You can refer to the YOLOv5 documentation for additional details. Let me know if you have further questions!
Thanks for your timely response! YOLOv5 has default values for the confidence threshold (conf-thres) and Intersection over Union (IoU) threshold during validation, typically conf-thres=0.001 and IoU=0.6, for a thorough evaluation of the val dataset. However, to optimize performance and evaluate your model effectively, it is often recommended to fine-tune the confidence threshold based on the F1 score.

As noted in the YOLOv5 discussions, you can determine an optimal conf-thres value for your specific dataset yourself. By analyzing the F1 score, which balances precision and recall, you can select a confidence threshold that maximizes your model's accuracy while minimizing false positives and false negatives.
You're absolutely right! The confidence threshold (`conf-thres`) can significantly impact model performance, and fine-tuning it based on the F1 score is a great approach. As YOLOv5 defaults to `conf-thres=0.001` during validation, adjusting this for evaluation or testing is often necessary. Running validation with varied thresholds and analyzing the F1 score curve is a practical way to identify the optimal value for your specific dataset. For more information, you can refer to the Validation Guide. Let us know if you need further assistance!
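That sweep can be sketched as follows. The precision/recall pairs here are made-up placeholders for illustration; in practice you would read them from separate `val.py` runs, one per threshold:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# hypothetical (precision, recall) measured at several conf-thres values
sweep = {0.001: (0.60, 0.95), 0.25: (0.80, 0.85), 0.50: (0.90, 0.60)}

# pick the threshold whose F1 score is highest
best_thres = max(sweep, key=lambda t: f1_score(*sweep[t]))
print(best_thres)
```

With these illustrative numbers the middle threshold wins, since the extremes trade too much precision or recall away; your own curve will differ.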
Search before asking
Question
As we know, YOLO supports both square and rectangular images. However, for speed and dataset-size considerations, I want to crop an image from 1280x1280 to 640x640. YOLO annotations/labels are originally created based on the image's width and height. How can I bridge the gap in training the dataset before and after cropping the images while keeping the annotations valid?
thanks in advance!
Additional
No response