Closed Hank-lu-88 closed 6 months ago
👋 Hello @Hank-lu-88, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.
If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.
If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.
Python>=3.8.0 with all requirements.txt installed including PyTorch>=1.8. To get started:
git clone https://github.com/ultralytics/yolov5 # clone
cd yolov5
pip install -r requirements.txt # install
YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
We're excited to announce the launch of our latest state-of-the-art (SOTA) object detection model for 2023 - YOLOv8 🚀!
Designed to be fast, accurate, and easy to use, YOLOv8 is an ideal choice for a wide range of object detection, image segmentation and image classification tasks. With YOLOv8, you'll be able to quickly and accurately detect objects in real-time, streamline your workflows, and achieve new levels of accuracy in your projects.
Check out our YOLOv8 Docs for details and get started with:
pip install ultralytics
@Hank-lu-88 hello there! 👋
Training YOLOv5 on such small images (32x24 pixels) is indeed challenging because of the limited detail each image can convey. YOLOv5 models typically perform best with larger inputs (e.g., 640x640 pixels): the architecture is designed to detect objects across multiple scales, but it relies on sufficient resolution to extract features accurately. Note also that input dimensions are expected to be multiples of the model stride (32), which 24 is not.
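To see why 32x24 runs into trouble, here is a minimal sketch of the stride-rounding logic YOLOv5 applies to --img-size (the repo does this in check_img_size in utils/general.py; the helper below is a simplified stand-in, not the actual implementation):

```python
import math

# Simplified stand-in for YOLOv5's image-size check: sizes are rounded up
# to the nearest multiple of the model stride (32 for YOLOv5).
def make_divisible(x, divisor=32):
    return int(math.ceil(x / divisor) * divisor)

print(make_divisible(24))   # 32  -> 24 px is below even one stride cell
print(make_divisible(640))  # 640 -> already a multiple of 32, unchanged
```

So a 24-pixel dimension gets padded/rounded to 32, meaning the entire image fits inside a single stride cell of the coarsest detection head.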
One approach is to upscale your thermal images to a larger size before training, keeping in mind that this may introduce some artifacts. Additionally, setting --img-size to a larger dimension that is divisible by 32 (e.g., --img-size 640) might help. This won't change the native resolution of your thermal images, but it provides the model with a resized input closer to its optimal operating range. Here's the adjusted command, considering the upscaling:
python train.py --weights v5lite-s.pt --cfg models/v5Lite-s.yaml --img-size 640 640 --rect --batch-size 16 --data data/mydata.yaml --device 0 --epochs 300
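For the offline upscaling step, something like the following would work. This is a hedged sketch using plain nearest-neighbor interpolation on a raw 2D frame (in practice you would likely use cv2.resize or PIL on your saved images; the function and the fake frame below are illustrative, not part of YOLOv5):

```python
# Illustrative nearest-neighbor upscaling of a 32x24 thermal frame to
# 640x480 before training. Assumes the frame is a plain 2D list of values.
def upscale_nearest(frame, new_w, new_h):
    old_h, old_w = len(frame), len(frame[0])
    return [
        [frame[y * old_h // new_h][x * old_w // new_w] for x in range(new_w)]
        for y in range(new_h)
    ]

small = [[x + y for x in range(32)] for y in range(24)]  # fake 32x24 frame
big = upscale_nearest(small, 640, 480)
print(len(big), len(big[0]))  # 480 640
```

Remember to scale your label coordinates consistently if they are stored in absolute pixels; normalized YOLO-format labels need no change when the aspect ratio is preserved.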
Remember, upscaling and adjusting --img-size require balancing model performance against computational cost. You may need to experiment with different sizes to find the sweet spot for your specific use case.
If performance issues persist, consider exploring custom model adjustments or consulting with our team for further guidance by checking the Ultralytics Docs.
Keep up the great work, and we're here to support every step of the way in your YOLOv5 journey!
👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.
For additional resources and information, please see the links below:
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!
Thank you for your contributions to YOLO 🚀 and Vision AI ⭐
Search before asking
Question
Hello: My project is a thermal-image detector for detecting people. I use YOLOv5s to train the model. My camera outputs 32x24 pixel thermal images, so I use 32x24 pictures for training. Here is my command:
python train.py --weights v5lite-s.pt --cfg models/v5Lite-s.yaml --img-size 32 24 --rect --batch-size 64 --data data/mydata.yaml --device 0 --epochs 300
but the model produces weird results.
The picture below shows my labels; I expected the detections to look like that.
I'm wondering if YOLOv5 has a minimum image size limit. Please help me, thank you.
Additional
No response