Open ceegartner opened 1 month ago
👋 Hello @ceegartner, thank you for your interest in Ultralytics YOLOv8 🚀! We recommend a visit to the Docs for new users where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.
If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.
If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.
Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.
Pip install the ultralytics package, including all requirements, in a Python>=3.8 environment with PyTorch>=1.8:

pip install ultralytics
YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
Classification uses CenterCrop
Search before asking
Question
Hi!
I am training a YOLOv8 model for classification; the aim is to detect whether a title block (from an architectural plan) is rotated. I have 3 classes: 0, 90, and 270 (the possible rotation angles). Most of my input images are much wider than they are tall. In my experience, YOLO automatically resizes images so that the long side matches imgsz, keeping the aspect ratio. However, I suspect YOLOv8 works differently for classification models.
When I looked at the val_batch_labels file in the output folder, it seemed like the images were resized so that the shorter side matches imgsz, and my images appeared cropped.
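In other words, what I suspect is happening is something like the following rough sketch (the function name is mine, just to illustrate the effect; the actual Ultralytics transform may differ in details such as interpolation):

```python
import numpy as np

def short_side_resize_then_center_crop(img: np.ndarray, size: int) -> np.ndarray:
    """Suspected classification preprocessing: scale so the SHORT side
    equals `size`, then center-crop a `size` x `size` square.
    Nearest-neighbour resampling via index sampling keeps this dependency-free."""
    h, w = img.shape[:2]
    scale = size / min(h, w)
    nh, nw = round(h * scale), round(w * scale)
    rows = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = img[rows][:, cols]
    top, left = (nh - size) // 2, (nw - size) // 2
    return resized[top:top + size, left:left + size]

# A 640x2560 "wide title block": after this transform only the middle
# quarter of the width survives, so most of the title block is cut away.
wide = np.zeros((640, 2560, 3), dtype=np.uint8)
out = short_side_resize_then_center_crop(wide, 640)
print(out.shape)  # (640, 640, 3)
```

If that is what the classification pipeline does, it would explain why my very wide images lose their left and right edges.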
So I tried resizing the images myself, adding black padding so that the longer side matches imgsz, and I trained the model again (same parameters as before). It turns out that the results are now way better!
And when I look at the val_batch_labels file, the images no longer appear cropped.
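For reference, the manual padding step I applied can be sketched like this (a minimal version of what I did; the helper name is mine):

```python
import numpy as np

def pad_to_square(img: np.ndarray, fill: int = 0) -> np.ndarray:
    """Pad an HxWxC image with a constant (black by default) border so it
    becomes square, keeping the content centered and the aspect ratio intact."""
    h, w = img.shape[:2]
    side = max(h, w)
    canvas = np.full((side, side, img.shape[2]), fill, dtype=img.dtype)
    top, left = (side - h) // 2, (side - w) // 2
    canvas[top:top + h, left:left + w] = img
    return canvas

# A 160x640 wide strip becomes a 640x640 square with black bands above
# and below, so a later resize/crop to imgsz keeps the full title block.
strip = np.ones((160, 640, 3), dtype=np.uint8)
square = pad_to_square(strip)
print(square.shape)  # (640, 640, 3)
```

After padding, every image in the dataset is square, so any subsequent resize or center crop no longer discards content.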
Is this a bug in YOLOv8? Or is it supposed to work like this and I am missing something?
I used the yolov8n-cls.pt model, with these lines to train:

model = YOLO("yolov8n-cls.pt")
model.train(data=R'....../data/resized', epochs=100, imgsz=640)
I am using ultralytics==8.1.23
Thanks !
Additional
No response