Closed KZLZG closed 8 months ago
👋 Hello @KZLZG, thank you for your interest in Ultralytics YOLOv8 🚀! We recommend a visit to the Docs for new users where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.
If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.
If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.
Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.
Pip install the `ultralytics` package including all requirements in a Python>=3.8 environment with PyTorch>=1.8:
pip install ultralytics
YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
@KZLZG hello! The error you're encountering, where `stack` expects tensors of equal size, suggests that your augmentation pipeline is producing images of different sizes, likely because `RandomCrop` is set with `always_apply=True`. When you're using `RandomCrop`, it's essential that all images are resized to a consistent size either before or after cropping, so that subsequent batch formation can stack them uniformly.
Looking at the code, your augmentation might conflict with the reshaping done by `LetterBox`. If you're applying a random crop directly, you need to make sure that the `pre_transform` (here, `LetterBox`) accommodates your new augmentation, or that you resize images to the same shape after the crop operation.
When integrating custom augmentations with fixed output sizes, you also need to consider their impact on the aspect ratio of bounding boxes for object detection. If you need further help with specific implementations, please see Ultralytics Docs on custom augmentations or reach out with additional context.
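To make the advice concrete, here is a pure-Python, shape-level sketch (the helper names `crop_shape` and `resize_shape` are hypothetical, not Albumentations or Ultralytics APIs) of why a fixed-size crop alone can still yield mixed shapes in a batch, while a follow-up resize to a common size restores uniformity:

```python
# Hypothetical shape-level helpers, only to illustrate the batching requirement.

def crop_shape(h, w, crop=480):
    # A 480x480 crop can only produce 480x480 if the source is at least that big;
    # smaller images (or images the transform skips) keep other shapes.
    return (min(h, crop), min(w, crop))

def resize_shape(h, w, out=640):
    # Resizing (or resize + pad) to a fixed size makes every output identical.
    return (out, out)

sources = [(1920, 1920), (1080, 1440), (320, 400)]

after_crop = [crop_shape(h, w) for h, w in sources]
after_crop_resize = [resize_shape(*crop_shape(h, w)) for h, w in sources]

print(after_crop)         # mixed shapes -> torch.stack would fail
print(after_crop_resize)  # one shape -> the batch can be stacked
print(len(set(after_crop_resize)) == 1)
```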
Good luck with your project, and thank you for contributing to the YOLOv8 community! 🚀
Thank you for your answer @glenn-jocher. I reset the repo with git to the latest version so it would be easier for everyone to check. Can you please tell me how to use fixed-size Albumentations without conflict in YOLOv8, or how to work with the `LetterBox` class to resolve the conflict with such augmentations? I've tried manually changing the size passed to the `LetterBox` class via `pre_transform=None if stretch else LetterBox(new_shape=(480, 480)),` (check picture): the error doesn't occur, but the augmentation doesn't work as it should, and not all photos in the batch are cropped.
The only solution I found in the docs is using a callback to override the trainer attributes. Is this the only way? The CLI is preferable for me.
Update: I've tried overriding the trainer attributes and the same error occurred.
@KZLZG to use fixed-size Albumentations without conflict in YOLOv8, ensure that all images are resized to the same dimensions post-augmentation. The `LetterBox` class is designed to resize and pad images to a target size without changing the aspect ratio, which might not be compatible with fixed-size cropping if not managed correctly.
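For reference, the resize-and-pad behavior described above can be sketched in shape terms (an assumed simplification for illustration, not the exact `LetterBox` implementation, which also handles stride alignment and pad placement):

```python
def letterbox_shape(h, w, new=640):
    """Scale to fit new x new while preserving aspect ratio, then pad the rest."""
    r = min(new / h, new / w)              # one scale factor for both dimensions
    rh, rw = round(h * r), round(w * r)    # resized (un-padded) dimensions
    return (rh, rw), (new - rh, new - rw)  # resized dims, padding to reach target

print(letterbox_shape(1080, 1920))  # ((360, 640), (280, 0)): wide image, vertical padding
print(letterbox_shape(1920, 1920))  # ((640, 640), (0, 0)): square image, no padding
```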
If you want to use `RandomCrop` with a fixed size, you might consider disabling `LetterBox` or adjusting it to work with your fixed-size output. Ensure that after the `RandomCrop`, all images are padded or resized to the same dimensions expected by the model.
The CLI is designed to work with the default augmentation pipeline, and custom augmentations may require additional adjustments. If the documentation's solution of overriding trainer attributes didn't resolve the issue, it might be necessary to review the integration of your custom augmentation step by step to ensure compatibility with the batch formation process.
Remember, the key is to maintain consistent image and batch sizes throughout the entire augmentation and training pipeline. If you continue to face issues, consider reaching out with a more detailed error log or context for further assistance.
The error occurs on the 3rd or 4th iteration of the first epoch. I checked train_batch0.jpg, train_batch1.jpg, and train_batch2.jpg, and only train_batch1.jpg is cropped, with 2 images in the batch as it should be; the other 2 are 1920x1920 with only 1 image in the batch.
If there is any further assistance you can provide, it would be great. Here are screenshots of my `Albumentations` class, then my error log (it's the same for), and my 2 versions of disabling the `LetterBox` class, none of which helped.
I would be grateful if you could provide some insight into how the YOLOv8 augmentation and training pipeline works, since I haven't found it in the docs, and the only way I see is adjusting `LetterBox`.
P.S. I understand the source of the problem, but I switched to YOLOv8 from v7 2 weeks ago and I don't know where to find all the code responsible for the error in the augmentation pipeline.
@KZLZG, it seems like the issue is inconsistent image sizes after augmentation, which causes problems during batch formation. The `LetterBox` class is typically used to ensure that all images have the same dimensions by resizing and padding them appropriately. When you introduce a fixed-size crop, you must ensure that the output of your augmentation pipeline is consistent across all images.
To resolve this, you can either adjust the `LetterBox` class to handle the fixed-size crops properly or ensure that after the `RandomCrop`, all images are resized to the same dimensions expected by the model. It's crucial that the final batch sent to the model has images of the same size.
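As an illustration of "resize everything to the same dimensions", here is a toy nearest-neighbor resize over nested lists (a pure-Python sketch only; a real pipeline would use something like `cv2.resize` or an Albumentations `Resize`, not this helper):

```python
def resize_nn(img, out_h, out_w):
    # Nearest-neighbor: pick the source pixel whose index scales to the target index.
    h, w = len(img), len(img[0])
    return [[img[r * h // out_h][c * w // out_w] for c in range(out_w)]
            for r in range(out_h)]

# Two "images" of different sizes, as the broken pipeline produces them.
imgs = [[[1] * 480 for _ in range(480)],
        [[2] * 1920 for _ in range(1920)]]

resized = [resize_nn(im, 640, 640) for im in imgs]
shapes = {(len(im), len(im[0])) for im in resized}
print(shapes)  # a single shape, so batch stacking can succeed
```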
The augmentation and training pipeline in YOLOv8 is designed to work seamlessly with the provided augmentations, and custom augmentations may require a deeper understanding of the codebase. Unfortunately, without the ability to see your screenshots or error logs, I can only provide general advice.
If you're still having trouble, I recommend carefully reviewing the augmentation pipeline to ensure that all images are consistently processed. If necessary, consider reaching out with a detailed error log or context for further assistance.
👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.
For additional resources and information, please see the links below:
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!
Thank you for your contributions to YOLO 🚀 and Vision AI ⭐
Search before asking
YOLOv8 Component
Train, Augmentation
Bug
Hello, I'm facing trouble adding a crop Albumentation. After replacing the albumentations list in the `Albumentations` class in `augment.py` with:

```python
T = [A.RandomCrop(480, 480, always_apply=True, p=1.0)]
```

I'm getting the error:

```
RuntimeError: stack expects each tensor to be equal size, but got [3, 480, 480] at entry 0 and [3, 1920, 1920] at entry 1
```

After 7 hours of debugging, I've understood that it probably has something to do with the `def v8_transforms(dataset, imgsz, hyp, stretch=False):` function and the `LetterBox` class. The error occurs on `for i, batch in pbar:`.

```python
class Albumentations:
    """Albumentations transformations."""
```

Current state of `v8_transforms` (the error also occurs in the default state):

```python
def v8_transforms(dataset, imgsz, hyp, stretch=False):
    """Convert images to a size suitable for YOLOv8 training."""
    print("stretch: ", stretch, "\n")
    '''pre_transform = Compose([
        Mosaic(dataset, imgsz=imgsz, p=hyp.mosaic),
        CopyPaste(p=hyp.copy_paste),
        RandomPerspective(
            degrees=hyp.degrees,
            translate=hyp.translate,
            scale=hyp.scale,
            shear=hyp.shear,
            perspective=hyp.perspective,
            pre_transform=None if stretch else LetterBox(new_shape=(imgsz, imgsz)),  # the bug is probably here; stretch is False on train
        )
    ])'''
    pre_transform = None if stretch else LetterBox(new_shape=(imgsz, imgsz))
    flip_idx = dataset.data.get('flip_idx', [])  # for keypoints augmentation
    print("use_keypoints: ", dataset.use_keypoints, "\n")
    if dataset.use_keypoints:
        kpt_shape = dataset.data.get('kpt_shape', None)
        if len(flip_idx) == 0 and hyp.fliplr > 0.0:
            hyp.fliplr = 0.0
            LOGGER.warning("WARNING ⚠️ No 'flip_idx' array defined in data.yaml, setting augmentation 'fliplr=0.0'")
        elif flip_idx and (len(flip_idx) != kpt_shape[0]):
            raise ValueError(f'data.yaml flip_idx={flip_idx} length must be equal to kpt_shape[0]={kpt_shape[0]}')
    return Compose([
        pre_transform,
        MixUp(dataset, pre_transform=pre_transform, p=hyp.mixup),
    ])  # transforms
```
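The `RuntimeError` in question comes from batch collation: `torch.stack` refuses tensors whose shapes differ. A minimal pure-Python sketch of that check (`check_stackable` is a hypothetical helper mimicking the message, not PyTorch code):

```python
def check_stackable(shapes):
    """Raise a torch.stack-style error if the per-image shapes differ."""
    first = shapes[0]
    for i, s in enumerate(shapes):
        if s != first:
            raise RuntimeError(
                f"stack expects each tensor to be equal size, "
                f"but got {list(first)} at entry 0 and {list(s)} at entry {i}")
    return True

print(check_stackable([(3, 480, 480), (3, 480, 480)]))  # True: uniform batch
try:
    # One cropped image and one untouched 1920x1920 image, as in the report above.
    check_stackable([(3, 480, 480), (3, 1920, 1920)])
except RuntimeError as e:
    print(e)
```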
Environment
No response
Minimal Reproducible Example
No response
Additional
No response
Are you willing to submit a PR?