Open · YounghoJo01 opened 2 months ago
👋 Hello @YounghoJo01, thank you for reaching out and for your interest in YOLOv5 🚀!
It looks like you've encountered a runtime error while working with tensor dimensions. Don't worry, an Ultralytics engineer will assist you shortly. In the meantime, could you please provide a minimum reproducible example (MRE) to help us better understand and debug the issue? This should include any relevant code snippets and the specific configuration you're using.
To ensure everything is set up correctly, please check that your environment meets the following requirements:
Python>=3.8.0 with all requirements.txt installed, including PyTorch>=1.8. To get started:
```bash
git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install
```
YOLOv5 can be run in any of several verified environments with dependencies pre-installed, such as free GPU notebooks, cloud deep-learning VMs, and the official Docker image.
While you wait, you might also be interested in trying out our latest model:
Check out YOLOv8, our state-of-the-art model for 2023, featuring improvements in speed and accuracy for various tasks. Get started with:
```bash
pip install ultralytics
```
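For reference, here is a minimal usage sketch of the YOLOv8 Python API (the `yolov8n.pt` checkpoint, the bundled `coco128.yaml` dataset and the sample image URL are just illustrative choices):

```python
from ultralytics import YOLO

# Load a pretrained YOLOv8 nano model (example checkpoint)
model = YOLO("yolov8n.pt")

# Fine-tune on a small YOLO-format dataset (coco128.yaml ships with the package)
model.train(data="coco128.yaml", epochs=3, imgsz=640)

# Run inference; each result holds boxes, classes and confidences
results = model("https://ultralytics.com/images/bus.jpg")
```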
Thanks again for your patience and detailed report! 🚀
@YounghoJo01 the error you're encountering indicates a mismatch in tensor dimensions during the loss computation. In YOLOv5, compute_loss expects targets as a 2-D tensor of shape (n, 6) with columns [image_index, class, x, y, w, h]; build_targets then appends an anchor index per target, which is why gain has 7 elements. Your log shows targets with shape (3, 5), so one column is being dropped before the loss is called. Please verify that your dataset and model configurations are set up correctly, particularly the number of classes and the label dimensions. If the issue persists, try updating to the latest YOLOv5 version to see if the problem resolves.
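As a rough sketch (not the exact fix, since it depends on how your load_dataset.py builds the batch): if your dataloader yields labels of shape (1, 3, 6) as printed in your log, flattening the batch dimension and keeping all 6 columns gives the (n, 6) layout compute_loss expects:

```python
import torch

# Labels as printed in your log: shape (1, 3, 6) = (batch, boxes, [img, cls, x, y, w, h])
labels = torch.tensor([[[0.0, 2.0, 0.35182, 0.43099, 0.03490, 0.01302],
                        [0.0, 2.0, 0.50338, 0.42891, 0.02656, 0.00990],
                        [0.0, 2.0, 0.55469, 0.42630, 0.02500, 0.01094]]])

# compute_loss wants a 2-D (n, 6) tensor with normalized xywh, not (n, 5)
targets = labels.view(-1, 6)   # -> torch.Size([3, 6])
print(targets.shape)

# then: loss, loss_items = compute_loss(outputs, targets.to(device))
```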
Search before asking
Question
whdudgh@whdudgh-G5-KE:~/yolov5$ python3 load_dataset.py
Scanning /home/whdudgh/datasets/my_dataset/labels/train.cache... 1038 images, 12
Overriding model.yaml nc=80 with nc=3
0 -1 1 5280 models.common.Conv [3, 48, 6, 2, 2]
1 -1 1 41664 models.common.Conv [48, 96, 3, 2]
2 -1 2 65280 models.common.C3 [96, 96, 2]
3 -1 1 166272 models.common.Conv [96, 192, 3, 2]
4 -1 4 444672 models.common.C3 [192, 192, 4]
5 -1 1 664320 models.common.Conv [192, 384, 3, 2]
6 -1 6 2512896 models.common.C3 [384, 384, 6]
7 -1 1 2655744 models.common.Conv [384, 768, 3, 2]
8 -1 2 4134912 models.common.C3 [768, 768, 2]
9 -1 1 1476864 models.common.SPPF [768, 768, 5]
10 -1 1 295680 models.common.Conv [768, 384, 1, 1]
11 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
12 [-1, 6] 1 0 models.common.Concat [1]
13 -1 2 1182720 models.common.C3 [768, 384, 2, False]
14 -1 1 74112 models.common.Conv [384, 192, 1, 1]
15 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
16 [-1, 4] 1 0 models.common.Concat [1]
17 -1 2 296448 models.common.C3 [384, 192, 2, False]
18 -1 1 332160 models.common.Conv [192, 192, 3, 2]
19 [-1, 14] 1 0 models.common.Concat [1]
20 -1 2 1035264 models.common.C3 [384, 384, 2, False]
21 -1 1 1327872 models.common.Conv [384, 384, 3, 2]
22 [-1, 10] 1 0 models.common.Concat [1]
23 -1 2 4134912 models.common.C3 [768, 768, 2, False]
24 [17, 20, 23] 1 32328 models.yolo.Detect [3, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [192, 384, 768]]
YOLOv5m summary: 291 layers, 20879400 parameters, 20879400 gradients, 48.2 GFLOPs
Processed Shapes: tensor([1080., 1920., 1080., 1920.], device='cuda:0')
Labels size: torch.Size([1, 3, 6])
Labels: tensor([[[0.00000, 2.00000, 0.35182, 0.43099, 0.03490, 0.01302],
                 [0.00000, 2.00000, 0.50338, 0.42891, 0.02656, 0.00990],
                 [0.00000, 2.00000, 0.55469, 0.42630, 0.02500, 0.01094]]], device='cuda:0')
targets shape: torch.Size([3, 5])
gain shape: torch.Size([7])
targets: tensor([[0.00000, 2.00000, 0.35182, 0.43099, 0.03490],
                 [0.00000, 2.00000, 0.50338, 0.42891, 0.02656],
                 [0.00000, 2.00000, 0.55469, 0.42630, 0.02500]], device='cuda:0')
Traceback (most recent call last):
  File "load_dataset.py", line 73, in
    loss, loss_items = compute_loss(outputs, labels)
  File "/home/whdudgh/yolov5/utils/loss.py", line 144, in __call__
    tcls, tbox, indices, anchors = self.build_targets(p, targets)  # targets
  File "/home/whdudgh/yolov5/utils/loss.py", line 240, in build_targets
    t = targets * gain  # shape(3,n,7)
RuntimeError: The size of tensor a (6) must match the size of tensor b (7) at non-singleton dimension 2
Additional
No response