DAMO-YOLO: a fast and accurate object detection method with some new techs, including NAS backbones, efficient RepGFPN, ZeroHead, AlignedOTA, and distillation enhancement.
[x] I have read the README carefully.
[x] I want to train my custom dataset; I have read the tutorial on fine-tuning on custom data carefully and organized my dataset with the correct directory structure.
[x] I have pulled the latest code from the main branch and rerun it, and the problem still exists.
Search before asking
[X] I have searched the DAMO-YOLO issues and found no similar questions.
Question
Here is part of my training log. I set the batch size to 28, and I added a line just before the failing code to print the tensor sizes; up through the last data-loading step the batch size is 14:

0/549  0.04637  1.608  0  1.652:  89%|████████▊ | 94/106
torch.Size([14, 256, 20, 20]) torch.Size([14, 128, 20, 20]) torch.Size([14, 512, 20, 20])
(the same three sizes repeat on every training step through 105/106)
100%|██████████| 106/106
torch.Size([1, 256, 20, 20]) torch.Size([1, 128, 20, 20]) torch.Size([1, 512, 20, 20])
Inferencing model in train datasets.:   0%|          | 0/7 [00:00<?, ?it/s]
torch.Size([56, 256, 21, 21]) torch.Size([56, 128, 21, 21]) torch.Size([56, 512, 22, 22])
ERROR in training loop or eval/save model.
Traceback (most recent call last):
  File "D:/zzz/YOLOv6-main/YOLOv6-main/tools/train2.py", line 145, in <module>
    main(args)
  File "D:/zzz/YOLOv6-main/YOLOv6-main/tools/train2.py", line 135, in main
    trainer.train()
  File "D:\zzz\YOLOv6-main\YOLOv6-main\yolov6\core\engine.py", line 127, in train
    self.after_epoch()
  File "D:\zzz\YOLOv6-main\YOLOv6-main\yolov6\core\engine.py", line 193, in after_epoch
    self.eval_model()
  File "D:\zzz\YOLOv6-main\YOLOv6-main\yolov6\core\engine.py", line 229, in eval_model
    results, vis_outputs, vis_paths = eval.run(self.data_dict,
  File "D:\anaconda\envs\hxs\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "D:\zzz\YOLOv6-main\YOLOv6-main\tools\eval.py", line 158, in run
    pred_result, vis_outputs, vis_paths = val.predict_model(model, dataloader, task)
  File "D:\zzz\YOLOv6-main\YOLOv6-main\yolov6\core\evaler.py", line 128, in predict_model
    outputs, _ = model(imgs)
  File "D:\anaconda\envs\hxs\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\zzz\YOLOv6-main\YOLOv6-main\yolov6\models\yolo.py", line 37, in forward
    x = self.neck(x)
  File "D:\anaconda\envs\hxs\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\zzz\YOLOv6-main\YOLOv6-main\yolov6\models\giraffefpn.py", line 239, in forward
    x4 = torch.cat([x1, x24, x34], 1)
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 21 but got size 22 for tensor number 2 in the list.
Process finished with exit code 1.
How can I solve this problem?
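The failure mode in the traceback can be reproduced in isolation: `torch.cat` requires all tensors to match in every dimension except the concatenation one, so feature maps of spatial size 21×21 and 22×22 (as printed in the eval log) cannot be joined. A mismatch like this typically appears when the evaluation image size is not padded to a multiple of the network's total stride. The sketch below is a hedged illustration of the mechanism, not code from this repository; the stride value 32 and the 690-pixel input are assumed examples:

```python
import torch
import torch.nn.functional as F

# torch.cat along dim=1 requires identical sizes in all other dims.
a = torch.zeros(56, 256, 21, 21)
b = torch.zeros(56, 128, 21, 21)
c = torch.zeros(56, 512, 22, 22)  # mismatched spatial size, as in the log
try:
    torch.cat([a, b, c], dim=1)
except RuntimeError as e:
    print(e)  # Sizes of tensors must match except in dimension 1 ...

# Padding the eval input up to a multiple of the total stride keeps all
# pyramid levels at consistent spatial sizes before the neck concatenates them.
img = torch.zeros(1, 3, 690, 690)  # 690 is not a multiple of 32
stride = 32
h = (img.shape[2] + stride - 1) // stride * stride  # round up to 704
w = (img.shape[3] + stride - 1) // stride * stride
padded = F.pad(img, (0, w - img.shape[3], 0, h - img.shape[2]))
print(padded.shape)  # torch.Size([1, 3, 704, 704])
```

If the error only appears at evaluation time (as here, where training steps all show 20×20 maps), comparing how the train and eval dataloaders letterbox/pad images against the model stride is a reasonable first check.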
Additional
No response