CYJ-GH opened 1 day ago
This error occurs because, during the data processing stage, the code treats some samples belonging to the segmentation tasks as detection samples. Segmentation labels only contain masks, not bounding boxes, hence the error. You will need to debug to find where exactly it goes wrong.

A possible workaround: try setting single_cls to True and changing the number of detection classes to 1. At the time, I needed to compare against YOLOP, which merges car, truck, bus, and train into a single vehicle class for detection, so for a fair comparison I developed this code under single_cls=True. Setting it to False can produce exactly this kind of error; the data processing logic would need to be debugged and adjusted.
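A minimal sketch of that suggestion, reusing the training call shown later in this thread (paths shortened; that the detection class count lives in the dataset YAML's `nc_list` is my assumption, not confirmed here):

```python
from ultralytics import YOLO

# Sketch of the maintainer's suggestion applied to the poster's train call:
# single_cls=True collapses all detection classes into one, matching the
# YOLOP comparison setup. Presumably the detection entry of nc_list in the
# dataset YAML should also be changed to 1.
model = YOLO('yolov8-bdd-v4-n-myCarDataset.yaml', task='multi')
model.train(data='datasets_cars.yaml', batch=12, epochs=30, imgsz=(640, 640),
            device=[0], name='yolopm', val=True, task='multi',
            combine_class=[2, 3, 4, 9], single_cls=True, amp=False)
```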
Hello, I am trying to train on another dataset I found. I converted the dataset annotations into YOLO format. The folder structure is as follows: the detection folder contains YOLO-format annotations for object detection, the segment_area folder contains YOLO-format annotations for drivable areas, and the segment_lane folder contains YOLO-format annotations for lane markings.
The ids represent the correspondence relation between them.
```
├─dataset root
│  ├─images
│  │  ├─train
│  │  ├─val
│  ├─detection
│  │  ├─labels
│  │  │  ├─train
│  │  │  ├─val
│  ├─segment_area
│  │  ├─labels
│  │  │  ├─train
│  │  │  ├─val
│  ├─segment_lane
│  │  ├─labels
│  │  │  ├─train
│  │  │  ├─val
```
For example, the content of a .txt file in segment_area/labels/val is:

```
10 0.1306242515625 0.7815792680555556 0.0 0.8370350055555555 0.0 0.9999362277777777 0.46675068984375 1.0009756875 0.4173604421875 0.9570732458333334 0.1978859921875 0.9548784416666667 0.19983560859375 0.8006421791666666 0.1306242515625 0.7815792680555556
10 0.38896105 0.8978049486111112 0.50300084921875 0.8734278319444444 0.6151037515625 0.9306165583333332 0.6131541359375 0.9999362277777777 0.49021105781249996 1.0009756875 0.38896105 0.8978049486111112
```
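Each line of such a file is one polygon: a class id followed by normalized (x, y) vertex pairs, with the first vertex repeated at the end to close the polygon. A minimal parsing sketch (not code from the repo), using the second line above; note that one y value slightly exceeds 1.0:

```python
# Parse one YOLO-format polygon label line: "<class_id> x1 y1 x2 y2 ...",
# with coordinates normalized to [0, 1]. Illustration only.
line = ("10 0.38896105 0.8978049486111112 0.50300084921875 0.8734278319444444 "
        "0.6151037515625 0.9306165583333332 0.6131541359375 0.9999362277777777 "
        "0.49021105781249996 1.0009756875 0.38896105 0.8978049486111112")

fields = line.split()
cls_id = int(fields[0])
coords = [float(v) for v in fields[1:]]
points = list(zip(coords[0::2], coords[1::2]))  # (x, y) vertex pairs

print(cls_id, len(points))  # 10, 6 vertices (first == last closes the polygon)

# Coordinates should stay within [0, 1]; the sample above contains
# 1.0009756875, so out-of-range values are worth clamping before training.
bad = [v for v in coords if not 0.0 <= v <= 1.0]
if bad:
    print("out-of-range coordinates:", bad)
```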
At the same time, I modified the dataset YAML file in datasets:

```yaml
path: D:\file\graduate\data\guoneng_dataset\datasets\datasets\YOLOformat  # dataset root dir
train: images\train  # train images for object detection (relative to 'path')
val: images\val      # val images for object detection (relative to 'path')
test: images\val     # test images for object detection (relative to 'path')

labels_list:

tnc: 7  # number of classes
nc_list: [5, 1, 1]
map: [None, {'9': '0'}, {'10': '0'}]

names:
  0: person          # 1
  1: rider           # 2
  2: car             # 3
  3: bus             # 3
  4: truck           # 3
  5: bike            # 3
  6: motorcycle      # 3
  7: traffic sign    # 4
  8: traffic light   # 5
  9: lane            # Combine all lane-related categories into one  # 6
  10: area           # Combine all area-related categories into one  # 7
```
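A toy illustration of how the `map` field presumably works (my assumption from the config shape, not confirmed against the repo's data loader): each non-detection task remaps its global class id to a task-local id.

```python
# Assumed semantics of `map` (one entry per task in labels_list): each
# segmentation task remaps its global class id to a task-local id.
# Illustration only, not code from the repo.
map_list = [None, {'9': '0'}, {'10': '0'}]

def remap(task_idx, cls_id):
    """Return the task-local class id for a global class id."""
    mapping = map_list[task_idx]
    if mapping is None:  # detection task: ids are kept as-is
        return cls_id
    return int(mapping.get(str(cls_id), cls_id))

print(remap(0, 2))   # 2: detection ids are unchanged
print(remap(1, 9))   # 0: global class 9 ('lane') becomes 0 for the second task
print(remap(2, 10))  # 0: global class 10 ('area') becomes 0 for the third task
```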
and the model's configuration YAML file:

```yaml
tnc: 7

scales:
  n: [0.33, 0.25, 1024]  # YOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs
  s: [0.33, 0.50, 1024]  # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients, 28.8 GFLOPs
  m: [0.67, 0.75, 768]   # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPs
  l: [1.00, 1.00, 512]   # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512]   # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

scale: n

backbone:
  # [from, repeats, module, args]

head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 6], 1, Concat, [1]]  # cat backbone P4
  - [-1, 3, C2f, [512]]  # 12
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 4], 1, Concat, [1]]  # cat backbone P3
  - [-1, 3, C2f, [256]]  # 15 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 12], 1, Concat, [1]]  # cat head P4
  - [-1, 3, C2f, [512]]  # 18 (P4/16-medium)
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 9], 1, Concat, [1]]  # cat head P5
  - [-1, 3, C2f, [1024]]  # 21 (P5/32-large)

  # lane
  - [9, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 6], 1, Concat_dropout, [1]]  # cat backbone P4
  - [-1, 3, C2f, [512]]  # 24
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 4], 1, Concat_dropout, [1]]  # cat backbone P3
  - [-1, 3, C2f, [256]]  # 27 (P3/8-small)
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]  # for lane segmentation
  - [[-1, 2], 1, Concat_dropout, [1]]  # cat backbone P2
  - [-1, 3, C2f, [128]]  # 30 (P2)
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 0], 1, Concat_dropout, [1]]  # cat backbone P1
  - [-1, 3, C2f, [64]]  # 33 (P1)

  # drivable
  - [9, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 6], 1, Concat_dropout, [1]]  # cat backbone P4
  - [-1, 3, C2f, [512]]  # 36
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 4], 1, Concat_dropout, [1]]  # cat backbone P3
  - [-1, 3, C2f, [256]]  # 39 (P3/8-small)
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]  # 30 for drivable segmentation
  - [[-1, 2], 1, Concat_dropout, [1]]
  - [-1, 3, C2f, [128]]  # 42 (P2)
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 0], 1, Concat_dropout, [1]]
  - [-1, 3, C2f, [64]]  # 45 (P1)

  # tasks
```
I also modified the corresponding file paths in train.py, and set amp to False because otherwise an error occurred during the AMP check:

```python
import sys
sys.path.insert(0, r"D:\file\graduate\code\YOLOv8-multi-task-main\YOLOv8-multi-task-main\ultralytics")
from ultralytics import YOLO

# build a new model from YAML
model = YOLO(r'D:\file\graduate\code\YOLOv8-multi-task-main\YOLOv8-multi-task-main\ultralytics\models\v8\yolov8-bdd-v4-n-myCarDataset.yaml', task='multi')
model.train(data=r'D:\file\graduate\code\YOLOv8-multi-task-main\YOLOv8-multi-task-main\ultralytics\datasets\datasets_cars.yaml',
            batch=12, epochs=30, imgsz=(640, 640), device=[0], name='yolopm', val=True, task='multi',
            classes=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], combine_class=[2, 3, 4, 9],
            single_cls=False, amp=False)
```
I am not sure whether this is a problem with the dataset format or something in the code I still need to modify, but running the training code produces this error:

```
WARNING Box and segment counts should be equal, but got len(segments) = 43, len(boxes) = 106. To resolve this only boxes will be used and all segments will be removed. To avoid this please supply either a detect or segment dataset, not a detect-segment mixed dataset.
Traceback (most recent call last):
  File "ultralytics/train_myCarDataset.py", line 12, in <module>
    model.train(data=r'D:\file\graduate\code\YOLOv8-multi-task-main\YOLOv8-multi-task-main\ultralytics\datasets\datasets_cars.yaml',
  File "d:\file\graduate\code\yolov8-multi-task-main\yolov8-multi-task-main\ultralytics\yolo\engine\model.py", line 390, in train
    self.trainer.train()
  File "d:\file\graduate\code\yolov8-multi-task-main\yolov8-multi-task-main\ultralytics\yolo\engine\trainer.py", line 195, in train
    self._do_train(world_size)
  File "d:\file\graduate\code\yolov8-multi-task-main\yolov8-multi-task-main\ultralytics\yolo\engine\trainer.py", line 292, in _do_train
    self._setup_train(world_size)
  File "d:\file\graduate\code\yolov8-multi-task-main\yolov8-multi-task-main\ultralytics\yolo\engine\trainer.py", line 266, in _setup_train
    self.test_loader = self.get_dataloader(self.testset, batch_size=batch_size * 2, rank=-1, mode='val')
  File "d:\file\graduate\code\yolov8-multi-task-main\yolov8-multi-task-main\ultralytics\yolo\v8\DecSeg\train.py", line 71, in get_dataloader
    dataset = self.build_dataset(dataset_path, mode, batch_size)
  File "d:\file\graduate\code\yolov8-multi-task-main\yolov8-multi-task-main\ultralytics\yolo\v8\DecSeg\train.py", line 45, in build_dataset
    return build_yolo_dataset(self.args, img_path, batch, self.data, mode=mode, rect=mode == 'val', stride=gs)
  File "d:\file\graduate\code\yolov8-multi-task-main\yolov8-multi-task-main\ultralytics\yolo\data\build.py", line 74, in build_yolo_dataset
    return YOLODataset(
  File "d:\file\graduate\code\yolov8-multi-task-main\yolov8-multi-task-main\ultralytics\yolo\data\dataset.py", line 40, in __init__
    super().__init__(*args, **kwargs)
  File "d:\file\graduate\code\yolov8-multi-task-main\yolov8-multi-task-main\ultralytics\yolo\data\base.py", line 83, in __init__
    self.set_rectangle()
  File "d:\file\graduate\code\yolov8-multi-task-main\yolov8-multi-task-main\ultralytics\yolo\data\base.py", line 228, in set_rectangle
    self.im_files = [self.im_files[i] for i in irect]
  File "d:\file\graduate\code\yolov8-multi-task-main\yolov8-multi-task-main\ultralytics\yolo\data\base.py", line 228, in <listcomp>
    self.im_files = [self.im_files[i] for i in irect]
IndexError: list index out of range
```
I would like to ask what is causing this error and how it can be fixed. Thank you!
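Not an authoritative diagnosis, but given the detect-segment mixed dataset warning above, one quick sanity check is to scan the detection label files for polygon-style lines. A hypothetical sketch (path taken from the dataset YAML above; adjust as needed):

```python
# Detection labels should have exactly 5 fields per line ("cls cx cy w h");
# anything longer looks like a segmentation polygon, which is what triggers
# the "detect-segment mixed dataset" warning. Illustration only.
from pathlib import Path

label_dir = Path(r"D:\file\graduate\data\guoneng_dataset\datasets\datasets"
                 r"\YOLOformat\detection\labels\val")
for txt in sorted(label_dir.glob("*.txt")):
    for n, line in enumerate(txt.read_text().splitlines(), start=1):
        if len(line.split()) > 5:
            print(f"{txt.name}:{n} looks like a polygon, not a box")
```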