MuhammadMoinFaisal / YOLOv8_Segmentation_DeepSORT_Object_Tracking

YOLOv8 Segmentation with DeepSORT Object Tracking (ID + Trails)

Facing issue while training the model #3

Closed soleyromit closed 1 year ago

soleyromit commented 1 year ago
%cd {HOME}
!python train.py model=yolov8l-seg.pt task=segment data={dataset.location}/data.yaml epochs=100 imgsz=640 v5loader=true

I tried to run the above command in Google Colab, following your video, but I am running into the issue below.

Error executing job with overrides: ['model=yolov8l-seg.pt', 'task=segment', 'data=/content/YOLOv8_Segmentation_DeepSORT_Object_Tracking/ultralytics/yolo/v8/segment/Pothole-Detection-Project-2/data.yaml', 'epochs=100', 'imgsz=640', 'v5loader=true']
Traceback (most recent call last):
  File "train.py", line 150, in train
    model.train(**cfg)
  File "/content/YOLOv8_Segmentation_DeepSORT_Object_Tracking/ultralytics/yolo/engine/model.py", line 193, in train
    self.trainer.train()
  File "/content/YOLOv8_Segmentation_DeepSORT_Object_Tracking/ultralytics/yolo/engine/trainer.py", line 177, in train
    self._do_train(int(os.getenv("RANK", -1)), world_size)
  File "/content/YOLOv8_Segmentation_DeepSORT_Object_Tracking/ultralytics/yolo/engine/trainer.py", line 293, in _do_train
    self.loss, self.loss_items = self.criterion(preds, batch)
  File "/content/YOLOv8_Segmentation_DeepSORT_Object_Tracking/ultralytics/yolo/v8/segment/train.py", line 44, in criterion
    return self.compute_loss(preds, batch)
  File "/content/YOLOv8_Segmentation_DeepSORT_Object_Tracking/ultralytics/yolo/v8/segment/train.py", line 89, in __call__
    masks = batch["masks"].to(self.device).float()
KeyError: 'masks'

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.

Please help.

MuhammadMoinFaisal commented 1 year ago

Hi @soleyromit, please remove v5loader and rerun it, and let me know if there is any issue.
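
For reference, that is the same command from the original post with only v5loader=true dropped:

%cd {HOME}
!python train.py model=yolov8l-seg.pt task=segment data={dataset.location}/data.yaml epochs=100 imgsz=640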

soleyromit commented 1 year ago

Hi @MuhammadMoinFaisal

Thank you for your response. After removing v5loader I am still facing an error, although the traceback is different now.

Here is the Colab notebook I am working in: https://colab.research.google.com/drive/1pj43CxXThyDXjBOn_xZpYslRu38CGlER?usp=sharing

Error executing job with overrides: ['model=yolov8l-seg.pt', 'task=segment', 'data=/content/YOLOv8_Segmentation_DeepSORT_Object_Tracking/ultralytics/yolo/v8/segment/Pothole-Detection-Project-2/data.yaml', 'epochs=100', 'imgsz=640', 'pretrained=True']
Traceback (most recent call last):
  File "train.py", line 151, in train
    model.train(**cfg)
  File "/content/YOLOv8_Segmentation_DeepSORT_Object_Tracking/ultralytics/yolo/engine/model.py", line 193, in train
    self.trainer.train()
  File "/content/YOLOv8_Segmentation_DeepSORT_Object_Tracking/ultralytics/yolo/engine/trainer.py", line 177, in train
    self._do_train(int(os.getenv("RANK", -1)), world_size)
  File "/content/YOLOv8_Segmentation_DeepSORT_Object_Tracking/ultralytics/yolo/engine/trainer.py", line 275, in _do_train
    for i, batch in pbar:
  File "/usr/local/lib/python3.8/dist-packages/tqdm/std.py", line 1195, in __iter__
    for obj in iterable:
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/data/dataloader.py", line 628, in __next__
    data = self._next_data()
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/data/dataloader.py", line 1333, in _next_data
    return self._process_data(data)
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/data/dataloader.py", line 1359, in _process_data
    data.reraise()
  File "/usr/local/lib/python3.8/dist-packages/torch/_utils.py", line 543, in reraise
    raise exception
ValueError: Caught ValueError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/data/_utils/worker.py", line 302, in _worker_loop
    data = fetcher.fetch(index)
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/data/_utils/fetch.py", line 58, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/data/_utils/fetch.py", line 58, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/content/YOLOv8_Segmentation_DeepSORT_Object_Tracking/ultralytics/yolo/data/base.py", line 179, in __getitem__
    return self.transforms(self.get_label_info(index))
  File "/content/YOLOv8_Segmentation_DeepSORT_Object_Tracking/ultralytics/yolo/data/augment.py", line 48, in __call__
    data = t(data)
  File "/content/YOLOv8_Segmentation_DeepSORT_Object_Tracking/ultralytics/yolo/data/augment.py", line 48, in __call__
    data = t(data)
  File "/content/YOLOv8_Segmentation_DeepSORT_Object_Tracking/ultralytics/yolo/data/augment.py", line 361, in __call__
    i = self.box_candidates(box1=instances.bboxes.T,
  File "/content/YOLOv8_Segmentation_DeepSORT_Object_Tracking/ultralytics/yolo/data/augment.py", line 375, in box_candidates
    return (w2 > wh_thr) & (h2 > wh_thr) & (w2 * h2 / (w1 * h1 + eps) > area_thr) & (ar < ar_thr)  # candidates
ValueError: operands could not be broadcast together with shapes (5,) (6,) 

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
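
The shape mismatch in box_candidates suggests that some label row has a different number of values than the others. Here is a minimal sketch (not part of the repo) I can run to scan the dataset's label files for rows that do not look like segmentation polygons, assuming the standard YOLO label layout (class index followed by polygon x/y pairs) and the dataset path from the traceback above; the labels/*.txt layout is an assumption based on a typical Roboflow export:

from pathlib import Path

# A polygon row is "class x1 y1 x2 y2 ..." with at least three points,
# i.e. 7+ values and an odd count; a 5-value row is a detection-style box label.
DATASET_DIR = Path("/content/YOLOv8_Segmentation_DeepSORT_Object_Tracking/"
                   "ultralytics/yolo/v8/segment/Pothole-Detection-Project-2")

for label_file in sorted(DATASET_DIR.rglob("labels/*.txt")):
    for line_no, line in enumerate(label_file.read_text().splitlines(), start=1):
        values = line.split()
        if not values:
            continue
        if len(values) < 7 or len(values) % 2 == 0:
            print(f"{label_file}:{line_no}: {len(values)} values -> {line[:60]}")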