Luo-Z13 / pointobb

[CVPR2024] PointOBB: Learning Oriented Object Detection via Single Point Supervision
MIT License

Unable to read image data #6

Closed 66666146494 closed 8 months ago

66666146494 commented 8 months ago

After processing the data following the README, training on the DOTA dataset raises an error; the terminal output is as follows:

 --------------------
2024-03-12 17:00:58,823 - mmdet - INFO - workflow: [('train', 1)], max: 24 epochs
2024-03-12 17:00:58,824 - mmdet - INFO - Checkpoints will be saved to /home/rowlet/pointobb/P2BNet/TOV_mmdetection_cache/work_dir/debug by HardDiskBackend.
Traceback (most recent call last):
  File "/home/rowlet/pointobb/PointOBB/tools/train_dist.py", line 197, in <module>
    main()
  File "/home/rowlet/pointobb/PointOBB/tools/train_dist.py", line 186, in main
    train_detector(
  File "/home/rowlet/pointobb/PointOBB/mmdet/apis/train.py", line 172, in train_detector
    runner.run(data_loaders, cfg.workflow)
  File "/home/rowlet/anaconda3/lib/python3.9/site-packages/mmcv/runner/epoch_based_runner.py", line 136, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/home/rowlet/anaconda3/lib/python3.9/site-packages/mmcv/runner/epoch_based_runner.py", line 49, in train
    for i, data_batch in enumerate(self.data_loader):
  File "/home/rowlet/anaconda3/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 359, in __iter__
    return self._get_iterator()
  File "/home/rowlet/anaconda3/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 305, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "/home/rowlet/anaconda3/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 944, in __init__
    self._reset(loader, first_iter=True)
  File "/home/rowlet/anaconda3/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 975, in _reset
    self._try_put_index()
  File "/home/rowlet/anaconda3/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1209, in _try_put_index
    index = self._next_index()
  File "/home/rowlet/anaconda3/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 512, in _next_index
    return next(self._sampler_iter)  # may raise StopIteration
  File "/home/rowlet/anaconda3/lib/python3.9/site-packages/torch/utils/data/sampler.py", line 229, in __iter__
    for idx in self.sampler:
  File "/home/rowlet/pointobb/PointOBB/mmdet/datasets/samplers/group_sampler.py", line 37, in __iter__
    indices = np.concatenate(indices)
  File "<__array_function__ internals>", line 180, in concatenate
ValueError: need at least one array to concatenate

Does the dataset have any format requirements? I printed the dataset construction info `cfg.data.train`, and the output is as follows:

'''
{'type': 'CocoFmtObbDataset', 'version': 'le90', 'ann_file': '/home/rowlet/pointobb/PointOBB/DOTAv10/data/split_ss_dota_1024_200/trainval/trainval_1024_P2Bfmt_dotav10_rbox.json', 'img_prefix': '/home/rowlet/pointobb/PointOBB/DOTAv10/data/split_ss_dota_1024_200/trainval/images/', 'pipeline': [{'type': 'LoadImageFromFile'}, {'type': 'LoadAnnotations', 'with_bbox': True}, {'type': 'Resize', 'img_scale': (1024, 1024), 'keep_ratio': True}, {'type': 'RandomFlip', 'flip_ratio': 0.5, 'version': 'le90'}, {'type': 'Normalize', 'mean': [123.675, 116.28, 103.53], 'std': [58.395, 57.12, 57.375], 'to_rgb': True}, {'type': 'Pad', 'size_divisor': 32}, {'type': 'DefaultFormatBundle'}, {'type': 'Collect', 'keys': ['img', 'gt_bboxes', 'gt_labels', 'gt_bboxes_ignore', 'gt_true_bboxes']}], 'filter_empty_gt': True}
'''

The images under the image path were produced by cropping the DOTA dataset into 608*608 patches with mmrotate's cropping tool. Is there a way to resolve this?
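For context, the `need at least one array to concatenate` error from `group_sampler.py` typically means the dataset ended up with zero valid images after `filter_empty_gt` filtering. A quick sanity check is to verify that every image listed in the annotation JSON both exists on disk and carries at least one annotation. This is a minimal sketch assuming a standard COCO-format annotation file; the helper name `check_coco_dataset` is hypothetical, and the commented-out paths are copied from the config above.

```python
import json
import os

def check_coco_dataset(ann_file, img_prefix):
    """Report images that are missing on disk or have no annotations.

    Either condition can leave the dataset empty after filtering, which
    makes GroupSampler raise "need at least one array to concatenate".
    """
    with open(ann_file) as f:
        coco = json.load(f)

    # Count annotations per image id.
    ann_counts = {}
    for ann in coco.get('annotations', []):
        ann_counts[ann['image_id']] = ann_counts.get(ann['image_id'], 0) + 1

    missing, empty = [], []
    for img in coco.get('images', []):
        if not os.path.exists(os.path.join(img_prefix, img['file_name'])):
            missing.append(img['file_name'])
        if ann_counts.get(img['id'], 0) == 0:
            empty.append(img['file_name'])

    print(f"{len(coco.get('images', []))} images, "
          f"{len(missing)} missing on disk, {len(empty)} without annotations")
    return missing, empty

# Paths taken from the cfg.data.train dump above (adjust to your layout):
# check_coco_dataset(
#     '/home/rowlet/pointobb/PointOBB/DOTAv10/data/split_ss_dota_1024_200/'
#     'trainval/trainval_1024_P2Bfmt_dotav10_rbox.json',
#     '/home/rowlet/pointobb/PointOBB/DOTAv10/data/split_ss_dota_1024_200/'
#     'trainval/images/')
```

If all images come back as missing or annotation-free, the loader filters everything out and the sampler has nothing to iterate, which matches the traceback above.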

Luo-Z13 commented 8 months ago


We processed the dataset using mmrotate's cropping tool, which is consistent with the official cropping procedure. Based on my experience, the error is likely caused by the category definitions. I suggest checking the dataset path, image file suffixes, and other such details.
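One way to check the category definitions mentioned above is to compare the `categories` section of the annotation JSON against the class names the dataset class expects; a mismatch makes every annotation unmappable, so all images are filtered out. The sketch below is an assumption-laden illustration: `check_categories` is a hypothetical helper, and the tuple uses the 15 DOTA-v1.0 class names as commonly defined in mmrotate, so the actual `CLASSES` of `CocoFmtObbDataset` in this repo should be treated as authoritative.

```python
import json

# The 15 DOTA-v1.0 category names as commonly defined in mmrotate;
# adjust if CocoFmtObbDataset.CLASSES in this repo differs.
DOTA_V10_CLASSES = (
    'plane', 'baseball-diamond', 'bridge', 'ground-track-field',
    'small-vehicle', 'large-vehicle', 'ship', 'tennis-court',
    'basketball-court', 'storage-tank', 'soccer-ball-field',
    'roundabout', 'harbor', 'swimming-pool', 'helicopter')

def check_categories(ann_file, expected=DOTA_V10_CLASSES):
    """Compare category names in a COCO-format JSON against the expected set."""
    with open(ann_file) as f:
        names = {c['name'] for c in json.load(f).get('categories', [])}
    print('only in JSON:', sorted(names - set(expected)))
    print('only in CLASSES:', sorted(set(expected) - names))
    return names == set(expected)
```

If either printed set is non-empty, aligning the JSON's category names with the dataset's `CLASSES` (or vice versa) is the first thing to fix.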