Describe the bug
Hi! I'm running into an issue when using Albumentations augmentation in semi-supervised object detection. Specifically, I get an error when using this augmentation in the strong_pipeline, but not when only using it in the sup_pipeline. Here is my config file and the full error message:
```
11/29 21:42:22 - mmengine - INFO - load model from: open-mmlab://detectron2/resnet50_caffe
11/29 21:42:22 - mmengine - INFO - Loads checkpoint by openmmlab backend from path: open-mmlab://detectron2/resnet50_caffe
11/29 21:42:22 - mmengine - WARNING - "FileClient" will be deprecated in future. Please use io functions in https://mmengine.readthedocs.io/en/latest/api/fileio.html#file-io
11/29 21:42:22 - mmengine - WARNING - "HardDiskBackend" is the alias of "LocalBackend" and the former will be deprecated in future.
11/29 21:42:22 - mmengine - INFO - Checkpoints will be saved to /home/nongxx/PycharmProjects/mmdetection/tools/work_dirs/soft-teacher-coco-idea.
Traceback (most recent call last):
  File "/home/PycharmProjects/mmdetection/tools/train.py", line 133, in <module>
    main()
  File "/home/PycharmProjects/mmdetection/tools/train.py", line 129, in main
    runner.train()
  File "/home/anaconda3/envs/pytorch1.13/lib/python3.10/site-packages/mmengine/runner/runner.py", line 1706, in train
    model = self.train_loop.run()  # type: ignore
  File "/home/anaconda3/envs/pytorch1.13/lib/python3.10/site-packages/mmengine/runner/loops.py", line 277, in run
    data_batch = next(self.dataloader_iterator)
  File "/home/anaconda3/envs/pytorch1.13/lib/python3.10/site-packages/mmengine/runner/loops.py", line 164, in __next__
    data = next(self._iterator)
  File "/home/anaconda3/envs/pytorch1.13/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 628, in __next__
    data = self._next_data()
  File "/home/anaconda3/envs/pytorch1.13/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1333, in _next_data
    return self._process_data(data)
  File "/home/anaconda3/envs/pytorch1.13/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1359, in _process_data
    data.reraise()
  File "/home/anaconda3/envs/pytorch1.13/lib/python3.10/site-packages/torch/_utils.py", line 543, in reraise
    raise exception
Exception: Caught Exception in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/anaconda3/envs/pytorch1.13/lib/python3.10/site-packages/torch/utils/data/_utils/worker.py", line 302, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/anaconda3/envs/pytorch1.13/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 58, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/anaconda3/envs/pytorch1.13/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 58, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/anaconda3/envs/pytorch1.13/lib/python3.10/site-packages/mmengine/dataset/dataset_wrapper.py", line 159, in __getitem__
    return self.datasets[dataset_idx][sample_idx]
  File "/home/anaconda3/envs/pytorch1.13/lib/python3.10/site-packages/mmengine/dataset/base_dataset.py", line 421, in __getitem__
    raise Exception(f'Cannot find valid image after {self.max_refetch}! '
Exception: Cannot find valid image after 1000! Please check your image path and pipeline
```
The error message above suggests checking the image path, but the paths themselves appear to be valid: the official data augmentation pipeline (the commented-out part of my config) runs normally on the same data. Please let me know if any other details would be helpful for this problem.
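As far as I understand (this is my simplified reconstruction, not the actual mmengine source), the "Cannot find valid image" message can fire even when every path is valid: `BaseDataset.__getitem__` retries with another random index whenever the pipeline returns `None` for a sample (e.g. an augmentation drops all boxes), and only raises after `max_refetch` failures. A runnable sketch of that retry logic:

```python
# Simplified sketch (my assumption) of mmengine BaseDataset.__getitem__ retry
# behavior: a pipeline that returns None triggers a refetch of another random
# sample; after max_refetch failures the misleading "check your image path"
# exception is raised, even if all image paths are fine.
import random


class RetryDataset:
    def __init__(self, samples, pipeline, max_refetch=1000):
        self.samples = samples        # raw data records
        self.pipeline = pipeline      # callable: record -> dict or None
        self.max_refetch = max_refetch

    def __getitem__(self, idx):
        for _ in range(self.max_refetch + 1):
            data = self.pipeline(self.samples[idx])
            if data is not None:      # pipeline produced a valid sample
                return data
            # pipeline failed on this sample: refetch another random index
            idx = random.randrange(len(self.samples))
        raise Exception(
            f'Cannot find valid image after {self.max_refetch}! '
            'Please check your image path and pipeline')
```

So if the Albumentations step in the strong branch returns `None` for every sample, this exception is exactly what would surface.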
Reproduction
configs/_base_/datasets/semi_coco_detection.py
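For context, here is a hypothetical minimal sketch of where the transform sits in the two branches (placeholder transforms, not my real config; `Albu` is the name of mmdetection's Albumentations wrapper transform):

```python
# Hypothetical fragment for illustration only -- transform choices and
# parameters are placeholders, not the actual config from this report.
albu_transforms = [
    dict(type='Blur', blur_limit=3, p=0.2),  # placeholder Albumentations op
]

# Supervised branch: the Albu block here runs without error.
sup_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(type='Albu', transforms=albu_transforms),
    dict(type='PackDetInputs'),
]

# Strong (unlabeled) branch: the same Albu block here triggers the
# "Cannot find valid image" failure described above.
strong_pipeline = [
    dict(type='RandomResize', scale=(1333, 800), keep_ratio=True),
    dict(type='Albu', transforms=albu_transforms),
    dict(type='PackDetInputs'),
]
```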