Open ming053l opened 1 year ago
@DATASETS.register_module()
class CocoDataset(BaseDetDataset):
    """Dataset for COCO."""

    METAINFO = {
        'classes': ('human', 'ball'),
        'palette': [(220, 20, 60), (119, 11, 38)]
    }
    COCOAPI = COCO
    # ann_id is unique in coco dataset.
    ANN_ID_UNIQUE = True

    def load_data_list(self) -> List[dict]:
        """Load annotations from an annotation file named as ``self.ann_file``."""
The code block above is the corresponding section of "/home/ming0531/mmdetection/mmdet/datasets/coco.py".
@ming053l How did you solve this problem?
Hello, has anybody solved this issue?
I encountered the same problem, and I think it happens because the classes are not properly overwritten. This is how I solved it: I created a config file in the configs folder for my custom dataset (following this) and replaced the model setting in the config file with the following, to explicitly overwrite the number of classes:
num_classes = len(classes)
model = dict(
    roi_head=dict(
        bbox_head=dict(num_classes=num_classes),
        mask_head=dict(num_classes=num_classes)))
Hope it'll help someone.
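In mmdet 3.x you can also pass the class names to the dataset through the config's `metainfo` field instead of editing coco.py in the installed package, which keeps the source tree untouched. A minimal sketch of such a config fragment (the annotation paths and directory names below are placeholders, not from this issue):

```python
# Sketch: override the dataset classes via `metainfo` in the config,
# rather than modifying mmdet/datasets/coco.py directly.
# Annotation file paths and image prefixes are placeholders.
metainfo = {
    'classes': ('human', 'ball'),
    'palette': [(220, 20, 60), (119, 11, 38)]
}

train_dataloader = dict(
    dataset=dict(
        metainfo=metainfo,
        ann_file='annotations/train.json',
        data_prefix=dict(img='train/')))
val_dataloader = dict(
    dataset=dict(
        metainfo=metainfo,
        ann_file='annotations/val.json',
        data_prefix=dict(img='val/')))
# Reuse the val settings for testing.
test_dataloader = val_dataloader
```

With this in place, the dataset and the evaluator both pick up the two custom classes from the config at runtime.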
I've encountered an issue that I haven't been able to resolve despite researching various articles. I'm using mmdet for instance segmentation on my custom dataset and I have modified the MetaInfo in "/home/ming0531/mmdetection/mmdet/datasets/coco.py" to:
METAINFO = {
    'classes': ('human', 'ball'),
    'palette': [(220, 20, 60), (119, 11, 38)]
}
I only have two classes in my dataset. While I'm able to view the segmentation results during testing, I run into the following error when trying to output to a JSON file.
Here's the error log:
08/03 13:51:27 - mmengine - INFO - Epoch(test) [50/64] eta: 0:00:44 time: 3.1450 data_time: 2.4415 memory: 1109
Traceback (most recent call last):
  File "/home/ming0531/mmdetection/tools/test.py", line 173, in <module>
    main()
  File "/home/ming0531/mmdetection/tools/test.py", line 169, in main
    runner.test()
  File "/home/ming0531/miniconda3/envs/mmdetect/lib/python3.8/site-packages/mmengine/runner/runner.py", line 1791, in test
    metrics = self.test_loop.run()  # type: ignore
  File "/home/ming0531/miniconda3/envs/mmdetect/lib/python3.8/site-packages/mmengine/runner/loops.py", line 438, in run
    metrics = self.evaluator.evaluate(len(self.dataloader.dataset))
  File "/home/ming0531/miniconda3/envs/mmdetect/lib/python3.8/site-packages/mmengine/evaluator/evaluator.py", line 79, in evaluate
    _results = metric.evaluate(size)
  File "/home/ming0531/miniconda3/envs/mmdetect/lib/python3.8/site-packages/mmengine/evaluator/metric.py", line 133, in evaluate
    _metrics = self.compute_metrics(results)  # type: ignore
  File "/home/ming0531/miniconda3/envs/mmdetect/lib/python3.8/site-packages/mmdet/evaluation/metrics/coco_metric.py", line 419, in compute_metrics
    result_files = self.results2json(preds, outfile_prefix)
  File "/home/ming0531/miniconda3/envs/mmdetect/lib/python3.8/site-packages/mmdet/evaluation/metrics/coco_metric.py", line 239, in results2json
    data['category_id'] = self.cat_ids[label]
IndexError: list index out of range
Any help or guidance on resolving this issue would be appreciated.
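For what it's worth, the failing line `data['category_id'] = self.cat_ids[label]` suggests a mismatch between the edited class names and the categories in the annotation JSON: the metric builds `cat_ids` by matching the metainfo class names against the annotation file's category names, so if none match, the list comes out shorter than the model's label range. A stripped-down illustration of that failure mode (the data and variable names here are mine, not mmdet's):

```python
# Stripped-down illustration of the failure mode (not actual mmdet code).
# cat_ids is built by matching class names against the annotation file's
# categories; labels come from the model head. If the names don't match,
# cat_ids ends up shorter than the label range and indexing it fails.
ann_categories = [{'id': 1, 'name': 'person'}, {'id': 2, 'name': 'sports ball'}]
metainfo_classes = ('human', 'ball')  # edited names NOT present in the ann file

# Mimic looking up category ids by name (as COCO.get_cat_ids(cat_names=...) does):
cat_ids = [c['id'] for c in ann_categories if c['name'] in metainfo_classes]
print(cat_ids)  # -> []  no names matched, so every lookup below fails

label = 0  # first predicted label
try:
    category_id = cat_ids[label]
except IndexError as err:
    print('IndexError:', err)  # the same "list index out of range" as in the log
```

If that is the cause, making the `'classes'` names in METAINFO exactly match the `name` fields of the categories in the annotation JSON should make `cat_ids` line up with the labels again.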