Closed shabbbb closed 3 years ago
Hi, did you use your own dataset, or the original COCO dataset? Also, please re-paste your error; it's hard to read as-is. You can wrap it in a fenced code block (three backticks before and after, optionally with `shell`), which renders like:

```shell
your error
```
Hey sir, I tried to use the original COCO dataset, but memory ran out, so I used only part of it. The format is the same as the original COCO dataset. The error is as follows:
```shell
loading annotations into memory...
---------------------------------------------------------------------------
JSONDecodeError                           Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/mmcv/utils/registry.py in build_from_cfg(cfg, registry, default_args)
     50     try:
---> 51         return obj_cls(**args)
     52     except Exception as e:

10 frames
/content/mmdetection/mmdet/datasets/custom.py in __init__(self, ann_file, pipeline, classes, data_root, img_prefix, seg_prefix, proposal_file, test_mode, filter_empty_gt)
     87         # load annotations (and proposals)
---> 88         self.data_infos = self.load_annotations(self.ann_file)
     89

<ipython-input-11-36b37ed1d5fb> in load_annotations(self, ann_file)
     46
---> 47         self.coco = COCO(ann_file)
     48         # The order of returned `cat_ids` will not

/content/mmdetection/mmdet/datasets/api_wrappers/coco_api.py in __init__(self, annotation_file)
     21                 UserWarning)
---> 22         super().__init__(annotation_file=annotation_file)
     23         self.img_ann_map = self.imgToAnns

/usr/local/lib/python3.7/dist-packages/pycocotools/coco.py in __init__(self, annotation_file)
     84             with open(annotation_file, 'r') as f:
---> 85                 dataset = json.load(f)
     86             assert type(dataset)==dict, 'annotation file format {} not supported'.format(type(dataset))

/usr/lib/python3.7/json/__init__.py in load(fp, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
    295         parse_float=parse_float, parse_int=parse_int,
--> 296         parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
    297

/usr/lib/python3.7/json/__init__.py in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
    347             parse_constant is None and object_pairs_hook is None and not kw):
--> 348         return _default_decoder.decode(s)
    349     if cls is None:

/usr/lib/python3.7/json/decoder.py in decode(self, s, _w)
    336         """
--> 337         obj, end = self.raw_decode(s, idx=_w(s, 0).end())
    338         end = _w(s, end).end()

/usr/lib/python3.7/json/decoder.py in raw_decode(self, s, idx)
    352         try:
--> 353             obj, end = self.scan_once(s, idx)
    354         except StopIteration as err:

JSONDecodeError: Expecting property name enclosed in double quotes: line 1853893 column 12 (char 51380224)

During handling of the above exception, another exception occurred:

TypeError                                 Traceback (most recent call last)
<ipython-input-14-153948f490f2> in <module>()
      5
      6 # Build dataset
----> 7 datasets = [build_dataset(cfg.data.train)]
      8 datasets

/content/mmdetection/mmdet/datasets/builder.py in build_dataset(cfg, default_args)
     69         dataset = _concat_dataset(cfg, default_args)
     70     else:
---> 71         dataset = build_from_cfg(cfg, DATASETS, default_args)
     72
     73     return dataset

/usr/local/lib/python3.7/dist-packages/mmcv/utils/registry.py in build_from_cfg(cfg, registry, default_args)
     52     except Exception as e:
     53         # Normal TypeError does not print class name.
---> 54         raise type(e)(f'{obj_cls.__name__}: {e}')
     55
     56

TypeError: __init__() missing 2 required positional arguments: 'doc' and 'pos'
```
https://colab.research.google.com/drive/1JP1rZKYZwxYlKH0pPZst1WwaWn1OPmhK?usp=sharing
This is my code, sir. Thank you.
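A note on the traceback above: the final `TypeError` is a side effect of how mmcv's `build_from_cfg` re-raises exceptions with `raise type(e)(f'{obj_cls.__name__}: {e}')`. Since `json.JSONDecodeError`'s constructor also requires `doc` and `pos` arguments, the re-raise itself fails, and the real problem is the `JSONDecodeError` higher up, i.e. a malformed annotation file. A minimal sketch reproducing this with only the standard library:

```python
import json

# mmcv's build_from_cfg re-raises caught exceptions as
#   raise type(e)(f'{obj_cls.__name__}: {e}')
# but json.JSONDecodeError.__init__ requires (msg, doc, pos), so the
# re-raise fails with the TypeError seen at the end of the traceback.
try:
    json.loads("{bad json}")
except json.JSONDecodeError as e:
    try:
        raise type(e)(f"CocoDataset: {e}")  # mimics the mmcv re-raise
    except TypeError as te:
        print(te)  # missing 2 required positional arguments: 'doc' and 'pos'
```

So the fix is always to repair the JSON file itself, not the `TypeError`.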
Did you solve your issue? I have the same error; can you help me? I changed my dataset to the same format as stated in the tutorial.
Hey, was the dataset you used the original COCO dataset? I guess mmdetection only supports the original one. I was trying it in Google Colab and found that Colab does not provide enough memory for the original COCO dataset, so I switched to the Pascal VOC dataset. Thank you...
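If memory is the only reason for trimming the dataset, one option is to generate the smaller annotation file programmatically instead of editing the JSON by hand, so the output is guaranteed to be syntactically valid. A rough sketch (the `subset_coco` helper name, paths, and image count are placeholders, not part of mmdetection):

```python
import json

def subset_coco(src_path, dst_path, num_images=500):
    """Keep the first num_images images and their annotations; write valid JSON."""
    with open(src_path) as f:
        coco = json.load(f)  # fails loudly here if the source file is already broken
    keep = coco["images"][:num_images]
    keep_ids = {img["id"] for img in keep}
    coco["images"] = keep
    # Drop annotations that point at removed images.
    coco["annotations"] = [a for a in coco["annotations"] if a["image_id"] in keep_ids]
    with open(dst_path, "w") as f:
        json.dump(coco, f)  # json.dump always emits syntactically valid JSON
```

The resulting file keeps the standard COCO structure, so it can be used in a config unchanged.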
I had the same problem, but after seeing this discussion (https://github.com/ucbdrive/3d-vehicle-tracking/issues/21) I checked my JSON file again. A small error had crept into it (an extra `],`), and after removing it train.py ran fine, although it then hit "CUDA out of memory" (^o^)
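For anyone else hitting this: `json.JSONDecodeError` carries `lineno`, `colno`, and `msg`, so a few lines of Python can point straight at a stray `],` without opening a huge annotation file in an editor. A small sketch (`check_coco_json` is an ad-hoc helper name):

```python
import json

def check_coco_json(path):
    """Return True if the file is valid JSON; otherwise report where parsing fails."""
    with open(path) as f:
        text = f.read()
    try:
        json.loads(text)
        return True
    except json.JSONDecodeError as e:
        print(f"Invalid JSON at line {e.lineno}, column {e.colno}: {e.msg}")
        lines = text.splitlines()
        if e.lineno <= len(lines):
            # Show a window around the failure so stray characters stand out.
            bad = lines[e.lineno - 1]
            print(bad[max(0, e.colno - 40):e.colno + 40])
        return False
```

Running it on the broken file prints the same line/column as the traceback (e.g. line 1853893, column 12) plus the surrounding text.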
Sir, during training of mmdetection with the COCO dataset, I am getting the following error:
```shell
loading annotations into memory...
JSONDecodeError                           Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/mmcv/utils/registry.py in build_from_cfg(cfg, registry, default_args)
     50     try:
---> 51         return obj_cls(**args)
     52     except Exception as e:

10 frames
/content/mmdetection/mmdet/datasets/custom.py in __init__(self, ann_file, pipeline, classes, data_root, img_prefix, seg_prefix, proposal_file, test_mode, filter_empty_gt)
     87         # load annotations (and proposals)
---> 88         self.data_infos = self.load_annotations(self.ann_file)
     89
```