deepaksinghcv · closed this issue 4 years ago
If your mask annotations are not polygons, but binary masks, cfg.INPUT.MASK_FORMAT
should be "bitmask" instead of "polygon". We'll improve the documentation about it.
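Concretely, assuming the usual config-based setup, the change looks like this. This is a sketch, not a full training script: the base-config path and the surrounding setup are placeholders from your own project, and `get_cfg` is detectron2's config factory.

```python
# Sketch: switch the mask format before building the data loader / trainer.
# The yaml path below is a placeholder for your own base config.
from detectron2.config import get_cfg

cfg = get_cfg()
cfg.merge_from_file("configs/Cityscapes/mask_rcnn_R_50_FPN.yaml")
cfg.INPUT.MASK_FORMAT = "bitmask"  # annotations are binary masks / RLE, not polygon lists
```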
I changed to "bitmask". It worked. Thank you.
> If your mask annotations are not polygons, but binary masks, cfg.INPUT.MASK_FORMAT should be "bitmask" instead of "polygon". We'll improve the documentation about it.
I have the same issue but changing the mask format didn't fix it. Is there anything else I can do to fix it?
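If changing the format alone does not help, it is worth checking what your `segmentation` fields actually contain. A quick stdlib-only diagnostic, assuming COCO-style annotation dicts; `describe_segmentation` is a hypothetical helper, not a detectron2 API:

```python
# Classify one annotation's "segmentation" value, assuming COCO conventions:
# a dict with "counts" is RLE, a list of coordinate lists is polygons,
# anything else (e.g. a raw ndarray) needs converting first.
def describe_segmentation(seg):
    if isinstance(seg, dict) and "counts" in seg:
        return "RLE (use MASK_FORMAT 'bitmask')"
    if isinstance(seg, list) and seg and isinstance(seg[0], (list, tuple)):
        return "polygon list (MASK_FORMAT 'polygon' is fine)"
    return "something else (e.g. a raw ndarray) -- needs conversion"

print(describe_segmentation({"counts": "abc", "size": [512, 512]}))
print(describe_segmentation([[10, 10, 50, 10, 50, 50]]))
```

Running this over a few of your dataset dicts usually shows whether the loader is handing detectron2 something it does not expect.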
If you do not know the root cause of the problem, and wish someone to help you, please post according to this template:
Instructions To Reproduce the Issue:
git diff
[06/07 17:18:56 d2.data.common]: Serializing 6944 elements to byte tensors and concatenating them all ...
[06/07 17:18:56 d2.data.common]: Serialized dataset takes 45.41 MiB
[06/07 17:18:56 d2.data.detection_utils]: TransformGens used in training: [ResizeShortestEdge(short_edge_length=(800, 832, 864, 896, 928, 960, 992, 1024), max_size=2048, sample_style='choice'), RandomFlip()]
[06/07 17:18:56 d2.data.build]: Using training sampler TrainingSampler
Unable to load 'roi_heads.box_predictor.cls_score.weight' to the model due to incompatible shapes: (81, 1024) in the checkpoint but (10, 1024) in the model!
Unable to load 'roi_heads.box_predictor.cls_score.bias' to the model due to incompatible shapes: (81,) in the checkpoint but (10,) in the model!
Unable to load 'roi_heads.box_predictor.bbox_pred.weight' to the model due to incompatible shapes: (320, 1024) in the checkpoint but (36, 1024) in the model!
Unable to load 'roi_heads.box_predictor.bbox_pred.bias' to the model due to incompatible shapes: (320,) in the checkpoint but (36,) in the model!
Unable to load 'roi_heads.mask_head.predictor.weight' to the model due to incompatible shapes: (80, 256, 1, 1) in the checkpoint but (9, 256, 1, 1) in the model!
Unable to load 'roi_heads.mask_head.predictor.bias' to the model due to incompatible shapes: (80,) in the checkpoint but (9,) in the model!
[06/07 17:18:57 d2.engine.train_loop]: Starting training from iteration 0
ERROR [06/07 17:18:57 d2.engine.train_loop]: Exception during training:
Traceback (most recent call last):
  File "/home/dksingh/inseg3/detectron2/detectron2/engine/train_loop.py", line 132, in train
    self.run_step()
  File "/home/dksingh/inseg3/detectron2/detectron2/engine/train_loop.py", line 209, in run_step
    data = next(self._data_loader_iter)
  File "/home/dksingh/inseg3/detectron2/detectron2/data/common.py", line 142, in __iter__
    for d in self.dataset:
  File "/home/dksingh/anaconda3/envs/dev/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
    data = self._next_data()
  File "/home/dksingh/anaconda3/envs/dev/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 856, in _next_data
    return self._process_data(data)
  File "/home/dksingh/anaconda3/envs/dev/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 881, in _process_data
    data.reraise()
  File "/home/dksingh/anaconda3/envs/dev/lib/python3.7/site-packages/torch/_utils.py", line 394, in reraise
    raise self.exc_type(msg)
AssertionError: Caught AssertionError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/dksingh/anaconda3/envs/dev/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/dksingh/anaconda3/envs/dev/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/dksingh/anaconda3/envs/dev/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/dksingh/inseg3/detectron2/detectron2/data/common.py", line 41, in __getitem__
    data = self._map_func(self._dataset[cur_idx])
  File "/home/dksingh/inseg3/detectron2/detectron2/utils/serialize.py", line 23, in __call__
    return self._obj(*args, **kwargs)
  File "/home/dksingh/inseg3/detectron2/detectron2/data/dataset_mapper.py", line 139, in __call__
    annos, image_shape, mask_format=self.mask_format
  File "/home/dksingh/inseg3/detectron2/detectron2/data/detection_utils.py", line 315, in annotations_to_instances
    masks = PolygonMasks(segms)
  File "/home/dksingh/inseg3/detectron2/detectron2/structures/masks.py", line 271, in __init__
    process_polygons(polygons_per_instance) for polygons_per_instance in polygons
  File "/home/dksingh/inseg3/detectron2/detectron2/structures/masks.py", line 271, in <listcomp>
    process_polygons(polygons_per_instance) for polygons_per_instance in polygons
  File "/home/dksingh/inseg3/detectron2/detectron2/structures/masks.py", line 262, in process_polygons
    "Got '{}' instead.".format(type(polygons_per_instance))
AssertionError: Cannot create polygons: Expect a list of polygons per instance. Got '<class 'numpy.ndarray'>' instead.
[06/07 17:18:57 d2.engine.hooks]: Total training time: 0:00:00 (0:00:00 on hooks)
Traceback (most recent call last):
  File "idd_trainer.py", line 33, in <module>
    trainer.train()
  File "/home/dksingh/inseg3/detectron2/detectron2/engine/defaults.py", line 402, in train
    super().train(self.start_iter, self.max_iter)
  File "/home/dksingh/inseg3/detectron2/detectron2/engine/train_loop.py", line 132, in train
    self.run_step()
  File "/home/dksingh/inseg3/detectron2/detectron2/engine/train_loop.py", line 209, in run_step
    data = next(self._data_loader_iter)
  File "/home/dksingh/inseg3/detectron2/detectron2/data/common.py", line 142, in __iter__
    for d in self.dataset:
  File "/home/dksingh/anaconda3/envs/dev/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
    data = self._next_data()
  File "/home/dksingh/anaconda3/envs/dev/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 856, in _next_data
    return self._process_data(data)
  File "/home/dksingh/anaconda3/envs/dev/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 881, in _process_data
    data.reraise()
  File "/home/dksingh/anaconda3/envs/dev/lib/python3.7/site-packages/torch/_utils.py", line 394, in reraise
    raise self.exc_type(msg)
AssertionError: Caught AssertionError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/dksingh/anaconda3/envs/dev/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/dksingh/anaconda3/envs/dev/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/dksingh/anaconda3/envs/dev/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/dksingh/inseg3/detectron2/detectron2/data/common.py", line 41, in __getitem__
    data = self._map_func(self._dataset[cur_idx])
  File "/home/dksingh/inseg3/detectron2/detectron2/utils/serialize.py", line 23, in __call__
    return self._obj(*args, **kwargs)
  File "/home/dksingh/inseg3/detectron2/detectron2/data/dataset_mapper.py", line 139, in __call__
    annos, image_shape, mask_format=self.mask_format
  File "/home/dksingh/inseg3/detectron2/detectron2/data/detection_utils.py", line 315, in annotations_to_instances
    masks = PolygonMasks(segms)
  File "/home/dksingh/inseg3/detectron2/detectron2/structures/masks.py", line 271, in __init__
    process_polygons(polygons_per_instance) for polygons_per_instance in polygons
  File "/home/dksingh/inseg3/detectron2/detectron2/structures/masks.py", line 271, in <listcomp>
    process_polygons(polygons_per_instance) for polygons_per_instance in polygons
  File "/home/dksingh/inseg3/detectron2/detectron2/structures/masks.py", line 262, in process_polygons
    "Got '{}' instead.".format(type(polygons_per_instance))
AssertionError: Cannot create polygons: Expect a list of polygons per instance. Got '<class 'numpy.ndarray'>' instead.
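The final AssertionError comes from a plain type check: with MASK_FORMAT left at "polygon", PolygonMasks requires each instance's segmentation to be a *list* of coordinate sequences, so a dense numpy mask fails immediately. A minimal stdlib-only re-creation of that check (`FakeNdarray` is a stand-in for the real numpy type, and this `process_polygons` only mimics the shape of detectron2's):

```python
# Mimic of the failing check: a dense mask array is not a list of polygons.
def process_polygons(polygons_per_instance):
    assert isinstance(polygons_per_instance, list), (
        "Cannot create polygons: Expect a list of polygons per instance. "
        "Got '{}' instead.".format(type(polygons_per_instance))
    )
    return polygons_per_instance

class FakeNdarray:  # stands in for numpy.ndarray (a dense binary mask)
    pass

try:
    process_polygons(FakeNdarray())
except AssertionError as e:
    print(e)  # same shape of message as in the log above
```

This is why the fix is on the config side: "bitmask" routes the same annotations through BitMasks instead of PolygonMasks, so the list-of-polygons check is never reached.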
The config file is similar to the Cityscapes/mask_rcnn_R_50_FPN.yaml provided with detectron2; I have only changed NUM_CLASSES, DATASETS.TRAIN, and DATASETS.TEST.
Expected behavior:
I expected the code to crash since I haven't specified any GPUs, but it crashes before training even starts. Could you kindly tell me the right procedure to train, or where the mistake is?
Environment: