AttributeError: Attribute 'stuff_classes' does not exist in the metadata of dataset 'coco_my_val_separated'. Available keys are dict_keys(['name', 'panoptic_root', 'image_root', 'panoptic_json', 'sem_seg_root', 'json_file', 'evaluator_type', 'ignore_label', 'thing_classes', 'thing_dataset_id_to_contiguous_id']). #4917

## Instructions To Reproduce the 🐛 Bug:

1. Full changes I have made: see the dataset registration described under 4. below.
2. What exact command you run: `python train_net.py`
3. Full logs or other relevant observations:
[04/18 09:09:05 d2.data.dataset_mapper]: [DatasetMapper] Augmentations used in training: [RandomCrop(crop_type='relative_range', crop_size=[0.9, 0.9]), ResizeShortestEdge(short_edge_length=(640, 672, 704, 736, 768, 800), max_size=1333, sample_style='choice'), RandomFlip()]
[04/18 09:09:05 d2.data.build]: Using training sampler TrainingSampler
[04/18 09:09:05 d2.data.common]: Serializing 118287 elements to byte tensors and concatenating them all ...
[04/18 09:09:08 d2.data.common]: Serialized dataset takes 461.71 MiB
2023-04-18 09:09:11.993950: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cudart64_110.dll
[04/18 09:09:14 fvcore.common.checkpoint]: [Checkpointer] Loading from configfile/model_final_c10459.pkl ...
[04/18 09:09:14 fvcore.common.checkpoint]: Reading a file from 'Detectron2 Model Zoo'
[04/18 09:09:15 d2.engine.train_loop]: Starting training from iteration 0
[04/18 09:09:29 d2.utils.events]: eta: 1:42:08 iter: 19 total_loss: 0.8808 loss_sem_seg: 0.163 loss_rpn_cls: 0.01234 loss_rpn_loc: 0.02302 loss_cls: 0.1425 loss_box_reg: 0.2256 loss_mask: 0.1804 time: 0.6293 data_time: 0.0296 lr: 7.7924e-06 max_mem: 2437M
D:\Users\gaotsingyuan\anaconda3\envs\python37\lib\site-packages\fvcore\transforms\transform.py:724: ShapelyDeprecationWarning: Iteration over multi-part geometries is deprecated and will be removed in Shapely 2.0. Use the `geoms` property to access the constituent parts of a multi-part geometry.
for poly in cropped:
D:\Users\gaotsingyuan\anaconda3\envs\python37\lib\site-packages\fvcore\transforms\transform.py:724: ShapelyDeprecationWarning: Iteration over multi-part geometries is deprecated and will be removed in Shapely 2.0. Use the `geoms` property to access the constituent parts of a multi-part geometry.
for poly in cropped:
D:\Users\gaotsingyuan\anaconda3\envs\python37\lib\site-packages\fvcore\transforms\transform.py:724: ShapelyDeprecationWarning: Iteration over multi-part geometries is deprecated and will be removed in Shapely 2.0. Use the `geoms` property to access the constituent parts of a multi-part geometry.
for poly in cropped:
[04/18 09:09:41 d2.utils.events]: eta: 1:43:32 iter: 39 total_loss: 0.673 loss_sem_seg: 0.09647 loss_rpn_cls: 0.01844 loss_rpn_loc: 0.04783 loss_cls: 0.1308 loss_box_reg: 0.1333 loss_mask: 0.188 time: 0.6390 data_time: 0.0271 lr: 1.5784e-05 max_mem: 2437M
D:\Users\gaotsingyuan\anaconda3\envs\python37\lib\site-packages\fvcore\transforms\transform.py:724: ShapelyDeprecationWarning: Iteration over multi-part geometries is deprecated and will be removed in Shapely 2.0. Use the `geoms` property to access the constituent parts of a multi-part geometry.
for poly in cropped:
[04/18 09:09:54 d2.utils.events]: eta: 1:44:25 iter: 59 total_loss: 0.8675 loss_sem_seg: 0.1861 loss_rpn_cls: 0.01036 loss_rpn_loc: 0.01953 loss_cls: 0.2103 loss_box_reg: 0.2019 loss_mask: 0.2035 time: 0.6389 data_time: 0.0248 lr: 2.3776e-05 max_mem: 2437M
D:\Users\gaotsingyuan\anaconda3\envs\python37\lib\site-packages\fvcore\transforms\transform.py:724: ShapelyDeprecationWarning: Iteration over multi-part geometries is deprecated and will be removed in Shapely 2.0. Use the `geoms` property to access the constituent parts of a multi-part geometry.
for poly in cropped:
D:\Users\gaotsingyuan\anaconda3\envs\python37\lib\site-packages\fvcore\transforms\transform.py:724: ShapelyDeprecationWarning: Iteration over multi-part geometries is deprecated and will be removed in Shapely 2.0. Use the `geoms` property to access the constituent parts of a multi-part geometry.
for poly in cropped:
[04/18 09:10:07 d2.utils.events]: eta: 1:43:45 iter: 79 total_loss: 0.8721 loss_sem_seg: 0.1211 loss_rpn_cls: 0.01267 loss_rpn_loc: 0.03563 loss_cls: 0.1317 loss_box_reg: 0.1683 loss_mask: 0.216 time: 0.6371 data_time: 0.0266 lr: 3.1768e-05 max_mem: 2437M
D:\Users\gaotsingyuan\anaconda3\envs\python37\lib\site-packages\fvcore\transforms\transform.py:724: ShapelyDeprecationWarning: Iteration over multi-part geometries is deprecated and will be removed in Shapely 2.0. Use the `geoms` property to access the constituent parts of a multi-part geometry.
for poly in cropped:
[04/18 09:10:20 d2.data.datasets.coco]: Loaded 5000 images in COCO format from E:\数据集\MS COCO\annotations\instances_val2017.json
[04/18 09:10:20 d2.data.datasets.coco]: Loaded 5000 images with semantic segmentation from E:\数据集\MS COCO\val2017
[04/18 09:10:20 d2.data.build]: Distribution of instances among all 80 categories:
| category | #instances | category | #instances | category | #instances |
|:-------------:|:-------------|:------------:|:-------------|:-------------:|:-------------|
| person | 10777 | bicycle | 314 | car | 1918 |
| motorcycle | 367 | airplane | 143 | bus | 283 |
| train | 190 | truck | 414 | boat | 424 |
| traffic light | 634 | fire hydrant | 101 | stop sign | 75 |
| parking meter | 60 | bench | 411 | bird | 427 |
| cat | 202 | dog | 218 | horse | 272 |
| sheep | 354 | cow | 372 | elephant | 252 |
| bear | 71 | zebra | 266 | giraffe | 232 |
| backpack | 371 | umbrella | 407 | handbag | 540 |
| tie | 252 | suitcase | 299 | frisbee | 115 |
| skis | 241 | snowboard | 69 | sports ball | 260 |
| kite | 327 | baseball bat | 145 | baseball gl.. | 148 |
| skateboard | 179 | surfboard | 267 | tennis racket | 225 |
| bottle | 1013 | wine glass | 341 | cup | 895 |
| fork | 215 | knife | 325 | spoon | 253 |
| bowl | 623 | banana | 370 | apple | 236 |
| sandwich | 177 | orange | 285 | broccoli | 312 |
| carrot | 365 | hot dog | 125 | pizza | 284 |
| donut | 328 | cake | 310 | chair | 1771 |
| couch | 261 | potted plant | 342 | bed | 163 |
| dining table | 695 | toilet | 179 | tv | 288 |
| laptop | 231 | mouse | 106 | remote | 283 |
| keyboard | 153 | cell phone | 262 | microwave | 55 |
| oven | 143 | toaster | 9 | sink | 225 |
| refrigerator | 126 | book | 1129 | clock | 267 |
| vase | 274 | scissors | 36 | teddy bear | 190 |
| hair drier | 11 | toothbrush | 57 | | |
| total | 36335 | | | | |
[04/18 09:10:20 d2.data.dataset_mapper]: [DatasetMapper] Augmentations used in inference: [ResizeShortestEdge(short_edge_length=(800, 800), max_size=1333, sample_style='choice')]
[04/18 09:10:20 d2.data.common]: Serializing 5000 elements to byte tensors and concatenating them all ...
[04/18 09:10:20 d2.data.common]: Serialized dataset takes 19.53 MiB
[04/18 09:10:21 d2.data.datasets.coco]: Loaded 5000 images in COCO format from E:\数据集\MS COCO\annotations\instances_val2017.json
[04/18 09:10:21 d2.data.datasets.coco]: Loaded 5000 images with semantic segmentation from E:\数据集\MS COCO\val2017
ERROR [04/18 09:10:22 d2.engine.train_loop]: Exception during training:
Traceback (most recent call last):
File "C:\Users\gaotsingyuan\detectron2\detectron2\engine\train_loop.py", line 150, in train
self.after_step()
File "C:\Users\gaotsingyuan\detectron2\detectron2\engine\train_loop.py", line 180, in after_step
h.after_step()
File "C:\Users\gaotsingyuan\detectron2\detectron2\engine\hooks.py", line 555, in after_step
self._do_eval()
File "C:\Users\gaotsingyuan\detectron2\detectron2\engine\hooks.py", line 528, in _do_eval
results = self._func()
File "C:\Users\gaotsingyuan\detectron2\detectron2\engine\defaults.py", line 453, in test_and_save_results
self._last_eval_results = self.test(self.cfg, self.model)
File "C:\Users\gaotsingyuan\detectron2\detectron2\engine\defaults.py", line 609, in test
evaluator = cls.build_evaluator(cfg, dataset_name)
File "D:/Users/gaotsingyuan/PycharmProjects/pythonProject3/test3-panoptic.py", line 123, in build_evaluator
return build_evaluator(cfg, dataset_name, output_folder)
File "D:/Users/gaotsingyuan/PycharmProjects/pythonProject3/test3-panoptic.py", line 89, in build_evaluator
output_dir=output_folder,
File "C:\Users\gaotsingyuan\detectron2\detectron2\evaluation\sem_seg_evaluation.py", line 89, in __init__
self._class_names = meta.stuff_classes
File "C:\Users\gaotsingyuan\detectron2\detectron2\data\catalog.py", line 128, in __getattr__
"keys are {}.".format(key, self.name, str(self.__dict__.keys()))
AttributeError: Attribute 'stuff_classes' does not exist in the metadata of dataset 'coco_my_val_separated'. Available keys are dict_keys(['name', 'panoptic_root', 'image_root', 'panoptic_json', 'sem_seg_root', 'json_file', 'evaluator_type', 'ignore_label', 'thing_classes', 'thing_dataset_id_to_contiguous_id']).
[04/18 09:10:22 d2.engine.hooks]: Overall training speed: 97 iterations in 0:01:01 (0.6389 s / it)
[04/18 09:10:22 d2.engine.hooks]: Total training time: 0:01:04 (0:00:02 on hooks)
[04/18 09:10:22 d2.utils.events]: eta: 1:43:08 iter: 99 total_loss: 0.6138 loss_sem_seg: 0.08962 loss_rpn_cls: 0.006415 loss_rpn_loc: 0.02325 loss_cls: 0.1036 loss_box_reg: 0.1353 loss_mask: 0.1793 time: 0.6324 data_time: 0.0248 lr: 3.976e-05 max_mem: 2437M
Traceback (most recent call last):
File "D:/Users/gaotsingyuan/PycharmProjects/pythonProject3/test3-panoptic.py", line 456, in <module>
args=(args,),
File "C:\Users\gaotsingyuan\detectron2\detectron2\engine\launch.py", line 82, in launch
main_func(*args)
File "D:/Users/gaotsingyuan/PycharmProjects/pythonProject3/test3-panoptic.py", line 440, in main
return trainer.train()
File "C:\Users\gaotsingyuan\detectron2\detectron2\engine\defaults.py", line 484, in train
super().train(self.start_iter, self.max_iter)
File "C:\Users\gaotsingyuan\detectron2\detectron2\engine\train_loop.py", line 150, in train
self.after_step()
File "C:\Users\gaotsingyuan\detectron2\detectron2\engine\train_loop.py", line 180, in after_step
h.after_step()
File "C:\Users\gaotsingyuan\detectron2\detectron2\engine\hooks.py", line 555, in after_step
self._do_eval()
File "C:\Users\gaotsingyuan\detectron2\detectron2\engine\hooks.py", line 528, in _do_eval
results = self._func()
File "C:\Users\gaotsingyuan\detectron2\detectron2\engine\defaults.py", line 453, in test_and_save_results
self._last_eval_results = self.test(self.cfg, self.model)
File "C:\Users\gaotsingyuan\detectron2\detectron2\engine\defaults.py", line 609, in test
evaluator = cls.build_evaluator(cfg, dataset_name)
File "D:/Users/gaotsingyuan/PycharmProjects/pythonProject3/test3-panoptic.py", line 123, in build_evaluator
return build_evaluator(cfg, dataset_name, output_folder)
File "D:/Users/gaotsingyuan/PycharmProjects/pythonProject3/test3-panoptic.py", line 89, in build_evaluator
output_dir=output_folder,
File "C:\Users\gaotsingyuan\detectron2\detectron2\evaluation\sem_seg_evaluation.py", line 89, in __init__
self._class_names = meta.stuff_classes
File "C:\Users\gaotsingyuan\detectron2\detectron2\data\catalog.py", line 128, in __getattr__
"keys are {}.".format(key, self.name, str(self.__dict__.keys()))
AttributeError: Attribute 'stuff_classes' does not exist in the metadata of dataset 'coco_my_val_separated'. Available keys are dict_keys(['name', 'panoptic_root', 'image_root', 'panoptic_json', 'sem_seg_root', 'json_file', 'evaluator_type', 'ignore_label', 'thing_classes', 'thing_dataset_id_to_contiguous_id']).
Process finished with exit code 1
4. please simplify the steps as much as possible so they do not require additional resources to run, such as a private dataset.
I used the **official** COCO2017 dataset, registered with `register_coco_panoptic_separated`. The **image_root, panoptic_root, panoptic_json and instances_json** point to the files downloaded directly from https://cocodataset.org/#download. The **sem_seg_root** points to the folder converted from the panoptic files, following https://github.com/facebookresearch/detectron2/blob/main/datasets/prepare_panoptic_fpn.py. For the **metadata** argument I passed an empty dict, as in the sketch below.
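For completeness, a minimal sketch of how I register the validation split (the paths here are shortened placeholders, not my real ones; the function appends `_separated` to the name, which is where `coco_my_val_separated` comes from):

```python
# Minimal sketch of my dataset registration (illustrative paths, not the real ones).
from detectron2.data.datasets import register_coco_panoptic_separated

register_coco_panoptic_separated(
    name="coco_my_val",                 # registered as "coco_my_val_separated"
    metadata={},                        # empty dict, so no 'stuff_*' keys are ever set
    image_root="datasets/coco/val2017",
    panoptic_root="datasets/coco/panoptic_val2017",
    panoptic_json="datasets/coco/annotations/panoptic_val2017.json",
    sem_seg_root="datasets/coco/panoptic_stuff_val2017",  # produced by prepare_panoptic_fpn.py
    instances_json="datasets/coco/annotations/instances_val2017.json",
)
```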
The rest of the training code and settings are the same as what I used to train Mask R-CNN, which trains and evaluates successfully.
Training itself runs, but as soon as the panoptic evaluation starts I get this error: AttributeError: Attribute 'stuff_classes' does not exist in the metadata of dataset 'coco_my_val_separated'. Available keys are dict_keys(['name', 'panoptic_root', 'image_root', 'panoptic_json', 'sem_seg_root', 'json_file', 'evaluator_type', 'ignore_label', 'thing_classes', 'thing_dataset_id_to_contiguous_id']).
When I set `MetadataCatalog.get('coco_my_val_separated').set(stuff_classes=[])`, I get another error: AttributeError: Attribute 'stuff_dataset_id_to_contiguous_id' does not exist in the metadata of dataset 'coco_my_val_separated'. Available keys are dict_keys(['name', 'stuff_classes', 'panoptic_root', 'image_root', 'panoptic_json', 'sem_seg_root', 'json_file', 'evaluator_type', 'ignore_label', 'thing_classes', 'thing_dataset_id_to_contiguous_id']).
It seems that `stuff_classes` and `stuff_dataset_id_to_contiguous_id` are simply missing from the metadata. How can I set them? Do I have to put all the stuff categories into the metadata manually, or is there a builtin way? See the sketch below for what I mean.
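For reference, this is the kind of thing I am asking about; an untested sketch that pulls the missing keys from detectron2's builtin COCO metadata helper (`_get_builtin_metadata` is assumed here to be an acceptable source for these keys):

```python
# Untested sketch: fill the missing 'stuff_*' keys from detectron2's builtin
# "coco_panoptic_separated" metadata instead of typing all categories by hand.
from detectron2.data import MetadataCatalog
from detectron2.data.datasets.builtin_meta import _get_builtin_metadata

builtin_meta = _get_builtin_metadata("coco_panoptic_separated")
# builtin_meta contains 'stuff_classes', 'stuff_dataset_id_to_contiguous_id',
# 'thing_classes', 'thing_dataset_id_to_contiguous_id', colors, etc.

MetadataCatalog.get("coco_my_val_separated").set(
    stuff_classes=builtin_meta["stuff_classes"],
    stuff_dataset_id_to_contiguous_id=builtin_meta["stuff_dataset_id_to_contiguous_id"],
)
```

The same dict could presumably also be passed as the `metadata` argument of `register_coco_panoptic_separated` instead of the empty dict shown above, but I have not verified either approach.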
## Expected behavior:
I want to train Panoptic FPN on the official COCO2017 dataset and expect training and evaluation to work.
## Environment:
sys.platform win32
Python 3.7.11 (default, Jul 27 2021, 09:42:29) [MSC v.1916 64 bit (AMD64)]
numpy 1.21.5
detectron2 0.6 @C:\Users\gaotsingyuan\detectron2\detectron2
detectron2._C not built correctly: DLL load failed: The specified procedure could not be found.
DETECTRON2_ENV_MODULE
PyTorch 1.8.0 @D:\Users\gaotsingyuan\anaconda3\envs\python37\lib\site-packages\torch
PyTorch debug build False
GPU available Yes
GPU 0 NVIDIA GeForce GTX 1650 Ti (arch=7.5)
Driver version 512.78
CUDA_HOME C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2
Pillow 9.2.0
torchvision 0.9.0 @D:\Users\gaotsingyuan\anaconda3\envs\python37\lib\site-packages\torchvision
torchvision arch flags D:\Users\gaotsingyuan\anaconda3\envs\python37\lib\site-packages\torchvision_C.pyd; cannot find cuobjdump
fvcore 0.1.5.post20220512
iopath 0.1.9
cv2 4.6.0
PyTorch built with:

Did you ever get a fix for this issue? I'm trying to run the COCOPanopticEvaluator and I'm hitting the same error: 'stuff_dataset_id_to_contiguous_id' does not exist in the metadata of the dataset.