dvlab-research / PanopticFCN

Fully Convolutional Networks for Panoptic Segmentation (CVPR2021 Oral)

Inference on Cityscapes #48

Open canglangzhige opened 1 year ago

canglangzhige commented 1 year ago

Hello, I trained a model on the Cityscapes dataset using the PanopticFCN Cityscapes implementation. Now I want to use demo.py from detectron2 to see the result of my trained model on a new image. I have already imported the PanopticFCN config in demo.py and also registered the project in detectron2/projects/__init__.py:

from panopticfcn.config import add_panopticfcn_config  # noqa

add_panopticfcn_config(cfg)

_PROJECTS = {
    "point_rend": "PointRend",
    "deeplab": "DeepLab",
    "panoptic_deeplab": "Panoptic-DeepLab",
    "panopticfcn": "PanopticFCN",
}
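
Concretely, the relevant part of setup_cfg() in demo/demo.py now looks roughly like this (a sketch of my edit; the surrounding lines are from detectron2's stock demo script):

from detectron2.config import get_cfg
from panopticfcn.config import add_panopticfcn_config  # noqa

def setup_cfg(args):
    # Load the default detectron2 config, then add the PanopticFCN-specific
    # keys before merging the YAML file, so merge_from_file does not fail
    # on unknown config options.
    cfg = get_cfg()
    add_panopticfcn_config(cfg)
    cfg.merge_from_file(args.config_file)
    cfg.merge_from_list(args.opts)
    cfg.freeze()
    return cfg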

But when I try to run demo.py like this:

python demo/demo.py --config-file projects/PanopticFCN/configs/cityscapes/PanopticFCN-R50-cityscapes.yaml --input projects/PanopticFCN/images/image1.png --output projects/PanopticFCN/results --opts MODEL.WEIGHTS projects/PanopticFCN/model/model_final.pth

I get the following error. What do I still need to do to be able to run demo.py?

Traceback (most recent call last):
  File "demo/demo.py", line 117, in <module>
    predictions, visualized_output = demo.run_on_image(img)
  File "/home/sqw/anaconda3/envs/semantic_nerf/lib/python3.7/site-packages/detectron2_new/demo/predictor.py", line 48, in run_on_image
    predictions = self.predictor(image)
  File "/home/sqw/anaconda3/envs/semantic_nerf/lib/python3.7/site-packages/detectron2/engine/defaults.py", line 317, in __call__
    predictions = self.model([inputs])[0]
  File "/home/sqw/anaconda3/envs/semantic_nerf/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/sqw/anaconda3/envs/semantic_nerf/lib/python3.7/site-packages/detectron2_new/projects/PanopticFCN_cityscapes/panopticfcn/panoptic_seg.py", line 110, in forward
    return self.inference(batched_inputs, images, pred_centers, pred_regions, pred_weights, encode_feat)
  File "/home/sqw/anaconda3/envs/semantic_nerf/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/sqw/anaconda3/envs/semantic_nerf/lib/python3.7/site-packages/detectron2_new/projects/PanopticFCN_cityscapes/panopticfcn/panoptic_seg.py", line 421, in inference
    self.panoptic_inst_thrs)
  File "/home/sqw/anaconda3/envs/semantic_nerf/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/sqw/anaconda3/envs/semantic_nerf/lib/python3.7/site-packages/detectron2_new/projects/PanopticFCN_cityscapes/panopticfcn/panoptic_seg.py", line 542, in combine_thing_and_stuff
    category_id = self.meta.thing_train_id2contiguous_id[thing_category_id]
  File "/home/sqw/anaconda3/envs/semantic_nerf/lib/python3.7/site-packages/detectron2/data/catalog.py", line 132, in __getattr__
    f"Attribute '{key}' does not exist in the metadata of dataset '{self.name}': "
AttributeError: Attribute 'thing_train_id2contiguous_id' does not exist in the metadata of dataset 'cityscapes_fine_panoptic_train_separated': metadata is empty.

This is the same issue as https://github.com/dvlab-research/PanopticFCN/issues/45, but that one has been closed. Could you please tell me what I should change to be able to use demo.py from detectron2 for this?
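
In the meantime I tried to work around it by filling in the missing metadata myself before building the predictor. This is only a sketch: the thing_train_id2contiguous_id name comes from the traceback, the stuff_ variant is my guess by analogy, and the id values are derived from cityscapesscripts, so they may not match what PanopticFCN was trained with:

# Workaround sketch (my own guess): register the thing/stuff trainId ->
# contiguous-id mappings that panoptic_seg.py reads from the dataset metadata.
# thing_train_id2contiguous_id is the attribute named in the traceback;
# stuff_train_id2contiguous_id is assumed by analogy.
from cityscapesscripts.helpers.labels import labels
from detectron2.data import MetadataCatalog

meta = MetadataCatalog.get("cityscapes_fine_panoptic_train_separated")

# Cityscapes "thing" classes have instances; "stuff" classes do not.
thing_train_ids = [l.trainId for l in labels if l.hasInstances and not l.ignoreInEval]
stuff_train_ids = [l.trainId for l in labels if not l.hasInstances and not l.ignoreInEval]

meta.set(
    thing_train_id2contiguous_id={t: i for i, t in enumerate(thing_train_ids)},
    stuff_train_id2contiguous_id={s: i for i, s in enumerate(stuff_train_ids)},
)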