Closed kxhit closed 2 years ago
Hi, thanks for your interest! I'm kind of busy these days and will look into your issue in a few days. Sorry for the delay!
Best.
Hi, I put a demo example here; you can try it and see if you still hit any bugs.
Hi, thanks a lot for your help!
But I still get the errors below. I'm using mmcv 1.3.9, mmcv-full 1.4.0, and mmsegmentation 0.17.0.
sh run.sh
Traceback (most recent call last):
File "image_demo.py", line 41, in <module>
main()
File "image_demo.py", line 28, in main
model = init_segmentor(args.config, args.checkpoint, device=args.device)
File "/home/xin/anaconda3/envs/openmmlab/lib/python3.7/site-packages/mmseg/apis/inference.py", line 32, in init_segmentor
model = build_segmentor(config.model, test_cfg=config.get('test_cfg'))
File "/home/xin/anaconda3/envs/openmmlab/lib/python3.7/site-packages/mmseg/models/builder.py", line 49, in build_segmentor
cfg, default_args=dict(train_cfg=train_cfg, test_cfg=test_cfg))
File "/home/xin/anaconda3/envs/openmmlab/lib/python3.7/site-packages/mmcv/utils/registry.py", line 212, in build
return self.build_func(*args, **kwargs, registry=self)
File "/home/xin/anaconda3/envs/openmmlab/lib/python3.7/site-packages/mmcv/cnn/builder.py", line 27, in build_model_from_cfg
return build_from_cfg(cfg, registry, default_args)
File "/home/xin/anaconda3/envs/openmmlab/lib/python3.7/site-packages/mmcv/utils/registry.py", line 25, in build_from_cfg
'`cfg` or `default_args` must contain the key "type", '
KeyError: '`cfg` or `default_args` must contain the key "type", but got {\'pretrained\': None, \'backbone\': {\'pretrain_img_size\': 384, \'embed_dims\': 192, \'depths\': [2, 2, 18, 2], \'num_heads\': [6, 12, 24, 48], \'drop_path_rate\': 0.2, \'window_size\': 12}, \'decode_head\': {\'in_channels\': [192, 384, 768, 1536], \'num_classes\': 2, \'loss_decode\': {\'type\': \'CrossEntropyLoss\', \'use_sigmoid\': False, \'loss_weight\': 1.0}}, \'auxiliary_head\': {\'in_channels\': 768, \'num_classes\': 2, \'loss_decode\': {\'type\': \'CrossEntropyLoss\', \'use_sigmoid\': False, \'loss_weight\': 1.0}}, \'train_cfg\': None}\n{\'train_cfg\': None, \'test_cfg\': None}'
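For context, the KeyError is raised by mmcv's registry: `build_from_cfg` requires either `cfg` or `default_args` to contain a `type` key naming the registered model class. A simplified pure-Python sketch of that check (not the actual mmcv source, just an illustration of why the message above appears):

```python
# Simplified sketch of the check mmcv's build_from_cfg performs when
# building a model from a config dict. NOT the real mmcv code; it only
# illustrates why the KeyError in the log above is raised.
def build_from_cfg(cfg, default_args=None):
    if 'type' not in cfg and 'type' not in (default_args or {}):
        raise KeyError(
            '`cfg` or `default_args` must contain the key "type", '
            f'but got {cfg}\n{default_args}')
    # the real function would now look the type up in a registry
    return cfg['type'] if 'type' in cfg else default_args['type']

# A config whose _base_ files were never merged in has lost its 'type' key:
try:
    build_from_cfg({'pretrained': None}, {'train_cfg': None, 'test_cfg': None})
except KeyError as e:
    print(e)
```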
That's very strange... but it seems you have a similar issue to this issue. Did you modify anything before running the demo code?
Hi, I cannot find '../_base_/models/upernet_swin.py', so I commented it out. I guess that may be the reason?
I also cannot find '../_base_/default_runtime.py' or '../_base_/schedules/schedule_160k.py',
both referenced in ../configs/swin/swin_l_upper_w_jitter_inference.py.
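For reference, mmsegmentation configs inherit shared settings through a `_base_` list at the top of the file, roughly like the illustrative fragment below (paths as named above). Commenting those entries out removes the keys they contribute, e.g. the model `type`, from the final merged config.

```python
# Typical _base_ header of an mmsegmentation config (illustrative fragment,
# not the exact contents of swin_l_upper_w_jitter_inference.py).
_base_ = [
    '../_base_/models/upernet_swin.py',      # defines model type, backbone, heads
    '../_base_/default_runtime.py',          # logging / checkpoint defaults
    '../_base_/schedules/schedule_160k.py',  # optimizer and LR schedule
]
```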
OK, I see. You should first clone the whole mmsegmentation repo and then put the code from this repo in the corresponding places in your mmsegmentation tree.
Great! Thank you very much for your help! Now I can run the demo code.
However, I found that the segmentation result only outputs one mask (foreground and background), and the config file sets the output class number to 2 (num_classes=2). How can I get the dense multi-class segmentation results shown in Fig. 3 of the UVO dataset paper? Thanks a lot!
Hi, there is only one class in the UVO dataset (class 'object'), which means UVO has only instance mask & bbox annotations but no class labels. The Fig. 3 you refer to actually just visualizes the instance masks in different colors; it does not distinguish classes (proof: the class 'person' is labeled with different colors in Fig. 3). That is why my network outputs a binary mask.
Thanks for your explanation! So the UVO challenge doesn't distinguish categories. Can I still get multiple instance masks even without knowing the category? For example, can I get different masks for different objects/instances even though I don't know the semantic label?
Thanks a lot for your help again! I will close the issue as I have solved the config problem.
Yes, of course; that's exactly what class-agnostic instance segmentation aims to do. There are several ways to do this:
Thanks for your quick reply!
I just checked the config files you provide in this repo. For segmentation, they all set num_classes=2,
which means I cannot get multiple instance outputs. For detection, I'm not sure how many instances will be predicted.
Yes. The models in mmsegmentation target semantic segmentation, so there are no instance segmentation models there. My repo offers a 2-stage option: first use an object detector to generate bboxes, then apply the segmentation network to the cropped images to generate masks. This way you can get higher-quality masks (as shown in the examples in our paper), but the weakness is that our method is more time-consuming. If you only have limited time and don't really care about mask quality, you could use the pre-trained instance segmentation models from mmdetection/detectron2.
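A minimal sketch of the 2-stage idea described above, assuming two hypothetical stand-ins: `detect_boxes` (for a real detector such as one from mmdetection) and `segment_crop` (for this repo's binary segmentor). The real pipeline would load and run the actual models; here they are stubbed so the crop-and-paste logic is clear:

```python
import numpy as np

def detect_boxes(image):
    # placeholder detector: pretend it found one box (x1, y1, x2, y2)
    return [(2, 2, 6, 7)]

def segment_crop(crop):
    # placeholder binary segmentor: pretend the whole crop is foreground
    return np.ones(crop.shape[:2], dtype=np.uint8)

def two_stage_instance_masks(image):
    """Stage 1: detect boxes. Stage 2: segment each crop, then paste the
    local mask back into a full-size canvas, one mask per instance."""
    masks = []
    for (x1, y1, x2, y2) in detect_boxes(image):
        crop = image[y1:y2, x1:x2]
        local_mask = segment_crop(crop)
        full = np.zeros(image.shape[:2], dtype=np.uint8)
        full[y1:y2, x1:x2] = local_mask  # paste crop mask back in place
        masks.append(full)
    return masks

image = np.zeros((10, 10, 3), dtype=np.uint8)
masks = two_stage_instance_masks(image)
```

With the stubs above, the single detected box yields one full-size binary mask whose foreground covers exactly the box region.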
Thank you very much for your detailed suggestion. I will have a look and keep following.
Hi! Thanks for the code!
I tried to use your config file in the segmentation dir, but got an error. It seems the config has no "type" key. I'm not familiar with OpenMMLab. Could you help me figure it out? Thanks a lot!
Script:
Log: