Closed. lxtGH closed this issue 2 years ago.
Hi @lxtGH, thanks for your interest in our work.
In our new, expanded journal-length arXiv paper, there are several differences from our CVPR 2020 paper:
We evaluate our models on the 2020 Robust Vision Challenge (RVC) as an extreme generalization experiment. The MSeg training set includes only three of the seven RVC datasets; more importantly, the RVC evaluation taxonomy is different and more detailed. Surprisingly, our model is competitive and ranks second.
To evaluate how close we are to the grand aim of robust, efficient, and complete scene understanding, we go beyond semantic segmentation by training instance segmentation and panoptic segmentation models using our dataset.
Moreover, we evaluate various engineering design decisions and metrics, including resolution and computational efficiency: we measure the runtime of our models, and we train and evaluate models at different resolutions for practitioners with different computational budgets.
There is no separate panoptic segmentation annotation; MSeg's relabeled per-instance classes apply directly to panoptic segmentation as well.
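To illustrate how per-instance class labels can serve as panoptic annotations, here is a minimal sketch that composes class-labeled instance masks into a single panoptic id map. The `instances_to_panoptic` helper and the `class_id * 1000 + instance_index` segment-id encoding (a common COCO-panoptic-style convention) are illustrative assumptions, not MSeg's actual format.

```python
import numpy as np

def instances_to_panoptic(masks, class_ids, height, width):
    """Compose per-instance masks into one panoptic id map (illustrative helper).

    masks: list of boolean (H, W) arrays, one per instance.
    class_ids: parallel list of integer class labels for each mask.
    """
    panoptic = np.zeros((height, width), dtype=np.int32)  # 0 = unlabeled
    counts = {}  # running instance index per class
    for mask, cls in zip(masks, class_ids):
        counts[cls] = counts.get(cls, 0) + 1
        # encode class and instance into one segment id (assumed convention)
        segment_id = cls * 1000 + counts[cls]
        # later masks overwrite earlier ones where they overlap
        panoptic[mask] = segment_id
    return panoptic

# Toy usage: two instances of class 3 on a 4x4 grid.
m1 = np.zeros((4, 4), dtype=bool); m1[:2, :2] = True
m2 = np.zeros((4, 4), dtype=bool); m2[2:, 2:] = True
pan = instances_to_panoptic([m1, m2], [3, 3], 4, 4)
```

The point is that once every instance mask carries a relabeled class, no extra annotation pass is needed: the panoptic map is derivable mechanically.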
Closing the issue; feel free to re-open if you have further questions, @lxtGH.
Will the panoptic segmentation annotations be released?