Open Jeff-LiangF opened 2 years ago

@dingjiansw101 Hi Jian, thanks for your great work! I was wondering whether you happened to test your trained COCO-Stuff model directly on the ADE20K dataset. Concurrent works such as [1][2] both report this transfer number, so it would be very interesting to compare your work with its counterparts. Thanks!

[1] Xu, Mengde, et al. "A Simple Baseline for Zero-Shot Semantic Segmentation with Pre-trained Vision-Language Model." arXiv preprint arXiv:2112.14757 (2021).
[2] Ghiasi, Golnaz, et al. "Open-Vocabulary Image Segmentation." arXiv preprint arXiv:2112.12143 (2021).

Yes, I have tried this before. Checking an old log from a previous experiment, the COCO-Stuff → ADE20K-150 generalization performance is 16.4 mIoU. However, I am not sure whether that was the newest model, and I still need to verify the details before comparing with other methods. Of course, you can also test it yourself, since we have released the models and code.

Thanks for your prompt help! It would be great if you could test your best model and report the result so that the community can compare against it. I'll also try to test it on my end. :)

Sure, I will update the results later.
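For anyone reproducing the cross-dataset comparison discussed above, the number being exchanged is mean IoU. Below is a minimal sketch of how mIoU is typically computed from per-pixel predictions and ground-truth label maps via a confusion matrix; the function name, class count, and ignore index are illustrative assumptions, not part of this repository's released evaluation code.

```python
import numpy as np

def compute_miou(preds, gts, num_classes, ignore_index=255):
    """Mean IoU over a dataset from per-pixel integer label maps.

    preds, gts: iterables of equally shaped integer arrays.
    ignore_index: label value for unlabeled pixels (255 is a common
    convention, e.g. in ADE20K-style annotations).
    """
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    for pred, gt in zip(preds, gts):
        mask = gt != ignore_index            # drop unlabeled pixels
        p, g = pred[mask], gt[mask]
        # accumulate a confusion matrix: rows = ground truth, cols = prediction
        conf += np.bincount(
            g * num_classes + p, minlength=num_classes ** 2
        ).reshape(num_classes, num_classes)
    tp = np.diag(conf)
    union = conf.sum(axis=0) + conf.sum(axis=1) - tp
    iou = tp / np.maximum(union, 1)          # guard against divide-by-zero
    present = union > 0                      # average only over observed classes
    return float(iou[present].mean())
```

For a COCO-Stuff → ADE20K-150 transfer evaluation, `num_classes` would be 150 and the class names must first be mapped between the two label spaces, which is where methods can differ.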