fudan-zvg / SETR

[CVPR 2021] Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers
MIT License

multi-scale testing #52

Open ZhengyuXia opened 2 years ago

ZhengyuXia commented 2 years ago

Hi,

I downloaded a SETR_MLA model (512x512, batch size 8) to evaluate its performance on the ADE20K validation set. Since I only have two RTX Titan GPUs, I changed the GPU count from 8 to 2 for testing.
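For reference, this is roughly how I invoke the evaluation (a sketch based on the usual mmsegmentation-style test script; the exact config and checkpoint filenames are placeholders, and `2` is the GPU count I substituted for the default 8):

```shell
# Single-scale evaluation on 2 GPUs (config/checkpoint names are placeholders)
./tools/dist_test.sh configs/SETR/SETR_MLA_512x512_160k_b8_ade20k.py \
    setr_mla_checkpoint.pth 2 --eval mIoU

# Multi-scale evaluation: same command, but with the config ending in _MS.py
./tools/dist_test.sh configs/SETR/SETR_MLA_512x512_160k_b8_ade20k_MS.py \
    setr_mla_checkpoint.pth 2 --eval mIoU
```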

The single-scale (SS) result matches the reported value, 47.79%.

For multi-scale (MS) testing, I followed the instructions and used the config file ending in `_MS.py` in `configs/SETR`. However, my MS result is only 47.91%, which is far below the reported 50.03%.
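As I understand it, the `_MS.py` config enables test-time augmentation: the image is rescaled to several resolutions, the per-class score maps are brought back to the original resolution and averaged, and only then is the argmax taken. A minimal toy sketch of that idea (not the actual SETR/mmsegmentation code; `resize_nearest`, `resize_to`, and `multi_scale_predict` are illustrative names):

```python
import numpy as np

def resize_to(arr, nh, nw):
    """Nearest-neighbour resize of an HxWxC array to (nh, nw)."""
    h, w = arr.shape[:2]
    rows = np.arange(nh) * h // nh
    cols = np.arange(nw) * w // nw
    return arr[rows][:, cols]

def resize_nearest(arr, scale):
    """Nearest-neighbour rescale of an HxWxC array by a float factor."""
    h, w = arr.shape[:2]
    return resize_to(arr, max(1, round(h * scale)), max(1, round(w * scale)))

def multi_scale_predict(image, model, scales=(0.75, 1.0, 1.25)):
    """Run `model` on rescaled copies of `image`, map each HxWxC score
    map back to the original resolution, average, then take the argmax."""
    h, w = image.shape[:2]
    avg = None
    for s in scales:
        scores = resize_to(model(resize_nearest(image, s)), h, w)
        avg = scores if avg is None else avg + scores
    return (avg / len(scales)).argmax(axis=-1)
```

If MS gives only a marginal gain over SS, one thing worth checking is whether the MS config (scales, flipping) was actually picked up at test time.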

So I'm wondering what the possible reasons could be. Thanks.