XiaLiPKU / EMANet

The code for Expectation-Maximization Attention Networks for Semantic Segmentation (ICCV'2019 Oral)
https://xialipku.github.io/publication/expectation-maximization-attention-networks-for-semantic-segmentation/
GNU General Public License v3.0

Questions about parameters and FLOPs in Tab. 1. #30

Closed zhouyuan888888 closed 4 years ago

zhouyuan888888 commented 4 years ago

You note that "All results are achieved with the backbone ResNet-101 with output stride 8". If so, why are the parameters and FLOPs of EMANet substantially lower than those of the backbone (ResNet-101) alone? Taking EMANet512 as an example: it is listed with 10M parameters and 43.1G FLOPs, while ResNet-101 by itself has 42.6M parameters and 190.6G FLOPs. Is there an error here?
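For context on where such table numbers come from, here is a minimal sketch of per-layer parameter and FLOP counting for convolutions. The helper names and the convention of counting one multiply-accumulate (MAC) as 2 FLOPs are my own assumptions, not taken from the EMANet code or paper:

```python
# Hedged sketch: how per-layer parameter/FLOP counts of the kind
# reported in Tab. 1 are typically derived for convolutions.
# Assumption: 1 MAC = 2 FLOPs (some papers report MACs instead).

def conv2d_params(c_in, c_out, k, bias=False):
    """Parameter count of a k x k convolution."""
    return c_out * (c_in * k * k + (1 if bias else 0))

def conv2d_flops(c_in, c_out, k, h, w, bias=False):
    """FLOPs of a k x k convolution producing an h x w output map,
    counting each multiply-accumulate as 2 FLOPs."""
    macs = h * w * c_out * (c_in * k * k)
    return 2 * macs + (h * w * c_out if bias else 0)

# Example: a single 3x3, 512 -> 512 conv on a 64 x 64 map (the
# stride-8 output of a 512 x 512 input) already costs:
p = conv2d_params(512, 512, 3)            # 2,359,296 params (~2.36M)
f = conv2d_flops(512, 512, 3, 64, 64)     # ~19.3G FLOPs
print(f"{p/1e6:.2f}M params, {f/1e9:.1f}G FLOPs")
```

Summing counts like these over all of ResNet-101's layers is what yields figures on the order of the 42.6M parameters / 190.6G FLOPs quoted above, which is why the much smaller EMANet512 numbers in Tab. 1 look inconsistent at first glance.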