wuhuikai / FastFCN

FastFCN: Rethinking Dilated Convolution in the Backbone for Semantic Segmentation.
http://wuhuikai.me/FastFCNProject
838 stars, 148 forks

FastFCN has been supported by MMSegmentation. #106

Closed MengzhangLI closed 3 years ago

MengzhangLI commented 3 years ago

Hi, FastFCN is now supported by MMSegmentation. We found that using JPU with the smaller feature maps from the backbone can achieve similar or higher performance than the original models with larger feature maps.

There is still some work left for us; for example, we did not observe an obvious FPS improvement in our implementation, so we will try to figure that out in the future.
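As a rough illustration of why the JPU design is expected to be faster than a dilated backbone (a back-of-the-envelope sketch, not code from either repository; the crop size and the assumption of equal per-position cost are mine):

```python
# FastFCN's core idea: a dilated backbone keeps its last two stages at
# output stride 8, while the JPU variant runs a regular stride-16/32
# backbone and fuses the multi-scale maps afterwards with JPU.

def feature_map_positions(input_size: int, output_stride: int) -> int:
    """Spatial positions a conv stage processes at a given output stride."""
    side = input_size // output_stride
    return side * side

INPUT = 512  # assumed square crop size

# Dilation mode: stages 3 and 4 both stay at output stride 8.
dilated = 2 * feature_map_positions(INPUT, 8)

# JPU mode: stages 3 and 4 run at their normal strides, 16 and 32.
jpu = feature_map_positions(INPUT, 16) + feature_map_positions(INPUT, 32)

# Assuming equal per-position cost, the dilated stages process far
# more spatial positions than the regular stages used with JPU.
ratio = dilated / jpu
print(f"dilated: {dilated} positions, JPU: {jpu} positions, {ratio:.1f}x")
```

This counts only the backbone's last two stages; JPU itself adds some overhead, which is one reason the measured FPS gap can be smaller than the position count alone suggests.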

Anyway, thanks for your work, and we hope more people from the community will use FastFCN.

Best,

wuhuikai commented 3 years ago

Thanks for your implementation in MMSegmentation. Could you please share your FPS results? Which backbones did you experiment with?

MengzhangLI commented 3 years ago

More details can be found here.

The backbone is ResNet50; the decode heads are PSPNet, EncNet, and DeepLabV3.

wuhuikai commented 3 years ago

As shown in the paper, the advantage is more significant when using ResNet101 as the backbone.

edwardyehuang commented 2 years ago

Why not submit this to a conference? Although it is a 2019 paper, I think it is still worth a CVPR submission. Based on the large number of experiments I have run, JPU can achieve similar or better performance than dilation mode, even on Swin-Large or ConvNeXt-Large.

MengzhangLI commented 2 years ago

> Why not submit this to a conference? Although it is a 2019 paper, I think it is still worth a CVPR submission. Based on the large number of experiments I have run, JPU can achieve similar or better performance than dilation mode, even on Swin-Large or ConvNeXt-Large.

Could you list your numerical results for JPU + Swin/ConvNeXt? It would be better if your experiments were based on the MMSegmentation codebase.

edwardyehuang commented 2 years ago

> Why not submit this to a conference? Although it is a 2019 paper, I think it is still worth a CVPR submission. Based on the large number of experiments I have run, JPU can achieve similar or better performance than dilation mode, even on Swin-Large or ConvNeXt-Large.

> Could you list your numerical results for JPU + Swin/ConvNeXt? It would be better if your experiments were based on the MMSegmentation codebase.

Sorry, I am not using mmseg at the moment. I will provide some results in the near future.

wuhuikai commented 2 years ago

> Why not submit this to a conference? Although it is a 2019 paper, I think it is still worth a CVPR submission. Based on the large number of experiments I have run, JPU can achieve similar or better performance than dilation mode, even on Swin-Large or ConvNeXt-Large.

If you're interested, we can work on it together : )

MengzhangLI commented 2 years ago

> Why not submit this to a conference? Although it is a 2019 paper, I think it is still worth a CVPR submission. Based on the large number of experiments I have run, JPU can achieve similar or better performance than dilation mode, even on Swin-Large or ConvNeXt-Large.

> Could you list your numerical results for JPU + Swin/ConvNeXt? It would be better if your experiments were based on the MMSegmentation codebase.

> Sorry, I am not using mmseg at the moment. I will provide some results in the near future.

OK, got it. The official Swin and ConvNeXt repos were both implemented on top of MMSegmentation.

MengzhangLI commented 2 years ago

https://paperswithcode.com/paper/car-class-aware-regularizations-for-semantic-1 @wuhuikai