Visual-Attention-Network / SegNeXt

Official Pytorch implementations for "SegNeXt: Rethinking Convolutional Attention Design for Semantic Segmentation" (NeurIPS 2022)
Apache License 2.0

a question about your paper #44

Open massica17 opened 1 year ago

massica17 commented 1 year ago

“we use batch normalization instead of layer normalization as we found batch normalization gains more for the segmentation performance.” I cannot understand why. I think it may be because you replace self-attention with convolution, but I am not sure.
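For context, the difference the question is about can be sketched in a few lines of PyTorch. This is a minimal illustration, not the official SegNeXt code; the block names here are hypothetical. `BatchNorm2d` computes statistics per channel over the whole batch and spatial extent, while `LayerNorm` normalizes over the channel dimension at each spatial position, which is the convention in self-attention blocks.

```python
import torch
import torch.nn as nn

class ConvBNBlock(nn.Module):
    """Conv block with batch normalization (statistics over N, H, W per channel)."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(channels)
        self.act = nn.GELU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class ConvLNBlock(nn.Module):
    """Conv block with layer normalization (statistics over C at each pixel)."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.ln = nn.LayerNorm(channels)

    def forward(self, x):
        x = self.conv(x)
        # LayerNorm expects channels last, so permute to (N, H, W, C) and back
        x = x.permute(0, 2, 3, 1)
        x = self.ln(x)
        return x.permute(0, 3, 1, 2)

x = torch.randn(2, 8, 16, 16)
print(ConvBNBlock(8)(x).shape)  # torch.Size([2, 8, 16, 16])
print(ConvLNBlock(8)(x).shape)  # torch.Size([2, 8, 16, 16])
```

Both blocks preserve the input shape; the question is only about which normalization statistics work better when attention is built from convolutions.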