JaveyWang / Pyramid-Attention-Networks-pytorch

Implementation of Pyramid Attention Networks for Semantic Segmentation.
GNU General Public License v3.0

Positioning of ReLU in FPA #13

Open JohnMBrandt opened 4 years ago

JohnMBrandt commented 4 years ago

In networks.py, lines 123-124:

x3_upsample = self.relu(self.bn_upsample_3(self.conv_upsample_3(x3_2)))
x2_merge = self.relu(x2_2 + x3_upsample)

I understand that x2_2 is left with a linear activation (no ReLU), so why does x3_upsample go through a ReLU if you then apply ReLU again after the addition?
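
For what it's worth, the two orderings are not mathematically equivalent: relu(a + relu(b)) generally differs from relu(a + b) whenever b has negative entries, so the inner ReLU is not redundant. The sketch below illustrates the difference with random tensors standing in for the two branches (the names and shapes are placeholders, not the repository's actual feature maps):

```python
import torch
import torch.nn as nn

# Illustrative sketch (not the repository's code): compare applying a ReLU to
# the upsampled branch before the addition vs. only after the addition.
torch.manual_seed(0)

relu = nn.ReLU()
x2_2 = torch.randn(1, 4, 8, 8)          # merge branch, linear (no activation)
x3_upsampled = torch.randn(1, 4, 8, 8)  # upsampled branch before any activation

# Pattern from networks.py: ReLU on the upsampled branch, then ReLU again
# after the element-wise addition.
out_double_relu = relu(x2_2 + relu(x3_upsampled))

# Alternative the question implies: a single ReLU after the addition.
out_single_relu = relu(x2_2 + x3_upsampled)

# Not equivalent: the inner ReLU clamps negative values in the upsampled
# branch before they can offset values in x2_2.
print(torch.allclose(out_double_relu, out_single_relu))  # typically False
```

So the inner ReLU does change the result: it forces the upsampled branch to contribute only non-negative features to the merge. My question is whether that extra non-linearity before the addition is intentional.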