Closed: qwert1337 closed this issue 2 years ago
Hi, nice work first of all! I stumbled across the ReLU activations in the code (https://github.com/ydhongHIT/DDRNet/blob/ba659f9ec358f9ef55dfea00f5e63dae6ad3efd9/segmentation/DDRNet_23_slim.py#L312):

```python
x = x + self.down3(self.relu(x_))
x_ = x_ + F.interpolate(
    self.compression3(self.relu(layers[2])),
    size=[height_output, width_output],
    mode='bilinear')
```

In the paper, Fig. 3 shows no activations after the blocks, only after the bilateral fusion. Now I'm wondering which one is correct?
Hi, please follow the code. The pre-trained models work well with the current code. In fact, a single ReLU won't affect performance much.
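For anyone comparing the two orderings, below is a minimal, self-contained sketch of one bilateral fusion step. It is not taken from the repository: only the layer names (`relu`, `compression3`, `down3`) follow the snippet above, while the channel widths and convolution definitions are illustrative assumptions. The `relu_before_fusion` flag switches between the ordering used in the code (ReLU applied to each branch before `down3` / `compression3`) and a literal reading of Fig. 3 (no activation before the fusion convolutions).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BilateralFusionSketch(nn.Module):
    """Illustrative single fusion step between the two DDRNet branches.

    Channel widths and conv definitions are assumptions for this sketch,
    not the values used in DDRNet_23_slim.py.
    """

    def __init__(self, high_ch=64, low_ch=128, relu_before_fusion=True):
        super().__init__()
        self.relu_before_fusion = relu_before_fusion
        self.relu = nn.ReLU(inplace=False)
        # low-resolution -> high-resolution: 1x1 channel compression
        self.compression3 = nn.Conv2d(low_ch, high_ch, kernel_size=1, bias=False)
        # high-resolution -> low-resolution: strided 3x3 downsample
        self.down3 = nn.Conv2d(high_ch, low_ch, kernel_size=3, stride=2,
                               padding=1, bias=False)

    def forward(self, x, x_):
        # x  : low-resolution branch feature map
        # x_ : high-resolution branch feature map (2x the spatial size of x here)
        height_output, width_output = x_.shape[2], x_.shape[3]

        hi = self.relu(x_) if self.relu_before_fusion else x_
        lo = self.relu(x) if self.relu_before_fusion else x

        # cross-branch additions; the fused sums are left un-activated, as in the code
        x = x + self.down3(hi)
        x_ = x_ + F.interpolate(self.compression3(lo),
                                size=[height_output, width_output],
                                mode='bilinear')
        return x, x_


# Shape check for both variants
x = torch.randn(1, 128, 16, 16)    # low-resolution branch
x_ = torch.randn(1, 64, 32, 32)    # high-resolution branch
for flag in (True, False):
    out_low, out_high = BilateralFusionSketch(relu_before_fusion=flag)(x, x_)
    print(flag, out_low.shape, out_high.shape)
```

Either ordering produces the same output shapes; the released pre-trained weights were trained with the code's ordering, so that is the one to follow when loading them.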