Closed Jderuijter closed 3 years ago
Hey @Jderuijter,
interesting finding!
As stated in the architecture file, the implementation was taken directly from the main author's (Nabil Ibtehaz) GitHub repository for the MultiResUNet paper.
So I would recommend contacting the authors by opening an issue in the paper's repository:
https://github.com/nibtehaz/MultiResUNet
Cheers, Dominik
The original MultiResUNet paper states: "All the convolutional layers in this network, except for the output layer, are activated by the ReLU (Rectified Linear Unit) activation function (LeCun et al., 2015), and are batch-normalized (Ioffe & Szegedy, 2015)."
So there should be no batch normalization at the output layer. Line 119:

```python
conv10 = conv2d_bn(mresblock9, n_labels, 1, 1, activation=self.activation)
```

Suggestion:

```python
conv10 = Conv2D(n_labels, (1, 1), activation=self.activation)(mresblock9)
```
This problem also applies to the 3D case.
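For illustration, here is a minimal, self-contained sketch of the proposed fix (the `conv2d_bn` helper, the input shape, and the activation below are assumptions for the example, not the repository's actual code): internal blocks keep Conv2D + BatchNorm + ReLU, while the output layer is a plain `Conv2D` without batch normalization, as described in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers


def conv2d_bn(x, filters, kx, ky, activation="relu"):
    # Internal convolution block as used throughout the network:
    # convolution followed by batch normalization and an activation.
    x = layers.Conv2D(filters, (kx, ky), padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    return layers.Activation(activation)(x)


def build_output_head(n_labels=2, input_shape=(32, 32, 16)):
    # Hypothetical stand-in for the decoder output (mresblock9 in the issue).
    inputs = layers.Input(shape=input_shape)
    x = conv2d_bn(inputs, 32, 3, 3)
    # Proposed output layer: no batch normalization, per the paper.
    outputs = layers.Conv2D(n_labels, (1, 1), activation="sigmoid")(x)
    return tf.keras.Model(inputs, outputs)
```

The same change would apply to the 3D variant by swapping `Conv2D` for `Conv3D` on the output layer.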