Open JakobKHAndersen opened 3 years ago
Hi @DBAFC
Which implementation did you refer to? I think I used 1x1 convolutions in the output layer as well.
See https://github.com/MrGiovanni/UNetPlusPlus/blob/master/keras/helper_functions.py#L135

    unet_output = Conv2D(num_class, (1, 1), activation='sigmoid', name='output', kernel_initializer='he_normal', padding='same', kernel_regularizer=l2(1e-4))(conv1_5)

and https://github.com/MrGiovanni/UNetPlusPlus/blob/master/keras/helper_functions.py#L269

    nestnet_output_4 = Conv2D(num_class, (1, 1), activation='sigmoid', name='output_4', kernel_initializer='he_normal', padding='same', kernel_regularizer=l2(1e-4))(conv1_5)
Hi
This is the model I'm referring to:
https://github.com/MrGiovanni/UNetPlusPlus/blob/master/keras/segmentation_models/unet/builder.py
Hello
I was wondering what the rationale is for using 3x3 convolutions as opposed to 1x1 convolutions in the output layer. As far as I know, the original U-Net paper uses 1x1 sigmoid/softmax neurons in the output for pixelwise classification, but in your implementation you use 3x3. Why is that?
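For context on why 1x1 is the natural choice here: a 1x1 convolution applies the same classifier weights independently at every spatial position, i.e. it is exactly a pixelwise dense layer, whereas a 3x3 kernel also mixes each pixel with its neighbors before classifying. A minimal NumPy sketch of that equivalence (toy shapes and random weights, not code from this repo):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feature map in HWC layout: 4x4 spatial grid, 8 channels (batch dim omitted)
features = rng.standard_normal((4, 4, 8))

# 1x1 conv parameters: one weight vector per output class, no spatial extent
num_class = 3
w = rng.standard_normal((8, num_class))
b = rng.standard_normal(num_class)

# A 1x1 convolution contracts only the channel axis -> shape (4, 4, num_class)
conv1x1 = np.tensordot(features, w, axes=([2], [0])) + b

# Equivalent computation: apply the same dense layer at every pixel separately
pixelwise = np.empty_like(conv1x1)
for i in range(4):
    for j in range(4):
        pixelwise[i, j] = features[i, j] @ w + b

assert np.allclose(conv1x1, pixelwise)
```

So with a 1x1 output layer the classification at each pixel depends only on that pixel's feature vector; a 3x3 output layer adds one extra ring of spatial context (and 9x the parameters per class) before the sigmoid/softmax.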