Closed whisney closed 6 years ago
Dropout can be added between any two hidden layers if regularization is the goal. Since the parameter count of some conv layers is quite large (e.g. conv5) and the training data is small, it is reasonable to add dropout between the larger convolutional connections.
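To make the mechanism concrete, here is a minimal NumPy sketch of inverted dropout, the variant used by common frameworks: units are zeroed with probability `rate` during training and the survivors are rescaled so the expected activation is unchanged. The function name and shapes are illustrative, not from any particular U-Net implementation.

```python
import numpy as np

def dropout(x, rate=0.5, training=True, rng=None):
    """Inverted dropout: zero each unit with probability `rate` and
    rescale survivors by 1/(1-rate) so E[output] == E[input]."""
    if not training or rate == 0.0:
        return x  # dropout is a no-op at inference time
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(x.shape) >= rate  # keep mask
    return x * mask / (1.0 - rate)

# Hypothetical feature map coming out of a large conv block
feats = np.ones((2, 4, 4))
out = dropout(feats, rate=0.5, rng=np.random.default_rng(0))
# surviving activations are scaled up to 2.0, the rest are 0.0
```

In a framework like Keras or PyTorch this would just be a `Dropout` layer inserted after the conv blocks with the most parameters.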
There seems to be a dropout layer in the original paper:
Drop-out layers at the end of the contracting path perform further implicit data augmentation.
However, I have noticed in my case (denoising) that dropout hurts performance. This post also says that batch normalization may work better in conv nets, but I haven't found much about U-Nets specifically.
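For comparison, here is a minimal NumPy sketch of the batch normalization alternative mentioned above: each channel of an (N, C, H, W) activation tensor is standardized over the batch and spatial axes, then scaled and shifted by learnable parameters (shown here as plain scalars for simplicity; this is the training-time forward pass only, without running statistics).

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Training-time batch norm over an (N, C, H, W) tensor:
    normalize each channel to zero mean / unit variance, then
    apply the affine transform gamma * x_hat + beta."""
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# Synthetic activations with a shifted, scaled distribution
x = np.random.default_rng(0).normal(2.0, 3.0, size=(8, 3, 4, 4))
y = batch_norm(x)  # per-channel mean ~ 0, variance ~ 1
```

Whether this actually beats dropout for a denoising U-Net is an empirical question; the sketch only shows what the layer computes.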
There is no dropout layer in the U-Net of "U-Net: Convolutional Networks for Biomedical Image Segmentation". Can it really improve model performance?