Closed jumutc closed 3 years ago
Hey @jumutc,
thanks for the pull request and enhancement!
Please add parameters to miscnn.neural_network.architecture.unet.plain.Architecture to match the original code from the nnUNet repository. Currently the batch normalization is misparametrized, and dropout and LeakyReLU activations are missing. Also, in my experiments the performance of the current implementation lags behind the original one.
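For context on the two missing pieces, here is a minimal numpy sketch of what LeakyReLU and (training-time, inverted) dropout compute; the `alpha=0.01` negative slope is an assumption for illustration, not necessarily the value the nnUNet code uses:

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    """LeakyReLU: pass positives through, scale negatives by alpha."""
    return np.where(x > 0, x, alpha * x)

def dropout(x, rate, rng):
    """Inverted dropout at training time: zero units with probability
    `rate` and rescale the survivors by 1/(1-rate)."""
    mask = rng.random(x.shape) >= rate
    return np.where(mask, x / (1.0 - rate), 0.0)

x = np.array([-2.0, -0.5, 0.0, 1.5])
assert np.allclose(leaky_relu(x), [-0.02, -0.005, 0.0, 1.5])
```

In Keras these would correspond to the `LeakyReLU` activation layer and `Dropout`, exposed as constructor parameters on the architecture class.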
Let's see what we can do. I have already merged your contribution into the dev branch and added LeakyReLU to the plain architecture.
@muellerdo I think also that we need to switch here from BatchNormalization to InstanceNormalization as per training example.
If I remember correctly, the InstanceNormalization layer of the keras-contrib package was a bit shaky back when I implemented it, but you are right! Internal experiments didn't show any improvements. The code also hasn't changed since 2019: https://github.com/keras-team/keras-contrib/blob/master/keras_contrib/layers/normalization/instancenormalization.py
Sadly, Keras does not officially support instance normalization yet.
Cheers, Dominik
@muellerdo if you don't mind using tensorflow-addons here, we can proceed with it. My implementation uses the instance normalization layer from it and seems stable throughout all my CV runs.
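For review purposes, the only difference between the two normalizations is the set of axes the statistics are computed over: batch norm shares statistics across the whole batch, while instance norm uses each sample's own spatial statistics per channel (which is what tensorflow-addons' `tfa.layers.InstanceNormalization` provides). A minimal numpy sketch, assuming NHWC layout and omitting the learned scale/offset:

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Normalize each (sample, channel) plane separately over its
    own spatial axes (H, W) -- independent of the rest of the batch."""
    mean = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def batch_norm(x, eps=1e-5):
    """Normalize with statistics shared across the batch (N, H, W)."""
    mean = x.mean(axis=(0, 1, 2), keepdims=True)
    var = x.var(axis=(0, 1, 2), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

x = np.random.default_rng(0).normal(size=(2, 4, 4, 3))  # NHWC
# Every per-sample, per-channel plane is standardized independently:
assert np.allclose(instance_norm(x).mean(axis=(1, 2)), 0.0, atol=1e-6)
```

Because the per-sample statistics don't depend on batch composition, instance norm behaves identically at batch size 1, which is one reason nnUNet favors it for small-batch 3D segmentation.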
Merged #87, verified functionality, merged to master and released on PyPI.
Thanks for the contribution!