The current implementation of UNet3d is outdated. One major problem is that it crops, instead of using valid convolutions all the way through, which significantly slows it down.
I propose implementing a new UNet3d based on No New-Net (https://link.springer.com/chapter/10.1007/978-3-030-11726-9_21)
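To make the trade-off concrete, here is a small spatial-size bookkeeping sketch (not the actual network code) for one axis of a hypothetical 3-level 3D U-Net encoder. With valid 3x3x3 convolutions the feature maps shrink at every level, so skip connections must be cropped to match the decoder; with the zero-padded ("same") convolutions used by No New-Net, sizes are preserved and no cropping is needed:

```python
# Sketch only: per-axis size arithmetic for a 3-level encoder with two
# 3x3x3 convolutions and a 2x2x2 max-pool per level. The level count and
# input size are illustrative assumptions, not values from the proposal.

def encoder_sizes(size, levels, conv):
    """Return the feature-map size after each level's two convolutions."""
    sizes = []
    for _ in range(levels):
        size = conv(conv(size))   # two 3x3x3 convolutions
        sizes.append(size)
        size //= 2                # 2x2x2 max-pool halves the axis
    return sizes

valid = lambda s: s - 2           # unpadded convolution shrinks by kernel-1
same = lambda s: s                # zero-padded convolution keeps the size

print(encoder_sizes(64, 3, valid))  # shrinking sizes -> skips must be cropped
print(encoder_sizes(64, 3, same))   # sizes preserved -> no cropping needed
```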
This network performed well in the BraTS 2018 challenge and is efficient. Additionally, via the `volume_padding_to_size` flag, the user can choose a volume size at which inference runs in a single forward pass. This should greatly reduce inference time.
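One plausible reading of that flag (its exact semantics are an assumption here, only the name comes from the proposal) is symmetric zero-padding of each spatial axis up to a fixed target size, so the whole image fits the network in one pass. A minimal stand-alone sketch:

```python
# Hypothetical sketch of volume_padding_to_size-style behaviour: compute the
# symmetric (before, after) zero-padding per axis needed to grow a volume's
# shape to a chosen target. The BraTS-like shapes below are illustrative.

def pad_to_size(shape, target):
    """Return (before, after) padding per axis to reach `target` (no-op if
    the axis is already at least the target size)."""
    pads = []
    for dim, want in zip(shape, target):
        extra = max(want - dim, 0)
        pads.append((extra // 2, extra - extra // 2))
    return pads

print(pad_to_size((155, 240, 240), (160, 240, 240)))
```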