mrgloom opened 6 years ago
I found that with batch size = 8 (the original is 32) the network doesn't converge. Is it possible to train this Unet implementation with a smaller batch size (e.g. batch size = 1)?
Try adding BatchNormalization layers; it may help you train with small batch sizes.
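A minimal sketch of what that could look like, assuming a Keras functional-API U-Net (the `conv_block` helper and filter counts are illustrative, not this repo's actual code):

```python
# Sketch: a U-Net-style convolution block with BatchNormalization
# inserted after each convolution. Names and sizes are illustrative.
from keras.layers import Conv2D, BatchNormalization, Activation

def conv_block(x, filters):
    """Two 3x3 convolutions, each followed by batch norm and ReLU."""
    for _ in range(2):
        x = Conv2D(filters, (3, 3), padding='same')(x)
        x = BatchNormalization()(x)  # normalize activations across the batch
        x = Activation('relu')(x)
    return x
```

Note that with very small batches (e.g. batch size = 1) the batch statistics become noisy, so convergence may still be unstable.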