Closed — foobar167 closed this issue 5 years ago
Your training loss seems to be much lower than the validation loss (or vice versa), i.e. I guess your model is overfitting. Which dataset did you use?
Yes, arghadeep25, I think you're right: there are too many epochs (120), so it does look like overfitting.
The dataset is from the Kaggle TGS Salt Identification Challenge: https://www.kaggle.com/c/tgs-salt-identification-challenge
Original Jupyter Notebook: https://github.com/nikhilroxtomar/Deep-Residual-Unet/blob/master/Deep%20Residual%20UNet.ipynb
You can open and run it via Google Colab: https://github.com/foobar167/articles/blob/master/Machine_Learning/code_examples/deep_residual_unet_segmentation.ipynb
The data seems to be quite sparse compared to the capacity of the network. I guess this can be addressed with EarlyStopping, dropout, and data augmentation.
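For illustration, here is a minimal Keras sketch of those three ideas; the input shape, filter counts, and hyperparameters are placeholders, not the notebook's actual values:

```python
# Hypothetical sketch: three common ways to reduce overfitting in a Keras
# segmentation model. All names and numbers here are placeholders.
from tensorflow.keras import layers, callbacks, Input, Model
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# 1. Dropout: a tiny model fragment showing dropout after a conv block.
inputs = Input((128, 128, 1))
x = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
x = layers.Dropout(0.5)(x)                      # randomly drop feature maps
outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)
model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy")

# 2. Early stopping: halt training once val_loss stops improving.
early_stop = callbacks.EarlyStopping(monitor="val_loss", patience=10,
                                     restore_best_weights=True)

# 3. Augmentation: random flips/shifts applied identically to images and masks.
aug = dict(horizontal_flip=True, vertical_flip=True,
           width_shift_range=0.1, height_shift_range=0.1)
image_gen = ImageDataGenerator(**aug)
mask_gen = ImageDataGenerator(**aug)

# Training would then look roughly like:
# model.fit(train_generator, validation_data=(x_val, y_val),
#           epochs=120, callbacks=[early_stop])
```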
Thank you for the implementation and explanation of ResUNet! That's really interesting.
It seems to me that you should have a different "bridge" for ResUNet. According to the article, the "bridge" is the same as the "residual block" of ResUNet. In the figure, the "bridge" and the "residual block" are identical, except that the "bridge" is missing the dashed arrow from the block's entrance to the "Addition" node. But I think this is an error in the article :-) There must be a dashed arrow, otherwise the "Addition" block in the "bridge" serves no purpose. So I've changed your code to this:
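In essence, the bridge becomes a single residual block applied to e5. A self-contained Keras sketch of that idea (the bn_act helper, shapes, and the filter count are illustrative assumptions, not the exact diff):

```python
# Sketch of a residual "bridge": same structure as a residual block,
# including the shortcut ("dashed arrow") into the Addition node.
from tensorflow.keras import layers, Input

def bn_act(x):
    """BatchNormalization followed by ReLU, as used throughout ResUNet."""
    x = layers.BatchNormalization()(x)
    return layers.Activation("relu")(x)

def residual_bridge(e5, filters=256):
    """Two BN-ReLU-Conv steps plus a 1x1 shortcut from the block's input
    into the Addition node (the 'dashed arrow' discussed above)."""
    y = bn_act(e5)
    y = layers.Conv2D(filters, (3, 3), padding="same")(y)
    y = bn_act(y)
    y = layers.Conv2D(filters, (3, 3), padding="same")(y)
    shortcut = layers.Conv2D(filters, (1, 1), padding="same")(e5)
    shortcut = layers.BatchNormalization()(shortcut)
    return layers.Add()([y, shortcut])

# Usage: if e5 is the deepest encoder output, the bridge becomes
# e5 = Input((8, 8, 256))        # placeholder shape for illustration
# b1 = residual_bridge(e5)       # instead of two plain conv blocks
```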
As you can see, I just feed "e5" into the "bridge", and that's all :-) I also trained for 120 epochs and obtained a dice coefficient of 0.9825:
237/237 [==============================] - 11s 48ms/step - loss: 0.0175 - dice_coef: 0.9825 - val_loss: 0.1533 - val_dice_coef: 0.8467
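For reference, the dice coefficient in that log is usually defined like this in Keras (a common implementation with smooth=1; the notebook's version may differ in details). Note that the reported loss of 0.0175 is consistent with training on 1 − dice_coef:

```python
from tensorflow.keras import backend as K

def dice_coef(y_true, y_pred, smooth=1.0):
    """Dice coefficient: 2*|A ∩ B| / (|A| + |B|), smoothed to avoid division by zero."""
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2.0 * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

def dice_loss(y_true, y_pred):
    """Loss used for training: 1 - dice, so loss 0.0175 corresponds to dice 0.9825."""
    return 1.0 - dice_coef(y_true, y_pred)
```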
The result is 8.6% higher than the 1st place on Kaggle, team name "b.e.s. & phalanx". It means that YOU can be a Kaggle winner next time. Keep going, don't stop :-)