IBBM / Cascaded-FCN

Source code for the MICCAI 2016 Paper "Automatic Liver and Lesion Segmentation in CT Using Cascaded Fully Convolutional Neural Networks and 3D Conditional Random Fields"

Batch normalization not used? Step2 dataset? #18

Closed PiaoLiangHXD closed 6 years ago

PiaoLiangHXD commented 7 years ago

Hi, I studied the U-Net prototxt and your unet-overfit-python.prototxt, and I found that you didn't use batch normalization layers. Can you explain why?

In step 1, the dataset contains 15 patients for training liver segmentation. For step 2, do you still use these 15 patients' CT scans? You crop the liver, then resize and pad it to fit the U-Net. I see two problems with that:

  1. The liver size differs from slice to slice, so cropping just a tiny liver and comparing it with the "big" livers introduces noise. In medical image processing, the liver is usually cropped by its extent over the whole volume, not by the liver size of a single slice (see the sketch below).
  2. For the 15 patients there are 2063 slices in total, 1572 of them contain liver, and only 579 contain tumors. Do you just crop all liver slices to train the lesion network in step 2? Won't there be a class imbalance problem?
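For reference, a rough NumPy sketch of what cropping by the volume-level liver extent (rather than per slice) could look like; `ct` and `liver_mask` are assumed arrays and are not part of this repo's code:

```python
# Hypothetical example: crop by the liver's bounding box over the whole
# volume, so every slice keeps the same spatial extent and liver scale.
import numpy as np

def volume_liver_bbox(liver_mask):
    """Bounding box (z, y, x ranges) of the liver over the entire volume."""
    zs, ys, xs = np.where(liver_mask > 0)
    return (zs.min(), zs.max() + 1), (ys.min(), ys.max() + 1), (xs.min(), xs.max() + 1)

def crop_to_liver(ct, liver_mask):
    """Crop all slices with one volume-level box instead of per-slice boxes."""
    (z0, z1), (y0, y1), (x0, x1) = volume_liver_bbox(liver_mask)
    return ct[z0:z1, y0:y1, x0:x1], liver_mask[z0:z1, y0:y1, x0:x1]
```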

mohamed-ezz commented 7 years ago

There is no specific reason for not using BatchNorm layers, other than lack of time. Let us know if you find them useful.

The other points:

  1. Your intuition is probably correct. This is, however, what we have experimented with so far. Your contribution is welcome.
  2. In step 1 and step 2, we accounted for the imbalance problem with class weights, giving more weight to the liver or tumor pixels, because there are far more background pixels (a rough sketch of this idea is below).
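For illustration, a minimal NumPy sketch of this kind of per-pixel class weighting; the repo's actual Caffe loss setup differs, and all names here are assumptions:

```python
# Minimal sketch of a class-weighted pixel-wise cross-entropy for an
# imbalanced segmentation task (background vs. liver/tumor). Illustrative
# only; not the loss layer used in this repo's Caffe prototxts.
import numpy as np

def weighted_pixel_ce(probs, labels, class_weights):
    """Cross-entropy where each pixel is weighted by its ground-truth class.

    probs:         (N, C, H, W) softmax outputs
    labels:        (N, H, W) integer class ids (e.g. 0=background, 1=foreground)
    class_weights: length-C array, e.g. [0.1, 1.0] to up-weight the foreground
    """
    p_true = np.take_along_axis(probs, labels[:, None], axis=1)[:, 0]  # (N, H, W)
    weights = np.asarray(class_weights)[labels]                        # (N, H, W)
    return -(weights * np.log(p_true + 1e-7)).sum() / weights.sum()
```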
PiaoLiangHXD commented 7 years ago

Thank you so much for your reply. Yes, batch normalization helps a lot; it speeds up convergence. To overcome the imbalance problem I use a Dice coefficient loss instead, which I found quite useful. In Caffe and TensorFlow the batch size is, for some reason, limited to 1 or 2 for me, but with Keras the batch size can go up to 16, which also helps a lot. @juliandewit used a 2D U-Net with Keras in his work on the DSB2017 challenge, with some small changes to the architecture; I think this may help: https://github.com/juliandewit/kaggle_ndsb2017/blob/master/step2_train_mass_segmenter.py. Also a little trick: use batch normalization before the conv layer.
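A rough Keras sketch of the soft Dice loss idea mentioned above; the commenter's exact formulation isn't shown in this thread, so treat this as an assumed variant:

```python
# Soft Dice coefficient and loss for binary segmentation masks in Keras.
from keras import backend as K

def dice_coef(y_true, y_pred, smooth=1.0):
    """Soft Dice coefficient computed on flattened masks."""
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2.0 * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

def dice_loss(y_true, y_pred):
    # Minimizing (1 - Dice) directly optimizes overlap, which sidesteps
    # the foreground/background imbalance of plain cross-entropy.
    return 1.0 - dice_coef(y_true, y_pred)
```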

PatrickChrist commented 7 years ago

Thanks for your feedback. You are more than welcome to contribute your modifications and trained models to this repo. Other users will highly appreciate your efforts.
