lalonderodney / SegCaps

Official Implementation of the Paper "Capsules for Object Segmentation".
Apache License 2.0
276 stars 101 forks

Performance Does Not Improve #16

Open AuliaRizky opened 5 years ago

AuliaRizky commented 5 years ago

Hello, I'm using segcapsbasic and segnetR3 as the models for stroke segmentation of brain images using the ISLES 2017 dataset. I use 3D MRI data from 25 patients that I sliced (527 2D images in total) and adjusted to work with the SegCaps implementation. The problem is that the training performance never exceeds 0.04 out_seg_dice_hard. Here is the latest training run, which stopped because the learning rate was already very small:

Epoch 00037: val_out_seg_dice_hard did not improve from 0.03707
Epoch 38/100
475/475 [==============================] - 61s 128ms/step - loss: 0.9797 - out_seg_loss: 0.9708 - out_recon_loss: 0.0090 - out_seg_dice_hard: 0.0283 - val_loss: 0.9765 - val_out_seg_loss: 0.9667 - val_out_recon_loss: 0.0099 - val_out_seg_dice_hard: 0.0326
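For context on what a score of ~0.03 means: a "hard" Dice metric binarizes the prediction before measuring overlap with the mask. A minimal NumPy sketch (the 0.5 threshold and the smoothing constant are illustrative defaults, not necessarily the repo's exact values):

```python
import numpy as np

def dice_hard(y_true, y_pred, threshold=0.5, smooth=1e-7):
    """Hard Dice: binarize the prediction, then measure overlap with the mask.

    threshold and smooth are illustrative defaults, not copied from the repo.
    """
    y_pred_bin = (y_pred > threshold).astype(np.float32)
    intersection = np.sum(y_true * y_pred_bin)
    return (2.0 * intersection + smooth) / (np.sum(y_true) + np.sum(y_pred_bin) + smooth)
```

A value stuck near 0.03 therefore means the thresholded prediction barely overlaps the lesion mask at all, regardless of what the soft loss is doing.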

The stroke lesion region is determined based on intensity. Since I used ADC images, the lesion appears hypointense. The task is binary segmentation.
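Since the lesion is hypointense on ADC, one cheap sanity check is whether the ground-truth mask actually lines up with the darker voxels after slicing. A rough sketch; the 0.6 quantile cutoff and the nonzero-voxel brain mask are purely illustrative assumptions:

```python
import numpy as np

def hypointense_overlap(adc_slice, gt_mask, quantile=0.6):
    """Fraction of ground-truth lesion voxels falling in the darker part of the slice.

    The quantile cutoff is an arbitrary illustration, not a clinical threshold;
    the brain mask here is just 'nonzero voxels'.
    """
    brain = adc_slice > 0
    cutoff = np.quantile(adc_slice[brain], quantile)
    dark = (adc_slice < cutoff) & brain
    return np.sum(dark & (gt_mask > 0)) / max(np.sum(gt_mask > 0), 1)
```

If this fraction comes out low, the labels and images are probably misaligned (e.g. a flipped axis or transposed slice order during preprocessing).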

Is there any incompatibility that prevents this algorithm from working on brain MRI? If not, do you have any suggestions to improve the performance?

Thanks

lalonderodney commented 5 years ago

Hello @AuliaRizky, have you tried U-Net and Tiramisu to see if all of them fail? You can check the 'figs' folder to make sure the ground truths and images look correct. Also try turning on debug inside load_3d_data.py to see if the real-time data augmentation is functioning properly (perhaps adjust the parameters of the elastic deformation augmentation). If the images and ground truths look correct going into the network, then feel free to come back and ask for further advice.
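For readers debugging the augmentation: elastic deformation in this kind of pipeline is typically a smoothed random displacement field applied with `map_coordinates`. A minimal 2D sketch; the alpha and sigma values are illustrative, not the repo's defaults:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_deform(image, alpha=34.0, sigma=4.0, seed=None):
    """Warp a 2D image with a smoothed random displacement field.

    alpha scales the displacement, sigma smooths it; both values are
    illustrative. Tune them so the warped ground truth still looks anatomical.
    """
    rng = np.random.default_rng(seed)
    dx = gaussian_filter(rng.uniform(-1, 1, image.shape), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, image.shape), sigma) * alpha
    y, x = np.meshgrid(np.arange(image.shape[0]), np.arange(image.shape[1]),
                       indexing="ij")
    coords = np.vstack([(y + dy).ravel(), (x + dx).ravel()])
    return map_coordinates(image, coords, order=1, mode="reflect").reshape(image.shape)
```

Apply the same field to the image and its mask (with order=0 for the mask), otherwise the labels drift off the lesion and the Dice score degrades for reasons unrelated to the network.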

AuliaRizky commented 5 years ago

Hello @lalonderodney,

Thanks for your response. I found some mistakes in the preprocessing and image-feeding procedure. I trained on the full dataset using U-Net only, and it shows a constant hard Dice value (whether I use BCE, MAR, or Dice loss). Currently I am not using augmentation. I also posted an issue on the SegCaps implementation by Cheng-Lin Li. She suggested an over-fit test: feed only a single image to check whether the model is strong enough for the task. The over-fit test gives a good Dice coefficient of 0.8 to 0.9 with both segcapsbasic and capsnetR3. The problem I found is in the raw test output (before Otsu thresholding): the background value is 0.47 (it should be 0) and the ROI value is above 0.65, with nothing lower than 0.47 or higher than 0.77. After training on the full dataset (with segcapsbasic), the model no longer seems able to do the task. I tested it on an image from the dataset (which I know contains an ROI), and the raw output highlights the entire brain area but recognizes nothing inside it. I think this is caused by the very narrow value range between the ROI and the background in the raw output, as the single-image over-fit test already showed.
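One thing worth quantifying when the raw outputs sit in such a narrow band (0.47 to 0.77 here) is the dynamic range before thresholding. A small sketch; the midpoint split is a stand-in for the Otsu step in the test pipeline, and the function name is made up for illustration:

```python
import numpy as np

def inspect_raw_output(pred):
    """Summarize the dynamic range of a raw prediction map.

    The midpoint split below is a stand-in for the Otsu step; the key
    symptom to look for is a (max - min) range far below 1.0, which means
    the threshold placement becomes extremely sensitive to noise.
    """
    lo, hi = float(pred.min()), float(pred.max())
    mid = (lo + hi) / 2.0
    return {"min": lo, "max": hi, "range": hi - lo,
            "fg_fraction": float(np.mean(pred > mid))}
```

With a range of only ~0.3, min-max rescaling the prediction to [0, 1] before thresholding (`(pred - min) / (max - min)`) can make the split noticeably more stable.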

Do you have any advice on how to solve it? Thank you very much.

Luchixiang commented 4 years ago

Hello @AuliaRizky, I've run into the same problem on the BraTS dataset. Have you solved it?

pavanbaloju commented 4 years ago

> Hello, I'm using segcapsbasic and segnetR3 as the models for stroke segmentation of brain images using the ISLES 2017 dataset. I use 3D MRI data from 25 patients that I sliced (527 2D images in total) and adjusted to work with the SegCaps implementation. The problem is that the training performance never exceeds 0.04 out_seg_dice_hard. Here is the latest training run, which stopped because the learning rate was already very small:
>
> Epoch 00037: val_out_seg_dice_hard did not improve from 0.03707
> Epoch 38/100
> 475/475 [==============================] - 61s 128ms/step - loss: 0.9797 - out_seg_loss: 0.9708 - out_recon_loss: 0.0090 - out_seg_dice_hard: 0.0283 - val_loss: 0.9765 - val_out_seg_loss: 0.9667 - val_out_recon_loss: 0.0099 - val_out_seg_dice_hard: 0.0326
>
> The stroke lesion region is determined based on intensity. Since I used ADC images, the lesion appears hypointense. The task is binary segmentation.
>
> Is there any incompatibility that prevents this algorithm from working on brain MRI? If not, do you have any suggestions to improve the performance?
>
> Thanks

I have the same problem too. If you have a solution, please help me. Thanks in advance!

pavanbaloju commented 4 years ago

> Hello @AuliaRizky, have you tried U-Net and Tiramisu to see if all of them fail? You can check the 'figs' folder to make sure the ground truths and images look correct. Also try turning on debug inside load_3d_data.py to see if the real-time data augmentation is functioning properly (perhaps adjust the parameters of the elastic deformation augmentation). If the images and ground truths look correct going into the network, then feel free to come back and ask for further advice.

Hello, U-Net does well on the dataset, but segcapsbasic does not: the loss isn't decreasing at all, and the prediction is the same for every pixel in the binary segmentation.
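When every pixel gets the same prediction, a quick check before another full training run is whether the network output has any spatial spread at all. A hedged sketch operating on a predicted batch (the tolerance is an illustrative cutoff, not a tuned value):

```python
import numpy as np

def is_collapsed(pred_batch, tol=1e-3):
    """Flag a batch of prediction maps whose per-image spread is near zero.

    tol is an illustrative cutoff: if (max - min) within each predicted map
    stays below it, the network is emitting a constant, which usually points
    to a vanished gradient, a loss-weighting problem, or one-class labels.
    """
    flat = pred_batch.reshape(pred_batch.shape[0], -1)
    spreads = flat.max(axis=1) - flat.min(axis=1)
    return bool(np.all(spreads < tol)), spreads
```

If this flags a collapse while U-Net trains fine on the same data, the usual suspects are the segmentation-loss weighting relative to the reconstruction loss, or a learning rate that is too high for the capsule routing to stabilize.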