Project-MONAI / MONAI

AI Toolkit for Healthcare Imaging
https://monai.io/

How to improve quality of 3D segmentation? #474

Closed alone-programmer closed 4 years ago

alone-programmer commented 4 years ago

I was able to repurpose the spleen_segmentation_3d.ipynb tutorial for portal vein segmentation based on 3D-IRCADb-01, which contains 20 CT images; I used 15 for training and 5 for validation.

I did not change anything from the spleen segmentation tutorial except adding a randomized affine transformation. After ~3200 epochs, the mean Dice on the validation dataset was ~0.57, while my training loss reached ~0.15.
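For reference, the addition looks roughly like the following sketch using MONAI's `RandAffined` dictionary transform (the parameter values here are illustrative, not necessarily what I used):

```python
import numpy as np
from monai.transforms import RandAffined

# Random affine applied jointly to image and label (dictionary-transform style,
# as in the spleen tutorial); the parameter values below are illustrative only.
rand_affine = RandAffined(
    keys=["image", "label"],
    prob=0.5,
    rotate_range=(np.pi / 18, np.pi / 18, np.pi / 18),  # up to ~10 degrees per axis
    scale_range=(0.1, 0.1, 0.1),
    mode=("bilinear", "nearest"),  # nearest-neighbour for the label keeps it binary
    padding_mode="border",
)
# This transform is appended to the existing Compose([...]) of training transforms.
```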

This accuracy is not great for my application, which uses the segmentation to generate CFD meshes. Is there any subtle trick or configuration I could change or add to the existing spleen tutorial to increase the final mean Dice on the validation dataset? Given the low training loss and the comparatively low validation Dice (relative to what I see in the original spleen segmentation tutorial), I think I'm overfitting. Of course, the original tutorial segments a whole organ (the spleen), whereas I'm trying to segment the portal vein, a sub-region of the liver, so my problem is likely harder. I appreciate any ideas or suggestions.

Nic-Ma commented 4 years ago

Hi @alone-programmer ,

Thanks for your interest in MONAI and for trying it out. Regarding your question, it's hard to answer definitively without further experiments, but I suggest starting with the basic steps below:

  1. Use the visualization tool to plot the 3D input image, label, and model output in TensorBoard: https://github.com/Project-MONAI/MONAI/blob/master/monai/visualize/img2tensorboard.py#L151 Then analyze the bad cases; maybe the model predicts an all-zero output (see the logging sketch after this list).
  2. Split the dataset into 5 folds and run cross validation to check whether the images are similar across folds (a possible split is sketched below).
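
A minimal sketch of step 1, assuming the validation loop produces batch-first (N, C, H, W, D) tensors as in the spleen tutorial; the function and log-directory names are hypothetical:

```python
from torch.utils.tensorboard import SummaryWriter
from monai.visualize import plot_2d_or_3d_image

writer = SummaryWriter(log_dir="./runs/portal_vein")  # hypothetical log directory

def log_validation_case(val_images, val_labels, val_outputs, epoch):
    """Log the first case of a validation batch to TensorBoard (3D volumes become animated GIFs)."""
    plot_2d_or_3d_image(val_images, epoch + 1, writer, index=0, tag="image")
    plot_2d_or_3d_image(val_labels, epoch + 1, writer, index=0, tag="label")
    plot_2d_or_3d_image(val_outputs, epoch + 1, writer, index=0, tag="output")
```

For step 2, one possible 5-fold split using scikit-learn's `KFold`; the file layout and names below are assumptions, and `data_dicts` stands in for the image/label dictionary list built as in the spleen tutorial:

```python
import glob
import os
from sklearn.model_selection import KFold

data_dir = "./3Dircadb1"  # hypothetical dataset location and file names
images = sorted(glob.glob(os.path.join(data_dir, "*", "image.nii.gz")))
labels = sorted(glob.glob(os.path.join(data_dir, "*", "portalvein.nii.gz")))
data_dicts = [{"image": img, "label": lbl} for img, lbl in zip(images, labels)]

kfold = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(kfold.split(data_dicts)):
    train_files = [data_dicts[i] for i in train_idx]
    val_files = [data_dicts[i] for i in val_idx]
    # Build the datasets/loaders and train one model per fold here, then compare
    # per-fold validation Dice to see how sensitive the result is to which
    # 5 images end up in validation.
```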

Thanks.