frankkramer-lab / MIScnn

A framework for Medical Image Segmentation with Convolutional Neural Networks and Deep Learning
GNU General Public License v3.0
407 stars 116 forks

Which standard configuration to use when comparing MIScnn to other methods? #136

Open davidkvcs opened 2 years ago

davidkvcs commented 2 years ago

Hi

I have MIScnn shortlisted as a candidate to be included in our work to find the best openly available method for auto-segmenting head and neck cancer (HNC) tumors on PET-CT. Our study includes 1100 HNC patients: we train on ~850 of these, and the rest are held out for testing.

  1. Can MIScnn be configured to handle multimodal input?
  2. Your setup allows for many different configurations. Do you have a paper outlining which configurations I should use for our problem? I have seen "MIScnn: a framework for medical image segmentation with convolutional neural networks and deep learning", where you choose normalization, resampling, patch size, etc., but I am unsure whether you would use those same configurations for a problem like ours, or when comparing against other methods. (If I may ask: how did you arrive at these specific configuration choices?)

If it is described somewhere how we should configure MIScnn for our problem, it would most likely be eligible for our study.

Thanks in advance!

muellerdo commented 2 years ago

Hey @davidkvcs,

> I have MIScnn shortlisted as a candidate to be included in our work to find the best openly available method to autosegment head and neck cancer (HNC) tumors on PET-CT.

Happy to hear that MIScnn can be useful for your studies.

While you can run MIScnn on your dataset with the default parameters and probably obtain competitive results, I have to note that MIScnn is a framework/toolbox for building such pipelines, not an AutoML / autosegmentation software like the excellent nnU-Net from the DKFZ.

> Can MIScnn be configured to handle multimodal input?

Yes. This can be achieved either by implementing a custom IO Interface or by combining the different modalities into a single NIfTI file for each sample and using the NIfTI IO Interface provided by MIScnn.

You can find the BraTS2020 example (combines multiple MRI sequences) for multimodality datasets here:
https://github.com/frankkramer-lab/MIScnn/blob/master/examples/BraTS2020.multimodal.ipynb
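For the second option, the core step is just stacking the co-registered volumes along a new trailing channel axis before writing one NIfTI per sample. A minimal sketch (the file names are hypothetical, and actual NIfTI I/O via nibabel is only indicated in comments):

```python
# Sketch: merge co-registered CT and PET volumes into one multi-channel
# array per sample, so MIScnn's NIfTI IO Interface (channels=2) can load it.
# Assumes both volumes have already been resampled to the same grid.
import numpy as np

def combine_modalities(ct: np.ndarray, pet: np.ndarray) -> np.ndarray:
    """Stack two co-registered 3D volumes into a (X, Y, Z, 2) array."""
    if ct.shape != pet.shape:
        raise ValueError("CT and PET must be resampled to the same grid")
    return np.stack([ct, pet], axis=-1)

# Toy example with random data standing in for real scans:
ct = np.random.rand(64, 64, 32)
pet = np.random.rand(64, 64, 32)
volume = combine_modalities(ct, pet)
print(volume.shape)  # (64, 64, 32, 2)

# With nibabel installed, the merged array would then be written out, e.g.:
#   import nibabel as nib
#   img = nib.load("sample/ct.nii.gz")          # reuse the CT affine
#   nib.save(nib.Nifti1Image(volume, img.affine), "sample/imaging.nii.gz")
```

The BraTS2020 notebook linked above does the equivalent merge for four MRI sequences, so it is the best reference for the exact layout MIScnn expects.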

> Your setup allows for a lot of different configurations. Do you have a paper outlining configurations I should use for our problem?

Sadly, no. Optimal configurations can vary widely depending on your specific dataset and disease type.

However, I can recommend some "good-enough" starting configurations for CT analysis. For that, I would refer to our COVID-19 example: https://github.com/frankkramer-lab/covid19.MIScnn/blob/master/scripts/run_miscnn.py
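The gist of that script can be sketched roughly as below. Note this is only an illustrative outline: the specific values (HU clipping range, voxel spacing, patch shape, loss) are the ones used in the COVID-19 example, the ordering of subfunctions and exact module paths may differ across MIScnn versions, and the linked script remains the authoritative version:

```python
# Rough sketch of a CT starting configuration, adapted from the
# covid19.MIScnn script linked above; tune all values for your own dataset.
from miscnn import Data_IO, Preprocessor, Neural_Network
from miscnn.data_loading.interfaces import NIFTI_interface
from miscnn.processing.subfunctions import Clipping, Normalization, Resampling
from miscnn.neural_network.metrics import tversky_crossentropy

# IO: one channel (CT); class count depends on your annotation scheme
interface = NIFTI_interface(channels=1, classes=3)
data_io = Data_IO(interface, "data")

# Preprocessing: clip Hounsfield units to a task-appropriate window,
# resample to a fixed voxel spacing, then normalize intensities
subfunctions = [Clipping(min=-1250, max=250),
                Resampling((1.58, 1.58, 2.70)),
                Normalization(mode="grayscale")]

pp = Preprocessor(data_io, batch_size=2, subfunctions=subfunctions,
                  prepare_subfunctions=True, prepare_batches=False,
                  analysis="patchwise-crop", patch_shape=(160, 160, 80))

# Standard 3D U-Net with a combined Tversky + cross-entropy loss
model = Neural_Network(preprocessor=pp, loss=tversky_crossentropy)
```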

Summary:

  - Multimodal input: yes, either via a custom IO Interface or by merging the modalities into a single NIfTI file per sample.
  - Standard configuration: none published; the COVID-19 example above is a reasonable starting point for CT.

Hope that I was able to help you, and good luck with your study.

Cheers, Dominik

joaomamede commented 2 years ago

On question #1: the example has patch_shape=(80, 160, 160). This means the model takes each "channel" independently, correct? It doesn't model across the 4 channels simultaneously.

I tried something like patch_shape=(2, 80, 160, 160), but the U-Net doesn't support 4D patches.

Am I reading the situation wrong?

Thanks so much for MIScnn, it's great; we were able to segment organs from CT/PET scans!