MrGiovanni / ModelsGenesis

[MICCAI 2019 Young Scientist Award] [MEDIA 2020 Best Paper Award] Models Genesis

Reproduction of BraTS results #36

Open camilleruppli opened 3 years ago

camilleruppli commented 3 years ago

Hi! First, thank you for making your incredible work available. Your results are outstanding and the code is easy to understand. Before applying your work to another application I am investigating, I first wanted to replicate your results on the BraTS dataset. I have downloaded and preprocessed the data: resizing the images to 64x64x64 and normalizing them between 0 and 1. However, when fine-tuning Genesis Chest CT I cannot get anything close to your results.

Is there something I am missing? Should the whole image be fed to the network, or should it be split into patches? Also, in the code example given in the keras folder, num_classes is set to 2 for segmentation, which I do not understand, since the final convolution will then produce a mask of shape (2, 64, 64, 64).

Thank you for your help, Regards, Camille

MrGiovanni commented 3 years ago

Hi @camilleruppli

Thanks for reaching out. Genesis Chest CT was pre-trained on sub-volumes of 64x64x32, and we recommend keeping the same input shape for target tasks as well (we encountered degraded performance when using different shapes, e.g., 64x64x64 or 128x128x64). Also, for MRI images, normalization was done at the image level:

img = (img - np.min(img)) / (np.max(img) - np.min(img))

Note that img is the MRI image for one patient, not the entire dataset.
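For example, a minimal sketch of this per-patient normalization (the helper name and the random stand-in volume are just for illustration):

```python
import numpy as np

def normalize_per_volume(img):
    """Min-max normalize a single patient's MRI volume to [0, 1]."""
    img = img.astype(np.float32)
    return (img - np.min(img)) / (np.max(img) - np.min(img))

# Apply per patient volume, never over the entire dataset.
volume = np.random.rand(64, 64, 32).astype(np.float32)  # stand-in for one 64x64x32 sub-volume
volume = normalize_per_volume(volume)
```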

We are cleaning up the target task scripts; they will be released as Jupyter notebooks shortly.

Hope it helps.

Zongwei

camilleruppli commented 3 years ago

Hi @MrGiovanni! Thank you for your answer. I have also normalized the images the same way you did, so that should not be the problem; I will try training with 64x64x32. As for num_classes in the segmentation example, could you explain why it is 2 and not 1, given that it creates segmentation masks with 2 channels? After changing the number of classes from 2 to 1 and using a sigmoid output activation instead of the softmax used in the example, the model does seem to learn something, but the mean IoU stays stuck at 0.49. Could you explain the choice of a softmax for the output of the segmentation example?

Thanks for your help, Regards, Camille

MrGiovanni commented 3 years ago

Hi @camilleruppli

I would say sigmoid and softmax make no major difference if you use them correctly.

Usually, for binary or multi-label classification/segmentation, I would use the sigmoid function in the last layer; for multi-class classification/segmentation, I would use the softmax function in the last layer.
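As a concrete illustration, here is a minimal Keras sketch of the two output-layer choices (simplified, not our exact segmentation model; the decoder feature tensor is just a channels-last placeholder, whereas the keras model in this repo is channels-first):

```python
from tensorflow.keras import layers

# Hypothetical decoder output: 16 feature channels per voxel (channels-last for simplicity).
features = layers.Input(shape=(64, 64, 32, 16))

# Binary / multi-label segmentation: one channel per label + sigmoid,
# typically trained with binary cross-entropy or a Dice-style loss.
binary_mask = layers.Conv3D(1, kernel_size=1, activation="sigmoid")(features)

# Multi-class segmentation: one channel per class (here 2: background and tumor)
# + softmax across the channel axis, typically trained with categorical cross-entropy;
# the final mask is the argmax over the class channels.
multiclass_mask = layers.Conv3D(2, kernel_size=1, activation="softmax")(features)
```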

You can find more explanation about sigmoid and softmax online, e.g., https://medium.com/arteos-ai/the-differences-between-sigmoid-and-softmax-activation-function-12adee8cf322

You might want to take a look at our recently released target task example at https://github.com/MrGiovanni/ModelsGenesis/blob/master/keras/downstream_tasks/lung%20nodule%20segmentation.ipynb for detailed setups about data pre-processing, visualization, model training, testing, and so on.
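Roughly, the fine-tuning pattern looks like the sketch below: build the 3D U-Net from the keras folder, load the released pre-trained weights, and then train on the target task. This is a simplified illustration; the loss, data pipeline, and training configuration in the notebook may differ, so please follow the notebook for the exact setup.

```python
# Rough fine-tuning sketch (assumed API: unet_model_3d from the repo's keras folder
# and the released Genesis_Chest_CT.h5 weights; see the notebook above for the
# exact model, loss, and training configuration).
from unet3d import unet_model_3d

# Channels-first sub-volumes of 64x64x32, as recommended above.
model = unet_model_3d((1, 64, 64, 32), batch_normalization=True)
model.load_weights("pretrained_weights/Genesis_Chest_CT.h5")

model.compile(optimizer="adam", loss="binary_crossentropy")
# model.fit(x_train, y_train, ...) on sub-volumes prepared as described above.
```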

Thank you, Zongwei