josedolz / LiviaNET

This repository contains the code of LiviaNET, a 3D fully convolutional neural network that was employed in our work: "3D fully convolutional networks for subcortical segmentation in MRI: A large-scale study"

How to see the segmentation results after training? #2

Closed. YilinLiu97 closed this issue 7 years ago.

YilinLiu97 commented 7 years ago

Hi, this work is great! Do we have to write a function ourselves to convert the segmentation results (.mat) into something we can actually view (e.g., NIfTI)? Thanks.

josedolz commented 7 years ago

Hi, thanks for your nice comments.

Actually you have several options to visualize the results.

Let me know whether any of these works for you.

Best.

YilinLiu97 commented 7 years ago

Hi @josedolz, thanks for the help! This is one of the segmentation results that I visualized using MRIcron after the data format conversion. However, it doesn't look like a segmented image, or am I interpreting it wrong? Btw, I used the dataset you provided. Thanks!

[screenshot 2017-07-27 13 19 23]

josedolz commented 7 years ago

Hi,

when you get the segmentation results you have several files. Some of them are the probability maps of each class, before any voxel is assigned to a target label. Then there is one file that contains the final class labels of the segmentation. I keep the probability maps because it is common nowadays to apply some sort of post-processing to the CNN output probabilities, such as a CRF or graph cuts, to improve the results given by the network.
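In case it helps, here is a minimal Python sketch of that last step, i.e. fusing the per-class probability maps into a label map with an argmax. The file name and the .mat key below are placeholders, not the actual names written by the code, so inspect the files produced by your run first.

# Minimal sketch (not the repository's own code): combine per-class
# probability maps into a label map by taking the argmax over the class axis.
# 'ProbMaps_MR_Img6.mat' and the key 'probMaps' are assumed placeholders.
import numpy as np
from scipy.io import loadmat

data = loadmat('ProbMaps_MR_Img6.mat')
probs = data['probMaps']                              # assumed shape (X, Y, Z, numClasses)
labels = np.argmax(probs, axis=-1).astype(np.uint8)   # one label per voxel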

I just ran the example code with the provided dataset and created a NIfTI image to visualize it (keep in mind that it is a tiny network trained for only a few iterations, so the results are far from good :) ). Here you can see the segmentation.

[screenshot 2017-07-27 23 52 09]

YilinLiu97 commented 7 years ago

Hi @josedolz, thanks for the reply! I will install MITK and use it to visualize the result later. Another quick question: there are no pre-processing steps needed to apply this to our own dataset, right? i.e., we don't need to crop it into patches, etc. I assume this because I saw there was a 'segmentVolume' function in your script.

Updated: I'm able to visualize the segmentation results now. It's beautiful!! Great work!! Btw, 1) the MR dataset that you provided for training is from IBSR, right? (And ABIDE just for testing?) Now I'm interested in applying this to our own datasets and segmenting 25 subcortical structures. 2) One big question is about the pre-processing step mentioned above. 3) Also, why is an ROI necessary for training?

It would be great if you have any suggestions on training on a custom dataset. Thanks so much!!

josedolz commented 7 years ago

Hi @YilinLiu97, you do not actually need MITK to visualize the output. You can use any visualization tool (MITK, 3D Slicer, and so on). The thing is that you need to pick the correct file to visualize. To see the final segmentation, the image should be of an integer type (e.g., int8), and the one in your screenshot looks more like double or float.

Just as a hint, I used this (in Matlab) to create the image in my previous answer:

nii = make_nii(vol, [1 1 1], [0 0 0], [8]);
save_nii('Test.nii',nii);

where vol is the segmentation result. In the demo example, the segmentation result is stored in the file Segmentation_MR_Img6.mat.
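If you prefer to stay in Python, a rough equivalent using scipy and nibabel would look like the sketch below. The .mat variable name 'vol' is an assumption about how the segmentation is stored; you can check the actual key with scipy.io.whosmat.

# Rough Python equivalent of the Matlab snippet above (a sketch, not the
# repository's code); the 'vol' key is an assumed variable name.
import numpy as np
import nibabel as nib
from scipy.io import loadmat

vol = loadmat('Segmentation_MR_Img6.mat')['vol']
vol = vol.astype(np.uint8)                      # integer labels, easier for viewers
nii = nib.Nifti1Image(vol, affine=np.eye(4))    # 1 mm isotropic voxels, origin at 0
nib.save(nii, 'Test.nii')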

YilinLiu97 commented 7 years ago

Hi @josedolz, yes, I used the code above to do the conversion. It's just that MRIcron doesn't seem to highlight the segmented parts, so I decided to try other software, such as MITK, and it turned out well. Thanks for the help!!

josedolz commented 7 years ago

I am glad to hear that you solved it, and I hope you find the code useful. :)

Regards.

josedolz commented 7 years ago

Btw, regarding the pre-processing (I forgot to mention it): I typically use public datasets, where images are normally already pre-processed. In case they are not, intensity normalization will often improve your results.
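As a starting point, something along these lines is a common choice: a per-volume zero-mean, unit-variance normalization over foreground voxels. This is only a sketch, not necessarily the exact pre-processing used in the paper.

# Simple per-volume intensity normalization: zero mean, unit variance over
# non-zero (foreground) voxels. The mask is crude; adapt it to your data.
import numpy as np

def normalize_volume(img):
    mask = img > 0
    out = img.astype(np.float32)
    mean = out[mask].mean()
    std = out[mask].std()
    out[mask] = (out[mask] - mean) / (std + 1e-8)
    return out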

YilinLiu97 commented 7 years ago

Got it! Thanks so much! Great work :)