I am using my own dataset for training. Here is the segmentation result for the human ventricle with only 8 training samples and around 5000 iterations. The result is not very satisfactory with this small amount of data, but it does perform well once the training set exceeds 50 samples.
The same code has also been applied to T2 spine and CBCT mandible segmentation.
[Figure: image / label / V-Net output]
To get fast, good results, it is better to do some pre- and post-processing. Don't simply feed raw data to the network; use prior knowledge to crop to a confined region (in the ventricle example I use a brain atlas template to locate the central brain area).
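For illustration, here is a minimal SimpleITK cropping sketch. The image path, center index, and ROI size are hypothetical placeholders; in practice the center would come from registering the atlas template to the image:

```python
import SimpleITK as sitk

def crop_to_region(image, center_index, size):
    """Crop a confined ROI around a known anatomical center."""
    start = [int(c - s // 2) for c, s in zip(center_index, size)]
    # Clamp the start index so the ROI stays inside the image bounds
    start = [max(0, min(st, dim - sz))
             for st, sz, dim in zip(start, size, image.GetSize())]
    return sitk.RegionOfInterest(image, size, start)

image = sitk.ReadImage("ventricle.nii.gz")  # hypothetical path
cropped = crop_to_region(image, center_index=(128, 128, 64), size=(96, 96, 64))
sitk.WriteImage(cropped, "ventricle_cropped.nii.gz")
```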
You may find small isolated islands in the prediction, which can easily be removed by finding the largest connected components and volume thresholding the region. Spiky edges can be refined with a conditional random field or Bayesian learning on top of the V-Net output.
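A rough sketch of that island-removal step with SimpleITK (min_voxels is an assumed threshold you would tune to your anatomy and voxel spacing):

```python
import SimpleITK as sitk

def remove_small_islands(binary_mask, min_voxels=1000):
    """Drop connected components smaller than min_voxels voxels."""
    cc = sitk.ConnectedComponent(binary_mask)  # label each isolated island
    # Relabel components sorted by size; those below the
    # minimum object size are discarded entirely
    relabeled = sitk.RelabelComponent(cc, min_voxels)
    return sitk.Greater(relabeled, 0)          # back to a binary 0/1 mask

# To keep only the single largest component, use sitk.Equal(relabeled, 1),
# since RelabelComponent assigns label 1 to the largest object.
```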
Thanks for the information. Could you share with us a few more details:
@mobbSF, regarding your questions:
The code can read .nii and .nii.gz files without any changes. I'm not sure about the .hdr/.img Analyze format, but I believe it can be read directly.
The image dimensions are arbitrary, but they will affect training speed and convergence rate. By default, if your image is in RAS orientation, the last dimension corresponds to the z axis in world coordinates.
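As a quick orientation sanity check, something like the following should work (sitk.DICOMOrient requires a recent SimpleITK, >= 2.0; the file name is hypothetical):

```python
import SimpleITK as sitk

image = sitk.ReadImage("scan.nii.gz")         # hypothetical path
print(image.GetSize(), image.GetDirection())  # inspect dimensions/orientation
# Reorient to RAS so the last index axis corresponds to z (superior)
ras_image = sitk.DICOMOrient(image, "RAS")
```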
If you need to read DICOM images, you need to modify the reader in NiftiDataset.py with the following reader: https://simpleitk.readthedocs.io/en/master/Examples/DicomSeriesReader/Documentation.html JPG/PNG slices can be assembled into a 3D ITK image by manually stacking them. NIfTI is the best single-file medical image format, as it preserves spatial information and orientation in its header.
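Following the linked documentation, a minimal DICOM series reader sketch looks like this (the folder path is hypothetical); the same ImageSeriesReader can also stack a sorted list of 2D PNG/JPG slices into a 3D volume:

```python
import SimpleITK as sitk

reader = sitk.ImageSeriesReader()
# Collect the slice files of the series in the folder, in correct order
dicom_names = reader.GetGDCMSeriesFileNames("/path/to/dicom_folder")
reader.SetFileNames(dicom_names)
image = reader.Execute()  # 3D volume with spacing/origin/direction set

# Save as NIfTI so spatial information is preserved in the header
sitk.WriteImage(image, "volume.nii.gz")
```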
Hello,
I have a question about the training data that you used. In the cited V-Net paper, the authors used MRI data from the PROMISE12 challenge (https://promise12.grand-challenge.org/). Are you testing your implementation with the same data?
Best regards, Yves