LuxImagingAI / DBSegment

This is a deep learning-based method to segment deep brain structures and a brain mask from T1 weighted MRI.
GNU General Public License v3.0

Model Training #24

Open Kimmm-pzl opened 1 year ago

Kimmm-pzl commented 1 year ago

Hi,

How can we use your open-source code for model training? Given that your model is based on nnU-Net, is the overall training process similar? Does the training data also need to be stored in folders such as "nnUNet_raw_data_base", as with nnU-Net? In other words, could you provide some guidance on model training?

Looking forward to your reply! This would help me a lot!

Best Kim

MehriB commented 1 year ago

Hi Kim, for training you can first use main_preprocess to preprocess your train/dev images, then use nnU-Net to train a model; and yes, the input data should be in the format expected by nnU-Net. For training, we performed a leave-one-dataset-out cross-validation instead of nnU-Net's default random 5-fold cross-validation, which requires some code modification in nnU-Net (though this is not strictly necessary). Once training is finished, you can use our post-processing to transform the images back to native space; otherwise, the output segmentations will be in the space of the preprocessed images.
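For the leave-one-dataset-out cross-validation mentioned above, nnU-Net v1 lets you override its random 5-fold split by writing a custom `splits_final.pkl` into the preprocessed task folder. A minimal sketch, assuming (hypothetically) that each case identifier is prefixed with the name of its source dataset, e.g. `dsA_001`:

```python
# Sketch: build leave-one-dataset-out folds for nnU-Net v1.
# Assumption (hypothetical): case IDs encode their source dataset as a
# prefix before the first underscore, e.g. "dsA_001" comes from "dsA".
import pickle
from collections import defaultdict

def build_lodo_splits(case_ids):
    """Return nnU-Net-style folds: one fold per dataset, with that
    dataset held out as the validation set."""
    by_dataset = defaultdict(list)
    for cid in case_ids:
        by_dataset[cid.split("_")[0]].append(cid)
    splits = []
    for held_out in sorted(by_dataset):
        val = sorted(by_dataset[held_out])
        train = sorted(c for d, cases in by_dataset.items()
                       if d != held_out for c in cases)
        # nnU-Net v1 expects a list of {"train": [...], "val": [...]} dicts
        splits.append({"train": train, "val": val})
    return splits

if __name__ == "__main__":
    cases = ["dsA_001", "dsA_002", "dsB_001", "dsC_001", "dsC_002"]
    splits = build_lodo_splits(cases)
    # Write to the preprocessed task folder so nnU-Net picks it up
    # (the path here is only an example):
    with open("splits_final.pkl", "wb") as f:
        pickle.dump(splits, f)
```

With the file in place, you then train one fold per dataset instead of the default folds 0-4.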

Kimmm-pzl commented 1 year ago

Hi,

Thank you for your reply!

I'm sorry to bother you so many times. In addition to using the "main_preprocess" function, do I need to call nnU-Net's "nnUNet_plan_and_preprocess" command to preprocess the data? For training, should I follow nnU-Net's training tutorial, "Example: 3D U-Net training on the Hippocampus dataset", and use "nnUNet_train 3d_fullres"?
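For reference, the two commands named above belong to the nnU-Net v1 CLI. A sketch of how they are typically invoked (the task ID/name `Task501_DBS` and all paths are placeholders, not from this repo):

```shell
# nnU-Net v1 locates data through these environment variables:
export nnUNet_raw_data_base=/path/to/nnUNet_raw_data_base
export nnUNet_preprocessed=/path/to/nnUNet_preprocessed
export RESULTS_FOLDER=/path/to/nnUNet_trained_models

# After placing the images (e.g. the main_preprocess output, converted
# to nnU-Net's dataset format) under
# $nnUNet_raw_data_base/nnUNet_raw_data/Task501_DBS, run:
nnUNet_plan_and_preprocess -t 501 --verify_dataset_integrity

# Train the 3d_fullres configuration, one fold at a time (folds 0-4 by
# default; with a custom splits_final.pkl the folds follow that file):
for FOLD in 0 1 2 3 4; do
    nnUNet_train 3d_fullres nnUNetTrainerV2 Task501_DBS $FOLD
done
```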

I also want to ask whether the "nii.gz" files in the "preprocessed_v3" folder under the "mr_data" folder, generated by the command "DBSegment -i ./test/mr_data -o ./test/mr_seg -mp ./test/models", have already been preprocessed by nnU-Net and the "main_preprocess" function.

Looking forward to your reply, thanks!

Best Kim