You have to create three directories (copy-paste this code):
mkdir nnUNet_raw nnUNet_preprocessed nnUNet_results
Then add the next three lines to your .bash_profile (here $(pwd) stands for the directory in which you created the folders):
export nnUNet_raw="$(pwd)/nnUNet_raw"
export nnUNet_preprocessed="$(pwd)/nnUNet_preprocessed"
export nnUNet_results="$(pwd)/nnUNet_results"
If you are working on rosenberg, create them on nvme or extra_ssd.
nnUNet_raw
Place the raw datasets here.
This folder will have one subfolder per dataset, named DatasetXXX_YYY, where XXX is a 3-digit identifier (such as 001, 002, 043, 999, ...) and YYY is the (unique) dataset name.
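The naming rule above can be sketched with printf (the id and name here are hypothetical examples):

```shell
# Build a DatasetXXX_YYY folder name: XXX is the zero-padded 3-digit id.
dataset_id=1
dataset_name=rootseg
printf 'Dataset%03d_%s\n' "$dataset_id" "$dataset_name"   # prints Dataset001_rootseg
```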
dataset.json example:
{
    "channel_names": {  # remove unused channels
        "0": "FLAIR",
        "1": "T1w",
        "2": "T2",
        "3": "T2w"
    },
    "labels": {  # use label values from 0 to X
        "background": 0,
        "label_1": 1,
        "label_2": 2
    },
    "numTraining": 0,  # replace by the number of subjects in imagesTr
    "file_ending": ".nii.gz",
    "overwrite_image_reader_writer": "SimpleITKIO"
}
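Since the actual file must be valid JSON (the # comments above are annotations and must not appear in it), it can be convenient to generate it from a script. A minimal sketch with illustrative values only:

```shell
# Write a minimal, comment-free dataset.json (adapt channels, labels
# and numTraining to your own data).
cat > dataset.json <<'EOF'
{
    "channel_names": {"0": "FLAIR"},
    "labels": {"background": 0, "label_1": 1},
    "numTraining": 10,
    "file_ending": ".nii.gz",
    "overwrite_image_reader_writer": "SimpleITKIO"
}
EOF
```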
imagesTr
Contains the images belonging to the training cases.
Naming: NAME_NUM_CHANNEL.nii.gz
NUM = subject number
CHANNEL = 4-digit channel index: FLAIR (0000), T1w (0001), T1gd (0002), T2w (0003)
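The image naming can be composed the same way (the 3-digit padding of NUM is an assumption here; the channel index is 4 digits):

```shell
# Compose a training image name: NAME_NUM_CHANNEL.nii.gz
# (subject name, subject number, channel index).
printf '%s_%03d_%04d.nii.gz\n' rootseg 1 0   # prints rootseg_001_0000.nii.gz
```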
labelsTr
Contains the ground-truth segmentation images.
Naming: NAME_NUM.nii.gz
NUM = subject number
In the training command, replace X with the GPU id.
A checkpoint is saved every 50 epochs (don't stop before the 50th epoch if you want to run inference).
Configuration
2d
3d_fullres
3d_lowres
3d_cascade_fullres
Fold
nnU-Net uses 5-fold cross-validation by default; each training command trains a single fold (0-4).
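To cover all five folds, the training command is simply repeated per fold. A sketch, shown as a dry run (remove the leading echo to actually launch training; DATASET_ID and CONFIG stand for your dataset id and configuration):

```shell
# Dry run: print the five per-fold training commands on GPU 0.
for FOLD in 0 1 2 3 4; do
  echo "CUDA_VISIBLE_DEVICES=0 nnUNetv2_train DATASET_ID CONFIG $FOLD --npz"
done
```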
Track progress
A progress graph is written to nnUNet_results/DATASET_ID/nnUNetTrainer__nnUNetPlans__3d_fullres/fold_X/progress.png
You can copy it to your machine with scp SERVER:REMOTE_PATH LOCAL_PATH
Run inference
Basic run
Only possible after 50+ epochs (once the first checkpoint has been saved).
nnUNetv2_predict -i nnUNet_raw/DATASET_ID/imagesTs -o OUT_DIR -d DATASET_ID -c CONFIG --save_probabilities -chk checkpoint_best.pth -f FOLD
If you've trained 3d_fullres on fold 0 of Dataset001_rootseg, use nnUNetv2_predict -i nnUNet_raw/Dataset001_rootseg/imagesTs -o result_test/ -d 001 -c 3d_fullres --save_probabilities -chk checkpoint_best.pth -f 0
What is nnUNet?
nnU-Net is a medical image segmentation framework that configures itself automatically in a data-driven way; it is available on GitHub.
Installation
Code management
Requires Python >= 3.9.
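A quick way to check the requirement before installing (the PyPI package name nnunetv2 comes from the nnU-Net README):

```shell
# Fails loudly if the interpreter is too old for nnU-Net.
python3 -c 'import sys; assert sys.version_info >= (3, 9), "nnU-Net needs Python >= 3.9"'
# Then install:
# pip install nnunetv2
```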
Data management
dataset.json
Contains metadata that nnU-Net needs for training
imagesTs
Folder for the test images; not used during training.
Train a model
Verify data integrity
nnUNetv2_plan_and_preprocess -d DATASET_ID --verify_dataset_integrity
Train model
CUDA_VISIBLE_DEVICES=X nnUNetv2_train DATASET_ID CONFIG FOLD --npz
Best configuration run
...