ivadomed / model-spinal-rootlets

Deep-learning based segmentation of the spinal nerve rootlets

Lumbar rootlets - model training on `Draw Tube` labels #67

valosekj opened this issue 1 month ago

valosekj commented 1 month ago

This issue summarizes model training on T2w lumbar data with relabeled rootlets using the 3D Slicer Draw Tube module.

This is a follow-up to https://github.com/ivadomed/model-spinal-rootlets/issues/48.

0. Data overview

We have 6 subjects with the following labels:

sub_labels = {"sub-CTS04": {"start": "T11", "end": "S1"},
              "sub-CTS05": {"start": "T11", "end": "S1"},
              "sub-CTS09": {"start": "T10", "end": "S2"},
              "sub-CTS10": {"start": "T11", "end": "S1"},
              "sub-CTS14": {"start": "T11", "end": "S1"},
              "sub-CTS15": {"start": "T10", "end": "S2"}}

1. Preparing nnUNet folders

details

```bash
# imagesTr
cp sub-CTS04_ses-SPpre_acq-zoomit_T2w.nii.gz Dataset301_LumbarRootlets/imagesTr/sub-CTS04_ses-SPpre_T2w_001_0000.nii.gz
cp sub-CTS05_ses-SPpre_acq-zoomit_T2w.nii.gz Dataset301_LumbarRootlets/imagesTr/sub-CTS05_ses-SPpre_T2w_001_0000.nii.gz
cp sub-CTS09_ses-SPpre_acq-zoomit_T2w.nii.gz Dataset301_LumbarRootlets/imagesTr/sub-CTS09_ses-SPpre_T2w_001_0000.nii.gz
cp sub-CTS10_ses-SPanat_acq-zoomit_T2w.nii.gz Dataset301_LumbarRootlets/imagesTr/sub-CTS10_ses-SPanat_T2w_001_0000.nii.gz
cp sub-CTS14_ses-SPpre_acq-zoomit_T2w.nii.gz Dataset301_LumbarRootlets/imagesTr/sub-CTS14_ses-SPpre_T2w_001_0000.nii.gz
cp sub-CTS15_ses-SPpre_acq-zoomit_T2w.nii.gz Dataset301_LumbarRootlets/imagesTr/sub-CTS15_ses-SPpre_T2w_001_0000.nii.gz
# labelsTr
cp T11-S1_RD_LD_sub-CTS04_relabeled.nii.gz Dataset301_LumbarRootlets/labelsTr/sub-CTS04_ses-SPpre_T2w_001.nii.gz
cp T11-S1_RD_LD_sub-CTS05_relabeled.nii.gz Dataset301_LumbarRootlets/labelsTr/sub-CTS05_ses-SPpre_T2w_001.nii.gz
cp T10-S2_RD_LD_sub-CTS09_relabeled.nii.gz Dataset301_LumbarRootlets/labelsTr/sub-CTS09_ses-SPpre_T2w_001.nii.gz
cp T11-S1_RD_LD_sub-CTS10_relabeled.nii.gz Dataset301_LumbarRootlets/labelsTr/sub-CTS10_ses-SPanat_T2w_001.nii.gz
cp T11-S1_RD_LD_sub-CTS14_relabeled.nii.gz Dataset301_LumbarRootlets/labelsTr/sub-CTS14_ses-SPpre_T2w_001.nii.gz
cp T10-S2_RD_LD_sub-CTS15_relabeled.nii.gz Dataset301_LumbarRootlets/labelsTr/sub-CTS15_ses-SPpre_T2w_001.nii.gz
```
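If more subjects are added later, this renaming into the nnUNet convention could also be scripted. A minimal Python sketch, assuming the BIDS images and the relabeled label files sit in the current directory and extending the `sub_labels` dict above with the session IDs seen in the file names:

```python
import shutil
from pathlib import Path

# Sketch only: file locations and the extra "ses" entries are assumptions
# based on the copy commands above.
sub_labels = {"sub-CTS04": {"start": "T11", "end": "S1", "ses": "ses-SPpre"},
              "sub-CTS10": {"start": "T11", "end": "S1", "ses": "ses-SPanat"}}  # etc.

dataset = Path("Dataset301_LumbarRootlets")
for sub, info in sub_labels.items():
    ses = info["ses"]
    img = Path(f"{sub}_{ses}_acq-zoomit_T2w.nii.gz")
    lab = Path(f"{info['start']}-{info['end']}_RD_LD_{sub}_relabeled.nii.gz")
    # Copy into nnUNet naming: <case>_0000 for the image, <case> for the label
    shutil.copy(img, dataset / "imagesTr" / f"{sub}_{ses}_T2w_001_0000.nii.gz")
    shutil.copy(lab, dataset / "labelsTr" / f"{sub}_{ses}_T2w_001.nii.gz")
```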

2. Changing label values to be consecutive (this is required by nnUNet)

details

Original values:

```bash
cd labelsTr
for file in *nii.gz;do get_unique_values $file;done
[ 0. 19. 20. 21. 22. 23. 24. 25. 26.]
[ 0. 19. 20. 21. 22. 23. 24. 25. 26.]
[ 0. 18. 19. 20. 21. 22. 23. 24. 25. 26. 27.]
[ 0. 19. 20. 21. 22. 23. 24. 25. 26.]
[ 0. 19. 20. 21. 22. 23. 24. 25. 26.]
[ 0. 18. 19. 20. 21. 22. 23. 24. 25. 26. 27.]
```

Removing label 18 (present in only two subjects) for now:

```bash
sct_maths -i sub-CTS09_ses-SPpre_T2w_001.nii.gz -thr 19 -o sub-CTS09_ses-SPpre_T2w_001.nii.gz
sct_maths -i sub-CTS15_ses-SPpre_T2w_001.nii.gz -thr 19 -o sub-CTS15_ses-SPpre_T2w_001.nii.gz
```

Recoding using [recode_nii.py](https://github.com/ivadomed/model-spinal-rootlets/blob/main/utilities/recode_nii.py):

```bash
for file in *nii.gz;do python ~/code/model-spinal-rootlets/utilities/recode_nii.py -i $file -o $file;done
Unique values in the data: [ 0. 19. 20. 21. 22. 23. 24. 25. 26.]
Unique values in the recoded data: [0 1 2 3 4 5 6 7 8]
Unique values in the data: [ 0. 19. 20. 21. 22. 23. 24. 25. 26.]
Unique values in the recoded data: [0 1 2 3 4 5 6 7 8]
Unique values in the data: [ 0. 19. 20. 21. 22. 23. 24. 25. 26. 27.]
Unique values in the recoded data: [0 1 2 3 4 5 6 7 8 9]
Unique values in the data: [ 0. 19. 20. 21. 22. 23. 24. 25. 26.]
Unique values in the recoded data: [0 1 2 3 4 5 6 7 8]
Unique values in the data: [ 0. 19. 20. 21. 22. 23. 24. 25. 26.]
Unique values in the recoded data: [0 1 2 3 4 5 6 7 8]
Unique values in the data: [ 0. 19. 20. 21. 22. 23. 24. 25. 26. 27.]
Unique values in the recoded data: [0 1 2 3 4 5 6 7 8 9]
```
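For reference, the recoding boils down to mapping the sorted unique label values onto consecutive integers. A minimal nibabel/numpy sketch of that idea (the actual `recode_nii.py` implementation may differ):

```python
import nibabel as nib
import numpy as np

def recode_to_consecutive(in_path, out_path):
    """Map the sorted unique label values (e.g., 0, 19, 20, ...) to 0, 1, 2, ..."""
    img = nib.load(in_path)
    data = np.asanyarray(img.dataobj)
    unique = np.unique(data)                    # includes background (0)
    lut = {v: i for i, v in enumerate(unique)}  # 0 -> 0, 19 -> 1, 20 -> 2, ...
    recoded = np.vectorize(lut.get)(data).astype(np.uint8)
    out = nib.Nifti1Image(recoded, img.affine, img.header)
    out.set_data_dtype(np.uint8)
    nib.save(out, out_path)
```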

3. Training

Fold 1, with 4 training and 2 validation images.

Semantic (level-specific) model: Dataset301_LumbarRootlets

cd ~/code/model-spinal-rootlets/training
bash run_training.sh 1 301 Dataset301_LumbarRootlets

Binary model (all rootlets set to 1): Dataset302_LumbarRootlets

Binarize labels and modify dataset.json:

cd $nnUNet_raw
cp -r Dataset301_LumbarRootlets Dataset302_LumbarRootlets
cd Dataset302_LumbarRootlets/labelsTr
for file in *nii.gz;do sct_maths -i $file -bin 0.5 -o $file;done
cd ..
# modify dataset.json
cd ~/code/model-spinal-rootlets/training
bash run_training.sh 1 302 Dataset302_LumbarRootlets
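The `# modify dataset.json` step above refers to collapsing the per-level label entries into a single foreground class. A minimal sketch of such an edit, assuming the standard nnUNetv2 `dataset.json` layout and a hypothetical class name `rootlet`:

```python
import json
from pathlib import Path

dataset_json = Path("Dataset302_LumbarRootlets") / "dataset.json"
with open(dataset_json) as f:
    cfg = json.load(f)

# Binary model: background + a single "rootlet" class (all levels merged)
cfg["labels"] = {"background": 0, "rootlet": 1}

with open(dataset_json, "w") as f:
    json.dump(cfg, f, indent=4)
```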
valosekj commented 1 month ago

`bash run_training.sh 1 301 Dataset301_LumbarRootlets` raised the following errors when running `nnUNetv2_plan_and_preprocess`:

-------------------------------------------------------
Running preprocessing and verifying dataset integrity
-------------------------------------------------------
Fingerprint extraction...
Dataset301_LumbarRootlets
Using <class 'nnunetv2.imageio.simpleitk_reader_writer.SimpleITKIO'> reader/writer
Error: Shape mismatch between segmentation and corresponding images.
Shape images: (192, 372, 1024).
Shape seg: (125, 417, 85).
Image files: ['/home/GRAMES.POLYMTL.CA/p118175/data/nnunetv2/nnUNet_raw/Dataset301_LumbarRootlets/imagesTr/sub-CTS05_ses-SPpre_T2w_001_0000.nii.gz'].
Seg file: /home/GRAMES.POLYMTL.CA/p118175/data/nnunetv2/nnUNet_raw/Dataset301_LumbarRootlets/labelsTr/sub-CTS05_ses-SPpre_T2w_001.nii.gz

Error: Spacing mismatch between segmentation and corresponding images.
Spacing images: [0.5, 0.29296875, 0.29296875].
Spacing seg: [2.0, 0.6000000238418579, 0.5999999642372131].
Image files: ['/home/GRAMES.POLYMTL.CA/p118175/data/nnunetv2/nnUNet_raw/Dataset301_LumbarRootlets/imagesTr/sub-CTS05_ses-SPpre_T2w_001_0000.nii.gz'].
Seg file: /home/GRAMES.POLYMTL.CA/p118175/data/nnunetv2/nnUNet_raw/Dataset301_LumbarRootlets/labelsTr/sub-CTS05_ses-SPpre_T2w_001.nii.gz

Warning: Origin mismatch between segmentation and corresponding images.
Origin images: (153.98794555664062, 107.47212982177734, 37.20260238647461).
Origin seg: (26.90328598022461, 137.39739990234375, -24.306747436523438).
Image files: ['/home/GRAMES.POLYMTL.CA/p118175/data/nnunetv2/nnUNet_raw/Dataset301_LumbarRootlets/imagesTr/sub-CTS05_ses-SPpre_T2w_001_0000.nii.gz'].
Seg file: /home/GRAMES.POLYMTL.CA/p118175/data/nnunetv2/nnUNet_raw/Dataset301_LumbarRootlets/labelsTr/sub-CTS05_ses-SPpre_T2w_001.nii.gz

Warning: Direction mismatch between segmentation and corresponding images.
Direction images: (-0.9999999926680287, -4.673998152578654e-08, 0.00012109475305146024, 7.87272500310792e-06, -0.9979094409945485, 0.06462775934903711, 0.00012083858162973154, 0.0646277616349623, 0.9979094337952635).
Direction seg: (-0.9999750728554918, 0.0009042621747940346, 0.007002569075477366, -0.0009042399688199635, -0.999999591154876, 6.3321783911189024e-06, 0.0070025722225890786, 1.7274973968599142e-11, 0.9999754816925497).
Image files: ['/home/GRAMES.POLYMTL.CA/p118175/data/nnunetv2/nnUNet_raw/Dataset301_LumbarRootlets/imagesTr/sub-CTS05_ses-SPpre_T2w_001_0000.nii.gz'].
Seg file: /home/GRAMES.POLYMTL.CA/p118175/data/nnunetv2/nnUNet_raw/Dataset301_LumbarRootlets/labelsTr/sub-CTS05_ses-SPpre_T2w_001.nii.gz

Checking the downloaded data:

function pixdim { sct_image -i ${1} -header | grep pixdim; }
for file in *nii.gz;do echo $file; pixdim $file;done
T10-S2_RD_LD_sub-CTS09_relabeled.nii.gz
pixdim      [1.0, 0.292969, 0.292969, 0.5, 0.0, 0.0, 0.0, 0.0]
T10-S2_RD_LD_sub-CTS15_relabeled.nii.gz
pixdim      [1.0, 0.292969, 0.292969, 0.5, 0.0, 0.0, 0.0, 0.0]
T11-S1_RD_LD_sub-CTS04_relabeled.nii.gz
pixdim      [1.0, 0.292969, 0.292969, 0.5, 0.0, 0.0, 0.0, 0.0]
T11-S1_RD_LD_sub-CTS05_relabeled.nii.gz
pixdim      [1.0, 0.6, 0.6, 2.0, 0.0, 0.0, 0.0, 0.0]
T11-S1_RD_LD_sub-CTS10_relabeled.nii.gz
pixdim      [1.0, 0.292969, 0.292969, 0.5, 0.0, 0.0, 0.0, 0.0]
T11-S1_RD_LD_sub-CTS14_relabeled.nii.gz
pixdim      [1.0, 0.292969, 0.292969, 0.5, 0.0, 0.0, 0.0, 0.0]
sub-CTS04_ses-SPpre_acq-zoomit_T2w.nii.gz
pixdim      [1.0, 0.292969, 0.292969, 0.5, 3.08, 0.0, 0.0, 0.0]
sub-CTS05_ses-SPpre_acq-zoomit_T2w.nii.gz
pixdim      [1.0, 0.292969, 0.292969, 0.5, 3.08, 0.0, 0.0, 0.0]
sub-CTS09_ses-SPpre_acq-zoomit_T2w.nii.gz
pixdim      [1.0, 0.292969, 0.292969, 0.5, 3.08, 0.0, 0.0, 0.0]
sub-CTS10_ses-SPanat_acq-zoomit_T2w.nii.gz
pixdim      [1.0, 0.292969, 0.292969, 0.5, 3.08, 0.0, 0.0, 0.0]
sub-CTS14_ses-SPpre_acq-zoomit_T2w.nii.gz
pixdim      [1.0, 0.292969, 0.292969, 0.5, 3.08, 0.0, 0.0, 0.0]
sub-CTS15_ses-SPpre_acq-zoomit_T2w.nii.gz
pixdim      [1.0, 0.292969, 0.292969, 0.5, 3.08, 0.0, 0.0, 0.0]

It seems that T11-S1_RD_LD_sub-CTS05_relabeled.nii.gz has a different pixel size and dimensions than its source image (sub-CTS05_ses-SPpre_acq-zoomit_T2w.nii.gz); the expected first 4 pixdim values (matching the source image) are:

pixdim      [1.0, 0.292969, 0.292969, 0.5, 0.0, 0.0, 0.0, 0.0]

@RaphaSchl, could you please double-check this subject on your side? Thank you.
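As an aside, the geometry agreement that nnUNet verifies (shape, spacing, affine) can also be checked quickly with nibabel before building the dataset. A sketch, using the sub-CTS05 image/label pair from above as an example:

```python
import nibabel as nib
import numpy as np

# Quick geometry check between an image and its label (paths are examples)
img = nib.load("imagesTr/sub-CTS05_ses-SPpre_T2w_001_0000.nii.gz")
seg = nib.load("labelsTr/sub-CTS05_ses-SPpre_T2w_001.nii.gz")

print("shape  :", img.shape, "vs", seg.shape)
print("spacing:", img.header.get_zooms(), "vs", seg.header.get_zooms())
print("affine match:", np.allclose(img.affine, seg.affine, atol=1e-3))
```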

RaphaSchl commented 1 month ago

Hi Jan, did you mean sub-CTS05? From the pixdims shown above and what I've checked, it seems there is an issue with the first 4 pixdims of sub-CTS05's segmentation. It appears to have been saved in a different space - I went back and saved it from Slicer with the correct reference volume this time, what do you think?

sub-CTS10's segmentation and reference volume both have the same first 4 pixdims, as you showed above. I get the same result when double-checking.

Let me know if anything else comes up!

valosekj commented 1 month ago

> Did you mean sub-CTS05?

Yes, I meant sub-CTS05. Apologies for the confusion; I fixed my comment.

> I went back and saved it from Slicer with the correct reference volume this time, what do you think?

Thank you, Raphaelle! Looks good now!

pixdim T11-S1_RD_LD_sub-CTS05_relabeled.nii.gz
pixdim      [1.0, 0.292969, 0.292969, 0.5, 0.0, 0.0, 0.0, 0.0]

Just FYI, I'm going to resolve (hide) the comments as the issue is now resolved.

valosekj commented 1 month ago

The training of both models has finished.

TL;DR: predictions of the binary model (Dataset302_LumbarRootlets) on unseen subjects are quite good. Interestingly, some rootlets predicted by the previous model (Dataset202_LumbarRootlets; https://github.com/ivadomed/model-spinal-rootlets/issues/48), which was trained on GT created with FSLeyes, were not predicted by the new model (Dataset302_LumbarRootlets), trained on GT created with the Slicer Draw Tube tool, and vice versa.

Semantic (level-specific) model (Dataset301_LumbarRootlets)

training_log

```console
2024-07-21 01:07:55.046880: Current learning rate: 5e-05
2024-07-21 01:09:39.849596: train_loss -0.8012
2024-07-21 01:09:39.849765: val_loss -0.3613
2024-07-21 01:09:39.849864: Pseudo dice [0.0, 0.2224, 0.4656, 0.2038, 0.0393, 0.1114, 0.1499, 0.1689, 0.0]
2024-07-21 01:09:39.849931: Epoch time: 104.8 s
2024-07-21 01:09:41.051663:
2024-07-21 01:09:41.051810: Epoch 998
2024-07-21 01:09:41.051903: Current learning rate: 4e-05
2024-07-21 01:11:26.669583: train_loss -0.8066
2024-07-21 01:11:26.669759: val_loss -0.3505
2024-07-21 01:11:26.669865: Pseudo dice [0.0, 0.2126, 0.4824, 0.1682, 0.0279, 0.0927, 0.1335, 0.1464, 0.0]
2024-07-21 01:11:26.669945: Epoch time: 105.62 s
2024-07-21 01:11:27.936484:
2024-07-21 01:11:27.936840: Epoch 999
2024-07-21 01:11:27.937007: Current learning rate: 2e-05
2024-07-21 01:13:13.107770: train_loss -0.7985
2024-07-21 01:13:13.107922: val_loss -0.3357
2024-07-21 01:13:13.108025: Pseudo dice [0.0, 0.1974, 0.4752, 0.1902, 0.0334, 0.1057, 0.1582, 0.1612, 0.0]
2024-07-21 01:13:13.108104: Epoch time: 105.17 s
2024-07-21 01:13:15.006795: Training done.
2024-07-21 01:13:15.025028: Using splits from existing split file: /home/GRAMES.POLYMTL.CA/p118175/data/nnunetv2/nnUNet_preprocessed/Dataset301_LumbarRootlets/splits_final.json
2024-07-21 01:13:15.025259: The split file contains 5 splits.
2024-07-21 01:13:15.025300: Desired fold for training: 0
2024-07-21 01:13:15.025336: This split has 4 training and 2 validation cases.
2024-07-21 01:13:15.025455: predicting sub-CTS10_ses-SPanat_T2w_001
2024-07-21 01:13:15.026371: sub-CTS10_ses-SPanat_T2w_001, shape torch.Size([1, 192, 372, 1023]), rank 0
2024-07-21 01:15:05.478443: predicting sub-CTS15_ses-SPpre_T2w_001
2024-07-21 01:15:05.497945: sub-CTS15_ses-SPpre_T2w_001, shape torch.Size([1, 192, 372, 1023]), rank 0
2024-07-21 01:16:55.437292: Validation complete
2024-07-21 01:16:55.437382: Mean Validation Dice: 0.12612002796626479
```

The predictions of the semantic model on unseen subjects are not good. The rootlets are predicted only in the caudal part of the FOV.

example image

Binary (all rootlets set to 1) model (Dataset302_LumbarRootlets)

training_log

```console
2024-07-21 01:44:41.538921: Epoch 998
2024-07-21 01:44:41.539023: Current learning rate: 4e-05
2024-07-21 01:45:34.577191: train_loss -0.8987
2024-07-21 01:45:34.577362: val_loss -0.3704
2024-07-21 01:45:34.577416: Pseudo dice [0.3858]
2024-07-21 01:45:34.577475: Epoch time: 53.04 s
2024-07-21 01:45:35.760479:
2024-07-21 01:45:35.760615: Epoch 999
2024-07-21 01:45:35.760712: Current learning rate: 2e-05
2024-07-21 01:46:28.753630: train_loss -0.8903
2024-07-21 01:46:28.753798: val_loss -0.3714
2024-07-21 01:46:28.753851: Pseudo dice [0.3877]
2024-07-21 01:46:28.753914: Epoch time: 52.99 s
2024-07-21 01:46:30.514437: Training done.
2024-07-21 01:46:30.529201: Using splits from existing split file: /home/GRAMES.POLYMTL.CA/p118175/data/nnunetv2/nnUNet_preprocessed/Dataset302_LumbarRootlets/splits_final.json
2024-07-21 01:46:30.529334: The split file contains 5 splits.
2024-07-21 01:46:30.529372: Desired fold for training: 0
2024-07-21 01:46:30.529406: This split has 4 training and 2 validation cases.
2024-07-21 01:46:30.529503: predicting sub-CTS10_ses-SPanat_T2w_001
2024-07-21 01:46:30.530237: sub-CTS10_ses-SPanat_T2w_001, shape torch.Size([1, 192, 372, 1023]), rank 0
2024-07-21 01:47:25.381866: predicting sub-CTS15_ses-SPpre_T2w_001
2024-07-21 01:47:25.399373: sub-CTS15_ses-SPpre_T2w_001, shape torch.Size([1, 192, 372, 1023]), rank 0
2024-07-21 01:48:20.876391: Validation complete
2024-07-21 01:48:20.876483: Mean Validation Dice: 0.40915237629260454
```

The predictions of the binary model on unseen subjects are reasonable; see the examples on 2 test subjects below, comparing predictions from the previous model (Dataset202_LumbarRootlets) and the new model (Dataset302_LumbarRootlets):

sub-CTS03_ses-SPpre_acq-zoomit_T2w.nii.gz ![Kapture 2024-07-23 at 07 29 57](https://github.com/user-attachments/assets/3cad5cf1-df3b-4f76-860b-e7d2de003501)
sub-CTS17_ses-SPpre_acq-zoomit_T2w.nii.gz ![Kapture 2024-07-23 at 07 31 25](https://github.com/user-attachments/assets/d32d5c33-6221-4b02-853f-935f690a4f9b)

Interestingly, some rootlets predicted by the older model (202) were not predicted by the new model (302) and vice versa.
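To quantify this disagreement beyond visual inspection, one could compute the Dice overlap between the two binarized predictions for the same test subject. A sketch (the prediction file names are hypothetical):

```python
import nibabel as nib
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2 * np.logical_and(a, b).sum() / denom if denom else np.nan

# Hypothetical prediction file names for the same test subject
pred_202 = np.asanyarray(nib.load("sub-CTS03_pred_Dataset202.nii.gz").dataobj) > 0
pred_302 = np.asanyarray(nib.load("sub-CTS03_pred_Dataset302.nii.gz").dataobj) > 0
print(f"Dice(202 vs 302) = {dice(pred_202, pred_302):.3f}")
```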

valosekj commented 1 month ago

Next steps: tweak model training parameters to improve the model, namely:

valosekj commented 1 month ago

Looking at nnUNetPlans.json for the lumbar models:

(the axis order here is: SI, AP, RL)

        "3d_fullres": {
            "batch_size": 2,
            "patch_size": [
                64,
                112,
                320
            ],
            "median_image_size_in_voxels": [
                192.0,
                372.0,
                1023.0
            ],
            "spacing": [
                0.5,
                0.29296875,
                0.29296875
            ],

The patch_size is relatively small compared to the median_image_size_in_voxels.

For comparison, in the nnUNetPlans.json for the T2w cervical model, the patch_size is much closer to the median_image_size_in_voxels:

        "3d_fullres": {
            "batch_size": 2,
            "patch_size": [
                224,
                224,
                48
            ],
            "median_image_size_in_voxels": [
                320.0,
                320.0,
                64.0
            ],
            "spacing": [
                0.800000011920929,
                0.800000011920929,
                0.7999992370605469
            ],
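
To put numbers on this: using the values from the two plans above, a single patch covers only about 3% of the median lumbar image volume, versus about 37% for the cervical model.

```python
import numpy as np

# Values taken from the two nnUNetPlans.json excerpts above
lumbar_patch, lumbar_median = [64, 112, 320], [192, 372, 1023]
cervical_patch, cervical_median = [224, 224, 48], [320, 320, 64]

for name, patch, median in [("lumbar", lumbar_patch, lumbar_median),
                            ("cervical", cervical_patch, cervical_median)]:
    frac = np.prod(patch) / np.prod(median)
    print(f"{name}: one patch covers {100 * frac:.1f}% of the median image volume")
# lumbar: ~3.1%, cervical: ~36.7%
```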

This brings me to the idea that I could try to crop the images around the SC before running the training. The contrast-agnostic SC segmentation model seems to work well on these images, so the cropping could be done easily.

image
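For illustration only (the actual pipeline uses `crop_lumbar_data.sh`; see the next comment), cropping around the SC essentially amounts to a bounding-box crop of the image to the SC segmentation plus a margin. A nibabel/numpy sketch with a hypothetical helper:

```python
import nibabel as nib
import numpy as np

def crop_around_mask(img_path, mask_path, out_path, margin=30):
    """Crop an image to the bounding box of a binary SC mask, plus a voxel margin."""
    img, mask = nib.load(img_path), nib.load(mask_path)
    data = np.asanyarray(img.dataobj)
    m = np.asanyarray(mask.dataobj) > 0
    idx = np.argwhere(m)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + margin + 1, data.shape)
    cropped = data[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    # Shift the affine origin so the cropped image stays aligned in world space
    new_affine = img.affine.copy()
    new_affine[:3, 3] = nib.affines.apply_affine(img.affine, lo)
    nib.save(nib.Nifti1Image(cropped, new_affine, img.header), out_path)
```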
valosekj commented 1 month ago

Training on the cropped images started: binary model (Dataset312_LumbarRootlets).

Crop data around the spinal cord:

```console
cd ~/code/model-spinal-rootlets/
git fetch
git checkout jv/lumbar_rootlets
cd $nnUNet_raw
cp -r Dataset302_LumbarRootlets Dataset312_LumbarRootlets
cd Dataset312_LumbarRootlets/imagesTr
bash ~/code/model-spinal-rootlets/training/crop_lumbar_data.sh
```

> [!NOTE]
> Note that I had to change `"overwrite_image_reader_writer"` to `NibabelIO` in `dataset.json`. For some reason, `SimpleITKIO` read the original image dimensions!

nnUNetPlans.json

```json
"3d_fullres": {
    "batch_size": 2,
    "patch_size": [
        128,
        128,
        128
    ],
    "median_image_size_in_voxels": [
        192.0,
        161.0,
        166.5
    ],
    "spacing": [
        0.5,
        0.29296875,
        0.29296875
    ],
```
Start training:

```console
bash ~/code/model-spinal-rootlets/training/run_training.sh 1 312 Dataset312_LumbarRootlets
```
valosekj commented 1 month ago

Okay, training on the cropped images is done (binary model Dataset312_LumbarRootlets).

training_log

```console
2024-07-25 07:57:28.071485: Current learning rate: 5e-05
2024-07-25 07:58:12.384417: train_loss -0.9216
2024-07-25 07:58:12.384571: val_loss -0.3303
2024-07-25 07:58:12.384622: Pseudo dice [0.3783]
2024-07-25 07:58:12.384680: Epoch time: 44.31 s
2024-07-25 07:58:13.608948:
2024-07-25 07:58:13.609235: Epoch 998
2024-07-25 07:58:13.609570: Current learning rate: 4e-05
2024-07-25 07:58:57.742579: train_loss -0.9236
2024-07-25 07:58:57.742764: val_loss -0.3314
2024-07-25 07:58:57.742818: Pseudo dice [0.3775]
2024-07-25 07:58:57.742882: Epoch time: 44.13 s
2024-07-25 07:58:58.964358:
2024-07-25 07:58:58.964548: Epoch 999
2024-07-25 07:58:58.964673: Current learning rate: 2e-05
2024-07-25 07:59:43.275873: train_loss -0.9214
2024-07-25 07:59:43.276055: val_loss -0.3291
2024-07-25 07:59:43.276127: Pseudo dice [0.3773]
2024-07-25 07:59:43.276187: Epoch time: 44.31 s
2024-07-25 07:59:45.474522: Training done.
2024-07-25 07:59:45.489014: Using splits from existing split file: /home/GRAMES.POLYMTL.CA/p118175/data/nnunetv2/nnUNet_preprocessed/Dataset312_LumbarRootlets/splits_final.json
2024-07-25 07:59:45.489179: The split file contains 5 splits.
2024-07-25 07:59:45.489222: Desired fold for training: 0
2024-07-25 07:59:45.489260: This split has 4 training and 2 validation cases.
2024-07-25 07:59:45.489359: predicting sub-CTS10_ses-SPanat_T2w_001
2024-07-25 07:59:45.490096: sub-CTS10_ses-SPanat_T2w_001, shape torch.Size([1, 192, 163, 163]), rank 0
2024-07-25 08:00:06.051738: predicting sub-CTS15_ses-SPpre_T2w_001
2024-07-25 08:00:06.053505: sub-CTS15_ses-SPpre_T2w_001, shape torch.Size([1, 192, 160, 168]), rank 0
2024-07-25 08:00:12.400051: Validation complete
2024-07-25 08:00:12.400232: Mean Validation Dice: 0.36048916255847474
```

The Mean Validation Dice is 0.360, which is lower than the 0.409 of the non-cropped model (binary model Dataset302_LumbarRootlets). Also, visually, the prediction on an unseen test subject is comparable to (or maybe even worse than) that of the non-cropped model (binary model Dataset302_LumbarRootlets):

sub-CTS03_ses-SPpre_acq-zoomit_T2w.nii.gz light blue - model trained on non-cropped images (`Dataset302_LumbarRootlets`) yellow - model trained on cropped images (`Dataset312_LumbarRootlets`) ![Kapture 2024-07-25 at 15 38 34](https://github.com/user-attachments/assets/83081937-3814-4c1f-8e7f-fae6d3b10657)

Preliminary conclusion: training on images cropped around the SC does not improve the segmentation performance. On top of that, it introduces a dependency on the SC segmentation used for cropping (which is a disadvantage).

RaphaSchl commented 1 month ago

I've tested the Dataset302 model (non-cropped images) on other subjects. Comparing 3D renderings (from FSLeyes) of the model's rootlet segmentations for sub-CTS13, sub-CTS17, and sub-CTS20, we see the continuous rootlets we are aiming for in sub-CTS20 and in most of sub-CTS13. For sub-CTS17, there is a lot of discontinuity.

I'm not sure how this could be improved, but I thought this variability in performance was worth noting. It does not seem correlated with artefacts in the CSF, as I initially thought: sub-CTS20 is quite artefacted yet has a good model segmentation.

valosekj commented 1 month ago

Thank you for testing the model on additional images, @RaphaSchl! The 3D rendering is useful here!

I had a discussion with @naga-karthik, and he suggested using a 96x160x192 patch size for the cropped model instead of 128x128x128. I'll try it!

commands

```console
cd $nnUNet_preprocessed
cp -r Dataset312_LumbarRootlets Dataset322_LumbarRootlets
```

Modify manually `patch_size` in `nnUNetPlans.json` to:

```json
"3d_fullres": {
    "data_identifier": "nnUNetPlans_3d_fullres",
    "preprocessor_name": "DefaultPreprocessor",
    "batch_size": 2,
    "patch_size": [
        192,
        160,
        96
    ],
```

(192: SI, 160: AP, 96: RL)

Change manually the dataset name to `Dataset322_LumbarRootlets` in `dataset.json` and `nnUNetPlans.json`.

Start training:

```console
bash ~/code/model-spinal-rootlets/training/run_training.sh 1 322 Dataset322_LumbarRootlets
```
valosekj commented 1 month ago

Okay, the training of the model with 96x160x192 patch size (Dataset322_LumbarRootlets) is done. I trained this model with the default nnUNet trainer and also with nnUNetTrainerDA5 (extensive data augmentation) and nnUNetTrainer_1000epochs_NoMirroring (no axis mirroring during the data augmentation) to see whether there is any impact on the performance.

Comparison with the non-cropped model Dataset302_LumbarRootlets on sub-CTS17_ses-SPpre_acq-zoomit_T2w.nii.gz:

screenshot sub-CTS17_ses-SPpre_acq-zoomit_T2w
valosekj commented 1 month ago

Notes about running the models on other testing images (done by @RaphaSchl -- thank you!):

For CTS03:

For CTS09 (remarkable!):

For CTS13:

For CTS17: