ivadomed / ms-lesion-agnostic

Deep learning contrast-agnostic tool for MS lesion segmentation in the spinal cord
MIT License

Training segmentation models without the head and the brain stem #28

Open plbenveniste opened 1 month ago

plbenveniste commented 1 month ago

In this issue, I explore whether removing the brain and the brain stem improves the performance of the model for segmenting spinal cord lesions in MS.

The brain and brain stem were removed using the contrast-agnostic model with sct_deepseg (version: git-master-a866fc666681eca5e7e075b2f6174be0d670f6dd).
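
Since the exact cropping step lives in the preprocessing script, here is only a minimal sketch of the idea, assuming the image and its cord segmentation are already available and that the third axis points in the superior-inferior direction (file names and axis convention are assumptions; the actual script may crop rather than zero out):

import nibabel as nib
import numpy as np

# Hypothetical inputs: the anatomical image and its spinal cord segmentation
# produced by the contrast-agnostic model via sct_deepseg
image = nib.load("sub-XXX_UNIT1.nii.gz")
cord_seg = nib.load("sub-XXX_UNIT1_seg.nii.gz")

data = image.get_fdata()
cord = cord_seg.get_fdata() > 0

# Most superior slice that still contains spinal cord
top_slice = np.max(np.where(cord.any(axis=(0, 1)))[0])

# Zero out everything above the cord, which removes the brain and brain stem
cropped = data.copy()
cropped[:, :, top_slice + 1:] = 0

nib.save(nib.Nifti1Image(cropped, image.affine, image.header), "sub-XXX_UNIT1_cropped.nii.gz")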

The code is currently iterating over every image to create a new MSD dataset. The command used was:

python ms-lesion-agnostic/monai/1_create_msd_data_head_cropped.py -pd ~/net/ms-lesion-agnostic/data/ -po ~/net/ms-lesion-agnostic/msd_data/ --lesion-only --canproco-exclude canproco/exclude.yml
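
For context, the output of that script is an MSD-style JSON listing image/label pairs per split; the sketch below only shows the general shape of such a file (the keys, split names, and paths are assumptions, the real ones are defined in 1_create_msd_data_head_cropped.py):

import json

# Hypothetical MSD-style dataset description; the actual keys and splits may differ
msd_dataset = {
    "name": "ms-lesion-agnostic",
    "description": "MS lesion segmentation with head and brain stem cropped",
    "train": [
        {"image": "path/to/sub-001_T2w_cropped.nii.gz",
         "label": "path/to/sub-001_T2w_lesion-manual.nii.gz"},
    ],
    "validation": [],
    "test": [],
}

with open("dataset_lesionOnly.json", "w") as f:
    json.dump(msd_dataset, f, indent=4)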

Related to #21

plbenveniste commented 1 month ago

I checked whether all files had the same orientation and found a problem with only one file: sub-P167_UNIT1_desc-rater3_label-lesion_seg.nii.gz.

The problem comes from the original file, which is missing the sform and qform.
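
For reference, this is the kind of check that flags such a file, assuming one simply compares the axis codes and the sform/qform codes with nibabel (a sketch, not the exact script used):

import nibabel as nib

img = nib.load("sub-P167_UNIT1_desc-rater3_label-lesion_seg.nii.gz")

# Orientation of the volume, e.g. ('R', 'A', 'S')
print(nib.aff2axcodes(img.affine))

# sform/qform codes; a value of 0 means the transform is missing/unknown
print(int(img.header["sform_code"]), int(img.header["qform_code"]))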

TODO:

plbenveniste commented 1 month ago

The original file was corrected using the following piece of code and pushed to branch plb/fix_p167_lesion_seg:

import nibabel as nib

image_path = "/Users/plbenveniste/tmp_romane/ms_lesion_agnostic/data/basel-mp2rage/sub-P167/anat/sub-P167_UNIT1.nii.gz"
label_path = "/Users/plbenveniste/tmp_romane/ms_lesion_agnostic/data/basel-mp2rage/derivatives/labels/sub-P167/anat/sub-P167_UNIT1_desc-rater3_label-lesion_seg.nii.gz"

# Load the image (which has a valid sform/qform) and the problematic label
image = nib.load(image_path)
label = nib.load(label_path)

# Rebuild the label with the affine and header of the image, then overwrite it
new_label = nib.Nifti1Image(label.get_fdata(), image.affine, image.header)
nib.save(new_label, label_path)
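
Reusing image and label_path from the snippet above, a quick sanity check (my suggestion for verification, not part of the pushed fix) is to reload the corrected label and confirm that its affine and shape now match the image:

import numpy as np

fixed_label = nib.load(label_path)
assert np.allclose(fixed_label.affine, image.affine)
assert fixed_label.shape == image.shape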

The PR is open and ready for review.

plbenveniste commented 1 month ago

The MONAI model is currently being trained (on koios) with the same parameters as the current SOTA model:

CUDA_VISIBLE_DEVICES=1 python ms-lesion-agnostic/monai/train_monai_unet_lightning.py --config ms-lesion-agnostic/monai/config.yml

The MSD dataset used is: /home/plbenveniste/net/ms-lesion-agnostic/msd_data/dataset_2024-08-13_seed42_lesionOnly.json
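
For readers unfamiliar with the training script, here is a minimal sketch of a 3D MONAI UNet with a Dice loss, in the spirit of what train_monai_unet_lightning.py sets up; the actual architecture, channels, patch size, and loss come from config.yml and may differ:

import torch
from monai.networks.nets import UNet
from monai.losses import DiceLoss

# Hypothetical architecture; the real channels/strides are defined in config.yml
model = UNet(
    spatial_dims=3,
    in_channels=1,
    out_channels=1,
    channels=(32, 64, 128, 256),
    strides=(2, 2, 2),
)
loss_fn = DiceLoss(sigmoid=True)

# Dummy patch and sparse lesion mask, just to show the forward/backward pass
x = torch.randn(1, 1, 64, 64, 64)
y = (torch.rand(1, 1, 64, 64, 64) > 0.95).float()
loss = loss_fn(model(x), y)
loss.backward()
print(loss.item())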

plbenveniste commented 2 weeks ago

The model training and validation curves are displayed below: [figure: training and validation curves]

It seems that, in terms of Dice score, the model did not outperform our previous SOTA model. However, the validation loss was reduced thanks to the removal of the head and the brain stem. Looking at the model's performance on the test set should give us more insight into how it compares to the previous SOTA model. To be done.

plbenveniste commented 1 day ago

To compute the performance of this model:

CUDA_VISIBLE_DEVICES=1 python ms-lesion-agnostic/monai/test_model.py --config ms-lesion-agnostic/monai/config_test.yml --data-split test
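
For reference, the per-image Dice score reported on the test set amounts to something like the sketch below (paths are hypothetical; test_model.py presumably relies on MONAI's own metrics):

import nibabel as nib
import numpy as np

def dice_score(pred_path, gt_path, threshold=0.5):
    """Binary Dice between a predicted and a ground-truth lesion mask."""
    pred = nib.load(pred_path).get_fdata() > threshold
    gt = nib.load(gt_path).get_fdata() > 0.5
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom > 0 else np.nan

print(dice_score("sub-XXX_pred.nii.gz", "sub-XXX_lesion-manual.nii.gz"))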

To generate the figures afterwards:

python ms-lesion-agnostic/monai/plot_performance.py --pred-dir-path ~/net/ms-lesion-agnostic/results_cropped_head/2024-08-13_10\:33\:43.552507/test_set/ --data-json-path ~/net/ms-lesion-agnostic/msd_data/dataset_2024-08-13_seed42_lesionOnly.json --split test
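
The per-contrast figure boils down to a grouped plot of the Dice scores; a sketch of how such a figure can be produced is below (the dataframe columns and the dummy values are purely illustrative, the real data come from the predictions above):

import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Illustrative dataframe: one row per test image, with its contrast and Dice score
df = pd.DataFrame({
    "contrast": ["T2w", "T2w", "T2star", "UNIT1"],
    "dice": [0.55, 0.62, 0.48, 0.60],
})

sns.violinplot(data=df, x="contrast", y="dice")
plt.ylabel("Dice score")
plt.title("Dice score per contrast (test set)")
plt.savefig("dice_scores_contrast.png")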

Here are the results: [figure: dice_scores_contrast (Dice scores per contrast on the test set)]