I am building `Dataset601_msLesionAgnostic`, which has the same images as dataset 301, but with labels (both LabelsTr and LabelsTs) predicted using the model from release r20241101.
On GPU, the model takes around 40 seconds per image.
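For reference, here is a minimal sketch of how such a dataset could be assembled once the predictions have been generated. The folder names (`Dataset301_msLesionAgnostic`, `predictions_r20241101/`) and the `nnUNet_raw` location are assumptions:

```python
import shutil
from pathlib import Path

# Assumed paths -- adjust to the actual nnUNet_raw location and prediction folders
nnunet_raw = Path("nnUNet_raw")
src = nnunet_raw / "Dataset301_msLesionAgnostic"   # hypothetical name for dataset 301
dst = nnunet_raw / "Dataset601_msLesionAgnostic"
pred_dirs = {
    "labelsTr": Path("predictions_r20241101/labelsTr"),  # predictions for the training images
    "labelsTs": Path("predictions_r20241101/labelsTs"),  # predictions for the test images
}

# Reuse the images of dataset 301 unchanged
for split in ["imagesTr", "imagesTs"]:
    shutil.copytree(src / split, dst / split, dirs_exist_ok=True)

# The model predictions become the new labels
for split, pred_dir in pred_dirs.items():
    out_dir = dst / split
    out_dir.mkdir(parents=True, exist_ok=True)
    for f in sorted(pred_dir.glob("*.nii.gz")):
        shutil.copy(f, out_dir / f.name)

# dataset.json (labels, channel names) can be reused as-is from dataset 301
shutil.copy(src / "dataset.json", dst / "dataset.json")
```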
> [!NOTE]
> I noticed that the model sometimes segments lesions outside the spinal cord (in the brain). If we decide to use the predictions of the model as GT for retraining, we should pre-process the predictions to remove lesions outside the spinal cord.
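One way this clean-up could be done (a sketch only; file names and dilation radius are assumptions): keep only the lesion components that overlap a slightly dilated spinal cord mask obtained from a separate cord segmentation.

```python
import nibabel as nib
import numpy as np
from scipy import ndimage

def remove_lesions_outside_cord(lesion_path, cord_path, out_path, dilation_iter=3):
    """Keep only lesion components that touch the (dilated) spinal cord mask.

    Assumes the cord mask is in the same space/grid as the lesion prediction.
    """
    lesion_img = nib.load(lesion_path)
    lesion = lesion_img.get_fdata() > 0.5
    cord = nib.load(cord_path).get_fdata() > 0.5

    # Dilate the cord mask a little so lesions touching the cord boundary are kept
    cord_dil = ndimage.binary_dilation(cord, iterations=dilation_iter)

    # Label connected lesion components and keep the ones overlapping the cord
    components, n = ndimage.label(lesion)
    keep = np.zeros_like(lesion)
    for i in range(1, n + 1):
        component = components == i
        if np.any(component & cord_dil):
            keep |= component

    nib.save(nib.Nifti1Image(keep.astype(np.uint8), lesion_img.affine, lesion_img.header), out_path)

# Hypothetical usage (cord_seg.nii.gz would come from a separate cord segmentation):
# remove_lesions_outside_cord("pred_lesion.nii.gz", "cord_seg.nii.gz", "pred_lesion_clean.nii.gz")
```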
I am training the model using the ResEnc Large architecture, with the predicted labels used as the ground truth.
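For context, this corresponds to the nnU-Net v2 ResEnc(L) preset; below is a sketch of the commands (wrapped in Python, with dataset ID 601, the `3d_fullres` configuration, and fold 0 as assumptions):

```python
import subprocess

dataset_id = "601"  # Dataset601_msLesionAgnostic

# Experiment planning + preprocessing with the residual-encoder (L) planner
subprocess.run(
    ["nnUNetv2_plan_and_preprocess", "-d", dataset_id, "-pl", "nnUNetPlannerResEncL"],
    check=True,
)

# Train one fold (fold 0 shown) of the 3d_fullres configuration with the ResEnc(L) plans
subprocess.run(
    ["nnUNetv2_train", dataset_id, "3d_fullres", "0", "-p", "nnUNetResEncUNetLPlans"],
    check=True,
)
```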
In this issue, I investigate how to improve the lesion ground truth segmentation used for training our model.
We recently saw that, for multiple images in the test set, the predictions were actually better than the ground truth. This is shown in the GIFs below.
Images used are:
- `sub-P088_UNIT1.nii.gz`
- `sub-m255816_ses-20190416_acq-ax_chunk-2_T2w.nii.gz`
- `sub-edm088_ses-M0_PSIR.nii.gz`
- `sub-cal072_ses-M0_STIR.nii.gz`

One idea we want to explore is to replace the ground-truth segmentations with the predictions of the model. The model would then be retrained to see how it performs on unseen data.