sct-pipeline / contrast-agnostic-softseg-spinalcord

Contrast-agnostic spinal cord segmentation project with softseg
MIT License

Incorrect segmentation on a thoracic healthy patient #122

Open po09i opened 3 days ago

po09i commented 3 days ago

Information

Contrast: T1w
Region: Thoracic
Pathology: Healthy

SCT model: https://github.com/sct-pipeline/contrast-agnostic-softseg-spinalcord/releases/download/v2.4/model_soft_bin_20240425-170840.zip

Issue

In my pipeline, I register the T1w image (0.9x0.9x5mm) to another acquisition (0.5x0.5x5mm) and then perform the segmentation.
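
For context, a minimal sketch of that kind of pipeline (the filenames are placeholders and the registration parameters are assumptions, not the ones actually used here):

```
# Hypothetical filenames; actual paths and registration parameters differ.
# 1. Register the T1w image (0.9x0.9x5mm) to the target acquisition (0.5x0.5x5mm)
sct_register_multimodal -i t1w.nii.gz -d target.nii.gz -o t1w_reg.nii.gz

# 2. Segment the registered image with the contrast-agnostic model
sct_deepseg -i t1w_reg.nii.gz -task seg_sc_contrast_agnostic -o t1w_reg_seg.nii.gz
```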

Segmentation completely fails
![Screenshot 2024-10-23 at 5 08 58 PM](https://github.com/user-attachments/assets/bd502277-8cbc-44b7-be6a-bca0c57fff51)

I tried to segment the original image as well. This went better, but the segmentation stopped in the lower slices.

Incomplete segmentation in the lower slices of the original acquisition
![Screenshot 2024-10-23 at 5 13 54 PM](https://github.com/user-attachments/assets/d4d29df2-21ed-4da4-8219-0ff42352eb8a)
jcohenadad commented 3 days ago

This is strange; the cord is well contrasted. @NathanMolinier can you give it a try with TotalSpineSeg?

naga-karthik commented 3 days ago

hey @po09i, how urgent is this? I believe the newer model (not released yet) will work better on the 2nd image for sure. I was planning to make a pre-release of the newer model by the end of this week. I would say the v2.4 model is old now.

po09i commented 3 days ago

It is not urgent at all; the segmentation on the second image already covers more than the FOV of the image I need it registered to. So this is more feedback than something I need help with :)

NathanMolinier commented 3 days ago

This is the output of TotalSpineSeg on the registered image:
[Kapture 2024-10-24 at 09 38 22]

naga-karthik commented 3 days ago

hey @po09i, I just made a release of the latest model I was referring to, please do try it out if you have time! Note that you might need to update the model URL in SCT's codebase in order to access the model via SCT.

jcohenadad commented 3 days ago

> Note that you might need to update the model URL in SCT's codebase in order to access the model via SCT.

No, this is bad practice: we should not encourage people to manually edit hard-coded paths in SCT, which results in code that is out of sync with the git history. @naga-karthik please update the URL with your model and submit a PR on SCT, thanks

valosekj commented 3 days ago

Maybe Naga meant downloading the pre-release, which is now possible using -custom-url:

```
sct_deepseg -install seg_sc_contrast_agnostic -custom-url https://github.com/sct-pipeline/contrast-agnostic-softseg-spinalcord/releases/download/v2.5/model_contrast-agnostic_20240930-1002.zip
sct_deepseg -i <IMAGE> -task seg_sc_contrast_agnostic
```

(i.e., no need to touch the SCT code)

po09i commented 3 days ago

I tried @naga-karthik's / @valosekj's suggestion and got this error:

Terminal dump
```
sct_deepseg -i ./ax4_thoracic_off/nav_output/derivatives/anat_reg.nii.gz -task seg_sc_contrast_agnostic -o ./ax4_thoracic_off/nav_output/derivatives/anat_reg_seg.nii.gz -qc ./ax4_thoracic_off/nav_output/derivatives/qc

--
Spinal Cord Toolbox (git-master-8589f9d108a638592cdff56d9971f831373d31be)

sct_deepseg -i ./ax4_thoracic_off/nav_output/derivatives/anat_reg.nii.gz -task seg_sc_contrast_agnostic -o ./ax4_thoracic_off/nav_output/derivatives/anat_reg_seg.nii.gz -qc ./ax4_thoracic_off/nav_output/derivatives/qc
--

Using custom model from URL 'https://github.com/sct-pipeline/contrast-agnostic-softseg-spinalcord/releases/download/v2.5/model_contrast-agnostic_20240930-1002.zip'.

Traceback (most recent call last):
  File "spinalcordtoolbox/spinalcordtoolbox/scripts/sct_deepseg.py", line 400, in <module>
    main(sys.argv[1:])
  File "spinalcordtoolbox/spinalcordtoolbox/scripts/sct_deepseg.py", line 309, in main
    im_lst, target_lst = inference.segment_non_ivadomed(path_model, model_type, input_filenames, thr,
  File "spinalcordtoolbox/spinalcordtoolbox/deepseg/inference.py", line 121, in segment_non_ivadomed
    net = create_net(path_model, device)
  File "spinalcordtoolbox/spinalcordtoolbox/deepseg/monai.py", line 128, in create_nnunet_from_plans
    model.load_state_dict(checkpoint)
  File "spinalcordtoolbox/python/envs/venv_sct/lib/python3.9/site-packages/torch/nn/modules/module.py", line 2153, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for PlainConvUNet:
size mismatch for encoder.stages.4.0.convs.0.conv.weight: copying a param with shape torch.Size([384, 256, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 256, 3, 3, 3]). size mismatch for encoder.stages.4.0.convs.0.conv.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for encoder.stages.4.0.convs.0.norm.weight: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for encoder.stages.4.0.convs.0.norm.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for encoder.stages.4.0.convs.0.all_modules.0.weight: copying a param with shape torch.Size([384, 256, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 256, 3, 3, 3]). size mismatch for encoder.stages.4.0.convs.0.all_modules.0.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for encoder.stages.4.0.convs.0.all_modules.1.weight: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for encoder.stages.4.0.convs.0.all_modules.1.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for encoder.stages.4.0.convs.1.conv.weight: copying a param with shape torch.Size([384, 384, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 320, 3, 3, 3]). size mismatch for encoder.stages.4.0.convs.1.conv.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for encoder.stages.4.0.convs.1.norm.weight: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). 
size mismatch for encoder.stages.4.0.convs.1.norm.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for encoder.stages.4.0.convs.1.all_modules.0.weight: copying a param with shape torch.Size([384, 384, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 320, 3, 3, 3]). size mismatch for encoder.stages.4.0.convs.1.all_modules.0.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for encoder.stages.4.0.convs.1.all_modules.1.weight: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for encoder.stages.4.0.convs.1.all_modules.1.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for encoder.stages.5.0.convs.0.conv.weight: copying a param with shape torch.Size([384, 384, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 320, 3, 3, 3]). size mismatch for encoder.stages.5.0.convs.0.conv.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for encoder.stages.5.0.convs.0.norm.weight: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for encoder.stages.5.0.convs.0.norm.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for encoder.stages.5.0.convs.0.all_modules.0.weight: copying a param with shape torch.Size([384, 384, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 320, 3, 3, 3]). size mismatch for encoder.stages.5.0.convs.0.all_modules.0.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for encoder.stages.5.0.convs.0.all_modules.1.weight: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for encoder.stages.5.0.convs.0.all_modules.1.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for encoder.stages.5.0.convs.1.conv.weight: copying a param with shape torch.Size([384, 384, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 320, 3, 3, 3]). size mismatch for encoder.stages.5.0.convs.1.conv.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for encoder.stages.5.0.convs.1.norm.weight: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for encoder.stages.5.0.convs.1.norm.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for encoder.stages.5.0.convs.1.all_modules.0.weight: copying a param with shape torch.Size([384, 384, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 320, 3, 3, 3]). size mismatch for encoder.stages.5.0.convs.1.all_modules.0.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). 
size mismatch for encoder.stages.5.0.convs.1.all_modules.1.weight: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for encoder.stages.5.0.convs.1.all_modules.1.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for decoder.encoder.stages.4.0.convs.0.conv.weight: copying a param with shape torch.Size([384, 256, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 256, 3, 3, 3]). size mismatch for decoder.encoder.stages.4.0.convs.0.conv.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for decoder.encoder.stages.4.0.convs.0.norm.weight: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for decoder.encoder.stages.4.0.convs.0.norm.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for decoder.encoder.stages.4.0.convs.0.all_modules.0.weight: copying a param with shape torch.Size([384, 256, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 256, 3, 3, 3]). size mismatch for decoder.encoder.stages.4.0.convs.0.all_modules.0.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for decoder.encoder.stages.4.0.convs.0.all_modules.1.weight: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for decoder.encoder.stages.4.0.convs.0.all_modules.1.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for decoder.encoder.stages.4.0.convs.1.conv.weight: copying a param with shape torch.Size([384, 384, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 320, 3, 3, 3]). size mismatch for decoder.encoder.stages.4.0.convs.1.conv.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for decoder.encoder.stages.4.0.convs.1.norm.weight: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for decoder.encoder.stages.4.0.convs.1.norm.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for decoder.encoder.stages.4.0.convs.1.all_modules.0.weight: copying a param with shape torch.Size([384, 384, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 320, 3, 3, 3]). size mismatch for decoder.encoder.stages.4.0.convs.1.all_modules.0.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for decoder.encoder.stages.4.0.convs.1.all_modules.1.weight: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for decoder.encoder.stages.4.0.convs.1.all_modules.1.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). 
size mismatch for decoder.encoder.stages.5.0.convs.0.conv.weight: copying a param with shape torch.Size([384, 384, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 320, 3, 3, 3]). size mismatch for decoder.encoder.stages.5.0.convs.0.conv.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for decoder.encoder.stages.5.0.convs.0.norm.weight: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for decoder.encoder.stages.5.0.convs.0.norm.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for decoder.encoder.stages.5.0.convs.0.all_modules.0.weight: copying a param with shape torch.Size([384, 384, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 320, 3, 3, 3]). size mismatch for decoder.encoder.stages.5.0.convs.0.all_modules.0.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for decoder.encoder.stages.5.0.convs.0.all_modules.1.weight: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for decoder.encoder.stages.5.0.convs.0.all_modules.1.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for decoder.encoder.stages.5.0.convs.1.conv.weight: copying a param with shape torch.Size([384, 384, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 320, 3, 3, 3]). size mismatch for decoder.encoder.stages.5.0.convs.1.conv.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for decoder.encoder.stages.5.0.convs.1.norm.weight: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for decoder.encoder.stages.5.0.convs.1.norm.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for decoder.encoder.stages.5.0.convs.1.all_modules.0.weight: copying a param with shape torch.Size([384, 384, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 320, 3, 3, 3]). size mismatch for decoder.encoder.stages.5.0.convs.1.all_modules.0.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for decoder.encoder.stages.5.0.convs.1.all_modules.1.weight: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for decoder.encoder.stages.5.0.convs.1.all_modules.1.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for decoder.stages.0.convs.0.conv.weight: copying a param with shape torch.Size([384, 768, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 640, 3, 3, 3]). size mismatch for decoder.stages.0.convs.0.conv.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for decoder.stages.0.convs.0.norm.weight: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). 
size mismatch for decoder.stages.0.convs.0.norm.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for decoder.stages.0.convs.0.all_modules.0.weight: copying a param with shape torch.Size([384, 768, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 640, 3, 3, 3]). size mismatch for decoder.stages.0.convs.0.all_modules.0.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for decoder.stages.0.convs.0.all_modules.1.weight: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for decoder.stages.0.convs.0.all_modules.1.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for decoder.stages.0.convs.1.conv.weight: copying a param with shape torch.Size([384, 384, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 320, 3, 3, 3]). size mismatch for decoder.stages.0.convs.1.conv.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for decoder.stages.0.convs.1.norm.weight: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for decoder.stages.0.convs.1.norm.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for decoder.stages.0.convs.1.all_modules.0.weight: copying a param with shape torch.Size([384, 384, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 320, 3, 3, 3]). size mismatch for decoder.stages.0.convs.1.all_modules.0.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for decoder.stages.0.convs.1.all_modules.1.weight: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for decoder.stages.0.convs.1.all_modules.1.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for decoder.transpconvs.0.weight: copying a param with shape torch.Size([384, 384, 1, 2, 2]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 2, 2]). size mismatch for decoder.transpconvs.0.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([320]). size mismatch for decoder.transpconvs.1.weight: copying a param with shape torch.Size([384, 256, 2, 2, 2]) from checkpoint, the shape in current model is torch.Size([320, 256, 2, 2, 2]). size mismatch for decoder.seg_layers.0.weight: copying a param with shape torch.Size([1, 384, 1, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 320, 1, 1, 1]). ```

*I shortened some paths

naga-karthik commented 2 days ago

@po09i Oops, I realized that the model architecture was also slightly updated with v2.5, hence the size mismatch errors you got. I created a PR on the SCT repo and tested it on an image myself. If you could please try it again (usage instructions are in the PR), it should work fine now!
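
For anyone debugging a similar error, a quick sanity check is to dump the tensor shapes stored in the checkpoint and compare them with the shapes reported in the traceback (here 384 vs. 320 channels in the deepest encoder/decoder stages). This is only a sketch; the checkpoint path below is a guess at where SCT unpacks the downloaded zip, so adjust it to your installation:

```
# Hypothetical path; point it at the checkpoint file inside the installed model folder.
# This only reads the file and prints the shapes of the deepest stages.
python -c "
import torch
ckpt = torch.load('path/to/model_contrast-agnostic/model.pth', map_location='cpu')
state_dict = ckpt.get('state_dict', ckpt) if isinstance(ckpt, dict) else ckpt
for name, tensor in state_dict.items():
    if 'stages.4' in name or 'stages.5' in name:
        print(name, tuple(tensor.shape))
"
```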

po09i commented 2 days ago

@naga-karthik The PR did resolve the crash. I used the newer version (2024_09_30) and, as before, did not get any segmentation of the spinal cord :/