sct-pipeline / spine-park

Pipeline for multicontrast analysis in PD patients
MIT License

Add postprocessing functionality for the new contrast-agnostic spinal cord segmentation model #31

Open Kaonashi22 opened 6 months ago

Kaonashi22 commented 6 months ago

For some subjects, the template-to-subject registration is not accurate on the last slices. I noticed the segmentation masks are not accurate at these levels, which may explain it. I attached an example MT-off image with the warped WM template and the original segmentation.

For one subject, the T1 template doesn't align properly at the edges of the subject images (MT off, UNI). The segmentation masks are good.

sub-DEV169Sujet03_mt-off_MTS_crop_seg.nii.gz sub-DEV169Sujet03_mt-off_MTS_crop.nii.gz PAM50_t1.nii.gz PAM50_wm.nii.gz

Two questions/comments:

- If I understood correctly, the segmentation masks are mainly used for the registration, not for the metrics extraction (which is done with the template objects warped into the subject space). Is that correct?
- How accurate should the segmentations be? Should we increase the number of voxels to make the measurements more robust, or be more conservative to avoid partial volume effects?

Kaonashi22 commented 6 months ago

Funny thing: the aorta is segmented on this dwi image!

Kaonashi22 commented 6 months ago

Here is the position of the WM template after warping; it will be fixed after mask correction.

jcohenadad commented 6 months ago

Funny thing: the aorta is segmented on this dwi image!

oopsi! we should use post-processing methods, such as 'keep largest object'. Is this something that is available (or can be easily implemented) @joshuacwnewton @naga-karthik ?

jcohenadad commented 6 months ago

@Kaonashi22 can you please send me sub-DEV169Sujet03 so I can try to reproduce your results and understand what the issue is. If the mis-registration concerns slices that are not used for metrics extraction, this is not a problem per se. But I am wondering why the warped template was cropped at the edge, even though the segmentation looked OK. I suspect an issue with straightening.

Also, please always indicate the version of the script that you used in your issue (edit https://github.com/sct-pipeline/spine-park/issues/31#issue-2278526410 to add this information). Also, please upload a log file, so I can verify which version of SCT was used.

jcohenadad commented 6 months ago

If I understood correctly, the segmentation masks are mainly used for the registration, not for the metrics extraction (which is done with the template objects warped into the subject space). Is that correct?

Yes and no. The quality of the segmentation impacts the quality of registration, and therefore the quality of metrics extraction.

How accurate should the segmentations be?

It depends on how heavily the registration is weighted towards the segmentation. We could decide to weight the registration more towards the image (i.e., T1w), but then we risk that in subjects with image artifacts the registration will be less accurate. So there is a delicate decision to be made, depending on the image quality. This is why it is important for me to have a representative sample of the images, so I can make an informed decision.

Should we increase the number of voxels to make the measurements more robust, or be more conservative to avoid partial volume effects?

No -- the segmentation of the cord should be the segmentation of the cord (not a dilated version of it).

joshuacwnewton commented 6 months ago

Funny thing: the aorta is segmented on this dwi image!

oopsi! we should use post-processing methods, such as 'keep largest object'. Is this something that is available (or can be easily implemented) @joshuacwnewton @naga-karthik ?

By "keep largest object", do you mean sct_deepseg -remove-small?

  -remove-small REMOVE_SMALL [REMOVE_SMALL ...]
                        Minimal object size to keep with unit (mm3 or vox). A
                        single value can be provided or one value per prediction
                        class. Single value example: 1mm3, 5vox. Multiple values
                        example: 10 20 10vox (remove objects smaller than 10
                        voxels for class 1 and 3, and smaller than 20 voxels for
                        class 2).

It looks like this option refers to functionality that is within ivadomed, meaning it is not currently available for MONAI/nnUNet models. (I didn't realize this! We should absolutely document this, and maybe throw a warning if folks try to use the ivadomed-specific options with the new MONAI/nnUNet models.)

As a quick alternative using existing SCT CLIs, could we do sct_get_centerline -> sct_create_mask -> sct_crop_image -b 0 to zero out all voxels outside of the detected centerline? (I'm assuming that sct_get_centerline/OptiC may be more reliable at detecting a single spinal cord, since it relies on more "traditional computer vision"-based priors (tube shape, localization map) as opposed to "black box DL".) Of course, we would still use the contrast-agnostic model for the accurate segmentation itself.

Or, alternatively, since the cord seems to be centered in the FOV (while the aorta is offset), we could even do sct_create_mask -p center to generate the mask... :thinking:
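For illustration, the "keep largest object" idea can be sketched in a few lines of numpy/scipy on a binary mask. This is only a sketch of the concept, not SCT's actual implementation; the helper name `keep_largest_object` is hypothetical.

```python
import numpy as np
from scipy import ndimage

def keep_largest_object(mask: np.ndarray) -> np.ndarray:
    """Keep only the largest connected component of a binary mask.

    Sketch of the 'keep largest object' postprocessing discussed above
    (e.g. to drop a spuriously segmented aorta while keeping the cord).
    """
    labeled, num = ndimage.label(mask > 0)
    if num == 0:
        return mask
    # Count voxels per label; index 0 is background, so exclude it.
    sizes = np.bincount(labeled.ravel())
    sizes[0] = 0
    largest = sizes.argmax()
    return (labeled == largest).astype(mask.dtype)
```

Applied to the DWI case above, the small off-center component (the aorta) would be removed and only the cord segmentation kept.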

Kaonashi22 commented 6 months ago

@Kaonashi22 can you please send me sub-DEV169Sujet03 so I can try to reproduce your results and understand what the issue is. If the mis-registration concerns slices that are not used for metrics extraction, this is not a problem per se. But I am wondering why the warped template was cropped at the edge, even though the segmentation looked OK. I suspect an issue with straightening.

Also, please always indicate the version of the script that you used in your issue (edit #31 (comment) to add this information). Also, please upload a log file, so I can verify which version of SCT was used.

Thanks @jcohenadad, I'll send you the images by email. This is the version of the script I used:

  $ git log --pretty=oneline
  9ad4422f84001a3f5d70c70b3939dea30538faa8 (HEAD -> main, origin/main, origin/HEAD) Extraction of metrics in all tracts

batch_processing_sub-DEV169Sujet03.log

Kaonashi22 commented 6 months ago

How accurate should the segmentations be?

It depends on how heavily the registration is weighted towards the segmentation. We could decide to weight the registration more towards the image (i.e., T1w), but then we risk that in subjects with image artifacts the registration will be less accurate. So there is a delicate decision to be made, depending on the image quality. This is why it is important for me to have a representative sample of the images, so I can make an informed decision.

There are slight to moderate motion artifacts that can affect different images depending on the subject. I can share the full dataset via OneDrive if needed.

Kaonashi22 commented 6 months ago

Funny thing: the aorta is segmented on this dwi image!

oopsi! we should use post-processing methods, such as 'keep largest object'. Is this something that is available (or can be easily implemented) @joshuacwnewton @naga-karthik ?

By "keep largest object", do you mean sct_deepseg -remove-small?

  -remove-small REMOVE_SMALL [REMOVE_SMALL ...]
                        Minimal object size to keep with unit (mm3 or vox). A
                        single value can be provided or one value per prediction
                        class. Single value example: 1mm3, 5vox. Multiple values
                        example: 10 20 10vox (remove objects smaller than 10
                        voxels for class 1 and 3, and smaller than 20 voxels for
                        class 2).

It looks like this option refers to functionality that is within ivadomed, meaning it is not currently available for MONAI/nnUNet models. (I didn't realize this! We should absolutely document this, and maybe throw a warning if folks try to use the ivadomed-specific options with the new MONAI/nnUNet models.)

As a quick alternative using existing SCT CLIs, could we do sct_get_centerline -> sct_create_mask -> sct_crop_image -b 0 to zero out all voxels outside of the detected centerline? (I'm assuming that sct_get_centerline/OptiC may be more reliable at detecting a single spinal cord, since it relies on more "traditional computer vision"-based priors (tube shape, localization map) as opposed to "black box DL".) Of course, we would still use the contrast-agnostic model for the accurate segmentation itself.

Or, alternatively, since the cord seems to be centered in the FOV (while the aorta is offset), we could even do sct_create_mask -p center to generate the mask... 🤔

Thanks @joshuacwnewton. I've already corrected all the masks manually. Will the cord segmentation from this new approach be consistent with the previous one?

jcohenadad commented 6 months ago

By "keep largest object", do you mean sct_deepseg -remove-small

yes!

(I didn't realize this! We should absolutely document this, and maybe throw a warning if folks try to use the ivadomed-specific options with the new MONAI/nnUNet models.)

oops! yes! good catch

As a quick alternative using existing SCT CLIs, could we do sct_get_centerline -> sct_create_mask -> sct_crop_image -b 0 to zero out all voxels outside of the detected centerline? (I'm assuming that sct_get_centerline/OptiC may be more reliable at detecting a single spinal cord, since it relies on more "traditional computer vision"-based priors (tube shape, localization map) as opposed to "black box DL".) Of course, we would still use the contrast-agnostic model for the accurate segmentation itself. Or, alternatively, since the cord seems to be centered in the FOV (while the aorta is offset), we could even do sct_create_mask -p center to generate the mask... 🤔

hum... these are very ad hoc approaches-- before implementing this for Lydia's project we should see if this issue concerns 1%, 10% or 50% of the data--

jcohenadad commented 6 months ago

@Kaonashi22 I ran the process on this subject and the labeling failed.

If you have created derivatives for this subject, it is important that you also send me the derivatives so I can reproduce your processing.

Kaonashi22 commented 6 months ago

I forgot to send the manual disc labeling that I used (for all subjects, since I couldn't run SPINEPS): sub-DEV169Sujet03_T2_label-disc.json sub-DEV169Sujet03_T2_label-disc.nii.gz

naga-karthik commented 6 months ago

oopsi! we should use post-processing methods, such as 'keep largest object'. Is this something that is available (or can be easily implemented) @joshuacwnewton @naga-karthik ?

@joshuacwnewton is this something you would like me to implement on the MONAI side? Since I am working on improvements to the model on a separate branch, I already have it implemented. I can submit a PR on the SCT repo to include this function, preferably in this part of the code. Please let me know!

If we go this route, please note that the post-processing might not be a global SCT function but specific to the MONAI inference script.

joshuacwnewton commented 6 months ago

I can submit a PR on the SCT repo to include this function, preferably in this part of the code. Please let me know!

That would be lovely! Thank you for offering. :heart:

If we go this route, please note that the post-processing might not be a global SCT function but specific to the MONAI inference script.

Since the postprocessing can be applied to any numpy array (as far as I can tell), I think the easiest thing to do would be to call this function inside the "segment_non_ivadomed" function here, similar to how we apply thresholding as post-processing. (That way, it can be used for both MONAI and nnUNet.)

Also, since we would now have 2 postprocessing operations, we may also want to add an extra explanatory comment above the thresholding, saying something like:


  # Apply postprocessing (replicates existing functionality from ivadomed package)

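To make the second operation concrete, the `-remove-small` semantics quoted earlier (drop connected components below a minimal voxel count) could be sketched as below. This is a hypothetical illustration, not the actual SCT/ivadomed code; the helper name `remove_small_objects` and the voxel-only unit are assumptions.

```python
import numpy as np
from scipy import ndimage

def remove_small_objects(mask: np.ndarray, min_vox: int) -> np.ndarray:
    """Zero out connected components smaller than `min_vox` voxels.

    Sketch mirroring the `-remove-small <N>vox` behavior for a single
    prediction class; per-class thresholds and mm3 units are omitted.
    """
    labeled, num = ndimage.label(mask > 0)
    out = np.zeros_like(mask)
    for comp in range(1, num + 1):
        component = labeled == comp
        if component.sum() >= min_vox:
            # Keep the original (possibly soft) values of this component.
            out[component] = mask[component]
    return out
```

Like the thresholding step, this operates on a plain numpy array, so it could slot into the same postprocessing stage for both MONAI and nnUNet outputs.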
jcohenadad commented 5 months ago

@Kaonashi22 I've renamed this issue because it was non-specific ("QC"). Once https://github.com/spinalcordtoolbox/spinalcordtoolbox/issues/4481 is merged (@joshuacwnewton any ETA?) I will re-run the processing and let you know if this is ready for you to use.

joshuacwnewton commented 5 months ago

I'll make sure it gets merged today. :)

joshuacwnewton commented 5 months ago

The PR has been merged. Now, the existing -largest and -remove-small arguments for sct_deepseg should work for MONAI/nnUNet models, depending on what you want to filter.

jcohenadad commented 5 months ago

Fantastic! Thank you @joshuacwnewton. I'll explore these syntaxes and choose the appropriate one for this project.

jcohenadad commented 5 months ago

@Kaonashi22 I am now working on the postprocessing methods to improve the automatic segmentation for your dataset. I notice this issue covers multiple problems (https://github.com/sct-pipeline/spine-park/issues/31#issue-2278526410 and https://github.com/sct-pipeline/spine-park/issues/31#issuecomment-2098360250 are two very different issues), making it very difficult to follow and address them specifically. One GitHub issue should ideally cover one single problem. Can you please open another issue about https://github.com/sct-pipeline/spine-park/issues/31#issuecomment-2098360250, with a link to the data and the SCT version you used. Thanks.

Kaonashi22 commented 5 months ago

Sure, I'll open the issue tomorrow. I have to retrieve the ID of the subject where I encountered this issue.