sct-pipeline / contrast-agnostic-softseg-spinalcord

Contrast-agnostic spinal cord segmentation project with softseg
MIT License

Verify whether softseg model performs well on images with spinal cord compression #47

Closed by joshuacwnewton 4 months ago

joshuacwnewton commented 1 year ago

Context -- This older SCT issue:

Received a comment saying:

Will be tackled by https://github.com/sct-pipeline/contrast-agnostic-softseg-spinalcord

This issue is for following up on the task of verifying whether the new model performs better than the existing models on images with spinal cord compression.

naga-karthik commented 1 year ago

Thanks for the follow-up @joshuacwnewton! We are in the process of finalizing the contrast-agnostic model this week. Once it's done, I will post the QC report of the model tested on some subjects with compression. Thanks for your patience!

naga-karthik commented 4 months ago

Hey @joshuacwnewton! It's been a while, but we have finally gotten around to this! The latest release of contrast-agnostic (which also ships with SCT v6.3) works reasonably well on compression data. Please let me know if there are any subjects you wanted to test.

joshuacwnewton commented 4 months ago

Please let me know if there are any subjects you wanted to test.

Personally, I'm not sure which subjects to test on outside of the image mentioned in the original issue:

path to image : /duke/projects/ml_sci_prognosis/issue/issuedeepseg#2470

Thankfully, the image is still at the same location, so I tried running the contrast-agnostic model, and the results are significantly better (no empty slices, and a much more accurate segmentation):

[Animated GIF comparing the two segmentations]

(Red: sct_deepseg -task seg_sc_contrast_agnostic, blue: sct_deepseg_sc)

We can test on more subjects, of course, but in the context of the original SCT issue, I think the problem seen with sct_deepseg_sc has been addressed. :)
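For reference, a comparison like the one above could be reproduced roughly as follows. This is a sketch, not the exact commands used: the input filename is a placeholder, the contrast (t2) is assumed, and flags follow SCT v6.x conventions.

```shell
# Hypothetical input filename; substitute the actual image path.
IMG=image.nii.gz

# New contrast-agnostic model (available as of SCT v6.3);
# no contrast flag needed, since the model is contrast-agnostic.
sct_deepseg -i "$IMG" -task seg_sc_contrast_agnostic

# Legacy deep-learning segmentation; requires the image contrast
# to be specified explicitly (t2 assumed here).
sct_deepseg_sc -i "$IMG" -c t2
```

Overlaying the two output masks on the input image (e.g. in FSLeyes with different colors) gives the red/blue comparison shown above.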

naga-karthik commented 4 months ago

Amazing, thanks for the quick testing!

joshuacwnewton commented 4 months ago

I think we can close this issue? (Unless we want to come up with some sort of benchmark testing dataset for comparing different SC seg methods... but that seems like a paper in and of itself, hehe.)

naga-karthik commented 4 months ago

some sort of benchmark testing dataset for comparing different SC seg methods

This is actually a great idea! Right now, we're just using whichever datasets happen to come to mind during our project meetings. That's not ideal, as datasets evolve and new ones are added over time. Having a (hopefully public) benchmark dataset could be useful.

Moreover, many ML/DL papers test against a standard benchmark, which makes it easier for readers to judge a method's performance objectively.

I added it to the agenda for the next SCT meeting. As for this issue, I believe it can be closed now!