Nilser3 opened 3 months ago
Legend:
- Subject: `sub-0047_acq-ax_PSIR`
- `seg_sc_contrast_agnostic` (model v2.4)
- `sct_deepseg_sc -c t1`
- QC here: qc_ucsf-gm-psir.zip
Because the contrast-agnostic model v2.4 failed to segment subject sub-0047 (which is a single 2D slice, see the segmentations above), I merged 40 copies of the slice along Z to build a 3D volume and applied the same contrast-agnostic model to it.

We get good results in the central slices (first column), but irregularities at the upper and lower ends:
Upper last slice
Lower last slice
Maybe we should add an informative message about using the contrast-agnostic model on 2D images? Feedback please, @naga-karthik
Hey Nilser, thanks for the detailed segmentation images! It's surprising that the contrast-agnostic model is not working that well -- I'm guessing that's mainly because these are single-slice images? The v2.4 model was trained with all-3D images AND didn't include axial PSIR images (only sagittal PSIR scans were available).
> I made a merge in Z of 40 slices,

What do you mean by this? Where is this 40 coming from? I have a few more questions:
> but irregularities in the upper and lower ends:

Yeah, this is an issue caused by padding the images to a common size during training. Since you're merging slices anyway to make a 3D input, maybe you can also discard the top and bottom slices? i.e. your actual SC slice would be in the middle -- which I see already has a good segmentation. Let me know what you think!
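To make the suggestion concrete, here is a minimal sketch (the `middle_slice` helper and the array shapes are hypothetical, not part of SCT) of keeping only the central slice of the stacked prediction, where the segmentation looks reliable:

```python
import numpy as np

def middle_slice(pred_3d: np.ndarray) -> np.ndarray:
    """Discard the unreliable top/bottom slices of the stacked
    prediction and keep only the central slice along Z."""
    z_mid = pred_3d.shape[-1] // 2
    return pred_3d[..., z_mid]

# Dummy binary prediction volume standing in for the model output.
pred_3d = np.zeros((192, 192, 40), dtype=np.uint8)
pred_3d[90:100, 90:100, :] = 1
seg_2d = middle_slice(pred_3d)
print(seg_2d.shape)  # (192, 192)
```

Since every Z slice is a copy of the same 2D image, any central slice is equivalent; taking the middle one just stays farthest from the padded edges.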
Thank you @naga-karthik!
Here it is in detail: I merged 40 2D slices along the Z-axis (repeating the same slice).

Original input image (2D slice):

3D volume:
Yes, I think the model produces a good segmentation in the middle of a 3D volume; maybe this could be the approach when the model receives 2D images as input.
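As a rough illustration of this slice-stacking workaround (a sketch only; the `stack_slice` helper and the array shapes are hypothetical, and in practice the image would be loaded and saved with a NIfTI library such as nibabel):

```python
import numpy as np

def stack_slice(slice_2d: np.ndarray, n_slices: int = 40) -> np.ndarray:
    """Replicate a single 2D slice along a new Z axis to build a
    pseudo-3D volume that the contrast-agnostic model can process."""
    return np.repeat(slice_2d[..., np.newaxis], n_slices, axis=-1)

# Dummy 2D "PSIR slice" standing in for the real image data.
slice_2d = np.random.rand(192, 192).astype(np.float32)
volume = stack_slice(slice_2d, n_slices=40)
print(volume.shape)  # (192, 192, 40): every Z slice is identical
```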
> I made a merge in Z of 40 slices, building a 3D volume where I applied the same contrast-agnostic model.

What if we pad using mirrored data to avoid edge cases? I think we already talked about this, @naga-karthik @Nilser3 -- we should add this as a preprocessing step of sct_deepseg before inference.
Yeah, this could be done -- I opened an issue on SCT regarding this
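A minimal sketch of what that mirror-padding preprocessing could look like (assuming a numpy volume with Z as the last axis; `mirror_pad_z` is a hypothetical helper, not an existing SCT function):

```python
import numpy as np

def mirror_pad_z(volume: np.ndarray, pad: int = 8) -> np.ndarray:
    """Pad along Z by reflecting edge slices, instead of zero-padding,
    so the model sees plausible anatomy near the volume ends."""
    return np.pad(volume, ((0, 0), (0, 0), (pad, pad)), mode="reflect")

volume = np.random.rand(192, 192, 40).astype(np.float32)
padded = mirror_pad_z(volume, pad=8)
print(padded.shape)  # (192, 192, 56)
```

After inference, the padded slices would be cropped off again so the prediction matches the original volume's geometry.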
Hi @mguaypaq,
Could you please create a repo for the dataset `hc-ucsf-psir`?
Thank you!
Maybe it should be `gm-psir-ucsf`? A lot of our datasets have center names at the end (e.g. `sci-zurich`, `dcm-paris`, `lumbar-epfl`, etc.)
~~How about simply `psir-ucsf`? I'm afraid 'gm' will be confusing. We do have a 'gmseg' but it was specifically for a challenge.~~

Actually, according to our convention, the contrast goes at the end. So it should be `ucsf-psir` then.
EDIT 20240620_124453: I just edited our convention to add `hc-` at the beginning, so it would be `hc-ucsf-psir`, if that's ok with everyone.
Hi @mguaypaq, could you please create a repo for the `hc-ucsf-psir` dataset.
Thank you!
I created the repository and gave write access to @Nilser3: https://data.neuro.polymtl.ca/datasets/hc-ucsf-psir
Thank you @mguaypaq, PR done!
Description

I would like to push a new dataset, `ucsf-gm-psir`, to the git-annex server. It is a dataset shared by our UCSF collaborators and contains 110 subjects with:
- `anat`: axial PSIR images from healthy controls (one slice per subject) at the C2-C3 level
- `GT`: GM manual segmentation

Here is the qc_ucsf-gm-psir-gm.zip for checking the GM segmentation.