valosekj closed this issue 2 weeks ago
I managed to get predictions (using my contrast translation approach) on a T1w image. However, results are not optimal due to my current resampling to 1 mm isotropic.
| Axial view | Sagittal view |
|---|---|
This is very encouraging though! Do you think you could re-train a model that resamples to 0.5 mm to see if that helps?
Yes, sure! I will work on that!
> I managed to get predictions (using my contrast translation approach) on a T1w image.

Very cool, thanks!
I compared the original and the "fake" image (i.e. created by your contrast translation), and it seems that after contrast translation, we lose the contrast between the rootlets and the CSF:
Thus, it is not surprising to me that the rootlets segmentation does not work (as we do not see the rootlets after the contrast translation).
> However, results are not optimal due to my current resampling to 1 mm isotropic.
What do you mean by this? When I check the header of the "fake" image, it's still 0.7 mm. Do you mean that the resolution is actually 1 mm (i.e. the header has not been updated)?
No, the header is right, but I just resample back to the original resolution at the end of my processing.
So what I mean is that I think I could improve my results for this task if I train the model at a higher resolution, because I might lose the rootlets during my first resampling.
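To make the resolution point concrete, here is a minimal numpy-only sketch of isotropic resampling (nearest-neighbour for simplicity; the real pipeline presumably uses a proper interpolator, and the volume shape, spacings, and the synthetic "rootlet" are illustrative assumptions):

```python
import numpy as np

def resample_iso(vol, in_spacing, out_spacing):
    """Nearest-neighbour resample of a 3-D volume to a new voxel size.

    Toy stand-in for the real resampling step: it only illustrates how
    the voxel grid changes, i.e. how many voxels cover a thin structure
    at 1 mm vs 0.5 mm.
    """
    in_spacing = np.asarray(in_spacing, float)
    out_spacing = np.asarray(out_spacing, float)
    new_shape = np.maximum(
        1, np.round(np.array(vol.shape) * in_spacing / out_spacing)
    ).astype(int)
    # position of each output voxel centre in input voxel coordinates
    idx = [
        np.minimum(
            (np.arange(n) * out_spacing[a] / in_spacing[a]).round().astype(int),
            vol.shape[a] - 1,
        )
        for a, n in enumerate(new_shape)
    ]
    return vol[np.ix_(*idx)]

# hypothetical 0.7 mm native volume with a thin (~1 voxel) bright "rootlet" plane
vol = np.zeros((40, 40, 40))
vol[20, :, :] = 1.0

at_1mm = resample_iso(vol, (0.7,) * 3, (1.0,) * 3)   # fewer voxels per structure
at_05mm = resample_iso(vol, (0.7,) * 3, (0.5,) * 3)  # more voxels per structure
```

With real interpolation (linear or spline), partial-volume averaging at 1 mm can dilute a sub-voxel structure like a rootlet below detectability, which is the motivation for training at 0.5 mm instead.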
Another possible approach to try: transfer learning, i.e., pre-train on T2w, then fine-tune on the inv2 image from an MP2RAGE.
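The pre-train/fine-tune mechanics could look roughly like this. A toy numpy sketch with a linear logistic model and synthetic stand-in data (the real rootlets model would be a deep segmentation network; all data and hyperparameters here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def train(X, y, w=None, lr=0.1, epochs=200):
    """Plain gradient descent on a logistic model.

    Passing pretrained weights via `w` = fine-tuning;
    passing None = training from scratch.
    """
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)       # mean gradient step
    return w

# "pre-train on T2w": plenty of labelled samples (synthetic)
X_t2 = rng.normal(size=(500, 8))
y_t2 = (X_t2[:, 0] > 0).astype(float)
w_pre = train(X_t2, y_t2)

# "fine-tune on inv2": few samples from a shifted distribution,
# starting from the pretrained weights instead of zeros
X_inv2 = rng.normal(size=(30, 8)) + 0.3
y_inv2 = (X_inv2[:, 0] > 0.3).astype(float)
w_ft = train(X_inv2, y_inv2, w=w_pre.copy(), lr=0.05, epochs=50)
```

The point is only the weight reuse: with few inv2 labels, starting from weights learned on abundant T2w data is usually a much better initialization than random.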
I believe we can close the issue for now.
Try rootlets segmentation on the `inv2` image from an MP2RAGE acquisition.

For a bit of context:
- the `inv2` image but also the `inv1` and `UNIT1` images (details)
- `UNIT1` images multiplied by -1 (details)
- `hc-leipzig-7t-mp2rage` and manually corrected them (details here and here)

Ideas:
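The "multiplied by -1" step mentioned in the context above can be sketched as follows. A minimal numpy illustration of flipping image contrast by negating intensities; the rescaling back to the original range is my assumption, not necessarily what was done for the `UNIT1` images:

```python
import numpy as np

def invert_intensities(vol):
    """Flip image contrast by negating intensities: bright tissue
    becomes dark and vice versa.

    The negated values are then rescaled back into the original
    intensity range (assumed convention) so downstream tools that
    expect non-negative intensities still work.
    """
    inv = -vol.astype(float)
    inv = (inv - inv.min()) / (inv.max() - inv.min())  # normalise to [0, 1]
    return inv * (vol.max() - vol.min()) + vol.min()   # map to original range

# tiny 2-D example: the brightest voxel (10) becomes the darkest (0)
vol = np.array([[0.0, 10.0], [2.0, 5.0]])
flipped = invert_intensities(vol)
```

Such an inversion makes a T1w-like contrast look more T2w-like, which is presumably why it helps when feeding MP2RAGE-derived images to a model trained on another contrast.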