Closed: tsalo closed this issue 2 years ago
It looks like downsampling before applying the transform works best (instead of doing both at once), based on the test below where I applied the transform to the T1w-space, T1w-resolution ASEG image vs. the T1w-space, BOLD-resolution one.
I'm just not sure how best to downsample the T1w-resolution images, since nilearn.image.resample_img requires a target affine, and I don't know how to calculate one.
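One way to get a target affine for `nilearn.image.resample_img` is to rescale the voxel dimensions of the source affine while keeping its orientation. A minimal sketch, NumPy only; the function name `scale_affine` is my own, not from any library:

```python
import numpy as np

def scale_affine(affine, target_voxel_sizes):
    """Rescale an affine's voxel sizes while keeping axis directions.

    Each column of the 3x3 block encodes one voxel axis; its norm is
    that axis's voxel size in mm, so scaling columns changes resolution
    without changing orientation.
    """
    affine = np.asarray(affine, dtype=float)
    current_sizes = np.sqrt((affine[:3, :3] ** 2).sum(axis=0))
    scales = np.asarray(target_voxel_sizes, dtype=float) / current_sizes
    new_affine = affine.copy()
    new_affine[:3, :3] = affine[:3, :3] * scales  # scales each column
    return new_affine
```

The result can be passed as `target_affine` to `resample_img`; if I'm reading the nilearn docs right, passing only the 3×3 block (`new_affine[:3, :3]`) lets nilearn recompute the bounding box and shape for you.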
3dresample allows you to feed in target voxel sizes without figuring out the affine or anything, so I think I will try that next.
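For the 3dresample route, the call only needs the desired voxel sizes. A sketch that just assembles the command line, so it can be checked without AFNI installed; `build_3dresample_cmd` is my own helper and the file names are placeholders:

```python
def build_3dresample_cmd(in_file, out_file, voxel_sizes, rmode="NN"):
    """Build an AFNI 3dresample argument list for given voxel sizes."""
    dx, dy, dz = (str(v) for v in voxel_sizes)
    return [
        "3dresample",
        "-dxyz", dx, dy, dz,  # target voxel sizes in mm
        "-rmode", rmode,      # NN (nearest neighbor) preserves label values
        "-prefix", out_file,
        "-input", in_file,
    ]
```

Nearest-neighbor resampling matters here: a label image like the ASEG would be corrupted by any interpolation that averages label values.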
I tried out six different cortical ribbon segmentation workflows on five CamCAN participants to see which one performs best. To be honest, the results look equivalent for all of the T1w-resolution-based approaches and all of the BOLD-resolution-based approaches. The BOLD-resolution-based ones look better overall, though sub-CC722216 doesn't look good. I'm thinking that my initial test just caught a bad subject... although I'm still concerned about the WM and CSF masks, which the methods say are done in T1w-resolution and then downsampled.
I plotted the calculated T1w-space analysis segmentations for five random CamCAN subjects. They look good, so I think I will just move forward with the masks as-is.
Obviously moving to BOLD resolution is going to decrease the quality of masks, but this is beyond what I was expecting! 😨
The ASEG file
The GM mask
The code
Here's the transform application code: