nipreps / fmriprep

fMRIPrep is a robust and easy-to-use pipeline for preprocessing of diverse fMRI data. The transparent workflow dispenses with manual intervention, thereby ensuring the reproducibility of the results.
https://fmriprep.org
Apache License 2.0

CIFTI resampling weights #3119

Closed · effigies closed this 9 months ago

effigies commented 10 months ago

I'm trying to think through the logic of the resampling weights that @HippocampusGirl added in c331d1d. What we get are lines like:

0, 7, 102, 28, 137, 0.0740741, 102, 29, 137, 0.111111, 102, 30, 137, 0.0740741, 103, 28, 137, 0.148148, 103, 28, 138, 0.0740741, 103, 29, 137, 0.0740741, 103, 29, 138, 0.0555556
1, 6, 102, 28, 137, 0.148148, 102, 28, 138, 0.203704, 102, 29, 137, 0.222222, 102, 29, 138, 0.314815, 102, 30, 137, 0.148148, 102, 30, 138, 0.222222

That is: a vertex index, the number of contributing voxels, and then an (i, j, k, weight) quadruplet for each of those voxels.
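
A minimal sketch of how one of these lines could be parsed under that layout (parse_weight_line is a made-up helper, not anything in the codebase):

```python
def parse_weight_line(line):
    """Parse one weight line: vertex index, voxel count, then (i, j, k, weight) groups."""
    fields = line.split(",")
    vertex, n_voxels = int(fields[0]), int(fields[1])
    entries = []
    for start in range(2, 2 + 4 * n_voxels, 4):
        i, j, k = (int(v) for v in fields[start:start + 3])
        weight = float(fields[start + 3])
        entries.append(((i, j, k), weight))
    return vertex, entries

# First line above: vertex 0 with 7 contributing voxels
vertex, entries = parse_weight_line(
    "0, 7, 102, 28, 137, 0.0740741, 102, 29, 137, 0.111111, 102, 30, 137, 0.0740741, "
    "103, 28, 137, 0.148148, 103, 28, 138, 0.0740741, 103, 29, 137, 0.0740741, 103, 29, 138, 0.0555556"
)
```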

This is heavily dependent on the exact grid, so we need to calculate it for each BOLD series. We've been resampling from T1w space into fsLR, but to calculate this without resampling the entire time series, we would need the BOLD sampling reference in the T1w space. I think it will be most efficient to project the GIFTI surfaces into the original BOLD space and then generate this.
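
As a rough illustration (not the actual implementation), projecting the GIFTI surface vertices into the BOLD reference's voxel grid might look like this, assuming we already have a 4x4 affine taking T1w RAS coordinates into the BOLD reference's RAS space:

```python
import numpy as np
import nibabel as nb

def surface_vertices_in_bold_voxels(surf_gii, bold_ref_nii, t1w_to_bold):
    """Map GIFTI surface vertices (in T1w RAS) into the BOLD reference's voxel grid.

    t1w_to_bold is a hypothetical 4x4 affine from T1w RAS to BOLD RAS.
    """
    coords = nb.load(surf_gii).agg_data("pointset")  # n_vertices x 3, T1w RAS
    bold_ref = nb.load(bold_ref_nii)
    bold_ras = nb.affines.apply_affine(t1w_to_bold, coords)
    # RAS -> voxel indices of the BOLD reference grid
    return nb.affines.apply_affine(np.linalg.inv(bold_ref.affine), bold_ras)
```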

Now, I want to understand what we can do with this. This will give us weights from vertices in fsnative into the voxel space of the BOLD reference. When we apply them, do we need the slice-timing+head-motion+susceptibility corrected series? Or will we be able to apply our voxel-to-voxel head motion transforms and voxel shift map to these indices before applying the weighting?

WDYT @feilong?

cc @oesteban

feilong commented 10 months ago

I think the weight is the fraction of the voxel that falls inside the vertex's coverage. The algorithm probably divides each voxel into 3x3x3 sub-voxels and counts how many of them are inside the polyhedron.
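
If that guess is right, the per-voxel weight could be estimated with something like this sketch, where is_inside stands in for the actual point-in-polyhedron test:

```python
import numpy as np

def subvoxel_weight(voxel_ijk, is_inside, n_div=3):
    """Estimate the fraction of a voxel covered by a vertex's polyhedron by
    testing the centers of n_div**3 sub-voxels (3x3x3 by default)."""
    offsets = (np.arange(n_div) + 0.5) / n_div - 0.5  # sub-voxel centers, voxel units
    grid = np.stack(np.meshgrid(offsets, offsets, offsets, indexing="ij"), axis=-1)
    centers = grid.reshape(-1, 3) + np.asarray(voxel_ijk)
    inside = [is_inside(center) for center in centers]
    return float(np.mean(inside))  # fraction in [0, 1]
```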

Projecting the surfaces into original BOLD space makes perfect sense to me.

I suppose it's best to perform this resampling for each volume separately (if computational time permits). Due to head motion, these weights might change from volume to volume. My preferred solution is to project the surfaces into each volume, compute the weights, and perform the resampling.

feilong commented 10 months ago

In this case, it's not practical to save all these weights for all the volumes. We just compute them when we perform the resampling.
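
Roughly, the per-volume loop could look like this sketch (all helper names are placeholders for the actual transform, weight, and resampling code):

```python
import numpy as np

def resample_series(bold_series, surfaces, motion_xfms,
                    project_surfaces, compute_weights, apply_weights):
    """Per-volume resampling: move the surfaces with head motion, recompute the
    overlap weights on the fly, and apply them without ever storing them."""
    rows = []
    for volume, xfm in zip(bold_series, motion_xfms):
        moved = project_surfaces(surfaces, xfm)          # surfaces in this volume's grid
        weights = compute_weights(moved, volume.shape)   # sparse vertex-by-voxel weights
        rows.append(apply_weights(weights, volume))      # one value per vertex
    return np.stack(rows)  # time x vertices
```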

effigies commented 10 months ago

Got it. I suspect that would be infeasibly slow, but I suppose it's worth coding up and seeing. We would want it to be equivalent (to some tolerance) to wb_command -volume-to-surface-mapping.
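
For the equivalence check, one option would be to compare against the metric produced by wb_command's ribbon-constrained mapping, along these lines (file paths and tolerance are placeholders, and the ribbon-constrained method is assumed to be the reference):

```python
import subprocess
import nibabel as nb
import numpy as np

def check_against_workbench(ours, bold_ref, midthickness, white, pial, rtol=1e-4):
    """Compare a Python volume-to-surface resampling against Connectome Workbench's
    ribbon-constrained mapping."""
    subprocess.run([
        "wb_command", "-volume-to-surface-mapping",
        bold_ref, midthickness, "reference.func.gii",
        "-ribbon-constrained", white, pial,
    ], check=True)
    reference = nb.load("reference.func.gii").agg_data()
    return np.allclose(ours, reference, rtol=rtol)
```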

So it sounds like there's no advantage to pre-computing anything for volume-to-surface. That's simple.

HippocampusGirl commented 10 months ago

Wow, that seems like an awfully complex algorithm to calculate the weights. It seems like they have some kind of logic for creating polygons between the pial and the white matter surfaces that are used for the intersections. https://github.com/Washington-University/workbench/blob/a9f8f00b05bd207c544a293a72b1026e1a57db83/src/Files/RibbonMappingHelper.cxx#L83-L165 I don't really understand how that can create sensible weights for resampling. Do you know if there's a paper or a description somewhere? Why don't they simply resample the mesh vertices to volume space, and then interpolate?

effigies commented 10 months ago

> Why don't they simply resample the mesh vertices to volume space, and then interpolate?

To cover the cortex, you would need to resample at several layers, as you can have sections that are multiple voxels thick. In this approach, you can imagine the vertex as describing a tube from the white to pial surfaces. I suppose they consider this more efficient and/or correct than sampling at multiple layers and averaging.
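
For contrast, the sample-at-multiple-layers-and-average alternative might look roughly like this sketch, assuming the white and pial vertex coordinates are already in voxel space:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def sample_ribbon_layers(volume, white_vox, pial_vox, n_layers=5):
    """Sample a volume at several depths between the white and pial surfaces
    (vertex coordinates given in voxel space) and average across depths."""
    samples = []
    for frac in np.linspace(0.0, 1.0, n_layers):
        coords = white_vox + frac * (pial_vox - white_vox)  # n_vertices x 3
        samples.append(map_coordinates(volume, coords.T, order=1))
    return np.mean(samples, axis=0)  # one value per vertex
```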

There have definitely been papers on cortical sampling, but I don't have a reference off the top of my head. Unfortunately, most algorithms are hidden in papers solving a specific clinical problem. The FreeSurfer, Civet (now HCP), SUMA (AFNI) and BrainSuite labs are likely to have published on it. And looking at the layer fMRI work might be a useful source of references, as they are moving toward treating each layer separately, rather than aggregating.

feilong commented 10 months ago

I agree with @effigies. Aggregating data across different layers is more accurate than sampling at a single point. Given that the algorithm is based on volume overlap, it might also aggregate data within the same "layer", especially if the vertex covers a large area of the cortex.

Mathematically, the overlap-based algorithm is an elegant solution. It could be more accurate if we computed the overlap using exact polyhedra instead of the current estimation method, but that would further increase the computational time...

effigies commented 10 months ago

Just thinking about this, how would you implement the algorithm you describe while incorporating a voxel shift map, which will squeeze/stretch voxels along one axis?

feilong commented 10 months ago

When we transform the surfaces to BOLD space, we apply the voxel shift map to the coordinates, so that the surfaces are distorted in the same way as the BOLD images. Does this make sense?
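
Something like this sketch, sampling the voxel shift map at each vertex and shifting along the phase-encoding axis (the axis index and sign convention are assumptions that depend on the acquisition and fieldmap convention):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def distort_vertices(vertices_vox, voxel_shift_map, pe_axis=1):
    """Shift vertex coordinates (in voxel space) along the phase-encoding axis
    by the voxel shift map sampled at each vertex, so the surfaces end up
    distorted like the BOLD volumes."""
    shifts = map_coordinates(voxel_shift_map, vertices_vox.T, order=1)
    distorted = vertices_vox.astype(float)
    distorted[:, pe_axis] += shifts
    return distorted
```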

effigies commented 10 months ago

Okay, got it. Yeah, that doesn't really require any changes to the fieldmap resampling approach.

Now I'm curious about the Jacobian scaling. Does it even apply when there isn't a voxel volume to consider? I think it should, as the idea is that signal in the original image is either attenuated due to stretching or exaggerated due to pile-up.

I need to think through the theory here. I worry that this is going to be more complicated than the raw fieldmap case, as there are now three components. Unless you've thought this through already?

feilong commented 10 months ago

I haven't thought about the Jacobian much. In my analyses the data are always normalized afterwards (usually z-scored), so the scale doesn't matter in that case.

If we use the overlap-based weights without normalizing the sum to 1, they naturally scale the data the way a Jacobian would scale volumetric data: larger vertices end up with larger intensities.

Regarding the Jacobian for SDC, it makes sense to apply it to the original BOLD data anyway. As you said, the intensity of the acquired BOLD is biased by the distortion, and it is good to correct for it. The weights change the intensity based on the number of sub-voxels in each vertex, and the SDC Jacobian normalizes the intensity in each voxel.

feilong commented 10 months ago

Whether to use other Jacobians, e.g., from T1w to MNI, or from BOLD to surface, depends on how you want to resample the data.

Imagine a transform matrix where rows are features (voxels/vertices) in the first space and columns are features in the second space. If we normalize the sum of each row to 1, that keeps the total quantity constant (e.g., the total cortical area). If we normalize the sum of each column to 1, that puts the features of the resampled data on a similar scale to the features of the input data.

I usually use the first way for quantities like area and volume, and the second way for quantities like curvature and sulcal depth. I'm using the second way for BOLD data, but either way makes sense to me, and I believe the differences are small in most cases.
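
A small numpy illustration of the two normalizations, with a toy input-by-output weight matrix:

```python
import numpy as np

# Toy input-by-output weight matrix (e.g., voxels by vertices)
W = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0]])

# Row normalization: each input feature distributes its quantity across outputs,
# so totals are conserved (useful for area or volume).
W_row = W / W.sum(axis=1, keepdims=True)

# Column normalization: each output feature is a weighted average of the inputs,
# so the output keeps the scale of the input (useful for curvature or BOLD).
W_col = W / W.sum(axis=0, keepdims=True)

data_in = np.array([10.0, 20.0])                 # one value per input feature
print(data_in @ W_row, (data_in @ W_row).sum())  # sums to 30.0, same as the input
print(data_in @ W_col)                           # values on the same scale as the input
```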