ME-ICA / tedana

TE-dependent analysis of multi-echo fMRI
https://tedana.readthedocs.io
GNU Lesser General Public License v2.1

Detrending & tedana (or detrended variance explained) #1054

Open handwerkerd opened 4 months ago

handwerkerd commented 4 months ago

Summary

I was talking with @afni-rickr and he suggested detrending and possibly regressing out motion parameters before tedana. This would reduce the amount of variance tedana would need to model and potentially make it perform better.

Additional Detail

In thinking this over, I suspect regressing out motion would be problematic: we'd need to regress the parameters from all echoes, and if motion artifacts didn't all follow the regressors, it might not actually save degrees of freedom or improve other results. I think there's a better case for detrending. I'm not too worried about high-variance trends making tedana perform worse, but they do make it harder to interpret total variance explained and accepted/rejected variance explained, since the magnitude of the linear drift dominates the total. That makes it hard to see whether rejected components are, say, 30% of the meaningful total variance in one population vs 50% in another.

We might want to test whether tedana results are substantively different if we detrend first. If not, one way to address the above issue would be to add a new metric: detrended variance explained. I'd need to think through the math, but we'd detrend each component's time series and scale each component's "variance explained" by how much detrending reduced its variance. If we want to get extra fancy, we could show the variance-explained pie chart and component time series with or without detrending.
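To make the idea concrete, here is a minimal sketch of the rescaling, assuming a components-by-time array `comp_ts` and per-component variance explained `varex` (the names, helper function, and detrending basis are all hypothetical, not tedana's API):

```python
import numpy as np

def detrended_varex(comp_ts, varex, polort=2):
    """Rescale variance explained by how much polynomial detrending
    shrinks each component's time series (hypothetical helper).

    comp_ts : (n_components, n_timepoints) ICA mixing time series
    varex   : (n_components,) variance explained, in percent
    polort  : order of the polynomial detrending
    """
    n_tps = comp_ts.shape[1]
    x = np.linspace(-1, 1, n_tps)
    design = np.vander(x, polort + 1)  # polynomial trends + intercept
    betas, *_ = np.linalg.lstsq(design, comp_ts.T, rcond=None)
    resid = comp_ts.T - design @ betas
    # Fraction of each component's variance that survives detrending
    ratio = resid.var(axis=0) / comp_ts.var(axis=1)
    scaled = varex * ratio
    # Renormalize so the values still sum to 100%
    return 100 * scaled / scaled.sum()
```

Whether to renormalize like this or report the raw shrunken percentages is exactly the kind of math detail that would need thinking through.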

CesarCaballeroGaudes commented 4 months ago

IMHO, detrending prior to tedana might be helpful because ICs explaining a high % of variance often exhibit clear low-frequency trends. In our datasets we usually find (two) related components with opposite trends. In contrast, I don't think regressing out realignment parameters prior to tedana would be advantageous. Instead, I'd advocate for using the realignment parameters in the decision tree, as @eurunuela has aimed to implement. A bigger movement-related tree is easier to see in the brain forest!!

handwerkerd commented 3 months ago

My top priorities are to get realignment parameters into the decision tree (#1021) and to get more stable component estimation (likely #1013). Unless someone else gets to it first, running tedana on detrended data should be fairly easy to test. One would need to run polynomial detrending on all echoes but keep the mean. Without any code edits, tedana could be run with and without this detrending to see how it alters the eventual denoised time series.
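For reference, the preprocessing could be as small as this sketch, assuming nibabel-loadable 4-D echoes (file names are made up); the fitted trends are removed but each voxel's mean is added back:

```python
import numpy as np
import nibabel as nib

def detrend_keep_mean(fname, out_fname, polort=2):
    """Polynomial detrending of a 4-D echo, preserving the voxelwise mean."""
    img = nib.load(fname)
    data = img.get_fdata()  # (x, y, z, t)
    n_tps = data.shape[-1]
    x = np.linspace(-1, 1, n_tps)
    design = np.vander(x, polort + 1)  # polynomial trends + intercept
    ts = data.reshape(-1, n_tps).T     # (t, n_voxels)
    betas, *_ = np.linalg.lstsq(design, ts, rcond=None)
    # Remove the full fit, then add each voxel's original mean back
    detrended = ts - design @ betas + ts.mean(axis=0)
    out = detrended.T.reshape(data.shape)
    nib.save(nib.Nifti1Image(out, img.affine, img.header), out_fname)

for echo in ["echo1.nii.gz", "echo2.nii.gz", "echo3.nii.gz"]:
    detrend_keep_mean(echo, echo.replace(".nii.gz", "_dt.nii.gz"))
```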

afni-rickr commented 3 months ago

Note that detrending or other regression should happen in the context of censoring, at least if time gets squeezed. If modularity is important, censoring with spike regressors might be preferable to time squeezing. Even then, I would think detrending would still distort the relationship between the echoes. So it would seem like detrending or other regression should happen to the OC input to ICA, rather than to the individual echoes. Though again, I am not sure what happens after that.
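As a toy illustration of spike regressors (the censored timepoint indices are made up), each censored volume gets its own one-hot column appended to the nuisance design:

```python
import numpy as np

n_tps = 200
censored = [13, 14, 87]  # hypothetical censored TRs

# One column per censored volume: 1 at that TR, 0 elsewhere
spikes = np.zeros((n_tps, len(censored)))
spikes[censored, np.arange(len(censored))] = 1.0

# Appended to the detrending design, these columns absorb the
# censored volumes without squeezing them out of the time series
x = np.linspace(-1, 1, n_tps)
design = np.column_stack([np.vander(x, 3), spikes])
```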

CesarCaballeroGaudes commented 3 months ago

I agree that detrending should be done on the OC input.

handwerkerd commented 3 months ago

To make the metric fits work, you'd need to detrend the separate echoes (but retain the mean). To calculate kappa & rho, the component time series are fit to each echo's data. If the OC data are detrended but the echoes aren't, then those calculations would definitely break down. That said, we might lose or corrupt some echo-dependent information if we detrend each echo separately.

One intermediate option would be to fit the polynomial detrending regressors to either the OC data or all echoes, and then fit a single, scaled copy of that trend to each echo. That is, suppose a voxel's trend in the OC data is modeled by `y = 1.5x^3 + 0.8x^2 + 2.1x + 5`. Then, for that voxel, we'd fit each echo to `y = A*(1.5x^3 + 0.8x^2 + 2.1x) + C` rather than letting the relationships between the coefficients vary across echoes.
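A sketch of that constrained fit for a single voxel, reusing the example coefficients above (everything here is illustrative):

```python
import numpy as np

n_tps = 200
x = np.linspace(-1, 1, n_tps)

# Trend estimated from the OC data for this voxel, using the example
# coefficients; mean-centered so each echo keeps its own intercept C
trend = 1.5 * x**3 + 0.8 * x**2 + 2.1 * x
trend -= trend.mean()

def fit_scaled_trend(echo_ts, trend):
    """Fit y = A * trend + C to one echo and remove the scaled trend."""
    design = np.column_stack([trend, np.ones_like(trend)])
    (A, C), *_ = np.linalg.lstsq(design, echo_ts, rcond=None)
    return echo_ts - A * trend  # trend removed, mean retained
```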

I'm not sure if this would actually matter or what the effects of any detrending approach would be, but we might get a better sense when someone tries it and compares the results.

afni-rickr commented 3 months ago

Does the relationship between the echoes still matter after OC is computed? How BOLD-like a time series looks could be evaluated before OC (and therefore before detrending, etc.). At that point, wouldn't it just be a question of variance explained? And if so, the relationship would no longer be needed and the echoes could be detrended. As a separate question, does that mean you evaluate explained variance from the echoes separately? I figured that would come from OC. I guess I just don't know exactly what metrics are being computed, and on what. Thanks.

handwerkerd commented 3 months ago

ICA is the limiting step. The ICA component time series need to be mappable back onto the original echoes. If the OC data that go into the ICA step are detrended, but the individual echoes are not, then this breaks down.

That said, this goes back to my opening comment. If the goal is to calculate variance explained excluding linear trends, that should be relatively easy. If the goal is to run ICA on detrended data, then there are more complex issues to think through.

afni-rickr commented 3 months ago

My somewhat ignorant inclination would be to run ICA on the detrended data. Echo/BOLD relationships could be computed separately, even if that requires keeping both the original and detrended echoes. But it seems the ICA could be far more sensitive to properties of interest if it were not "distracted" by components such as trends or censored spikes.

tsalo commented 2 months ago

The `--gscontrol gsr` approach estimates the global signal from the OC data and then removes it from each echo, so we do have code that could be repurposed for detrending.
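For concreteness, a sketch of that repurposing with polynomial trends standing in for the global signal (the function name and array layout are assumptions, not the existing gsr code):

```python
import numpy as np

def regress_trends_from_echoes(echo_data, polort=2):
    """Project polynomial trends out of each echo, preserving the mean,
    mirroring how the gsr step removes one regressor from every echo.

    echo_data : (n_echoes, n_voxels, n_timepoints)
    """
    n_tps = echo_data.shape[-1]
    x = np.linspace(-1, 1, n_tps)
    design = np.vander(x, polort + 1)  # polynomial trends + intercept
    pinv = np.linalg.pinv(design)      # (polort + 1, n_timepoints)
    out = np.empty_like(echo_data)
    for e, echo in enumerate(echo_data):
        fit = (echo @ pinv.T) @ design.T  # fitted trend per voxel
        # Remove the full fit, then add each voxel's mean back
        out[e] = echo - fit + echo.mean(axis=-1, keepdims=True)
    return out
```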