nipreps / fmripost-rapidtide

A BIDS App for running rapidtide on a preprocessed fMRI dataset.
https://fmripost-rapidtide.readthedocs.io
Apache License 2.0

Combining rapidtide regressors with other confounds #16

Open tsalo opened 1 week ago

tsalo commented 1 week ago

This stems from https://github.com/nipreps/fmripost-rapidtide/issues/7#issuecomment-2315494726 and https://github.com/nipreps/fmripost-rapidtide/issues/10#issuecomment-2333993170. Basically, I want to make sure that my plan for chaining fMRIPost-Rapidtide with other fMRIPost workflows (including XCP-D and giga_connectome) makes sense.

My idea was that users could take the voxel-wise lagged regressor and combine it with other sets of confounds in an omnibus denoising step. However, I'm seeing in retroglm that a number of derivatives from the rapidtide run are being used for the GLM, and that has me a little worried.

bbfrederick commented 1 week ago

Retroglm uses the derived sLFO regressor, some masks, and the delay map (plus the original input data). It then generates voxelwise delayed regressors from that information and regresses them out. I could split that into two parts - the voxelwise regressor generation step and the actual filtering step - if that would be helpful. The main reason retroglm exists is that I realized that 90% of rapidtide's runtime goes to extracting the regressor and estimating the voxelwise delay, yet that step generates only a tiny fraction of the output data (23 MB vs. 5 GB for a single HCP resting-state run). It's only once you start saving the delayed regressors and the products of filtering that the output size explodes. Pausing the analysis at that point lets you save the majority of the effort at only a tiny data cost. And generating the voxel-specific regressors is very fast - doing it on the fly would certainly be doable.
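To make the "actual filtering part" concrete, here is a minimal NumPy sketch of what the voxelwise GLM removal step amounts to. The function name and signature are hypothetical, not retroglm's API; it assumes the voxel-specific delayed regressors have already been generated.

```python
import numpy as np

def filter_slfo(bold, lagged_regressors):
    """Remove a voxel-specific, delay-shifted sLFO regressor from BOLD data.

    bold              : (n_voxels, n_timepoints) array
    lagged_regressors : (n_voxels, n_timepoints) array, one shifted copy of
                        the sLFO regressor per voxel

    Returns the filtered data with each voxel's mean preserved.
    (Sketch only - retroglm's real GLM has more moving parts.)
    """
    n_voxels, n_tp = bold.shape
    filtered = np.empty_like(bold, dtype=float)
    for v in range(n_voxels):
        # Design matrix: intercept plus this voxel's shifted regressor
        X = np.column_stack([np.ones(n_tp), lagged_regressors[v]])
        beta, *_ = np.linalg.lstsq(X, bold[v], rcond=None)
        # Subtract only the sLFO fit, keeping the intercept (mean) in place
        filtered[v] = bold[v] - lagged_regressors[v] * beta[1]
    return filtered
```

This is also where an omnibus denoising step would diverge: instead of regressing the sLFO out alone, the shifted regressor columns would be stacked into one design matrix with the other confounds.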

tsalo commented 1 week ago

That makes sense. I don't think the regressor generation step requires a command-line interface. A function in the rapidtide package that accepts the lag map and the regressor file would be amazing though. That would be easier to incorporate into the Nipype workflow than something that accepts the rapidtide output directory.
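For reference, the core of such a function is just per-voxel resampling of the shared regressor. The sketch below uses simple linear interpolation and a hypothetical name/signature - rapidtide's actual interpolation is more sophisticated, so treat this only as an illustration of the lag map + regressor interface being requested.

```python
import numpy as np

def voxelwise_lagged_regressors(regressor, lag_map, tr):
    """Shift a shared sLFO regressor by each voxel's delay.

    regressor : (n_timepoints,) sLFO time course sampled at TR
    lag_map   : (n_voxels,) per-voxel delays in seconds
    tr        : repetition time in seconds

    Returns an (n_voxels, n_timepoints) array of shifted regressors.
    (Hypothetical function - not part of the rapidtide package.)
    """
    n_tp = regressor.shape[0]
    t = np.arange(n_tp) * tr
    out = np.empty((np.ravel(lag_map).size, n_tp))
    for v, lag in enumerate(np.ravel(lag_map)):
        # A voxel whose signal arrives `lag` seconds late sees the
        # regressor value from `lag` seconds earlier; edges are clamped.
        out[v] = np.interp(t - lag, t, regressor)
    return out
```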

Also, would it make sense to average the lag map across runs from the same subject/session before generating the voxel-wise regressor? I figured that might reduce run-wise noise, since the lag should be fairly stable over runs, right?

tsalo commented 1 week ago

I was just watching your coffee chat with Ben Inglis, and having the first derivative of the voxel-wise regressor come straight out of this function would be wonderful. It would be trivial to compute separately, but it would still be nice to have it directly from the function.

bbfrederick commented 1 week ago

Rapidtide and retroglm already have this!

--glmderivs NDERIVS - When doing final GLM, include derivatives up to NDERIVS order. Default is 0

When you invoke the option, the voxelwise derivatives are saved.

XXX_desc-lfofilterEV_bold (nii.gz, json) - Shifted sLFO regressor to filter
XXX_desc-lfofilterEVDerivN_bold (nii.gz, json) - Nth time derivative of shifted sLFO regressor
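For pipelines that only have the shifted regressors in hand, the "trivial to compute separately" derivative mentioned above can be done with a finite difference. A small sketch (this mirrors what `--glmderivs` saves, but is not rapidtide's implementation):

```python
import numpy as np

def regressor_with_derivatives(lagged_regressors, tr, nderivs=1):
    """Append temporal derivatives to shifted sLFO regressors.

    lagged_regressors : (n_voxels, n_timepoints) array
    tr                : repetition time in seconds
    Returns [regressors, 1st derivative, ..., nderivs-th derivative].
    """
    out = [np.asarray(lagged_regressors, dtype=float)]
    for _ in range(nderivs):
        # Central differences in the interior, one-sided at the edges
        out.append(np.gradient(out[-1], tr, axis=-1))
    return out
```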

bbfrederick commented 6 days ago

> Also, would it make sense to average the lag map across runs from the same subject/session before generating the voxel-wise regressor? I figured that might reduce run-wise noise, since the lag should be fairly stable over runs, right?

You'd like to think so, but the fits are often kind of noisy. We have a paper in revision about this - you can dramatically improve the reliability of the delay maps using a PCA decomposition, but that's not currently part of rapidtide.

tsalo commented 6 days ago

Just to make sure I understand, you'd run a PCA on the delay maps after concatenating them across runs? If we take HCP-YA as an example, you get four resting-state runs and... I dunno, four or so task runs. Would you run rapidtide on each of the ~8 runs separately, then concatenate the delay maps, run PCA on that, keep the first component's map, and then feed that delay map into RetroGLM for each run?

EDIT: Because I can totally implement that in fMRIPost-rapidtide.
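Since the paper's method isn't public yet, the sketch below is only a generic low-rank take on the idea described in this thread - stack the per-run delay maps, run a PCA, and keep the leading spatial component(s) - not rapidtide's actual approach. The function name and the choice of returning denoised per-run maps (rather than a single consensus map) are assumptions.

```python
import numpy as np

def pca_denoise_delay_maps(delay_maps, n_components=1):
    """Denoise per-run delay maps with a low-rank PCA reconstruction.

    delay_maps : (n_runs, n_voxels) array, one rapidtide delay map per run
    Returns an array of the same shape with only the top n_components
    spatial patterns retained (plus the across-run mean map).
    (Illustrative sketch - not the method from the paper in revision.)
    """
    X = np.asarray(delay_maps, dtype=float)
    mean_map = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean_map, full_matrices=False)
    # Rank-n reconstruction keeps the delay patterns shared across runs
    # and discards run-specific fit noise in the discarded components.
    recon = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components]
    return recon + mean_map
```

Each denoised map (or their average) could then be fed into retroglm per run, matching the workflow sketched in the comment above.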