This is a holdover from the 2017 overhaul but needs to be completed. Here are some design objectives:
What confounds do we want?
Motion parameters (we have these, but might want to do a derivative/power expansion; see the expansion sketch after this list)
FD (framewise displacement; in addition to or instead of the motion parameters? Check what is recommended)
DVARS?
aCompCor: terminology for time series formed by projecting the data onto the top PCs within anatomically defined masks of (deep) white matter and/or CSF (see the CompCor sketch after this list)
tCompCor: terminology for the same idea applied to high-variance subcortical voxels?
nCompCor: (my) terminology for the same idea as aCompCor, but using the voxels identified as "locally noisy". These can be gray matter voxels, but are likely corrupted by large vessels.
global signal (whole brain? just gray matter? have both?)
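For the motion parameters and FD, something along these lines is what I have in mind: a minimal numpy sketch assuming a (T, 6) array with translations in mm and rotations in radians, and the 50 mm head-radius convention for converting rotations to displacement. The function name, column order, and defaults are placeholders, not anything that exists in the code yet.

```python
import numpy as np


def expand_motion(motion, radius=50.0):
    """Derivative/power expansion plus FD from a (T, 6) motion array
    (3 translations in mm, then 3 rotations in radians). The column
    order, function name, and 50 mm radius are assumptions."""
    # Backward differences, zero-padded so the matrix keeps T rows
    deriv = np.vstack([np.zeros((1, motion.shape[1])), np.diff(motion, axis=0)])

    # 24-parameter expansion: params, derivatives, and the squares of both
    expansion = np.hstack([motion, deriv, motion ** 2, deriv ** 2])

    # Framewise displacement: sum of absolute frame-to-frame displacements,
    # converting rotations to arc length at the given head radius
    disp = np.abs(deriv)
    disp[:, 3:] *= radius
    fd = disp.sum(axis=1)
    return expansion, fd
```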
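And for the CompCor-family confounds, a minimal sketch of the PCA step, assuming an in-memory 4D array and a boolean mask; the name and defaults are placeholders, and the same function could serve aCompCor, tCompCor, or nCompCor just by swapping in a different mask.

```python
from sklearn.decomposition import PCA


def compcor(data, mask, n_components=6):
    """PCA over voxels in a mask, CompCor style. `data` is a 4D (x, y, z, t)
    array, `mask` a boolean 3D array (e.g. eroded WM/CSF, or the "locally
    noisy" voxels); names and defaults are placeholders."""
    # time x voxel matrix restricted to the mask
    ts = data[mask].T

    # Center each voxel; detrending / high-pass filtering (see the question
    # below about when to extract) would also happen before this step
    ts = ts - ts.mean(axis=0)

    # Component time series to use as confound regressors
    return PCA(n_components=n_components).fit_transform(ts)
```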
When do we want to extract them?
It would be convenient to extract them during preprocessing and then have a text file sitting around that can be used elsewhere. But we now do high-pass filtering during the model workflow, and ideally the components derived from the time series data should reflect that filtering.
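One way to keep the confounds consistent with the model-side filtering would be to project the same high-pass basis out of both the data and the regressors before any PCA. A minimal sketch, assuming an SPM-style discrete cosine basis and a cutoff in seconds (both assumptions, not what the workflow currently does):

```python
import numpy as np


def dct_basis(n_tp, tr, cutoff=128):
    """Discrete cosine high-pass basis (SPM convention): one regressor per
    frequency below 1/cutoff Hz. The 128 s cutoff is just an example."""
    order = int(np.floor(2 * n_tp * tr / cutoff))
    t = np.arange(n_tp)
    return np.column_stack(
        [np.cos(np.pi * k * (2 * t + 1) / (2 * n_tp)) for k in range(1, order + 1)]
    )


def highpass(x, basis):
    """Project out the low-frequency basis; applying the same call to both
    the time series data and the confound regressors keeps them matched."""
    beta, *_ = np.linalg.lstsq(basis, x, rcond=None)
    return x - basis @ beta
```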
What kind of visual QC should we do?
This should be fairly straightforward: just a plot of (standardized?) values over time with a common legend for the different components. Should we show everything or just what is used in cleanup? (The latter will end up in the design matrix plot, too.) Should we also show spatial maps of the component weights?
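Roughly what I'd picture for the temporal plot, as a matplotlib sketch (the array shape, names, and figure size are all placeholders):

```python
import matplotlib.pyplot as plt


def plot_confounds(confounds, names):
    """Z-score each confound so they share a scale, then plot over time
    with a common legend. `confounds` is a (T, k) array and `names` a
    list of k labels; both are assumptions about the eventual interface."""
    z = (confounds - confounds.mean(axis=0)) / confounds.std(axis=0)
    fig, ax = plt.subplots(figsize=(8, 3))
    for series, name in zip(z.T, names):
        ax.plot(series, label=name, lw=1)
    ax.set(xlabel="Volume", ylabel="Standardized value")
    ax.legend(ncol=4, fontsize="small")
    return fig
```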
How to parameterize what gets used?
This should be model-level information, and we need to specify both a) what to include and b) confound-specific parameters (e.g., derivatives and/or transformations of the motion parameters, number of components for the PCA-based methods).
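As a strawman for what the model-level specification could look like (every key and value here is hypothetical, not an existing parameter in the codebase):

```python
# Hypothetical model-level confound specification; names are placeholders
confound_spec = {
    "motion": {"include": True, "derivatives": True, "powers": 2},
    "fd": {"include": True},
    "acompcor": {"include": True, "n_components": 6, "tissues": ["wm", "csf"]},
    "tcompcor": {"include": False, "n_components": 6},
    "global_signal": {"include": False, "mask": "gray_matter"},
}
```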
How do we make these useful outside the context of the nipype workflows?
I.e., it would be useful to have access to these for ROI analyses. Extraction should happen in the big model fitting node, but via a function that can take images and operate on them directly (see the sketch below).
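Concretely, the entry point could accept either filenames or in-memory nibabel images, so the nipype node and an ad hoc ROI analysis call the same code. A minimal sketch (the name, signature, and reuse of the compcor() sketch above are assumptions):

```python
import nibabel as nib


def extract_confounds(img, mask_img, n_components=6):
    """Workflow-independent entry point: accept filenames or in-memory
    nibabel images and return confound time series. Name and signature
    are placeholders for whatever interface we settle on."""
    if isinstance(img, str):
        img = nib.load(img)
    if isinstance(mask_img, str):
        mask_img = nib.load(mask_img)
    data = img.get_fdata()
    mask = mask_img.get_fdata().astype(bool)
    # Reuse the compcor() sketch above on the in-memory arrays
    return compcor(data, mask, n_components=n_components)
```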