ME-ICA / tedana-reliability-analysis

An analysis of the reliability of the tedana denoising pipeline on an example dataset
GNU General Public License v3.0

Goals and Strategy #1

Open jbteves opened 5 years ago

jbteves commented 5 years ago

I'm opening this issue so that we can discuss the goals and strategy for this analysis. I would like to propose the following:

Goals:

Strategy:

Apologies to @tsalo if this was discussed elsewhere or is in the code; I took a look and I see that you have some setup for fmriprep and running tedana already, but I would like to investigate the afni_proc preproc as well.

tsalo commented 5 years ago

I wasn't planning to look into the preprocessing pipeline or data quality/acquisition parameters. Granted, that's mostly because I was limiting the scope of this analysis, since I didn't have much time to dedicate to it. If you and @dowdlelt are planning to spend some time on this, then I think those can definitely fall within the scope of the project.

In terms of metrics, some additions to your own that I think would be useful are:

jbteves commented 5 years ago

Can't speak for @dowdlelt but I think it's worth doing the work to take that into account, and I suspect I have the time. These additional metrics sound good to me.

dowdlelt commented 5 years ago

I like those metrics, though there is an obvious difficulty with resting-state data, for which no task model exists. Perhaps for resting-state scans we could use a seed region corresponding to a typical resting-state node as the analysis method. That should give reasonable maps of a resting-state network and let us generate similar voxel significance maps. Maybe a couple of different seed regions...

tsalo commented 5 years ago

Yeah, a seed-to-voxel analysis with a common seed is a good option for doing this with resting-state data. What about one seed for each of the major canonical networks? E.g., default mode (PCC), executive control (dmPFC), and salience (anterior insula).
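The seed-to-voxel idea above can be sketched roughly as follows: average the time series within a seed mask, then correlate that average with every voxel's time series. This is only a minimal numpy illustration with made-up array shapes and an arbitrary toy "seed" location; an actual analysis would use real seed coordinates (e.g., PCC) and a proper masking/registration pipeline.

```python
import numpy as np

def seed_to_voxel_corr(data, seed_mask):
    """Correlate the mean seed time series with every voxel.

    data: 4D array (x, y, z, time); seed_mask: 3D boolean array.
    Returns a 3D map of Pearson correlations.
    """
    n_t = data.shape[-1]
    seed_ts = data[seed_mask].mean(axis=0)           # mean time series, shape (time,)
    seed_ts = seed_ts - seed_ts.mean()               # demean for Pearson r
    voxels = data.reshape(-1, n_t)
    voxels = voxels - voxels.mean(axis=1, keepdims=True)
    num = voxels @ seed_ts
    denom = np.linalg.norm(voxels, axis=1) * np.linalg.norm(seed_ts)
    with np.errstate(invalid="ignore", divide="ignore"):
        r = np.where(denom > 0, num / denom, 0.0)    # zero out flat voxels
    return r.reshape(data.shape[:-1])

# Toy example: random data and a small cubic "seed" (location is arbitrary)
rng = np.random.default_rng(0)
data = rng.standard_normal((10, 10, 10, 50))
seed = np.zeros((10, 10, 10), dtype=bool)
seed[4:6, 4:6, 4:6] = True
rmap = seed_to_voxel_corr(data, seed)
```

In practice a library like nilearn would handle the masking and spatial details; the point here is just the shape of the computation being proposed.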

tsalo commented 5 years ago

One potential issue with group comparisons across seeds is that using the same seed for different data (e.g., different subjects) doesn't mean anything. If we compare group-level maps from one seed to the next, we have a combination of within-subject variability due to seed, between-subject variability due to the data, and between-subject variability due to seed. That's probably not the most accurate way of describing it, but I think the logic is sound.

I figure that we have three possible solutions:

  1. Only look at individual subjects. This will be very difficult to interpret.
  2. Ignore the issue and build our group-level maps by seed.
  3. Build our group-level maps by randomly selecting across seeds.
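Option 3 above could be sketched as randomly assigning one of the candidate seeds to each subject before building the group-level map, so that seed and group are not confounded. This is just an illustrative sketch; the seed labels and subject IDs are hypothetical.

```python
import random

# Hypothetical seed labels for the three canonical networks discussed above
SEEDS = ["PCC", "dmPFC", "anterior_insula"]

def assign_seeds(subject_ids, rng=None):
    """Randomly pick one seed per subject (option 3), so any one
    group-level map mixes seeds rather than tying each map to a seed."""
    rng = rng or random.Random(42)  # fixed RNG seed for reproducibility
    return {sub: rng.choice(SEEDS) for sub in subject_ids}

assignment = assign_seeds([f"sub-{i:02d}" for i in range(1, 7)])
```

A fixed random seed keeps the assignment reproducible across reruns of the pipeline.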
tsalo commented 5 years ago

I would like to flesh out the analysis plan in a Google Doc, but before I start on that I want to ensure that we've figured out the cross-seed comparison issue. Would everyone please make sure to take a look at #3 and weigh in there?

Addendum: We also need to figure out the issue above (group comparisons when seed doesn't matter).

dowdlelt commented 5 years ago

Regarding the idea of predicting convergence issues, two additional measures came to mind during the phone call:

- TSNR of the data
- Spatial smoothness (whether FWHM, or something like AFNI ACF values)
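Of those two, TSNR is the simplest to compute: the voxelwise temporal mean divided by the voxelwise temporal standard deviation. A minimal numpy sketch, with toy data standing in for a real 4D EPI volume:

```python
import numpy as np

def tsnr(data):
    """Temporal SNR: voxelwise mean over time / std over time.

    data: 4D array (x, y, z, time). Voxels with zero variance map to 0.
    """
    mean = data.mean(axis=-1)
    std = data.std(axis=-1)
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(std > 0, mean / std, 0.0)

# Toy data: baseline ~100 with unit-variance noise, so TSNR ~100
rng = np.random.default_rng(1)
data = 100 + rng.standard_normal((8, 8, 8, 60))
tsnr_map = tsnr(data)
```

Smoothness estimation is less trivial; AFNI's `3dFWHMx` (for the ACF values mentioned above) would be the natural tool rather than hand-rolled code.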

In addition, thinking out loud on others: sampling rate, voxel size (which would certainly relate to smoothness), type of head coil (number of channels, perhaps)?