NotaCS / Functionnectome

Project your functional brain signal onto the white matter, and explore the pathways supporting brain functions.

[questions] Inputs for Functionnectome #3

Closed smeisler closed 2 years ago

smeisler commented 2 years ago

Hello,

Really interested in this bimodal tool! But I have some questions about the inputs. I would like to use data that have already been minimally processed by fMRIPrep (fmri), QSIPrep (dwi), and QSIRecon (tractography), which should be comparable to HCP pipelines.

0) Should one feel free to experiment with the fMRI regressors? For example, would it be alright to add in the squared and derivative expansions of the motion realignment parameters?

1) Do the fMRI and DWI have to be in the same space? The QSIPrep output has been rotated for ACPC alignment, so although it is still in native space, it is not aligned to the functional scan.

2) Is the isotropic 2mm resolution required? And does that apply only to the functional image (not the DWI)?

3) Instead of using the HCP "pretrained" anatomical priors, is it possible / well-advised to use subject-specific priors derived from probabilistic tractography, such that each subject has a unique probability map?

Thanks, Steven

NotaCS commented 2 years ago

Hi Steven,

I'm glad you are interested in the Functionnectome.

First, just so that we are on the same page: the Functionnectome takes 4D brain images as input (usually fMRI, but it could also be brain images with another metric) and projects the grey-matter metric (usually the BOLD signal for fMRI) onto the white matter. The input and the priors must be in the same space, with the same voxel size (which, with our priors, means MNI space with 2mm isotropic voxels).
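As a practical aside for anyone checking their own data against this requirement: the voxel size can be read off an image's affine (with nibabel, `img.affine`). A minimal numpy sketch, with function names of my own making:

```python
import numpy as np

def voxel_sizes(affine):
    """Voxel dimensions are the column norms of the affine's 3x3 block."""
    return np.sqrt((affine[:3, :3] ** 2).sum(axis=0))

def matches_priors(affine, expected_mm=2.0, tol=1e-3):
    """Check that an image is at the priors' isotropic resolution (2mm here)."""
    return bool(np.allclose(voxel_sizes(affine), expected_mm, atol=tol))

# A typical MNI 2mm affine (translation values are illustrative)
mni_2mm = np.array([[-2.0, 0.0, 0.0, 90.0],
                    [0.0, 2.0, 0.0, -126.0],
                    [0.0, 0.0, 2.0, -72.0],
                    [0.0, 0.0, 0.0, 1.0]])
print(matches_priors(mni_2mm))  # → True
```

This only checks the resolution, of course; proper registration to the template still has to be done (and verified visually) with your registration tool of choice.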

As for your questions:

  1. I'm not entirely sure how you mean to explore these regressors with the Functionnectome, as they are 1D regressors, and the Functionnectome takes 4D inputs. Could you elaborate a bit, please?
  2. The Functionnectome does not take DWI images as input, nor does the script creating the priors (which uses tractograms). So I am not sure how you wish to use the DWI in this context.
  3. The 2mm isotropic voxel size is required if you use our priors. To use a finer resolution, you would need new priors, which is possible (and I might actually do it in the future) but would greatly increase the computation time (although that's less of a problem now, with the latest update to the software).
  4. It is perfectly possible to use subject-specific priors, and it's an idea I have had in mind for some time, but I haven't actually tried it yet, so I cannot advise for or against it. All I can say is that if you have a lot of subjects, making individual priors for each of them would probably be time-consuming. Also, priors built from an individual tractogram (from a probabilistic tractography approach) would be conceptually different from our group-level priors, so the results would have to be carefully interpreted.

I hope this clarifies some of your questions. I'll keep this issue open as long as there are points you need answers to, so don't hesitate to ask.

Cheers, Victor

smeisler commented 2 years ago

Thanks for the response!

0) This would be for denoising the 4D time series (so the regression would be against the 4th / time dimension). In the paper you say you regress out the motion realignment parameters and the WM/CSF signal, so I was curious whether this was specifically chosen or if one could play around with other denoising configurations.
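For concreteness, the kind of confound regression discussed here (motion parameters plus their derivative and squared expansions) reduces to ordinary least squares along the time axis. A toy numpy sketch with made-up names and synthetic data, not code from either pipeline:

```python
import numpy as np

def regress_out(ts, confounds):
    """Remove confound regressors from each voxel's time series via OLS.
    ts: (n_timepoints, n_voxels); confounds: (n_timepoints, n_regressors)."""
    X = np.column_stack([np.ones(len(confounds)), confounds])  # add intercept
    beta, *_ = np.linalg.lstsq(X, ts, rcond=None)
    return ts - X @ beta  # residuals = denoised time series

# Toy data: one motion parameter plus its derivative and square
rng = np.random.default_rng(0)
motion = rng.standard_normal(100)
confounds = np.column_stack([motion, np.gradient(motion), motion ** 2])
ts = rng.standard_normal((100, 5)) + 0.5 * motion[:, None]  # 5 "voxels"
clean = regress_out(ts, confounds)  # same shape, motion regressed out
```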

1) Understood. The DWI would have been used to generate priors (related to point 3).

2) Got it!

3) Yes that definitely poses some interesting conceptual challenges, but I think it's a worthwhile endeavor given inter-individual variability in white matter pathways.

NotaCS commented 2 years ago

You're welcome!

  1. OK, I understand now; I thought you somehow wanted to input the regressors into the Functionnectome, and that got me confused. Here, you can actually do any kind of preprocessing / denoising you see fit. In essence, the Functionnectome is almost like a preprocessing step itself; it just requires the input to be properly registered to the brain template used in the priors, but that's all. So you can play with the preprocessing before it without a problem.
  2. OK, the DWI doesn't need to be in MNI space, nor does it need to have 2mm voxels. However, the tractogram you create from it will need to be in the correct space. I personally do tractography in the individual space, use ANTs to register the subject's T1 or FA (in diffusion space) to the group template (MNI for me), then use the scilpy library scripts to apply the registration to the tractograms (scil_apply_transform_to_tractogram.py; but be careful if you use it, as the transformation files to apply are not the same as for ants_apply_transform).
  3. Cool!
  4. Yep, I think it's a very interesting area of expertise that I would like to investigate at one point (but I've got a bit too many things on my plate right now, so if you plan to do it, let me know of the results, I'd be very interested).
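As a side note on what applying a registration to a tractogram amounts to: each streamline is just an array of point coordinates, and a linear transform is applied point-wise. A toy numpy sketch (function name mine; in practice use scilpy or DIPY, which also handle reference grids and non-linear warps):

```python
import numpy as np

def transform_streamlines(streamlines, affine):
    """Apply a 4x4 affine to streamlines given as (n_points, 3) arrays
    of world (mm) coordinates."""
    rot, trans = affine[:3, :3], affine[:3, 3]
    return [pts @ rot.T + trans for pts in streamlines]

# Toy transform: uniform 2x scaling plus a 1mm shift along x
affine = np.eye(4)
affine[:3, :3] *= 2.0
affine[:3, 3] = [1.0, 0.0, 0.0]
streamline = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
out = transform_streamlines([streamline], affine)
# out[0][1] is now (3.0, 2.0, 2.0)
```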

I hope you got all the answers you needed, but of course, I remain available if you need more details or have other questions.

Cheers

NotaCS commented 2 years ago

Hi Steven,

I just realised that I haven't updated the manual, but with the newest version of the Functionnectome (version >= 1.1.0), you have to be careful with the priors you enter: the new algorithm uses probability maps from the WM voxels instead of the probability maps from the GM voxels (in a sense, instead of "projecting" the GM signal onto the WM, the algorithm "grabs" the GM signal from within the WM; it's a shift in perspective but doesn't change anything in practice). In fact, for now, it uses both the GM and WM voxel maps.
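To illustrate the "grab" perspective only: each WM voxel's time series can be seen as a probability-weighted average of the GM signal, with weights taken from that voxel's prior map. This is a conceptual sketch with names and normalisation of my own choosing, not the actual Functionnectome implementation:

```python
import numpy as np

def grab_gm_signal(gm_ts, prob_map):
    """Probability-weighted average of the GM signal for one WM voxel.
    gm_ts: (n_timepoints, n_gm_voxels); prob_map: (n_gm_voxels,) prior
    connection probabilities of that WM voxel to each GM voxel."""
    w = prob_map / prob_map.sum()  # normalise the weights
    return gm_ts @ w  # one time series for the WM voxel

# Toy example: 2 timepoints, 2 GM voxels, weights 0.25 / 0.75
gm_ts = np.array([[1.0, 3.0],
                  [2.0, 6.0]])
wm_series = grab_gm_signal(gm_ts, np.array([1.0, 3.0]))  # → [2.5, 5.0]
```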

In our priors, we already had both the GM and WM voxel probability maps, so I didn't have to change anything, but if you are making custom priors, you have to generate probability maps for the voxels in the WM in order for the algorithm to work. I don't know if you are still working on it or if you ran into trouble with it, but don't hesitate to come back to me if there is any problem.

Best wishes, Victor

smeisler commented 2 years ago

Sorry to reopen an old thread, but I was curious while reading your manuscript why you decided to only apply extra fMRI preprocessing steps to the resting state fMRI. More specifically:

Additionally, the resting-state acquisitions were further preprocessed with despiking, detrending of motion and CSF, white matter and grey-matter signal, temporal filtering (0.01–0.1 Hz), and spatial smoothing (5 mm FWHM).

Also I am a bit confused as to whether images should be spatially smoothed. The above suggests yes, but then later in the manuscript:

Specifically, no spatial smoothing is required for the functionnectomes 4D volumes. Usual smoothing aims at improving the signal/noise ratio (SNR) using a weighted average of the local signal, assuming that neighbouring voxels share some signal of interest. The functionnectome method combines the signal from distant yet structurally linked voxels, which has an analogous effect of improving the SNR, but is guided by actual brain circuits.

Thanks again, Steven

NotaCS commented 2 years ago

Hey, no problem. As I am not a specialist in task-activation fMRI (I'm more of a resting-state guy), I decided to stick with the preprocessing used for such data in HCP papers (see https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4011498/ "fMRI Data Processing"). As I understand it, FSL's FILM, used for the analysis, applies whitening and temporal filtering of the time series, which completes the preprocessing. On the other hand, the additional steps we used for the resting-state data are just the usual preprocessing steps for this kind of data, needed to properly extract RSNs.

Concerning the smoothing, note that we did not use the resting-state fMRI with the Functionnectome; we only applied it to the task fMRI. In that case, we used smoothing when computing the traditional task activation maps, and not when feeding the data to the Functionnectome (which acts like a spatial filter itself). Now, should you smooth your data before using the Functionnectome? Well, I think that depends on the data. If it's noisy or you are not sure about your grey-matter mask, maybe you should, yes, to ensure that the relevant signal is at least partially in the right place. In the article, we preferred to skip the smoothing, as we thought it was not necessary (given the filtering effect of the Functionnectome) and because it would likely make the functionnectome activation maps "fatter" and less specific.

I hope that answered your question :) If not, don't hesitate to come back to me, I'll do my best to clarify.

Cheers, Victor

smeisler commented 2 years ago

That makes perfect sense; I now understand that the traditional smoothing was for the analogous task activation maps used as a reference method. I also really like the notion you describe of smoothing based on white-matter connectivity; it reminds me of the kind of white-matter local-neighbourhood smoothing used in fixel-based analyses.