neurostuff / NiMARE

Coordinate- and image-based meta-analysis in Python
https://nimare.readthedocs.io
MIT License

Implement SDM meta-analysis techniques #183

Open JohannesWiesner opened 5 years ago

JohannesWiesner commented 5 years ago

I would like to use nimare to perform a meta-analysis. Is there a plan to implement the SDM algorithms (such as SDM-PSI, the latest version) as alternatives to ALE and MKDA?

tsalo commented 5 years ago

We don't have immediate plans to implement all of SDM, as it's closed-source and none of the core developers of NiMARE are very familiar with the details of any of the algorithms. However, I did speak with @HBossier fairly recently, and he has an R-based implementation of at least one of the SDM algorithms that we were interested in translating over to Python and incorporating into NiMARE. I haven't had a chance to work on this yet, but maybe it's something we can work on adding.

HBossier commented 4 years ago

Hi @JohannesWiesner,

I haven't found time to start working on this yet, as I had to shift things around on my to-do priority list...

These functions might also be useful if you want to work within R.

tsalo commented 4 years ago

I'm going to use this comment to collect information I glean from SDM manuscripts.

As best I can understand them, the steps of the SDM-PSI algorithm are:

  1. Use anisotropic Gaussian kernels, plus effect size estimates and metadata, to produce lower-bound and upper-bound effect size maps from the coordinates.
    • We need generic inter-voxel correlation maps for this.
    • We also need a fast implementation of Dijkstra's algorithm to estimate the shortest path (i.e., "virtual distance") between two voxels, based on the map of correlations between each voxel and its neighbors. I think dijkstra3d might be useful here (see the first sketch after this list).
  2. Use maximum likelihood estimation to estimate the most likely effect size and variance maps across studies (i.e., a meta-analytic map).
  3. Use the MLE maps and each study's upper and lower-bound effect size maps to impute study-wise effect size and variance images that meet specific requirements.
  4. For each imputed pair of effect size and variance images, simulate subject-level images.
    • The mean across subject-level images, for each voxel, must equal the value from the study-level effect size map.
    • Values for each voxel, across subjects, must correlate at 1 with the values for the same voxel in all other imputations.
    • Values of adjacent voxels must show "realistic" correlations as well. SDM uses tissue-type masks for this.
    • SDM simplifies the simulation process by creating a single "preliminary" set of subject-level maps for each dataset (across imputations), and scaling it across imputations.
  5. Permutations. "The permutation algorithms are general."
  6. Combine subject-level images into study-level Hedges-corrected effect size images.
  7. Perform a meta-analysis across the study-level effect size maps using a random-effects model, separately for each imputation.
    • One of our IBMA interfaces, either DerSimonianLaird or VarianceBasedLikelihood, should be able to do this (see the second sketch after this list).
  8. Compute imputation-wise heterogeneity statistics.
  9. Use "Rubin's rules" to combine heterogeneity statistics, coefficients, and variance for each imputed dataset.
  10. Perform a Monte Carlo-like maximum-statistic procedure to build null distributions for voxel-level FWE (vFWE) or cluster-level FWE (cFWE) correction, or use TFCE (see the last sketch after this list).
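
To make step 1's "virtual distance" concrete, here is a minimal sketch using dijkstra3d. The correlation field, its conversion to a cost map (`1 - r`), and the source/target voxels are all assumptions for illustration. Note that dijkstra3d treats each voxel's field value as the cost of entering that voxel, so this only approximates true per-edge weights derived from neighbor correlations.

```python
import numpy as np
import dijkstra3d

# Hypothetical per-voxel correlation summary in [0, 1]; in practice this
# would come from the generic inter-voxel correlation maps mentioned in step 1.
rng = np.random.default_rng(0)
corr = rng.uniform(0.2, 0.9, size=(20, 20, 20)).astype(np.float32)

# Higher correlation should mean shorter virtual distance, so use 1 - r as cost.
cost = 1.0 - corr

source = (2, 2, 2)  # assumed voxel indices, for illustration only
target = (17, 15, 10)

# Lowest-cost (shortest "virtual distance") path between the two voxels.
path = dijkstra3d.dijkstra(cost, source, target, connectivity=26)

# Accumulate the cost along the path (excluding the source voxel).
virtual_distance = cost[tuple(path[1:].T)].sum()
print(f"{len(path)} voxels on path, virtual distance = {virtual_distance:.3f}")
```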
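
For step 7, a minimal sketch of running NiMARE's DerSimonianLaird estimator once per imputation. Here `imputed_datasets` is a hypothetical list of NiMARE Dataset objects, one per imputation, each carrying the study-level "beta" (effect size) and "varcope" (variance) images from steps 3-6; the output map names may differ across NiMARE versions, so check `results.maps`.

```python
from nimare.meta.ibma import DerSimonianLaird

est_maps = []
var_maps = []
for dset in imputed_datasets:  # hypothetical: one Dataset per imputation
    meta = DerSimonianLaird()
    results = meta.fit(dset)
    # Collect the pooled estimate and its squared standard error so that
    # Rubin's rules (step 9) can combine them across imputations.
    est_maps.append(results.get_map("est", return_type="array"))
    var_maps.append(results.get_map("se", return_type="array") ** 2)
```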
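
For step 9, a sketch of standard Rubin's rules pooling, applied voxelwise to the per-imputation estimate and variance maps collected above. Exact inference would use a t reference distribution with Barnard-Rubin degrees of freedom; the z map below is a large-sample approximation.

```python
import numpy as np

def rubins_rules(estimates, variances):
    """Pool (m, n_voxels) arrays of per-imputation estimates and variances."""
    m = estimates.shape[0]
    q_bar = estimates.mean(axis=0)     # pooled point estimate
    w_bar = variances.mean(axis=0)     # mean within-imputation variance
    b = estimates.var(axis=0, ddof=1)  # between-imputation variance
    t = w_bar + (1 + 1 / m) * b        # total variance
    return q_bar, t

pooled_est, pooled_var = rubins_rules(np.stack(est_maps), np.stack(var_maps))
# Large-sample approximation; exact inference uses a t reference distribution.
pooled_z = pooled_est / np.sqrt(pooled_var)
```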
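
For step 10, a sketch of a generic maximum-statistic FWE procedure. The sign-flipping scheme and the one-sample t statistic are stand-ins, not SDM's actual permutation algorithm (which the manuscripts only describe as "general"); `study_effects` is a hypothetical (n_studies, n_voxels) array.

```python
import numpy as np

# Hypothetical (n_studies, n_voxels) array of study-level effect size maps.
rng = np.random.default_rng(42)
study_effects = rng.normal(size=(20, 5000))

n_studies = study_effects.shape[0]
n_perm = 1000
max_stats = np.empty(n_perm)
for i in range(n_perm):
    # Randomly flip the sign of each study's map (valid under a symmetric null).
    signs = rng.choice([-1.0, 1.0], size=(n_studies, 1))
    flipped = signs * study_effects
    # One-sample t statistic as a stand-in for the pooled meta-analytic map.
    t_map = flipped.mean(axis=0) / (flipped.std(axis=0, ddof=1) / np.sqrt(n_studies))
    max_stats[i] = np.abs(t_map).max()

# Observed voxels with |t| above the 95th percentile of the max-stat null
# survive voxel-level FWE correction at alpha = 0.05.
vfwe_thresh = np.percentile(max_stats, 95)
```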

Misc. notes: