neurostuff / masking-bias-in-ibma

An analysis to evaluate bias of IBMA estimators under different masking methods in NiMARE.
Apache License 2.0

Evaluate bias of IBMA estimators under different masking methods #1

Open tsalo opened 3 years ago

tsalo commented 3 years ago

Summary

In neurostuff/NiMARE#466, @nicholst and @tyarkoni note that maskers that aggregate values across voxels before fitting the meta-analytic model are likely to produce biased results, with the degree of bias depending on the model. We should systematically evaluate the different estimators across a range of datasets.

Additional details

@tyarkoni has performed some simulations and did not find substantial bias across approaches for the non-combination, non-likelihood estimators (e.g., Hedges, WeightedLeastSquares, DerSimonianLaird, and probably PermutedOLS). The combination-test estimators (Fishers and Stouffers) are probably heavily biased. The likelihood-based estimators (SampleSizeBasedLikelihood and VarianceBasedLikelihood) may or may not be biased.
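
As a concrete starting point, here is a minimal sketch of fitting these estimators in NiMARE, assuming `dset` is a `nimare.dataset.Dataset` whose studies include the image types and metadata each estimator requires (z maps for the combination tests; beta/varcope maps and sample sizes for the others):

```python
from nimare.meta import ibma

# Estimators under evaluation. Fishers/Stouffers need z maps; the others
# need beta (and, where applicable, varcope) maps or sample sizes.
estimators = {
    "Hedges": ibma.Hedges(),
    "WeightedLeastSquares": ibma.WeightedLeastSquares(tau2=0.0),
    "DerSimonianLaird": ibma.DerSimonianLaird(),
    "PermutedOLS": ibma.PermutedOLS(),
    "Fishers": ibma.Fishers(),
    "Stouffers": ibma.Stouffers(),
    "SampleSizeBasedLikelihood": ibma.SampleSizeBasedLikelihood(),
    "VarianceBasedLikelihood": ibma.VarianceBasedLikelihood(),
}

# dset is an existing nimare.dataset.Dataset (see the analysis plan below).
results = {name: est.fit(dset) for name, est in estimators.items()}

# Most of these estimators expose a "z" map in their results.
z_maps = {name: res.get_map("z") for name, res in results.items()}
```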

@nicholst proposed the following options:

  1. OLS - We ignore the ROI variances, so the weighting is tau^2 + const (i.e., no weighting). The worst case is inefficiency, with (as per Mumford & Nichols) no FPR risk for one-sample or balanced two-sample comparisons. (However, the M&N result was calibrated against the heterogeneity seen in task fMRI, not against N=10 <-> N=1200 differences.)
  2. GLS - We take the average ROI variances as "correct", but they're actually too small, so the weighting is tau^2 + TooSmallVar_i. I think this is OK, as the estimated tau^2 will make up for the variances being too small overall, so inferences are probably fine, just not as efficient as they could be. Another plus is that this approach will capture gross differences in sample size, which matters when the Ns have a big range.
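
To make the two weighting schemes concrete, here is a toy one-sample example (hypothetical helper names, not NiMARE code): OLS ignores the per-study ROI variances entirely, while GLS weights each study by 1 / (tau^2 + v_i), so an inflated tau^2 estimate can absorb variances that are too small.

```python
import numpy as np

def ols_estimate(y):
    """Option 1 (OLS): ignore ROI variances; unweighted one-sample mean."""
    return y.mean(), y.std(ddof=1) / np.sqrt(len(y))

def gls_estimate(y, v, tau2):
    """Option 2 (GLS): inverse-variance weights 1 / (tau2 + v_i).
    If the v_i are too small, a larger estimated tau2 compensates."""
    w = 1.0 / (tau2 + v)
    return (w * y).sum() / w.sum(), np.sqrt(1.0 / w.sum())

y = np.array([0.2, 0.5, 0.1, 0.4])       # per-study ROI effect estimates
v = np.array([0.01, 0.02, 0.05, 0.005])  # averaged (possibly too-small) ROI variances
print(ols_estimate(y))            # unweighted: inefficient but FPR-safe
print(gls_estimate(y, v, 0.03))   # weighted: tau2 offsets underestimated v_i
```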

Analysis plan (tentative)

  1. Collect subject-level data (z, p, beta, and varcope maps) from a range of datasets.
    • We can collect these data from Neuroscout.
  2. Generate a range of dataset-level results with resampling (see the resampling sketch after this list).
    • Generate subset results with varying sample sizes.
    • Vary smoothness as well.
  3. Run voxel-wise image-based meta-analyses, then average results across ROIs.
  4. Run ROI-wise image-based meta-analyses.
  5. Compare the results of the two approaches, treating the voxel-wise (analysis-first) results as ground truth (see the comparison sketch after this list).
    • As the most basic test, we can perform pairwise comparisons between the analysis-first and aggregation-first results from the same estimators and datasets.
    • We can also dig into dataset parameters/characteristics, which might clarify the sources of bias.
    • Parameters to investigate:
      • Sample-size characteristics (e.g., mean sample size, or perhaps holding dataset sample sizes constant in some analyses?)
      • Smoothness
      • Original contrast variance levels?
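
A minimal sketch of the resampling in step 2, assuming `subject_maps` is a list of subject-level map paths for one dataset (a hypothetical name, not part of any existing pipeline):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

sample_sizes = [10, 20, 40, 80]  # varying N per simulated dataset
n_draws = 100                    # resamples per sample size

# Each index array selects the subjects for one simulated dataset-level
# analysis; smoothness can be varied per draw with nilearn.image.smooth_img.
subsets = {
    n: [rng.choice(len(subject_maps), size=n, replace=False) for _ in range(n_draws)]
    for n in sample_sizes
}
```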
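And a sketch of steps 3-5 for one estimator, assuming an ROI label image `atlas_img` and a fitted `dset` as above. Passing a nilearn masker to the IBMA estimator via `mask=` is an assumed interface based on the discussion in neurostuff/NiMARE#466; the exact keyword may differ by NiMARE version.

```python
from nilearn.maskers import NiftiLabelsMasker
from nimare.meta.ibma import DerSimonianLaird

roi_masker = NiftiLabelsMasker(labels_img=atlas_img)

# Step 3, analysis-first (ground truth): voxel-wise IBMA, then average the
# resulting z map within each ROI.
voxel_result = DerSimonianLaird().fit(dset)
analysis_first = roi_masker.fit_transform(voxel_result.get_map("z")).ravel()

# Step 4, aggregation-first: values are averaged across voxels within each
# ROI *before* the meta-analytic model is fit. The `mask=` keyword is an
# assumption; check the NiMARE version you use.
roi_result = DerSimonianLaird(mask=roi_masker).fit(dset)
aggregation_first = roi_result.get_map("z", return_type="array")

# Step 5: per-ROI difference between the two pipelines for this estimator.
bias = aggregation_first - analysis_first
```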