aperezlebel / meta_analysis_notebook

This notebook gathers and explains some well-known neuroimaging meta-analysis techniques, discusses their limitations, and applies them to real fMRI data.

Things to potentially add/clarify #16

Open koudyk opened 4 years ago

koudyk commented 4 years ago

Things to potentially add/clarify:

aperezlebel commented 4 years ago

Thank you for your comment!

For the first two points, the formula found in NiMARE is:

With:

But I cannot explain it. That's why I didn't put it in the notebook. I will search further and try to find an explanation.

More comments coming for the other points next week :)

aperezlebel commented 4 years ago

Under the heading "Mathematical recap" in the MKDA section: Is there supposed to be a "" in "Let $N \in \mathbb{N}^$ be the number of experiments/studies"?

Yes, since a meta-analysis without any study ($N = 0$) doesn't really make sense ($\mathbb{N}^*$ stands for the natural numbers without 0).

Under the heading "2. Estimation of $V$" in the GLM section: You say "Remember that $V$ is unknown in the above process, hence one needs to estimate $V$ before applying these formulae," but the formula for calculating $V$ includes the $/beta$ weights, which are also estimated. To me, it sounds like you're saying that you need $/beta$ to estimate $V$, but you need to estimate $V$ before estimating $/beta$.

Yes, there is a paragraph explaining this just below:

Note that this is possible since $\hat{\beta}$ does not depend on $V$ under the assumption $V = \sigma^2 I$. Once $\sigma$ is estimated, the covariance of the estimates can be computed: $\mbox{Cov}(\hat{\beta}) = \hat{\sigma}^2(X^TX)^{-1}$. Under the more general assumption $e \sim \mathcal{N}(0, V)$, $\hat{\beta}$ does depend on $V$, and therefore more complex methods must be used. Beckmann et al. [4] present some of them.

But I admit this is a bit fuzzy. I will try to make it clearer.
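To make the order of operations concrete, here is a minimal NumPy sketch of the two-step estimation under the assumption $V = \sigma^2 I$ (the design matrix and data below are simulated for illustration and are not taken from the notebook):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy design matrix (n observations, p regressors) and data, for illustration only.
n, p = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([2.0, 0.5, -1.0]) + rng.normal(scale=1.5, size=n)

# Step 1: under V = sigma^2 * I, beta_hat does not depend on V (ordinary least squares).
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Step 2: estimate sigma^2 from the residuals (unbiased estimator).
residuals = y - X @ beta_hat
sigma2_hat = residuals @ residuals / (n - p)

# Step 3: plug the estimate into Cov(beta_hat) = sigma_hat^2 * (X^T X)^{-1}.
cov_beta_hat = sigma2_hat * np.linalg.inv(X.T @ X)

print(beta_hat)                        # estimated regression weights
print(np.sqrt(np.diag(cov_beta_hat)))  # standard errors of the estimates
```

So $\hat{\beta}$ comes first and never uses $V$; only the covariance of $\hat{\beta}$ needs $\hat{\sigma}^2$.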

Under the "Hypothesis testing" heading of the GLM section: you might want to explain that error controls are needed because we perform the same test at each of the many voxels. E.g., you could say something like, "These models are estimated separately for every voxel. This raises a multiple-comparisons problem; the more tests you run, the greater the risk of getting results that are not true (i.e., false-positives). In order to control the false-positive rate at the desired level $\alpha$, you need correct for multiple comparisons. Two well-known error controls are... [and continue with what you said in the notebook]".

Good idea! I will add it to the notebook.
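To sketch what that paragraph could look like in code (assuming the two error controls are Bonferroni for the family-wise error rate and Benjamini-Hochberg for the false discovery rate, which may not be exactly the ones named in the notebook; the p-values are simulated for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.05

# Simulated per-voxel p-values: most voxels are null, a few carry real signal.
n_voxels = 10_000
p_values = rng.uniform(size=n_voxels)
p_values[:50] = rng.uniform(0, 1e-4, size=50)  # 50 truly active voxels

# No correction: expect about alpha * n_voxels null voxels to pass by chance.
uncorrected = p_values < alpha

# Bonferroni (controls the family-wise error rate): divide alpha by the number of tests.
bonferroni = p_values < alpha / n_voxels

# Benjamini-Hochberg (controls the false discovery rate): reject the k smallest
# p-values, where k is the largest rank such that p_(k) <= (k / n_voxels) * alpha.
order = np.argsort(p_values)
sorted_p = p_values[order]
thresholds = alpha * np.arange(1, n_voxels + 1) / n_voxels
below = np.nonzero(sorted_p <= thresholds)[0]
fdr = np.zeros(n_voxels, dtype=bool)
if below.size > 0:
    fdr[order[: below[-1] + 1]] = True

print(uncorrected.sum(), bonferroni.sum(), fdr.sum())
```

Without any correction, roughly $\alpha \times 10{,}000 = 500$ null voxels are expected to pass the threshold by chance alone; the two corrections bring that down, at the cost of some sensitivity.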