0todd0000 / spm1d

One-Dimensional Statistical Parametric Mapping in Python
GNU General Public License v3.0

Use of betas in random effects modeling #232

Closed 0todd0000 closed 1 year ago

0todd0000 commented 1 year ago

(This is paraphrased from an email discussion.)

In spm1d's random effects documentation, example code appears that analyzes the regression betas. I am interested in your thoughts on extracting the betas to compute the correlation coefficient over time. In a “normal” regression model, the squared correlation coefficient is $r^2 = \left[\beta \, (s_x / s_y)\right]^2$, where $s_x$ and $s_y$ are the standard deviations of $x$ and $y$. I see no immediate reason why this relationship between the regression slope and the correlation coefficient would not scale up to SPM regression, but I am still somewhat new to SPM and wanted to get your thoughts on the matter, if possible.
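As a quick illustration of that equivalence at a single node (synthetic data; all variable names here are hypothetical):

import numpy as np

np.random.seed(0)
J = 20
x = np.random.randn(J)                    # scalar covariate (J observations)
y = 0.8 * x + np.random.randn(J)          # response at a single node
beta = np.polyfit(x, y, 1)[0]             # regression slope
r_from_beta = beta * x.std(ddof=1) / y.std(ddof=1)
r_direct    = np.corrcoef(x, y)[0, 1]
print(np.isclose(r_from_beta, r_direct))  # True: beta * (s_x / s_y) equals r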

0todd0000 commented 1 year ago

If the goal is normal regression, then spm1d.stats.regress may be sufficient; see here for an example. While this and all other procedures in spm1d use standard test statistics (e.g., t, F, $\chi^2$), you can also retrieve the correlation coefficient like this:

import spm1d                     # Y: (J x Q) array of 1D responses; x: length-J covariate
t  = spm1d.stats.regress(Y, x)   # linear regression, returns an SPM{t} object
r  = t.r                         # correlation coefficient (one value per node)
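Continuing from the snippet above, inference is then conducted on the t continuum rather than on r (the alpha level and two-tailed setting below are just illustrative choices):

ti = t.inference(alpha=0.05, two_tailed=True)   # random field theory inference on the SPM{t}
ti.plot()                                       # visualize the thresholded statistic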

Note that correlation coefficients are not test statistics, and that inference for linear regression and related methods is based on test statistics, usually the t statistic. spm1d preferentially uses the t value for linear regression because, whereas correlation coefficients range from $-1$ to $+1$, the t value ranges from $-\infty$ to $+\infty$. This generally makes it easier to qualitatively discern large effects from moderate ones.
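For reference, in simple (single-covariate) linear regression with $J$ observations the two quantities are monotonically related at each node, so thresholding one is equivalent to thresholding the other:

$$t = \frac{r\sqrt{J-2}}{\sqrt{1-r^2}}$$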

The main reason to consider the betas themselves is when it makes sense to run the regressions separately for each subject. In this case inferences can be made regarding the population mean beta, as demonstrated in the random effects example; a sketch of this two-stage approach follows below.
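A minimal sketch of that two-stage (random effects) approach, assuming per-trial data YY with shape (nSubj, nTrials, nNodes) and a per-trial covariate xx with shape (nSubj, nTrials); all names and sizes below are hypothetical:

import numpy as np
import spm1d

np.random.seed(0)
nSubj, nTrials, nNodes = 10, 8, 101
xx = np.random.randn(nSubj, nTrials)                                  # per-trial covariate
YY = 0.5 * xx[:, :, None] + np.random.randn(nSubj, nTrials, nNodes)   # per-trial 1D responses

# Stage 1: within-subject regression at each node, yielding one beta continuum per subject
betas = np.zeros((nSubj, nNodes))
for i in range(nSubj):
    for q in range(nNodes):
        betas[i, q] = np.polyfit(xx[i], YY[i, :, q], 1)[0]            # slope at node q, subject i

# Stage 2: one-sample t test on the subject betas (inference on the population mean beta)
t  = spm1d.stats.ttest(betas)
ti = t.inference(alpha=0.05, two_tailed=True)
ti.plot()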