FBartos / RoBMA

RoBMA R package for estimating robust Bayesian meta-analyses
https://fbartos.github.io/RoBMA/

Different results using n or se #4

Closed: tressoldi closed this issue 4 years ago

tressoldi commented 4 years ago

I ran the following syntax and got these results:

d     <- c(-0.15, 0.49, 0.34, 0.37, -0.03, 0.44, 0.28, 0.22, -0.08, -0.05)
se    <- c(0.11, 0.10, 0.14, 0.18, 0.15, 0.18, 0.18, 0.14, 0.16, 0.13)
n     <- c(50, 128, 51, 32, 30, 30, 30, 50, 24, 40)
study <- c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)

RoBMA(d = d, se = se, study_names = study)

Robust Bayesian Meta-Analysis
              Models Prior prob. Post. prob. Incl. BF
Effect          6/12       0.500       0.440    0.787
Heterogeneity   6/12       0.500       0.995  188.309
Pub. bias       8/12       0.500       0.441    0.788

Model-averaged estimates
                 Mean Median 0.025 0.975
mu              0.000  0.000 0.000 0.295
tau             0.213  0.213 0.094 0.408
omega[0,0.05]   1.000  1.000 1.000 1.000
omega[0.05,0.1] 1.000  1.000 0.209 1.000
omega[0.1,1]    1.000  1.000 0.174 1.000
(Estimated omegas correspond to two-sided p-values)

RoBMA(d = d, n = n, study_names = study)

Robust Bayesian Meta-Analysis
              Models Prior prob. Post. prob. Incl. BF
Effect          6/12       0.500       0.573    1.344
Heterogeneity   6/12       0.500       0.423    0.734
Pub. bias       8/12       0.500       0.333    0.499

Model-averaged estimates
                 Mean Median 0.025 0.975
mu              0.113  0.113 0.000 0.396
tau             0.000  0.000 0.000 0.316
omega[0,0.05]   1.000  1.000 1.000 1.000
omega[0.05,0.1] 1.000  1.000 0.302 1.000
omega[0.1,1]    1.000  1.000 0.258 1.000
(Estimated omegas correspond to two-sided p-values)

Why are the results different when using n versus se? I expected identical results.

FBartos commented 4 years ago

Hi and thank you for submitting the issue.

The reason for this difference is that the data are (probably) from one-sample t-tests, which is not the default for the RoBMA function. The models are estimated using the test-statistics likelihood (as is usual for selection models), so the input needs to be transformed into the appropriate test statistics and degrees of freedom.

When Cohen's d effect sizes and sample sizes are provided, the t-statistics and degrees of freedom can be easily obtained using the psych::d2t function (which is used internally). Similarly, when Cohen's d effect sizes and standard errors are provided, the t-statistics can be obtained by dividing the effect size by the standard error, and the degrees of freedom by solving the equation for the variance of Cohen's d (in terms of Cohen's d and the sample size) for the sample size (see this).
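To make the second transformation concrete, here is a minimal sketch; it assumes the common one-sample variance approximation Var(d) = 1/n + d^2/(2n), so the exact values may differ slightly from what is computed internally:

# t-statistic from d and its standard error
t_stat <- d / se
# approximate sample size recovered by solving
# se^2 = 1/n + d^2/(2*n) for n (one-sample approximation)
n_approx <- (1 + d^2 / 2) / se^2
round(n_approx)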

However, these formulas and functions differ for one-sample and two-sample t-tests. I applied both versions to your data, and it seems that the data are (mostly) from one-sample t-tests, but the results are not completely consistent, and the differences are not explainable by rounding errors alone (i.e., studies 1, 2, 9, and 10).

round(cbind(
  direct     = d/se,
  two.sample = psych::d2t(d, n  = n),
  one.sample = psych::d2t(d, n1 = n)  
),2)
      direct two.sample one.sample
 [1,]  -1.36      -0.53      -1.06
 [2,]   4.90       2.77       5.54
 [3,]   2.43       1.21       2.43
 [4,]   2.06       1.05       2.09
 [5,]  -0.20      -0.08      -0.16
 [6,]   2.44       1.20       2.41
 [7,]   1.56       0.77       1.53
 [8,]   1.57       0.78       1.56
 [9,]  -0.50      -0.20      -0.39
[10,]  -0.38      -0.16      -0.32

By default, the RoBMA function assumes that the input is from two-sample t-tests (see the documentation). You can change this setting by adding test_type = "one.sample" to the function call. With these modified calls the results are much more similar; however, discrepancies still occur due to the differences in the computed test statistics. (The difference in the BF for heterogeneity seems much larger than it actually is: the posterior probabilities are very close, and the parameter estimates do not seem to be much affected by the different input data.)

fit1 <- RoBMA(d = d, n  = n,  study_names = study, test_type = "one.sample")
summary(fit1)
Call:
RoBMA(d = d, n = n, study_names = study, test_type = "one.sample")

Robust Bayesian Meta-Analysis
              Models Prior prob. Post. prob. Incl. BF
Effect          6/12       0.500       0.549    1.219
Heterogeneity   6/12       0.500       0.975   39.763
Pub. bias       8/12       0.500       0.445    0.803

Model-averaged estimates
                 Mean Median 0.025 0.975
mu              0.104  0.083 0.000 0.319
tau             0.212  0.204 0.030 0.410
omega[0,0.05]   1.000  1.000 1.000 1.000
omega[0.05,0.1] 0.826  1.000 0.221 1.000
omega[0.1,1]    0.790  1.000 0.178 1.000
(Estimated omegas correspond to two-sided p-values)
fit2 <- RoBMA(d = d, se = se, study_names = study, test_type = "one.sample")
summary(fit2)
Call:
RoBMA(d = d, se = se, study_names = study, test_type = "one.sample")

Robust Bayesian Meta-Analysis
              Models Prior prob. Post. prob. Incl. BF
Effect          6/12       0.500       0.441    0.790
Heterogeneity   6/12       0.500       0.995  188.237
Pub. bias       8/12       0.500       0.439    0.782

Model-averaged estimates
                 Mean Median 0.025 0.975
mu              0.073  0.000 0.000 0.292
tau             0.222  0.212 0.095 0.407
omega[0,0.05]   1.000  1.000 1.000 1.000
omega[0.05,0.1] 0.833  1.000 0.216 1.000
omega[0.1,1]    0.797  1.000 0.181 1.000
(Estimated omegas correspond to two-sided p-values)

The ideal solution would be to look into the original articles and use the reported test statistics with the specified sample sizes directly (adding the test_type = "one.sample" argument so that the degrees of freedom and effect sizes are computed correctly, if the tests were truly one-sample t-tests). Otherwise, you could also supply general effect sizes and standard errors using the y and se arguments, which use a z-approximation.
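For instance, the two alternatives would look like this (a sketch; t_reported is a hypothetical vector of test statistics taken from the original articles, not computed from the data above, and the argument names follow the RoBMA() help page):

# Option 1: reported one-sample t-statistics with the sample sizes
fit3 <- RoBMA(t = t_reported, n = n, study_names = study,
              test_type = "one.sample")
# Option 2: general effect sizes with standard errors (z-approximation)
fit4 <- RoBMA(y = d, se = se, study_names = study)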

On another note, please install the updated package version. I found out that the summary function was printing medians instead of means.
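One way to get the updated version (assuming installation of the development version from GitHub):

# install the current development version of RoBMA
remotes::install_github("FBartos/RoBMA")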

Best, František Bartoš

tressoldi commented 4 years ago

Yes, I confirm that the effect sizes and se values derive from one-sample t-tests. Thank you very much for your clear explanation.

Best Patrizio