Something confuses me in the report of a two-sided t.test: the report says that the effect is positive, which is correct, but it also says that the difference between the two means is positive (98.37 in the example below), even though the mean of group 0 is greater than the mean of group 1. If I run the report function on a t.test with the formula hp ~ !vs, I get -98.37.
Am I missing something, or is it a bug and the sign of the difference in the report is wrong?
> t.test(hp ~ vs, data = mtcars)
Welch Two Sample t-test
data: hp by vs
t = 6.2908, df = 23.561, p-value = 1.82e-06
alternative hypothesis: true difference in means between group 0 and group 1 is not equal to 0
95 percent confidence interval:
66.06161 130.66854
sample estimates:
mean in group 0 mean in group 1
189.72222 91.35714
> report(t.test(hp ~ vs, data = mtcars))
Effect sizes were labelled following Cohen's (1988) recommendations.
The Welch Two Sample t-test testing the difference of hp by vs (mean in group 0
= 189.72, mean in group 1 = 91.36) suggests that the effect is positive,
statistically significant, and large (difference = 98.37, 95% CI [66.06,
130.67], t(23.56) = 6.29, p < .001; Cohen's d = 2.59, 95% CI [1.48, 3.67])
Warning message:
Unable to retrieve data from htest object.
Returning an approximate effect size using t_to_d().
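For reference, here is a minimal check (my own snippet, not part of the report output) computing the group means by hand; it shows that the value 98.37 corresponds to mean(group 0) minus mean(group 1):

```r
# Recompute the two group means of hp by vs from mtcars directly
m0 <- mean(mtcars$hp[mtcars$vs == 0])  # 189.72
m1 <- mean(mtcars$hp[mtcars$vs == 1])  # 91.36

# Difference taken as group 0 minus group 1, matching t.test()'s
# "mean in group 0" / "mean in group 1" ordering
m0 - m1  # 98.37
```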