**mattansb** opened this issue 3 years ago (status: Open)
(Might also consider estimating one-sided CIs for other parameters.)
> So we need to pass down the `alternative` argument?

For the CIs controlled by **effectsize**, yes. Most default to `"two.sided"`, but Phi, Cohen's *w*, Cramer's *V*, the ANOVA effect sizes, rank Epsilon squared, and Kendall's *W* default to `"greater"`.
What if an `htest` object is already computed with a 1-sided alternative, and a two-sided alternative is requested via **effectsize**?

Does it apply to all `htest` objects?

> What if an htest object is already computed with 1-sided alternative, and a two-sided alternative is requested via effectsize?
The user's request overrides the `htest`:
```r
tt <- t.test(mtcars$mpg, mtcars$hp, alternative = "less")

effectsize::effectsize(tt)
#> Cohen's d |        95% CI
#> -------------------------
#>     -2.60 | [-Inf, -1.91]
#>
#> - Estimated using un-pooled SD.
#> - One-sided CIs: lower bound fixed at (-Inf).

effectsize::effectsize(tt, alternative = "two.sided")
#> Cohen's d |         95% CI
#> --------------------------
#>     -2.60 | [-3.40, -1.79]
#>
#> - Estimated using un-pooled SD.
```
Created on 2021-08-19 by the reprex package (v2.0.1)
> Does it apply for all htest objects?

Not all. For the ones supported by **effectsize**, these are:
wilcox only, or all rank tests (including friedman and kruskal)?
```r
library(effectsize)

tab <- rbind(c(762, 327, 468),
             c(484, 239, 477),
             c( 86, 150, 570))
```
Default to `alternative = "greater"`:
```r
chisq.test(tab) |>
  effectsize()
#> Cramer's V |       95% CI
#> -------------------------
#>       0.24 | [0.22, 1.00]
#>
#> - One-sided CIs: upper bound fixed at (1).

oneway.test(mtcars$mpg ~ mtcars$cyl, var.equal = TRUE) |>
  effectsize()
#> Eta2 |       95% CI
#> -------------------
#> 0.73 | [0.57, 1.00]
#>
#> - One-sided CIs: upper bound fixed at (1).

kruskal.test(mtcars$mpg ~ mtcars$cyl) |>
  effectsize()
#> Epsilon2 (rank) |       95% CI
#> ------------------------------
#>            0.83 | [0.78, 1.00]
#>
#> - One-sided CIs: upper bound fixed at (1).

RoundingTimes <- matrix(c(5.40, 5.50, 5.55,
                          5.85, 5.70, 5.75,
                          5.20, 5.60, 5.50,
                          5.55, 5.50, 5.40,
                          5.90, 5.85, 5.70,
                          5.45, 5.55, 5.60), ncol = 3)

friedman.test(RoundingTimes) |>
  effectsize()
#> Kendall's W |       95% CI
#> --------------------------
#>        0.33 | [0.08, 1.00]
#>
#> - One-sided CIs: upper bound fixed at (1).
```
Default to `alternative = "two.sided"`:
```r
mcnemar.test(tab) |>
  effectsize()
#> Cohen's g |       95% CI
#> ------------------------
#>      0.22 | [0.20, 0.24]
```
Default to the `alternative` from the `htest`:
```r
t.test(mtcars$mpg[mtcars$am == "0"], mtcars$mpg[mtcars$am == "1"],
       alternative = "less") |>
  effectsize()
#> Cohen's d |        95% CI
#> -------------------------
#>     -1.41 | [-Inf, -0.67]
#>
#> - Estimated using un-pooled SD.
#> - One-sided CIs: lower bound fixed at (-Inf).

wilcox.test(mtcars$mpg[mtcars$am == "0"], mtcars$mpg[mtcars$am == "1"],
            alternative = "less") |>
  effectsize()
#> Warning in wilcox.test.default(mtcars$mpg[mtcars$am == "0"],
#> mtcars$mpg[mtcars$am == : cannot compute exact p-value with ties
#> r (rank biserial) |         95% CI
#> ----------------------------------
#>             -0.66 | [-1.00, -0.42]
#>
#> - One-sided CIs: lower bound fixed at (-1).
```
Other `htest` objects are simply passed to `parameters::model_parameters()`:
```r
cor.test(mtcars$mpg, mtcars$hp, alternative = "greater") |>
  effectsize()
#> Warning: This 'htest' method is not (yet?) supported.
#> Returning 'parameters::model_parameters(model)'.
#> Pearson's product-moment correlation
#>
#> Parameter1 | Parameter2 |     r |        95% CI | t(30) |      p
#> ----------------------------------------------------------------
#> mtcars$mpg |  mtcars$hp | -0.78 | [-0.87, 1.00] | -6.74 | > .999
#>
#> Alternative hypothesis: true correlation is greater than 0

prop.test(3, 10, alternative = "greater") |>
  effectsize()
#> Warning: This 'htest' method is not (yet?) supported.
#> Returning 'parameters::model_parameters(model)'.
#> 1-sample proportions test
#>
#> Proportion |       95% CI | Chi2(1) | Null_value |     p
#> --------------------------------------------------------
#>     30.00% | [0.10, 1.00] |    0.90 |       0.50 | 0.829
#>
#> Alternative hypothesis: true p is greater than 0.5

matrix(c(3, 1, 1, 3), 2) |>
  fisher.test(alternative = "greater") |>
  effectsize() # prints bad CI! <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
#> Warning: This 'htest' method is not (yet?) supported.
#> Returning 'parameters::model_parameters(model)'.
#> Fisher's Exact Test for Count Data
#>
#> Odds.Ratio | CI_low |     p
#> ---------------------------
#>       6.41 |   0.31 | 0.243
#>
#> Alternative hypothesis: true odds ratio is greater than 1
```
Created on 2021-09-15 by the reprex package (v2.0.1)
It seems to me this somehow contradicts https://github.com/easystats/parameters/issues/584#issuecomment-901661666.
So, does this now apply to all `htest` objects? I'm still not sure where to add the `alternative` argument in the `htest` methods for `model_parameters()`, and where not...
I think we maybe shouldn't let the user override these defaults, as they match the p-values of the tests.
I think the only change you need in **parameters** is to add a footnote about one-sided CIs (when `alternative` isn't `"two.sided"`). That should be enough.
But I haven't added the functionality that passes `alternative` down to **effectsize** yet, because I thought not all `htest` objects can handle `alternative`, and that it would result in an error?
My question is: for which of the htests that give us effect sizes via `model_parameters()` do I also pass `alternative` to `effectsize::effectsize()`?
See this commit for my start: https://github.com/easystats/parameters/commit/4caee74b3efef38a43ceaa6492470fcdd164890f
I don't think you need to pass it, as **effectsize** is smart enough to pick the correct one by default, matching the `htest` on its own. And all supported `htest` tests can take a different `alternative` - none of them will fail.
As per https://github.com/easystats/effectsize/pull/366:
This affects, by default, Phi, Cohen's w, Cramer's V, ANOVA effect sizes, rank Epsilon squared, Kendall's W - which now will all default to one-sided CIs.
**Old behavior** vs. **NEW behavior** (example output not reproduced in the original post)
Information about the "side" can be found in the "alternative" attribute:
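A small sketch of reading that attribute (assuming the sidedness is stored under the name `"alternative"`, as the one-sided-CI footnotes in the printed output suggest):

```r
library(effectsize)

tab <- rbind(c(762, 327, 468),
             c(484, 239, 477),
             c( 86, 150, 570))

# Cramer's V defaults to a one-sided ("greater") CI...
es <- effectsize(chisq.test(tab))
attr(es, "alternative")  # assumed attribute name

# ...while an explicitly requested two-sided CI is recorded accordingly.
es2 <- effectsize(chisq.test(tab), alternative = "two.sided")
attr(es2, "alternative")
```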
Created on 2021-08-18 by the reprex package (v2.0.1)