jd-a opened this issue 3 years ago
These things are good to double-check. If you want, you can find an example calculation online and add it to the tests. If this package gets the same results, then the calculation is very likely to be right.
To investigate this a bit more, I have just compared the CI for Cohen's d produced by EffectSizes.jl with the CI produced by the R effsize package (version 0.8.1 on CRAN). They do not match (note: this is with the noncentral=FALSE argument of effsize's cohen.d function). This should be easy to fix by removing the square root in line 86 (as well as some parentheses). After doing so, the CIs of the Julia and R functions match much better.
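For concreteness, here is a minimal sketch of what I mean, reconstructed from the formula quoted elsewhere in this thread; the variable names (nx, ny, es, σ²) are my guesses at the source, not the actual code:

```julia
# Hypothetical reconstruction of confidence_interval.jl around line 86.
# Before: the variance expression is square-rooted here AND again on line 89,
# so the code effectively uses the fourth root of the variance:
# σ² = √((nx + ny) / (nx * ny) + es^2 / (2(nx + ny)))
# After: keep the plain variance, so the single √ on line 89 yields the SE:
σ² = (nx + ny) / (nx * ny) + es^2 / (2(nx + ny))
```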
But, probably more importantly, a (noncentral) t-distribution should be a better basis for the confidence interval (see e.g. Hedges LV; Distribution Theory for Glass's Estimator of Effect Size and Related Estimators; https://doi.org/10.2307/1164588 or Cumming G, Finch S; A Primer on the Understanding, Use, and Calculation of Confidence Intervals that are Based on Central and Noncentral Distributions; https://doi.org/10.1177/0013164401614002). This would require larger modifications. It might not be worth it, as there is a growing preference for bootstrap confidence intervals for these effect sizes (but I am no statistician, and bootstrapping can be very time-consuming, of course). I have looked at the Stata and SAS Cohen's d functions; these appear to use a t-distribution (at least by default), just like effsize.
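In case someone wants to pursue the noncentral-t route, a rough sketch of the usual construction (pivoting on the noncentrality parameter, as described in the Cumming & Finch primer) could look like the following; the function name and the use of Roots.jl are my own choices, and this is not necessarily what Stata, SAS, or effsize do internally:

```julia
using Distributions, Roots

# Sketch of a noncentral-t CI for Cohen's d (two independent samples,
# pooled SD). The observed d is rescaled to a t statistic, then the two
# noncentrality parameters that place that t at the α/2 tails are found
# numerically and rescaled back to the d scale.
function cohend_ci_nct(d, nx, ny; level=0.95)
    α = 1 - level
    ν = nx + ny - 2                 # degrees of freedom
    c = sqrt(nx * ny / (nx + ny))   # d * c follows a noncentral t
    t = d * c                       # observed t value
    λ_lo = find_zero(λ -> cdf(NoncentralT(ν, λ), t) - (1 - α / 2), t)
    λ_hi = find_zero(λ -> cdf(NoncentralT(ν, λ), t) - α / 2, t)
    return (λ_lo / c, λ_hi / c)
end

cohend_ci_nct(0.5, 30, 30)  # CI for d = 0.5 with 30 observations per group
```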
Sorry for not responding earlier! For some reason, I didn't get a notification.
> This should be easy to fix by removing the square root in line 86 (as well as some parentheses). After doing so, the CIs of the Julia and R functions match much better.
Sounds like low-hanging fruit. Could you open a PR for this with some information on why it's better without the square root?
> But, probably more importantly, a (noncentral) t-distribution should be a better basis for the confidence interval
Well, if you want, you can also send a PR for this. However, this package has only two stars, so it's unlikely that people will actually use your work if you decide to implement it.
I have made some adjustments to the estimation functions, and I believe the confidence intervals, both the bootstrapped and the parametric ones, need more work. But I fear I have neither the skills nor the time to pull that off.
Probably interesting references for a person who considers working on it: Kelley; The Effects of Nonnormal Distributions on Confidence Intervals Around the Standardized Mean Difference: Bootstrap and Parametric Confidence Intervals; https://doi.org/10.1177/0013164404264850 and Algina, Keselman, Penfield; Confidence Interval Coverage for Cohen's Effect Size Statistic; https://doi.org/10.1177/0013164406288161
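As a starting point for whoever picks this up, a bare-bones percentile bootstrap for Cohen's d might look like the sketch below; the pooled-SD estimator and all names are my assumptions, not the package's internals:

```julia
using Statistics, Random

# Cohen's d with a pooled standard deviation (equal-variance form).
function cohend(xs, ys)
    nx, ny = length(xs), length(ys)
    sp = sqrt(((nx - 1) * var(xs) + (ny - 1) * var(ys)) / (nx + ny - 2))
    return (mean(xs) - mean(ys)) / sp
end

# Percentile bootstrap CI: resample each group with replacement, recompute
# d, and take the empirical α/2 and 1 - α/2 quantiles of the replicates.
function cohend_ci_boot(xs, ys; level=0.95, nboot=10_000,
                        rng=Random.default_rng())
    ds = [cohend(rand(rng, xs, length(xs)), rand(rng, ys, length(ys)))
          for _ in 1:nboot]
    α = 1 - level
    return (quantile(ds, α / 2), quantile(ds, 1 - α / 2))
end
```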
I gave the CI calculation another look and came across this article, which takes a nice look at a few ways to estimate the variance: http://dx.doi.org/10.20982/tqmp.14.4.p242 (note that it has a corrigendum: http://dx.doi.org/10.20982/tqmp.15.1.p054).
I wonder if a square root is supposed to occur in line 86 as well as line 89 of confidence_interval.jl. The formula I am familiar with for the confidence interval of Hedges' g takes the standard error of the g statistic, which would be the part [σ = √((nx + ny) / (nx * ny) + es^2 / (2(nx + ny)))], multiplies it by the quantile of the standard normal distribution [z = Distributions.quantile(Normal(), uq)], and adds it to or subtracts it from the point estimate. In the current code, the expression assigned to σ² on line 86 already contains a square root, and it is rooted once more on line 89.
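Put together, the calculation I would expect (with the variance square-rooted exactly once) is sketched below; es, nx, ny, and uq mirror the names quoted above, while the function wrapper is mine:

```julia
using Distributions

# Normal-approximation CI for Hedges' g / Cohen's d, with the variance
# square-rooted once: point estimate ± z * standard error.
function es_ci_normal(es, nx, ny; level=0.95)
    uq = 1 - (1 - level) / 2                          # upper-tail probability
    z = quantile(Normal(), uq)                        # as on line 89
    σ² = (nx + ny) / (nx * ny) + es^2 / (2(nx + ny))  # line 86, without the √
    return (es - z * √σ², es + z * √σ²)
end
```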
But it could be that I am just missing something...