Closed: jbuncher closed this issue 2 years ago
The only general-purpose way I can think of to do this is via bootstrapping, which could take a very long time given how complex these IRT models can be. Plus, these measures are closely related to the DRF statistics in the package, which naturally come with CIs/SEs. Why not use those?
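As a sketch of the DRF route, here is a minimal toy example. It assumes `DRF()` accepts a `draws` argument for forming confidence intervals from parameter draws (check `?DRF`), and that the model was fit with `SE = TRUE` so a parameter covariance matrix is available; the data, anchor item, and effect sizes are all made up for illustration.

```r
library(mirt)

# Simulate two groups of 2PL responses (toy data; the .3 intercept shift
# in group 2 is an arbitrary amount of DIF for illustration)
set.seed(1)
a <- matrix(rlnorm(10, .2, .2))
d <- matrix(rnorm(10))
dat <- rbind(simdata(a, d, 500, itemtype = "2PL"),
             simdata(a, d - .3, 500, itemtype = "2PL"))
group <- rep(c("G1", "G2"), each = 500)

# SE = TRUE so that DRF() can draw from the parameter distribution
mod <- multipleGroup(dat, 1, group = group, SE = TRUE,
                     invariance = c("free_means", "free_var", "Item_1"))

# DRF statistics with confidence intervals based on parameter draws
DRF(mod, draws = 500)
```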
The `boot.mirt()` function now has an additional `boot.fun` input to which the `empirical_ES()` function can be passed to obtain bootstrapped estimates of everything. HTH.
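Used this way, the call might look like the sketch below. I'm assuming `boot.fun` must return a numeric vector (so the data-frame output of `empirical_ES()` needs a small wrapper) and that ESSD is a column of that output; the model object `mod`, the wrapper name `ES_fun`, and the replication count are all placeholders.

```r
library(mirt)

# boot.fun must return a numeric vector, so wrap empirical_ES() to pull
# out the statistic of interest (ESSD here; adjust the column name to
# whatever empirical_ES() actually returns in your version).
ES_fun <- function(mod) {
  es <- empirical_ES(mod)
  setNames(es$ESSD, rownames(es))
}

# 'mod' is a fitted multipleGroup object; R is the number of bootstrap
# replications. The result is a boot::boot object, so standard tools
# such as boot::boot.ci() apply.
# bs <- boot.mirt(mod, R = 500, boot.fun = ES_fun)
# boot::boot.ci(bs, index = 1, type = "perc")  # percentile 95% CI, item 1
```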
This is phenomenal, but I think I've encountered a bug. When using `boot.mirt()` with `empirical_ES()`, specifically to look at the ESSD statistic, the values returned by `boot.mirt()` in "original" do not match the original output generated by a custom-written `boot.fun`.
This behavior occurs when the `multipleGroup` object passed to `boot.mirt()` was fit with something in its `invariance` argument, specifically `free_mean`, `free_var`, and an item to serve as an anchor. When nothing is passed to `invariance`, the "original" output of `boot.mirt()` and the output of the custom `boot.fun` match, as expected.
I'm attaching a datafile and a script to reproduce the issue. Is this expected behavior, or a bug?
Darn, you're right: the `invariance` component was not extracted from multiple-group objects in this function (you would have to pass `boot.mirt(mod, invariance = ...)` explicitly for this to work, regardless of the state of `mod`). I've sent a patch to the dev branch to address this problem, which resolves the issue you observed.
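Until that patch lands, the workaround described above would look something like the sketch below. The invariance keywords follow mirt's documented spellings (`free_means`, `free_var`), `Item_1` stands in for whichever item serves as the anchor, and the extractor function is a hypothetical example.

```r
# Pre-patch workaround: restate the invariance constraints when calling
# boot.mirt(), since they were not extracted from 'mod' itself.
ESSD_fun <- function(mod) empirical_ES(mod)$ESSD  # numeric-vector extractor

# bs <- boot.mirt(mod, R = 500, boot.fun = ESSD_fun,
#                 invariance = c("free_means", "free_var", "Item_1"))
```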
Is it possible to add (or extract from the provided output) 95% confidence intervals for the effect-size statistics returned by `empirical_ES()`? I'm particularly interested in the CI for ESSD, but I assume others would be interested in the CIs for SIDS, UIDS, etc.
I'd be willing to help contribute to writing this as well.