Closed: daveflora closed this issue 4 years ago
p.s. and then the estimate I get from MBESS::ci.reliability is different from all of the above!
Hi Dave,
I have noticed this inconsistency between semTools::reliability() and psych::omegaFromSem() when teaching SEM. I am under the impression that the differences have something to do with the fact that psych is expecting to account for method factors, whereas semTools would interpret multiple factors as multidimensional and calculate omega for each factor (subscale) and for all factors combined (the whole multidimensional scale, if that is actually what the multiple factors represent). If your model has no error covariances, then I doubt that is the source of the difference you see.
Honestly, I'm quite surprised that MBESS::ci.reliability() provides a different point estimate, because I was under the impression Sunthud wrote that as well as reliability(). ci.reliability() was even originally a semTools function, but was put in MBESS because it coincided with Sunthud and Ken Kelley's paper.
I want to get to the bottom of it, but the source-code rabbit hole gets a bit deep, especially with categorical indicators. So I just haven't been able to find time to prioritize that. But I'll keep this issue open and let you know if I find anything.
Thanks Terrence. In my example, I'm dealing with just a one-factor model, which psych::omegaFromSem() explicitly recognizes in the output and so it's not doing anything with method factors; i.e., it can't do a Schmid-Leiman transformation of a one-factor model, can it? But just now I've discovered that the discrepancy I described in my first post above occurs when I specify ordered indicators. When I consider the indicators continuous, then reliability() gives the result that I expect from hand calculation, but omegaFromSem() doesn't!
So to summarize, with a simple one-factor model:
- Indicators treated as continuous: semTools::reliability() results match hand calculations, but the omega-total estimate from psych::omegaFromSem() is too low.
- Indicators treated as ordered categorical: psych::omegaFromSem() gives an "Omega total" that matches hand calculations, but the omega estimates from semTools::reliability() are too low.
OK, that's interesting. When the data are categorical, are your hand calculations equivalent to the method described in the help-page reference?
Green, S. B., & Yang, Y. (2009). Reliability of summed item scores using structural equation modeling: An alternative to coefficient alpha. Psychometrika, 74(1), 155–167. doi:10.1007/s11336-008-9099-3
Not sure if psych uses this, or treats ordered indicators as numeric.
Yes, thanks Terrence, we now have an explanation. My "hand calculations" are incorrect for the categorical case because of what Green & Yang (2009) say on the bottom of p. 157 to p. 158. I see now that source code of reliability() for the categorical case is adapted from the SAS program in the appendix of Green & Yang. psych::omegaFromSem() is probably making the same mistake that I was making.
We had the same issue. The psych package does not use the Green & Yang (2009) threshold correction, and thus its "ordinal reliability" has the same problem as Zumbo's "ordinal alpha" (Gadermann, Guhn, & Zumbo, 2012): it is not actually a reliability (see Chalmers, 2018, for an explanation). The second problem is that psych defaults to the correlation matrix instead of the covariance matrix (at least in the omega function; I am not sure about omegaFromSem), which is quite surprising, especially for the omega coefficient.
Gadermann, A. M., Guhn, M., & Zumbo, B. D. (2012). Estimating ordinal reliability for Likert-type and ordinal item response data: A conceptual, empirical, and practical guide. Practical Assessment, Research & Evaluation, 17(3). Retrieved from http://eric.ed.gov/?id=EJ977577
Chalmers, R. P. (2018). On misconceptions and the limited usefulness of ordinal alpha. Educational and Psychological Measurement, 78(6). https://doi.org/10.1177/0013164417727036
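To make the second point concrete: psych::omega() analyzes a correlation matrix by default, and a covariance-based analysis must be requested explicitly. A minimal sketch with simulated data follows; the covar argument is taken from my reading of psych's help page, so please verify it against your installed version:

```r
library(psych)

## simulate raw data with deliberately unequal item variances,
## so correlation- vs covariance-based omega can differ
set.seed(1)
f <- rnorm(300)
dat <- data.frame(x1 = 1.0 * f + rnorm(300),
                  x2 = 0.8 * f + rnorm(300),
                  x3 = 0.6 * f + rnorm(300, sd = 2))

omega(dat, nfactors = 1)                # default: correlation matrix
omega(dat, nfactors = 1, covar = TRUE)  # covariance matrix instead
```

With standardized items the two calls would agree; the unequal variances above are what make the default worth noticing.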
Thanks @hynekcigler for the references! I think this wraps up this issue.
Hi! When I fit a one-factor model with categorical variables (1 = correct, 0 = incorrect response) in lavaan, i.e.:

crt1f <- ' crt =~ CRTranda_rec + CRTrandb_rec + CRTrandc_rec + CRTrandd_rec +
                  CRTrande_rec + CRTrandf_rec + CRTrandg_rec + CRTrandh_rec +
                  CRTrandi_rec + CRTrandj_rec + CRTrandk_rec '
fit.crt1f <- cfa(crt1f, data = podaci.CRT.rec, ordered = TRUE, estimator = "WLSMV")
I have noticed an inconsistency between semTools::reliability() and psych::alpha(podaci.CRT.rec), as well as CMC::alpha.cronbach(podaci.CRT.rec). psych::alpha(podaci.CRT.rec) and CMC::alpha.cronbach(podaci.CRT.rec) give 0.85 and 0.8519878, respectively, but semTools::reliability(fit.crt1f) gives an estimate of 0.9277013.
On the other hand, the omegas I get from semTools::reliability(fit.crt1f) and MBESS::ci.reliability(podaci.CRT.rec) differ slightly: 0.8638514 and 0.8591703, respectively.
I would be very grateful if someone could explain the cause of these differences.
Thank you in advance! Marina
Just to add to my confusion, when I use userfriendlyscience::scaleReliability(podaci.CRT.rec), the estimates for omega and alpha assuming ordinal level differ from the ones obtained from semTools::reliability():
semTools::reliability(fit.crt1f)
             crt
alpha  0.9277013
omega  0.8638514
omega2 0.8638514
omega3 0.8676372
avevar 0.5590196
userfriendlyscience::scaleReliability(podaci.CRT.rec)

Information about this analysis:
  Dataframe: podaci.CRT.rec
  Items: all
  Observations: 482
  Positive correlations: 55 out of 55 (100%)

Estimates assuming interval level:
  Omega (total): 0.86
  Omega (hierarchical): 0.8
  Revelle's omega (total): 0.87
  Greatest Lower Bound (GLB): 0.85
  Coefficient H: 0.88
  Cronbach's alpha: 0.85
  Confidence intervals:
    Omega (total): [0.84, 0.88]
    Cronbach's alpha: [0.83, 0.87]

Estimates assuming ordinal level:
  Ordinal Omega (total): 0.93
  Ordinal Omega (hierarch.): 0.92
  Ordinal Cronbach's alpha: 0.93
  Confidence intervals:
    Ordinal Omega (total): [0.92, 0.94]
    Ordinal Cronbach's alpha: [0.92, 0.94]
Hi. There are two problems:

1. The psych package (and possibly also CMC and userfriendlyscience; I don't know them) does not consider thresholds in the ordinal model, and thus its omega estimates are based directly on the polychoric correlation matrix, which overestimates the true reliability. See Green and Yang (2009, formula 21).
2. The semTools alpha in the ordinal model is likewise based on the polychoric correlation matrix, which overestimates its value. This is called ordinal alpha (e.g. Zumbo, Gadermann, & Zeisser, 2007; Gadermann, Guhn, & Zumbo, 2012) and has several serious limitations (e.g. Chalmers, 2018).

The correct approach is therefore:

1. Estimate alpha with the psych package or elsewhere (e.g. from a lavaan continuous model). This is the correct alpha estimate.
2. Fit the ordinal model in lavaan and estimate omega using semTools. This is the correct omega estimate.

References:
Green, S. B., & Yang, Y. (2009). Reliability of summed item scores using structural equation modeling: An alternative to coefficient alpha. Psychometrika, 74(1), 155–167. doi:10.1007/s11336-008-9099-3
Gadermann, A. M., Guhn, M., & Zumbo, B. D. (2012). Estimating ordinal reliability for Likert-type and ordinal item response data: A conceptual, empirical, and practical guide. Practical Assessment, Research & Evaluation, 17(3). http://www.pareonline.net/getvn.asp?v=17&n=3
Chalmers, R. P. (2018). On misconceptions and the limited usefulness of ordinal alpha. Educational and Psychological Measurement, 78(6). https://doi.org/10.1177/0013164417727036
Zumbo, B. D., Gadermann, A., & Zeisser, C. (2007). Ordinal versions of coefficients alpha and theta for Likert rating scales. Journal of Modern Applied Statistical Methods, 6(1). http://digitalcommons.wayne.edu/jmasm/vol6/iss1/4
@hynekcigler, thank you very much!
So, to summarize:
Best, Marina
Dear Professor Jorgensen, can you maybe explain why the estimates of reliability assuming ordinal level from the userfriendlyscience package (function scaleReliability) differ from the ones from semTools::reliability()?
Thank you in advance! Best, Marina
I think this article answers a lot of the questions in this thread:
http://dx.doi.org/10.1037/met0000144
Unfortunately, the author does not include semTools among the comparisons with other R packages: psych, userfriendlyscience, and MBESS. But referring to McNeish's (2018) Table 1, semTools::reliability only provides alpha and omega total.
can you maybe explain why Estimates of reliability assuming ordinal level from the package userfriendlyscience (function scaleReliability) differ from the ones from semTools::reliability()?
Following from the discussion above, psych does not take thresholds into account. From the scaleReliability help page:
This function is basically a wrapper for functions from the psych, GPArotation, ltm, and MBESS packages
McNeish's paper is great, and I recommend it too; I use it in my psychometrics classes. Unfortunately, it defines reliability as the correlation between parallel tests (model-free reliability, e.g. the glb or omega-total coefficients), not as explained variance (model-based reliability, e.g. omega hierarchical). So it misses half of the answer :)
FYI, the latest version now supplies both "ordinal alpha" (previously the only semTools result) and traditional alpha (as provided by the psych package).
hi guys, When I fit a one-factor model in lavaan, with no error covariances, I can calculate omega "by hand" with

param  <- lavInspect(lavfit, "est")     # estimated parameter matrices
fitted <- lavInspect(lavfit, "cov.ov")  # model-implied covariance matrix

num  <- sum(param$lambda)^2             # (sum of loadings)^2
den  <- num + sum(param$theta)          # ... plus sum of residual (co)variances
den2 <- sum(fitted)                     # or: sum of model-implied covariances

myOmega  <- num / den
myOmega2 <- num / den2
For the data set I'm using, both of these calculations give me approximately (within .01) the same result as library(psych) omegaFromSem(lavfit)
Specifically, for my data I get around .78 or .79 for omega from the calculations above. If I use the observed covariances in the denominator, I get omega = .81.
But when I run reliability(lavfit), the 3 omega estimates are all much lower, around .64. I've been banging my head on the semTools documentation, trying to figure out how the formulas might be different, but I can't figure it out. Does it have something to do with the residual correlations?
thanks, Dave
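For anyone following along, the two by-hand denominators can be checked with toy numbers (made-up loadings, not Dave's data). When there are no error covariances, the model-implied covariance matrix is just lambda %*% t(lambda) + theta, so both versions of omega coincide:

```r
## toy values: three items with loadings .7 and residual variances .51
lambda <- c(0.7, 0.7, 0.7)
theta  <- diag(c(0.51, 0.51, 0.51))      # diagonal: no error covariances

implied <- lambda %*% t(lambda) + theta  # model-implied covariance matrix

num  <- sum(lambda)^2    # 2.1^2 = 4.41
den  <- num + sum(theta) # 4.41 + 1.53 = 5.94
den2 <- sum(implied)     # identical here: 5.94

num / den    # omega ≈ 0.742
num / den2   # same value, because theta is diagonal
```

With error covariances in theta, den and den2 still agree (both sum all of theta), but either can diverge from the observed-covariance denominator Dave mentions, which is one place a .78 vs .81 gap can come from.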