Closed by aneumann-science 6 years ago
This has come up before with pooling scaled/shifted test statistics from DWLS (estimator = "WLSMV"). You can read this thread for my initial thoughts:
https://groups.google.com/d/msg/lavaan/WM2Ynmatsmk/q_5S9PXgAwAJ
The pooling behavior is not entirely predictable, but it might be a bug (hopefully!). So thank you for providing a data set and a reproducible example so I can actually investigate it this time :-) I'll look into it soon and get back to you here.
Still no evidence of a bug, just the odd behavior of pooling test statistics.
I decided to post my very long reply on the lavaan forum, where others can also benefit from this discussion. So please read my response there, and if you wish to continue the discussion, we can do so there (since this does not appear to be a software issue).
https://groups.google.com/d/msg/lavaan/WM2Ynmatsmk/kAKH8yZ8AwAJ
Thanks again for the example!
Dear all,
I frequently use multiple imputation with semTools and have come across an issue: I frequently encounter highly implausible pooled fit measures, such as CFI values of 0 or negative NFI values. The fit indices for each individual imputed data set are much higher, as is their average, leading me to believe this is a bug. It is difficult to reproduce, because pooling appears to work correctly for most data sets and models, and it is not clear to me which situations trigger the problem.
Fortunately, I can now reproduce the error with a public data set, specifically the Rosenberg Self-Esteem Scale data from http://openpsychometrics.org/_rawdata/RSE.zip. In this particular case the bug does not appear with MLR, but it does appear with WLSMV. I use the latest version of semTools (0.4-15.913), but the problem was also present in earlier releases. Below is the code to reproduce the bug:
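The original script is not included in this excerpt, so here is a minimal sketch of what such a reproduction might look like, assuming the semTools 0.4-x `runMI()` interface, `mice` for imputation, and item names `Q1`–`Q10` for the RSE file (the item names and the read/recode steps are assumptions about the downloaded data, not the author's verified code):

```r
# Hedged sketch, NOT the reporter's original script.
library(lavaan)
library(semTools)  # version 0.4-15.913 in the report

# Assumption: the RSE.zip data file is tab-separated with items Q1-Q10,
# where 0 codes a missing response.
rse <- read.csv("data.csv", sep = "\t")
rse[rse == 0] <- NA

model <- 'esteem =~ Q1 + Q2 + Q3 + Q4 + Q5 + Q6 + Q7 + Q8 + Q9 + Q10'

# Fit the one-factor CFA across m imputed data sets with WLSMV;
# with MLR the pooled indices looked fine, with WLSMV they did not.
fit.mi <- runMI(model, data = rse, m = 5, fun = "cfa",
                miPackage = "mice",
                estimator = "WLSMV",
                ordered = paste0("Q", 1:10))

# Pooled fit measures, e.g. CFI near 0 despite plausible
# per-imputation values.
inspect(fit.mi, "fit")
```

Whether `inspect(fit.mi, "fit")` or a `fitMeasures()` call is the right extractor depends on the exact semTools version; in the 0.4-x series the pooled chi-square was computed from the imputations rather than averaged, which is what the maintainer's reply above attributes the surprising values to.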