easystats / performance

:muscle: Models' quality and performance metrics (R2, ICC, LOO, AIC, BF, ...)
https://easystats.github.io/performance/
GNU General Public License v3.0

`test_performance` for `lavaan` #427

Open rempsyc opened 2 years ago

rempsyc commented 2 years ago

It would be desirable to eventually support lavaan fit objects (CFA/SEM) in `performance::test_performance`. Reprex below:

```r
library(performance)
library(lavaan)

# Testing performance for lm models works
m1 <- lm(Sepal.Length ~ Petal.Length, data = iris)
m2 <- lm(Sepal.Length ~ Petal.Width, data = iris)
test_performance(m1, m2)
#> Name | Model |      BF
#> ----------------------
#> m1   |    lm |        
#> m2   |    lm | < 0.001
#> Each model is compared to m1.

# But not for lavaan models
HS.model <- ' visual  =~ x1 + x2 + x3
              textual =~ x4 + x5 + x6
              speed   =~ x7 + x8 + x9 '
fit <- cfa(HS.model, data = HolzingerSwineford1939)
HS.model2 <- ' visual  =~ x1 + x2
               textual =~ x4 + x5
               speed   =~ x7 + x8 + x9 '
fit2 <- cfa(HS.model2, data = HolzingerSwineford1939)
test_performance(fit, fit2)
#> Error: evaluation nested too deeply: infinite recursion / options(expressions=)?

# Yet it is possible to compare performance for lavaan models
compare_performance(fit, fit2)
#> # Comparison of Model Performance Indices
#> 
#> Name |  Model |   Chi2 | Chi2_df | p (Chi2) | Baseline | Baseline_df | p (Baseline) |   GFI |  AGFI |   NFI |  NNFI |   CFI | RMSEA |    RMSEA  CI | p (RMSEA) |   RMR |  SRMR |   RFI |  PNFI |   IFI |   RNI | Loglikelihood |      AIC | AIC weights |      BIC | BIC weights | BIC_adjusted
#> -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
#> fit  | lavaan | 85.306 |  24.000 |   < .001 |  918.852 |      36.000 |       < .001 | 0.943 | 0.894 | 0.907 | 0.896 | 0.931 | 0.092 | [0.07, 0.11] |    < .001 | 0.082 | 0.065 | 0.861 | 0.605 | 0.931 | 0.931 |     -3737.745 | 7517.490 |     < 0.001 | 7595.339 |     < 0.001 |     7528.739
#> fit2 | lavaan | 55.186 |  11.000 |   < .001 |  545.030 |      21.000 |       < .001 | 0.952 | 0.877 | 0.899 | 0.839 | 0.916 | 0.116 | [0.09, 0.15] |    < .001 | 0.077 | 0.063 | 0.807 | 0.471 | 0.917 | 0.916 |     -2991.866 | 6017.731 |        1.00 | 6080.752 |        1.00 |     6026.838
```

Created on 2022-05-25 by the reprex package (v2.0.1)

(Interestingly, while the error above mentions evaluation nested too deeply, in my console I get a different error: `Error: C stack usage 15928784 is too close to the limit.`)

Given that comparing the performance of lavaan models is already possible, would supporting them in `test_performance` be feasible to implement?
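In the meantime, one manual workaround is the standard BIC approximation to the Bayes factor, `BF_21 ≈ exp((BIC_1 - BIC_2) / 2)` (Wagenmakers, 2007), which is the same kind of BIC-based BF that the `lm` comparison above reports. A sketch using the BIC values from the `compare_performance()` output above (caveat: `fit` and `fit2` here are fit to different sets of indicators, so information-criterion comparisons between them should be interpreted cautiously):

```r
# BIC values copied from the compare_performance() output above
bic_fit  <- 7595.339
bic_fit2 <- 6080.752

# BIC approximation to the Bayes factor: BF_21 ~= exp((BIC_1 - BIC_2) / 2).
# Work on the log scale, since exp(757.29...) overflows to Inf in doubles.
log_bf_21 <- (bic_fit - bic_fit2) / 2
log_bf_21  # ~ 757.29: overwhelming evidence for fit2 on this criterion
```

Reporting the log Bayes factor also matches how `bayestestR` handles extreme BFs internally.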

ConnorEsterwood commented 1 month ago

+1 to this for sure. Just ran into this limitation.

DominiqueMakowski commented 1 month ago

Should be straightforward to add, it just needs someone to do the PR :)