@sfcheung, I was going to reply to you about casewise likelihood, but when I saw your email, it made me think that there should be some way to approximate case influence without rerunning the models. One approach I can think of uses the scores given by `lavaan::lavScores()`, which returns the gradient of each case's loglikelihood with respect to the parameter vector. The results seem promising, and can be found in this vignette in the `scores` branch I just created. Would you take a look and see what you think? I think the approximation should be more accurate in large samples, and it is also most needed in large samples, where refitting the model many times is costly.
Thanks, @marklhc! It looks excellent! :smile: 👍 I will update other pages and docs to mention this method along with the conventional leave-one-out method.