Closed: IndrajeetPatil closed this issue 6 years ago.
I think lmerTest uses the Satterthwaite approximation for the degrees of freedom by default, while sjstats uses only the "quick" (Wald) solution or the Kenward-Roger approximation.
Thanks. That explains the differences between outputs when p.kr = TRUE.

But why are the results different when p.kr = FALSE? In that case the Kenward-Roger approximation is not being used, right? Why do the p-values still differ?
Because p_value() calculates Wald p-values, while lmerTest calculates Satterthwaite p-values. See ?lmerTest::summary for the options.

I think that if you pass a lmerModLmerTest model to p_value(), it returns the same p-values as summary() (can't check right now; sorry for the short answers, I'm on my mobile phone).
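The core of that difference can be sketched in plain Python (a sketch, not R and not the packages' actual code): both approaches start from the same t statistic (estimate divided by standard error), but a Wald p-value compares it to a standard normal distribution, while a Satterthwaite-style p-value compares it to a t distribution with an approximated (often small) df. The t statistic and df below are hypothetical, chosen only to show the size of the gap:

```python
import math
from statistics import NormalDist

def student_t_sf(t, df, steps=100_000, tail=50.0):
    """Upper-tail probability P(T > t) for Student's t with `df` degrees
    of freedom, via trapezoidal integration of the density (stdlib only)."""
    const = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    density = lambda x: const * (1 + x * x / df) ** (-(df + 1) / 2)
    hi = t + tail                       # density is negligible beyond this
    h = (hi - t) / steps
    area = 0.5 * (density(t) + density(hi))
    for i in range(1, steps):
        area += density(t + i * h)
    return area * h

t_stat = 2.8            # hypothetical: same estimate and SE in both packages
df_approx = 12          # hypothetical Satterthwaite-approximated df

# Wald: treat the statistic as standard normal
wald_p = 2 * (1 - NormalDist().cdf(t_stat))
# Satterthwaite-style: t reference distribution with finite df
satt_p = 2 * student_t_sf(t_stat, df_approx)

print(f"Wald (normal reference): p = {wald_p:.4f}")   # about 0.005
print(f"t reference, df = 12:    p = {satt_p:.4f}")   # about 0.016
```

With identical standard errors, the normal reference always gives the smaller p-value, and for small df the discrepancy can easily cross conventional thresholds such as p < 0.01, which matches the behavior described in this thread.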
In their "Keep it maximal" paper (http://talklab.psy.gla.ac.uk/KeepItMaximalR2.pdf), Barr and colleagues tabulate different ways one can compute p-values for lmer models. If I am not mistaken, both lmerTest and sjstats use the first approach to compute the p-values for linear mixed-effects models?

What I am confused by is why the p-values are different when computed with lmerTest as compared to sjstats, especially when p.kr = FALSE and the standard errors for the estimates from these two functions (lmerTest::as_lmerModLmerTest and sjstats::p_value) are identical. I'm a bit worried because the differences are pretty big, with p-values sometimes going from p < 0.01 to p < 0.001. I read the details for the sjstats function, but still couldn't come up with an explanation.

Created on 2018-07-16 by the reprex package (v0.2.0).