Closed pitakakariki closed 4 years ago
Hi all, I got a similar error:

```
Power for predictor 'HANDRIGHT', (95% confidence interval):
       0.00% ( 0.00,  3.62)

Test: t-test with Satterthwaite degrees of freedom (package lmerTest)
      Effect size for HANDRIGHT is 0.16

Based on 100 simulations, (0 warnings, 100 errors)
alpha = 0.05, nrow = 146

Time elapsed: 0 h 0 m 2 s
```
The code I used (where I insert the name of the fixed factor of interest and the related estimate) was:

```r
library(simr)
fixef(mimodel)["HANDRIGHT"] <- 0.15519
powerSim(mimodel, test = fixed("HANDRIGHT", "t"), nsim = 100)
```
I have the following versions: R 3.5, simr 1.0.4. What is very strange is that the same code with the same data runs properly on a Mac. What is the problem?
Ah, I forgot: if I use the call `lastResult()$errors`, here is what I receive:

```
    stage index message
1 Fitting     1 object of type 'symbol' is not subsettable
...
```

This is repeated from 1 to 100 simulations (I guess). Thank you for your help!
What are the lmerTest versions here and on the Mac?
Thanks for your answer! On the PC the lmerTest version is 3.0-1; on the Mac it is 2.0.33. However, even if I comment out the line `# library(lmerTest)` and do not load any other library, I get the same error.
What version of simr on the Mac? It should give an error for anything less than 3.0-0.
Did you restart the session after you commented out `library(lmerTest)`?
simr on the Mac is 1.0.3; no error on the Mac. Yes, I have restarted the session.
What does `getCall(mimodel)` look like?
```r
lmer(formula = formula, data = mi, REML = FALSE)
```
Thanks. And `class(mimodel)` please?
I think if you specify the formula directly in the lmer call it might work.
```
[1] "lmerModLmerTest"
attr(,"package")
[1] "lmerTest"
```

And I tried to insert the formula directly in `mimodel <- lmer(formula, REML = FALSE, data = mi)`, and you are right, it works!!!
So the problem is due to an "incompatibility" with lmerTest? Do I have to avoid running `library(lmerTest)`?
lme4 does some magic to convert the stored call to look like you entered the formula directly. lmerTest for some reason overrides this and stores the indirect version.
Options for now are either not to load lmerTest, or to specify the formula directly; at some point I'll try to write a workaround.
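The stored-call difference can be seen even in base R, where a model object keeps whatever expression was passed as the formula. A minimal sketch with `lm` (the data and variable names here are made up, and `lm` stands in for the `lmer` case discussed above):

```r
# Toy data; names are arbitrary
d <- data.frame(y = rnorm(20), x = rnorm(20))

f <- y ~ x
m_indirect <- lm(f, data = d)      # call stores the bare symbol `f`
m_direct   <- lm(y ~ x, data = d)  # call stores the formula itself

deparse(getCall(m_indirect)$formula)  # "f"
deparse(getCall(m_direct)$formula)    # "y ~ x"
```

When a downstream tool like simr re-evaluates the stored call in a fresh environment, a bare symbol such as `f` (or `formula` above) may not resolve, which is why specifying the formula directly sidesteps the problem.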
Thank you. I am trying different analyses as you suggested, and it works. However, when I re-run the code many times without changing the estimate or the t (so every input is the same), the power I obtain is slightly different each time (e.g. from 75% to 81%). What does this mean? Do I have to increase the number of simulations? I have chosen 100 simulations; with 1000 it seems more stable (but it still changes a bit). In this example I have 146 observations.

Moreover, is it correct to use as input in the formula the "estimate" for the fixed effect I obtain from the model (linear mixed model, `lmer`)? For instance, if I have a fixed factor (HAND) with 4 levels (LEFT, RIGHT, HEALTHY, PLEGIC) and I obtain this:

```
              Estimate Std. Error       df t value Pr(>|t|)
(Intercept)    0.35096    0.02304 89.28572  15.235  < 2e-16 *
HANDHEALTHY   -0.02019    0.01468 73.00000  -1.376   0.1731
HANDLEFT       0.07622    0.02871 89.28572   2.655   0.0094 *
HANDRIGHT      0.13500    0.02871 89.28572   4.702 9.32e-06
```
To estimate the power of the contrast between HANDLEFT and the fourth level of the factor (PLEGIC, which is not reported here because it is the reference level against which all the contrasts are presented), is the following code correct? And is the chosen test ("t") correct even though the main fixed factor has 4 levels?

```r
fixef(mimodel)["HANDLEFT"] <- 0.07622
powerSim(mimodel, test = fixed("HANDLEFT", "t"), nsim = 100)
```
What I would like to know is whether the contrast between, for instance, HANDPLEGIC and HANDLEFT has good power; this is why I have chosen "t" as the test. Thank you again for your precious help!
simr works by "Monte Carlo" simulation, so if the number of simulations is too small the estimate will jump around a lot. You'll need to increase the number of simulations until the confidence interval for the power estimate is narrow enough for your purposes.
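To get a feel for how the interval narrows with `nsim`, here is a rough base-R sketch of the width of an exact 95% binomial confidence interval around an observed power of 78% (the interval simr actually prints may be computed slightly differently, and 78% is just an illustrative value):

```r
# Width of an exact (Clopper-Pearson) 95% CI around an observed power of 78%
for (nsim in c(100, 1000, 10000)) {
  hits <- round(0.78 * nsim)               # simulated "significant" runs
  ci <- binom.test(hits, nsim)$conf.int    # exact binomial CI
  cat(sprintf("nsim = %5d: 95%% CI (%.1f%%, %.1f%%), width %.1f%%\n",
              nsim, 100 * ci[1], 100 * ci[2], 100 * diff(ci)))
}
```

The width shrinks roughly with the square root of `nsim`, which matches the observation that 1000 simulations look much more stable than 100.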
It's usually a bad idea to use the estimate from your model as the effect size in a power analysis. This is the dreaded "post-hoc power analysis".
It would be better to use a value for the effect that is "just barely interesting". Remember that at 80% power you still have a 20% chance of failing to detect the effect.
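The "just barely interesting" idea can be illustrated with a plain Monte Carlo power check in base R, independent of simr (all numbers here are hypothetical and not taken from the model above): fix the smallest effect you would still care about, simulate under it, and count rejections.

```r
set.seed(42)
nsim  <- 1000
n     <- 30    # hypothetical group size
delta <- 0.5   # smallest difference considered "just barely interesting"
sigma <- 1     # hypothetical residual SD

# Simulate two-group data under the chosen effect and count significant t-tests
pvals <- replicate(nsim,
                   t.test(rnorm(n, delta, sigma), rnorm(n, 0, sigma))$p.value)
power <- mean(pvals < 0.05)
power  # compare with the analytic power.t.test(n = 30, delta = 0.5)
```

This is the same logic simr applies to the mixed model: the effect size is an input you choose, not an output you read back from the fitted model.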
Print a warning if there are any.