pitakakariki / simr

Power Analysis of Generalised Linear Mixed Models by Simulation

simr error message #172

Open · Hana-HK opened this issue 4 years ago

Hana-HK commented 4 years ago

Hello,

I would like to estimate my power to detect a 3way interaction in the following model:

model <- lmer(SBP ~ Treatment * Stressor * Time + (Time | ID), data = BPdata,
              contrasts = list(Treatment = "contr.treatment", Stressor = "contr.treatment", Time = "contr.poly"))

So I first tried:

doTest(model, fcompare(~ Treatment + Stressor + Time + (Time | ID)))

I got:

p-value for model comparison: 4.969777e-11
          --------------------
Test: Likelihood ratio
      Comparison to ~Treatment + Stressor + Time + (Time | ID) + [re]

Then I continued:

powerSim(model, nsim = 100, test = fcompare(~ Treatment + Stressor + Time + (Time | ID)))

and got:

Power for model comparison, (95% confidence interval):
       0.00% ( 0.00, 30.85)

Test: Likelihood ratio
      Comparison to ~Treatment + Stressor + Time + [re]

Based on 100 simulations, (0 warnings, 10 errors)
alpha = 0.05, nrow = NA

lastResult()$errors told me the following:

 message
1  mismatch between "beta" parameter vector names and internal names ((Intercept),TreatmentTestosterone,StressorCPT,StressorLWT,TimePre,TreatmentTestosterone:StressorCPT,TreatmentTestosterone:StressorLWT,TreatmentTestosterone:TimePre,StressorCPT:TimePre,StressorLWT:TimePre,TreatmentTestosterone:StressorCPT:TimePre,TreatmentTestosterone:StressorLWT:TimePre) (....)

Do you have any idea what I might be doing wrong? Your help will be greatly appreciated. Thank you very much! (P.S. I know this is an observed power calculation, but that's what the reviewer asked for....)

pitakakariki commented 4 years ago

I can't see any mistakes in what you've done, so my best guess is that it's a bug somewhere in my code that handles contrasts.

Does it run properly without contrasts?
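If it helps, that check could look something like this (just a sketch reusing your formula; model2 is an illustrative name):

library(lme4)
library(simr)

# Refit the same model without the explicit contrasts argument,
# so lmer falls back on R's default contrasts.
model2 <- lmer(SBP ~ Treatment * Stressor * Time + (Time | ID), data = BPdata)

powerSim(model2, nsim = 100, test = fcompare(~ Treatment + Stressor + Time + (Time | ID)))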

Sorry about your reviewer.

Hana-HK commented 4 years ago

Hi Peter,

Thanks a lot for the prompt reply. If I run the model and the simulation without contrasts, I get:

Power for model comparison, (95% confidence interval):
      100.0% (69.15, 100.0)

Test: Likelihood ratio
      Comparison to ~Treatment + Time + Stressor + (Time | ID) + [re]

Based on 10 simulations, (8 warnings, 0 errors)
alpha = 0.05, nrow = 141836

The warnings are: Testing: Model failed to converge with max|grad| = 0.00327315 (tol = 0.002, component 1).

Does this mean I get imprecise power estimates because my model does not converge? Also, when I try to change the fixed effects, for instance with fixef(model_sbp_lmer_simr6)["Treatment1:Time1:Stressor1"] <- 2, the estimated power does not change, so I guess something must be wrong....

Thanks again!

pitakakariki commented 4 years ago

The convergence warnings are happening at the testing stage, which suggests they're coming from the smaller model. Together with the 100% power estimate, that suggests to me that the interactions are so large that the model without interactions can't handle them.

Note that the comparison you're making is no interactions versus all interactions. I count seven interaction terms, so if you're testing them all simultaneously then you may need to make them all smaller to have any impact on power.
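On the fixef assignment not changing anything: one possibility (just a guess) is a name mismatch. fixef<- only replaces a coefficient whose name appears in names(fixef(...)) exactly, and judging by the error message above the fitted names look like "TreatmentTestosterone:StressorCPT:TimePre" rather than "Treatment1:Time1:Stressor1". Something like this should show what to use:

# List the exact fitted coefficient names before assigning.
names(fixef(model_sbp_lmer_simr6))

# Then assign using a name that actually appears in that vector, e.g. the
# dummy-coded label from the error message (adjust to your own fit):
fixef(model_sbp_lmer_simr6)["TreatmentTestosterone:StressorCPT:TimePre"] <- 2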

My next recommendation would be to set all 7 interaction terms to zero. That should give you 5% power (i.e. the Type I error rate) if everything is working properly. If that works, it should give you some confidence in the power analysis at other parameter values.
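A minimal sketch of that null check, assuming the interaction terms are exactly the coefficients whose fitted names contain a colon (worth verifying against names(fixef(model)) first):

library(simr)

# Zero out every interaction coefficient (any fitted name containing ":").
int_terms <- grep(":", names(fixef(model)), value = TRUE)
fixef(model)[int_terms] <- 0

# With all interactions at zero, the rejection rate should be close to
# alpha, i.e. about 5%.
powerSim(model, nsim = 100, test = fcompare(~ Treatment + Stressor + Time + (Time | ID)))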

Hana-HK commented 4 years ago

Ahh, I see. I will try that out. Thanks a lot!