tmharrison opened this issue 6 years ago
It looks like `summary(model2)` won't give you the updated sample size - is that what you mean when you say it appears not to work? The easiest way to check is with `nrow(getData(model2))`, or alternatively you could check the printout from `powerSim`.
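Assuming the model names used in this thread, that check might look like the following (not runnable as-is, since it depends on the poster's pilot data):

```r
library(simr)

# Compare row counts before and after extend() (hypothetical model names)
nrow(getData(model1))   # pilot data: 34 subjects x 2 observations = 68
nrow(getData(model2))   # should be larger if extend() actually worked
```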
When I ran `powerSim` on `model2`, `nrow` was still 68, so I worried I had not extended the model as I intended. In the `extend` command I was hoping to add 75 "new" subjects to the model, which should have increased the power, but in the `model2` `powerSim` output the power was still in the mid-20% range.
Update: I got the `extend` command working and now `model2` `nrow` equals what I expect. I tried extending to subject n=100, so that nrow=200. The `powerSim` on this model, however, gives a power of 33%, so still quite low. Should I interpret this as meaning we would need many, many more than 100 subjects to detect this effect?
I have some work planned to make the extend function easier to use - if you could let me know what went wrong that you needed to fix that would be quite helpful.
My best guess at the moment is that your model is struggling to estimate the effect because the two repeated observations are too close together for any differential change to be obvious. You could test this by extending along your time variable.
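A sketch of what extending along a time variable might look like (the variable name `time` and the wave count are assumptions; the actual model formula does not appear in this thread):

```r
library(simr)

# Hypothetical: add extra follow-up waves instead of extra subjects.
# "time" is an assumed name for the time variable in the pilot model.
model_time <- extend(model1, along="time", n=4)  # e.g. 4 waves instead of 2
powerSim(model_time, nsim=100)
```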
I am not sure why my attempt to use the extend function did not work at first, I think it was an error on my part and not a problem with the function itself.
Thank you for the suggestion about extending along the time variable to test if more follow-ups are needed to power the model. It makes sense and I will try it!
I used the approach I described in my original post (this time with a successful application of `extend` to model a larger number of subjects) to predict a different DV that differs between my two groups more obviously -- in this case my model1 had 35% power and model2 (with `along="subject"`, n=125) had 82% power. I used `powerCurve` to plot model2. Is this approach sound for making a claim about the sample size needed for this type of effect (a group*time interaction)?
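For reference, a `powerCurve` call over an extended model might be sketched like this (model name and break points are assumptions):

```r
library(simr)

# Hypothetical: power at several sample sizes along the "subject" variable
pc <- powerCurve(model2, along="subject",
                 breaks=c(50, 75, 100, 125), nsim=100)
print(pc)
plot(pc)  # power vs. number of subjects
```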
Thanks for the great toolset.
It sounds like you're on the right track.
I'd recommend thinking carefully about what sort of minimum effect size you want to be able to detect though (that's what the observed power warning above is about). Remember that 80% isn't really great in terms of power - I prefer to treat it as an "acceptable" benchmark for a "worst case" small effect size.
Note that the power simulation is only as good as the model and parameter values fed into it. It's usually worthwhile to play around with some of the parameters to make sure your result isn't too sensitive to departures from the pilot values.
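One way to probe that sensitivity in simr is to override the fixed effect of interest before simulating; the coefficient name below is an assumption, since the model formula isn't shown in this thread:

```r
library(simr)

# Hypothetical: re-check power at a smaller, "worst case" interaction effect
model_small <- model2
fixef(model_small)["group:time"] <- 0.5 * fixef(model2)["group:time"]
powerSim(model_small, nsim=100)
```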
Makes sense. Thanks for the very good advice and for being so responsive!
Hi,
From a preliminary model with 34 subjects (2 repeated observations per subject) I want to determine the sample size needed to detect the effect of an interaction term with 80% power. We hypothesize that there is a group by time interaction (such that the DV changes more over time in one group compared to the other) and want to know how many subjects we would need to observe a significant effect.
My model is:
So I tried:
This runs and gives me this output:
Next, I tried to extend model1 along "subject" (`model2 <- extend(model1, along="subject", n=75)`) and re-ran `powerSim`, but this did not appear to work because the number of subjects was unchanged. My ultimate goal is to run `powerCurve` and see how power changes with increasing sample size. Thanks in advance for any advice!
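Since the original code blocks did not survive here, a hedged reconstruction of the workflow being described (the model formula, variable names, and data frame name are all assumptions, not the poster's actual code) would be:

```r
library(simr)

# Hypothetical pilot model: 34 subjects, 2 observations each,
# with a group-by-time interaction as the effect of interest
model1 <- lmer(dv ~ group * time + (1 | subject), data = pilot)

# Power for the model at the pilot sample size
powerSim(model1, nsim=100)

# Extend along "subject" and re-check power
model2 <- extend(model1, along="subject", n=75)
powerSim(model2, nsim=100)

# Power as a function of the number of subjects
plot(powerCurve(model2, along="subject", nsim=100))
```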