When performing a simulated power analysis, data are generated and a model is fit to the generated data. The latter typically does not succeed when the number of manifest variables exceeds the number of observations. The message is telling you that you would likely reach the desired power of .90 with an N smaller than the number of observed variables, and it therefore refuses to go on. Note that the sample size needed to reach a certain level of power is a different issue from having a sample large enough to support model estimation.
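As a quick illustration of that last point (a toy example of my own, independent of semPower): if you simulate fewer observations than there are manifest variables, the sample covariance matrix is rank-deficient, so no SEM can be fit to such data.

# Toy illustration (random data, not the poster's model): with 38 observations
# on 66 variables the sample covariance matrix cannot be full rank, so ML
# estimation must fail.
set.seed(1)
X <- matrix(rnorm(38 * 66), nrow = 38, ncol = 66)
qr(cov(X))$rank   # at most 37, well below 66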
You could run a simulated post-hoc power analysis with N set to a number larger than the number of variables to verify that power is indeed higher than you initially requested. For instance, the following should yield a power > .90:
powerLav <- semPower.powerLav(
  type = "ph",
  alpha = .05,
  N = 200,
  modelH0 = mod_0_sigma,
  fitH1model = TRUE,
  Sigma = powerSEM$Sigma,
  simulatedPower = TRUE,
  simOptions = list(
    nReplications = 1000,
    nCores = 10
  )
)
Thank you so much for your answer! Due to the complexity of the models, I assumed this was very unlikely to happen and thought I might have made a mistake. I will try your suggested approach. Again, thank you for your help.
Hi! I'm relatively new to simulated power analyses, which may explain why I cannot solve this problem on my own. I hope you'll be able to help me.
I ran into this error
"Error in semPower.aPriori(...) : The required N of 38 is most likely smaller than twice the number of variables. Simulated a priori power will probably not work in this case. If N exceeds p, you can try a simulated post-hoc analyses."
while running a power analysis with semPower.powerLav and simulatedPower = TRUE. When I compute only an analytical power analysis, everything works fine and I get a reasonable result. As soon as I switch to simulation, however, I get the error quoted above and do not know how to resolve it. I also tried raising the required N manually by overwriting it in the SimResult object, but that did not work.
Thank you very much in advance!
These are the models and the code I used:
# population model
popModel <- '
  # measurement model with altered loadings
  y1 =~ 0.9*x1 + 0.8*x2 + 0.7*x3
  y2 =~ 0.8*x4 + 0.7*x5 + 0.6*x6
  y3 =~ 0.7*x7 + 0.6*x8 + 0.5*x9
  y4 =~ 0.9*x10 + 0.8*x11 + 0.7*x12
  y5 =~ 0.8*x13 + 0.7*x14 + 0.6*x15
  y6 =~ 0.9*x16 + 0.8*x17 + 0.7*x18
  y7 =~ 0.7*x19 + 0.6*x20 + 0.5*x21
  y8 =~ 0.9*x22 + 0.8*x23 + 0.7*x24
  y9 =~ 0.8*x25 + 0.7*x26 + 0.6*x27
  y10 =~ 0.9*x28 + 0.8*x29 + 0.7*x30
  y11 =~ 0.9*x31 + 0.8*x32 + 0.7*x33
  y12 =~ 0.7*x34 + 0.6*x35 + 0.5*x36
  y13 =~ 0.8*x37 + 0.7*x38 + 0.6*x39
  y14 =~ 0.9*x40 + 0.8*x41 + 0.7*x42
  y15 =~ 0.7*x43 + 0.6*x44 + 0.5*x45
  y16 =~ 0.8*x46 + 0.7*x47 + 0.6*x48
  y17 =~ 0.9*x49 + 0.8*x50 + 0.7*x51
  y18 =~ 0.8*x52 + 0.7*x53 + 0.6*x54
  y19 =~ 0.8*x55 + 0.7*x56 + 0.6*x57
  y20 =~ 0.8*x58 + 0.7*x59 + 0.6*x60
  y21 =~ 0.8*x61 + 0.7*x62 + 0.6*x63
  y22 =~ 0.8*x64 + 0.7*x65 + 0.6*x66

  # variances of latent variables (set to 1 for simplicity)
  y1 ~~ 1*y1;   y2 ~~ 1*y2;   y3 ~~ 1*y3;   y4 ~~ 1*y4;   y5 ~~ 1*y5
  y6 ~~ 1*y6;   y7 ~~ 1*y7;   y8 ~~ 1*y8;   y9 ~~ 1*y9;   y10 ~~ 1*y10
  y11 ~~ 1*y11; y12 ~~ 1*y12; y13 ~~ 1*y13; y14 ~~ 1*y14; y15 ~~ 1*y15
  y16 ~~ 1*y16; y17 ~~ 1*y17; y18 ~~ 1*y18; y19 ~~ 1*y19; y20 ~~ 1*y20
  y21 ~~ 1*y21; y22 ~~ 1*y22

  # covariances (moderate values)
  y1 ~~ 0.4*y2;  y3 ~~ 0.3*y4;   y6 ~~ 0.4*y7
  y9 ~~ 0.3*y10; y19 ~~ 0.1*y18; y20 ~~ 0.1*y22

  # regressions (adjusted coefficients)
  y11 ~ 0.5*y18 + 0.4*y15 + 0.6*y14 + 0.3*y13 + 0.7*y12 + 0.8*y19 + 0.4*y21 + 0.7*y22
  y5  ~ 0.4*y2 + 0.5*y1 + 0.6*y3 + 0.3*y4 + 0.4*y6 + 0.5*y7 + 0.4*y8 + 0.3*y18 + 0.6*y20
  y6  ~ 0.6*y3 + 0.4*y4 + 0.5*y9 + 0.3*y10 + 0.5*y18 + 0.6*y16 + 0.7*y17 + 0.4*y14 + 0.5*y13 + 0.6*y15
  y7  ~ 0.5*y3 + 0.6*y4 + 0.7*y9 + 0.8*y10 + 0.5*y18 + 0.4*y16 + 0.5*y17 + 0.4*y14 + 0.6*y12
  y8  ~ 0.7*y9 + 0.6*y10 + 0.5*y6 + 0.4*y7 + 0.3*y3 + 0.4*y4 + 0.5*y18
'
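As a side note (this check is my addition, not part of the original post), you can verify that the population model parses and generates data for all 66 indicators by drawing a large sample with lavaan before handing it to semPower:

# Hedged sanity check: simulate a large sample from the population model;
# sample.nobs = 1000 is an arbitrary choice for illustration.
library(lavaan)
d <- simulateData(popModel, sample.nobs = 1000)
dim(d)                      # should be 1000 x 66
min(eigen(cov(d))$values)   # should be positive if the implied covariance matrix is proper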
# analysis model
mod <- '
  # measurement model (including dummy latent variables to match the dimensionality)
  y1 =~ 1*x1 + x2 + x3
  y2 =~ 1*x4 + x5 + x6
  y3 =~ 1*x7 + x8 + x9
  y4 =~ 1*x10 + x11 + x12
  y5 =~ 1*x13 + x14 + x15
  y6 =~ 1*x16 + x17 + x18
  y7 =~ 1*x19 + x20 + x21
  y8 =~ 1*x22 + x23 + x24
  y9 =~ 1*x25 + x26 + x27
  y10 =~ 1*x28 + x29 + x30
  y11 =~ 0*x31 + 0*x32 + 0*x33  # Dummy latent variable
  y12 =~ 0*x34 + 0*x35 + 0*x36  # Dummy latent variable
  y13 =~ 0*x37 + 0*x38 + 0*x39  # Dummy latent variable
  y14 =~ 0*x40 + 0*x41 + 0*x42  # Dummy latent variable
  y15 =~ 0*x43 + 0*x44 + 0*x45  # Dummy latent variable
  y16 =~ 0*x46 + 0*x47 + 0*x48  # Dummy latent variable
  y17 =~ 0*x49 + 0*x50 + 0*x51  # Dummy latent variable
  y18 =~ 0*x52 + 0*x53 + 0*x54  # Dummy latent variable
  y19 =~ 0*x55 + 0*x56 + 0*x57  # Dummy latent variable
  y20 =~ 0*x58 + 0*x59 + 0*x60  # Dummy latent variable
  y21 =~ 0*x61 + 0*x62 + 0*x63  # Dummy latent variable
  y22 =~ 0*x64 + 0*x65 + 0*x66  # Dummy latent variable

  # variances of latent variables (set to 1 for consistency)
  y1 ~~ 1*y1;   y2 ~~ 1*y2;   y3 ~~ 1*y3;   y4 ~~ 1*y4;   y5 ~~ 1*y5
  y6 ~~ 1*y6;   y7 ~~ 1*y7;   y8 ~~ 1*y8;   y9 ~~ 1*y9;   y10 ~~ 1*y10
  y11 ~~ 0*y11; y12 ~~ 0*y12; y13 ~~ 0*y13; y14 ~~ 0*y14; y15 ~~ 0*y15
  y16 ~~ 0*y16; y17 ~~ 0*y17; y18 ~~ 0*y18; y19 ~~ 0*y19; y20 ~~ 0*y20
  y21 ~~ 0*y21; y22 ~~ 0*y22

  # regressions (trivial or null regressions to maintain structure)
  y4 ~ y1 + y2 + y3 + y5 + y6 + y7
  y5 ~ y2 + y3 + y8 + y9
  y6 ~ y2 + y3 + y8 + y9
  y7 ~ y8 + y9 + y5 + y6 + y2 + y3
  y8 ~ 0*y9 + 0*y10 + 0*y6 + 0*y7 + 0*y3 + 0*y4 + 0*y18

  # Add trivial paths or covariances to ensure dimensional alignment
  y11 ~ 0*y1 + 0*y2
  y12 ~ 0*y3 + 0*y4
  y13 ~ 0*y5 + 0*y6
  y14 ~ 0*y7 + 0*y8
  y15 ~ 0*y9 + 0*y10
  y16 ~ 0*y11 + 0*y12
  y17 ~ 0*y13 + 0*y14
  y18 ~ 0*y15 + 0*y16
  y19 ~ 0*y17 + 0*y18
  y20 ~ 0*y19 + 0*y20
  y21 ~ 0*y21 + 0*y22
  y22 ~ 0*y1 + 0*y2 + 0*y3 + 0*y4 + 0*y5 + 0*y6
'
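Similarly (again my addition rather than part of the original post), parsing the analysis model with lavaan gives a quick count of the manifest variables and free parameters, which makes it easier to see how the required N relates to the size of the model:

# Hedged sanity check: parse the H0 model with lavaan-style defaults and count
# manifest variables and free parameters.
library(lavaan)
pt <- lavaanify(mod, auto = TRUE)
length(lavNames(pt, type = "ov"))   # 66 manifest variables
sum(pt$free > 0)                    # number of free parameters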
# computing Sigma
library(semPower)

powerSEM <- semPower.powerLav(
  type = "a-priori",
  alpha = .05,
  power = .90,
  modelPop = popModel,
  modelH0 = mod_0,
  simulatedPower = FALSE
)
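For context (my addition, following the explanation above): the error reports a required N of around 38, while the model-implied covariance matrix in powerSEM$Sigma covers 66 manifest variables, so the simulated run is being asked to fit a 66-variable model to fewer cases than variables.

# The Sigma slot is the model-implied population covariance matrix (it is passed
# as input to the simulated run below); its dimension gives the number of
# manifest variables.
ncol(powerSEM$Sigma)   # 66, i.e. larger than the required N of 38 from the error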
# simulation
set.seed(12345)

# power analysis
powerLav <- semPower.powerLav(
  type = "a-priori",
  alpha = .05,
  beta = .01,
  N = 100,
  modelH0 = mod_0_sigma,
  fitH1model = TRUE,
  Sigma = powerSEM$Sigma,
  simulatedPower = TRUE,
  simOptions = list(
    nReplications = 1000,
    N = 2500,
    # normal distribution or non-normal