stevencarlislewalker opened this issue 11 years ago
With a lower tolerance, convergence is very slow (~10-15s on my MacBook).
The problem is the first step chosen for the fixed-effects coefficients. The magnitudes of the two coefficients for the age variable are so small that a step of, say, 0.1 sends the algorithm to never-never land. The solution is to use a more reasonable initial step.
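To see why a fixed first step of 0.1 is so damaging when the true coefficient is on the order of 0.005, here is a toy illustration (Python, with made-up data; this is a sketch of the scaling problem, not lme4's actual code): one step of 0.1 on a large-magnitude covariate saturates the fitted probabilities, and the IRLS working weights collapse to essentially zero.

```python
import math, random

random.seed(1)
# Hypothetical covariate with large magnitude (think: age measured in days),
# so the matching coefficient is tiny (here 0.006).
x = [random.uniform(-200, 200) for _ in range(500)]
true_b = 0.006
y = [1 if random.random() < 1 / (1 + math.exp(-true_b * xi)) else 0 for xi in x]

def deviance(b):
    """-2 * log-likelihood of a logistic regression through the origin."""
    ll = 0.0
    for xi, yi in zip(x, y):
        eta = b * xi
        # log(1 + exp(eta)), computed stably
        lse = max(eta, 0.0) + math.log1p(math.exp(-abs(eta)))
        ll += yi * eta - lse
    return -2.0 * ll

d_opt  = deviance(true_b)        # near the optimum
d_step = deviance(true_b + 0.1)  # after one "reasonable-looking" step of 0.1

# At b ~ 0.1 the linear predictor reaches +/-20, so fitted probabilities
# saturate at 0 or 1 and the IRLS working weight mu*(1-mu) collapses:
p = 1 / (1 + math.exp(-(true_b + 0.1) * 150))
w_step = p * (1 - p)
```

With the weights numerically zero, the penalized weighted least squares problem that determines the next step is ill-conditioned, which is the "never-never land" behaviour described above.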
Interesting. Why does glmer work fine? Is it choosing a more reasonable initial step somehow?
This is a case where the initial iterations with nAGQ=0 are helpful in obtaining a refinement of the fixed-effects coefficients and in giving reasonable standard errors to determine the step size for the optimizer when using nAGQ>0.
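The value of having per-parameter standard errors to set step sizes can be seen with a language-neutral sketch (Python, toy objective; an illustration of the scaling idea only, not lme4's optimizer): a derivative-free search whose initial steps match the parameters' scales finds the optimum immediately, while a one-size-fits-all step of 0.1 wastes its iterations on the tiny coordinate.

```python
def pattern_search(f, x0, steps, iters=15):
    """Crude derivative-free search: try +/- step along each coordinate,
    halve all steps whenever a full sweep brings no improvement."""
    x, fx, steps = list(x0), f(x0), list(steps)
    for _ in range(iters):
        improved = False
        for i in range(len(x)):
            for s in (steps[i], -steps[i]):
                trial = list(x)
                trial[i] += s
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            steps = [s / 2 for s in steps]
    return fx

# Badly scaled objective: one coefficient near 1, the other near 0.005
# (curvatures differ by about 5 orders of magnitude).
def f(b):
    return (b[0] - 1.0) ** 2 + ((b[1] - 0.005) / 0.005) ** 2

f_fixed  = pattern_search(f, [0.0, 0.0], [0.1, 0.1])    # same step for both
f_scaled = pattern_search(f, [0.0, 0.0], [1.0, 0.005])  # steps match the scales
```

After 15 sweeps the fixed-step search is still visibly short of the optimum, while the scale-aware search hits it on the first sweep; standard errors from an nAGQ=0 fit play exactly this scale-setting role.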
Of course. Thanks Doug. But maybe we only need one or two nAGQ=0 iterations.
I'm a bit confused by this behaviour,
gmod <- glFormula(form, data = Contraception, family = binomial, nAGQ = 1)
devf <- do.call(mkGlmerDevfun, gmod)
# skip the initial nAGQ=0 optimization (no stage-1 call to optimizeGlmer)
devf <- updateGlmerDevfun(devf, gmod$reTrms)
optimizeGlmer(devf, stage=2)$par
[1] 0.474010082 -1.006445615 0.006255540 -0.004635385 0.860439478
[6] 0.692959336
This works fine, despite skipping the initial nAGQ=0 step. I could be wrong that the initial step is completely skipped; perhaps the resolution is that in setting up devf we get one round of nAGQ=0, which smooths everything over. I'll look into it.
More to the point, this also fails,
glmer0 <- glmer(form, data = Contraception, family = binomial, nAGQ = 0)
ll <- plsform(form, data = Contraception, family = binomial)
devf <- do.call(pirls, c(ll, list(family=binomial,eta=qlogis(getME(glmer0, 'mu')), tol=1e-6)))
opt <- minqa:::bobyqa(c(glmer0@theta,glmer0@beta), devf)
Error in fn(x, ...) : Step-halving failed
This is surprising because I've started the optimizer at the nAGQ=0 optimum, which seems to suggest that there might actually be something wrong with step-halving itself.
Failure of step-halving is not the problem; it's the symptom. Switch from tol=1e-6 to verbose=2L and see where the algorithm is trying to evaluate the Laplace approximation. It is way out in the boondocks, and the nature of the link function means that small changes in the value of u do not change the penalized weighted residual sum-of-squares.
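For reference, the step-halving logic at issue can be sketched like this (a simplified Python mock-up, not the actual PIRLS code in lme4): the proposed increment for u is halved until the penalized weighted RSS decreases, and when the criterion is flat around the current u (exactly the saturation effect described above), no halving ever produces a decrease and the iteration aborts.

```python
def half_step(pwrss, u, delta, max_halvings=10):
    """Accept u + fac * delta for the largest fac in 1, 1/2, 1/4, ...
    that decreases pwrss(u); give up after max_halvings attempts."""
    base = pwrss(u)
    fac = 1.0
    for _ in range(max_halvings):
        cand = [ui + fac * di for ui, di in zip(u, delta)]
        if pwrss(cand) < base:
            return cand
        fac /= 2.0
    raise RuntimeError("Step-halving failed")

# On a well-behaved quadratic, the first (or a halved) step is accepted:
ok = half_step(lambda u: sum(ui ** 2 for ui in u), [1.0, 1.0], [-1.5, -1.5])

# On a locally flat surface, where moving u barely changes the criterion,
# no halving ever decreases it and the iteration aborts:
try:
    half_step(lambda u: 1.0, [1.0, 1.0], [-1.5, -1.5])
    failed = False
except RuntimeError:
    failed = True
```

So "Step-halving failed" simply reports that no shrunken step improved the criterion; the underlying cause is where the outer optimizer asked for the evaluation, not the halving loop itself.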
Here's the example,
If we lower the tolerance it gets close to the glmer answer,