Hi @tamas-ferenci
There are many reasons for convergence problems; some of the most common are:

In general, all of these possible problem areas lead to a difficult likelihood surface with many local minima. This is not a problem unique to nlmixr.
Any of the above can lead to these problems. It doesn't actually mean you are making a mistake, nor that the results are practically useless. The likelihood values can be used to discern the best model parameter estimates to make predictions from. If more than one model has a similar AIC, you can also use model-averaging techniques to produce a good simulation of future events.
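As an illustration of the averaging idea, here is a minimal sketch of the standard Akaike weights (each model is weighted in proportion to exp(-ΔAIC/2)); `fit1` and `fit2` are hypothetical fit objects standing in for the competing models:

```r
## Akaike weights for two candidate models; fit1 and fit2 are
## hypothetical nlmixr fit objects.
aics  <- c(fit1 = AIC(fit1), fit2 = AIC(fit2))
delta <- aics - min(aics)                        # AIC differences
w     <- exp(-delta / 2) / sum(exp(-delta / 2))  # Akaike weights
w  # use these to weight predictions/simulations from each model
```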
Note that with your example and the latest nlmixr I have the following results:
This shows `fit2` is better with the updated data. Notice that the nlmixr vignette's initial estimates have been updated too.
The other possibility, of course, is that there is a local minimum that the algorithm found. By restarting with the final estimates as the new initial estimates you can try to escape it. Both the SAEM and FOCEi algorithms change their search path based on new initial conditions, which can cause nlmixr to find new and possibly better solutions. NONMEM has a similar option; I am unsure about Monolix, though.
This is also likely the case for `theo_sd`. Notice that running from the same initial estimates a third time gives similar parameter estimates (and objective function values).
Hence, with any mixed-effect model, it is best to rerun the model with the final estimates to make sure it is stable.
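For concreteness, here is a minimal sketch of such a stability check, using the one-compartment theophylline model from the nlmixr vignette (initial estimates as in the vignette; whether a fit object can be passed back to `nlmixr()` to reuse its final estimates as the new initials may depend on your version, in which case copy them into `ini()` by hand):

```r
library(nlmixr)

## One-compartment model for theo_sd, as in the nlmixr vignette
one.cmt <- function() {
  ini({
    tka <- 0.45   # log Ka
    tcl <- 1      # log CL
    tv  <- 3.45   # log V
    eta.ka ~ 0.6
    eta.cl ~ 0.3
    eta.v  ~ 0.1
    add.sd <- 0.7
  })
  model({
    ka <- exp(tka + eta.ka)
    cl <- exp(tcl + eta.cl)
    v  <- exp(tv + eta.v)
    linCmt() ~ add(add.sd)
  })
}

fit1 <- nlmixr(one.cmt, theo_sd, est = "saem")

## Restart from the final estimates; a stable model should return
## essentially the same estimates and objective function.
fit2 <- nlmixr(fit1, est = "saem")

AIC(fit1); AIC(fit2)
```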
Thanks @mattfidler for such a detailed response!
I also realized that re-running a third time doesn't change the estimates, but this actually contributed to my belief that I am making a mistake: SAEM is a very sophisticated algorithm, and I thought only a very primitive one would stop at a point where, if restarted from that exact same point, it converges to a very substantially better fit (see the AICs).
(This belief was exacerbated by the fact that I was under the impression that this model and `theo_sd` are extremely basic and totally well-defined, so if anything goes smoothly, this should.)
So one final question remains: do you think that, based on these findings, I should make a habit of re-running every model with `nlmixr` two times in the above fashion, to make sure the estimates no longer change...?
Hi @tamas-ferenci,
My practice has always been to restart the models to make sure they are stable. I did this in NONMEM before I started working on nlmixr, and I don't expect the algorithms to change enough to make this step unnecessary. I'm unsure if everyone does this; in a Phoenix course I took they also encouraged the practice, even with their QRPEM algorithm (another EM algorithm). I wasn't encouraged to do this at a Monolix training course, though I'm sure it is still likely a good practice.
However, in this case a simple `plot(fit)` shows the initial model isn't doing too well and something needs to be done to fix it. In general, goodness-of-fit plots are more informative than log-likelihoods/AICs, etc.
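For the fit in the sketch above, that is simply:

```r
## Standard goodness-of-fit panels (DV vs PRED/IPRED, residuals,
## individual fits) for an nlmixr fit object
plot(fit1)
```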
Of course, you could also look at the traceplots to see whether convergence has been achieved, and play with the settings there (`nBurn` and `nEm` in particular). But there is no guarantee the model converges to the same solution.
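Something like the following, continuing from the fit above (`traceplot()` and `saemControl()` are from nlmixr; the iteration counts below are arbitrary):

```r
## Parameter trajectories across the SAEM iterations
traceplot(fit1)

## Re-fit with longer burn-in and EM phases
fit_long <- nlmixr(one.cmt, theo_sd, est = "saem",
                   control = saemControl(nBurn = 500, nEm = 500))
traceplot(fit_long)
```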
To be a little clearer, I didn't always do this step during covariate selection, but otherwise my statement stands.
Clear, thank you very much again!
When working on this, I found something quite strange.
Let's take the most basic example from the vignette: run the model, and then re-run it with the only change being that the starting values of the parameters are set to the estimated results of the first run.
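Here is a minimal reconstruction of that experiment, reusing the `one.cmt` model quoted earlier in the thread (again assuming a fit object passed back to `nlmixr()` reuses its final estimates as the new initials, and that `fixef()` returns the fixed-effect estimates):

```r
## First run from the vignette initial estimates, then a second run
## whose initials are the first run's final estimates (writing the
## values of fixef(fitA) into the ini() block by hand is the
## equivalent manual route).
fitA <- nlmixr(one.cmt, theo_sd, est = "saem")
fitB <- nlmixr(fitA, est = "saem")

fixef(fitA); fixef(fitB)  # compare the fixed-effect estimates
AIC(fitA);  AIC(fitB)     # and the information criteria
```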
So what we see is that there are differences of more than an order of magnitude (!) just from this simple change in the initial values!
This would of course mean that the results are practically useless, which is hard to believe (as this is a horribly simple model; if there are convergence problems here, then there are everywhere), so I think I'm making a mistake somewhere, but I don't know where...