Closed Lrgnmllr closed 3 years ago
Hello,
thank you for your suggestion. The estimation of the final model effectively uses the default number of iterations, which is 100. If that is not sufficient, you could run the model again with more iterations:
m <- gridsearch(lcmm(fixed = Yvar ~ fixedvar, mixture = ~ mixturevar, random = ~ randomvar, link = "link",
subject = 'id', ng = 5, data = data, returndata = TRUE),
minit = mod1, maxiter = 60, rep = 30) # the final model is still estimated with the default 100 iterations
mm <- lcmm(fixed = Yvar ~ fixedvar, mixture = ~ mixturevar, random = ~ randomvar, link = "link",
subject = 'id', ng = 5, data = data, maxiter = 1000, returndata = TRUE, B = m$best)
The estimation of mm starts from the final estimates of model m (because B = m$best is specified), so it runs up to 1000 further iterations to achieve convergence.
Best,
Viviane
Ah, what a nice and simple solution!
Thank you for your answer :)
Since it is easy to fix, I made the correction. The maxiter specified in the lcmm call is now used to fit the final model. ;)
Viviane
I have noticed that when using gridsearch(), the lcmm call within gridsearch() runs for 100 iterations regardless of what is specified through maxiter in the lcmm call. For example:
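The code block I originally posted did not survive here; a call along these lines (reusing the placeholder names Yvar, fixedvar, mixturevar, randomvar, and mod1 from the rest of this thread) illustrates the problem:

```r
library(lcmm)

# gridsearch with 30 random starts of 60 iterations each, departing from
# the one-class model mod1; maxiter = 1000 is intended for the final model
# but is ignored when the final model is estimated
m <- gridsearch(lcmm(fixed = Yvar ~ fixedvar, mixture = ~ mixturevar,
                     random = ~ randomvar, link = "link", subject = 'id',
                     ng = 5, data = data, maxiter = 1000),
                minit = mod1, maxiter = 60, rep = 30)
```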
This will perform the gridsearch as requested, but the final lcmm call within it will only run for 100 iterations, instead of the 1000 specified by maxiter.
If others have this issue, and until it is fixed, I have two possible workarounds:
1 - run gridsearch with a higher number of iterations and repetitions, and hope that one of the runs gets close enough to the global maximum
2 - for those with access to some kind of server with plenty of cores, parallelize the lcmm calls. Here is code that works for me (perhaps not the most efficient R programming though :) ):
First, load the lcmm package and the doParallel package
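The original snippet was lost in the copy; it amounts to (assuming both packages are installed):

```r
library(lcmm)        # latent class mixed models
library(doParallel)  # parallel backend; also attaches foreach
```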
Second, run a model with one class. Here the model is called mod1
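Something along these lines, with the same placeholder names (Yvar, fixedvar, etc.) as above:

```r
library(lcmm)

# one-class reference model: ng = 1 and no mixture argument
mod1 <- lcmm(fixed = Yvar ~ fixedvar, random = ~ randomvar,
             link = "link", subject = 'id', ng = 1, data = data)
```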
Third, parallelization stuff (here I used 15 cores, and it will run twice on each)
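A sketch of this step (30 runs over 15 cores, so 2 per core; random initial values are drawn around mod1 via lcmm's B = random() mechanism):

```r
library(lcmm)
library(doParallel)

registerDoParallel(cores = 15)

# 30 independent runs, each from random initial values based on mod1;
# foreach collects the fitted models into a list
fits <- foreach(i = 1:30, .packages = "lcmm") %dopar% {
  lcmm(fixed = Yvar ~ fixedvar, mixture = ~ mixturevar,
       random = ~ randomvar, link = "link", subject = 'id',
       ng = 5, data = data, maxiter = 1000, B = random(mod1))
}
```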
Fourth, extracting, then using the parameters that gave the best loglik:
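Assuming the parallel runs were collected in a list, say fits, this amounts to:

```r
# keep the run that reached the highest log-likelihood
logliks <- sapply(fits, function(f) f$loglik)
best <- fits[[which.max(logliks)]]
summary(best)
```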
This should work. However, you may get a warning once the parallelization step finishes, which reads:
I do not know how to avoid this warning, but it is of no consequence for the user :)
I hope this helps whoever encounters the same problem.