kingaa / pomp

R package for statistical inference using partially observed Markov processes
https://kingaa.github.io/pomp
GNU General Public License v3.0

Add better global optimization method for trajectory matching #57

Closed mikeaalv closed 6 years ago

mikeaalv commented 6 years ago

Is it possible to add a better optimization method to traj.match()? I would recommend adding a genetic algorithm or a similar method to improve its ability to find the global optimum. Thanks a lot.

kingaa commented 6 years ago

@mikeaalv, check out traj.match.objfun. This constructs an objective function that can be used in any R optimizer. Note, too, that even within traj.match, all the methods of the nloptr package are available to you.
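For instance, a minimal sketch using the bundled ricker example (the particular parameters estimated here are illustrative, not a recommendation):

```r
library(pomp)
pompExample(ricker)

## construct an objective function for the named parameters;
## it takes a vector of their values and returns a figure of merit
ofun <- traj.match.objfun(ricker,est=c("r","sigma","phi"))

## hand it to any R optimizer, e.g. stats::optim
fit <- optim(par=coef(ricker)[c("r","sigma","phi")],fn=ofun,
             method="Nelder-Mead")
```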

Since the functionality you request exists, I'm going to close this issue. Feel free to re-open it if I haven't fully addressed your question.

mikeaalv commented 6 years ago

Thanks a lot. I will look into it.

mikeaalv commented 6 years ago

Actually, I didn't find the correct way to use nloptr in traj.match.

My code is:

   res=traj.match(pertussis,start=vec,
                  est=c("Beta","alpha","delta","gamma","psi","b2","phi",
                        "rho","k","w","I_0","R_0","S_0","E_0"),
                  method="nloptr",
                  transform=TRUE,
                  opts=list("algorithm"="NLOPT_GN_MLSL"))

However, it complains that I am not using a valid algorithm:

Warning in nloptr.add.default.options(opts.user = opts, x0 = x0, num_constraints_ineq = num_constraints_ineq,  :
  No termination criterium specified, using default (relative x-tolerance = 1e-04)
Error in is.nloptr(ret) : 
  Incorrect algorithm supplied. Use one of the following:
 NLOPT_GN_DIRECT
NLOPT_GN_DIRECT_L
NLOPT_GN_DIRECT_L_RAND
NLOPT_GN_DIRECT_NOSCAL
NLOPT_GN_DIRECT_L_NOSCAL
NLOPT_GN_DIRECT_L_RAND_NOSCAL
NLOPT_GN_ORIG_DIRECT
NLOPT_GN_ORIG_DIRECT_L
NLOPT_GD_STOGO
NLOPT_GD_STOGO_RAND
NLOPT_LD_SLSQP
NLOPT_LD_LBFGS_NOCEDAL
NLOPT_LD_LBFGS
NLOPT_LN_PRAXIS
NLOPT_LD_VAR1
NLOPT_LD_VAR2
NLOPT_LD_TNEWTON
NLOPT_LD_TNEWTON_RESTART
NLOPT_LD_TNEWTON_PRECOND
NLOPT_LD_TNEWTON_PRECOND_RESTART
NLOPT_GN_CRS2_LM
NLOPT_GN_MLSL
NLOPT_GD_MLSL
NLOPT_GN_MLSL_LDS
NLOPT_GD_MLSL_LDS
NLOPT_LD_MMA
NLOPT_LN_COBYLA
NLOPT_LN_NEWUOA
NLOPT_LN_NEWUOA_BOUND
NLOPT_LN_NELDERMEAD
NLOPT_LN_SBPLX
NLOPT_LN_AUGLAG
NLOPT_LD_AUGLAG
NLOPT_LN_AUGLAG_EQ
NLOPT_LD_AUGLAG_EQ
NLOPT_LN_BOBYQA
NLOPT_GN_ISRES

kingaa commented 6 years ago

I've updated the documentation for traj.match. See the latest release on the GitHub site. In particular, do ?traj.match and look at the examples. The correct call is something like:

res <- traj.match(pertussis,start=vec,
                  est=c("Beta","alpha","delta","gamma","psi","b2","phi",
                        "rho","k","w","I_0","R_0","S_0","E_0"),
                  method="nloptr",
                  algorithm="NLOPT_GN_MLSL",
                  transform=TRUE)

You will also need to specify a stopping criterion. For details, read the documentation for nloptr, which will direct you in turn to the online documentation for the NLopt library.
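Concretely, with a termination criterion added, the call might look like the following (the maxeval and xtol_rel values here are arbitrary placeholders, not recommendations):

```r
res <- traj.match(pertussis,start=vec,
                  est=c("Beta","alpha","delta","gamma","psi","b2","phi",
                        "rho","k","w","I_0","R_0","S_0","E_0"),
                  method="nloptr",
                  algorithm="NLOPT_GN_MLSL",
                  maxeval=1e5,      # cap on objective-function evaluations
                  xtol_rel=1e-6,    # relative x-tolerance stopping rule
                  transform=TRUE)
```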

mikeaalv commented 6 years ago

Thanks a lot!

mikeaalv commented 6 years ago

I have tried:

  res=traj.match(pertussis,start=param,
                 est=c("Beta","alpha","delta","gamma","psi","b2","phi","rho","k","w","I_0","R_0","S_0","E_0"),
                 method="nloptr",#subplex BFGS Nelder-Mead SANN nloptr
                 algorithm="NLOPT_GN_MLSL",
                 lb=parRange[,"min"],
                 ub=parRange[,"max"],
                 maxit=5e8,
                 xtol_rel=1.0e-6,
                 maxeval=100000,
                 local_opts=list(algorithm="NLOPT_LN_SBPLX",xtol_rel=1.0e-9)
               )

There is no error, but clearly no optimization has been run, since coef(res) == param.

kingaa commented 6 years ago

Sorry, I don't know much about this specific algorithm. Nor can I investigate further without knowing what your model and data (pomp object) looks like. From reading the description, it sounds like MLSL is a very sensible algorithm. Is it possible that none of the individual local optimizations are able to get started because they do not find a finite likelihood?
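One way to check this (a sketch, assuming the traj.match.objfun workflow; here estpars stands for the vector of names of the estimated parameters and param for the starting parameter vector):

```r
## evaluate the objective at the starting point; a non-finite value
## (Inf or NaN) would explain why the local searches never get going
ofun <- traj.match.objfun(pertussis,est=estpars)
ofun(param[estpars])
```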

kingaa commented 6 years ago

I've been looking more deeply into it, and this does appear to be a bug. Specifically, the ub and lb options are meant to be passed directly to nloptr::nloptr and not via the opts argument, as pomp assumes.

I can fix this in the next release. In the meantime, consider the following workaround:

library(pomp)
pompExample(ricker)

lims <- list(upper=2*head(coef(ricker),5),lower=0.5*head(coef(ricker),5))

traj.match.objfun(ricker,est=head(names(coef(ricker)),5)) -> ofun

library(nloptr)
nloptr(x0=head(coef(ricker),5),eval_f=ofun,lb=lims$lower,ub=lims$upper,
       opts=list(algorithm="NLOPT_GN_MLSL",xtol_rel=1e-5,maxeval=1e5,
                 local_opts=list(algorithm="NLOPT_LN_SBPLX",xtol_rel=1e-4))
       ) -> res

mikeaalv commented 6 years ago

Thanks a lot. The traj.match.objfun approach did work, though I still have to fix a problem with bad initial values.

mikeaalv commented 6 years ago

Actually, the differences between traj.match.objfun() and traj.match() puzzle me. Given

   res <- traj.match(data,est=est,start=start,method=method)
   objfun <- traj.match.objfun(data,est=est)
   res2 <- optim(par=start,fn=objfun,method=method)

will res and res2 give the same result? Will logLik(res) == res2$value? Will objfun(coef(res)) == logLik(res)?

They turn out not to be the same in my case. What is objfun returning, and how can I convert it to a log likelihood?

Thanks a lot.

kingaa commented 6 years ago

@mikeaalv: I believe that the bug is fixed on branch iss57. Would you be so good as to test your example with this branch to verify?

mikeaalv commented 6 years ago

That seems to work for my example.

By the way, regarding my last question: does objfun return the log likelihood?

kingaa commented 6 years ago

The objective function constructed by traj.match.objfun returns minus the log likelihood.
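A quick sanity check of that relationship (a sketch using the ricker example from the workaround above):

```r
library(pomp)
pompExample(ricker)

est <- c("r","sigma","phi")
ofun <- traj.match.objfun(ricker,est=est)
fit <- traj.match(ricker,est=est)

## the objective function returns minus the log likelihood,
## so these two quantities should agree:
ofun(coef(fit)[est])
-logLik(fit)
```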

I will close this issue now and merge the iss57 branch with the master.

mikeaalv commented 6 years ago

Thanks a lot!