Mathematical Optimization in Julia. Local, global, gradient-based and derivative-free. Linear, Quadratic, Convex, Mixed-Integer, and Nonlinear Optimization in one simple, fast, and differentiable interface.
Currently, the final state of Adam and BFGS is discarded when `solve` returns. However, it can be beneficial to resume training, in which case it would help to restart Adam and BFGS from their last state. This could be handled by making `PolyOpt` a stateful object, similar to how `Adam` is used in Flux.
As an example:
```julia
opt = PolyOpt()
# Start training
solve(problem, opt)
# Change something - e.g. add data points when fitting a neural ODE
problem = ...
# Softly restart training
solve(problem, opt)
```
`opt` then stores the relevant optimizer state between calls.
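The proposed pattern could be sketched as follows. This is a minimal illustration, not the library's API: `StatefulOpt`, its `state` field, and the `solve!` function are hypothetical names, and the inner loop stands in for whatever Adam/BFGS iteration `solve` actually runs.

```julia
# Sketch of a stateful optimizer wrapper: the object keeps the last
# iterate so a subsequent solve can warm-start from it.
mutable struct StatefulOpt
    state::Any   # last optimizer state, or nothing before the first solve
end
StatefulOpt() = StatefulOpt(nothing)

# Hypothetical solve-like function: read any saved state, run the inner
# optimizer (here a plain gradient step on f(x) = x^2 as a stand-in),
# and persist the final state for a soft restart.
function solve!(x0, opt::StatefulOpt; steps = 10)
    x = opt.state === nothing ? x0 : opt.state
    for _ in 1:steps
        x = x - 0.1 * 2x   # gradient descent step on f(x) = x^2
    end
    opt.state = x          # keep the state instead of dropping it
    return x
end
```

Calling `solve!` a second time with the same `opt` then continues from where the previous call stopped, rather than starting over from `x0`.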