SciML / Optimization.jl

Mathematical Optimization in Julia. Local, global, gradient-based and derivative-free. Linear, Quadratic, Convex, Mixed-Integer, and Nonlinear Optimization in one simple, fast, and differentiable interface.
https://docs.sciml.ai/Optimization/stable/
MIT License

Let OptimizationPolyalgorithms return or save the optimization state #307

Open aplesner opened 2 years ago

aplesner commented 2 years ago

Currently, the final state of Adam and BFGS is dropped when solve finishes. However, it can be beneficial to resume training, and for that it would help to restart Adam and BFGS from their last state. This could be handled by making PolyOpt a stateful object, as when using Adam from Flux.
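
For comparison, here is a minimal sketch of the stateful behavior being referenced, using Flux's legacy implicit-parameter API (Flux ≤ 0.13), where Adam's moment estimates persist inside the optimizer object across `train!` calls:

```julia
using Flux

model = Dense(2 => 1)
loss(x, y) = Flux.Losses.mse(model(x), y)
data = [(rand(Float32, 2, 16), rand(Float32, 1, 16))]

opt = Flux.Adam(1e-3)             # Adam's moment estimates live inside `opt`
ps = Flux.params(model)

Flux.train!(loss, ps, data, opt)  # populates opt's per-parameter state
Flux.train!(loss, ps, data, opt)  # resumes from the accumulated state
```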

As an example:

```julia
opt = PolyOpt()
# Start training
solve(problem, opt)
# Change something - e.g. adding data points when fitting a neural ODE
problem = ...
# Softly restart training
solve(problem, opt)
```

`opt` then stores the relevant state between the two `solve` calls.
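
A minimal sketch of what such a stateful polyalgorithm object could look like; every name and field here is hypothetical and not part of Optimization.jl's current API:

```julia
# Hypothetical sketch only - PolyOpt in OptimizationPolyalgorithms does not
# currently hold state. A mutable wrapper could carry optimizer state across
# solve calls:
mutable struct StatefulPolyOpt
    adam_state::Any   # e.g. Adam's first/second moment estimates
    bfgs_state::Any   # e.g. BFGS's inverse-Hessian approximation
end
StatefulPolyOpt() = StatefulPolyOpt(nothing, nothing)
```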

ChrisRackauckas commented 2 years ago

It should be placed in the `sol.original` spot.
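
For reference, `sol.original` is the field on an Optimization.jl solution that holds the underlying optimizer's raw return object. Under this proposal, resuming might look roughly like the following sketch; `prob` and `prob2` are assumed to be previously defined `OptimizationProblem`s, and the `initial_state` keyword is hypothetical, not an existing argument:

```julia
using Optimization, OptimizationPolyalgorithms

sol = solve(prob, PolyOpt())
# `sol.original` holds the wrapped optimizer's raw return object; under this
# proposal it would also carry the final Adam/BFGS state.
state = sol.original
# Hypothetical warm restart - `initial_state` is not an existing keyword:
sol2 = solve(prob2, PolyOpt(); initial_state = state)
```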