Mathematical Optimization in Julia. Local, global, gradient-based and derivative-free. Linear, Quadratic, Convex, Mixed-Integer, and Nonlinear Optimization in one simple, fast, and differentiable interface.
With `remake` this workflow is more flexible and now documented. I don't plan on having explicit support for this, but we can improve the examples/docs for it if needed.
It seems like we often have code like:
Here, the result of optimizing with `ADAM` is further optimized via `LBFGS`.
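The two-stage pattern described above might look like the following sketch, assuming the DiffEqFlux-style `sciml_train(loss, θ, opt; kwargs...)` API; `loss` and `θ₀` are placeholders for a user-supplied objective and initial parameters:

```julia
using DiffEqFlux, Optim

# Coarse first pass with ADAM, then polish with LBFGS
# starting from ADAM's minimizer.
res1 = DiffEqFlux.sciml_train(loss, θ₀, ADAM(0.01); maxiters = 300)
res2 = DiffEqFlux.sciml_train(loss, res1.minimizer, LBFGS())
```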
As a first pass, I imagine a new `OptimizerChain` type and a `sciml_train` entrypoint. That `sciml_train` method could then do (something like) a left fold over the list of optimizers + kwargs.

Does this interface seem reasonable? Is any information missing?
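To make the proposal concrete, here is one possible shape for it. This is a hypothetical sketch, not an existing API: `OptimizerChain`, its `stages` field, and this `sciml_train` method are all assumptions, layered on the existing `sciml_train(loss, θ, opt; kwargs...)` signature:

```julia
# Hypothetical: a chain of (optimizer, kwargs) stages, run left to right.
struct OptimizerChain
    stages::Vector{Tuple{Any,NamedTuple}}
end

function sciml_train(loss, θ₀, chain::OptimizerChain)
    # Left fold: each stage resumes from the previous stage's minimizer.
    foldl(chain.stages; init = (; minimizer = θ₀)) do res, (opt, kwargs)
        sciml_train(loss, res.minimizer, opt; kwargs...)
    end
end

# Usage (hypothetical):
# chain = OptimizerChain([(ADAM(0.01), (maxiters = 300,)), (LBFGS(), (;))])
# res = sciml_train(loss, θ₀, chain)
```

One design question this raises: whether per-stage kwargs should be a `NamedTuple` alongside each optimizer, as above, or whether the chain itself should carry shared defaults that stages can override.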