SciML / Optimization.jl

Mathematical Optimization in Julia. Local, global, gradient-based and derivative-free. Linear, Quadratic, Convex, Mixed-Integer, and Nonlinear Optimization in one simple, fast, and differentiable interface.
https://docs.sciml.ai/Optimization/stable/
MIT License

Multi objective optimization #18

Open freemin7 opened 3 years ago

freemin7 commented 3 years ago

The purpose of multi-objective optimization is to find the efficient (Pareto) frontier. If the goal is to wrap all optimizers, it might be good to think about an interface to expose such functionality too. In the opposite direction, the presence of such an interface encourages optimizers to implement that functionality. It might be possible to coax ordinary single-objective optimizers into this task using constraints, although they would likely be less efficient than an algorithm made for that purpose.

xlxs4 commented 4 months ago

Any thoughts on this?

Vaibhavdixit02 commented 4 months ago

It'll be added very soon. Check back in 2-3 weeks!

ChrisRackauckas commented 4 months ago

Yes, the thought is as follows. The basic form of a multi-objective optimization problem is just an optimization function which returns an array of objective values. Now, one of the things that needs to get cleaned up for that to make sense is how Optimization.jl treats out-of-place (oop) vs. in-place (iip) evaluation, and thus explicit mutation vs. output generation. While a minor detail in the API, it's a pretty major detail in terms of AD support, and so the two forms need clear, first-class support. With that done, we will need a MultiObjectiveOptimizationFunction that is clear about its return prototype and thus its size. For a similar design, see the output APIs of Integrals.jl, which this will be modeled on.
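The two calling conventions can be sketched in plain Julia (hypothetical function names and a toy bi-objective cost; this is not the eventual Optimization.jl API, just an illustration of oop vs. iip):

```julia
# Out-of-place (oop): allocate and return the objective vector.
f_oop(u, p) = [sum(abs2, u .- p[1]), sum(abs2, u .- p[2])]

# In-place (iip): write the objectives into a preallocated buffer,
# which is the mutation-based form AD backends need to handle separately.
function f_iip!(F, u, p)
    F[1] = sum(abs2, u .- p[1])
    F[2] = sum(abs2, u .- p[2])
    return nothing
end

u = [0.5, 0.5]
p = (0.0, 1.0)
F = zeros(2)          # the "return prototype" fixes the objective count
f_iip!(F, u, p)
@assert F == f_oop(u, p)   # both conventions compute the same objectives
```

The preallocated `F` plays the role of the return prototype: its length tells the interface how many objectives the problem has before any evaluation happens.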

That MultiObjectiveOptimizationFunction in an OptimizationProblem defines a multi-objective optimization problem, but then we'd need to do the real work of hooking that up to optimizers. There are already some optimizers in the ecosystem, though, so using MOI and such to get some immediate optimizers would be next. We could also directly convert such OptimizationProblems into NonlinearLeastSquaresProblems, which then opens up https://docs.sciml.ai/NonlinearSolve/stable/solvers/nonlinear_least_squares_solvers/ as methods of solution under that interpretation.
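The least-squares interpretation amounts to treating the objective vector as a residual vector. A minimal sketch in plain Julia (no NonlinearSolve dependency; the functions are invented for illustration):

```julia
# The multi-objective output r(u) is reread as a residual vector,
# and a least-squares solver minimizes the scalar surrogate ||r(u)||^2.
r(u) = [u[1] - 1.0, u[2] + 2.0]    # two "objectives" as residuals
lsq_cost(u) = sum(abs2, r(u))      # what a least-squares method drives to zero

# The least-squares minimizer pushes every component toward zero at once:
u_star = [1.0, -2.0]
@assert lsq_cost(u_star) == 0.0
```

This only finds the single point where all objectives are simultaneously smallest in the 2-norm sense, so it is one particular compromise on the Pareto frontier rather than the whole frontier, but it immediately reuses existing solvers.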

After that... the world is then our oyster to explore the topic in many ways. But that's the plan for now.

Vaibhavdixit02 commented 4 months ago

I was also thinking about this at a more abstract level: multi-objective optimization is not too distinct from constrained optimization. There is a duality in the equations that make up the problem, where each can be classified as either a constraint or part of the objective. Since most multi-objective methods just scalarize the vector output using a Lagrangian or similar approach, and since constraints can be moved in and out of the multi-objective, it feels like there might be a smarter way to do this. It also opens up the possibility of creating the Lagrangian for the usual optimization problem on our end. But this is pretty abstract in my head still and I don't know what it would look like, so I will probably just do the straightforward thing lol
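The objective/constraint duality can be made concrete with a toy bi-objective problem (all names and the problem itself are invented for illustration): the same term can live either inside a weighted-sum scalarization, as a Lagrangian-like multiplier would place it, or outside as an epsilon-constraint.

```julia
f1(u) = (u[1] - 1)^2    # primary objective
f2(u) = (u[1] + 1)^2    # secondary objective... or a constraint, depending on view

# View 1: scalarize, folding f2 into the objective with a weight w
# (structurally the same as a Lagrangian term with multiplier w).
scalarized(u, w) = f1(u) + w * f2(u)

# View 2: keep f1 as the sole objective and demand f2(u) <= eps instead.
feasible(u, eps) = f2(u) <= eps

u = [0.0]                               # midpoint between the two minimizers
@assert scalarized(u, 1.0) == 2.0       # f1 + f2 at the midpoint
@assert feasible(u, 1.0)                # same point, constraint view
```

Under convexity assumptions the two views trace the same trade-off set as `w` and `eps` vary, which is exactly the interchangeability being described.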

manuelbb-upb commented 4 months ago

Just wanted to chime in and tell you how excited I am about this. I have an unregistered package to explore descent-based MOO, and the problem evaluator was heavily inspired by the OptimizationFunction signature. However, all the things @ChrisRackauckas mentioned (in-place vs. out-of-place calls, type and size metadata) are implemented haphazardly, so it would be great to see something more optimized.

@Vaibhavdixit02 I am generally in favor of having access to objectives and constraints separately. I see a straightforward duality in terms of implementation (indexing into a problem structure and taking derivatives), but I guess there are approaches to MOO where you would really want to treat them differently. Even if one of the standard scalarization approaches were used (weighted sum, epsilon-constraint, etc.) and handed down to a single-objective optimizer, the variation of weights to generate multiple Pareto-optimal solutions/trade-offs would itself constitute a multi-objective optimizer, for which a new interface would be nice.
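The weight-variation idea can be sketched on a toy convex bi-objective problem where the weighted-sum subproblem has a closed-form minimizer (the problem and closed form are assumptions for illustration, not any package's API):

```julia
f1(u) = (u - 1)^2
f2(u) = (u + 1)^2

# Minimizer of (1 - w)*f1 + w*f2 in closed form: setting the derivative
# 2(1 - w)(u - 1) + 2w(u + 1) to zero gives u* = 1 - 2w.
pareto_point(w) = 1 - 2w

# Sweeping w plays the role of the outer "multi-objective optimizer":
# each weight yields one Pareto-optimal trade-off on the frontier.
front = [(f1(pareto_point(w)), f2(pareto_point(w))) for w in 0.0:0.25:1.0]

@assert front[1] == (0.0, 4.0)    # w = 0 recovers the f1 minimizer (u = 1)
@assert front[end] == (4.0, 0.0)  # w = 1 recovers the f2 minimizer (u = -1)
```

In a real implementation the closed form would be replaced by a call into a single-objective solver for each `w`, which is exactly why the weight-sweeping loop deserves an interface of its own.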