Another option is to define
function MOI.set(model::MOI.ModelLike, ::MOI.ObjectiveFunction, func::MOI.AbstractVectorFunction)
    scalars = MOI.Utilities.eachscalar(func)  # the scalar components of `func`
    for i in 1:MOI.output_dimension(func)
        MOI.set(model, MultiObjectiveFunction(i), scalars[i])
    end
end
so that both syntaxes work for the user.
The more I've thought about this, the more sure I am that option 1 is the correct way. You really are declaring a vector-valued objective, rather than multiple different objectives. It gives a natural interpretation of the return type of ObjectiveValue and needs no change to MOI.
Great to see a renewed and improved effort to bring vector optimization to JuMP :) I've left academia and, therefore, sadly stopped using Julia. I don't think I can provide much useful input, but I look forward to trying these features at some point in the future :)
Thanks @odow for initiating this discussion; I am happy to see it.
We are developing vOptSolver, which is currently composed of two packages, vOptSpecific and vOptGeneric. vOptGeneric is devoted to multi-objective linear optimization problems with discrete variables, see https://github.com/vOptSolver (but we also have beta versions of algorithms for problems with mixed integer and continuous variables).
The version online is compliant with Julia 1.x and JuMP 0.18. You can take a look at the notebook (here: https://github.com/vOptSolver/vOptSolver-notebook) that I prepared as support for a seminar held in November 2018. The version of vOptGeneric compliant with JuMP 0.19 is ready on a branch (https://github.com/vOptSolver/vOptGeneric.jl/branches). Updating all the documents is a matter of free time for me (for some reasons I have been much less available since last summer).
Here is an example with vOptGeneric + JuMP 0.19 for the bi-objective knapsack problem:
using vOptGeneric, JuMP, Cbc, LinearAlgebra
m = vModel(with_optimizer(Cbc.Optimizer, logLevel=0))
p1 = [77,94,71,63,96,82,85,75,72,91,99,63,84,87,79,94,90,60,69,62]
p2 = [65,90,90,77,95,84,70,94,66,92,74,97,60,60,65,97,93,60,69,74]
w = [80,87,68,72,66,77,99,85,70,93,98,72,100,89,67,86,91,79,71,99]
c = 900
size = length(p1)
@variable(m, x[1:size], Bin)
@addobjective(m, Max, dot(x, p1))
@addobjective(m, Max, dot(x, p2))
@constraint(m, dot(x, w) <= c)
vSolve(m, method=:dichotomy)
Y_N = getY_N(m)
for n = 1:length(Y_N)
    X = value.(x, n)
    print(findall(elt -> elt ≈ 1, X))
    println("| z = ", Y_N[n])
end
As you can see:
Thus:
• a discussion about how to pass the resolution method to invoke (and possibly the required parameters) is needed.
• a discussion about how to retrieve Y_N and X_E is needed as soon as the problem is not discrete. Indeed, for the MO-MILP case, Y_N may be composed of points, segments (open or closed), and facets (full or not; open or closed). We have not yet settled on a protocol in the vOpt team. One option may be to return each object separately, with its properties, and a description of their adjacency relations (one possible shape is sketched below).
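To make that last point concrete, here is a hedged sketch of what such a protocol could look like. All of the container types here are hypothetical; nothing like this exists in vOptGeneric today:
abstract type NondominatedObject end   # hypothetical

struct NDPoint <: NondominatedObject
    y::Vector{Float64}                 # a single nondominated point
end

struct NDSegment <: NondominatedObject
    a::Vector{Float64}                 # segment endpoints in objective space
    b::Vector{Float64}
    closed_a::Bool                     # whether each endpoint belongs to Y_N
    closed_b::Bool
end

# Adjacency relations could then be an edge list over the returned objects:
# adjacency::Vector{Tuple{Int,Int}}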
a discussion about how to pass the resolution method to invoke (and possibly the required parameters) is needed.
vOpt could either create
model = Model(with_optimizer(vOptSolver.Chalmet.Optimizer))
or
model = Model(with_optimizer(vOptSolver.Optimizer))
and then set_parameter(model, "method", "Chalmet").
a discussion about how to retrieve Y_N and X_E is needed as soon as the problem is not discrete
You can freely define any interface; JuMP will not restrict you. You just need to define new MOI attributes, so that the user can do MOI.get(model, VOptSolver.Y_N()),
and then you can define a convenience function with the interface you like, e.g.: getY_N(model) = MOI.get(model, VOptSolver.Y_N())
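For concreteness, a minimal sketch of such a solver-specific attribute (the names Y_N and Optimizer are hypothetical here):
import MathOptInterface as MOI

struct Y_N <: MOI.AbstractModelAttribute end   # hypothetical result attribute
MOI.is_set_by_optimize(::Y_N) = true           # its value is produced by optimize!

# The solver's `get` method then returns whatever it stored during the solve:
# function MOI.get(model::Optimizer, ::Y_N)
#     return model.nondominated_points
# end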
We have chosen to handle the objectives separately. Adding an index which denotes a specific objective, as suggested by @odow, is our preferred option, because in the near future we would like to be able to remove one objective without redefining the whole model.
Is it okay to just redefine the objective? (E.g., below.) Or are you keeping the pre-computed frontier and projecting it onto the lower-dimensional space when you remove an objective?
@objective(model, Min, C * x)
# Then later
@objective(model, Min, C[1, :]' * x)
If X_E and Y_N are points, then you will be able to query them from JuMP using value(x, result=i) and objective_value(model, result=i) as I outlined. If you have more solver-specific solutions, it's easy to define MOI attributes as @blegat mentioned. For example:
model = Model()
@variable(model, x[1:2])
@objective(model, Min, C * x)
Y_N = MOI.get(model, vOpt.EfficientFacets())
Gurobi.jl uses this to query the IIS, for example.
There is one very big change for vOptSolver: you should re-write it as an MOI solver, rather than a JuMP extension. That means you don't have to worry about macros like @addobjective, or about defining types like vModel which wrap JuMP models. You would have first-class support for everything JuMP offers :)
From the user/modeler perspective it seems that option 2 is better. Option one feels less natural, mainly if the two objectives are completely different.
So my reasoning is this:
• Solvers declare support for a vector-valued objective: MOI.ObjectiveFunction{MOI.VectorOfVariables}, MOI.ObjectiveFunction{MOI.VectorAffineFunction}, or MOI.ObjectiveFunction{MOI.VectorQuadraticFunction}.
• Solvers return multiple solutions via the ResultCount attribute. We already have support for this, and all VariablePrimal etc. are parameterized by the ResultCount.
So, at the MOI level, we don't need to change anything to support M.O.
One exception is if solvers want to return more complicated solution artifacts (e.g., facets). If so, they should define solver-specific MOI attributes.
So, given we don't need to change MOI, what is the closest mapping to JuMP? Vector-valued objectives.
If you want to give the objectives separately, you can go:
@expression(model, first_objective, ...)
@expression(model, second_objective, ...)
@objective(model, Min, [first_objective; second_objective])
The other approach, JuMP giving objectives separately, would require changes to MOI to parameterize the objective function based on the index. This raises some questions:
• What does ObjectiveValue return?
• What happens if the user sets ObjectiveFunction{1} and ObjectiveFunction{3}?
• If the user deletes an objective, do they expect the indices of ObjectiveValue to change? Or do they expect a 0 in that element?
Expressions are reasonable for a quick fix, although doing it separately seems a good future goal. Delete should delete and not set to zero, otherwise it could grow a lot...
The inconvenience I see with the @objective(model, Min, C * x) syntax is that people can get confused when accidentally providing a vector function, compared to having a multi-objective-specific syntax.
To which I counter: there is already precedent for vector-valued functions in constraints; most solvers would throw "VectorAffineFunction in objective not supported"; and even if they did solve it, objective_value would return a vector, not a Float64.
People could get equally confused with
@objective(model, Min, x)
@objective(model, Min, y)
And how would we handle multiple objective senses?
Here is @xgandibleux's example with what I am proposing:
using vOptGeneric, JuMP, Cbc, LinearAlgebra
p1 = [77,94,71,63,96,82,85,75,72,91,99,63,84,87,79,94,90,60,69,62]
p2 = [65,90,90,77,95,84,70,94,66,92,74,97,60,60,65,97,93,60,69,74]
w = [80,87,68,72,66,77,99,85,70,93,98,72,100,89,67,86,91,79,71,99]
c = 900
size = length(p1)
model = Model(with_optimizer(
    vOptGeneric.Optimizer(
        Cbc.Optimizer(logLevel = 0), method = :dichotomy
    )
))
@variable(model, x[1:size], Bin)
@objective(model, Max, [dot(x, p1), dot(x, p2)])
@constraint(model, dot(x, w) <= c)
optimize!(model)
for n = 1:result_count(model)
    X = value.(x; result = n)
    print(findall(elt -> elt ≈ 1, X))
    println("| z = ", objective_value(model; result = n))
end
In writing
@objective(model, Max, [dot(x, p1), dot(x, p2)])
the user cannot (1) blend functions to minimize and functions to maximize in the model, and (2) single out one function to delete without rebuilding the whole model.
• Concerning (1): of course, it is trivial to rewrite a function from min to max, but having the choice here is natural (on the same level as asking why constraints are not expressed only with <=).
• Concerning (2): MO models are also solved within an interactive approach, where the user can adopt a "what-if" behavior with the objectives. This is feedback that I have received from users of vOptGeneric.
I am working with my students on a case study (a car sequencing problem) with 2 or 3 objective functions. The (technical) constraints are static but the objectives may change between production plants. This is another illustration where the preferences of a decision-maker may suggest deleting an objective or replacing an objective by another (with the idea of comparing different optimization policies) in a (large) MIP model.
Is it technically possible to imagine naming the objectives as we do constraints:
@objective(model, obj1, Max, [dot(x, p1)])
@objective(model, obj2, Min, [dot(x, p2)])
or
@objective(model, cost[1], Min, [dot(x, p1)])
@objective(model, stability[2], Max, [dot(x, p2)])
or
@objective(model, [1], Min, [dot(x, p1)])
@objective(model, [2], Max, [dot(x, p2)])
or
@objective(model, obj[i = 1:3], [ dot(x, p[i]) ])
and then to refer to one objective by name (e.g. stability) or by its index (e.g. 2)?
blend functions to minimize and to maximize in the model
This is a reasonable point to make. The question of "objective sense" isn't obvious. The hang-up we seem to be having is the standard form. Is it:
min: f_0(x)
s.t. f_i(x) in S_i
where f_0 can be vector-valued, or is it
sense_i: f_i(x), i=1,2,...
s.t. g_j(x) in S_j, j=1,2,...
where each f_i must be scalar-valued?
I'm going to argue strongly that it's the first, since it is the simplest concept. More importantly, it's the design of MathOptInterface as it stands today. We wouldn't need to add any special features for multi objective problems. This is pretty important for actually implementing this, because I don't think we want to re-think the entire concept of objectives this close to the release of JuMP 1.0.
So at the expense of (i) providing a list of expressions not being the nicest syntax and (ii) not being able to specify different objective senses, perhaps we can table this idea until JuMP 2.0?
point out a function to delete without rebuilding all the model.
They can just reset the objective:
model = Model()
@variable(model, x[1:2])
@objective(model, Min, [x[1], x[2]])
optimize!(model)
@objective(model, Min, x[1] + x[2])
optimize!(model)
Is it technically possible to imagine naming the objectives as we do constraints:
This would be possible using @expression:
model = Model()
@variable(model, x[1:2])
@expression(model, obj[i=1:2], x[i])
@objective(model, Min, obj)
optimize!(model)
value(obj[1], result=1)
Relating to the changing objectives point:
model = Model()
@variable(model, x[1:3])
@expression(model, stability, x[1])
@expression(model, cost, x[2])
@expression(model, production, x[3])
@objective(model, Max, [stability, production, -cost])
optimize!(model)
@show value(stability; result=2)
@show value(cost; result=2)
@objective(model, Max, [stability, production])
optimize!(model)
# Note: we didn't have cost as an objective, but we can still query the value!
@show value(cost; result=2)
Okay, so I started to implement this, and I found that one thing would have to change in MOI. Essentially, we just need to relax the AbstractScalarFunction aspect of this definition:
"""
ObjectiveFunction{F<:AbstractScalarFunction}()
A model attribute for the objective function which has a type `F<:AbstractScalarFunction`.
`F` should be guaranteed to be equivalent but not necessarily identical to the function type provided by the user.
Throws an `InexactError` if the objective function cannot be converted to `F`,
e.g. the objective function is quadratic and `F` is `ScalarAffineFunction{Float64}` or
it has non-integer coefficient and `F` is `ScalarAffineFunction{Int}`.
"""
struct ObjectiveFunction{F<:AbstractScalarFunction} <: AbstractModelAttribute end
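For reference, the relaxation could be as simple as widening the type parameter (a sketch; this is essentially what MOI later adopted):
struct ObjectiveFunction{F<:AbstractFunction} <: AbstractModelAttribute end
so that ObjectiveFunction{VectorAffineFunction{Float64}} becomes a valid attribute.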
So I talked to @mlubin, and we won't be pushing this into JuMP unless there is general agreement on the approach. We definitely don't want to force @xgandibleux et al. to implement our (edit) my approach when they are the subject matter experts.
However, since I require this for my personal work, I will move forward with an implementation. Once it's ready for review, we can have another discussion about the pros and cons of each approach, this time with some actual working code. But, to reiterate, we won't merge it unless there is general agreement.
Okay, I now have a proof of concept multi-objective solver: https://github.com/odow/MOO.jl
The best place to look is the test, where we solve a trivial 2 variable BOLP: https://github.com/odow/MOO.jl/blob/4a1efcbfe04802ca477932b2c1d7fca55eaccb16/test/nise.jl
Arguments for min VectorFunction as opposed to Vector[min ScalarFunction1, max ScalarFunction2]:
• Apart from a few changes (mainly in MOI.Utilities), everything JustWorks™. Note in particular that we can use loadfromstring! without modifications.
• It is natural to use ResultCount to represent the solver returning a set of Pareto-optimal solutions, and ObjectiveBound as the utopian point.
• The problem keeps the single standard form min/max F(x).
If we want to move forward with this, now that JuMP has support for accessing multiple solutions (https://github.com/JuliaOpt/JuMP.jl/pull/2100), the only missing piece is JuMP passing vector-valued functions to solvers as the objective function.
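For illustration, here is roughly what that would look like at the MOI level, using the simplest vector function type. This is a sketch that assumes the ObjectiveFunction relaxation discussed above:
import MathOptInterface as MOI

model = MOI.Utilities.Model{Float64}()
x = MOI.add_variable(model)
y = MOI.add_variable(model)
# Set the vector [x, y] as the objective via a vector-valued ObjectiveFunction:
f = MOI.VectorOfVariables([x, y])
MOI.set(model, MOI.ObjectiveFunction{MOI.VectorOfVariables}(), f)
MOI.set(model, MOI.ObjectiveSense(), MOI.MIN_SENSE)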
I recently had a conversation with Kaisa Miettinen, who leads the multiobjective optimization group at the University of Jyväskylä. (She made me aware of @xgandibleux's work using Julia.) The reason for using multiobjective optimization, according to her, is that the proper weighting of the objectives depends on querying a decision maker. There are 4 ways this might be done:
I see that 1 & 4 can be accomplished with the current framework. 3 could be accomplished using callbacks, although a standardized interface for user interaction would be really nice, so that many multiobjective solvers can share one exploratory interface (or multiple interfaces could compete in a far-away future). I imagine that 2 could be accomplished using either callbacks or parsing the multiple results, but a solver might have more useful information for constructing a surrogate than just the set of points on the Pareto front. (I can think right now of local gradient information on the Pareto manifold (think tangent space), or of rejected points close to the Pareto front, so that the surrogate doesn't touch them.)
I consider those features non-essential but think they are low-hanging fruit on the way to JuMP becoming a best-in-class multiobjective framework. My knowledge of multiobjective optimization is second-hand only, though, and I am open to being corrected.
I would like to add my two cents to the discussion, mainly concerning the way objectives are declared. (tldr: I side with @xgandibleux and am in favor of declaring them individually.)
Background: I have recently worked on a trust-region solver for nonlinear multiobjective problems. This solver is specifically aimed at expensive or heterogeneous problems where some objectives are considered black-box and might not even be Julia functions. Hence, it is not necessarily something I would write an MOI interface for.
Despite this fact, the need for separate evaluation of the different objectives might arise in other approaches as well:
• There is, of course, the Pascoletti-Serafini scalarization, a very powerful tool that can be used on its own or be integrated into iterative schemes. It requires a utopia point in objective space that results from the minimization of the individual objectives.
• I guess for the ε-constraint method, accessibility of the individual objectives would be of use too (a small sketch of that method follows this list).
• Suppose you want to calculate the multiobjective Newton direction. This requires positive definite Hessians, which is why BFGS approximations are often used. A sophisticated solver could allow the user to specify (or automatically determine) how the Hessian should be calculated/approximated for each objective.
• I know of investigations into the hierarchical structure of the Pareto set/front, where reduced problems were used, i.e., where one or several objectives were dropped.
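For readers unfamiliar with it, here is a minimal sketch of the ε-constraint idea on a tiny bi-objective knapsack. It is purely illustrative (it is not vOptGeneric's implementation), and assumes a JuMP-compatible MIP solver such as HiGHS:
using JuMP, HiGHS

# Tiny bi-objective knapsack data, for illustration only.
p1 = [77, 94, 71, 63, 96]
p2 = [65, 90, 90, 77, 95]
w = [80, 87, 68, 72, 66]
c = 250

# Maximize f1 subject to f2 >= ε; sweeping ε traces the frontier.
function solve_subproblem(ε)
    model = Model(HiGHS.Optimizer)
    set_silent(model)
    @variable(model, x[1:length(w)], Bin)
    @constraint(model, w' * x <= c)
    @constraint(model, p2' * x >= ε)          # the epsilon constraint
    @objective(model, Max, p1' * x)
    optimize!(model)
    termination_status(model) == MOI.OPTIMAL || return nothing
    return round(Int, value(p1' * x)), round(Int, value(p2' * x))
end

Y_N = Tuple{Int,Int}[]
let ε = 0
    while (point = solve_subproblem(ε)) !== nothing
        push!(Y_N, point)
        ε = point[2] + 1  # integer data: tighten the bound and re-solve
    end
end
# Note: this simple version may return weakly dominated points; a second,
# lexicographic solve at each step removes them.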
Hope this helps in evaluating how to proceed :)
@/all: Thanks for the input.
If anyone watching this thread (or your students) is interested in progressing this further, I've proposed this as a Google Summer of Code (GSOC) for this (Northern) summer: https://github.com/jump-dev/GSOC2021#multiobjective-optimization
You can find more about GSOC here:
If we get selected, student applications run March 30 - April 14.
Hi @odow, how is progress on MOO going? Or would you recommend using https://github.com/anriseth/MultiJuMP.jl?
No progress. MultiJuMP hasn't been updated to JuMP 1.0. Digging in to understand how it works and updating it to JuMP 1.0 would be a good project, if you have the time.
vOptGeneric solves generic linear problems with 2 objectives with, e.g., an epsilon-constraint method (plus a MIP solver available from JuMP). It is compliant with the latest versions of Julia and JuMP. A quick-starter is presented in recent slides here. Several examples are provided here. Several extensions/new features are under development; I hope to come back soon with news. Feedback is welcome.
I'm back at this in #3176.
Here's a related project that we should look to for ideas: https://pymoo.org. It's a bit limited though; it supports only minimize of a vector of objectives.
• LocalSolver supports both Min and Max: https://www.localsolver.com/docs/last/modelingfeatures/multiobjectiveresolution.html
• […] N-1 before calling the solver
If I may comment on Gurobi's capability: you can specify both weights and priorities for the objectives. The objectives will then be grouped by priority, and each group will be "blended" with a weighted sum.
before calling the solverIf I may comment on Gurobi's capability: You can specify both weights and priorities for the objectives. The objectives will then be grouped by the priority, and each group will be "blended" with a weighted sum.
In addition, the user can specify tolerances (both rel. and abs.?) to allow for some relaxation when going one level deeper into the hierarchy. This feature in particular is quite useful in practice, we found, as you'd otherwise quickly end up in extreme corners.
You can specify both weights and priorities for the objectives
Yeah. I haven't tested, but this should work:
set_optimizer_attribute.(
model,
Gurobi.MultiObjectivePriority.(1:3),
[2, 1, 1],
)
set_optimizer_attribute.(
model,
Gurobi.MultiObjectiveWeight.(1:3),
[1.0, 0.25, 0.75],
)
I think these are attributes of the optimizer though. They're unrelated to how we model the objectives at the JuMP level.
Fair enough, I'm not sure about the scope of this issue. If this is just about defining attributes that some of the solvers may support, there's not much more to say about it.
But the algorithm for hierarchical objectives is easy enough to implement on top of any (LP/MIP) solver, so it could make sense to implement some kind of wrapper for it.
Yeah. That's my plan for: https://github.com/odow/MOO.jl
Currently it just has a single algorithm, but the idea is to collate a few simple ones.
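As an illustration of how small such a wrapper can be, here is a hedged sketch of hierarchical (lexicographic) optimization on top of any JuMP-compatible MIP solver. The function name and API are hypothetical, not MOO.jl's:
using JuMP

# Optimize `objectives` (highest priority first) for a Max problem,
# fixing each level to its optimum (within rtol) before the next solve.
function lexicographic!(model, objectives; rtol = 0.0)
    added = ConstraintRef[]
    x_star = Float64[]
    for f in objectives
        @objective(model, Max, f)
        optimize!(model)
        z = objective_value(model)
        x_star = value.(all_variables(model))
        # Allow a relative degradation of rtol at deeper levels:
        push!(added, @constraint(model, f >= z - rtol * abs(z)))
    end
    foreach(con -> delete(model, con), added)  # restore the model
    return x_star
end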
Before the integration of MOP features into CPLEX and Gurobi, we had integrated into vOptGeneric two methods, aiming to solve the weighted-sum and the lexicographic LP/MIP with any solver interfaced with JuMP. The methods to invoke are (the weighted-sum dichotomy idea is sketched below):
:dicho or :dichotomy
:lex or :lexico
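For context, the weighted-sum dichotomy (Aneja-Nair) idea behind :dicho can be sketched as follows. This is illustrative, not vOptGeneric's code; solve_weighted(λ1, λ2) is an assumed helper that solves max λ1·f1 + λ2·f2 and returns the objective point (z1, z2):
# Recursively search between two known nondominated points yr and ys,
# assuming both objectives are maximized and yr[1] > ys[1], yr[2] < ys[2].
function dichotomy(solve_weighted, yr, ys)
    λ1 = ys[2] - yr[2]          # weights normal to the segment [yr, ys]
    λ2 = yr[1] - ys[1]
    y = solve_weighted(λ1, λ2)
    if λ1 * y[1] + λ2 * y[2] > λ1 * yr[1] + λ2 * yr[2]
        # A new supported point strictly above the segment: recurse both sides.
        return vcat(dichotomy(solve_weighted, yr, y), [y],
                    dichotomy(solve_weighted, y, ys))
    end
    return empty([yr])          # no supported point between yr and ys
end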
These last months, we have extended HiGHS to 2LP (expected by several colleagues for research needs) and developed the epsilon-constraint method for 3 objectives based on the Kirlik-and-Sayin algorithm (vOptGeneric is mainly used today for solving bi-objective problems with the epsilon-constraint method on problems declared with JuMP). Both should be integrated into vOptGeneric shortly (it is a matter of free time for us).
BTW, FICO has also recently announced the integration of MOP features into XPRESS.
These last months, we have extended HiGHS to 2LP
Nice :smile: Is there a link to the code? Does it support independent objective senses?
The code is already on GitHub. We are now testing it. It will be available in the coming weeks (as soon as the tests are completed and the code is cleaned of French comments).
HiGHS2LP implements the 3rd phase of the simplex algorithm. From the optimal solution obtained with the simplex algorithm, it iterates to generate all the nondominated points.
HiGHS2LP is used exactly as HiGHS (see hereafter); the only difference comes from the second objective that the user has to specify. Both objectives are given in maximization or in minimization.
The solver computes a minimum complete set of efficient solutions $X_E$ and the corresponding nondominated points $Y_N$. It also returns the vector of critical weights $\lambda$. Here is an example (coming from the book of Matthias Ehrgott, pages 154-155):
The 2LP problem to solve:
min  3x1 + x2
min  -x1 - 2x2
s.t. x2 <= 3
     3x1 - x2 <= 6
     x1, x2 >= 0
The description for HiGHS2LP:
model.lp_.num_col_ = 2;
model.lp_.num_row_ = 2;
model.lp_.sense_ = ObjSense::kMinimize;
model.lp_.col_cost_ = {3, 1};
model.lp_.col_cost2_ = {-1, -2};
model.lp_.col_lower_ = {0, 0};
model.lp_.col_upper_ = {inf, inf};
model.lp_.row_lower_ = {-inf, -inf};
model.lp_.row_upper_ = {3, 6};
model.lp_.a_matrix_.start_ = {0, 1, 3};
model.lp_.a_matrix_.index_ = {1, 0, 1};
model.lp_.a_matrix_.value_ = {3, 1, -1};
The output:
Running HiGHS 1.2.2 [date: 2022-09-27, git hash: n/a]
Copyright (c) 2022 ERGO-Code under MIT licence terms
Solving LP without presolve or with basis
Iteration        Objective     Infeasibilities num(sum)
        0     0.0000000000e+00 Pr: 0(0) 0s
        2     1.2000000000e+01 Pr: 0(0); Du: 2(3) 0s
Model status : Optimal
Simplex iterations : 2
Objective value : 1.2000000000e+01
Objective value2 : -9.0000000000e+00
HiGHS run time : 0.00
Model status : Optimal
Simplex iteration count : 2
Objective function value for c1 : 12
Objective function value for c2 : -9
Primal solution status : Feasible
Dual solution status : Feasible
Basis : Valid
Column 0; value = 3; dual c1 = 0; dual c2 = 0; status: Basic
Column 1; value = 3; dual c1 = 0; dual c2 = 0; status: Basic
Row 0; value = 3; dual c1 = 2; dual c2 = -2.33333; status: At upper bound
Row 1; value = 6; dual c1 = 1; dual c2 = -0.333333; status: At upper bound
Basic feasible solutions :
(0 0) -> (0 0)
(0 3) -> (3 -6)
(3 3) -> (12 -9)
Weights :
1; 0.666667; 0.25;
In the output we get:
To do:
Both objectives are given in maximization or in minimization
Okay. I think this settles it for me. Having a vector-valued objective function with a single objective sense is the way to go, since that's what all solvers supporting multiple objectives will support.
If, in future, we find that users keep asking for [Min, Max], we could add some sort of bridge to JuMP that would convert it into a pure minimization problem to pass to the solver.
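The idea of that bridge, sketched at the user level (nothing here is implemented; f1 and f2 are placeholder expressions):
# Instead of "Min f1, Max f2", the bridge would pass the solver:
@objective(model, Min, [f1, -f2])
# and negate the second entry of objective_value when reporting results.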
So the PR for people to review if interested is https://github.com/jump-dev/MathOptInterface.jl/pull/2070
soon, develop a bridge from JuMP/MOI for calling the binary of HiGHS2LP
This should be easy, we can copy HiGHS.jl, but it will need https://github.com/jump-dev/MathOptInterface.jl/pull/2070 first.
@xgandibleux here's the documentation for the PR: https://jump.dev/JuMP.jl/previews/PR3176/manual/objective/#Set-a-vector-valued-objective
Take a read. Any comments on the syntax? Are you happy to model problems with a vector-valued objective?
@odow, I installed and tested this code on my computer, and it is great! Having these two ways to set the objectives fits the expectations discussed here and the practice of vOptGeneric users, and it is viewed as convenient by colleagues active in MOP (I did a quick tour of my close collaborators).
Shall we move on to
(1) the integration of 2D/3D MOP solving methods called by optimize!(), taking as parameters:
(2) the retrieval of outputs, at least for the 2- and 3-objective cases:
Perhaps starting with a 2D e-constraint method (which calls a MILP solver among the 3 listed) for solving an IP will help to design the coming steps. The code corresponding to this method is available in vOptGeneric, which can be a starting point. Another attempt could be to revisit Gurobi.jl in order to deliver a 2-IP and to return the $Y_N$ and $X_E$ corresponding to a lexicographic resolution.
I've already implemented support in Gurobi.jl, and MOO.jl contains a couple of simple algorithms that we can move to vOptGeneric once this is released.
Here's what it looks like with these branches:
(hgh) pkg> st
Status `/private/tmp/hgh/Project.toml`
[2e9cd046] Gurobi v0.11.5 `https://github.com/jump-dev/Gurobi.jl.git#od/vector-optimization`
[87dc4568] HiGHS v1.4.2
[4076af6c] JuMP v1.7.0 `https://github.com/jump-dev/JuMP.jl.git#od/vector-optimization`
[0327d340] MOO v0.0.1 `https://github.com/odow/MOO.jl#master`
[b8f27783] MathOptInterface v1.11.5 `https://github.com/jump-dev/MathOptInterface.jl.git#od/vector-optimization`
julia> using JuMP
julia> import Gurobi, HiGHS, MOO
julia> begin
           p1 = [77, 94, 71, 63, 96, 82, 85, 75, 72, 91, 99, 63, 84, 87, 79, 94, 90]
           p2 = [65, 90, 90, 77, 95, 84, 70, 94, 66, 92, 74, 97, 60, 60, 65, 97, 93]
           w = [80, 87, 68, 72, 66, 77, 99, 85, 70, 93, 98, 72, 100, 89, 67, 86, 91]
           model = Model()
           set_silent(model)
           @variable(model, x[1:length(w)], Bin)
           @objective(model, Max, [p1' * x, p2' * x])
           @constraint(model, w' * x <= 900)
           model
       end
A JuMP Model
Maximization problem with:
Variables: 17
Objective function type: Vector{AffExpr}
`AffExpr`-in-`MathOptInterface.LessThan{Float64}`: 1 constraint
`VariableRef`-in-`MathOptInterface.ZeroOne`: 17 constraints
Model mode: AUTOMATIC
CachingOptimizer state: NO_OPTIMIZER
Solver name: No optimizer attached.
Names registered in the model: x
julia> set_optimizer(model, Gurobi.Optimizer)
julia> optimize!(model)
julia> for i in 1:result_count(model)
           Y = objective_value(model; result = i)
           println("Result $i : Objective value = $Y")
           X = findall(xi -> xi > 0.9, value.(x; result = i))
           println(" Items = $X")
       end
Result 1 : Objective value = [934.0, 971.0]
Items = [2, 3, 5, 6, 8, 10, 11, 12, 15, 16, 17]
Result 2 : Objective value = [905.0, 897.0]
Items = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
julia> set_optimizer(model, () -> MOO.Optimizer(HiGHS.Optimizer))
julia> set_optimizer_attribute(model, MOO.Algorithm(), MOO.NISE())
julia> optimize!(model)
julia> for i in 1:result_count(model)
           Y = objective_value(model; result = i)
           println("Result $i : Objective value = $Y")
           X = findall(xi -> xi > 0.9, value.(x; result = i))
           println(" Items = $X")
       end
Result 1 : Objective value = [955.0, 906.0]
Items = [2, 3, 5, 6, 9, 10, 11, 14, 15, 16, 17]
Result 2 : Objective value = [948.0, 939.0]
Items = [1, 2, 3, 5, 6, 8, 10, 11, 15, 16, 17]
Result 3 : Objective value = [934.0, 971.0]
Items = [2, 3, 5, 6, 8, 10, 11, 12, 15, 16, 17]
Result 4 : Objective value = [918.0, 983.0]
Items = [2, 3, 4, 5, 6, 8, 10, 11, 12, 16, 17]
julia> set_optimizer_attribute(model, MOO.Algorithm(), MOO.Hierarchical())
julia> optimize!(model)
julia> for i in 1:result_count(model)
           Y = objective_value(model; result = i)
           println("Result $i : Objective value = $Y")
           X = findall(xi -> xi > 0.9, value.(x; result = i))
           println(" Items = $X")
       end
Result 1 : Objective value = [955.0, 906.0]
Items = [2, 3, 5, 6, 9, 10, 11, 14, 15, 16, 17]
Note that Gurobi returns multiple results, but the second is not on the frontier, it is just an integer feasible point.
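If that matters for a given application, filtering the returned points is a one-liner. A small sketch (the helper name is made up), assuming a maximization problem and the results given as a vector of objective vectors:
# Keep only points not weakly dominated by another returned point:
function nondominated(results::Vector{Vector{Float64}})
    return [y for y in results if !any(z -> z != y && all(z .>= y), results)]
end

# e.g. nondominated([[934.0, 971.0], [905.0, 897.0]]) drops the second point.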
Merged! Thanks for all of the input over the last few years.
This issue is for a discussion on introducing multiobjective support to JuMP (and MOI).
Passing multiple objectives to the solver
There are two possibilities:
1. Extend @objective to support vector-valued functions.
2. Add an index keyword to @objective, so the user can set each objective separately.
Option (1) is probably the easiest, since it fits quite well into the MOI framework, and JuMP already has support for parsing vector-valued functions.
M.O. solvers would just declare:
The biggest downside with (1) is that it would only support linear and quadratic objectives. We wouldn't be able to directly support nonlinear objectives. (You could, of course, introduce a dummy variable with a nonlinear equality constraint.)
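The dummy-variable workaround mentioned there would look something like this (a sketch using the JuMP syntax of the era; it assumes the vector-objective support of option (1)):
using JuMP

model = Model()
@variable(model, x)
@variable(model, t)
@NLconstraint(model, t == sin(x))   # t stands in for the nonlinear term
# t can now appear in a (vector-valued) linear objective:
@objective(model, Min, [t, x])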
Querying results from the solver
I think the easiest way for M.O. solvers to return the Pareto frontier is for them to slightly abuse the notion of ResultCount. We should implement this, and then add keyword arguments to value and objective_value. The only issue with objective_value (and objective_bound and dual_objective_bound) is that they expect a Float64 return type. This would be relaxed to depend on the type of the objective function.
Example
If MultiJuMP implemented a MOI.Optimizer solver, we could write:
cc @anriseth, @matbesancon