JWally closed this issue 8 years ago
A multi-objective solver would be a nice addition!
Maybe you can simplify the description of the model to something like this:
{
"name": "Food Problem",
"optimize": {
"bacon": "max",
"cheddar cheese": "max",
"french fries": "max"
},
...
}
I understand that solving each function individually would get you 3 optimal solutions, but I am not sure I understand the scoring system; could you provide some insight into the logic behind it?
Also, your algorithm seems to provide only one output, but what you want is a list of all the pareto-optimal solutions: https://en.wikipedia.org/wiki/Pareto_efficiency.
One more thing to consider: it would be nice if the method was compatible with the MILP solver.
By the way, the solution I would naively use to solve multi-objective functions would be to solve the weighted sum of the objectives: max lambda1 * bacon + lambda2 * cheddar + lambda3 * fries
with the constraint lambda1 + lambda2 + lambda3 = 1.
The algorithm would be as follows:
First, I would solve two sub-problems, one with lambda1 = 0 and one with lambda1 = 1. If both solutions are the same, then an interval lambda1 in [0, 1] is obtained inside of which changing lambda1 is not going to have any effect on the objective. On the other hand, if the solutions are different, the problem with lambda1 = 0.5 is solved; if the solution is the same as with lambda1 = 0, then an interval lambda1 in [0, 0.5] is obtained inside of which changing lambda1 is not going to have any effect on the objective. By repeating the process it should be possible to obtain a list of such intervals for lambda1, and then, for each of those intervals, the process should be repeated with lambda2. That will generate a list of pairs of intervals for lambda1 and lambda2, for each of which the process should be repeated with lambda3.
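The interval-splitting idea described above could be sketched roughly as follows. This is only an illustration: solveWeighted is a hypothetical stand-in for solving the LP with the weighted-sum objective for a given lambda and returning some comparable key identifying which solution is optimal; it is not part of the solver.

```javascript
// Sketch of the interval-splitting idea for a single weight.
// `solveWeighted(lambda)` is a hypothetical stand-in: it solves the LP
// with the weighted-sum objective and returns a comparable key
// identifying the optimal solution.
function findIntervals(solveWeighted, lo, hi, precision) {
  var a = solveWeighted(lo);
  var b = solveWeighted(hi);
  // Same solution at both ends: changing lambda inside [lo, hi]
  // has no effect on the objective (by linearity of the problem)
  if (a === b) return [{ from: lo, to: hi, solution: a }];
  // Stop splitting once the desired precision is reached
  if (hi - lo <= precision) {
    return [
      { from: lo, to: lo, solution: a },
      { from: hi, to: hi, solution: b }
    ];
  }
  // Otherwise, split at the mid-point and recurse on both halves
  var mid = (lo + hi) / 2;
  return findIntervals(solveWeighted, lo, mid, precision)
    .concat(findIntervals(solveWeighted, mid, hi, precision));
}
```

For example, with a mock solver whose optimum changes at lambda = 0.3, the returned list of intervals narrows down on that breakpoint to the requested precision.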
I am not completely sure whether the solution I propose would work. I imagine it could work due to the nature of linear problems (they are linear!).
Your method looks simpler and faster, but I need to dig into it to understand it better. Here's my logic for how the scoring function would work (the description assumes all objectives are to be maximized, for ease of description):
SETUP Find the max and min of each objective-attribute. These are the extremes for this attribute. For any solution in the future, the value of this attribute has to fall somewhere in this range or the model is infeasible.
EVALUATION Take an array of values, which will become min-constraints for their objective-attributes. Update the model with these new constraints and solve.
SCORING If the solution is feasible, figure out where each objective-attribute falls in its min / max scale. Take bacon for instance in the model above.
I can have at most 138g of bacon and have the model be feasible and I can have at least 0g of bacon and have the model be feasible. Say that when I solve the model for whatever I put in, it gives me 48g of bacon. On a scale of 0 to 138, with 138 being perfect, 48 is 35% perfect. This is my bacon score. I then multiply all of my scores together, and get a total score for the model.
I'm not sure how well this scoring method does, but so far it seems to work ok. I tried adding at first, but that led to lop-sided solutions. For example
addition: (100% bacon + 0% cheese + 0% fries) = (33% bacon + 33% cheese + 33% fries) = 100%
multiplication: (100% b * 0% c * 0% f) < (33% b * 33% c * 33% f) < (100% b * 100% c * 100% f)
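A minimal sketch of this multiplicative scoring (the names here are illustrative, not the actual implementation), assuming extremes holds the min/max found for each objective-attribute in the SETUP step:

```javascript
// Multiplicative scoring: each objective-attribute is rated on its own
// min/max scale, then the ratings are multiplied together, so a
// lop-sided solution (one attribute at 0%) scores 0 overall.
function scoreSolution(solution, extremes) {
  var total = 1;
  Object.keys(extremes).forEach(function (key) {
    var min = extremes[key].min;
    var max = extremes[key].max;
    // Where does this attribute fall between its extremes?
    total *= ((solution[key] || 0) - min) / (max - min);
  });
  return total;
}
```

For the bacon example above, scoreSolution({bacon: 48}, {bacon: {min: 0, max: 138}}) gives roughly 0.35.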
I think I understand what your algorithm does. But you actually reduce the multi-objective problem to a single objective problem and keep only the solution that maximizes the multiplication of the 3 scores?
When solving a MO (multi-objective) problem you should obtain a set of solutions in the pareto-front.
For instance (150 bacons, 30 cheese, 0 fries)
and (110 bacons, 30 cheese, 10 fries)
could both be part of the pareto-front and they cannot be compared to one another as one solution is better with respect to the quantity of bacon but worse on the fries, and who can tell which is best between bacon and fries?
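This incomparability can be made concrete with a small dominance check. A sketch, not part of the solver, with all objectives assumed to be maximized:

```javascript
// a dominates b if a is at least as good on every objective and
// strictly better on at least one (all objectives maximized here)
function dominates(a, b, objectives) {
  var asGood = objectives.every(function (k) { return a[k] >= b[k]; });
  var better = objectives.some(function (k) { return a[k] > b[k]; });
  return asGood && better;
}

// Keep only the solutions that no other solution dominates:
// these are the members of the pareto-front
function paretoFilter(solutions, objectives) {
  return solutions.filter(function (s) {
    return !solutions.some(function (t) {
      return dominates(t, s, objectives);
    });
  });
}
```

Neither (150, 30, 0) nor (110, 30, 10) dominates the other, so both survive the filter; a solution like (100, 20, 0) would be removed because (150, 30, 0) beats it on every objective.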
By the way, I thought about the suggestion I made and there is something that bothers me performance-wise. The algorithm will keep splitting the interval that encloses a valid value for lambda (one that corresponds to a pareto-optimal solution) until a given precision is reached, which makes the method a little intensive in terms of operations. Therefore I think it would work better if the MO problem was reformulated using the Karush-Kuhn-Tucker (KKT) conditions, minimizing or maximizing the lambda values.
N.B. I am currently working on implementing an API to solve quadratic objective functions, such as min (x - 4)^2 + (y - 1)^2, and part of the solution involves reformulating the problem using the KKT conditions. So I will try to make a PR to generate the additional constraints necessary to verify the KKT conditions, in order to use them in both the MO and the QP (quadratic problem) solvers.
Sweet! I'll hold off until your next PR.
Also, I think I'm going to try and update the readme.md to reflect some of the functionality added since I first wrote it. (Maybe not here), but could you explain your PR from earlier today, and what a use case might be for it?
@bchevalier
Was thinking about this earlier and had a question. Going back to this model:
{
"name": "Food Problem",
"optimize": {
"bacon": "max",
"cheddar cheese": "max",
"french fries": "min"
},
"constraints": {
"carb": { "equal": 375 },
"protein": { "equal": 225 },
"fat": { "equal": 66.666 }
},
"variables": {
"egg white":{ "carb": 0.0073, "protein": 0.109, "fat": 0.0017, "egg white": 1 },
"egg whole":{ "carb": 0.0072, "protein": 0.1256, "fat": 0.0951, "egg whole": 1 },
"cheddar cheese":{ "carb": 0.0128, "protein": 0.249, "fat": 0.3314, "cheddar cheese": 1 },
"bacon":{ "carb": 0.00667, "protein": 0.116, "fat": 0.4504, "bacon": 1 },
"potato": { "carb": 0.1747, "protein": 0.0202, "fat": 0.0009, "potato": 1 },
"french fries": { "carb": 0.3902, "protein": 0.038, "fat": 0.1612, "french fries": 1 }
}
}
When I solve for each objective, I get the following values for that objective and the other objectives:
[
  {cheese: 189.11, fries: 0, bacon: 0},  // => cheese: 189.11
  {cheese: 0, fries: 388.84, bacon: 0},  // => fries: 388.84
  {cheese: 0, fries: 0, bacon: 138.08}   // => bacon: 138.08
]
If I look at these 3 points as vertices of a plane in (delicious) 3d space, would anything on that plane be Pareto optimal, and anything outside of it be infeasible?
If I wanted to find a simultaneous max for all 3 objectives, I think I could just do a mid-point formula between the points and come up with a combination where nothing could be added to any objective without having to take away from another. Then add these as equality constraints to the model and re-solve on some nonsense variable. The same approach seems like it should work for any model like this where the objective functions are homogeneous (all max / min).
I'm still trying to work out how to handle multi-objective models with mixed-type objective functions (min and max).
Does this make sense at all?
Something like this:
var solver = require("./src/solver");

var model = {
    "name": "Food Problem",
    "optimize": {
        "bacon": "max",
        "cheddar cheese": "max",
        "french fries": "max"
    },
    "constraints": {
        "carb": { "equal": 375 },
        "protein": { "equal": 225 },
        "fat": { "equal": 66.666 }
    },
    "variables": {
        "egg white": { "carb": 0.0073, "protein": 0.109, "fat": 0.0017, "egg white": 1 },
        "egg whole": { "carb": 0.0072, "protein": 0.1256, "fat": 0.0951, "egg whole": 1 },
        "cheddar cheese": { "carb": 0.0128, "protein": 0.249, "fat": 0.3314, "cheddar cheese": 1 },
        "bacon": { "carb": 0.00667, "protein": 0.116, "fat": 0.4504, "bacon": 1 },
        "potato": { "carb": 0.1747, "protein": 0.0202, "fat": 0.0009, "potato": 1 },
        "french fries": { "carb": 0.3902, "protein": 0.038, "fat": 0.1612, "french fries": 1 }
    }
};

// 1. Optimize for each objective
// 2. Find the mid-point for each attribute
var objectives = model.optimize,
    new_constraints = JSON.parse(JSON.stringify(model.optimize)),
    keys = Object.keys(model.optimize),
    tmp,
    max_counter = 0,
    min_counter = 0;

// Delete the optimize object from the model
delete model.optimize;

// Iterate and clean up the new_constraints
for (var i = 0; i < keys.length; i++) {
    new_constraints[keys[i]] = 0;
}

// Solve and add
for (var i = 0; i < keys.length; i++) {
    // Prep the model
    model.optimize = keys[i];
    model.opType = objectives[keys[i]];

    // add 1 to the counter...:-) !
    if (model.opType === "max") {
        max_counter += 1;
    } else {
        min_counter += 1;
    }

    // solve the model
    tmp = solver.Solve(model);
    // console.log(keys[i], tmp.result);

    // Iterate over the keys
    // and update our new constraints
    for (var j = 0; j < keys.length; j++) {
        if (tmp[keys[j]]) {
            new_constraints[keys[j]] += tmp[keys[j]];
        }
    }
}

// Trying to find the mid-point:
// divide each constraint by the
// number of constraints
// *midpoint formula*
// (x1 + x2 + x3) / 3
for (i = 0; i < keys.length; i++) {
    // Tell me...are we using min count or max
    if (objectives[keys[i]] === "max") {
        model.constraints[keys[i]] = { "equal": new_constraints[keys[i]] / max_counter };
    } else {
        model.constraints[keys[i]] = { "equal": new_constraints[keys[i]] / min_counter };
    }
}

// Give the model a fake thing to optimize on
model.optimize = "cheater";
model.opType = "max";

// And add the fake attribute to the variables
// in the model
for (var x in model.variables) {
    model.variables[x].cheater = 1;
}

console.log("\n\nLOOK HERE \n\n");
console.log(solver.Solve(model));
Thinking through this: right now it's only able to do this to variables. Pure attributes don't work yet...
It doesn't seem that you will get the full pareto front this way. The method I am working on right now to tackle quadratic optimization problems could be used to solve MO problems; I do not have much time to work on it right now but I will try to integrate it into the JSLP within 2 weeks.
The idea is to transform the optimisation problem into a satisfaction problem using the KKT conditions and then maximize/minimize the lagrange multipliers while adding cutting planes (like in the MILP solver) in order to cover the full range of solutions. Some preprocessing can also be applied in order to ignore useless constraints (constraints that are going to be inactive at optimal points) by checking for colinearity with other constraints and linear independence with the objective functions.
I think I figured out a solution that would be quite efficient and straightforward enough: add a solveMO method to the Tableau class. This method would work in 4 steps:
Step 1 - Find an initial feasible solution that is part of the pareto front, i.e. solve max 0.5 * objective1 + 0.5 * objective2. Go to Step 2.
Step 2 - Add the current solution to the set of pareto-optimal solutions. Create a set of candidates to perform a pivot operation from the current solution. A pivot candidate is a couple of variables that can be used to perform a pivot operation; it consists of a leaving variable and an entering variable. Therefore finding all the candidates consists in finding all the pairs of (entering variable, leaving variable), just as in the phase2 method (except that the current phase2 selects only one pair). Go to Step 3.
Step 3 - For each candidate, check whether pivoting will improve at least one objective. If it does, perform the pivot and apply Step 2, skipping pairs of variables that have already been candidates. If there are no more candidates, go to Step 4.
Step 4 - The set of pareto-optimal solutions now consists of constraint intersection points (the simplex moves from one intersection to another). However, if you draw a MO problem on a graph, you can notice that the pareto front also consists of edges on some sides of the feasible region. These edges contain infinitely many pareto-optimal solutions and they need to be added to the set of pareto-optimal solutions. Finding these edges is now trivial: select a pareto-optimal solution A, and among all the other pareto-optimal solutions, find a solution B that is connected to solution A by only one constraint. This means that the edge going from solution A to solution B contains pareto-optimal solutions. If you repeat the process you should get all the edges. This will give you only 1d edges (segments); in order to get 2d edges (surfaces) you have to find triples of solutions (A, B, C) that differ by only one constraint, and so on until you find edges up to dimension kd, where k is the number of objectives.
While ignoring Step 4 for now, it would already be a good start to have an algorithm for steps 1 to 3.
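The 1d edge detection in Step 4 could be sketched roughly like this, assuming (hypothetically) that each pareto-optimal vertex records which constraints are active at it; the real Tableau class would have to expose that information somehow:

```javascript
// Two pareto-optimal vertices are joined by a pareto edge when their
// sets of active constraints differ by exactly one constraint.
// The `active` arrays here are a hypothetical representation.
function findParetoEdges(vertices) {
  var edges = [];
  for (var i = 0; i < vertices.length; i++) {
    for (var j = i + 1; j < vertices.length; j++) {
      var a = vertices[i].active;
      var b = vertices[j].active;
      var shared = a.filter(function (c) { return b.indexOf(c) !== -1; });
      // Each vertex shares all but one of its active constraints
      if (shared.length === a.length - 1 && shared.length === b.length - 1) {
        edges.push([vertices[i], vertices[j]]);
      }
    }
  }
  return edges;
}
```

In the scotch-and-soda example below, the vertices (100, 0) and (0, 100) both sit on the fluid constraint and differ in one bound each, so the sketch would report a single edge between them.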
Couple of questions:
Say that I want to make a drink consisting of scotch and soda. My glass can hold 100 mL of fluid. I want to have as much scotch and soda as possible.
How should I mix my drink? Is there a wrong answer as long as 100 mL of something is in the cup? In this scenario, does every combination that equals 100 mL represent the pareto front?
Given this model (dumb as it is) what do you think solver should return?
As long as there is 100mL of either scotch or soda there is no possible wrong answer. Yes, any combination that equals 100mL would be part of the pareto-front, and the pareto-front would only consist of combinations of scotch and soda that equal 100mL.
One solution of the pareto-front would be (scotch, soda) = (100, 0). Another solution would be (scotch, soda) = (0, 100). In the algorithm I propose these would be two pareto-optimal solutions of dimension 0, and the edge {(scotch, soda) = (100, 0) <-> (scotch, soda) = (0, 100)} would be a pareto-optimal solution of dimension 1 that represents all the combinations of scotch and soda that equal 100mL, such as (scotch, soda) = (75, 25) and (scotch, soda) = (50, 50).
The solver should return paretoFront: [[(100, 0), (0, 100)], [{(100, 0) <-> (0, 100)}]].
In reality, you should be able to remove solutions of dimension k-1 if they appear in a solution of dimension k. In our example we could remove the solutions (100, 0) and (0, 100) because they belong to {(100, 0) <-> (0, 100)}. But it is probably more user-friendly to also return them.
Couple of questions:
1. Would the pareto front here just be [{soda: 0, scotch: 100},{soda: 100, scotch: 0}]? If so, could you just find the solution for each objective's optimum and save it as a vertex?
2. Is there a way to get an equation for the front, like y=mx+b but for a polytope? I don't know where to start.
If this is possible (no. 2), what might be neat is if the solution you got back included the paretos and min/max for each objective (which would be found by looking through the paretos) like this:
...
paretos: [{scotch: 0, soda: 100},{scotch: 100, soda: 0}],
extremes: {scotch: {min: 0, max: 100}, soda: {min: 0, max: 100}}
...
Then, if you wanted to know how much scotch you need @ 55 ml of soda you could do this:
var a = solver.Locate({soda: 55}, [{scotch: 0, soda: 100},{scotch: 100, soda: 0}]);
where a would equal [{soda: 55, scotch: 45}]
.
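Such a Locate method (a hypothetical name and signature, not an existing solver API) could be sketched for the two-vertex case: since the pareto front between two adjacent LP vertices is a straight segment, the answer follows from linear interpolation:

```javascript
// Given a fixed value for one attribute and the two vertices bounding
// a pareto edge, interpolate linearly along the edge.
// Returns null if the requested value falls outside the segment.
function locate(fixed, vertices) {
  var key = Object.keys(fixed)[0];
  var value = fixed[key];
  var a = vertices[0];
  var b = vertices[1];
  // How far along the edge does the fixed value sit?
  var t = (value - a[key]) / (b[key] - a[key]);
  if (t < 0 || t > 1) return null;
  var result = {};
  Object.keys(a).forEach(function (k) {
    result[k] = a[k] + t * (b[k] - a[k]);
  });
  return [result];
}
```

With this sketch, locate({soda: 55}, [{scotch: 0, soda: 100}, {scotch: 100, soda: 0}]) yields [{scotch: 45, soda: 55}], matching the example above.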
Thoughts?
Edit:
If my line of thinking is right (no promise it is), you wouldn't even necessarily need a "Locate" method. You could just adjust your model, setting the value of the objective that you want as an equality constraint, remove it as an objective, and re-solve. The new pareto front that this generates should be on the same front as the original model, no?
var solver = require("./src/Polyopt"),
    validate = require("./src/Validation"),
    model = {
        optimize: {
            scotch: "max",
            soda: "max"
        },
        constraints: {
            fluid: { equal: 100 }
        },
        variables: {
            scotch: { fluid: 1, scotch: 1 },
            soda: { fluid: 1, soda: 1 }
        }
    };

console.log(solver(model));
which produces this:
{ midpoint: { feasible: true, result: -0, scotch: 50, soda: 50 },
pareto: [ { scotch: 100, soda: 0 }, { soda: 100, scotch: 0 } ],
ranges: { scotch: { min: 0, max: 100 }, soda: { min: 0, max: 100 } } }
Soda = 55, Scotch = ? can be solved by solving this:
model = {
    optimize: {
        scotch: "max",
        soda: "max"
    },
    constraints: {
        fluid: { equal: 100 },
        soda: { equal: 55 }
    },
    variables: {
        scotch: { fluid: 1, scotch: 1 },
        soda: { fluid: 1, soda: 1 }
    }
};
yielding this:
{ midpoint: { feasible: true, result: -0, scotch: 45, soda: 55 },
pareto: [ { scotch: 45, soda: 55 } ],
ranges: { scotch: { min: 45, max: 45 }, soda: { min: 55, max: 55 } } }
with the solutions in the new pareto front staying on the old pareto front. Is this correct?
For question 1, the answer is no: it is not enough to just store the vertices, because that doesn't tell the user which combinations of vertices form an edge.
For question 2, it looks like the equation for a 2d plane has 3 variables (see plane on wiki). So I imagine that an equation for a hyperplane of dimension k has k + 1 variables. To find the equation it is necessary to solve a square system of linear equations, which should be feasible. Giving the equation as an output to the user is nice but not enough; a method to test whether a point on the plane is outside the boundaries of the edge is also necessary.
As an alternative to providing an equation for the hyperplane and a method to check for the inclusion of a point on an edge, the Locate method you describe can be useful, and I think it works as you obtain a subset of the initial pareto-front.
I went ahead and added a new method to solver, "MultiObjective" (I kept it out of Solve because it uses Solve, and the models for each are different). It takes 2 arguments, "model" (self-explanatory) and "detail". If detail is true, it'll return
if false, it'll just return the mid-point.
For what I use this for, I think these three values give me more than enough information, but I'm still open to changing it. I still need to update the read-me and bump the version number.
btw, do you want to be listed in the authors in the package.json? If so, what would you like me to put?
Maybe the 2nd argument could take 3 possible values:
About being listed in package.json, I think you could add the following lines:
"contributors": [
"Brice Chevalier <bchevalier@wizcorp.jp>"
]
Thank you!
Updated package.json to include your information as contributor. Thank you!
Also merged your PR. Just an FYI, the "real" entry point to the library is at src/main.js. src/solver.js gets re-written every time I run grunt prod, by browserify. This process combines everything from the requires and puts it into a format that works in node and the browser in the file src/solver.js, which is where it's always been (a.k.a. it doesn't break anyone using an older version).
Your comment above about having the user specify "midpoint", "partial", or "full"; do you think this makes for a better interface because the user has to be explicit about what they're asking for? Or is this just preference? I'm cool with it either way, but was interested to know your rationale behind the suggestion.
Thanks for the info.
About my comment, it's because when it comes to solving a multi-objective optimisation problem, the exhaustive result is the set of all the points on the pareto-front. If your argument has the name "detail" it might confuse the user as to what the returned result is. "midpoint", "partial", or "full" seem more understandable from the point of view of someone trying to solve a multi-objective optimisation problem. In my opinion, the default value should even be the full pareto-front. If the user simply wants to solve a combination of 2 objectives he can do the math beforehand and transform it into a single objective (although it's true that that is less user-friendly).
Good point. For brevity, I think I'm going to just have it return the whole thing (object with midpoint, ranges, and vertices) and let the user handle it from there. This also makes it pretty easy to add pareto to the return without breaking anything in the API.
Done with this issue for now. Closing. Bumped new version out to NPM last night.
Thanks!
@bchevalier
What are your thoughts on trying to handle optimization problems with multiple objectives? I've figured out a way to do it, but it's not pretty, and I wanted your input on the route I'm taking, and whether it's even worth including.
Here's a use case:
Below is what I'm proposing multi-opt models look like (no code changes need to happen with what you've done, since models with an array for the optimize attribute could be handled differently).
Here's what I've come up with so far, referencing the above model:
For example, I could give it [44, 66, 115] (for bacon, cheese, and fries), and it would update the model to look like this:
3a. The function would then solve the model with these new constraints, returning the following:
3b. Compare the results for the optimization attributes to their potential best (step 1). For instance, for cheese in this example, you'd go:
Which returns 0.35. Multiply all of the results together to get 0.0366677. This is our score. Return it out.
Below is what I have so far (since I can't upload it) where I use brute force to solve the above problem. Very un-refined at this point and very much a work in progress.