JWally / jsLPSolver

A simple OOP JavaScript library for solving linear programs and mixed-integer linear programs
The Unlicense

Multi-Objective Optimization #19

Closed JWally closed 8 years ago

JWally commented 8 years ago

@bchevalier

What are your thoughts on trying to handle optimization problems with multiple objectives? I've figured out a way to do it, but it's not pretty, and I wanted your input on the route I'm taking and whether it's even worth including.

Here's a use case:

Some diets allow you to eat as much junk food as you want and still lose weight and gain muscle as long as your ratio of carbs to protein to fat stays in check.

My optimal ratio requires me to eat 375 grams of carbs, 225 grams of protein, and 66.666 grams of fat per day. I also want to eat as much bacon, cheese, and french fries as possible.

How much of what should I eat to satisfy these requirements?

Below is what I'm proposing multi-objective models look like (no changes are needed to your existing code, since models with an array for the optimize attribute could be routed to different handling).

{ 
    "name": "Food Problem",
    "optimize": [
        {"key":"bacon", "opType": "max"},
        {"key": "cheddar cheese", "opType": "max"},
        {"key": "french fries", "opType": "max"}
    ],
    "constraints": { 
        "carb": { "equal": 375 },
        "protein": { "equal": 225 },
        "fat": { "equal": 66.666 } 
    },
    "variables": { 
         "egg white":{ "carb": 0.0073, "protein": 0.109, "fat": 0.0017, "egg white": 1 },
         "egg whole":{ "carb": 0.0072, "protein": 0.1256, "fat": 0.0951, "egg whole": 1 },
         "cheddar cheese":{ "carb": 0.0128, "protein": 0.249, "fat": 0.3314, "cheddar cheese": 1 },
         "bacon":{ "carb": 0.00667, "protein": 0.116, "fat": 0.4504, "bacon": 1 },
         "potato": { "carb": 0.1747, "protein": 0.0202, "fat": 0.0009, "potato": 1 },
         "french fries": { "carb": 0.3902, "protein": 0.038, "fat": 0.1612, "french fries": 1 }
    } 
}

Here's what I've come up with so far, referencing the above model:

  1. Maximize and minimize the model for each attribute to be optimized:
bacon: {most: 138, least: 0}
cheese: {most: 189, least: 0}
fries: {most: 388, least: 0}
  2. Add a nonsense attribute to each variable that you'll solve on, since this approach works by updating constraints and checking that the model stays feasible.
  3. Create a function that accepts an array representing the minimum each optimization attribute should reach (or the maximum, if the attribute is being minimized).

For example, I could give it [44, 66, 115] (for bacon, cheese, and fries), and it would update the model to look like this:

    "constraints": { 
        "carb": { "equal": 375 },
        "protein": { "equal": 225 },
        "fat": { "equal": 66.666 },
        "bacon": { "min": 44},
        "cheddar cheese": { "min": 66 },
        "french fries": { "min": 115 }        
    }

3a. The function would then solve the model with these new constraints, returning the following:

{ feasible: true,
  result: 3534.71621236,
  bacon: 49.05518453,
  'egg white': 1483.68745307,
  potato: 1820.97357477,
  'cheddar cheese': 66,
  'french fries': 115}

3b. Compare the results for the optimization attributes to their potential best (step 1). For instance, for cheese in this example, you'd compute:

   (66 - 0 (worst)) / (189 (best) - 0 (worst))

Which gives 0.35. Multiply all of the results together to get 0.0366677. This is our score; return it.

  4. Create some optimization function (gradient descent?) that will maximize the score in as few steps as possible.
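The normalize-and-multiply scoring from step 3b could be sketched like this (the function name `scoreSolution` and the shape of `targets` are illustrative, not part of the solver's API):

```javascript
// Sketch of the step-3b scoring: normalize each objective between its
// worst and best values (found in step 1), then multiply the scores.
function scoreSolution(solution, targets) {
    var total = 1;
    for (var key in targets) {
        var t = targets[key];
        // Where does this value fall on the worst-to-best scale?
        total *= (solution[key] - t.worst) / (t.best - t.worst);
    }
    return total;
}

// Using the numbers from the example above:
var targets = {
    "bacon": {best: 138, worst: 0},
    "cheddar cheese": {best: 189, worst: 0},
    "french fries": {best: 388, worst: 0}
};
var solution = {"bacon": 49.055, "cheddar cheese": 66, "french fries": 115};
scoreSolution(solution, targets); // ≈ 0.0368
```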

Below is what I have so far (since I can't upload it), where I use brute force to solve the above problem. It's very unrefined at this point and very much a work in progress.


var solver = require("./src/solver");

var the_model = { 
    "name": "Food Problem",
    "optimize": [
        {"key":"bacon", "opType": "max"},
        {"key": "cheddar cheese", "opType": "max"},
        {"key": "french fries", "opType": "max"}
    ],
    "constraints": { 
        "carb": { "equal": 375 },
        "protein": { "equal": 225 },
        "fat": { "equal": 66.666 } 
    },
    "variables": { 
         "egg white":{ "carb": 0.0073, "protein": 0.109, "fat": 0.0017, "egg white": 1 },
         "egg whole":{ "carb": 0.0072, "protein": 0.1256, "fat": 0.0951, "egg whole": 1 },
         "cheddar cheese":{ "carb": 0.0128, "protein": 0.249, "fat": 0.3314, "cheddar cheese": 1 },
         "bacon":{ "carb": 0.00667, "protein": 0.116, "fat": 0.4504, "bacon": 1 },
         "potato": { "carb": 0.1747, "protein": 0.0202, "fat": 0.0009, "potato": 1 },
         "french fries": { "carb": 0.3902, "protein": 0.038, "fat": 0.1612, "french fries": 1 }
    } 
};

function Fit_Fx(){}

Fit_Fx.prototype.load = function(model){

    // Some variables to help
    // keep track of stuff down
    // below
    var tmp_obj = {},
        tmp_sol = {},
        tmp_key = "",
        tmp_num = 0;

    // Create an object on *this* to house
    // all of our fitness info
    this._data = {

        // Where we hold the things
        // we want to optimize
        "targets": {},

        // This'll be easier than
        // having to eat up `Object.keys`
        "keys": [],

        // And we'll want a hard and fast
        // version of the model
        // so here
        "model": JSON.parse(JSON.stringify(model))
    };

    // Loop over the array of objectives
    // storing the pertinents of each
    // somewhere in our _data object
    for(var i = 0; i < model.optimize.length; i++){

        // get the key we're working with
        tmp_key = model.optimize[i].key;

        // Load the target's key into the "Keys" object
        this._data.keys.push(tmp_key);

        // push the model's details into this._data.targets
        tmp_obj = this._data.targets[model.optimize[i].key] = model.optimize[i];

        // add a best and worst to each of these objects
        // not sure if we'll need them; but in case we do,
        // it'll be nice #YAGNI
        tmp_obj.best = 0;
        tmp_obj.worst = tmp_obj.opType === "max" ? 1e99 : 0;

    }

    // Loop over our keys, and figure out what the optimum
    // is for each target in isolation...
    for(i = 0; i < this._data.keys.length; i++){

        // since we're not doing anything *too* weird
        // we don't need to `JSON.parse(stringify))`
        // the model
        tmp_obj = this._data.model;

        // set the model's optimize and optypes for best
        tmp_obj.optimize = this._data.keys[i];
        tmp_obj.opType = this._data.targets[tmp_obj.optimize].opType;

        // Solve the model
        tmp_sol = solver.Solve(tmp_obj);

        // Update the best
        this._data.targets[tmp_obj.optimize].best = tmp_sol.result;

        // Do the whole thing again, but aimed @ worst
        tmp_obj.opType = tmp_obj.opType === "max" ? "min" : "max";

        // Solve the bad model
        tmp_sol = solver.Solve(tmp_obj);

        // Note to self...
        // this feels like shadow prices
        // or something...
        // Because this is aimed at "trivial"
        // examples, this isn't expensive; 
        // but there's probably
        // a lot more effecient way of
        // doing this...

        // Update the worst
        this._data.targets[tmp_obj.optimize].worst = tmp_sol.result;
    }

    // Finally, give each variable some non-sense
    // attribute to optimize
    for(i in this._data.model.variables){
        this._data.model.variables[i].fake_target = 1;
    }

}

// Easy function to get details out
// should we want them...
Fit_Fx.prototype.details = function(){
    return this._data;
}

// Now we do the actual scoring function
Fit_Fx.prototype.score = function(vector){

    // Copy the model, get the keys, and targets
    var model = this._data.model,
        keys = this._data.keys,
        targets = this._data.targets,
        tmp_opt = "",
        tmp_sol = {},
        tmp_obj = {},
        tmp_num = 0,
        total = 1;

    // Update the model to go after the fake target
    // and set optype to "max" because...who cares
    model.opType = "max";
    model.optimize = "fake_target";

    // Set Constraints based on the vectors
    for(var i = 0; i < keys.length; i++){
        model.constraints[keys[i]] = model.constraints[keys[i]] || {};
        tmp_opt = targets[keys[i]].opType === "max" ? "min" : "max";
        model.constraints[keys[i]][tmp_opt] = vector[i];
    }

    // Solve it now!
    tmp_sol = solver.Solve(model);

    // Make sure its feasible
    if(tmp_sol.feasible){

        // Iterate over the keys again...
        for(i = 0; i < keys.length; i++){

            tmp_obj = targets[keys[i]];

            // First off, does the key
            // exist in the solution?
            if(tmp_sol[keys[i]]){
                if(tmp_obj.opType === "max"){
                    tmp_num = (tmp_sol[keys[i]] - tmp_obj.worst) / (tmp_obj.best - tmp_obj.worst);
                } else {
                    // For minimized objectives the scale flips:
                    // worst is the largest value, best the smallest
                    tmp_num = (tmp_obj.worst - tmp_sol[keys[i]]) / (tmp_obj.worst - tmp_obj.best);
                }
                total *= tmp_num;
            } else {

                // target doesn't exist in the solution;
                // return a score of 0
                return 0;
            }
        }

        // Return what we got
        tmp_sol.score = total;
        return tmp_sol;
    } else {

        // Solution isn't feasible...
        return 0;
    }

}

function loop_index(nbr, arry){
    var rtrn = [];

    for(var i = 0; i < arry.length; i++){
        rtrn.push(nbr % arry[i]);
        nbr = Math.floor(nbr / arry[i]);
    }
    return rtrn;
}

// Instantiate the function
var fitness = new Fit_Fx();

// Load the model into the function
// to set up scoring
fitness.load(the_model);

// Play Example for testing

var ary = [138, 189, 388],
        tst = [],
        tmp,
        rslts = [],
        global = {score: -1e99};

var num = 11;

for(var i = 0; i < num * num * num; i++){
    tst = (loop_index(i,[num, num, num]).map(function(d, b){return d/(num - 1) * ary[b]}));
    tmp = fitness.score(tst);

    if(tmp !== 0){
        global = tmp.score > global.score ? tmp : global;
    }
}

console.log(global);
bchevalier commented 8 years ago

A multi-objective solver would be a nice addition!

Maybe you can simplify the description of the model to something like this:

{
    "name": "Food Problem",
    "optimize": {
        "bacon": "max",
        "cheddar cheese": "max",
        "french fries": "max"
    },
    ...
}

I understand that solving each function individually would get you 3 optimal solutions. But I am not sure I understand the scoring system; could you provide some insight into the logic behind it?

Also, your algorithm seems to provide only one output, but what you want is a list of all the pareto-optimal solutions: https://en.wikipedia.org/wiki/Pareto_efficiency.

bchevalier commented 8 years ago

One more thing to consider: it would be nice if the method was compatible with the MILP solver.

bchevalier commented 8 years ago

By the way, the solution I would naively use to solve multi-objective functions would be to solve the weighted sum of the objectives: max lambda1 * bacon + lambda2 * cheddar + lambda3 * fries with the constraint lambda1 + lambda2 + lambda3 = 1

The algorithm would be as follows: First, I would solve two sub-problems where lambda1 = 0 and lambda1 = 1. If both solutions are the same, then an interval lambda1 in [0, 1] is obtained inside of which changing lambda1 is not going to have any effect on the objective. On the other hand, if the solutions are different, the problem where lambda1 = 0.5 is solved; if the solution is the same as with lambda1 = 0, then an interval lambda1 in [0, 0.5] is obtained inside of which changing lambda1 is not going to have any effect on the objective. By repeating the process it should be possible to obtain a list of such intervals for lambda1 and then, for each of those intervals, the process should be repeated with lambda2. That will generate a list of pairs of intervals for lambda1 and lambda2, for which the process should be repeated with lambda3.

I am not completely sure whether the solution I propose would work. I imagine it could work due to the nature of linear problems (they are linear!).
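One way to sketch the weighted-sum idea against the existing model format: fold the objectives into one synthetic attribute and solve the single-objective model as usual. The attribute name "weighted" and the helper `weightedObjective` below are illustrative, not existing API:

```javascript
// Give each variable a synthetic "weighted" attribute equal to the
// weighted sum of its objective coefficients, then optimize that.
function weightedObjective(model, weights) {
    var copy = JSON.parse(JSON.stringify(model));
    for (var name in copy.variables) {
        var v = copy.variables[name], sum = 0;
        for (var objective in weights) {
            sum += (v[objective] || 0) * weights[objective];
        }
        v.weighted = sum;
    }
    copy.optimize = "weighted";
    copy.opType = "max";
    return copy;
}

// A tiny model: with lambda1 = lambda2 = 0.5 the "bacon" variable gets
// a weighted coefficient of 0.5, and likewise for cheese:
var tiny = {
    constraints: { "fat": { "max": 66 } },
    variables: {
        "bacon": { "fat": 0.4504, "bacon": 1 },
        "cheddar cheese": { "fat": 0.3314, "cheddar cheese": 1 }
    }
};
var single = weightedObjective(tiny, {"bacon": 0.5, "cheddar cheese": 0.5});
// Solving is then a normal call: solver.Solve(single)
```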

JWally commented 8 years ago

Your method looks simpler and faster, but I need to dig into it to understand it better. Here's my logic for how the scoring function would work (the description assumes all objectives are to be maximized, for ease of description):

SETUP Find the max and min of each objective-attribute. These are the extremes for the attribute: for any future solution, the value of this attribute has to fall somewhere in this range or the model is infeasible.

EVALUATION Take an array of values, which will become min-constraints for their objective-attributes. Update the model with these new constraints and solve.

SCORING If the solution is feasible, figure out where each objective-attribute falls in its min / max scale. Take bacon for instance in the model above.

I can have at most 138g of bacon and have the model be feasible and I can have at least 0g of bacon and have the model be feasible. Say that when I solve the model for whatever I put in, it gives me 48g of bacon. On a scale of 0 to 138, with 138 being perfect, 48 is 35% perfect. This is my bacon score. I then multiply all of my scores together, and get a total score for the model.

I'm not sure how well this scoring method does, but so far it seems to work OK. I tried adding the scores at first, but that led to lopsided solutions. For example:

addition: (100% bacon + 0% cheese + 0% fries) = (33% bacon + 33% cheese + 33% fries) = 100%

multiplication: (100% b * 0% c * 0% f) < (33% b * 33% c * 33% f) < (100% b * 100% c * 100% f)
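The difference can be checked directly; `sumScore` and `productScore` are illustrative helpers, not solver API:

```javascript
// With addition, a lopsided solution ties a balanced one; with
// multiplication, any objective near its worst drags the score to 0.
function sumScore(scores) { return scores.reduce(function(a, b){ return a + b; }, 0); }
function productScore(scores) { return scores.reduce(function(a, b){ return a * b; }, 1); }

sumScore([1, 0, 0]);           // 1
sumScore([1/3, 1/3, 1/3]);     // ≈ 1 -- addition can't tell these apart
productScore([1, 0, 0]);       // 0
productScore([1/3, 1/3, 1/3]); // ≈ 0.037 -- multiplication prefers balance
```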

bchevalier commented 8 years ago

I think I understand what your algorithm does. But you actually reduce the multi-objective problem to a single-objective problem and keep only the solution that maximizes the product of the 3 scores?

When solving a MO (multi-objective) problem you should obtain a set of solutions in the pareto-front. For instance (150 bacons, 30 cheese, 0 fries) and (110 bacons, 30 cheese, 10 fries) could both be part of the pareto-front and they cannot be compared to one another as one solution is better with respect to the quantity of bacon but worse on the fries, and who can tell which is best between bacon and fries?
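That dominance relation can be made concrete with a small filter. This is a sketch that assumes all objectives are maximized; the helper names are illustrative:

```javascript
// a dominates b if a is at least as good on every objective
// and strictly better on at least one.
function dominates(a, b, keys) {
    var strictlyBetter = false;
    for (var i = 0; i < keys.length; i++) {
        if (a[keys[i]] < b[keys[i]]) { return false; }
        if (a[keys[i]] > b[keys[i]]) { strictlyBetter = true; }
    }
    return strictlyBetter;
}

// Keep only the non-dominated solutions (the pareto front).
function paretoFront(solutions, keys) {
    return solutions.filter(function(s) {
        return !solutions.some(function(other) {
            return dominates(other, s, keys);
        });
    });
}

// Neither example point dominates the other, so both survive:
paretoFront([
    {bacon: 150, cheese: 30, fries: 0},
    {bacon: 110, cheese: 30, fries: 10}
], ["bacon", "cheese", "fries"]).length; // 2
```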

By the way, I thought about the suggestion I made and there is something that bothers me performance-wise. The algorithm will keep splitting the interval that encloses a valid value for lambda (one that corresponds to a pareto-optimal solution) until a given precision is reached. That makes the method a little intensive in terms of operations. Therefore I think it would work better if the MO problem was reformulated using the Karush-Kuhn-Tucker conditions, minimizing or maximizing the lambda values.

N.B. I am currently working on implementing an API to solve quadratic objective functions, such as min (x - 4)^2 + (y - 1)^2, and part of the solution involves reformulating the problem using the Karush-Kuhn-Tucker (KKT) conditions. So I will try to make a PR to generate the additional constraints necessary to verify the KKT conditions, in order to use them in both the MO and the QP (quadratic problem) solvers.

JWally commented 8 years ago

Sweet! I'll hold off until your next pr.

Also, I think I'm going to try to update the readme.md to reflect some of the functionality added since I first wrote it. (Maybe not here, but) could you explain your PR from earlier today and what a use case might be for it?

JWally commented 8 years ago

@bchevalier

Was thinking about this earlier and had a question. Going back to this model:

{ 
    "name": "Food Problem",
    "optimize": {
        "bacon": "max",
        "cheddar cheese": "max",
        "french fries": "min"
    },
    "constraints": { 
        "carb": { "equal": 375 },
        "protein": { "equal": 225 },
        "fat": { "equal": 66.666 } 
    },
    "variables": { 
         "egg white":{ "carb": 0.0073, "protein": 0.109, "fat": 0.0017, "egg white": 1 },
         "egg whole":{ "carb": 0.0072, "protein": 0.1256, "fat": 0.0951, "egg whole": 1 },
         "cheddar cheese":{ "carb": 0.0128, "protein": 0.249, "fat": 0.3314, "cheddar cheese": 1 },
         "bacon":{ "carb": 0.00667, "protein": 0.116, "fat": 0.4504, "bacon": 1 },
         "potato": { "carb": 0.1747, "protein": 0.0202, "fat": 0.0009, "potato": 1 },
         "french fries": { "carb": 0.3902, "protein": 0.038, "fat": 0.1612, "french fries": 1 }
    } 
}

When I solve for each objective, I get the following values for that objective and the other objectives:

[
    {cheese: 189.11, fries: 0, bacon: 0},  // => cheese: 189.11
    {cheese: 0, fries: 388.84, bacon: 0},  // => fries: 388.84
    {cheese: 0, fries: 0, bacon: 138.08}   // => bacon: 138.08
]

If I look at these 3 points as vertices of a plane in (delicious) 3d space, would anything on that plane be Pareto optimal, and anything outside of it be infeasible?

If I wanted to find a simultaneous max for all 3 objectives, I think I could just apply the mid-point formula to these points and come up with a combination where nothing could be added to any objective without having to take away from another. Then add these values as equality constraints to the model and re-solve on some nonsense variable. The same approach seems like it should work for any model like this where the objective functions are homogeneous (all max or all min).

I'm still trying to work out how to handle multi-objective models with mixed-type objective functions (min and max).

Does this make sense at all?
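The mid-point step could be sketched as a small helper that averages each attribute across the isolated-optimum vertices (the function name is illustrative):

```javascript
// Average each attribute across the vertices: (x1 + x2 + x3) / 3.
function midpoint(vertices) {
    var out = {};
    for (var i = 0; i < vertices.length; i++) {
        for (var key in vertices[i]) {
            out[key] = (out[key] || 0) + vertices[i][key] / vertices.length;
        }
    }
    return out;
}

midpoint([
    {cheese: 189.11, fries: 0, bacon: 0},
    {cheese: 0, fries: 388.84, bacon: 0},
    {cheese: 0, fries: 0, bacon: 138.08}
]);
// => roughly {cheese: 63.04, fries: 129.61, bacon: 46.03}
```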

JWally commented 8 years ago

Something like this:

var solver = require("./src/solver");

var model = { 
    "name": "Food Problem",
    "optimize": {
        "bacon": "max",
        "cheddar cheese": "max",
        "french fries": "max"
    },
    "constraints": { 
        "carb": { "equal": 375 },
        "protein": { "equal": 225 },
        "fat": { "equal": 66.666 }
    },
    "variables": { 
         "egg white":{ "carb": 0.0073, "protein": 0.109, "fat": 0.0017, "egg white": 1 },
         "egg whole":{ "carb": 0.0072, "protein": 0.1256, "fat": 0.0951, "egg whole": 1 },
         "cheddar cheese":{ "carb": 0.0128, "protein": 0.249, "fat": 0.3314, "cheddar cheese": 1 },
         "bacon":{ "carb": 0.00667, "protein": 0.116, "fat": 0.4504, "bacon": 1 },
         "potato": { "carb": 0.1747, "protein": 0.0202, "fat": 0.0009, "potato": 1 },
         "french fries": { "carb": 0.3902, "protein": 0.038, "fat": 0.1612, "french fries": 1 }
    } 
};

// 1. Optimize for each constraint
// 2. Find the mid-point for each attribute
var objectives = model.optimize,
    new_constraints = JSON.parse(JSON.stringify(model.optimize)),
    keys = Object.keys(model.optimize),
    tmp,
    max_counter = 0,
    min_counter = 0;

// Delete the optimize object from the model
delete model.optimize;

// Iterate and Clear
for(var i = 0; i < keys.length; i++){
    // Clean up the new_constraints
    new_constraints[keys[i]] = 0;
}

// Solve and add
for(var i = 0; i < keys.length; i++){
    // Prep the model
    model.optimize = keys[i];
    model.opType = objectives[keys[i]];

    // add 1 to the counter...:-) !
    if(model.opType === "max"){
        max_counter += 1;
    } else {
        min_counter += 1;
    }

    // solve the model
    tmp = solver.Solve(model);

    // console.log(keys[i], tmp.result);

    // Iterate over the keys
    // and update our new constraints
    for(var j = 0; j < keys.length; j++){
        if(tmp[keys[j]]){
            new_constraints[keys[j]] += tmp[keys[j]];
        }
    }
}

// Trying to find the mid-point
// divide each constraint by the
// number of constraints
// *midpoint formula*
// (x1 + x2 + x3) / 3
for(i = 0; i < keys.length; i++){   
    // Tell me...are we using min count or max
    if(objectives[keys[i]] === "max"){
        model.constraints[keys[i]] = {"equal": new_constraints[keys[i]] / max_counter};
    } else {
        model.constraints[keys[i]] = {"equal": new_constraints[keys[i]] / min_counter};
    }

}

// Give the model a fake thing to optimize on
model.optimize = "cheater";
model.opType = "max";

// And add the fake attribute to the variables
// in the model
for(var x in model.variables){
    model.variables[x].cheater = 1;
}
console.log("\n\nLOOK HERE \n\n");
console.log(solver.Solve(model));
JWally commented 8 years ago

Thinking through this: right now it's only able to do this for variables. Pure attributes don't work yet...

bchevalier commented 8 years ago

It doesn't seem that you will get the full pareto front this way. The method I am working on right now to tackle quadratic optimization problems could be used to solve MO problems. I do not have much time to work on it right now, but I will try to integrate it into the JSLP within 2 weeks.

The idea is to transform the optimisation problem into a satisfaction problem using the KKT conditions and then maximize/minimize the Lagrange multipliers while adding cutting planes (like in the MILP solver) in order to cover the full range of solutions. Some preprocessing can also be applied in order to ignore useless constraints (constraints that are going to be inactive at optimal points) by checking for collinearity with other constraints and linear independence with the objective functions.

bchevalier commented 8 years ago

I think I figured out a solution that would be quite efficient and is straightforward enough: add a solveMO method to the Tableau class. This method would work in 4 steps:

Step 1 - Find an initial feasible solution that is part of the pareto front, i.e. solve max 0.5 * objective1 + 0.5 * objective2. Go to Step 2.

Step 2 - Add the current solution to the set of pareto-optimal solutions. Create a set of candidates for performing a pivot operation from the current solution. A pivot candidate is a pair of variables that can be used to perform a pivot operation; it consists of a leaving variable and an entering variable. Therefore, finding all the candidates consists in finding all the pairs of (entering variable, leaving variable), just as in the phase2 method (except that the current phase2 selects only one pair). Go to Step 3.

Step 3 - For each candidate, check whether pivoting will improve at least one objective. If so, perform the pivot and apply Step 2 again, skipping pairs of variables that have already been candidates. If there are no more candidates, go to Step 4.

Step 4 - The set of pareto-optimal solutions now consists of constraint intersection points (the simplex moves from one intersection to another). However, if you draw a MO problem on a graph, you can notice that the pareto front also consists of edges on some sides of the feasible region. These edges contain infinitely many pareto-optimal solutions and need to be added to the set of pareto-optimal solutions. Finding these edges is now trivial: select a pareto-optimal solution A and, among all the other pareto-optimal solutions, find a solution B that is connected to solution A by only one constraint. This means that the edge going from solution A to solution B contains pareto-optimal solutions. If you repeat the process you should get all the edges. This will give you only 1d edges (segments); to get 2d edges (surfaces) you have to find triples of solutions (A, B, C) that differ by only one constraint, and so on until you find edges up to dimension kd, where k is the number of objectives.

Ignoring Step 4 for now, it would already be a good start to have an algorithm for Steps 1 to 3.
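The acceptance test in Step 3 could look something like this (a hypothetical helper, not Tableau code; `senses` maps each objective key to "max" or "min"):

```javascript
// A pivot candidate is worth taking if the candidate solution improves
// at least one objective relative to the current solution.
function improvesSomeObjective(current, candidate, senses) {
    for (var key in senses) {
        if (senses[key] === "max" && candidate[key] > current[key]) { return true; }
        if (senses[key] === "min" && candidate[key] < current[key]) { return true; }
    }
    return false;
}

improvesSomeObjective(
    {bacon: 100, fries: 50},
    {bacon: 90, fries: 60},
    {bacon: "max", fries: "max"}
); // true -- fries improved, so the pivot is performed
```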

JWally commented 8 years ago

Couple of questions:

Say that I want to make a drink consisting of scotch and soda. My glass can hold 100 mL of fluid. I want to have as much scotch and soda as possible.

How should I mix my drink? Is there a wrong answer as long as 100 mL of something is in the cup? In this scenario, does every combination that equals 100 mL represent the pareto front?

Given this model (dumb as it is) what do you think solver should return?

bchevalier commented 8 years ago

As long as there is 100 mL of scotch and soda combined, there is no possible wrong answer. Yes, any combination that equals 100 mL would be part of the pareto-front, and the pareto-front would consist only of combinations of scotch and soda that equal 100 mL.

One solution of the pareto-front would be (scotch, soda) = (100, 0). Another solution would be (scotch, soda) = (0, 100). In the algorithm I propose these would be two pareto-optimal solutions of dimension 0. And the edge {(scotch, soda) = (100, 0) <-> (scotch, soda) = (0, 100)} would be a pareto-optimal solution of dimension 1 that represents all the combinations of scotch and soda that equal 100 mL, such as (scotch, soda) = (75, 25) and (scotch, soda) = (50, 50).

The solver should return paretoFront: [[(100, 0), (0, 100)], [{(100, 0) <-> (0, 100)}]]. In reality, you should be able to remove solutions of dimension k-1 if they appear in a solution of dimension k. In our example we could remove solutions (100, 0) and (0, 100) because they belong to {(100, 0) <-> (0, 100)}. But it is probably more user-friendly to also return them.

JWally commented 8 years ago

Couple of questions:

  1. Would it be possible to just capture an array of these vertices and still be able to represent the pareto front? e.g. [{soda: 0, scotch: 100}, {soda: 100, scotch: 0}]? If so, could you just find the solution for each objective's optimum and save it as a vertex?
  2. If you know the edge points of the polytope, is there a way I could plug in a value for one thing and get the coordinates for everything else along the pareto front? e.g. being able to answer: how much scotch do I need if I use 75 mL of soda? How much soda do I need for 10 mL of scotch? For a line it seems like you could just figure out y = mx + b, but for a polytope I don't know where to start.

If this is possible (no. 2) what might be neat is if the solution you got back included the paretos and min/max for each objective (which would be found by looking through the paretos) like this:

...
paretos: [{scotch: 0, soda: 100},{scotch: 100, soda: 0}],
extremes: {scotch: {min: 0, max: 100}, soda: {min: 0, max: 100}}
...

Then, if you wanted to know how much scotch you need @ 55 ml of soda you could do this:

var a = solver.Locate({soda: 55},  [{scotch: 0, soda: 100},{scotch: 100, soda: 0}]);

where a would equal [{soda: 55, scotch: 45}].

Thoughts?
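A hypothetical Locate for a 1d pareto edge could just be linear interpolation between two vertices (a sketch; Locate is not part of the solver's API):

```javascript
// Given one fixed value (e.g. {soda: 55}) and the two endpoints of an
// edge, interpolate every other variable along that edge.
function locate(fixed, vertexA, vertexB) {
    var key = Object.keys(fixed)[0],
        // How far along the edge from A to B does the fixed value fall?
        t = (fixed[key] - vertexA[key]) / (vertexB[key] - vertexA[key]),
        out = {};

    if (t < 0 || t > 1) { return null; } // off the edge: not on the front

    for (var k in vertexA) {
        out[k] = vertexA[k] + t * (vertexB[k] - vertexA[k]);
    }
    return out;
}

locate({soda: 55}, {scotch: 0, soda: 100}, {scotch: 100, soda: 0});
// => {scotch: 45, soda: 55}
```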

Edit:

If my line of thinking is right (no promise it is), you wouldn't even necessarily need a "Locate" method. You could just adjust your model, setting the value of the objective you want as an equality constraint and removing it as an objective, then re-solve. The new pareto front this generates should lie on the same front as the original model, no?

var solver = require("./src/Polyopt"),
    validate = require("./src/Validation"),
    model = {
        optimize: {
            scotch: "max",
            soda: "max"
        },
        constraints: {
            fluid: {equal: 100},
        },
        variables: {
            scotch: {fluid: 1, scotch: 1},
            soda: {fluid: 1, soda: 1}
        }
    }

console.log(solver(model));

which produces this:

{ midpoint: { feasible: true, result: -0, scotch: 50, soda: 50 },
  pareto: [ { scotch: 100, soda: 0 }, { soda: 100, scotch: 0 } ],
  ranges: { scotch: { min: 0, max: 100 }, soda: { min: 0, max: 100 } } }

Soda = 55, Scotch = ? can be solved with this:

    model = {
        optimize: {
            scotch: "max",
            soda: "max"
        },
        constraints: {
            fluid: {equal: 100},
            soda: {equal: 55}
        },
        variables: {
            scotch: {fluid: 1, scotch: 1},
            soda: {fluid: 1, soda: 1}
        }
    }

yielding this:

{ midpoint: { feasible: true, result: -0, scotch: 45, soda: 55 },
  pareto: [ { scotch: 45, soda: 55 } ],
  ranges: { scotch: { min: 45, max: 45 }, soda: { min: 55, max: 55 } } }

so the solutions in the new pareto stay on the old pareto. Is this correct?

bchevalier commented 8 years ago

For question 1, the answer is no: it is not enough to just store the vertices, because it doesn't tell the user which combinations of vertices form an edge.

For question 2, it looks like the equation for a 2d plane has 3 variables (see plane on wiki). So I imagine that an equation for a hyperplane of dimension k has k + 1 variables. To find the equation it is necessary to solve a square system of linear equations, which should be feasible. Giving the equation as an output to the user is nice but not enough; a method to test whether a point on the plane is outside the boundaries of the edge is also necessary.

As an alternative to providing an equation for the hyperplane and a method to check for the inclusion of a point on an edge, the Locate method you describe can be useful, and I think it works, as you obtain a subset of the initial pareto-front.

JWally commented 8 years ago

I went ahead and added a new method to solver, "MultiObjective" (I kept it out of Solve because it uses Solve, and the models for each are different). It takes 2 arguments, "model" (self-explanatory) and "detail". If detail is true, it'll return:

  1. The mid-point (midpoint)
  2. The solution for each objective solved in isolation (pareto)
  3. The value ranges of each variable in the solutions (range)

If false, it'll just return the mid-point.

For what I use this for, I think these three values give me more than enough information, but I'm still open to changing it. I still need to update the readme and bump the version number.

JWally commented 8 years ago

btw, do you want to be listed in the authors in the package.json? If so, what would you like me to put?

bchevalier commented 8 years ago

Maybe the second argument could take 3 possible values: "midpoint", "partial", or "full".

About being listed in package.json, I think you could add the following lines:

  "contributors": [
    "Brice Chevalier <bchevalier@wizcorp.jp>"
  ]

Thank you!

JWally commented 8 years ago

Updated package.json to include your information as contributor. Thank you!

Also merged your PR. Just an FYI, the "real" entry point to the library is at src/main.js. src/solver.js gets re-written by Browserify every time I run grunt prod. This process combines everything from the requires and puts it into a format that works in both node and the browser, in the file src/solver.js, which is where it's always been (a.k.a. it doesn't break anyone using an older version).

Your comment above about having the user specify "midpoint", "partial", or "full": do you think this makes for a better interface because the user has to be explicit about what they're asking for? Or is it just preference? I'm cool with it either way, but was interested to know the rationale behind the suggestion.

bchevalier commented 8 years ago

Thanks for the info.

About my comment: it's because when it comes to solving a multi-objective optimisation problem, the exhaustive result is the set of all the points on the pareto-front. If your argument is named "detail" it might confuse the user as to what the returned result is. "midpoint", "partial", or "full" seem more understandable from the point of view of someone trying to solve a multi-objective optimisation problem. In my opinion, the default value should even be the full pareto-front. If the user simply wants to solve a combination of 2 objectives, he can do the math beforehand and transform it into a single objective (although it's true that that is less user-friendly).

JWally commented 8 years ago

Good point. For brevity, I think I'm going to just have it return the whole thing (an object with midpoint, ranges, and vertices) and let the user handle it from there. This also makes it pretty easy to add pareto to the return without breaking anything in the API.

JWally commented 8 years ago

Done with this issue for now. Closing. Bumped new version out to NPM last night.

bchevalier commented 8 years ago

Thanks!