mbelmadani / moead-py

A Python implementation of the decomposition based multi-objective evolutionary algorithm (MOEA/D)
GNU Lesser General Public License v3.0

MOEAD for a multi-objective minimisation problem #3

Closed: samantha2017 closed this issue 3 years ago

samantha2017 commented 5 years ago

Hi,

I am testing your MOEAD implementation on a minimisation problem and comparing the results with NSGA2. The results show that MOEAD does not explore as much as NSGA2. As the current version works for the knapsack problem, should I expect this version to work for a minimisation problem as well, or should I make some alterations inside the MOEAD code to set it up for a minimisation problem?

Thank you,

mbelmadani commented 5 years ago

Hi @samantha2017, thank you for your interest.

Short(ish) answers:

The minimization/maximization is handled by the weights vector. See in knapsack.py:

```python
# Create the item dictionary: item name is an integer, and value is
# a (weight, value) 2-uple.
if objectives == 2:
    creator.create("Fitness", base.Fitness, weights=(-1.0, 1.0))
elif objectives == 3:
    creator.create("Fitness", base.Fitness, weights=(-1.0, 1.0, -1.0))
else:
    print "No evaluation function available for", objectives, "objectives."
    sys.exit(-1)
```

and there's also a specific case that sets solutions above the maximum weight to a penalty value:

```python
if len(individual) > MAX_ITEM or weight > MAX_WEIGHT:
    return 1e30, 0  # Ensure overweighted bags are dominated
return weight, value
```

By the way, the weights parameter in creator.create is not the weight for the knapsack problem; rather, it's a DEAP-specific argument used to determine whether each objective is minimized or maximized (by giving the score a negative or positive weight).
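To illustrate the idea, here is a minimal DEAP-free sketch (not DEAP's actual internals): each raw objective value is multiplied by its weight, and all comparisons happen on the weighted values, so a negative weight turns a minimized objective into a maximized one internally.

```python
# Minimal sketch (no DEAP dependency) of how a weights tuple sets the
# optimization direction: raw objective values are multiplied element-wise
# by the weights, and Pareto dominance is checked on the weighted values.
def weighted(values, weights):
    return tuple(v * w for v, w in zip(values, weights))

def dominates(a, b, weights):
    """True if fitness tuple a Pareto-dominates b under the given weights."""
    wa, wb = weighted(a, weights), weighted(b, weights)
    return all(x >= y for x, y in zip(wa, wb)) and wa != wb

weights = (-1.0, 1.0)  # minimize weight, maximize value
# A bag that is both lighter and more valuable dominates:
print(dominates((5.0, 100.0), (7.0, 90.0), weights))  # True
print(dominates((7.0, 90.0), (5.0, 100.0), weights))  # False
```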

In the 2-objective problem, the objectives are weight and value. The vector weights=(-1.0, 1.0) means minimize weight and maximize value (because of the negative and positive weights). Running the problem with the defaults, I get:

PF:
Individual([]) (0.0, 0.0)
Individual([18]) (1.0, 58.809604705903865)
Individual([8]) (3.0, 91.8681725120232)
Individual([16, 15]) (5.0, 98.12940887039005)
Individual([8, 1]) (7.0, 140.89181437934283)
Individual([16, 8, 13]) (8.0, 204.3494391855252)
Individual([16, 8, 13, 7]) (11.0, 204.79152553404785)
Individual([8, 16, 18, 7, 15]) (12.0, 249.2492724368398)
Individual([0, 8, 18, 6]) (18.0, 268.9954652525366)
Individual([8, 16, 6, 14]) (19.0, 291.9489400299374)
Individual([3, 8, 13, 14, 16, 19]) (25.0, 315.9748289679994)
Individual([2, 6, 8, 9, 16, 18]) (28.0, 357.6331562911988)
Individual([0, 1, 6, 8, 12, 18]) (29.0, 376.4639707940928)
Individual([1, 6, 8, 11, 13, 14, 15, 16]) (34.0, 451.22704972862465)
Individual([1, 2, 4, 6, 8, 9, 13, 14, 16]) (45.0, 503.6735822904543)
Individual([1, 3, 4, 8, 9, 12, 13, 14, 15, 16, 18]) (47.0, 564.3772647858195)

Which makes sense: one of the solutions is a 0-weight knapsack, also with a score of 0, but nothing can beat a knapsack with no items in terms of weight, so it makes it into the Pareto front.

At the other end, there's a solution with 11 items, a weight of 47 (the heaviest in the Pareto front but still under the maximum of 50 declared in the problem) and a value of 564. Anything lighter than that has a lower value.

If I edit the vector to be (1.0, 1.0), both objectives should be maximized (also changing the penalty to 0 for both objectives, the worst possible solution). The results are:

PF:
Individual([1, 2, 3, 4, 5, 6, 7, 8, 10, 17, 19]) (50.0, 709.1012667321274)
Individual([1, 4, 6, 7, 8, 9, 10, 12, 13, 16, 17]) (49.0, 722.9057222751666)

Here the algorithm explicitly searched for the heaviest and most valuable bags. The solutions are much higher in value, but none of them is lighter than 49, meaning the algorithm wasn't able to find anything just as valuable but lighter than 49. That could be why you're not exploring the solutions you would hope to find.

Hope this is helpful. Feel free to continue to discuss or let me know if there's anything you think can be improved with the code or documentation, and don't hesitate if you have further questions!

samantha2017 commented 5 years ago

Hi @mbelmadani,

Thank you so much for the great explanation, very helpful! And the idea of initial weights rather than uniform ones is really cool; I should work on that!

By the way, the problem I am actually solving is EMO for binary classification, with the minority and majority class rates as conflicting objectives. I have a few questions that it would be great if you could help me with, please.

1- In NSGA2 the final internal population is returned as the approximation to the Pareto front (PF), so the number of solutions in the PF of NSGA2 is equal to pop_size. However, in your MOEAD implementation you use a halloffame, which is a DEAP Pareto front (tools.ParetoFront()). The PF of MOEAD is the set of solutions in halloffame, and its size is not necessarily equal to pop_size. For the purpose of comparing these two algorithms, is it fair to compare the two PFs even though the numbers of solutions are not the same? (I have seen that the original MOEA/D paper (Qingfu Zhang) doesn't use the external population and instead uses the final population. I tried this, but the PF was terrible!)

2- In my problem (an EMO application) I mainly care about finding a set of solutions with a good trade-off close to the z-point, which means I care less about the diversity of solutions along the PF. With the current MOEAD I get better solutions close to the ideal point than with NSGA2 (the middle part of MOEAD's PF dominates the solutions in NSGA2's PF). Since I would like to highlight this outcome, do you have any suggestions for appropriate metrics/indicators for the comparison? Hypervolume is not great, as in my problem MOEAD does not have great exploration, though it has good exploitation. DEAP has convergence, diversity, and IGD; I'm not sure whether any of them is appropriate for this purpose.

Thank you so much in advance for your time,

mbelmadani commented 5 years ago

Sorry for the delay,

1) I don't think it's unfair to compare PFs of different sizes (as long as your evaluation measure doesn't unfairly benefit from having more solutions). I guess it depends on whether it's easy for you to test different solutions, or whether you'd rather have just a few very good solutions instead of many solutions where one is near-optimal. Even different runs of exactly the same algorithm can produce Pareto fronts with different numbers of solutions, since the count changes based on what dominates other solutions (in the first front, at least).
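As a small illustration of why the size varies, here is a sketch of the idea behind tools.ParetoFront (not DEAP's actual code): the hall of fame keeps only the non-dominated solutions seen, so its size depends on the domination structure rather than on pop_size.

```python
# Sketch of the idea behind DEAP's tools.ParetoFront: keep only the
# non-dominated fitness tuples. Objectives here follow the knapsack
# example, weights (-1.0, 1.0): minimize weight, maximize value.
def dominates(a, b):
    wa = (-a[0], a[1])  # weight minimized, value maximized
    wb = (-b[0], b[1])
    return all(x >= y for x, y in zip(wa, wb)) and wa != wb

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

pop = [(0.0, 0.0), (1.0, 58.8), (3.0, 91.9), (2.0, 40.0)]
print(pareto_front(pop))  # [(0.0, 0.0), (1.0, 58.8), (3.0, 91.9)]
# 4 individuals in the population, but only 3 make the front.
```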

2) Convergence might make sense in your case if you're able to use an "optimal front"; you could track convergence as your algorithm runs to see whether it improves continuously or in more sporadic jumps. Something that improves gradually and steadily seems ideal, but depending on the problem that might not always be easy.
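For reference, IGD (one of the DEAP metrics mentioned above) is straightforward to compute against a reference front; a rough sketch:

```python
import math

# Sketch of inverted generational distance (IGD): the mean Euclidean
# distance from each reference-front point to its nearest obtained
# solution. Lower is better; it penalizes both poor convergence and
# poor coverage of the reference front.
def igd(obtained, reference):
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sum(min(dist(r, s) for s in obtained) for r in reference) / len(reference)

# A front sitting exactly on the reference scores 0:
print(igd([(1.0, 0.0), (0.0, 1.0)], [(1.0, 0.0), (0.0, 1.0)]))  # 0.0
```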

Maybe an additional suggestion: you could plot the objectives' stats log output and see how they individually progress against one another. Maybe one of your objectives is exploited more rapidly by one algorithm than by the other. In my case I could tell which method worked better because one of my objectives was open-ended (a p-value, which can get arbitrarily small) and the other wasn't (accuracy, i.e. a percentage), so I was looking for solutions with the lowest p-value possible while maintaining a reasonable accuracy. The absolute lowest p-value wasn't necessarily the best solution, but generally speaking it did follow that trend.

In case you're interested, when I was developing an MOEA for my thesis, I had a few assumptions I could make:

a) I had an external validation tool I could use to validate solutions. Specifically, it was for prediction of DNA binding sites, and I was able to check whether I predicted the correct/expected ones, and how close my predictions were to the reference target (giving me p-values/E-values). So to me, the best solution was the Pareto front that gave the best predictions after validation. Maybe you could design a data set where you know what the optimal solution is and see which method comes closest.

b) It was very cheap/fast for me to test all the solutions in a PF. However, it was generally not possible to find a consistent rule for which solution in the PF would be best (the middle one, the one maximizing a particular objective, etc.), though later I found an objective that produced much smaller fronts (1 or 2 solutions) that were pretty optimal and might have been better in practice. At the time, though, I wasn't concerned about PF size and could just test everything every time I had a new PF, and in my case NSGA-IIR was very consistent at identifying the "best" solutions.

Hope this helps, will try to get back sooner this time if you have any questions!

NimishVerma commented 4 years ago

Hi @mbelmadani, thanks for this thread. I am trying to evaluate the performance of MOEA/D on benchmark functions and was wondering if you have developed a problem file similar to knapsack.py for the benchmarks. Also, since you are using the Tchebycheff approach for decomposition in moead.py, I wanted to make sure what you mean by lines 394 and 395. Is it related to the implementations in jMetal and DEAP, or to the nature of the problem (minimize or maximize)? Thanks, and apologies for not opening a separate issue.

mbelmadani commented 4 years ago

Hi @NimishVerma, at the line https://github.com/mbelmadani/moead-py/blob/4fcedb4a4df9bf98634c81a2aac1d42367a9e183/moead.py#L394, I believe I had to flip the comparison because internally DEAP maximizes the problem whereas the jMetal equivalent minimizes. Effectively, if the fitness is greater for the new solution, the population is updated to include the individual.
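For reference, a rough sketch of the Tchebycheff scalarization being compared there (names here are mine, not the ones in moead.py): jMetal minimizes the scalarized value, so with DEAP's internally maximized weighted fitnesses the acceptance comparison ends up flipped.

```python
# Sketch of the Tchebycheff scalarization used by MOEA/D. Under the
# minimizing (jMetal-style) convention, a new solution replaces a
# neighbor's when its scalarized value is SMALLER; with DEAP's
# sign-flipped (maximized) fitness values, the same test uses '>'.
def tchebycheff(fitness, lambdas, z):
    """lambdas: subproblem weight vector; z: ideal/reference point."""
    return max(l * abs(f - zi) for f, l, zi in zip(fitness, lambdas, z))

old = tchebycheff((2.0, 3.0), (0.5, 0.5), (0.0, 0.0))  # 1.5
new = tchebycheff((1.0, 2.0), (0.5, 0.5), (0.0, 0.0))  # 1.0
print(new < old)  # True: accept under the minimizing convention
```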

I don't have any other benchmarks. You could take a look at DEAP and jMetal to see if they have any toy examples you could replicate. I think my master's thesis had an implementation of MOEA/D (https://github.com/mbelmadani/motifgp/blob/master/motifgp/motifgp.py), but I did not benchmark/publish any results (NSGA-IIR by Fortin et al. 2013 was the one I focused on most and got better results with). I haven't worked on this in a while, but feel free to open an issue on the MotifGP repository if you run into any problems.

If you're using this as a benchmark and are interested in 3-objective problems, consider following issue #6: it appears the method used to initialize the default weights might have a bug for small population sizes and is overall not very uniform.

NimishVerma commented 4 years ago

Hi @mbelmadani, thanks for the quick response. I have already implemented bi-objective benchmarks, so that won't be an issue. Out of curiosity, I flipped https://github.com/mbelmadani/moead-py/blob/4fcedb4a4df9bf98634c81a2aac1d42367a9e183/moead.py#L394 again since I am working on minimization problems, but I end up getting an empty HOF. Am I doing it wrong by flipping the operator?

mbelmadani commented 4 years ago

Try leaving the code as is, but set your minimization/maximization in the DEAP creator object:

https://github.com/mbelmadani/moead-py/blob/4fcedb4a4df9bf98634c81a2aac1d42367a9e183/knapsack.py#L93

1.0 is for maximization objectives and -1.0 for minimization. This lets you mix and match direction between objectives.

The order of the weights matches the order of the values returned by the evaluation function. E.g.

https://github.com/mbelmadani/moead-py/blob/4fcedb4a4df9bf98634c81a2aac1d42367a9e183/knapsack.py#L114
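A DEAP-free sketch of that pairing (the evaluation function and data here are made up for illustration):

```python
# The i-th weight applies to the i-th value returned by the evaluation
# function. Items here are hypothetical (weight, value) pairs.
WEIGHTS = (-1.0, 1.0)  # minimize first objective, maximize second

def evaluate(individual):
    total_weight = sum(w for w, v in individual)
    total_value = sum(v for w, v in individual)
    return total_weight, total_value  # order matches WEIGHTS

fit = evaluate([(2.0, 10.0), (3.0, 5.0)])
wvalues = tuple(f * w for f, w in zip(fit, WEIGHTS))
print(wvalues)  # (-5.0, 15.0): internally, DEAP compares these maximized values
```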
