jMetal / jMetalPy

A framework for single/multi-objective optimization with metaheuristics
https://jmetal.github.io/jMetalPy/index.html
MIT License

Reference front for quality indicators for real world problems #124

Closed (mishras9 closed this issue 2 years ago)

mishras9 commented 2 years ago

As per the discussion in https://github.com/jMetal/jMetal/issues/171#issuecomment-260675888.

As suggested by @ajnebro: most quality indicators require the Pareto front to be known, but this front is rarely available when dealing with real-world problems. The most commonly used strategy is to build a reference Pareto front, composed of the non-dominated solutions resulting from merging all the solutions obtained by all the algorithms in all their runs.

Can this be achieved in jMetalPy while we are running several algorithms in an experiment?
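The merging strategy described above can be sketched in plain Python. This is a minimal illustration, not jMetalPy API: `dominates` and `reference_front` are hypothetical helper names, and objective vectors are represented as tuples (minimization assumed).

```python
# Minimal sketch of building a reference front by merging the fronts of
# several runs and discarding dominated points. Helper names are
# illustrative, not part of jMetalPy's public API.

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def reference_front(fronts):
    """Merge the fronts of all runs and keep only the non-dominated points."""
    merged = [point for front in fronts for point in front]
    return [p for p in merged if not any(dominates(q, p) for q in merged if q != p)]

# Example: final fronts of two runs on a bi-objective problem.
run_a = [(1.0, 4.0), (2.0, 2.0)]
run_b = [(3.0, 3.0), (4.0, 1.0)]
# (3.0, 3.0) is dominated by (2.0, 2.0), so it is excluded.
print(reference_front([run_a, run_b]))
```

For large archives a naive pairwise check like this is O(n²); in practice one would use an efficient non-dominated sorting routine.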

ajnebro commented 2 years ago

If you mean whether the reference front is dynamically calculated during algorithm execution, this is not provided by jMetalPy (nor by jMetal).

mishras9 commented 2 years ago

@ajnebro So, how can we collect the Pareto front approximation from each run of an algorithm to create a reference front? Basically, can we store the solutions, objective values, and execution time of each run?
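Recording what each run produces can be done outside jMetalPy with a small wrapper. A minimal sketch, assuming the user's algorithm exposes a callable that returns the final front as a list of objective tuples; `RunRecord` and `run_and_record` are hypothetical names, not library API.

```python
# Sketch: capture the front, objective values, and wall-clock time of each
# run so a reference front can be built afterwards. Names are illustrative.
import time
from dataclasses import dataclass

@dataclass
class RunRecord:
    algorithm: str
    front: list            # objective vectors of the final front
    elapsed_seconds: float

def run_and_record(name, solve):
    """Execute one run of an algorithm and keep what is needed later."""
    start = time.perf_counter()
    front = solve()                       # assumed to return objective tuples
    return RunRecord(name, front, time.perf_counter() - start)

# Example with a stub "algorithm" standing in for a real run.
records = [run_and_record("NSGA-II", lambda: [(1.0, 2.0), (2.0, 1.0)])]
```

The decision variables of each solution could be stored the same way, alongside the objective vectors.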

ajnebro commented 2 years ago

I suggest doing the experimentation in two steps: first, execute all the algorithms and collect their front approximations, so you can generate the reference Pareto fronts from all the obtained fronts; second, use those reference fronts to compute the quality indicators and generate the tables with statistical information.
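The two-step procedure above can be sketched end to end. This is a hedged illustration with made-up helper names: step one gathers the fronts of all runs, step two merges them into a reference front and scores each algorithm with a simple generational-distance-style measure (average distance to the nearest reference point), standing in for the indicators jMetalPy provides.

```python
# Sketch of the two-step experimentation: (1) gather fronts from all runs,
# (2) build the reference front and score each algorithm against it.
import math

def non_dominated(points):
    """Filter out Pareto-dominated objective vectors (minimization)."""
    def dom(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [p for p in points if not any(dom(q, p) for q in points if q != p)]

def generational_distance(front, reference):
    """Average distance from each point of `front` to its nearest reference point."""
    return sum(min(math.dist(p, r) for r in reference) for p in front) / len(front)

# Step 1: fronts gathered from all runs of all algorithms (stub data).
all_fronts = {"algA": [(1.0, 3.0), (3.0, 1.0)], "algB": [(2.0, 2.0), (4.0, 4.0)]}

# Step 2: merge into a reference front, then compute the indicator per algorithm.
reference = non_dominated([p for f in all_fronts.values() for p in f])
scores = {name: generational_distance(f, reference) for name, f in all_fronts.items()}
```

In a real experiment, the per-algorithm scores over many independent runs would feed the statistical tables mentioned above.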

ajnebro commented 2 years ago

In jMetal we use the GenerateReferenceParetoSetAndFrontFromDoubleSolutions class, but it is not included in jMetalPy.
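Since that class is not available in jMetalPy, a rough file-based equivalent can be sketched: read the objective-value (FUN) files produced by each run, merge the vectors, drop the dominated ones, and write the result as a reference front file. The file names and layout here are assumptions for illustration, not jMetal or jMetalPy conventions.

```python
# Sketch: build a reference front file from per-run FUN files. Assumes each
# FUN file holds one whitespace-separated objective vector per line.
import pathlib
import tempfile

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def build_reference_front(fun_files, out_file):
    points = []
    for path in fun_files:
        for line in pathlib.Path(path).read_text().splitlines():
            if line.strip():
                points.append(tuple(float(v) for v in line.split()))
    front = [p for p in points if not any(dominates(q, p) for q in points if q != p)]
    pathlib.Path(out_file).write_text(
        "\n".join(" ".join(str(v) for v in p) for p in front) + "\n")
    return front

# Example with two temporary FUN files standing in for real run output.
tmp = pathlib.Path(tempfile.mkdtemp())
(tmp / "FUN.0").write_text("1.0 4.0\n2.0 2.0\n")
(tmp / "FUN.1").write_text("3.0 3.0\n4.0 1.0\n")
front = build_reference_front([tmp / "FUN.0", tmp / "FUN.1"], tmp / "REFERENCE.csv")
```

The same idea extends to the Pareto set by applying the selected indices to the matching VAR files.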