esa / pagmo2

A C++ platform to perform parallel computations of optimisation tasks (global and local) via the asynchronous generalized island model.
https://esa.github.io/pagmo2/
GNU General Public License v3.0

[FEATURE] Allow incorporation of problem specific knowledge #507

Closed luishpmendes closed 2 years ago

luishpmendes commented 2 years ago

It would be really nice if we could improve the algorithms with problem-specific knowledge, e.g., local search procedures. One way I think this could be accomplished is by allowing the fitness method of the Problem class to modify the decision vector.

That is, changing the method signature from: vector_double fitness(const vector_double&) const to: vector_double fitness(vector_double&) const

For example, if I am solving the TSP, it would be nice to apply the 2-opt local search procedure on the solution at hand, and then update the decision vector to reflect the new locally-optimal solution.

darioizzo commented 2 years ago

We have implemented memetic algorithms (lifelong learning) in pagmo (mbh can be seen as one).

Is that solution not viable for you?


luishpmendes commented 2 years ago

That is not viable for me =/

In my PhD thesis, I am adapting a BRKGA to deal with multi-objective problems by incorporating the non-dominated sorting of NSGA-II. However, in the literature it is pretty common to apply local search inside the decoder of the BRKGA and then update the chromosome to reflect the new locally-optimal solution. Right now, I am using the multi-objective algorithms available in PAGMO to assess the performance of my algorithm on the MOTSP and the MOKP. As I did not find a way to apply local search within the PAGMO framework, for now I am also not applying local search in my own algorithm.

As I said, this problem could be solved by changing the signature of the fitness method of the Problem class. Is that viable, or is there a reason for the method to be const?

darioizzo commented 2 years ago

In pagmo, changing the decision vector is delegated solely to the UDA's evolve method. There you can also run local searches.

Therefore, to do what you want in pagmo, you need to run the local search inside the UDA's evolve method, after having produced the new candidate chromosomes.
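The pattern described here can be sketched in isolation: a toy memetic evolve step (names and the one-dimensional problem are hypothetical, not the pagmo API) that perturbs each candidate, refines it with a problem-specific local search, and writes the refined chromosome back into the population (a Lamarckian update):

```cpp
#include <random>
#include <vector>

// Toy fitness to minimise: f(x) = x^2.
double fitness(double x) { return x * x; }

// Problem-specific local search: greedy coordinate steps with a shrinking
// step size. This plays the role of 2-opt in the TSP discussion above.
double local_search(double x) {
    double step = 0.5;
    while (step > 1e-6) {
        if (fitness(x + step) < fitness(x)) x += step;
        else if (fitness(x - step) < fitness(x)) x -= step;
        else step *= 0.5;
    }
    return x;
}

// Schematic memetic evolve(): generate a candidate, refine it with the local
// search, and keep it only if it improves -- the place where, in a pagmo UDA,
// the decision vector may legitimately be rewritten.
std::vector<double> evolve(std::vector<double> pop, unsigned gens,
                           std::mt19937& rng) {
    std::normal_distribution<double> perturb(0.0, 0.1);
    for (unsigned g = 0; g < gens; ++g) {
        for (double& x : pop) {
            const double cand = local_search(x + perturb(rng));
            if (fitness(cand) < fitness(x)) x = cand; // Lamarckian write-back
        }
    }
    return pop;
}
```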