stevengj / nlopt

library for nonlinear optimization, wrapping many algorithms for global and local, constrained or unconstrained, optimization

Parallel Optimization #173

Open jbehura opened 6 years ago

jbehura commented 6 years ago

Hi, I could not locate any reference to parallel computation of objective functions in the documentation. Are all the optimization codes serial? Thanks, Jyoti

jschueller commented 6 years ago

Funny, I had this discussion recently. The callback that interfaces the objective function operates on one point at a time for now. A new callback that operates on several points at once could perhaps take advantage of some algorithms, maybe the evolutionary ones, but I've not looked into it. Can you think of one?

jbehura commented 6 years ago

Most of the global optimization algorithms should be good candidates, and yes, evolutionary algorithms especially. I believe there are two ways of doing it:

  1. Parallelize the computation of the objective function for a single point; that approach is algorithm-independent (see the sketch after this list).
  2. Parallelize the computation of the objective function for multiple points at once; that approach is algorithm-dependent.
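
A minimal sketch of option 1, assuming a Python objective that splits into independent chunks of work (the chunk layout, cost function, and algorithm choice are made up for illustration): the parallelism lives entirely inside the usual one-point NLopt callback, so it works with any algorithm.

```python
import numpy as np
import nlopt
from concurrent.futures import ProcessPoolExecutor

N_CHUNKS = 8  # hypothetical: the objective splits into 8 independent pieces

def chunk_cost(args):
    # Stand-in for one expensive, independent piece of the objective at point x.
    x, chunk_id = args
    return float(np.sum((x - chunk_id) ** 2))

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=N_CHUNKS) as pool:

        def objective(x, grad):
            # NLopt still sees an ordinary one-point callback; the parallelism
            # happens inside it, so this works with any algorithm.
            parts = pool.map(chunk_cost, [(np.asarray(x), i) for i in range(N_CHUNKS)])
            return float(sum(parts))

        opt = nlopt.opt(nlopt.LN_COBYLA, 3)
        opt.set_min_objective(objective)
        opt.set_xtol_rel(1e-8)
        x_opt = opt.optimize(np.zeros(3))
        print(x_opt, opt.last_optimum_value())
```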
jschueller commented 6 years ago
  1. This would not be the responsibility of NLopt; it can already be done on the user side anyway.
  2. The way I see it, NLopt cannot be responsible for the parallelization itself, but it could provide input points in bulk; it would then be up to the user's function to parallelize the evaluation of the multiple points and return the results to NLopt.
jbehura commented 6 years ago
  1. Agreed. In the first case, the user parallelizes each function evaluation.
  2. In the second case, however, wherever multiple independent function calls are possible, changes to NLopt would significantly improve the efficiency of those algorithms.
jschueller commented 6 years ago
  1. I'm not saying that parallelization is not possible. I'm just saying NLopt cannot assume the objective function is thread-safe, which is why I think this should be delegated to a user callback on multiple points (sketched below).
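
To make that proposal concrete, here is a purely hypothetical sketch of what a callback on multiple points could look like from the user's side. Nothing like `set_min_objective_batch` exists in NLopt; the single-point cost is a placeholder.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def single_point_cost(x):
    # Stand-in for the user's expensive model evaluation at one point.
    return float(np.sum(x ** 2))

def batch_objective(X):
    # X would be an (m, n) array of candidate points proposed by a
    # population-based algorithm. The *user* decides how to parallelize;
    # NLopt never has to assume the cost function is thread-safe.
    with ProcessPoolExecutor() as pool:
        return np.fromiter(pool.map(single_point_cost, X), dtype=float, count=len(X))

if __name__ == "__main__":
    X = np.random.uniform(-1.0, 1.0, size=(20, 5))  # a batch of 20 points
    print(batch_objective(X))

# Hypothetical usage if such an API existed (it does not):
# opt = nlopt.opt(nlopt.GN_CRS2_LM, n)
# opt.set_min_objective_batch(batch_objective)  # <-- not a real NLopt method
```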
scottfharvey-cm commented 4 years ago

Hi! I was wondering if there is any update on this topic. I have the ability to calculate the objective function for my application in batches of 20 (using CUDA), but unfortunately have not figured out how to leverage this parallelism for optimization. I'm currently using an "LN" method, but would be okay using a "GN" method if it leveraged the 20x speedup. The ideal interface would be: 20 points to evaluate, then the next 20 points based on the results of the first 20, and so on. Any help would be greatly appreciated, or a pointer to another library to try!

jschueller commented 4 years ago

@scottfharvey-cm I don't think anyone is working on it. Otherwise, I think pagmo does parallel optimization.
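
For reference, a minimal pygmo (pagmo's Python bindings) sketch of its coarse-grained parallelism: an archipelago evolves several populations concurrently, one island per worker. Pagmo also has a batch fitness evaluation ("bfe") mechanism that is closer to the "evaluate 20 points at a time" use case above. The problem and algorithm choices here are placeholders.

```python
import pygmo as pg

# Placeholder problem and algorithm; substitute your own pg.problem wrapper.
prob = pg.problem(pg.rosenbrock(dim=10))
algo = pg.algorithm(pg.de(gen=500))  # differential evolution

# 8 islands, each evolving a 20-individual population in parallel.
archi = pg.archipelago(n=8, algo=algo, prob=prob, pop_size=20)
archi.evolve()
archi.wait()

best = min(f[0] for f in archi.get_champions_f())
print("best objective across islands:", best)
```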

ismetdagli commented 4 years ago

There is still no update, right? @jschueller

jschueller commented 4 years ago

nope

jschueller commented 4 years ago

try pagmo

gitouni commented 1 year ago
> 1. This would not be the responsibility of NLopt; it can already be done on the user side anyway.
> 2. The way I see it, NLopt cannot be responsible for the parallelization itself, but it could provide input points in bulk; it would then be up to the user's function to parallelize the evaluation of the multiple points and return the results to NLopt.

At least, NLopt should provide an API that accepts multiple x inputs and returns multiple objective values. This case is algorithm-dependent, and it is also something users could parallelize themselves. Without such an API, I cannot see in which case users could parallelize the computation of the objective function, since the evaluation loop is wrapped inside NLopt itself.
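
For what it's worth, one kind of parallelism that does work with the current one-point-at-a-time API is coarse-grained multi-start: run several independent local optimizations concurrently and keep the best result. A minimal sketch, assuming a picklable Python objective (the Rosenbrock function and algorithm choice are placeholders):

```python
import numpy as np
import nlopt
from multiprocessing import Pool

def objective(x, grad):
    # Rosenbrock; grad is ignored because LN_NELDERMEAD is derivative-free.
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2))

def local_solve(x0):
    # Each worker builds its own optimizer object, so nothing is shared.
    opt = nlopt.opt(nlopt.LN_NELDERMEAD, len(x0))
    opt.set_min_objective(objective)
    opt.set_lower_bounds([-5.0] * len(x0))
    opt.set_upper_bounds([5.0] * len(x0))
    opt.set_xtol_rel(1e-8)
    x = opt.optimize(x0)
    return opt.last_optimum_value(), x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    starts = [rng.uniform(-5.0, 5.0, size=4) for _ in range(8)]
    with Pool(processes=8) as pool:
        results = pool.map(local_solve, starts)
    best_f, best_x = min(results, key=lambda r: r[0])
    print("best of 8 starts:", best_f, best_x)
```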