facebookresearch / nevergrad

A Python toolbox for performing gradient-free optimization
https://facebookresearch.github.io/nevergrad/
MIT License

Parallelization of Objective Function #1509

Closed kayuksel closed 1 year ago

kayuksel commented 1 year ago

Hello!

I have an objective function written in PyTorch that calculates the fitness for a whole batch of solutions at once. How can I use Nevergrad so that, for population-based algorithms, I can use this parallel objective?

(Note: I don't want to use multi-threading to calculate the fitness of each solution separately; I want to calculate the fitness in one shot (a single function call) for the whole population batch.)

Thanks, Kamer

bottler commented 1 year ago

You can use the ask/tell interface. The example at https://facebookresearch.github.io/nevergrad/machinelearning.html#ask-and-tell-version may help.
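For reference, a minimal sketch of that ask/tell loop; the quadratic objective and the choice of the NGOpt optimizer are just illustrative, not prescribed by this thread:

```python
import nevergrad as ng

def objective(x) -> float:
    # toy objective for illustration: sum of squares
    return float((x ** 2).sum())

optimizer = ng.optimizers.NGOpt(parametrization=ng.p.Array(shape=(10,)), budget=100)

for _ in range(optimizer.budget):
    candidate = optimizer.ask()         # get a candidate to evaluate
    loss = objective(candidate.value)   # candidate.value is a numpy array here
    optimizer.tell(candidate, loss)     # report the loss back to the optimizer

print(optimizer.provide_recommendation().value)
```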

kayuksel commented 1 year ago

It seems that I need one for-loop to collect a population via 'ask' and another to push the results back via 'tell' after the batch calculation. I will try that, thank you very much. (It would be great if this could be done with single calls rather than for-loops.)
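A sketch of that batched pattern, assuming a hypothetical PyTorch objective batch_fitness that scores a whole (pop_size, dim) tensor in one call; the CMA optimizer and the population size are arbitrary choices for illustration:

```python
import numpy as np
import torch
import nevergrad as ng

def batch_fitness(batch: torch.Tensor) -> torch.Tensor:
    # hypothetical batched objective: one call scores the whole population
    return (batch ** 2).sum(dim=1)

dim, pop_size, budget = 10, 32, 3200
# num_workers tells the optimizer how many candidates may be pending at once
optimizer = ng.optimizers.CMA(
    parametrization=ng.p.Array(shape=(dim,)), budget=budget, num_workers=pop_size
)

for _ in range(budget // pop_size):
    candidates = [optimizer.ask() for _ in range(pop_size)]   # first loop: ask
    batch = torch.as_tensor(np.stack([c.value for c in candidates]), dtype=torch.float32)
    losses = batch_fitness(batch)                             # single batched call
    for cand, loss in zip(candidates, losses):                # second loop: tell
        optimizer.tell(cand, float(loss))

print(optimizer.provide_recommendation().value)
```

Only the ask and tell calls happen in loops; the expensive fitness computation runs once per generation.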

bottler commented 1 year ago

(It would be great if this could be done with single calls rather than for-loops.)

This isn't likely to change. The API is designed for flexibility, both for users and for optimizer implementers, rather than for small speedups. In typical applications the objective function itself is very slow, so the overhead of the loops is negligible.