facebookresearch / nevergrad

A Python toolbox for performing gradient-free optimization
https://facebookresearch.github.io/nevergrad/
MIT License

Memory & speed #797

Open · teytaud opened this issue 4 years ago

teytaud commented 4 years ago

Presumably Nevergrad performs very well on computationally expensive objective functions, because it is good at choosing an informative next iterate. On the other hand, it is sometimes slow at choosing this next iterate, which is an issue for computationally cheap objective functions.

Presumably much of that time is spent archiving past iterates.
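For concreteness, here is a minimal sketch (not from the issue) that times the ask/tell loop on a nearly free objective to see how much of the wall time is optimizer overhead; the optimizer, dimension and budget are arbitrary choices:

```python
import time
import nevergrad as ng

def cheap_objective(x):
    return float(sum(xi ** 2 for xi in x))  # essentially free to evaluate

opt = ng.optimizers.OnePlusOne(parametrization=ng.p.Array(shape=(10,)), budget=5000)

start = time.perf_counter()
objective_time = 0.0
for _ in range(opt.budget):
    candidate = opt.ask()           # time spent here is pure optimizer overhead
    t0 = time.perf_counter()
    loss = cheap_objective(candidate.value)
    objective_time += time.perf_counter() - t0
    opt.tell(candidate, loss)       # archiving/bookkeeping happens here
total = time.perf_counter() - start

print(f"total: {total:.2f}s  objective: {objective_time:.2f}s  "
      f"optimizer overhead: {total - objective_time:.2f}s")
```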

Ideas:

jrapin commented 4 years ago

The archive is basically only useful in very noisy contexts, as far as I understand it. We should probably drop the archive system entirely in most settings (i.e., remove it from the base class).

teytaud commented 4 years ago

There are noise-free cases in which it's useful: highly multimodal problems, non-stationary problems, or cases with expensive objective functions and surrogate models. So I'd go for a simple flag in the optimization algorithm for disabling it, or reducing it to something very small.
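A sketch of what such a switch could look like (illustrative only; the class and attribute names below are invented for the example and are not nevergrad's API):

```python
class ArchivingMixin:
    """Illustrative sketch of a per-optimizer archiving switch; not nevergrad code."""

    keep_archive: bool = True  # noisy / surrogate-based optimizers would enable it

    def __init__(self) -> None:
        self.archive: dict = {}

    def _register(self, point: tuple, loss: float) -> None:
        # Called from tell(): when the bit is off, skip all archiving work so that
        # cheap objective functions do not pay for bookkeeping they never use.
        if self.keep_archive:
            self.archive[point] = loss
```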

jrapin commented 4 years ago

> There are noise-free cases in which it's useful: highly multimodal problems, non-stationary problems, or cases with expensive objective functions and surrogate models.

I was referring to implementations here. What I mean is that, apart from the noisy optimizers, I don't think any implementation in nevergrad actually uses it.

> So I'd go for a simple flag in the optimization algorithm for disabling it, or reducing it to something very small.

It was reduced about 6 months ago btw; for 1 worker, it now keeps at most 1000 points, for instance (despite what the docstring says, it is no longer very conservative): https://github.com/facebookresearch/nevergrad/blob/f34626bb2c10851a659740f009fb78da1defd85e/nevergrad/optimization/utils.py#L263-L266
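For readers without the link at hand, the pruning described above amounts to something like the following (an illustrative reimplementation of the idea, not the actual `Pruning` helper; the real thresholds depend on the number of workers and the dimension):

```python
def prune_archive(archive: dict, max_len: int = 1000) -> dict:
    """Once the archive grows past max_len entries, keep only the max_len
    points with the lowest recorded loss (illustrative sketch only)."""
    if len(archive) <= max_len:
        return archive
    best = sorted(archive.items(), key=lambda item: item[1])[:max_len]
    return dict(best)
```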