facebookresearch / nevergrad

A Python toolbox for performing gradient-free optimization
https://facebookresearch.github.io/nevergrad/
MIT License

NGOptRW stuck on high num_workers #1506

Closed · mstniy closed this issue 1 year ago

mstniy commented 1 year ago

Steps to reproduce

The setup is rather brittle: using roughly nine or more workers, NGOptRW, and four keyword arguments breaks nevergrad completely.

Python version: 3.10.6
Nevergrad version: 0.6.0

Observed Results

The optimizer freezes.

Expected Results

For the optimizer to find a solution.

Relevant Code

import nevergrad as ng

def _loss(*args, **kwargs):
    # Dummy objective: print each candidate and return a constant loss.
    print("_loss:", args, kwargs)
    return 0

def main():
    # Three bounded scalar keyword arguments.
    parametrization = ng.p.Instrumentation(
        a=ng.p.Scalar(lower=1, upper=10),
        b=ng.p.Scalar(lower=1, upper=10),
        c=ng.p.Scalar(lower=1, upper=10),
    )

    # NGOptRW with num_workers=9 triggers the freeze described above.
    optimizer = ng.optimizers.NGOptRW(
        parametrization=parametrization, budget=1000, num_workers=9
    )
    optimizer.minimize(
        _loss,
        verbosity=2,  # print every evaluation
    )

if __name__ == "__main__":
    main()
nhansendev commented 1 year ago

I tried this with a few different optimizers for comparison (same Python and nevergrad versions).

Additionally, I found that with three or fewer workers NGOptRW could run the analysis as expected, but with four workers it would suddenly freeze partway through.
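For reference, here is a minimal sketch (not from the original report; the 30-second timeout and budget of 100 are arbitrary choices) of how one might probe the worker-count threshold. Each configuration runs in a subprocess, since a frozen optimizer never returns:

import multiprocessing as mp

import nevergrad as ng

def _run(num_workers: int) -> None:
    # Same parametrization as the reproduction above.
    parametrization = ng.p.Instrumentation(
        a=ng.p.Scalar(lower=1, upper=10),
        b=ng.p.Scalar(lower=1, upper=10),
        c=ng.p.Scalar(lower=1, upper=10),
    )
    optimizer = ng.optimizers.NGOptRW(
        parametrization=parametrization, budget=100, num_workers=num_workers
    )
    optimizer.minimize(lambda *args, **kwargs: 0.0)

if __name__ == "__main__":
    for num_workers in range(1, 10):
        proc = mp.Process(target=_run, args=(num_workers,))
        proc.start()
        proc.join(timeout=30)  # treat anything still running after 30s as frozen
        frozen = proc.is_alive()
        if frozen:
            proc.terminate()
            proc.join()
        print(f"num_workers={num_workers}: {'FROZE' if frozen else 'ok'}")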

EDIT: FYI, I found more bugs related to CMandAS3, which has become its own issue (https://github.com/facebookresearch/nevergrad/issues/1508).

teytaud commented 1 year ago

I've reproduced this issue with Nevergrad 0.6.0. Investigating now and testing with more recent Nevergrad.

teytaud commented 1 year ago

Thank you very much. The issue is understood and fixed in https://github.com/facebookresearch/nevergrad/pull/1555. It will be merged ASAP and included in release 0.13.0.

teytaud commented 1 year ago

This should be solved in the current main branch and in 0.13.0, which is on its way to release on PyPI.
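Once 0.13.0 is on PyPI, a quick sanity check (assuming, as in nevergrad's __init__.py, that the package exposes __version__) is:

import nevergrad as ng

# The fix described above should be present in 0.13.0 and later.
print(ng.__version__)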