SimonBlanke / Gradient-Free-Optimizers

Simple and reliable optimization with local, global, population-based and sequential techniques in numerical discrete search spaces.
https://simonblanke.github.io/gradient-free-optimizers-documentation
MIT License

Set invalid proposal in opt function #25

Closed: 23pointsNorth closed this issue 2 years ago

23pointsNorth commented 2 years ago

In some objective functions, e.g. f = 1/(x+y), variable interactions can make the function undefined at certain points, e.g. x = -y in the example above. These cases are easy to detect at runtime.

It would be beneficial to mark the current variable setting as undefined and skip its evaluation without influencing the optimization. Right now we can detect those cases and return +/- np.inf, but this creates an artificial value which, e.g. in BO, should impact further optimization (correct me if I am wrong). The alternative of pre-computing those cases and filtering them out of the search space beforehand may not always be tractable with extra-large search spaces.

One option would be to let the objective function return a value like None that carries no underlying meaning.

Example Code:

from gradient_free_optimizers import RandomSearchOptimizer

def f_function(para):
    # Ideally the undefined region could be skipped, e.g.:
    # if (para["x"] + para["y"]) == 0:
    #     return None

    # Raises ZeroDivisionError whenever a sample with x + y == 0 is evaluated.
    loss = 1 / (para["x"] + para["y"])
    return -loss

search_space = {"x": range(-1, 1), "y": range(-1, 1)}

opt = RandomSearchOptimizer(search_space)
opt.search(f_function, n_iter=10)
SimonBlanke commented 2 years ago

Hello @23pointsNorth,

If you want to return a value that shows that the parameters result in an invalid score, you can return np.nan. GFO has been able to handle np.nan (and np.inf) since version 0.2.11.

Does this solve your problem?

but this creates an artificial value, which in e.g. BO should impact further optimization

Bayesian optimization (and all other SMBO algorithms) filters out np.inf and np.nan via the score-tracking decorator: https://github.com/SimonBlanke/Gradient-Free-Optimizers/blob/e7a95503195ea7c0b3590241caf499ea307085f2/gradient_free_optimizers/optimizers/smb_opt/smbo.py#L73-L82
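
For illustration, filtering invalid scores before fitting a surrogate model can look roughly like the following sketch (this is not the library's actual code; see the link above for that, and the function and variable names here are made up):

import numpy as np

def filter_invalid(positions, scores):
    # Keep only entries with a finite score; np.nan and +/- np.inf are
    # dropped, so the surrogate model never sees artificial values.
    scores = np.asarray(scores, dtype=float)
    mask = np.isfinite(scores)
    return np.asarray(positions)[mask], scores[mask]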

23pointsNorth commented 2 years ago

if you want to return a value that shows, that the parameters result in an invalid score you could return np.nan.

That's great, I guess that's the one thing I hadn't checked.

From my small test, np.nan seems to be handled (i.e. ignored) in all of the optimizers, similar to the code you linked in smbo?

SimonBlanke commented 2 years ago

Hello @23pointsNorth,

The code I linked only applies to SMBO, but the other optimization algorithms have similar ways of ignoring np.nan and np.inf where needed.
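
A quick way to try this across a few optimizer classes is a sketch like the one below. It assumes the np.nan handling described above (GFO >= 0.2.11); RandomSearchOptimizer, HillClimbingOptimizer and ParticleSwarmOptimizer are all part of the library:

import numpy as np
from gradient_free_optimizers import (
    RandomSearchOptimizer,
    HillClimbingOptimizer,
    ParticleSwarmOptimizer,
)

def f_function(para):
    # Return np.nan in the undefined region x + y == 0 instead of an
    # artificial +/- np.inf value.
    if (para["x"] + para["y"]) == 0:
        return np.nan
    return -1 / (para["x"] + para["y"])

search_space = {"x": np.arange(-5, 5, 1), "y": np.arange(-5, 5, 1)}

# Each optimizer should run through all iterations without crashing on
# the np.nan scores.
for Optimizer in (RandomSearchOptimizer, HillClimbingOptimizer, ParticleSwarmOptimizer):
    opt = Optimizer(search_space)
    opt.search(f_function, n_iter=20)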