facebookresearch / nevergrad

A Python toolbox for performing gradient-free optimization
https://facebookresearch.github.io/nevergrad/
MIT License

Generating candidates that automatically satisfy algebraic constraints #746

Open bacalfa opened 4 years ago

bacalfa commented 4 years ago

Given the current behavior, instead of attempting to satisfy "cheap" constraints a few times, could the candidate be modified so that it satisfies the constraints within a tolerance?

I'm thinking of the following approach:

  1. Generate a candidate
  2. Adjust candidate by solving an optimization model that ensures feasibility (original constraints) and minimizes the deviation from the original candidate
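Step 2 above can be sketched as a projection. For instance, with the 2-norm and linear inequality constraints `A @ x <= b`, it is a small QP. The `project` helper below is purely illustrative (not part of nevergrad):

```python
# Illustrative sketch: project a candidate onto the feasible set
# {x : A @ x <= b} by minimizing the squared L2 deviation from it.
import numpy as np
from scipy.optimize import minimize

def project(x0, A, b):
    """Return the feasible point closest (in the L2 norm) to x0."""
    cons = {"type": "ineq", "fun": lambda x: b - A @ x}  # feasible iff b - A x >= 0
    res = minimize(lambda x: np.sum((x - x0) ** 2), x0, constraints=cons)
    return res.x

# Candidate (1, 1) violates x0 + x1 <= 1; the projection repairs it.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x_feasible = project(np.array([1.0, 1.0]), A, b)
```

Here the projection lands at (0.5, 0.5), the feasible point nearest the violating candidate.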

As an example, suppose we want to minimize an objective function subject to algebraic equality and inequality constraints, where the decision variables are real (and possibly bounded).

The approach is then to solve a projection problem: minimize the norm of the deviation from the original candidate subject to the original constraints. The norm can be any usual one (1, 2, infinity). Depending on the nature of the constraints and the chosen norm, the projection problem is an LP, NLP, MILP, MINLP, etc. The corrected candidate would then be used in the evaluation of the black box and passed to the tell method.
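To make the norm/problem-class connection concrete: with the infinity norm and linear constraints, the projection becomes an LP by introducing an auxiliary variable `t` that bounds every coordinate deviation. This is a sketch; `project_inf` is a made-up helper name:

```python
# Sketch: infinity-norm projection onto {x : A @ x <= b} as an LP.
# Variables z = [x, t]; minimize t subject to |x_i - x0_i| <= t.
import numpy as np
from scipy.optimize import linprog

def project_inf(x0, A, b):
    n = len(x0)
    c = np.zeros(n + 1)
    c[-1] = 1.0                       # objective: minimize t
    I = np.eye(n)
    ones = np.ones((n, 1))
    A_ub = np.vstack([
        np.hstack([I, -ones]),                       #   x - x0 <= t
        np.hstack([-I, -ones]),                      # -(x - x0) <= t
        np.hstack([A, np.zeros((A.shape[0], 1))]),   #   A x <= b
    ])
    b_ub = np.concatenate([x0, -x0, b])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n + [(0, None)])
    return res.x[:n]

x_inf = project_inf(np.array([1.0, 1.0]), np.array([[1.0, 1.0]]), np.array([1.0]))
```

With the 1-norm the same trick (one auxiliary variable per coordinate) also yields an LP, while the 2-norm gives the QP shown earlier.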

Any thoughts from a theoretical standpoint? Even if it's not implemented in the core package. Just some food-for-thought. :)

jrapin commented 4 years ago

Any thoughts from a theoretical standpoint? Even if it's not implemented in the core package. Just some food-for-thought. :)

More from a practical viewpoint:

@teytaud any more thoughts on a theoretical viewpoint?

bacalfa commented 4 years ago

I vote YES for having the ability to implement a custom transformation. :) And as I mentioned, the optimizer must be told that the candidate point has been updated.

jrapin commented 4 years ago

I vote YES for having the ability to implement a custom transformation. :)

OK, I'll give it a thought; maybe I'll have time to get something working within the next week (it shouldn't be hard).

And as I mentioned, the optimizer must be told that the candidate point has been updated.

Can you explain what you mean here? And what would the optimizer make of this information?

bacalfa commented 4 years ago

Thanks!

What I mean is that after asking for a candidate and modifying that candidate, the modified candidate should be used when telling the optimizer the evaluation value of the black box. In other words, the optimizer has to know that the candidate was changed when the constraints were enforced.

jrapin commented 4 years ago

I vote YES for having the ability to implement a custom transformation. :)

I started giving it a thought, but it's not as easy as I expected to make something robust enough :s My main concern is that some constraints may need to be propagated back from the value space to the standardized space where the optimization actually happens, while others do not. I'd like to do something a bit more general about it, but I'm not sure how. Related issue: #759

bacalfa commented 4 years ago

Here's what I was thinking for my specific case in pseudo-code:

num_trials = 0
while True:
    candidate = optimizer.ask()
    x_t, const_eval = transform(candidate.value)  # user-defined repair step
    num_trials += 1
    if all(const_eval) or num_trials > max_trials:
        break
if num_trials > max_trials:
    x_t = candidate.value  # budget exhausted: fall back to the original candidate
f = obj(x_t)
optimizer.tell(optimizer.parametrization.spawn_child(new_value=x_t), f)

The transform function returns the transformed values and an iterable of Booleans denoting constraint satisfaction (like the current lazy-constraint implementation). So if I solve an LP to ensure feasibility, I return the new candidate point along with True for every constraint satisfied within a certain tolerance.
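Concretely, such a transform might look like the sketch below, assuming simple box constraints for illustration (the `transform` name, clipping repair, and tolerance are all made up for the example):

```python
# Sketch: a repair transform that returns the modified point plus
# one Boolean per constraint, as described above.
import numpy as np

TOL = 1e-8

def transform(x, lower, upper):
    """Repair x by clipping to [lower, upper]; report per-constraint flags."""
    x_t = np.clip(x, lower, upper)
    const_eval = [bool(np.all(x_t >= lower - TOL)),   # lower-bound constraints
                  bool(np.all(x_t <= upper + TOL))]   # upper-bound constraints
    return x_t, const_eval

x_t, flags = transform(np.array([1.5, -0.2]), 0.0, 1.0)
```

For general algebraic constraints, the clipping step would be replaced by the LP/QP projection discussed earlier, with each flag reporting whether the corresponding constraint was satisfied within tolerance after the repair.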

jrapin commented 3 years ago

Hey, this is just to say that I did not forget about this issue, but I think I'll end up refactoring the constraint system to make it (hopefully) simpler to use and more transparent. Bold aim :s That will take time though.

In the meantime, I am a bit uneasy with the transform returning both a transformed point and whether the constraints are satisfied; shouldn't it deal with that on its own? That would avoid a lot of coupling in the code.

bacalfa commented 3 years ago

Thanks for the update! :)

And sure, that was just a suggestion to follow the current code as closely as possible. But I think it's fine to require that the constraint evaluation function decide whether it converged, and then return the transformed candidate (if successful) or the original candidate (if not). The main idea is to give the user more flexibility in determining constraint satisfaction, which can be done more efficiently in situations like the one I described.