Open · bacalfa opened this issue 4 years ago
Given the current situation: instead of attempting to satisfy "cheap" constraints a few times, could the candidate itself be modified so that it satisfies the constraints within a tolerance? I'm thinking of the following approach.

As an example, suppose we want to solve

```
minimize f(x)   subject to   g(x) <= 0,  h(x) = 0
```

where x and the functions involved are real (and possibly bounded). The approach is then to take the candidate x_k returned by `ask` and solve

```
minimize ||x - x_k||   subject to   g(x) <= 0,  h(x) = 0
```

The norm can be any usual one (1, 2, infinity). Depending on the nature of the constraints and the chosen norm, the problem is an LP, NLP, MILP, MINLP, etc. The corrected candidate would then be used in the evaluation of the black-box and passed to the `tell` method.

Any thoughts from a theoretical standpoint? Even if it's not implemented in the core package, this is just some food for thought. :)
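As a rough illustration of the projection step (a minimal sketch only, assuming a 2-norm and scipy's SLSQP solver; the function name and the unit-disk constraint are made up for the example):

```python
import numpy as np
from scipy.optimize import minimize

def project_to_feasible(x_cand, ineq_constraints, bounds=None):
    """Return the closest point to x_cand (2-norm) satisfying g_i(x) <= 0 for all i."""
    # SLSQP expects inequality constraints of the form c(x) >= 0, hence the sign flip.
    cons = [{"type": "ineq", "fun": lambda x, g=g: -g(x)} for g in ineq_constraints]
    res = minimize(lambda x: np.sum((x - x_cand) ** 2), x0=np.asarray(x_cand, dtype=float),
                   bounds=bounds, constraints=cons, method="SLSQP")
    return res.x, res.success

# Example: project a candidate onto the unit disk g(x) = ||x||^2 - 1 <= 0.
x_proj, ok = project_to_feasible([2.0, 0.5], [lambda x: float(np.sum(x ** 2)) - 1.0])
```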
More on a practical viewpoint: this is basically the `Transform` we already have when defining bounds, but with a more complex projection. There's no entry point to add other transforms though, but adding one should be simple enough if there is any need (and as far as I am concerned, I think this can be useful).

My concern would be for DE and PSO mostly: if several individuals/particles end up being projected to the same point, we lose diversity in the population and the optimizer gets stuck. I've seen this happen in (probably extreme) cases, which led me to use a different strategy in #684 to avoid this behavior. But this is probably not needed in most cases.

@teytaud any more thoughts on a theoretical viewpoint?
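As a toy illustration of that diversity concern (plain numpy, nothing nevergrad-specific): clipping is the simplest projection onto a box, and several distinct out-of-bounds candidates can collapse onto the very same boundary point.

```python
import numpy as np

rng = np.random.default_rng(0)
population = rng.normal(loc=5.0, scale=0.5, size=(5, 2))  # a small population far outside the box [-1, 1]^2
projected = np.clip(population, -1.0, 1.0)                # project every individual onto the box

# All five individuals end up on the single corner [1., 1.]: diversity is gone.
print(np.unique(projected, axis=0))
```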
I vote YES for having the ability to implement a custom transformation. :) And as I mentioned, the optimizer must be told that the candidate point has been updated.
> I vote YES for having the ability to implement a custom transformation. :)

Ok, I'll give it a thought; maybe I'll have time to get something working within the next week (shouldn't be hard).

> And as I mentioned, the optimizer must be told that the candidate point has been updated.

Can you explain what you mean here? And what would the optimizer make of this information?
Thanks!
And what I mean is that after `ask`ing for a candidate and modifying that candidate, the new candidate should be used when `tell`ing the optimizer the evaluation value of the black-box. In other words, in my mind, the optimizer has to know that the candidate has been changed when enforcing constraints.
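A rough sketch of that flow with nevergrad's ask/tell interface (the `repair` projection and the toy objective are placeholders, and `spawn_child(new_value=...)` is just one way to build a candidate carrying the repaired value; exact API details may differ between versions):

```python
import numpy as np
import nevergrad as ng

def objective(x):                 # placeholder black-box
    return float(np.sum(x ** 2))

def repair(x):                    # placeholder projection onto the feasible set (here: a box)
    return np.clip(x, -1.0, 1.0)

optimizer = ng.optimizers.OnePlusOne(parametrization=ng.p.Array(shape=(2,)), budget=50)
for _ in range(optimizer.budget):
    candidate = optimizer.ask()
    repaired = candidate.spawn_child(new_value=repair(candidate.value))
    # Tell the optimizer about the repaired candidate, not the raw asked one.
    optimizer.tell(repaired, objective(repaired.value))
recommendation = optimizer.provide_recommendation()
```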
> I vote YES for having the ability to implement a custom transformation. :)
I started giving it a thought, but it's not as easy as I expected to make something robust enough :s My main concern is that some constraints may need to be propagated back from the value space to the standardized space where optimization actually happens, and others do not. I'd like to do something a bit general about it, but I'm not sure how. Related issue: #759.
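To make the two spaces concrete, a small sketch using the parametrization API (a minimal illustration only; exact method names may vary between versions):

```python
import nevergrad as ng

param = ng.p.Array(shape=(2,)).set_bounds(-5, 5)
candidate = param.spawn_child()

data = candidate.get_standardized_data(reference=param)   # standardized space, where the optimizer works
value = candidate.value                                    # value space, where user constraints are expressed

# A repair done in value space (e.g. clipping) only becomes visible to the optimizer
# once it is propagated back into standardized data, for instance through a new child:
repaired = param.spawn_child(new_value=value.clip(-1.0, 1.0))
repaired_data = repaired.get_standardized_data(reference=param)
```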
Here's what I was thinking for my specific case in pseudo-code:
```python
num_trials = 0
while True:
    x = ask()                          # draw a new candidate from the optimizer
    x_t, const_eval = transform(x)     # user transform: projected point + per-constraint flags
    num_trials += 1
    if all(const_eval) or num_trials > max_trials:
        break
if num_trials > max_trials:
    x_t = x                            # give up and fall back to the last raw candidate
f = obj(x_t)
tell(x_t, f)                           # tell the optimizer about the (possibly transformed) point
```
The `transform` function will return the transformed-space values and an iterable of Booleans denoting constraint satisfaction (like the current lazy constraint implementation). So if I want to solve an LP to ensure feasibility, I'll have to return the new candidate point and `True` for every constraint satisfied within a certain tolerance.
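For the linear-constraint case mentioned above, a hedged sketch of what such a `transform` could look like (an infinity-norm projection written as an LP via scipy's `linprog`; the tolerance and argument names are made up for the example):

```python
import numpy as np
from scipy.optimize import linprog

def transform(x_k, A, b, tol=1e-8):
    """Project x_k onto {x : A @ x <= b} in the infinity norm; return the point and per-constraint flags."""
    x_k = np.asarray(x_k, dtype=float)
    n = x_k.size
    # Variables z = [x (n entries), t (1 entry)]: minimize t subject to |x_i - x_k_i| <= t and A x <= b.
    c = np.r_[np.zeros(n), 1.0]
    A_ub = np.block([[ np.eye(n), -np.ones((n, 1))],
                     [-np.eye(n), -np.ones((n, 1))],
                     [ A,          np.zeros((A.shape[0], 1))]])
    b_ub = np.r_[x_k, -x_k, b]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + 1))
    x_t = res.x[:n] if res.success else x_k
    const_eval = [bool(v) for v in (A @ x_t <= b + tol)]   # constraint satisfaction within tolerance
    return x_t, const_eval
```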
Hey, this is just to say that I did not forget about the issue, but I think I'll end up refactoring the constraint system to make it (hopefully) simpler to use and more transparent. Bold aim :s That will take time though.

In the meantime, I am a bit uneasy with the transform returning both a transformed point and whether the constraint is satisfied; shouldn't it deal with that on its own? That would avoid a lot of coupling in the code.
Thanks for the update! :)
And sure, that was just a suggestion to try to follow the current code as closely as possible. But I think it's fine to require that the constraint evaluation function decides whether it converged or not, and then returns the transformed candidate (if it's successful) or the original candidate (if it's unsuccessful), something like that. The main idea is to give the user more flexibility to determine constraint satisfaction, which can be done more efficiently in situations like the one I described.
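In other words, something with roughly this shape (purely illustrative; the box clip stands in for a real LP/NLP projection, and the `max_shift` acceptance rule is just one possible internal success criterion):

```python
import numpy as np

def enforce_constraints(x, lower=-1.0, upper=1.0, max_shift=0.5):
    """Repair x and decide internally whether the repair is acceptable; always return a single point."""
    x = np.asarray(x, dtype=float)
    x_repaired = np.clip(x, lower, upper)                    # stand-in for an LP/NLP projection
    accepted = bool(np.max(np.abs(x_repaired - x)) <= max_shift)
    return x_repaired if accepted else x                     # transformed candidate if successful, else the original
```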