facebookresearch / nevergrad

A Python toolbox for performing gradient-free optimization
https://facebookresearch.github.io/nevergrad/
MIT License

Best way to handle an asked candidate for which the loss calculation failed user-side #1498

Open a-mazzetto opened 1 year ago

a-mazzetto commented 1 year ago

Hello, I have a question about how best to tell() the optimizer that a candidate's simulation has failed, without necessarily penalizing it with a high loss. In my scenario, between ask() and tell() there is a calculation that can fail for a number of reasons. Some failures deserve a high-loss penalty; others perhaps do not (e.g. the calculation did not converge, but might converge for a very similar candidate; not ideal, but it is what it is). Would it make sense, and is there a way, to tell the optimizer not to expect a tell() for a specific candidate, or at least to make it aware that a candidate failed without implying that similar candidates should be disfavored? Any help on this would be really appreciated, thank you.

teytaud commented 1 year ago

Hello! Some algorithms do not mind much if candidates obtained by ask() are never followed by a tell(). For example, I think DiscreteOnePlusOne will not care much, and neither will many related algorithms. Do you have example code? If you have code that fails because of this, maybe we can just replace the optimizer with a more "robust" one.