ChristopherMayes opened 5 months ago
@nikitakuklev it looks like this assertion comes from you.
MWE:
from xopt import Xopt
YAML = """
generator:
  name: neldermead
  initial_point: {x0: -1, x1: -1}
evaluator:
  function: xopt.resources.test_functions.rosenbrock.evaluate_rosenbrock
vocs:
  variables:
    x0: [-5, 5]
    x1: [-5, 5]
  objectives: {y: MINIMIZE}
"""
X = Xopt(YAML)
X.random_evaluate()
Thanks, I'll try to take a look this week. My instinct is that this is the correct behavior - it makes no sense to do a random evaluation on simplex, since it must maintain state from one step to the next. You can't add/remove/modify its data once it has started. Maybe a better error message is needed.
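To illustrate the point about statefulness, here is a toy sketch (not Xopt's actual classes or error message) of why a step-based generator whose next candidate depends on its own previous points has to refuse data injected from outside:

```python
# Toy illustration only - Xopt's real generator/evaluator classes differ.
class StatefulSimplexGenerator:
    """Minimal stand-in for a stateful, step-based optimizer."""

    def __init__(self):
        self._started = False

    def generate(self):
        # Each call advances an internal state machine (the simplex),
        # so the generator is no longer safe to modify from outside.
        self._started = True
        return {"x0": 0.0, "x1": 0.0}  # placeholder candidate

    def add_data(self, data):
        if self._started:
            # Data added mid-run would desynchronize the simplex, so the
            # safest behavior is to refuse it with a clear message.
            raise ValueError(
                "Cannot add external data to a running simplex; "
                "its next step depends on its own previous points."
            )


gen = StatefulSimplexGenerator()
gen.generate()
try:
    gen.add_data({"x0": 1.0})
except ValueError as err:
    print(err)
```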
This is also breaking the current workflow from Badger at the moment. It would be great if this generator could treat the last datapoint in X.data as the starting point and then run from there (locking the dataframe in the process?)
The problem is that it is not just the 'last point' but the 'last simplex' + 'stage' (contraction, etc.). The implementation before the current one did exactly this - it took the last ndim+1 points and restarted with those as the initial simplex. Because of that it was not reproducible on reload, whereas the current implementation is (see here). This is a fundamental difference from BO methods, which makes simplex a pain to handle with a BO-like interface.
Maybe some override flags for old behavior are in order. I'll prototype a bit.
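The old "restart from the last ndim+1 points" behavior can be sketched with SciPy's Nelder-Mead, which accepts an explicit initial_simplex option. The history array here is a placeholder standing in for the last rows of X.data, not Xopt's actual internals:

```python
import numpy as np
from scipy.optimize import minimize


def rosenbrock(x):
    # Classic 2-D Rosenbrock, standing in for evaluate_rosenbrock.
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2


# Pretend these are the last ndim+1 = 3 evaluated points from the history.
history = np.array([[-1.0, -1.0], [-0.9, -1.0], [-1.0, -0.9]])

# Restart the optimizer using those points as the initial simplex.
result = minimize(
    rosenbrock,
    x0=history[0],  # x0 is still required, but initial_simplex overrides it
    method="Nelder-Mead",
    options={"initial_simplex": history, "maxiter": 1000},
)
print(result.x)  # should approach the optimum at [1, 1]
```

Note that this discards the 'stage' (contraction, etc.) of the interrupted run, which is exactly the reproducibility gap described above: the restarted simplex retraces a different path than the original uninterrupted one.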
Thanks for looking into this, Nikita. I think using multiple points may be helpful, but they then cause the reproducibility issue you mentioned. If we just started simplex from a single point, that would be fine as well, provided it fixes the reproducibility issues.
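Starting from a single point is easy to make reproducible, because the initial simplex can be constructed deterministically from x0 alone. As a sketch, SciPy builds its default simplex by perturbing each coordinate of the start point by a fixed fraction (the constants below mirror SciPy's defaults; the function name is illustrative):

```python
import numpy as np


def initial_simplex_from_point(x0, nonzdelt=0.05, zdelt=0.00025):
    """Build a deterministic (ndim+1)-vertex simplex around one start point.

    Mirrors the construction SciPy's Nelder-Mead uses when no
    initial_simplex is supplied: each extra vertex perturbs one
    coordinate of x0 by 5% (or a tiny absolute step if it is zero).
    """
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    sim = np.tile(x0, (n + 1, 1))
    for k in range(n):
        if sim[k + 1, k] != 0.0:
            sim[k + 1, k] *= 1.0 + nonzdelt
        else:
            sim[k + 1, k] = zdelt
    return sim


print(initial_simplex_from_point([-1.0, -1.0]))
# Three vertices: the start point plus one 5% perturbation per axis.
```

Because this depends only on x0, reloading and restarting from the same single point always reproduces the same simplex.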
Update: I didn't have time to implement a full solution yet; ETA is over the weekend.
In scipy/neldermead.ipynb: