akamaus opened this issue 6 years ago
Hi, good point. You can already do that by instantiating the robo.solver.BayesianOptimization class and calling its choose_next() function. For example:
bo = robo.solver.BayesianOptimization(model, acquisition, lower, upper, objective, ...)
X = None
Y = None
for k in range(num_iters):
    x_next = bo.choose_next(X, Y)   # ask the acquisition function for the next candidate
    # evaluate x_next and update X, Y
Right now I call bo.run(n_init+1, bo.X, bo.y)
and analyse the state in between. Still, it takes quite a lot of lines to set up (I basically had to replicate the guts of fmin.bayesian_optimisation).
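Concretely, my current workaround looks roughly like this; num_rounds and inspect_state are just placeholders for my loop count and analysis code, and I am assuming run() accepts the previously evaluated points so the solver can be resumed:

for r in range(num_rounds):
    bo.run(n_init + 1, bo.X, bo.y)   # resume the solver from the points evaluated so far
    inspect_state(bo)                # placeholder: examine the model, incumbent, acquisition values, ...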
Unfortunately, yes. We are currently working on a new package that is more flexible than RoBO but provides the same functionality. We are planning to release it in the next few weeks.
@aaronkl Hello! What is the package name, and has it already been released?
Yes, it is online (https://github.com/amzn/emukit), but it is not officially released yet and is under heavy development.
Currently the optimization process is a fully automatic black box: you call fmin.bayes_optimization with appropriate arguments, wait for some time, and get the answer together with various running statistics, such as the points tried, the incumbents, and so on. By the time you get the results, the optimizer's internal state is gone, so interesting things like the acquisition function's behavior cannot be analyzed.
What do you think about giving client code the option to control the optimization loop? For example, splitting BaseSolver.run into BaseSolver.start and BaseSolver.step, so that interested users could write something along the lines of the sketch below.
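(A rough sketch only: start and step are the proposed methods, not existing API, and analyse stands in for whatever inspection code the user wants to run in between.)

solver = robo.solver.BayesianOptimization(model, acquisition, lower, upper, objective)
solver.start(n_init)                # run only the initial design
for k in range(num_iters):
    x_next, y_next = solver.step()  # one Bayesian-optimization iteration at a time
    analyse(solver)                 # e.g. plot the acquisition function, inspect the incumbent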