stevenkleinegesse opened this issue 5 years ago
Hello everyone,
In my understanding, .x_opt returns the optimum of a BayesianOptimization object, where the optimum is chosen from the set of objective function evaluations, i.e.
self.x_opt = self.X[np.argmin(self.Y),:]
(Line 206 in GPyOpt/GPyOpt/core/bo.py)
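For concreteness, here is a minimal sketch of that behaviour; the toy objective, the domain, and the name `myBopt` are placeholders chosen for illustration, not taken from the issue:

```python
import numpy as np
import GPyOpt

def f(x):
    # toy 1-D objective; GPyOpt passes x as an (n, 1) array
    return np.sin(3 * x) + x ** 2

domain = [{'name': 'x', 'type': 'continuous', 'domain': (-1.0, 2.0)}]

myBopt = GPyOpt.methods.BayesianOptimization(f=f, domain=domain)
myBopt.run_optimization(max_iter=15)

# x_opt / fx_opt are read off the evaluated points, i.e. X[argmin(Y)]
print(myBopt.x_opt, myBopt.fx_opt)
```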
However, I find it more intuitive to define the optimum as the minimum of the GP's predictive mean rather than the minimum over the set of evaluations.
My question: Is there a way to access the predictive mean that GPyOpt has evaluated after the last iteration and find the minimum of that? E.g. something like np.argmin(object.predictive_mean_current)?
If not, I would try to define a grid X_grid over my domain, then simply do object.model.predict(X_grid) and find the minimum of that. Is there a more elegant way of doing this?

You can find the optimum of your GP model with a gradient-based local optimizer such as L-BFGS-B. This is also how it is done within the Bayesian optimization itself, where an acquisition function of the model, rather than the model itself, is optimized. You can see how this is implemented in https://github.com/SheffieldML/GPyOpt/blob/master/GPyOpt/optimization/optimizer.py
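One way to combine the grid idea with the L-BFGS-B suggestion is sketched below. It assumes `myBopt` is the already-run BayesianOptimization object from above, on a 1-D continuous domain; `bounds`, `X_grid`, and `posterior_mean` are names introduced here purely for illustration, not GPyOpt API:

```python
import numpy as np
from scipy.optimize import minimize

bounds = [(-1.0, 2.0)]  # must match the optimization domain

# 1) Coarse search: evaluate the GP predictive mean on a grid.
X_grid = np.linspace(bounds[0][0], bounds[0][1], 1000).reshape(-1, 1)
mean, std = myBopt.model.predict(X_grid)   # posterior mean and std at the grid points
x_start = X_grid[np.argmin(mean)]

# 2) Refinement: minimize the predictive mean itself with L-BFGS-B,
#    starting from the best grid point.
def posterior_mean(x):
    m, _ = myBopt.model.predict(np.atleast_2d(x))
    return float(m)

res = minimize(posterior_mean, x_start, method='L-BFGS-B', bounds=bounds)
print(res.x, res.fun)
```

One caveat: if normalize_Y is enabled (which I believe is the default), the mean values returned here are on a normalized scale; since that rescaling is monotone, the location of the minimum is unaffected.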