Yelp / MOE

A global, black box optimization engine for real world metric optimization.

implement q,p-EI for the heuristic optimizers #76

Open suntzu86 opened 10 years ago

suntzu86 commented 10 years ago

Right now you can only use constant liar, kriging believer, etc. to solve q,0-EI. There's no reason we can't do the same for q,p-EI.

The basic algorithm is:

for i in range(q):
  new_point = get_next_best_point(GP, ...)
  new_point_value = get_truth_value(new_point)  # e.g., kriging
  GP.add_point(new_point, new_point_value)

So we can simply first do:

points_being_sampled_value = get_truth_values(points_being_sampled)
GP.add_points(points_being_sampled, points_being_sampled_value)

And that provides a way of solving q,p-EI. For standard kriging believer, this drops the EI at each of the points_being_sampled to 0, allowing the heuristic solve (over q points) to identify new areas of interest.
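The two steps above can be sketched end to end. Everything here (`TinyGP`, `expected_improvement`, `qp_ei_heuristic`) is an illustrative stand-in, not MOE's actual GP or optimizer API; it assumes a 1-D input space, a zero-mean RBF-kernel GP, a minimization convention for EI, and a fixed candidate grid in place of a real inner optimizer:

```python
import math
import numpy as np

class TinyGP:
    """Minimal zero-mean GP with an RBF kernel (1-D inputs); illustration only."""
    def __init__(self, length_scale=1.0, noise=1e-6):
        self.ls, self.noise = length_scale, noise
        self.X, self.y = np.empty((0, 1)), np.empty(0)

    def _k(self, A, B):
        d = A[:, None, :] - B[None, :, :]
        return np.exp(-0.5 * np.sum(d * d, axis=-1) / self.ls ** 2)

    def add_points(self, X, y):
        self.X = np.vstack([self.X, X])
        self.y = np.concatenate([self.y, y])

    def predict(self, Xs):
        """Posterior mean and variance at Xs."""
        if len(self.y) == 0:
            return np.zeros(len(Xs)), np.ones(len(Xs))
        K = self._k(self.X, self.X) + self.noise * np.eye(len(self.y))
        Ks = self._k(Xs, self.X)
        mu = Ks @ np.linalg.solve(K, self.y)
        v = np.linalg.solve(K, Ks.T)
        var = np.clip(1.0 - np.sum(Ks * v.T, axis=1), 1e-12, None)
        return mu, var

def expected_improvement(mu, var, best):
    """Closed-form EI for minimization."""
    s = np.sqrt(var)
    z = (best - mu) / s
    Phi = 0.5 * (1.0 + np.array([math.erf(t / math.sqrt(2)) for t in z]))
    phi = np.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    return s * (z * Phi + phi)

def qp_ei_heuristic(gp, points_being_sampled, candidates, q):
    # Step 1 (the q,p extension): kriging believer on the p in-flight points --
    # assign each its posterior mean as a stand-in "truth" value.
    mu_p, _ = gp.predict(points_being_sampled)
    gp.add_points(points_being_sampled, mu_p)
    # Step 2: the usual greedy q-loop over a candidate grid.
    chosen = []
    for _ in range(q):
        best = gp.y.min()
        mu, var = gp.predict(candidates)
        idx = int(np.argmax(expected_improvement(mu, var, best)))
        new_point = candidates[idx:idx + 1]
        mu_new, _ = gp.predict(new_point)  # kriging believer for the new point too
        gp.add_points(new_point, mu_new)
        chosen.append(new_point[0])
    return np.array(chosen)
```

Swapping the `mu_new` line for a constant (e.g. `gp.y.min()` or `gp.y.max()`) turns this into constant liar instead of kriging believer.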

suntzu86 commented 10 years ago

One question: it's possible (probable) that the order in which we add points_being_sampled will matter when using GP-dependent heuristics (like kriging believer). This is irrelevant for constant liar-type heuristics. We may want to:

  1. Randomize the order in which points_being_sampled are added in.

and/or

  2. Try all (or a random subset of) orderings and take the best EI.

1. is more performant but 2. may have real impact in terms of result quality. I'm not sure if it would matter.
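Both options fit one small wrapper. This is a hedged sketch: `solve_given_order` is a hypothetical callback that is assumed to rebuild the GP, add the points in the given order, run the heuristic q-EI solve, and return `(ei_value, chosen_points)`; passing `n_orderings=None` is option 2 (all orderings), a small `n_orderings` is the random-subset variant:

```python
import itertools
import random

def best_over_orderings(base_points, solve_given_order, n_orderings=None, seed=0):
    """Try orderings of points_being_sampled; keep the solution with the best EI.

    solve_given_order(ordered_points) -> (ei_value, chosen_points) is a
    stand-in for the actual heuristic solve, not a MOE function.
    """
    rng = random.Random(seed)
    orderings = list(itertools.permutations(range(len(base_points))))
    if n_orderings is not None and n_orderings < len(orderings):
        orderings = rng.sample(orderings, n_orderings)  # option 1-style subset
    return max(
        (solve_given_order([base_points[i] for i in order]) for order in orderings),
        key=lambda pair: pair[0],
    )
```

Note the cost: p points have p! orderings, so exhaustive search is only feasible for small p; the random-subset path keeps the cost bounded.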