PKU-DAIR / open-box

Generalized and Efficient Blackbox Optimization System
https://open-box.readthedocs.io

Scalability of GP #86


rmrmg commented 7 months ago

I have a problem with OpenBox optimization: prf works fast, but the quality of its results is lower compared to gp. Unfortunately, gp becomes very slow after several hundred points. There are procedures that reduce the complexity of GP (e.g. https://proceedings.neurips.cc/paper_files/paper/2019/file/01ce84968c6969bdd5d51c5eeaa3946a-Paper.pdf). Is anything of that sort available in OpenBox? If not, how can I run long optimizations effectively? I have the following ideas:

a) Perform a long optimization with prf, feed the history with those results, and then run gp. However, each gp step will still be very slow.

b) Optimize in sub-spaces: divide the whole space into N non-overlapping parts and perform N "independent" sub-optimizations. The main optimization process then takes a step in the subspace where the value of the acquisition function is highest.

What do you think about (b)? Do you have any better idea? If you think (b) is worth trying, how can I do it with OpenBox? I could build on https://open-box.readthedocs.io/en/latest/examples/ask_and_tell.html, but is there anything like advisor.get_best_value_of_acquisition_function()?

jhj0411jhj commented 7 months ago

Hi @rmrmg, there is an auto-switch mechanism in OpenBox currently. If you set surrogate_model='auto' and the model is decided to be 'gp', the model will be switched to 'prf' automatically after 300 iterations. We may consider implementing the algorithm in your referenced paper in the future.
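
A minimal sketch of the auto-switch setup described above. Note that the keyword in the documented Optimizer API appears to be `surrogate_type` rather than `surrogate_model`, and the search space and objective below are toy placeholders for illustration only:

```python
# Hedged sketch: exact keyword names may vary across OpenBox versions.
from openbox import Optimizer, space as sp

# Toy 1-D search space and objective, purely for illustration.
space = sp.Space()
space.add_variables([sp.Real('x', -10.0, 10.0)])

def objective(config):
    return {'objectives': [(config['x'] - 2.0) ** 2]}

# surrogate_type='auto' lets OpenBox choose the surrogate; per the comment
# above, a 'gp' choice is switched to 'prf' after ~300 iterations.
opt = Optimizer(objective, space, max_runs=500, surrogate_type='auto')
history = opt.run()
```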

For (b), you can take the following code as a reference:

```python
import numpy as np
from openbox import Advisor, History, Observation

advisor = Advisor(...)
history = advisor.get_history()
for i in range(100):
    # split the history into two disjoint sub-histories
    history1 = History(...)
    history2 = History(...)
    history1.update_observations(history.observations[::2])
    history2.update_observations(history.observations[1::2])
    # get one suggestion per sub-history
    config1 = advisor.get_suggestion(history=history1)
    config2 = advisor.get_suggestion(history=history2)
    # compute acquisition values for both candidates
    all_config = [config1, config2]
    acq_value = advisor.acquisition_function(all_config)
    # evaluate the candidate with the highest acquisition value
    next_config = all_config[np.argmax(acq_value)]
    y = obj_func(next_config)
    # update the full history
    observation = Observation(config=next_config, objectives=[y])
    history.update_observation(observation)
```
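
A usage note on the sketch above: `Advisor(...)` and `History(...)` are placeholders to be filled with your own configuration space and settings, and `obj_func` is your objective function. Also, if `get_suggestion(history=...)` refits the surrogate and refreshes `advisor.acquisition_function` internally (which the ask-and-tell interface suggests), the final comparison scores both candidates under the model fitted last, i.e. on `history2`, rather than scoring each candidate under its own sub-space model.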

The results may differ across problems. If you want to optimize for 1000-10000 iterations, you can also consider using evolutionary algorithms (see openbox.core.ea_advisor and openbox.core.ea).
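
Since the evolutionary advisors are only pointed at by module path above, here is a hedged ask-and-tell sketch; the class name `EA_Advisor` and its constructor arguments are assumptions based on the module `openbox.core.ea_advisor` and may differ in your version:

```python
# Hedged sketch of an evolutionary-algorithm loop; EA_Advisor and its
# signature are assumptions -- check openbox.core.ea_advisor in your install.
from openbox import space as sp, Observation
from openbox.core.ea_advisor import EA_Advisor

space = sp.Space()
space.add_variables([sp.Real('x', -10.0, 10.0)])

advisor = EA_Advisor(space, population_size=30)
for i in range(1000):
    config = advisor.get_suggestion()
    y = (config['x'] - 2.0) ** 2   # toy objective for illustration
    advisor.update_observation(Observation(config=config, objectives=[y]))
```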

For developing with OpenBox, these docs can be helpful: https://open-box.readthedocs.io

rmrmg commented 7 months ago

Hi @jhj0411jhj, thanks for the answer. Do you have any example tutorial for the EA?

jhj0411jhj commented 7 months ago

That part is still in development. We will update the docs in the future.

rmrmg commented 7 months ago

@jhj0411jhj I found https://github.com/LLNL/MuyGPyS. What do you think about integrating it with OpenBox?

jhj0411jhj commented 7 months ago

Thanks for the suggestion. We will take a look, but it may take some time due to limited manpower.