Open dpeitz opened 1 week ago
Thanks for the question @dpeitz. I actually don't see get_MOO_PAREGO or get_MOO_EHVI in the tutorials. I think you may be working with a very old version of Ax, and those functions have been reaped. You probably want to use the modular BoTorch model in a GenerationStrategy (see tutorial), but feel free to send what you have so far.
Drive by comment (and not endorsed by the official Ax team, so caveat emptor!): I recently had to use a ParEGO-based acqf as I had many target variables; the way I did it was to define the generation strategy as follows:
gs_subs = GenerationStrategy(
    steps=[
        GenerationStep(
            model=get_MOO_PAREGO,
            num_trials=-1,
            max_parallelism=100,
            model_kwargs={  # Kwargs to pass to `BoTorchModel.__init__`
                "botorch_acqf_class": qExpectedHypervolumeImprovement,
            },
        ),
    ]
)
Thanks for the responses. @danielcohenlive you can find the tutorial here: https://ax.dev/tutorials/multiobjective_optimization.html I'm sure it will work with BoTorch, but I'd like to try if I can implement it in Ax as described above.
@Abrikosoff how did you run the experiments or evaluate your objective function? I just tried your approach with the BraninBeale tutorial and ran into problems when running the optimization.
# Load our sample 2-objective problem
branin_currin = BraninCurrin(negate=True).to(
    dtype=torch.double,
    device=torch.device("cuda" if torch.cuda.is_available() else "cpu"),
)
gs_subs = GenerationStrategy(
    steps=[
        GenerationStep(
            model=get_MOO_PAREGO,
            num_trials=-1,
            max_parallelism=100,
            model_kwargs={  # Kwargs to pass to `BoTorchModel.__init__`
                "botorch_acqf_class": qExpectedHypervolumeImprovement,
            },
        ),
    ]
)
# Client setup
ax_client = AxClient(generation_strategy=gs_subs)
ax_client.create_experiment(
    name="moo_experiment",
    parameters=[
        {
            "name": f"x{i+1}",
            "type": "range",
            "bounds": [0.0, 1.0],
        }
        for i in range(2)
    ],
    objectives={
        # `threshold` arguments are optional
        "a": ObjectiveProperties(minimize=False, threshold=branin_currin.ref_point[0]),
        "b": ObjectiveProperties(minimize=False, threshold=branin_currin.ref_point[1]),
    },
    overwrite_existing_experiment=True,
    is_test=True,
)
# Evaluation function
def evaluate(parameters):
    evaluation = branin_currin(
        torch.tensor([parameters.get("x1"), parameters.get("x2")])
    )
    # In our case, standard error is 0, since we are computing a synthetic function.
    # Set standard error to None if the noise level is unknown.
    return {"a": (evaluation[0].item(), 0.0), "b": (evaluation[1].item(), 0.0)}
# Run optimization
for i in range(25):
    parameters, trial_index = ax_client.get_next_trial()
    # Local evaluation here can be replaced with deployment to external system.
    ax_client.complete_trial(trial_index=trial_index, raw_data=evaluate(parameters))
This results in the following error:
...
Oh, in my case I had pre-existing data which I loaded before running the generation. If you don't have pre-existing data, you should include at least one Sobol sampling step, I think:
GenerationStep(
    model=Models.SOBOL,
    num_trials=1,  # How many trials should be produced from this generation step
),
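For reference, the combined strategy (Sobol initialization followed by the model-based step from above) might look like the sketch below; it keeps get_MOO_PAREGO as in the original snippet, though note the later comments in this thread that it has been removed from newer Ax versions, and the number of Sobol trials here is an arbitrary placeholder:

```python
# Sketch of a two-step strategy: Sobol initialization, then the model-based step.
# Assumes an Ax version that still ships get_MOO_PAREGO (see comments below).
gs_subs = GenerationStrategy(
    steps=[
        GenerationStep(
            model=Models.SOBOL,
            num_trials=5,  # placeholder: initial quasi-random trials before modeling
        ),
        GenerationStep(
            model=get_MOO_PAREGO,
            num_trials=-1,  # no limit on trials from this step
            max_parallelism=100,
        ),
    ]
)
```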
Thanks for the quick response, it works like that.
@Abrikosoff how can you be sure that ParEGO is used as the acquisition function in your example? In addition to the model, EHVI is also passed as an argument.
This isn't 100% right. Plus, get_MOO_PAREGO has been reaped from the codebase; we've moved away from those factory methods. I'm unsure whether the model kwarg can take a factory function, but it might...
gs_subs = GenerationStrategy(
    steps=[
        GenerationStep(
            model=get_MOO_PAREGO,
            num_trials=-1,
            max_parallelism=100,
            model_kwargs={  # Kwargs to pass to `BoTorchModel.__init__`
                "botorch_acqf_class": qExpectedHypervolumeImprovement,
            },
        ),
    ]
)
You'd want to do it like:
gs_subs = GenerationStrategy(
    steps=[
        GenerationStep(
            model=Models.BOTORCH_MODULAR,
            num_trials=-1,
            max_parallelism=100,
            model_kwargs={  # Kwargs to pass to `BoTorchModel.__init__`
                "botorch_acqf_class": qExpectedHypervolumeImprovement,
            },
        ),
    ],
)
See https://ax.dev/tutorials/modular_botax.html
how can you be sure that ParEGO is used as the acquisition function
I don't think it is; I think qExpectedHypervolumeImprovement is. You should substitute qLogNParEGO if you want ParEGO to be used.
Thanks for the response, first of all. The implementation of these factory methods works, but since you don't recommend using them, I'd better adapt my setup. It seems that passing the AF to model_kwargs is irrelevant in this case and is not applied to the model.
Is there a list of the available acquisition functions for the multi-objective case in the source code (e.g. qLogNParEGO)? I think I had already found something about this, but I can't find it anymore.
Thanks for the response first of all. The implementation of these factory methods works. But you don't recommend using them, so I'd better adapt my structure. It seems that passing the AF to model_kwargs is irrelevant in this case and is not applied to the model.
Yes, my apologies, I think this is what is happening here as well. In my use case I originally had another AF active, which triggered an error message saying that for my number of targets (17) I should use something like ParEGO; that prompted me to try get_MOO_PAREGO, after which the code worked. The AF passed to model_kwargs might have been left over from the previous failed attempt, sorry for the confusion.
Is there a list of the available acquisition functions for the multi-objective case in the source code (e.g. qLogNParEGO)? I think I have already found something about this, but I can't find it anymore.
Yes, this would be my question as well. Also, regarding @danielcohenlive's remark that we should not call these factory methods, what would be the recommended way to call ParEGO, for example?
@Abrikosoff based on the above comments, can we attach 12 existing trials to evaluate them and avoid using Sobol for generation, going directly to the BoTorch modeling step?
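If the evaluations already exist, AxClient.attach_trial can register those parameterizations so the model-based step has data to fit; a rough sketch (the parameter and metric names below are placeholders matching the BraninCurrin example above, not a tested recipe):

```python
# Sketch: attach pre-existing evaluations before the model-based generation step.
# `existing_data` entries are placeholders for your own parameters and outcomes.
existing_data = [
    ({"x1": 0.1, "x2": 0.7}, {"a": (1.2, 0.0), "b": (0.4, 0.0)}),
    # ... remaining evaluated configurations
]
for params, raw_data in existing_data:
    # attach_trial returns the (possibly cast) parameterization and a trial index
    _, trial_index = ax_client.attach_trial(parameters=params)
    ax_client.complete_trial(trial_index=trial_index, raw_data=raw_data)
```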
Do you have any ideas regarding the acquisition functions, @danielcohenlive? I don't want to rush you, and I'm grateful for this forum. But if not, I would try looking in the source code or documentation again.
Sorry, I thought I responded but somehow that never happened. Thank you for following up @dpeitz!
Is there a list of the available acquisition functions for the multi-objective case in the source code
No, we don't maintain any acquisition function index, but all of them come from https://github.com/pytorch/botorch/tree/main/botorch/acquisition.
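For the multi-objective case specifically, the Monte Carlo acquisition classes live under botorch.acquisition.multi_objective; a few commonly used ones are listed below. The import paths are assumptions based on recent BoTorch releases and may move between versions:

```python
# Import paths assumed from recent BoTorch releases; check your installed version.
from botorch.acquisition.multi_objective.monte_carlo import (
    qExpectedHypervolumeImprovement,
    qNoisyExpectedHypervolumeImprovement,
)
from botorch.acquisition.multi_objective.logei import (
    qLogExpectedHypervolumeImprovement,
    qLogNoisyExpectedHypervolumeImprovement,
)
from botorch.acquisition.multi_objective.parego import qLogNParEGO
```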
what would be the recommended way to call ParEGO, for example?
You'd want to use the modular botorch model
from botorch.acquisition.multi_objective.parego import qLogNParEGO

gs_subs = GenerationStrategy(
    steps=[
        GenerationStep(
            model=Models.BOTORCH_MODULAR,
            num_trials=-1,
            max_parallelism=100,
            model_kwargs={
                "botorch_acqf_class": qLogNParEGO,
            },
        ),
    ],
)
cc @sdaulton are there any other model kwargs necessary for MOO parego?
I would like to compare the qParEGO function for a multi-objective optimization with the qEHVI function. The only possibility I have discovered so far is via get_MOO_PAREGO and get_MOO_EHVI, which are also explained in the tutorials. Is there an easy way to link a ModelBridge object to an experiment of an AxClient? I would prefer to work with the Service API. Another possibility is to pass acquisition functions to the client via the GenerationStrategy. However, I have not found qParEGO as a defined acquisition function, unlike qEHVI. Have I missed something, or which implementation approach is (presumably) easier to implement?