jduerholt opened 1 year ago
Great!
Regarding the constrained objectives:
Simple case: You have an optimization problem in which you want to maximize one output, but an output constraint has to be fulfilled, e.g. a certain other property has to stay below a user-provided threshold. In this case you can understand the problem as a single-objective optimization problem under an output constraint. What you then do is model the constraint with a MinimizeSigmoidObjective and multiply it with the MaximizeObjective. This is exactly what we do in MultiplicativeSobo and what botorch also does internally on other occasions. In principle it results in constrained expected improvement.
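For illustration, here is a minimal numerical sketch of that multiplicative construction (plain numpy, hypothetical function names; the exact sigmoid parametrization is an assumption, not BoFire's actual implementation):

```python
import numpy as np

def maximize_objective(y):
    # reward for the output we want to maximize (identity here)
    return y

def minimize_sigmoid_objective(y, tp, steepness=10.0):
    # smooth indicator that y stays below the threshold tp:
    # ~1 for y well below tp, ~0 for y well above tp
    return 1.0 / (1.0 + np.exp(steepness * (y - tp)))

def multiplicative_reward(y_target, y_constraint, tp):
    # value handed to a Monte Carlo acquisition function: the objective is
    # damped whenever the constraint output exceeds the threshold
    return maximize_objective(y_target) * minimize_sigmoid_objective(y_constraint, tp)

# maximize output A while output B must stay below 5.0
print(multiplicative_reward(y_target=3.2, y_constraint=4.1, tp=5.0))  # ≈ 3.2 (feasible)
print(multiplicative_reward(y_target=3.2, y_constraint=6.0, tp=5.0))  # ≈ 0.0 (infeasible)
```

Averaging this product over posterior samples inside a Monte Carlo acquisition function gives the constrained-EI-like behaviour described above.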
Now let's add another MaximizeObjective to the optimization problem. Now we have two possibilities to understand the problem:
1. Keep treating the sigmoid objective as an output constraint, i.e. a multi-objective problem over the two MaximizeObjectives under an output constraint.
2. Let the sigmoid objective enter the Pareto front as well, i.e. treat everything as a regular objective of a multi-objective problem.
One would have the same question for a close-to-target objective: is it actually something that enters the Pareto front, or is it an output constraint? So far we cannot distinguish this; for this reason, I propose to add this additional flag to specific objectives.
What do you think?
Are there practical applications where 2 is not sufficient?
Btw. I will wait for #21 to be merged before porting the algorithms.
Are there practical applications where 2 is not sufficient?
Yes. We often run opts with the first use case.
Okay. Why do we need a separate base class and a flag? Isn't a class ConstrainedObjective for use case 1 sufficient? In use case 2 I do not see a special treatment.
Just as info, we will also bring back the random strategy and the sampling part from our side. We will orient ourselves very much on how you did it in opti.
Uniform and Sobol sampling are now available again, have a look here: https://github.com/experimental-design/bofire/pull/25
Constrained sampling, rejection sampling and the random strategy will hopefully be back tomorrow, at the latest on Monday ;)
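For context, here is a rough sketch of what such a random strategy with rejection sampling boils down to (plain numpy/scipy, hypothetical helper names; this is not the actual opti or BoFire code):

```python
import numpy as np
from scipy.stats import qmc

def sobol_sample(bounds, n, seed=42):
    # quasi-random samples in a box; categorical features would be drawn uniformly
    lower, upper = np.array(bounds).T
    sampler = qmc.Sobol(d=len(bounds), seed=seed)
    return qmc.scale(sampler.random(n), lower, upper)

def rejection_sample(bounds, n, is_feasible, max_iters=1000, seed=42):
    # fallback when constraints cannot be handled analytically:
    # draw uniformly and keep only feasible candidates
    rng = np.random.default_rng(seed)
    lower, upper = np.array(bounds).T
    samples = []
    for _ in range(max_iters):
        candidates = rng.uniform(lower, upper, size=(n, len(bounds)))
        samples.extend(c for c in candidates if is_feasible(c))
        if len(samples) >= n:
            return np.array(samples[:n])
    raise RuntimeError("feasible region too small for rejection sampling")

def random_strategy_ask(bounds, n, is_feasible=None):
    # one strategy doing the 'magic' under the hood:
    # Sobol if the domain is unconstrained, rejection sampling otherwise
    if is_feasible is None:
        return sobol_sample(bounds, n)
    return rejection_sample(bounds, n, is_feasible)

# example: two continuous inputs with a linear inequality x0 + x1 <= 1
print(random_strategy_ask(
    bounds=[(0.0, 1.0), (0.0, 1.0)], n=8, is_feasible=lambda x: x.sum() <= 1.0
).shape)  # (8, 2)
```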
Okay. Why do we need a separate base class and a flag? Isn't a class ConstrainedObjective for use case 1 sufficient? In use case 2 I do not see a special treatment.
Good idea, I think I will change it accordingly ;)
Entmoot #74, TSEMO #73, DoE #68, Random Forest #69
As the abstract strategy templates are almost finished (https://github.com/experimental-design/bofire/pull/12), we should discuss the how and who regarding bringing back the strategies:
Random Strategy: In our implementation we had a flag use_sobol to switch to Sobol sampling (using the botorch implementation) if allowed by the constraints of the domain; otherwise we were switching to the HitAndRunSampler of botorch. The NChooseK constraint and non-linear constraints were not implemented in our RandomStrategy. When calling Sobol with categorical features present, we were using uniform sampling for the categorical ones and Sobol for the continuous ones. You implemented different samplers (Hit and Run, Sobol, Uniform, Rejection) in opti.sampling and just called them in the RandomStrategy; based on the domain, you were choosing which sampler to apply. How should we do it in BoFire? Having one RandomStrategy which does the magic under the hood, or having specific strategies, one for Sobol, one for Rejection? I am completely open here ...
Entmoot: PredictiveStrategy.
RandomForest: PredictiveStrategy. I see that you are using the sampling methods of opti.sampling there. Concerning point one (RandomStrategy), this could be an argument for doing the sampling, from a conceptual point of view, exactly as you did it in opti, i.e. having a sampling module and wrapping it into the RandomStrategy.
DoE: Strategy.
TSEMO: PredictiveStrategy.
Now to the botorch complex. We currently have the following botorch based strategies:
SOBO, AdditiveSobo and MultiplicativeSobo: the objectives are handled via the botorch Objective class. SOBO only allows for a single OutputFeature. In AdditiveSobo the objectives defined per OutputFeature are combined in an additive way, whereas the scalarization in MultiplicativeSobo is multiplicative.
Currently only MaximizeObjective and MinimizeObjective are supported. For supporting the other ones, I am tending towards implementing them as constraints as done here: https://github.com/pytorch/botorch/blob/main/tutorials/constrained_multi_objective_bo.ipynb What is your opinion here? Do you want to have them as output constraints or as actual objectives in the acquisition function calculation of the hypervolume? We could also go for both options, but then we need to define an API interface for this purpose.
optimize_acqf_list_mixed: I already started the PR but have to finish it by adding the tests (https://github.com/pytorch/botorch/pull/1342).
From a class perspective it looks as follows currently in Everest:
I would opt for keeping this schema in principle, as this allows us to easily pass around the botorch specific implementations. I would also volunteer for porting this to BoFire, as we also plan massive additions here regarding the support of other model types like deep ensembles, MultiTaskGPs, MultifidelityGPs, linear models and latent space models.
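To make the additive vs. multiplicative scalarization mentioned above concrete, here is a small sketch using botorch's GenericMCObjective (the per-output callables and numbers are made up for illustration; this is not the Everest/BoFire code):

```python
import torch
from botorch.acquisition.objective import GenericMCObjective

# assumed per-output objectives: output 0 is maximized as-is,
# output 1 should stay below a threshold (sigmoid-shaped reward)
def obj_0(samples):
    return samples[..., 0]

def obj_1(samples, tp=5.0, steepness=10.0):
    return torch.sigmoid(steepness * (tp - samples[..., 1]))

# additive scalarization: sum of the per-output objective values
additive = GenericMCObjective(lambda samples, X=None: obj_0(samples) + obj_1(samples))

# multiplicative scalarization: product of the per-output objective values
multiplicative = GenericMCObjective(lambda samples, X=None: obj_0(samples) * obj_1(samples))

# fake posterior samples with two outputs, shape (num_mc_samples, q, m)
samples = torch.tensor([[[3.2, 4.1]], [[3.2, 6.0]]])
print(additive(samples))        # roughly [[4.2], [3.2]]
print(multiplicative(samples))  # roughly [[3.2], [0.0]]
```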
For me the biggest open question is how to distinguish in MOBO strategies whether an objective should be treated as a true output objective or as an output constraint. This also touches the discussion in this issue: https://github.com/experimental-design/bofire/pull/6#issuecomment-1312819195 I would propose the following:
Let us introduce a new abstract objective ConstrainedObjective with a boolean attribute is_output_constraint and derive the MaxSigmoid, CloseToTarget etc. objectives from the abstract ConstrainedObjective class. Depending on how the is_output_constraint flag is set, it is treated accordingly in the specific strategies. What do you think?

ping @jkleinekorte @DavidWalz @bertiqwerty @R-M-Lee @WaStCo
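For illustration, a minimal sketch of the proposed hierarchy (class and attribute names as in the proposal above; the common base class, the pydantic usage and the concrete parametrizations are assumptions, not the actual implementation):

```python
import math

from pydantic import BaseModel


class Objective(BaseModel):
    """Assumed common base class for all objectives."""

    def __call__(self, x: float) -> float:
        raise NotImplementedError


class ConstrainedObjective(Objective):
    """Abstract objective that can also act as an output constraint."""

    is_output_constraint: bool = True


class MinimizeSigmoidObjective(ConstrainedObjective):
    tp: float
    steepness: float = 1.0

    def __call__(self, x: float) -> float:
        # ~1 below the threshold tp, ~0 above it
        return 1.0 / (1.0 + math.exp(self.steepness * (x - self.tp)))


class CloseToTargetObjective(ConstrainedObjective):
    target_value: float
    exponent: float = 2.0

    def __call__(self, x: float) -> float:
        # penalize deviation from the target value
        return -abs(x - self.target_value) ** self.exponent


# a strategy could then branch on the flag:
obj = CloseToTargetObjective(target_value=42.0, is_output_constraint=False)
if isinstance(obj, ConstrainedObjective) and obj.is_output_constraint:
    print("treat as output constraint")
else:
    print("treat as objective that enters the Pareto front")
```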