Hi @IanDelbridge, yes, that's correct, and so is your solution. It's actually not that hacky at all, all things considered. Ideally we'd have an automated way of transforming the inputs to the acquisition functions, but since the modular setup allows any number and kind of acquisition functions with different inputs, it's not obvious how to tell what to transform.
Maybe a convenience feature could be something like adding an `acq_options_to_transform` argument to `evaluate_acquisition_function`, applying what you're doing manually here under the hood, and then passing the union of the transformed options and the `acq_options` dict to the acquisition function? Do you think that would be a convenient API? Though that would also require distinguishing between Python-native and `Tensor` type inputs...
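Roughly something like this (a hypothetical sketch; `acq_options_to_transform` does not exist today, and the names are illustrative):

```python
# Hypothetical API sketch: acq_options_to_transform is a proposed argument,
# not something Ax implements. Options listed there would be mapped through
# the outcome transforms and then merged into acq_options under the hood.
values = model_bridge.evaluate_acquisition_function(
    observation_features=obs_feats,
    acq_options={"maximize": True},            # passed through as-is
    acq_options_to_transform={"best_f": 0.0},  # mapped into model space first
)
```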
Hi Max, thanks, I appreciate your response!
I think the part that sets off alarms telling me my solution is a hack is the way I'm applying the transform. Is there a better way to apply the transforms and inverse transforms without constructing fake observation features? Something like `gp_pi.transform_observation_data(ObservationData(...))`?
`acq_options_to_transform` would work from my perspective as an API, but I'm also thinking about un-transforming the acquisition function value, specifically the UCB return value.
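For the UCB case, something like this inverse mapping is what I have in mind (a rough sketch: it assumes a `StandardizeY` transform and a metric named `"objective"`, and the `Ymean`/`Ystd` attribute names should be checked against the Ax version in use):

```python
# Rough sketch: invert StandardizeY to map a UCB value from model space
# back to the raw outcome scale. "objective" is an illustrative metric name.
std_y = model_bridge.transforms["StandardizeY"]
mean, std = std_y.Ymean["objective"], std_y.Ystd["objective"]
ucb_raw = ucb_transformed * std + mean  # valid because UCB is affine in y
```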
I think we should be able to easily expose a `transform_observation_data()` method to make this less hacky; not sure why we haven't done this until now. @bletham, is there a reason for this, or is it just not something that was needed so far?
Some transforms for `ObservationData` require knowing the `ObservationFeatures`. In particular `StratifiedStandardizeY`, which standardizes Y stratified on some condition on X (https://github.com/facebook/Ax/blob/main/ax/modelbridge/transforms/stratified_standardize_y.py#L34). It is used for multi-task modeling, where data from different tasks may have very different scales and so need to be standardized separately.
This is why `transform_observation_data` is not a required method for a `Transform` to implement; in fact, `StratifiedStandardizeY` does not implement it and implements only `transform_observations`.
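Schematically (a simplified sketch, not the exact Ax base class):

```python
# Simplified sketch of the Transform interface to show why the Y-only
# method is optional; this is not the exact Ax base class.
class Transform:
    def transform_observations(self, observations):
        # Required: each Observation carries both features (X) and data (Y),
        # so e.g. StratifiedStandardizeY can stratify Y on a task feature.
        ...

    def transform_observation_data(self, observation_data):
        # Optional: Y-only. StratifiedStandardizeY cannot implement this,
        # since it needs the strata value from the features.
        raise NotImplementedError
```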
We could implement a `transform_observation_data` on the modelbridge; it would just need to throw an error if the modelbridge has any transforms that do not implement `transform_observation_data`.
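Roughly like this (a hypothetical sketch of the modelbridge method, with the signature simplified):

```python
# Hypothetical ModelBridge method (not in Ax today): apply each transform's
# Y-only method in fit order, erroring out if any transform lacks one.
def transform_observation_data(self, observation_data):
    for t in self.transforms.values():
        if not hasattr(t, "transform_observation_data"):
            raise NotImplementedError(
                f"{type(t).__name__} needs ObservationFeatures to transform data."
            )
        observation_data = t.transform_observation_data(observation_data)
    return observation_data
```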
This looks like it's been inactive for a while! @IanDelbridge, is this still an open issue for you? If so, please reopen it (we likely won't see further comments on a closed issue).
Hi, I am running a basic `SingleTaskGP`-based optimization, and I would like to return P(f(x) > 0), the probability of improving over a fixed objective baseline of 0. This is for a downstream system to assess the value of the candidate points and decide whether to stop optimization.
My understanding is that I should be able to get this by computing the `ProbabilityOfImprovement` acquisition function and supplying `{"best_f": 0}` as follows:
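(A minimal sketch of the call; it assumes an `experiment` with data already attached, and the exact plumbing of `best_f` through `acq_options` may differ by Ax version.)

```python
from ax.core.observation import ObservationFeatures
from ax.modelbridge.registry import Models
from botorch.acquisition import ProbabilityOfImprovement

# Minimal sketch; assumes `experiment` already has data attached. The exact
# plumbing of best_f through acq_options may differ by Ax version.
model_bridge = Models.BOTORCH_MODULAR(
    experiment=experiment,
    data=experiment.fetch_data(),
    botorch_acqf_class=ProbabilityOfImprovement,
)
pi_values = model_bridge.evaluate_acquisition_function(
    observation_features=[ObservationFeatures(parameters={"x": 0.5})],
    acq_options={"best_f": 0.0},  # 0.0 here is in the raw outcome space
)
```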
I think, though, that Ax applies transforms to the observation data before passing it to BoTorch, and I'm not sure whether 0 in the original space maps to 0 in the transformed space. Is that true? And if so, what is the recommended way of transforming the outcome?
The very hacky solution I've come up with looks like this, and I'm not sure if it's correct:
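(A sketch of the workaround; parameter and metric names are illustrative, and it relies on `transform_observations` and on `model_bridge.transforms` being applied in fit order.)

```python
import numpy as np
from ax.core.observation import Observation, ObservationData, ObservationFeatures

# Push a fake observation with y = 0 through the model bridge's transforms
# to find where the raw baseline lands in model space, then use that as
# best_f. Parameter/metric names are illustrative.
fake_obs = Observation(
    features=ObservationFeatures(parameters={"x": 0.5}),
    data=ObservationData(
        metric_names=["objective"],
        means=np.array([0.0]),
        covariance=np.array([[0.0]]),
    ),
)
for t in model_bridge.transforms.values():
    fake_obs = t.transform_observations([fake_obs])[0]
best_f_transformed = fake_obs.data.means[0]
```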
Thanks!