facebook / Ax

Adaptive Experimentation Platform
https://ax.dev

Multi-Objective, scale about the objectives #2743

Closed rachelhson closed 1 week ago

rachelhson commented 1 week ago
@rachelhson thanks for the question, closing the issue per Eytan's answer. Feel free to re-open with follow-ups, or open another issue for any future questions :)

Originally posted by @mgarrard in https://github.com/facebook/Ax/issues/2724#issuecomment-2319237426

How is it normalized under the hood? Is there a reference page describing how this is handled? Thank you :)

mgrange1998 commented 1 week ago

Hi Rachel, thank you for opening this issue.

To restate your original question for context:

I am currently working with a test that involves two different objectives, each with a different scale. For instance, one objective has values ranging from 500 to 1000, while the other ranges from 0.5 to 2.0. Can the ax-client handle these objectives equally despite their different scales? If so, what would be the recommended setup to ensure both objectives are optimized equally?
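To make the recommended setup concrete, here is a minimal sketch using the Service API; the parameter names, bounds, and metric names (large_scale_metric, small_scale_metric) are placeholders chosen for illustration, not values from your experiment:

```python
# Minimal two-objective setup sketch with the Ax Service API.
# Parameter/metric names and bounds are illustrative placeholders.
from ax.service.ax_client import AxClient, ObjectiveProperties

ax_client = AxClient()
ax_client.create_experiment(
    name="two_scale_moo_example",
    parameters=[
        {"name": "x1", "type": "range", "bounds": [0.0, 1.0]},
        {"name": "x2", "type": "range", "bounds": [0.0, 1.0]},
    ],
    objectives={
        # One metric may live in roughly [500, 1000] and the other in [0.5, 2.0];
        # both are passed in raw form, with no manual rescaling.
        "large_scale_metric": ObjectiveProperties(minimize=False),
        "small_scale_metric": ObjectiveProperties(minimize=False),
    },
)
```

Trials then proceed as usual with ax_client.get_next_trial() and ax_client.complete_trial(), reporting both metrics in their raw units.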

For a guide on how this works under the hood, a good resource is the "Fully Bayesian Multi-Objective Optimization using qNEHVI + SAASBO" wiki

See this quote from the intro:

Multi-objective optimization (MOO) covers the case where we care about multiple outcomes in our experiment but we do not know beforehand a specific weighting of those objectives (covered by ScalarizedObjective) or a specific constraint on one objective (covered by OutcomeConstraints) that will produce the best result.

The solution in this case is to find a whole Pareto frontier, a surface in outcome-space containing points that can't be improved on in every outcome. This shows us the tradeoffs between objectives that we can choose to make.
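For contrast, here is a rough sketch of those two alternatives using Developer API objects; the metric names, weights, and bound are purely illustrative, not a recommendation for your case:

```python
# Illustrative sketch of the two alternatives mentioned in the quote above.
from ax.core.metric import Metric
from ax.core.objective import ScalarizedObjective
from ax.core.outcome_constraint import OutcomeConstraint
from ax.core.types import ComparisonOp

# ScalarizedObjective: you already know a fixed weighting of the outcomes.
scalarized = ScalarizedObjective(
    metrics=[Metric(name="large_scale_metric"), Metric(name="small_scale_metric")],
    weights=[1.0, 400.0],  # illustrative weights putting the metrics on a comparable scale
    minimize=False,
)

# OutcomeConstraint: optimize one outcome subject to a bound on the other.
constraint = OutcomeConstraint(
    metric=Metric(name="small_scale_metric"),
    op=ComparisonOp.GEQ,
    bound=1.0,
    relative=False,
)
```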

When Ax performs multi-objective Bayesian optimization, it does not assume any trade-offs between the objectives. Instead, it prioritizes searching for improvement near optimal points and in unexplored regions of the search space in order to construct the Pareto frontier. The objectives are normalized so that the two objectives are weighted equally when exploring the search space.
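As a purely conceptual illustration of why the raw scales do not bias the search, consider a scale-free rescaling of each objective onto [0, 1]; this is not Ax's actual internal code (Ax and BoTorch apply their own outcome transforms), just a toy example of the idea:

```python
# Toy illustration only: map each objective's observed range onto [0, 1],
# so values from a 500-1000 metric and a 0.5-2.0 metric become comparable.
def to_unit_range(value, low, high):
    """Rescale an observed objective value given its observed range."""
    return (value - low) / (high - low)

print(to_unit_range(750.0, 500.0, 1000.0))  # 0.5
print(to_unit_range(1.25, 0.5, 2.0))        # 0.5
```

After the optimization has run, the Service API's ax_client.get_pareto_optimal_parameters() returns the parameterizations on the estimated Pareto frontier, so you can inspect the trade-offs directly.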

Let me know if you have any other questions, thanks.

mgrange1998 commented 1 week ago

Will close out the issue; please feel free to open another issue with further questions. Thanks!