Closed esantorella closed 2 weeks ago
This pull request was exported from Phabricator. Differential Revision: D65497700
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 95.61%. Comparing base (8231273) to head (68473c1).
:umbrella: View full report in Codecov by Sentry.
This pull request has been merged in facebook/Ax@ffedab0ccb7c1be53e3e04ba9acbb0fad96a56be.
Summary:

Context: This will enable constructing the `BenchmarkRunner` based on the `BenchmarkProblem` and `BenchmarkMethod` rather than asking the user to provide it. In addition to making things simpler (it's weird that a runner is part of a problem!), that will enable the runner to be aware of aspects of the method, such as parallelism.

This will also enable us to return metrics in a dict format (`{outcome_name: value}`) if we choose to do so in the future. That may be simpler, since the data already gets processed into dicts by the runner.

Note that for problems based on BoTorch problems, names are usually already set programmatically, so that logic moves to the test problem.
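The dict format mentioned above can be sketched with plain Python: a runner that knows the test function's outcome names can pair them with evaluated values. This is a minimal illustration of the idea, not Ax's actual implementation; the names and values here are made up.

```python
# Hypothetical sketch of the {outcome_name: value} format described above.
# Neither the names nor the pairing logic are taken from Ax's codebase.
outcome_names = ["objective", "constraint"]  # assumed example names
values = [0.42, 1.7]                         # assumed evaluated values

# Pair each outcome name with its value, as a dict keyed by name.
metrics = dict(zip(outcome_names, values))
print(metrics)  # {'objective': 0.42, 'constraint': 1.7}
```

Keyed-by-name output avoids relying on positional ordering when the runner processes results.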
This diff:

- Adds `outcome_names` on `BenchmarkTestFunction`
- Removes `outcome_names` as an argument from `BenchmarkRunner`
- Constructs defaults on `BoTorchTestFunction` when they are not provided, following the convention used elsewhere.

Update usages:

- Remove `outcome_names` from calls to `BenchmarkRunner`
- Add `outcome_names` to calls to `BenchmarkTestFunction`, where needed; they are generally already present on surrogate test functions and can be constructed automatically for BoTorch-based problems.

Differential Revision: D65497700
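The "constructed automatically when not provided" behavior described above can be sketched as follows. This is a hypothetical stand-in, not Ax's `BoTorchTestFunction`: the class, the `num_objectives` field, and the `objective_{i}` naming scheme are all assumptions for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the pattern in this diff: a test function that
# fills in default outcome names when the caller does not provide them.
@dataclass
class BoTorchTestFunctionSketch:
    num_objectives: int = 1
    outcome_names: list = field(default_factory=list)

    def __post_init__(self) -> None:
        if not self.outcome_names:
            # Construct defaults programmatically (naming scheme assumed,
            # not taken from Ax).
            self.outcome_names = [
                f"objective_{i}" for i in range(self.num_objectives)
            ]

f = BoTorchTestFunctionSketch(num_objectives=2)
print(f.outcome_names)  # ['objective_0', 'objective_1']
```

Keeping the default-construction logic on the test function (rather than on the runner) matches the diff's goal of letting the runner be built from the problem and method without extra user-supplied arguments.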