automl / mf-prior-bench

A collection of multi-fidelity benchmarks with first-class support for user priors
https://automl.github.io/mf-prior-bench/
Apache License 2.0

Similar function signatures for Learning Curve retrieval #1

Closed. Neeratyoy closed this issue 1 year ago.

Neeratyoy commented 2 years ago

In my opinion, having the same function signature for the two types of retrieval makes sense.
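
For illustration, a single-result retrieval could mirror the trajectory signature shown later in this thread; the name query and its at parameter are assumptions for this sketch, not taken from the issue:

# Sketch only: a single-fidelity counterpart whose signature mirrors trajectory().
# `query` and `at` are assumed names, not confirmed by the issue.
@abstractmethod
def query(
    self,
    config: C,
    *,
    at: F | None = None,  # None indicates max of fidelity
) -> R:
    ...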

Additionally, I was wondering if we could already add a placeholder parameter, passed either to the benchmark state at initialisation or to the function calls above, indicating whether an evaluation is being continued (i.e. thawed). The only thing that should change in that case is the cost of continuations; I'm not sure where the best place to make that change is. We could also handle it post-hoc, by subtracting the cost already incurred at the lower fidelity from the cost of each continued evaluation.
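
A minimal sketch of that post-hoc correction, assuming each result in a retrieved curve exposes per-step cost and fidelity fields (assumed names, not confirmed by the benchmark API):

# Sketch only: continuation cost computed post-hoc by discounting the cost
# already paid up to the previously evaluated fidelity.
def continuation_cost(curve, previously_evaluated):
    """Cost of continuing past `previously_evaluated` to the end of `curve`."""
    return sum(r.cost for r in curve if r.fidelity > previously_evaluated)

# A fresh (non-continued) evaluation would instead pay the full cost:
# full_cost = sum(r.cost for r in curve)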

For the non-benchmark case, NePS handles continuation anyway, so it should work out fine, I believe.

@DaStoll @eddiebergman thoughts? Please feel free to make the call.

eddiebergman commented 2 years ago

As a note, I changed def trajectory to the signature below, but I see your argument nonetheless:

from __future__ import annotations

from abc import ABC, abstractmethod
from typing import Generic, TypeVar

C = TypeVar("C")  # the config type for the benchmark
F = TypeVar("F")  # the fidelity type for the benchmark (int, float)
R = TypeVar("R")  # the result type for the benchmark

class Benchmark(ABC, Generic[C, F, R]):  # enclosing class shown for context
    @abstractmethod
    def trajectory(
        self,
        config: C,
        *,
        frm: F | None = None,  # None indicates min of fidelity
        to: F | None = None,   # None indicates max of fidelity
    ) -> list[R]:
        """Retrieve the results for config across the fidelity range [frm, to]."""
        ...
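
For illustration only, with an assumed benchmark instance bench and a sampled config (names not from the issue), the defaults behave as follows:

# Hypothetical usage of the signature above.
full_curve = bench.trajectory(config)                   # from min to max fidelity
partial_curve = bench.trajectory(config, frm=5, to=25)  # a slice of the curve
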
eddiebergman commented 2 years ago

Regarding the freeze-thaw action: it only makes sense for "freeze-thawable" algorithms, so I would leave it to post-hoc handling for simplicity of the benchmark.