openml / automlbenchmark

OpenML AutoML Benchmarking Framework
https://openml.github.io/automlbenchmark
MIT License

Refactor inference time measurement through common interface #536

Open PGijsbers opened 1 year ago

PR #532 introduced the option to measure inference time for many frameworks, but it also introduced a lot of code duplication along the way. The PR should be "finished" by refactoring the duplicated measurement logic into a common interface shared across framework integrations.
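A minimal sketch of what such a common interface could look like, assuming a per-sample timing loop. All names here (`measure_inference_time`, `predict`, `repeats`) are hypothetical and not part of the existing codebase:

```python
import time
from statistics import median
from typing import Callable, Sequence


def measure_inference_time(predict: Callable[[object], object],
                           samples: Sequence[object],
                           repeats: int = 10) -> dict:
    """Time `predict` on each sample and return the median of several runs.

    Each framework integration would only need to supply its own `predict`
    callable; the timing logic itself lives in one place.
    """
    per_sample = []
    for sample in samples:
        runs = []
        for _ in range(repeats):
            start = time.perf_counter()
            predict(sample)
            runs.append(time.perf_counter() - start)
        # median is less sensitive to one-off slowdowns than the mean
        per_sample.append(median(runs))
    return {"per_sample_median_s": per_sample}
```

With this in place, each framework wrapper would call the shared helper instead of carrying its own copy of the timing loop.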

Another open issue (which may be tackled independently) is that inference time measurement can take a considerable amount of time. This can cause the benchmark to abort a job because of the added overhead, even though the job would otherwise have been successful. There should be an option to do "best effort" inference time measurement within the time limit, or perhaps to configure it dynamically. I don't have a concrete idea of how to do this robustly.
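One possible interpretation of "best effort" is to measure samples only while a remaining time budget allows, and simply return whatever was collected when the budget runs out. A hedged sketch, with hypothetical names (`best_effort_timing`, `budget_s`) that are not part of the framework:

```python
import time
from typing import Callable, Sequence


def best_effort_timing(predict: Callable[[object], object],
                       samples: Sequence[object],
                       budget_s: float) -> list:
    """Time `predict` on as many samples as fit within `budget_s` seconds.

    Stops early once the budget is exhausted, so the benchmark job is never
    aborted just because inference measurement took too long; partial
    results are returned instead.
    """
    start = time.monotonic()
    timings = []
    for sample in samples:
        if time.monotonic() - start >= budget_s:
            break  # budget exhausted: return the measurements we have
        t0 = time.perf_counter()
        predict(sample)
        timings.append(time.perf_counter() - t0)
    return timings
```

The budget could be derived dynamically from the time left in the overall job limit, which would address the dynamic-configuration idea as well.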