-
Use the https://autokeras.com/ AutoML system as a starting point.
-
http://mr-0xc1:8080/view/H2OAI/job/h2oai-benchmark-quick/1259/console
Starting automl original with runtime 97
04:49:26 AutoML progress: |███████ (failed)
04:49:26 Traceback (most recent call las…
-
Have a simple starter benchmark for comparing different AutoML algorithms, e.g. from http://www.ml4aad.org/literature-on-neural-architecture-search/
Supporting orgs: Cisco
-
I benchmarked several AutoML libraries, and TPOT showed very poor results, even worse than plain CatBoost with default parameters!
https://github.com/Alex-Lekov/AutoML-Benchmark/
I ran the benchmark in d…
-
As a demonstration of the concept for time-series benchmarking (#494), benchmarking support should be added for more AutoML frameworks capable of forecasting time series.
This issue is dedicated …
-
When you want to run multiple AutoML systems, you currently have to call the script twice, e.g.:
```
python runbenchmark.py TPOT test test
python runbenchmark.py auto-sklearn test test
```
I…
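Until the runner supports several frameworks in a single invocation, a shell loop over the same command line shown above is a simple workaround (this is a hypothetical sketch, not a feature of `runbenchmark.py` itself; the `echo` is kept for illustration and can be removed to actually run each benchmark):

```shell
# Hypothetical workaround: iterate over framework names in the shell.
# Remove 'echo' to actually execute each run instead of printing it.
for fw in TPOT auto-sklearn; do
    echo python runbenchmark.py "$fw" test test
done
```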
-
Every benchmark should accept an rng (whether or not it uses it). Currently, no benchmark accepts a seed or rng, so data shuffling and model creation are non-deterministic:
Examples:
https://github.…
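A minimal sketch of what seed/rng injection could look like (class and method names here are hypothetical, using NumPy's `Generator` API; this is not code from the repository above):

```python
import numpy as np

def make_rng(rng=None):
    # Accept None, an int seed, or an existing Generator for reproducibility.
    if isinstance(rng, np.random.Generator):
        return rng
    return np.random.default_rng(rng)

class Benchmark:
    """Hypothetical benchmark that threads an rng through all randomness."""

    def __init__(self, rng=None):
        self.rng = make_rng(rng)

    def shuffle_data(self, data):
        # Reproducible shuffle driven by the injected rng, not global state.
        idx = self.rng.permutation(len(data))
        return [data[i] for i in idx]
```

With this pattern, two benchmarks constructed from the same seed shuffle identically, while passing no rng preserves today's behavior.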
-
There seems to be an issue with running yahpo.
I ran the following code:
```py
from yahpo_gym import *
from yahpo_gym import local_config, benchmark_set

def main():
    local_config…
```
-
https://github.com/automl/HPOBench/blob/master/hpobench/benchmarks/ml/tabular_benchmark.py#L166
Should it be:
```python
cost_key = f"{evaluation}_costs"
```
?
-
I noticed that you're reporting logloss as the metric to evaluate systems, but you're not passing this information to any of the AutoML systems. Both auto-sklearn and H2O AutoML (maybe MLJar too?) ha…