Open ledell opened 3 years ago

ledell: @pplonski I've seen the new MLJAR supervised; cool to see it open sourced! I saw it's been added to the openml/benchmark too, thanks!

I noticed that you're reporting logloss as the metric used to evaluate systems, but you're not passing this information to any of the AutoML systems. Both auto-sklearn and H2O AutoML (maybe MLJAR too?) can optimize and choose a leader model based on the metric you want to evaluate on, so this should be specified explicitly in the benchmark.

In H2O AutoML, the stopping_metric and sort_metric arguments should both be set to "logloss". More info here. By default on binary classification problems, H2O optimizes for AUC unless you change it to logloss. In auto-sklearn, there is a metric argument which should be used and set to "logloss". More info here.

pplonski: You are right. There is no metric passed. I don't remember why it wasn't set. Anyway, I've moved the MLJAR AutoML engine into open source: https://github.com/mljar/mljar-supervised (docs: https://supervised.mljar.com) and added it to openml/automlbenchmark (although I need to update the mljar-supervised version there, after adding golden features and feature selection as new steps).
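The framework-specific arguments above could be wired up with a small helper like the sketch below. This is a hypothetical helper for illustration, not part of automlbenchmark; note also that auto-sklearn's metric parameter actually takes a Scorer object (e.g. autosklearn.metrics.log_loss) rather than a plain string, while H2O AutoML does accept metric names as strings.

```python
# Sketch (hypothetical helper): map the benchmark's evaluation metric
# to the constructor kwargs each AutoML framework expects.
def metric_kwargs(framework: str, metric: str) -> dict:
    if framework == "h2o":
        # H2OAutoML takes stopping_metric and sort_metric as strings,
        # e.g. H2OAutoML(stopping_metric="logloss", sort_metric="logloss")
        return {"stopping_metric": metric, "sort_metric": metric}
    if framework == "autosklearn":
        # AutoSklearnClassifier takes a `metric` argument; in practice this
        # should be a Scorer such as autosklearn.metrics.log_loss, so a real
        # implementation would translate the string to the Scorer here.
        return {"metric": metric}
    raise ValueError(f"unknown framework: {framework}")

print(metric_kwargs("h2o", "logloss"))
# {'stopping_metric': 'logloss', 'sort_metric': 'logloss'}
```

The kwargs would then be splatted into the framework's constructor, e.g. `H2OAutoML(max_runtime_secs=..., **metric_kwargs("h2o", "logloss"))`.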