ray-project / tune-sklearn

A drop-in replacement for Scikit-Learn’s GridSearchCV / RandomizedSearchCV, but with cutting-edge hyperparameter tuning techniques.
https://docs.ray.io/en/master/tune/api_docs/sklearn.html
Apache License 2.0

Metric name issue in tune_sklearn.tune_basesearch.TuneBaseSearchCV._format_results #117

Open zhu0619 opened 4 years ago

zhu0619 commented 4 years ago

In `tune_sklearn/tune_basesearch.py:603`, the way the columns for a given metric name are selected isn't robust for customized scoring functions and metric names; see below:

```python
df[[
    col for col in dfs[0].columns
    if "split" in col and "test_%s" % name in col
]].to_numpy() for df in finished
```

If the metric names are `'metric 1'` and `'metric 11'`, the results for `metric 11` will also be included under `'metric 1'`, because the check is a substring match. In this case, maybe using `col.endswith("test_%s" % name)` is better.
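A minimal sketch of the collision, using a hypothetical `cv_results_`-style DataFrame (the column names and values here are made up for illustration, not taken from tune-sklearn's output):

```python
import pandas as pd

# Hypothetical per-split results frame with two custom metrics whose
# names overlap as substrings: "metric 1" is a prefix of "metric 11".
df = pd.DataFrame({
    "split0_test_metric 1": [0.1],
    "split1_test_metric 1": [0.2],
    "split0_test_metric 11": [0.9],
    "split1_test_metric 11": [0.8],
})

name = "metric 1"

# Current substring check: "test_metric 1" is contained in
# "split0_test_metric 11", so the "metric 11" columns leak in.
substring_cols = [
    col for col in df.columns
    if "split" in col and "test_%s" % name in col
]
print(substring_cols)  # picks up all four columns

# Proposed suffix check: only the intended metric's columns match,
# because no other metric name can appear after "test_metric 1".
endswith_cols = [
    col for col in df.columns
    if "split" in col and col.endswith("test_%s" % name)
]
print(endswith_cols)  # only the two "metric 1" columns
```

The suffix check works here because the metric name is always the final component of the column name, so `endswith` cannot be fooled by a longer metric name that merely contains the shorter one.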

richardliaw commented 4 years ago

hmm yeah maybe that sounds good. Could you push a PR and tag me?