ray-project / tune-sklearn

A drop-in replacement for Scikit-Learn’s GridSearchCV / RandomizedSearchCV -- but with cutting edge hyperparameter tuning techniques.
https://docs.ray.io/en/master/tune/api_docs/sklearn.html
Apache License 2.0

[Feature Request] Stop tuning upon optimization convergence #98

Closed · rohan-gt closed this issue 3 years ago

rohan-gt commented 4 years ago

Is it possible to enable early stopping for any algorithm that does not have partial_fit (e.g. LogisticRegression or RandomForest), just by looking at the train and test (CV) score progression across the trials?

richardliaw commented 4 years ago

@rohan-gt good question! Can you clarify what you mean by "early stopping"? Do you mean:

  1. Stop the hyperparameter sweep early, or
  2. Stop the training of individual runs early? (LogisticRegression supports "warm_start", so we leverage that for incremental training -- see the sketch below.)
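
To make the incremental-training point concrete, here is a small illustrative sketch in plain scikit-learn (not tune-sklearn internals): with `warm_start=True`, each `fit()` call resumes from the previous solution, so a metric can be read out between passes.

```python
# Illustrative sketch only: incremental training via warm_start in plain scikit-learn.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# max_iter=1 forces one solver iteration per fit(); expect ConvergenceWarnings.
clf = LogisticRegression(warm_start=True, max_iter=1)
for epoch in range(20):
    clf.fit(X, y)              # resumes from the previous coefficients
    score = clf.score(X, y)    # a metric is available after every "epoch"
    print(f"epoch {epoch}: accuracy {score:.3f}")
```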
rohan-gt commented 4 years ago

@richardliaw to stop the hyperparameter sweep. Aren't the schedulers supported by Ray Tune used for the same purpose?

inventormc commented 4 years ago

In general, we need to be able to look at some metric after each epoch to use Ray Tune's schedulers/early stopping algorithms to stop a hyperparameter sweep early. This is why we currently only early stop on estimators that have partial_fit or warm_start -- we can look at the metric after each epoch. Other sklearn estimators will just fit all the way to completion without giving us a chance to look at metrics in between epochs.
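
For reference, a minimal sketch of how that per-epoch early stopping is exposed today (assuming the `TuneSearchCV` parameters from the docs linked above); it only takes effect for estimators with `partial_fit` or `warm_start`, such as `SGDClassifier`:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from tune_sklearn import TuneSearchCV

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

search = TuneSearchCV(
    SGDClassifier(),
    param_distributions={"alpha": [1e-4, 1e-3, 1e-2, 1e-1]},
    n_trials=10,
    early_stopping=True,  # drives a Ray Tune scheduler off the per-epoch CV scores
    max_iters=10,         # number of partial_fit passes per trial
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```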

richardliaw commented 4 years ago

Hmm, yeah; I think there is value in stopping the hyperparameter tuning if the top score has converged across the last X trials, though (even before all n_trials trials have been fully evaluated).
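
One possible way to express that at the Ray Tune level is an experiment-level `Stopper`; a rough sketch, assuming `ExperimentPlateauStopper` is available in the installed Ray version (how tune-sklearn would wire this through its own API is a separate question):

```python
# Sketch of the idea in plain Ray Tune (Ray 1.x-era API): stop the whole sweep
# once the top CV scores have stopped improving, instead of exhausting num_samples.
from ray import tune
from ray.tune.stopper import ExperimentPlateauStopper
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

def trainable(config):
    # One trial = one hyperparameter configuration, reported as a single CV score.
    score = cross_val_score(SGDClassifier(alpha=config["alpha"]), X, y, cv=5).mean()
    tune.report(cv_score=score)

stopper = ExperimentPlateauStopper(
    metric="cv_score",
    mode="max",
    top=5,       # consider the 5 best trials seen so far
    std=1e-3,    # "converged": their scores agree to within this std
    patience=3,  # ...for 3 consecutive checks
)

tune.run(
    trainable,
    config={"alpha": tune.loguniform(1e-4, 1e-1)},
    num_samples=50,
    stop=stopper,
)
```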

rohan-gt commented 4 years ago

@richardliaw exactly. You just need to look at the CV score progression.

rohan-gt commented 3 years ago

In the graph below I'm taking the cumulative max of the CV score as the trials progress. We can see that the major optimum is reached after 8 trials, so we could potentially end the optimization after checking a few more trials beyond that point.

[Screenshot, 2020-11-10: cumulative max of the CV score across trials, plateauing after ~8 trials]
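
The check implied by that plot could be a small convergence test on the running best CV score; a rough sketch, with the `patience` and `tol` values picked purely for illustration:

```python
import numpy as np

def sweep_converged(cv_scores, patience=5, tol=1e-3):
    """Return True if the running best CV score has not improved by more
    than `tol` over the last `patience` trials."""
    if len(cv_scores) <= patience:
        return False
    running_best = np.maximum.accumulate(cv_scores)
    return running_best[-1] - running_best[-1 - patience] < tol

# Made-up example scores: the best score stops improving after the 8th trial.
scores = [0.71, 0.74, 0.78, 0.80, 0.81, 0.83, 0.84, 0.88,
          0.87, 0.88, 0.86, 0.88, 0.87]
print(sweep_converged(scores))  # True once enough non-improving trials have accumulated
```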