sherpa-ai / sherpa

Hyperparameter optimization that enables researchers to experiment, visualize, and scale quickly.
http://parameter-sherpa.readthedocs.io/
GNU General Public License v3.0
331 stars 53 forks

Bayesian optimization with Random Forest #113

Open mikolajwojciuk opened 3 years ago

mikolajwojciuk commented 3 years ago

Hi there,

I am having a problem implementing Bayesian optimization with a Random Forest model. No matter how I set up the Sherpa study, I constantly get an error saying:

"InvalidConfigError: local_penalization evaluator can only be used with GP models"

My Sherpa config:

algorithm = sherpa.algorithms.GPyOpt(model_type='RF', acquisition_type='MPI', verbosity=True, max_num_trials=8)
study = sherpa.Study(parameters=parameters, algorithm=algorithm, lower_is_better=True, disable_dashboard=True)

P.S. Overall great library!

LarsHH commented 3 years ago

Hi @mikolajwojciuk !

Apologies for the slow reply. It looks like the issue is due to the evaluator_type in GPyOpt, i.e. how GPyOpt evaluates concurrent trials. I didn't realize RF doesn't work with local_penalization, which I had hardcoded as the evaluator type. Unless you have otherwise resolved the issue, could you try setting max_concurrent=1? That is, for your code:

algorithm = sherpa.algorithms.GPyOpt(model_type='RF', acquisition_type='MPI', verbosity=True, max_num_trials=8, max_concurrent=1)

With max_concurrent=1 there are no concurrent trials to evaluate, so GPyOpt should ignore the setting. If that doesn't work, you could try editing this line https://github.com/sherpa-ai/sherpa/blob/ff6466e99717983f9f394ba72b63f17343e32bdc/sherpa/algorithms/bayesian_optimization.py#L138 in your local installation and setting it to evaluator_type='random'.
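[Editor's note] The logic of the fix above can be sketched as a small standalone helper: pick the evaluator type from the model type and concurrency level instead of hardcoding local_penalization. This is a hypothetical illustration of the guard, not Sherpa's actual code; the function name and the fallback choices ('sequential' for one trial at a time, 'random' otherwise) are assumptions.

```python
def pick_evaluator_type(model_type, max_concurrent):
    """Choose a GPyOpt evaluator_type that is valid for the surrogate model.

    local_penalization is only valid for GP surrogates, which is what
    triggers the InvalidConfigError when model_type='RF' is combined
    with concurrent trials.
    """
    if max_concurrent <= 1:
        # No concurrent trials to coordinate: evaluate one suggestion at a time.
        return 'sequential'
    if model_type in ('GP', 'GP_MCMC'):
        # GP surrogates support penalized batch selection.
        return 'local_penalization'
    # Non-GP surrogates (e.g. RF) fall back to a batch strategy
    # that does not require a GP, here assumed to be 'random'.
    return 'random'
```

For the configuration in this issue, pick_evaluator_type('RF', 1) would return 'sequential', matching the max_concurrent=1 workaround suggested above.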

Thanks for raising the issue.

Best, Lars