rasbt / machine-learning-notes

Collection of useful machine learning codes and snippets (originally intended for my personal use)
BSD 3-Clause "New" or "Revised" License

04.2-optuna-xgboost-example.ipynb use of optuna #1

Closed ibozkurt79 closed 2 years ago

ibozkurt79 commented 2 years ago

For the Bayesian search below, `suggest_discrete_uniform(name, low, high, q)` would be a better choice for `learning_rate`, since it searches the whole low-to-high range in step increments of `q`:

```python
params = {
    "n_estimators": trial.suggest_categorical("n_estimators", [30, 50, 100, 300]),
    "learning_rate": trial.suggest_categorical("learning_rate", [0.01]),
    "lambda": trial.suggest_loguniform("lambda", 1e-8, 1.0),
    "alpha": trial.suggest_loguniform("alpha", 1e-8, 1.0),
}
```
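To make the suggestion concrete, here is the discrete grid that a call such as `trial.suggest_discrete_uniform("learning_rate", low, high, q)` samples from, computed in plain Python (the bounds and step size here are illustrative, not values from the notebook):

```python
# Candidate values for a hypothetical call like
#   trial.suggest_discrete_uniform("learning_rate", 0.01, 0.31, 0.05)
# Optuna samples from the grid low, low+q, low+2q, ..., up to high.
low, high, q = 0.01, 0.31, 0.05           # illustrative bounds and step
n_steps = round((high - low) / q)          # number of q-sized increments
grid = [round(low + i * q, 2) for i in range(n_steps + 1)]
print(grid)  # [0.01, 0.06, 0.11, 0.16, 0.21, 0.26, 0.31]
```

This is what the sampler explores instead of the single fixed value `0.01` from the `suggest_categorical` call above.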

rasbt commented 2 years ago

Thanks! Wouldn't it be even better to use a discrete log-uniform one?
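One common way to get a discrete *log*-uniform space is to sample an integer exponent uniformly and map it through a power of 10 (a sketch; the parameter name `lr_exp` and the exponent range are made up for illustration, and in Optuna this would be something like `e = trial.suggest_int("lr_exp", -3, 0)` followed by `lr = 10.0 ** e`):

```python
# Discrete log-uniform grid via an integer exponent:
# each candidate is a factor of 10 apart, so the search is
# uniform on the log scale but still discrete.
exponents = range(-3, 1)                  # -3, -2, -1, 0 (illustrative range)
lr_grid = [10.0 ** e for e in exponents]  # 0.001, 0.01, 0.1, 1.0
print(lr_grid)
```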

ibozkurt79 commented 2 years ago

Exactly. It would take a little more time to search, but it would be more accurate in a Bayesian sense.

rasbt commented 2 years ago

However, in practice, tuning the learning rate in gradient boosting is also discouraged, which is why I would maybe rather leave it as is so that people don't try to tune it extensively and maybe only use 2 or 3 options at most. Tricky.

ibozkurt79 commented 2 years ago

Agree. I always saw notebooks searching only 2-3 learning rates but didn't know the reason behind it. A great learning experience for me.