scikit-learn-contrib / forest-confidence-interval

Confidence intervals for scikit-learn forest algorithms
http://contrib.scikit-learn.org/forest-confidence-interval/
MIT License

Can this package be adapted to perform Thompson sampling? #105

Closed douglasmason closed 4 months ago

douglasmason commented 3 years ago

I'm looking at using random forest regressors to perform hyperparameter tuning in a Bayesian optimization setup. While you can use the upper confidence bound to explore the state space, Thompson sampling performs better and eliminates the need to tune the hyper-hyperparameter of the confidence interval used for selection. One solution is to obtain an empirical Bayesian posterior by training many random forest regressors on bootstrapped data, but this seems like overkill (ensembles of ensembles!). I would appreciate any input on the subject, thank you! (For more discussion, see this review of using CART decision trees to pull off the goal: https://arxiv.org/pdf/1706.04687.pdf)

danieleongari commented 4 months ago

This package provides the confidence interval, not the acquisition function!
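
For anyone landing on this issue: below is a minimal sketch of how the variance estimates returned by forestci could be turned into a Thompson-style acquisition on the user's side, treating each candidate's prediction as Gaussian with the infinitesimal-jackknife variance. This is an approximation under that Gaussian assumption, not something the package implements, and it assumes the classic random_forest_error(forest, X_train, X_cand) call signature (newer releases may expect the training-set shape instead of the training data).

```python
# Sketch: approximate Thompson sampling on top of forestci's variance
# estimates, treating each candidate prediction as N(mu, var).
# forestci supplies the variance; the acquisition step below is user code.
import numpy as np
import forestci as fci
from sklearn.ensemble import RandomForestRegressor
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=200, n_features=5, noise=1.0, random_state=0)
X_train, X_cand, y_train, _ = train_test_split(X, y, random_state=0)

forest = RandomForestRegressor(n_estimators=500, random_state=0)
forest.fit(X_train, y_train)

# Point predictions and infinitesimal-jackknife variance for each candidate.
mu = forest.predict(X_cand)
var = fci.random_forest_error(forest, X_train, X_cand)

# Thompson sampling: draw one sample per candidate from N(mu, var)
# and pick the candidate whose sampled value is largest (maximization).
rng = np.random.default_rng(0)
samples = rng.normal(mu, np.sqrt(np.clip(var, 0.0, None)))
best = int(np.argmax(samples))
print("Selected candidate index:", best, "sampled value:", samples[best])
```

Whether this Gaussian treatment of the forest's per-point variance is a faithful posterior for Thompson sampling is a modeling judgment left to the user; it simply avoids the "ensembles of ensembles" bootstrap mentioned above.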