
# aGTBoost

Adaptive and automatic gradient tree boosting computations

aGTBoost is a lightning-fast gradient boosting library designed to avoid manual tuning and cross-validation by utilizing an information-theoretic approach. This makes the algorithm adaptive to the dataset at hand; it is completely automatic, with minimal risk of overfitting. Consequently, the speed-ups relative to state-of-the-art implementations are in the thousands, while the mathematical and technical knowledge required of the user is minimized.

Note: Currently for academic purposes: implementing and testing new innovations w.r.t. information-theoretic choices of GTB complexity. See the to-do research list below.

## Installation

R: Finally on CRAN! Install the stable version with

```r
install.packages("agtboost")
```

or install the development version from GitHub:

```r
devtools::install_github("Blunde1/agtboost/R-package")
```

Users experiencing errors after warnings during installation may be helped by running the following command prior to installation:

```r
Sys.setenv(R_REMOTES_NO_ERRORS_FROM_WARNINGS = "true")
```

## Example code and documentation

agtboost essentially has two functions: a train function `gbt.train` and a predict function `predict`. The code below shows how to train an aGTBoost model using a design matrix `x` and a response vector `y`; write `?gbt.train` in the console for detailed documentation.

```r
library(agtboost)

# -- Load data --
data(caravan.train, package = "agtboost")
data(caravan.test, package = "agtboost")
train <- caravan.train
test <- caravan.test

# -- Model building --
mod <- gbt.train(train$y, train$x, loss_function = "logloss", verbose = 10)

# -- Predictions --
prob <- predict(mod, test$x) # score after logistic transformation: probabilities
```
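
As a quick, illustrative sanity check (base R only, not part of the agtboost API), the predicted probabilities can be scored with the binary log-loss on the test set:

```r
# Binary log-loss of the predicted probabilities; lower is better.
eps <- 1e-15                        # guard against log(0)
p <- pmin(pmax(prob, eps), 1 - eps) # clamp probabilities away from 0 and 1
-mean(test$y * log(p) + (1 - test$y) * log(1 - p))
```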

agtboost also contains functions for model inspection and validation.

```r
# -- Model validation --
gbt.ksval(object = mod, y = caravan.test$y, x = caravan.test$x)
```
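
Feature importance can be inspected with `gbt.importance`; a minimal sketch, assuming the `feature_names`/`object` signature from the package documentation (`?gbt.importance`) and that the caravan design matrix carries column names:

```r
# -- Feature importance --
# Assumption: gbt.importance(feature_names, object) as in ?gbt.importance,
# with colnames(caravan.train$x) providing the feature labels.
gbt.importance(feature_names = colnames(caravan.train$x), object = mod)
```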


The functions `gbt.ksval` and `gbt.importance` create the following plots:
<img src="https://github.com/Blunde1/agtboost/raw/master/docs/img/agtboost_validation.png" width="700" height="300" />

Furthermore, an aGTBoost model (see the example code and the sketch below)

- is highly robust to dimensions: [Comparisons to (penalized) linear regression in (very) high dimensions](R-package/demo/high-dimensions.R)
- has minimal risk of overfitting: [Stock market classification](R-package/demo/stock-market-classification.R)
- can train further given previous models: [Boosting from a regularized linear model](R-package/demo/boost-from-predictions.R)
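
As an illustration of the last point, `gbt.train` exposes a `previous_pred` argument (see `?gbt.train` and the linked demo); a minimal sketch, under the assumption that `previous_pred` takes the training-set predictions of an earlier model as the starting point for further boosting:

```r
# Continued-learning sketch: boost a second model from the first one's predictions.
library(agtboost)
set.seed(1)
n <- 500
x <- as.matrix(runif(n, 0, 4))
y <- rnorm(n, mean = x * sin(x), sd = 0.3)

mod1 <- gbt.train(y, x, loss_function = "mse")  # initial model
pred1 <- predict(mod1, x)                       # its training predictions
mod2 <- gbt.train(y, x, loss_function = "mse",
                  previous_pred = pred1)        # continue boosting from mod1
```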

## Dependencies

- [My research](https://berentlunde.netlify.com/) 
- [Eigen](http://eigen.tuxfamily.org/index.php?title=Main_Page) Linear algebra
- [Rcpp](https://github.com/RcppCore/Rcpp) for the R-package

## Scheduled updates

- [x] Adaptive and automatic deterministic frequentist gradient tree boosting.
- [ ] Information criterion for fast histogram algorithm (non-exact search) (Fall 2020, planned)
- [ ] Adaptive L2-penalized gradient tree boosting. (Fall 2020, planned)
- [ ] Automatic stochastic gradient tree boosting. (Fall 2020/Spring 2021, planned)

## Hopeful updates

- Optimal stochastic gradient tree boosting.

## References
- [An information criterion for automatic gradient tree boosting](https://arxiv.org/abs/2008.05926)
- [agtboost: Adaptive and Automatic Gradient Tree Boosting Computations](https://arxiv.org/abs/2008.12625)

## Contribute

Any help on the following subjects is especially welcome:

- Utilizing sparsity (possibly Eigen sparsity).
- Parallelization (CPU and/or GPU).
- Distribution (Python, Java, Scala, ...).
- Good ideas and coding best practices in general.

Please note that the priority is to work on and push the above-mentioned scheduled updates. Patience is a virtue. :)