-
### Describe the workflow you want to enable
I am part of the @neurodata team. Binning features has proved highly efficient with little loss of performance in gradient-boosted trees. This feat…
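The binning idea above can be sketched briefly. This is a minimal illustration of quantile-based feature binning, assuming a scheme in the spirit of histogram-based gradient boosting (the function name and bin count are illustrative, not a specific library's implementation):

```python
import numpy as np

def bin_feature(values, n_bins=256):
    """Map continuous values to small integer bin indices via quantile edges."""
    # Interior quantiles as edges, so each bin holds roughly equal mass.
    quantiles = np.linspace(0, 1, n_bins + 1)[1:-1]
    edges = np.quantile(values, quantiles)
    # searchsorted assigns each value the index of its bin (0 .. n_bins - 1).
    return np.searchsorted(edges, values, side="right").astype(np.uint8)

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
binned = bin_feature(x, n_bins=256)
# A tree split on this feature now only needs to consider at most 255
# candidate thresholds instead of one per unique value.
```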
-
#### Description
To enhance the performance of the flight delay prediction model, we should explore the use of more advanced machine learning models and perform hyperparameter tuning. The following s…
-
keyword: `Gradient Boosting Decision Trees`
-
### Current situation
We are in the unfortunate situation of having two different versions of gradient boosting: the old estimators ([`GradientBoostingClassifier`](https://scikit-learn.org/stable/modules/g…
-
# Tweet summary
(GBM) Separate tree-based parameters from boosting parameters
(XGB/LGBM/Cat) Add regularization parameters
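The separation suggested above can be made concrete for scikit-learn's GBM. The parameter names below are real; the grouping into two dictionaries is the point being illustrated, and the values are arbitrary:

```python
from sklearn.ensemble import GradientBoostingRegressor

# Parameters that shape each individual tree.
tree_params = {"max_depth": 3, "min_samples_leaf": 5, "max_features": "sqrt"}

# Parameters that control the boosting procedure itself.
boosting_params = {"n_estimators": 200, "learning_rate": 0.05, "subsample": 0.8}

model = GradientBoostingRegressor(**tree_params, **boosting_params)
```

Keeping the two groups apart makes it easier to tune tree complexity first and then trade off `n_estimators` against `learning_rate`.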
# Useful link
https://www.analyticsvidhya.com/blog/2016/02/complete-guid…
-
1. How to Explain Gradient Boosting
* https://explained.ai/gradient-boosting/index.html
* check out `Recommended Readings`
1. Towards Data Science - Boosting Algorithms by SauceCat
* h…
-
Hi, I have recently been conducting some numerical experiments on a new tree-boosting trick. I designed a simple example for robust regression, in which three data points are involved: (1.2, 1.5, 2.…
-
ML
---
1. Decision tree
2. Random forest (regression)
3. Random forest (classification)
4. Linear (regression)
5. Linear (classification)
6. Gradient Boosting
DL
---
1. KNN
-
I have time-series data to which I need to fit a classifier, and I would like to re-train it every month as new data comes in. To keep some consistency, I would prefer to pre_warm the tree with …
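One way to get this behavior, assuming the classifier in question is scikit-learn's `GradientBoostingClassifier` (other libraries expose different continuation mechanisms), is the `warm_start` flag: keep the trees already fitted and grow additional ones on the latest batch.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Two synthetic monthly batches standing in for the real time-series data.
X1, y1 = make_classification(n_samples=200, random_state=0)  # month 1
X2, y2 = make_classification(n_samples=200, random_state=1)  # month 2

clf = GradientBoostingClassifier(n_estimators=50, warm_start=True, random_state=0)
clf.fit(X1, y1)

# Next month: raise n_estimators and refit; existing trees are kept and
# only the 50 new ones are trained on the new batch.
clf.set_params(n_estimators=100)
clf.fit(X2, y2)
```

Note that the early trees were still fitted on old data, so this keeps consistency at the cost of the ensemble slowly adapting to drift.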
-
Currently, each model that needs losses defines its own; it might be useful to put them all in one place to see whether anything could be re-used in the future.
In particular, losses are defined in…
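A shared loss module might look like the sketch below. The class name and the loss/gradient/hessian interface are assumptions for illustration, not scikit-learn's actual API:

```python
import numpy as np

class SquaredError:
    """Half squared error: loss = 0.5 * (y_pred - y_true)**2.

    The 0.5 factor makes the gradient simply (y_pred - y_true)
    and the hessian a constant 1, which boosting code can exploit.
    """

    def loss(self, y_true, y_pred):
        return 0.5 * (y_pred - y_true) ** 2

    def gradient(self, y_true, y_pred):
        return y_pred - y_true

    def hessian(self, y_true, y_pred):
        return np.ones_like(y_pred)

y_true = np.array([1.0, 2.0])
y_pred = np.array([1.5, 1.0])
sq = SquaredError()
```

With every model importing from one module like this, duplicated gradient formulas across estimators could be spotted and removed.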