-
Both XGBoost and LightGBM support a "Random Forest" mode.
In XGBoost you set the `num_parallel_tree` parameter to the number of trees and train for a single round (`nrounds=1` in the R package) (https://xgboost.readthedocs.io/en/latest/R-package/discoverYourDat…
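A minimal sketch of what such a parameter set might look like for the Python `xgboost` API (the exact values here are illustrative assumptions, not recommendations from the linked docs):

```python
# Sketch: random-forest-style parameters for XGBoost's Python API.
# The single boosting round corresponds to `nrounds = 1` in the R package.
rf_params = {
    "num_parallel_tree": 100,        # grow 100 trees in the one and only round
    "subsample": 0.8,                # row subsampling, as in a random forest
    "colsample_bynode": 0.8,         # feature subsampling per split
    "learning_rate": 1.0,            # no shrinkage: trees are averaged, not boosted
    "objective": "binary:logistic",
}
# Hypothetical usage with a prepared DMatrix `dtrain`:
#   booster = xgb.train(rf_params, dtrain, num_boost_round=1)
```

With `learning_rate` left at 1.0 and only one round, the parallel trees are averaged rather than sequentially boosted, which is what makes this behave like a random forest.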
-
## Description
After training a model with the CLI I get a txt file containing the model. Is it possible to provide more information about the structure of this txt file?
![image](https://user-images.githubuser…
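For orientation, the text dump roughly follows a key=value header plus one `Tree=<i>` section per tree. The field names in this sketch are assumptions from inspecting dumps by hand, so verify them against the file your own LightGBM version produces:

```python
# Minimal sketch of the layout seen in LightGBM text model files:
# a header of key=value lines, then per-tree sections started by "Tree=<i>"
# whose values are space-separated arrays. Field names are assumptions.
SAMPLE = """\
tree
version=v3
num_class=1
max_feature_idx=3
objective=binary
feature_names=f0 f1 f2 f3

Tree=0
num_leaves=3
split_feature=0 2
threshold=0.5 1.25
leaf_value=0.1 -0.2 0.3
"""

def parse_model_txt(text):
    header, trees, current = {}, [], None
    for line in text.splitlines():
        line = line.strip()
        if not line or line == "tree":        # "tree" just marks the file type
            continue
        if line.startswith("Tree="):
            current = {"index": int(line.split("=", 1)[1])}
            trees.append(current)
            continue
        key, _, value = line.partition("=")
        (current if current is not None else header)[key] = value
    return header, trees

header, trees = parse_model_txt(SAMPLE)
```

Everything stays a string here; a real parser would split the per-tree arrays on whitespace and cast them to numbers.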
-
There is a recent paper entitled "Feature Importance in Gradient Boosting Trees with Cross-Validation Feature Selection" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9140774/) that attempts to address…
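The general idea of cross-validated importance can be sketched generically: fit on each fold's training split and average the per-fold importance scores. This is not the paper's actual procedure, and the stand-in "model" below just scores features by absolute covariance with the label:

```python
import random

# Generic sketch: average per-fold feature importances over k CV folds.
# The "importance" function is a crude stand-in model, not the paper's method.
def fold_indices(n, k):
    idx = list(range(n))
    random.Random(0).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def importance(X, y):
    # |covariance with the label| per feature, as a toy importance score
    n, d = len(X), len(X[0])
    my = sum(y) / n
    scores = []
    for j in range(d):
        mx = sum(row[j] for row in X) / n
        cov = sum((row[j] - mx) * (yi - my) for row, yi in zip(X, y)) / n
        scores.append(abs(cov))
    return scores

def cv_importance(X, y, k=5):
    folds = fold_indices(len(X), k)
    totals = [0.0] * len(X[0])
    for held_out in folds:
        held = set(held_out)
        tr = [i for i in range(len(X)) if i not in held]
        scores = importance([X[i] for i in tr], [y[i] for i in tr])
        totals = [t + s for t, s in zip(totals, scores)]
    return [t / k for t in totals]

# Toy data: feature 0 drives the label, feature 1 is pure noise.
rng = random.Random(1)
X = [[rng.random(), rng.random()] for _ in range(200)]
y = [1.0 if row[0] > 0.5 else 0.0 for row in X]
imp = cv_importance(X, y)
```

Averaging over folds is what reduces the variance of the score; the paper layers feature selection on top of this basic loop.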
-
Problem: CatBoost is very slow on small numeric datasets
catboost version: 0.22
Operating System: Linux
CPU: Intel(R) Core(TM) i5-6200U CPU @ 2.30GHz
The issue is that I am getting very slow times…
-
Hi,
for data protection reasons I want to make sure that splitting stops when a minimum number of observations in one node is reached. In other tree boosting implementations there is a parameter to…
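For reference, a hedged map of the parameters that serve this role in the common libraries; note that XGBoost's `min_child_weight` limits the sum of hessians in a node, which equals a raw observation count only for squared-error loss:

```python
# Hedged map of "minimum observations per leaf" parameters across libraries.
MIN_LEAF_PARAMS = {
    "lightgbm": "min_data_in_leaf",    # alias min_child_samples in the sklearn API
    "sklearn":  "min_samples_leaf",
    "xgboost":  "min_child_weight",    # hessian sum, not an exact row count
    "catboost": "min_data_in_leaf",    # applies to Depthwise/Lossguide growth
}
```

For a hard guarantee on node sizes (e.g. k-anonymity-style constraints), the count-based parameters are the safer choice, since the hessian-based one is loss-dependent.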
-
Fit clones of an estimator on each block / partition, and ensemble the results somehow http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.VotingClassifier.html, http://scikit-learn.org/…
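The pattern can be sketched without scikit-learn: deep-copy one base estimator per block, fit each copy on its block, and combine by majority vote (a hand-rolled stand-in for `VotingClassifier` with `voting="hard"`; the toy estimator here is an assumption for illustration):

```python
import copy

# Sketch: fit one clone of a base estimator per data block, then ensemble
# the fitted clones by hard majority vote.
class ThresholdClassifier:
    """Toy 1-D estimator: predicts 1 above a cut between the classes."""
    def fit(self, X, y):
        ones = [x for x, yi in zip(X, y) if yi == 1]
        zeros = [x for x, yi in zip(X, y) if yi == 0]
        self.cut = (min(ones) + max(zeros)) / 2 if ones and zeros else 0.5
        return self

    def predict(self, X):
        return [1 if x > self.cut else 0 for x in X]

def fit_on_blocks(base, blocks):
    # deepcopy plays the role of sklearn.base.clone for this sketch
    return [copy.deepcopy(base).fit(X, y) for X, y in blocks]

def vote(models, X):
    preds = [m.predict(X) for m in models]
    return [1 if sum(col) * 2 > len(models) else 0 for col in zip(*preds)]

blocks = [
    ([0.1, 0.2, 0.8, 0.9], [0, 0, 1, 1]),   # partition 1
    ([0.0, 0.3, 0.7, 1.0], [0, 0, 1, 1]),   # partition 2
]
models = fit_on_blocks(ThresholdClassifier(), blocks)
yhat = vote(models, [0.05, 0.95])            # majority vote across clones
```

With real estimators one would swap `deepcopy` for `sklearn.base.clone` and the vote for `VotingClassifier`, but the data flow is the same.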
-
- [ ] [Release 2.0.0 stable · dmlc/xgboost](https://github.com/dmlc/xgboost/releases/tag/v2.0.0)
# Release 2.0.0 stable · dmlc/xgboost
## Snippet
We are excited to announce the rele…
-
I have created the following LightGBM model but I can't get it to work with the TreeExplainer when I use `model_output = "probability"`.
![image](https://user-images.githubusercontent.com/30757857/53…
-
This issue is a follow-up to PR #20058
## Background
We are aware that our current implementation of mean decrease in impurity is biased:
- it uses statistics from the training set (issue …
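The training-set part of the bias can be shown with a toy experiment (my own illustration, not scikit-learn code): pick the best split of a pure-noise feature on a training sample, then re-evaluate that same split on fresh data.

```python
import random

# Toy illustration: impurity decrease measured on the training set is
# optimistic. We choose the best split of a noise feature on training data,
# then re-evaluate the same threshold on held-out data.
def gini(y):
    if not y:
        return 0.0
    p = sum(y) / len(y)
    return 2 * p * (1 - p)

def decrease(x, y, t):
    # Gini impurity decrease of splitting (x, y) at threshold t
    left = [yi for xi, yi in zip(x, y) if xi <= t]
    right = [yi for xi, yi in zip(x, y) if xi > t]
    return gini(y) - (len(left) * gini(left) + len(right) * gini(right)) / len(y)

rng = random.Random(0)
x_tr = [rng.random() for _ in range(50)]
y_tr = [rng.randint(0, 1) for _ in range(50)]      # label independent of x
best_t = max(x_tr, key=lambda t: decrease(x_tr, y_tr, t))
train_gain = decrease(x_tr, y_tr, best_t)          # positive, by overfitting

x_te = [rng.random() for _ in range(5000)]
y_te = [rng.randint(0, 1) for _ in range(5000)]
test_gain = decrease(x_te, y_te, best_t)           # near zero on fresh data
```

Even though the feature carries no signal, maximizing the decrease over training thresholds yields a strictly positive "importance", which vanishes on held-out data.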
-
In the near future, hep_ml will be cleaned up and integrated with `REP`.
## Prospect
`hep_ml` will be an extension of scikit-learn, which will follow its module structure and interface (with some cor…