Closed: JoshuaC3 closed this issue 3 years ago.
Closed in favor of tracking this in #2302. We decided to keep all feature requests in one place.
You are welcome to contribute this feature! Please re-open this issue (or post a comment if you are not the topic starter) if you are actively working on implementing it.
Summary
Add an option so that the model selects features cyclically, instead of simply selecting them at random.
See here for the initial discussion of LightGBM and EBMs.
Motivation
Model explainability is becoming ever more important in the ML space. LightGBM could take advantage of some of the methods used by Explainable Boosting Machines (EBMs) to make models more interpretable. One of the defining features of EBMs is building shallow, single-feature trees (currently possible in LightGBM by tuning parameters). However, in EBMs these trees are boosted in a cyclic fashion.
So, for example, for a model with 3 features, the boosting rounds would cycle through the features in a fixed order: feature 1, feature 2, feature 3, then feature 1 again, and so on.
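The cyclic scheme above can be sketched as a minimal single-feature boosting loop. This is a plain NumPy illustration of the concept, not LightGBM's API; the function names (`fit_stump`, `cyclic_boost`) and parameter values are purely illustrative:

```python
import numpy as np

def fit_stump(x, residual):
    # Best single-feature split (depth-1 tree) minimizing squared error.
    best = (np.inf, None, residual.mean(), residual.mean())
    for thr in np.unique(x)[:-1]:  # thresholds that leave both sides non-empty
        left, right = residual[x <= thr], residual[x > thr]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best[0]:
            best = (sse, thr, left.mean(), right.mean())
    return best[1:]  # (threshold, left value, right value)

def cyclic_boost(X, y, n_rounds=30, lr=0.3):
    pred = np.full(len(y), y.mean())
    stumps = []
    for t in range(n_rounds):
        j = t % X.shape[1]  # cyclic feature selection: 0, 1, 2, 0, 1, 2, ...
        thr, left_val, right_val = fit_stump(X[:, j], y - pred)
        pred += lr * np.where(X[:, j] <= thr, left_val, right_val)
        stumps.append((j, thr, left_val, right_val))
    return pred, stumps
```

Because the feature index is `t % n_features` rather than a random draw, every feature is guaranteed an equal number of boosting rounds, which is the behavior this request asks for.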
This ensures the model gains information from collinear features that might be equally important. Compared to a small number of deeper trees, it is easy to end up with a bias (lots of gain) toward the first few features that happen to be randomly selected.
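For reference, the "tuning parameters" approximation of EBM-style trees mentioned above might look like the following. This is a sketch only: the values are illustrative, the `feature_fraction` value assumes a dataset with roughly 3 features, and the random per-tree feature sampling it produces is exactly what this request proposes to replace with a cyclic order:

```python
# Illustrative LightGBM parameters approximating EBM-style trees today:
# shallow trees, each built from a small random subset of features.
params = {
    "objective": "regression",
    "max_depth": 2,            # keep trees shallow, as EBMs do
    "num_leaves": 4,
    "learning_rate": 0.05,
    "feature_fraction": 0.34,  # ~1 of 3 features sampled per tree, but at random
}
```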
Description
The feature would be used to make LightGBM more interpretable and its results more comparable to EBMs. This would allow users to make informed decisions on interpretability vs. model performance.
Additionally, in certain cases, the model may be more robust at inference time if collinear features are missing.
References
A great conceptual video explanation.
InterpretML: A Unified Framework for Machine Learning Interpretability
InterpretML: A toolkit for understanding machine learning models
InterpretML's Explainable Boosting Machine