At https://eli5.readthedocs.io/en/latest/libraries/lightgbm.html#library-lightgbm it says:
For eli5.explain_prediction() eli5 uses an approach based on ideas from http://blog.datadive.net/interpreting-random-forests/ : feature weights are calculated by following decision paths in trees of an ensemble. Each node of the tree has an output score, and contribution of a feature on the decision path is how much the score changes from parent to child.
To better understand how this method differs from other explanation methods, such as Shapley values, I would like a more detailed description. Is one available somewhere?
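To check my understanding, here is a minimal sketch of how I read that description, using a single scikit-learn regression tree. This is my own reconstruction, not eli5's actual implementation, and the variable names are mine:

```python
# Sketch of the decision-path attribution described in the eli5 docs,
# applied to one tree (my own reconstruction, not eli5's code).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=200, n_features=4, random_state=0)
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)

x = X[0].reshape(1, -1)
t = tree.tree_
node_value = t.value[:, 0, 0]          # output score (mean target) at each node
path = tree.decision_path(x).indices   # node ids from root to leaf for x

bias = node_value[path[0]]             # root score; eli5 reports this as <BIAS>
contributions = np.zeros(X.shape[1])
for parent, child in zip(path[:-1], path[1:]):
    # the feature tested at the parent node gets credit for the score change
    contributions[t.feature[parent]] += node_value[child] - node_value[parent]

print("bias:", bias)
print("contributions:", contributions)
print("bias + contributions:", bias + contributions.sum())
print("prediction:", tree.predict(x)[0])   # the two should match
```

If I read the docs correctly, eli5 does something like this per tree and sums over the ensemble, with the root term showing up as the `<BIAS>` feature in the output, but I may be wrong about the details.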