Dear sir,
The book Interpretable Machine Learning is really interesting! But I'm confused about an equation in Section 9.6.5, SHAP Feature Importance.
The book says to 'average the absolute Shapley values per feature across the data'.
I understand ϕ_j^(i) to be the Shapley value, i.e., the feature attribution of feature j for instance i,
and I_j to be the feature importance of feature j.
So why doesn't the equation look like this:
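That is, something like the following (my own way of writing it, with n denoting the number of data instances):

    I_j = \frac{1}{n} \sum_{i=1}^{n} \left| \phi_j^{(i)} \right|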
Please correct me if I am wrong, and forgive my poor English.
Thank you for reading; I'm looking forward to hearing from you.
Lagoon