christophM / interpretable-ml-book

Book about interpretable machine learning
https://christophm.github.io/interpretable-ml-book/

Global Explanations for SHAP #277

Closed ericluo04 closed 2 years ago

ericluo04 commented 3 years ago

Hello - hope all is well! After reading your fantastic book (I've learned so much! thank you!), I had two quick questions:

  1. You mention here that: "With SHAP, global interpretations are consistent with the local explanations, since the Shapley values are the 'atomic unit' of the global interpretations." I would love to hear you expand on this, especially since I couldn't find theoretical justifications for global explanations in Lundberg and Lee (2017). What does "atomic unit" mean? And is this a consequence of local accuracy and consistency?
  2. On a related note, I notice most implementations of SHAP rely on the average absolute SHAP value for feature importance. In a continuous prediction setting (e.g., forecasting temperature), the average SHAP value (without the absolute value) seems more intuitive. Do the same theoretical justifications apply here as well?
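To make the distinction in question 2 concrete, here is a minimal sketch of the two aggregations using plain NumPy on a hypothetical SHAP value matrix (shaped `(n_samples, n_features)`, as a SHAP explainer would return for a regression model); the data values are made up for illustration:

```python
import numpy as np

# Hypothetical (n_samples, n_features) matrix of per-instance SHAP values.
shap_values = np.array([
    [ 0.5, -0.2],
    [-0.5,  0.1],
    [ 0.5, -0.3],
])

# Standard global importance: mean of absolute SHAP values per feature.
importance_abs = np.abs(shap_values).mean(axis=0)

# Signed mean: opposite-signed effects cancel, which can understate
# a feature that moves predictions strongly in both directions.
importance_signed = shap_values.mean(axis=0)

print(importance_abs)     # [0.5, 0.2] -- feature 0 ranked most important
print(importance_signed)  # [~0.167, ~-0.133] -- feature 0's effects mostly cancel
```

This is why implementations default to the absolute value for *importance*: the signed mean measures average directional effect, not magnitude of influence, and a feature with large but sign-varying contributions would look unimportant.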
ericluo04 commented 2 years ago

In case it's helpful to others, I found a great theoretical treatment of global explanations by Ian Covert and the original SHAP authors here: https://arxiv.org/pdf/2004.00668.pdf. Reading this answered all of my questions.