Quickly build Explainable AI dashboards that show the inner workings of so-called "black-box" machine learning models.
skorch models raising The SHAP explanations do not sum up to the model's output #289
Open
oegedijk opened 11 months ago
Seems related to this bug: https://github.com/shap/shap/issues/3363
Can be avoided by passing `shap_kwargs=dict(check_additivity=False)` to the explainer, but then you might get inaccurate SHAP values. Added a `check_additivity=False` param to the skorch tests for now.