-
Hi there!
First of all, thank you so much to everyone who has contributed to this incredible library!
I've been using SHAP for a while and I'm a big advocate of additive explanations for model i…
-
Has there been any change to the waterfall plot?
Previously this was the syntax:
```
shap.waterfall_plot(expected_values, shap_values[row_index], data.iloc[row_index], max_display=max_features)
```
Now…
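For comparison, here is a minimal sketch of the newer `Explanation`-based calling convention; the model, dataset, and variable names below are placeholders for illustration, not taken from the original question:
```python
import shap
import xgboost
from sklearn.datasets import fetch_california_housing

# Placeholder model and data, purely for illustration.
X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = xgboost.XGBRegressor(n_estimators=50).fit(X, y)

# Calling the explainer returns an Explanation object that bundles the
# SHAP values, the base (expected) value, and the feature data.
explainer = shap.TreeExplainer(model)
explanation = explainer(X)

row_index = 0
max_features = 10

# The newer plotting entry point takes a single Explanation row instead of
# separate expected_values / shap_values / data arguments.
shap.plots.waterfall(explanation[row_index], max_display=max_features)
```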
-
### Issue Description
I used a Random Survival Forest with 10 estimators and a max depth of 25 on approximately 1,800 data samples. The full dataset, by comparison, contains approximately 200,000 data sam…
-
## Overview
As per the discussion on #2893, it looks like the shape of `shap_values` returned by TreeExplainer is inconsistent. It has:
- shape `(n_samples, n_features)` if the background dataset is passed…
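For reference, a minimal sketch (the model and data are illustrative placeholders) that prints the returned shape with and without a background dataset, which makes the inconsistency easy to check on a concrete model:
```python
import numpy as np
import shap
import xgboost
from sklearn.datasets import make_classification

# Illustrative model and data, not from the original report.
X, y = make_classification(n_samples=200, n_features=8, random_state=0)
model = xgboost.XGBClassifier(n_estimators=20).fit(X, y)
background = X[:100]

# Case 1: background dataset passed (interventional feature perturbation).
explainer_bg = shap.TreeExplainer(model, data=background)
values_bg = explainer_bg.shap_values(X)

# Case 2: no background dataset (tree-path-dependent perturbation).
explainer_nobg = shap.TreeExplainer(model)
values_nobg = explainer_nobg.shap_values(X)

# Comparing the two results shows whether an extra axis (or a list of
# per-class arrays) appears in one case but not the other.
print("with background:   ", np.shape(values_bg))
print("without background:", np.shape(values_nobg))
```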
-
Is there a way to calculate the SHAP interaction values for neural network models?
-
catboost version: 1.0.6
Operating System: Linux
CPU: Y
Problem:
CatBoost cannot compute SHAP values if the model has a scale and bias set via the `set_scale_and_bias` function.
```
import ca…
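# --- Hedged reproduction sketch (separate from the truncated snippet above) ---
# The model, data, and parameter values here are illustrative placeholders;
# this only demonstrates the reported combination of set_scale_and_bias()
# followed by a SHAP value computation.
import numpy as np
import shap
from catboost import CatBoostRegressor

X = np.random.rand(200, 5)
y = np.random.rand(200)

model = CatBoostRegressor(iterations=20, verbose=False)
model.fit(X, y)

# Setting a custom scale and bias is the step reported to break things.
model.set_scale_and_bias(2.0, 1.0)

# Computing SHAP values after the call above is where the reported failure occurs.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print(np.shape(shap_values))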
-
### Issue Description
I trained a residual convolutional neural network based on the PyTorch framework, and I got the error:
> `AssertionError: The SHAP explanations do not sum up to the model's…
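For context, a minimal sketch of the kind of call path in which this additivity assertion is typically raised, assuming `shap.DeepExplainer` is the explainer in use; the toy model and data below are placeholders, not the reporter's network:
```python
import torch
import torch.nn as nn
import shap

# Toy stand-in for a residual CNN, purely for illustration.
class TinyResidualNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 4, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(4, 4, kernel_size=3, padding=1)
        self.head = nn.Linear(4 * 8 * 8, 2)

    def forward(self, x):
        h = torch.relu(self.conv1(x))
        h = torch.relu(self.conv2(h)) + h  # simple residual (skip) connection
        return self.head(h.flatten(1))

model = TinyResidualNet().eval()
background = torch.randn(16, 1, 8, 8)
samples = torch.randn(4, 1, 8, 8)

explainer = shap.DeepExplainer(model, background)

# check_additivity=True (the default) is the step that raises the
# "explanations do not sum up to the model's output" assertion when the
# reconstructed attributions drift from the actual model output.
shap_values = explainer.shap_values(samples, check_additivity=True)
```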
-
## Möbius Interactions
$$
\Phi_n(S) := \sum_{T \subseteq S} (-1)^{|S|-|T|}\nu(T)
$$
Where:
| Symbol | Description |
| --- | --- |
| $\Phi_n(S)$ | Möbius Interaction for subset S |
| $S$…
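As a concrete illustration of the inclusion–exclusion sum above, here is a small sketch that computes $\Phi_n(S)$ for every subset of a toy player set; the value function `nu` and the helper names are made up for the example:
```python
from itertools import chain, combinations

def powerset(items):
    """All subsets of `items`, as tuples."""
    items = list(items)
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

def mobius_interaction(S, nu):
    """Phi_n(S) = sum over T subset of S of (-1)^(|S|-|T|) * nu(T)."""
    S = frozenset(S)
    return sum((-1) ** (len(S) - len(T)) * nu(frozenset(T)) for T in powerset(S))

# Toy value function on subsets of {0, 1, 2}; purely illustrative.
values = {
    frozenset(): 0.0,
    frozenset({0}): 1.0,
    frozenset({1}): 2.0,
    frozenset({2}): 0.5,
    frozenset({0, 1}): 4.0,   # larger than 1.0 + 2.0 -> positive pairwise interaction
    frozenset({0, 2}): 1.5,
    frozenset({1, 2}): 2.5,
    frozenset({0, 1, 2}): 5.0,
}
nu = values.__getitem__

for S in powerset({0, 1, 2}):
    print(set(S) or "{}", "->", round(mobius_interaction(S, nu), 3))
```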
-
Dear Developers:
`shap.GPUTreeExplainer` can be used on Ubuntu, but I could find little information about running it on Windows.
My code is:
```
explainerGPU = shap.GPUTreeExplainer(mod…
-
Hi, I'm explaining models loaded into Python from Weka, and I was trying to use `summary_plot` with the `shap_interaction_values`, but when I try to do it:
shap_interaction_values = explainer.shap_inter…
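For reference, a minimal sketch of how interaction values are usually obtained and passed to the summary plot, using a placeholder tree model rather than the Weka-loaded one from the report:
```python
import shap
import xgboost
from sklearn.datasets import make_regression

# Placeholder model and data, purely for illustration.
X, y = make_regression(n_samples=300, n_features=6, random_state=0)
model = xgboost.XGBRegressor(n_estimators=30).fit(X, y)

explainer = shap.TreeExplainer(model)

# Interaction values have shape (n_samples, n_features, n_features).
shap_interaction_values = explainer.shap_interaction_values(X)

# summary_plot recognises the 3-D interaction array and draws the pairwise
# interaction summary instead of the usual beeswarm plot.
shap.summary_plot(shap_interaction_values, X)
```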