-
See https://github.com/christophM/interpretable-ml-book/issues/56
-
### Description
I am unable to make sense of this traceback.
#### Code to reproduce
```python
import shapely.geometry as sgeom
```
#### Traceback
A problem occurred in a Python script. Here is…
-
Hello,
I am trying to understand how the SHAP equation is derived, in an intuitive way.
![default](https://user-images.githubusercontent.com/2246216/41510359-03e29c86-7218-11e8-8a10-5d30e6d8f1e0.PNG)
…
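For anyone looking for the starting point of that derivation: the classical Shapley value formula that SHAP builds on is (standard game-theory notation, not taken from the screenshot above):

$$
\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,\big(|N| - |S| - 1\big)!}{|N|!}\,\Big(v\big(S \cup \{i\}\big) - v(S)\Big)
$$

where $N$ is the set of all features and $v(S)$ is the model's expected output given only the features in coalition $S$. The fraction is the probability that, in a uniformly random ordering of the features, exactly the members of $S$ precede feature $i$.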
-
Hi! I've got an observation and am curious whether the developers have any thoughts.
For tree-based binary classification problems, I've noticed that when computing Shapley values for XGB, the values repres…
-
When using xgboost to train a model, the algorithm knows to treat missing values differently, assigning them to a dedicated branch at each split.
My question is, how does SHAP handle missing val…
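For context, the split behavior being described can be sketched in a few lines. This is a toy illustration of the default-direction idea (xgboost's sparsity-aware split finding), not xgboost's actual implementation; the function name and signature are made up for the example:

```python
import math

def split(x, threshold, default_left):
    """Route a value at a single split node.

    Missing values (None or NaN) follow the learned default direction,
    mirroring in spirit how xgboost assigns missing values to one branch.
    """
    if x is None or (isinstance(x, float) and math.isnan(x)):
        return "left" if default_left else "right"
    return "left" if x < threshold else "right"

print(split(0.3, 0.5, default_left=False))          # → left
print(split(float("nan"), 0.5, default_left=False))  # → right
```

How SHAP's TreeExplainer interacts with these default directions when traversing the tree is the open question here.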
-
I have a question about SHAP.
After I train a model, e.g. a neural network, on my dataset, I use SHAP to evaluate it.
Does SHAP use the same idea as the calculation of the Shapley value?
What I mean …
-
They are hand-drawn and I am not super happy with them.
Help improving them is appreciated.
The location in the book: https://github.com/christophM/interpretable-ml-book/blob/master/chapters/…
-
Without simply enumerating all the combinations from the feature set, what is the logic you have used with num_subset_sizes, num_paired_subset_sizes, etc. to generate new samples in Kernel SHAP?…
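A sketch of the subset-size bookkeeping as I understand it from the KernelExplainer source (the variable names match the question; treat the details as an assumption, not a definitive account of the library): coalition sizes s and M − s receive the same Shapley kernel weight, so complementary sizes are enumerated as pairs and only about half the sizes need separate treatment:

```python
from math import ceil, floor

def kernel_shap_size_weights(M):
    """Group coalition sizes for M features the way Kernel SHAP does.

    Returns the number of distinct subset sizes, the number of those
    that are 'paired' with their complement, and the normalized kernel
    weight assigned to each size.
    """
    num_subset_sizes = ceil((M - 1) / 2)
    num_paired_subset_sizes = floor((M - 1) / 2)
    # Shapley kernel weight for a coalition of size s, up to normalization.
    weights = [(M - 1) / (s * (M - s)) for s in range(1, num_subset_sizes + 1)]
    # Sizes with a distinct complement (s != M - s) cover two sizes at once.
    for s in range(num_paired_subset_sizes):
        weights[s] *= 2
    total = sum(weights)
    return num_subset_sizes, num_paired_subset_sizes, [w / total for w in weights]

print(kernel_shap_size_weights(4))
```

Samples are then allocated to sizes in this weight order, filling small (high-weight) sizes completely before falling back to random sampling for the rest.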
-
Hi,
I've noticed that all the sample notebooks appear to require training data to ascertain base values. But say all I've got is a black box model. I don't know its architecture and I don't have ac…
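One relevant point: the base value is just the expected model output over some background dataset, so any representative sample of inputs suffices, even for a true black box. A minimal sketch, with a hypothetical stand-in for the opaque model:

```python
import statistics

# Hypothetical black-box model: we only get predictions, nothing else.
def black_box(x):
    return 3.0 * x[0] - 2.0 * x[1] + 1.0

# Any representative sample of inputs can serve as the background set;
# the base value is the mean model output over it.
background = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.5)]
base_value = statistics.mean(black_box(x) for x in background)
print(base_value)  # → 3.0
```

The open question is what makes a *representative* background set when no training data is available at all.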
-
Hi,
Sometimes if I add a new 'baselayer', the 'dropdown' option to select the second baselayer does not appear. The same can happen with the 'Locating' dropdown list. It will sometimes appear with n…