-
@CESARDELATORRE
There's no integration with AutoML interpretability yet for the TEST dataset; interpretability is done only for the TRAIN dataset. My customers want to check Shapley values with predicted va…
-
Metrics
* Accuracy
* Balanced accuracy
Tools for explainability:
* printout of tree decisions (limiting the depth keeps it readable; see the sketch below)
* software: LIME
* software: SHAP
* software: PDPbox
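
As a minimal sketch of the first tool (printing a tree's decision rules with a limited depth), here is an illustration using scikit-learn's `export_text` on a placeholder iris model; the dataset and hyperparameters are only assumptions for the example:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Placeholder data and model, used only to illustrate the technique
data = load_iris()
tree = DecisionTreeClassifier(max_depth=6, random_state=0).fit(data.data, data.target)

# Print the tree's decision rules, limiting the printed depth so the
# output stays readable (deep trees quickly become unprintable).
print(export_text(tree, feature_names=list(data.feature_names), max_depth=3))
```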
-
Hi, and thanks for the great work! I am having trouble understanding what the SHAP values for `model_output=1` represent. Here is a sample notebook:
https://github.com/AliSamiiXOM/ngboost_question/…
-
>PDTE approximates the interventional conditional expectation based on how many training samples went down paths in the tree, whereas ITE computes it exactly.
Hi, @HughChen , could you please provi…
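
For reference, the two estimators discussed (path-dependent vs. interventional TreeExplainer) map onto the `feature_perturbation` argument of `shap.TreeExplainer`; below is a minimal sketch with a placeholder XGBoost model and synthetic data, only to show how each variant is constructed:

```python
import shap
import xgboost
from sklearn.datasets import make_regression

# Placeholder model and data for illustration
X, y = make_regression(n_samples=200, n_features=5, random_state=0)
model = xgboost.XGBRegressor(n_estimators=50).fit(X, y)

# PDTE: no background data; conditional expectations are approximated
# from the training-sample counts stored in the tree nodes.
pdte = shap.TreeExplainer(model, feature_perturbation="tree_path_dependent")

# ITE: a background dataset is required; the interventional conditional
# expectation is computed exactly over that background set.
ite = shap.TreeExplainer(model, data=X[:100], feature_perturbation="interventional")

pdte_values = pdte.shap_values(X)
ite_values = ite.shap_values(X)
```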
-
It states that it provides "An efficient implementation of the permutation feature importance algorithm discussed in this chapter from Christoph Molnar’s Interpretable Machine Learning book."
If so, one …
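
For context, the referenced algorithm (permutation feature importance, as described in Molnar's book) can be sketched with scikit-learn's `permutation_importance`; this is not the package's own implementation, just an illustration on placeholder data:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Placeholder data and model for illustration
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time on held-out data
# and record how much the score drops; repeat to average out noise.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
print(result.importances_mean)
```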
-
More of a question, but all my SHAP values are positive. Am I reading the values correctly for the given model? Is there a better way to get the class 1 SHAP values for a LightGBM model?
```
import shap…
```
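
A minimal sketch of one way to read per-class SHAP values from a LightGBM binary classifier (placeholder data and model, not the poster's); note that the return shape of `shap_values` differs across shap versions:

```python
import numpy as np
import shap
import lightgbm as lgb
from sklearn.datasets import load_breast_cancer

# Placeholder data and model for illustration
X, y = load_breast_cancer(return_X_y=True)
model = lgb.LGBMClassifier(n_estimators=100).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Depending on the shap version, a binary classifier yields either a list
# [class_0, class_1] or a single array for the positive class (log-odds).
# Values should have mixed signs: positive pushes towards class 1,
# negative towards class 0.
class1_values = shap_values[1] if isinstance(shap_values, list) else shap_values
print(np.round(class1_values[:3], 3))
```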
-
### Issue Description
Hello, I am having an issue where the sum of the SHAP values does not add up to the model prediction minus the expected_value. The model is in Keras. I have included a minimum reproducible…
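
Without the reproducible example it is hard to say more, but the additivity property being checked can be sketched as follows (placeholder Keras model and random data; `DeepExplainer` behaviour also varies with the shap and TensorFlow versions):

```python
import numpy as np
import shap
import tensorflow as tf

# Placeholder Keras model and data, used only to show the check
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1),
])
X = np.random.RandomState(0).normal(size=(200, 10)).astype("float32")

explainer = shap.DeepExplainer(model, X[:100])
shap_values = explainer.shap_values(X[:5])

# Depending on the shap version, shap_values is a list with one array per
# model output or a single array.
sv = shap_values[0] if isinstance(shap_values, list) else shap_values

# Additivity: per-sample SHAP values plus the expected value should
# (approximately) reconstruct the model's raw output.
reconstructed = sv.reshape(len(sv), -1).sum(axis=1) + np.ravel(explainer.expected_value)[0]
predictions = model.predict(X[:5]).ravel()
print(np.max(np.abs(reconstructed - predictions)))
```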
-
If you're working on one of these files, please add the URL of your branch or pull request. Mark the box once the changes are merged into `master`.
#### R-files
- [ ] `clustering.R`
- [x] `exp…
-
Test subgraphx example:
```
explainer = SubgraphX(grace, num_classes=4, device=device,
                      explain_graph=False, reward_method='nc_mc_l_shapley')
```
Then I get this error:
TypeError …
-
I have a CNN model that takes both images and scalar variables.
I would like to investigate only the importance of the scalars (not the pixels in the image).
As the images are (56, 11, 4) and the s…
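
One common workaround for this kind of setup, sketched below with a placeholder two-input Keras model (the scalar count of 3 and all shapes other than the image shape are assumptions), is to wrap the model so that only the scalar input varies and explain that wrapper with `shap.KernelExplainer`:

```python
import numpy as np
import shap
import tensorflow as tf

# Placeholder multi-input model standing in for the poster's CNN:
# an image input of shape (56, 11, 4) and a scalar input (3 assumed here).
img_in = tf.keras.Input(shape=(56, 11, 4))
scl_in = tf.keras.Input(shape=(3,))
x = tf.keras.layers.Concatenate()([tf.keras.layers.Flatten()(img_in), scl_in])
out = tf.keras.layers.Dense(1)(tf.keras.layers.Dense(16, activation="relu")(x))
model = tf.keras.Model([img_in, scl_in], out)

rng = np.random.RandomState(0)
images = rng.normal(size=(100, 56, 11, 4)).astype("float32")
scalars = rng.normal(size=(100, 3)).astype("float32")

fixed_image = images[:1]  # hold the image input fixed

def predict_from_scalars(scalar_batch):
    # Only the scalar input varies; the image is repeated, so SHAP
    # attributes the output to the scalars alone.
    imgs = np.repeat(fixed_image, len(scalar_batch), axis=0)
    return model.predict([imgs, np.asarray(scalar_batch, dtype="float32")])

explainer = shap.KernelExplainer(predict_from_scalars, scalars[:20])
scalar_shap = explainer.shap_values(scalars[:5], nsamples=100)
```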