-
Is it possible to explain a black box model with Shapley networks?
Suppose, I have a trained CNN model, and I want to get feature attributions for a single image from IMAGENET data set.
unnir, updated 2 years ago
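Yes: shap ships deep-model explainers (e.g. `GradientExplainer` / `DeepExplainer`) for exactly this CNN-on-ImageNet use case. As a sketch of the quantity those explainers approximate, here is an exact Shapley computation for a generic black box; the function `f`, the input `x`, and the `baseline` are illustrative placeholders, and the brute-force enumeration is only feasible for a handful of features:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for a black-box f by enumerating all
    feature coalitions. "Missing" features are filled from the baseline."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(len(others) + 1):
            for S in combinations(others, size):
                S = set(S)
                # Shapley weight for a coalition of this size
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += w * (f(with_i) - f(without_i))
    return phi

# Linear toy model: attributions come out as w_j * (x_j - baseline_j)
f = lambda z: 2 * z[0] + 3 * z[1] - z[2]
phi = shapley_values(f, x=[1.0, 2.0, 3.0], baseline=[0.0, 0.0, 0.0])
# phi ≈ [2.0, 6.0, -3.0]; the attributions sum to f(x) - f(baseline)
```

For an actual ImageNet CNN you would pass the trained model and a background batch of images to one of shap's deep explainers rather than enumerate coalitions, since the number of pixels makes the exact computation intractable.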
-
#### Description
Sankey plots are very useful for showing flows, such as patient distribution, cash allocation, and marketing channel attribution.
They can be a useful visualization of unsupervi…
-
When I run the code following the README.md, it reports an error like this. Can you explain what happened? Thanks a lot!
BERT_meets_Shapley/Code/TransSHAP_master/explainers/LIME_for_text.py in predict(s…
-
It would make a lot of sense, and currently [SHAPR](https://cran.r-project.org/web/packages/shapr/vignettes/understanding_shapr.html) (the R package for interpretability using Shapley values) does n…
-
The y-axis label ("Feature") is not really helpful and should be removed
-
Hi Scott,
First, thanks for this great package!
I'd like to mention a recent paper which describes a conceptual critique of an aspect of the SHAP package, and to get your opinion on the matter. …
-
Hello! I'm just wondering if you could point me in the right direction for understanding how Shapley values deal with dependent or correlated variables? I'm trying to understand how it is calculated differently …
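The key distinction is how the value function fills in "missing" features: the interventional (marginal) flavor samples them from their marginal distribution and ignores correlations, while the observational (conditional) flavor conditions on the features that are present; the R package shapr, for instance, focuses on estimating these conditional expectations. A minimal toy illustration, assuming f(x1, x2) = x1 + x2 with standard bivariate normal features of correlation rho:

```python
def v_marginal(x1):
    # interventional / marginal: average X2 over its marginal distribution
    # (mean 0 for a standard normal), ignoring the correlation with X1
    return x1 + 0.0

def v_conditional(x1, rho):
    # observational / conditional: E[X2 | X1 = x1] = rho * x1 for standard
    # bivariate normals, so a correlated X2 pulls credit toward x1's value
    return x1 + rho * x1

# With rho = 0.9 and x1 = 1: marginal value 1.0, conditional value 1.9
```

The two value functions coincide only when rho = 0, which is why different SHAP implementations can disagree on correlated data: the interventional flavor attributes only through the model's direct inputs, while the conditional flavor spreads credit across correlated features.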
-
Problem: the approximate method can still be slow for many trees
catboost version: master
Operating System: ubuntu 18.04
CPU: i9
GPU: RTX2080
Would be good to be able to specify how many trees …
-
# Tweet summary
PDP (Partial Dependence Plot)
- Idea: hold the other variables at their observed values and simulate predictions as the target variable sweeps a grid, then average
- Biggest issue: unrealistic feature combinations, e.g. 200 cm / 40 kg
…
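The averaging step above can be sketched in a few lines; `predict`, the data matrix `X`, and the grid below are illustrative stand-ins for a real model and dataset:

```python
import numpy as np

def partial_dependence(predict, X, feature, grid):
    """PDP: for each grid value, overwrite `feature` with that value in every
    row (which can create unrealistic combinations, the weakness noted above)
    and average the model's predictions."""
    pd_vals = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature] = v
        pd_vals.append(predict(Xv).mean())
    return np.array(pd_vals)

# Toy linear model: predictions are 2*x0 + x1, second column has mean 2
X = np.array([[0.0, 1.0], [0.0, 3.0]])
predict = lambda M: 2 * M[:, 0] + M[:, 1]
pd_vals = partial_dependence(predict, X, feature=0, grid=[0.0, 1.0, 2.0])
# pd_vals ≈ [2.0, 4.0, 6.0]: the PDP recovers the 2*x0 effect, shifted by E[x1]
```

Note that every row gets the same overwritten value regardless of its other features, which is exactly how the "200 cm / 40 kg" style of impossible combination arises on correlated data.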
-
I have an imbalanced dataset (positive class rate = 1%) and have downsampled the negative class to give me a 50/50 balance between the two classes. Ignoring the challenges that come with undersampling (l…
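The question cuts off, but one consequence of training on the 50/50 sample is that predicted probabilities no longer reflect the original 1% base rate, which also shifts the SHAP base value. A minimal sketch of the standard undersampling correction, assuming negatives were kept at rate `beta` (here roughly 1/99 to balance a 1% positive class):

```python
def recalibrate(p_s, beta):
    """Map a probability from a model trained on downsampled negatives back
    to the original class balance. beta is the fraction of negatives kept;
    the correction is p = beta * p_s / (beta * p_s - p_s + 1)."""
    return beta * p_s / (beta * p_s - p_s + 1.0)

# A "neutral" score of 0.5 on the balanced sample maps back to the 1% base rate
beta = 1.0 / 99.0
p = recalibrate(0.5, beta)
# p ≈ 0.01
```

With `beta = 1` (no downsampling) the mapping is the identity, so the correction only kicks in to the extent that the training distribution was rebalanced.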