-
# Adding an interpretability module to ZairaChem
## Background
This project is related to @HellenNamulinda's MSc thesis at Makerere University. The thesis is co-supervised by [Dr. Joyce Nakatumba-Nab…
-
Hi,
I am using mlr and iml to build an interpretable prediction model with xgboost. I have tried both LIME and Shapley to extract interpretable feature scores (LIME) and phi values (Shapley).
`…
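The original code snippet above is truncated; for comparison, here is a minimal Python sketch of the same idea using the `shap` package with xgboost (the R mlr/iml workflow is analogous; the dataset and model settings are placeholders, not the poster's actual setup):

```python
import shap
from sklearn.datasets import make_classification
from xgboost import XGBClassifier

# Placeholder data standing in for the real training set
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
model = XGBClassifier(n_estimators=100).fit(X, y)

# TreeExplainer computes Shapley (phi) values efficiently for tree ensembles
explainer = shap.TreeExplainer(model)
phi = explainer.shap_values(X)  # one phi value per sample and per feature
print(phi.shape)
```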
-
Hello, could you please provide some guidelines on how to obtain SHAP values for a vision transformer fine-tuned on a custom dataset?
I am fine-tuning google/vit-base-patch16-224-in21k with a class…
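Not an official recipe, but one common approach is shap's Partition explainer with an image masker wrapped around the fine-tuned model. A sketch, where the checkpoint path `./vit-finetuned` and the dummy input batch are assumptions standing in for the custom dataset:

```python
import numpy as np
import shap
import torch
from transformers import ViTForImageClassification, ViTImageProcessor

# Assumption: the fine-tuned checkpoint was saved locally (hypothetical path)
processor = ViTImageProcessor.from_pretrained("./vit-finetuned")
model = ViTForImageClassification.from_pretrained("./vit-finetuned")
model.eval()

def predict(images: np.ndarray) -> np.ndarray:
    # shap passes (N, H, W, C) arrays in the same format as the input batch
    inputs = processor(images=list(images), return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1).numpy()

# Mask image regions by inpainting; "inpaint_telea" is one built-in option
masker = shap.maskers.Image("inpaint_telea", (224, 224, 3))
explainer = shap.Explainer(predict, masker)

# Stand-in for a small batch of images from the custom dataset
X = np.random.randint(0, 256, size=(2, 224, 224, 3), dtype=np.uint8)
shap_values = explainer(X, max_evals=500, batch_size=32)
shap.image_plot(shap_values)
```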
-
Shapley explanations work for the whole prediction set; however, sometimes one needs to explain every step of the forecast. This is a complex issue because different forecasting approaches are used - direct or recu…
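As one illustration (assuming the direct strategy, one of several possible setups), each horizon step gets its own model and therefore its own set of SHAP values. A hedged Python sketch with synthetic data:

```python
import numpy as np
import shap
from xgboost import XGBRegressor

# Hypothetical direct multi-step setup: one model per horizon step.
# X holds lagged features; Y[:, h] is the target h+1 steps ahead.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
Y = np.stack([X @ rng.normal(size=8) + rng.normal(size=500)
              for _ in range(3)], axis=1)

per_step_shap = []
for h in range(Y.shape[1]):
    model = XGBRegressor(n_estimators=50).fit(X, Y[:, h])
    explainer = shap.TreeExplainer(model)
    # One SHAP matrix per forecast step: attribution of each lag/feature
    # to the prediction h+1 steps ahead
    per_step_shap.append(explainer.shap_values(X))

for h, sv in enumerate(per_step_shap):
    print(f"step {h+1}: mean |SHAP| per feature =",
          np.abs(sv).mean(axis=0).round(3))
```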
-
For debugging black-box models, it would be nice to get Shapley feature-importance values as they relate to the model's loss rather than its prediction. I've seen this implemented by the original …
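For tree models, the Python shap package does expose this: `TreeExplainer` accepts `model_output="log_loss"` (together with interventional perturbation, a background dataset, and the true labels at explanation time), attributing each sample's loss rather than its prediction to the features. A minimal sketch with placeholder data:

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from xgboost import XGBClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
model = XGBClassifier(n_estimators=50).fit(X, y)

# model_output="log_loss" attributes the per-sample log loss to each
# feature; it requires interventional perturbation, background data,
# and the true labels when computing the values
explainer = shap.TreeExplainer(
    model, data=X, feature_perturbation="interventional",
    model_output="log_loss",
)
loss_shap = explainer.shap_values(X, y)
print(np.abs(loss_shap).mean(axis=0))  # mean |SHAP| of the loss per feature
```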
-
Do you have more detail for the proof? Through some simulations I get that
$$X^TWX = \frac{M}{M-1}I + cJ$$ instead of $$X^TWX = \frac{1}{M-1}I + cJ$$
(The inverse in the paper is that of the cor…
-
Will Shapley interaction values be supported in the next release?
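For context, shap's `TreeExplainer` already computes pairwise Shapley interaction values for tree ensembles (whether other explainers gain them in a given release is up to the maintainers). A small sketch:

```python
import shap
from sklearn.datasets import make_regression
from xgboost import XGBRegressor

X, y = make_regression(n_samples=200, n_features=6, random_state=0)
model = XGBRegressor(n_estimators=50).fit(X, y)

explainer = shap.TreeExplainer(model)
# Shape (n_samples, n_features, n_features): off-diagonal entries hold
# pairwise interactions, the diagonal holds the main effects
inter = explainer.shap_interaction_values(X)
print(inter.shape)
```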
-
a has a Shapley value of **f(a)**;
b has a Shapley value of **f(b)**;
if we let c = {a, b} be a super-feature (by considering the contribution of a and b together),
then how do we calculate Shapl…
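One common reading of this question (an interpretation, not the only one): treat the super-feature c as a single player and re-apply the standard Shapley formula over the coarsened player set $N' = (N \setminus \{a, b\}) \cup \{c\}$:
$$\phi_c = \sum_{S \subseteq N' \setminus \{c\}} \frac{|S|!\,(|N'|-|S|-1)!}{|N'|!}\,\big[v(S \cup \{c\}) - v(S)\big]$$
In general $\phi_c$ need not equal $f(a) + f(b)$, because merging two players changes the marginal contributions of every coalition.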
-
Post regarding the SHAP example:
https://dropout009.hatenablog.com/entry/2019/11/20/091450
-
We are considering Shapley value analysis for the interpretability of our models. One crucial aspect is determining the optimal visualization plots for Shapley values and deciding the number of featu…
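As a starting point (a suggestion, not a prescription), the Python shap package's beeswarm and bar plots are the usual overview choices, and their `max_display` argument directly controls how many features are shown. A sketch with placeholder data:

```python
import shap
from sklearn.datasets import make_classification
from xgboost import XGBClassifier

X, y = make_classification(n_samples=300, n_features=12, random_state=0)
model = XGBClassifier(n_estimators=50).fit(X, y)

explainer = shap.Explainer(model)
sv = explainer(X)

# Two widely used overviews; max_display caps how many features appear
shap.plots.beeswarm(sv, max_display=10)  # per-sample value distribution
shap.plots.bar(sv, max_display=10)       # mean |SHAP| feature ranking
```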