-
adding a feature that compares feature importance or SHAP/LIME results across multiple models
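A minimal sketch of what such a comparison could look like, assuming plain scikit-learn estimators and `permutation_importance`; the dataset, models, and table layout below are only illustrative, not a proposed API:

```python
# Hypothetical sketch: compare permutation importances for several models
# fitted on the same data, evaluated on the same validation split.
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

models = {
    "logreg": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "rf": RandomForestClassifier(random_state=0),
}

columns = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    result = permutation_importance(model, X_val, y_val, n_repeats=10, random_state=0)
    columns[name] = pd.Series(result.importances_mean, index=X.columns)

# One column per model, one row per feature, ready for side-by-side inspection.
comparison = pd.DataFrame(columns)
print(comparison.sort_values("rf", ascending=False).head())
```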
-
### Problem Description
For data scientists in industry, explainability is crucial for building trust with other stakeholders. Marginal explainability with, e.g., `permutation_importance` helps, but …
-
Currently, we only support scikit-learn models. We want to extend this functionality to include support for Keras models, which are widely used for building deep learning models.
Integrate Keras mod…
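As a hypothetical sketch of one way such an integration might look (not the project's actual plan): a Keras model can be wrapped so that it exposes the scikit-learn estimator interface, for example via `scikeras`, after which existing scikit-learn-only code paths can treat it like any other estimator.

```python
# Sketch only: wrap a Keras model as a scikit-learn estimator via scikeras.
# The architecture below is a placeholder, not part of the feature request.
import tensorflow as tf
from scikeras.wrappers import KerasClassifier


def build_model(meta):
    # scikeras fills `meta` with shapes inferred from the training data.
    n_features = meta["n_features_in_"]
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_features,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model


# Behaves like any scikit-learn classifier (fit/predict/predict_proba/score),
# so utilities such as permutation_importance can be applied unchanged.
clf = KerasClassifier(model=build_model, epochs=5, verbose=0)
```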
-
I was checking out one of the utilities for model explanations. I see two functions (`grad_cam` and `feat_attribution`). Is this attribution in any way related to SHAP? I don't see that it is. Would a SHA…
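For reference, and purely as an illustration of what a SHAP-based attribution looks like (this uses the `shap` package on a stand-in tree model, not the repository's `grad_cam`/`feat_attribution` utilities):

```python
# Illustrative use of the shap package on a stand-in model.
import shap
import xgboost

X, y = shap.datasets.adult()
model = xgboost.XGBClassifier().fit(X, y)

# The unified Explainer picks a suitable algorithm (a tree explainer here).
explainer = shap.Explainer(model)
shap_values = explainer(X.iloc[:100])

# Additive per-feature contributions to the first prediction's margin.
print(dict(zip(X.columns, shap_values.values[0])))
```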
-
**Is your feature request related to a problem? Please describe.**
sktime currently lacks built-in tools for model explainability, making it difficult for users to interpret and understand the pred…
-
How could I explain the model? Is there any way to do this, for example using LIME or SHAP?
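One possible answer, sketched with LIME on a generic scikit-learn classifier; the model and dataset here are placeholders rather than the one asked about:

```python
# Explain a single prediction with LIME on a placeholder classifier.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

# Which features pushed this one sample toward its predicted class?
explanation = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
print(explanation.as_list())
```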
-
-
Currently, there are only two model explainers: Alibi and AIX. However, Alibi is from Seldon and they recently changed their license so users cannot use it in production: https://github.com/kserve/kse…
-
[SHAP Explainability](https://shap.readthedocs.io/en/latest/example_notebooks/overviews/An%20introduction%20to%20explainable%20AI%20with%20Shapley%20values.html) provides an explanation for the output…
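A minimal sketch in the spirit of that introduction, assuming a plain scikit-learn linear model (the dataset and sample sizes are illustrative):

```python
# Decompose a linear model's predictions into additive Shapley values.
import shap
from sklearn.datasets import fetch_california_housing
from sklearn.linear_model import LinearRegression

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = LinearRegression().fit(X, y)

# A small background sample keeps the model-agnostic explainer cheap.
background = shap.utils.sample(X, 100, random_state=0)
explainer = shap.Explainer(model.predict, background)
shap_values = explainer(X.iloc[:50])

# Each row of values, plus the base value, sums to the corresponding prediction.
print(shap_values.values[0], shap_values.base_values[0])
```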
-
Hi,
How can I use the model for generating evidence tokens for a single clinical note? I found the parquet files with evidence tokens generated after running the eval_explanations.py file, but I c…