-
https://homes.cs.washington.edu/~marcotcr/aaai18.pdf
----
### Abstract
We introduce a novel model-agnostic system that explains the behavior of complex models with high-precision rules called anchors…
-
### WHY
We need to reveal how the black-box model engine operates so that users can understand and trust the system (currently they do not know the basis for its outputs and lack insight into how it works)…
-
## Detailed Description
The current solar forecasting model is a gradient-boosted tree model, which achieves high predictive accuracy but often lacks interpretability. It is proposed to eval…
-
There is a recent paper explaining how to implement explain_prediction for trees and tree ensembles, which the authors claim is better than treeinterpreter-like measures: https://arxiv.org/pdf/1706.06060.pdf…
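The treeinterpreter-style decomposition mentioned above can be sketched on a hand-coded toy tree: the prediction equals the root's mean value (the bias) plus a per-feature contribution accumulated at each split along the decision path. The tree structure and values below are made up for illustration.

```python
# Hedged sketch of the treeinterpreter idea on a toy regression tree.
# Each node stores the mean target value of the training rows that reach it;
# the change in mean at each split is credited to the split feature.
def make_node(feature, threshold, value, left=None, right=None):
    return {"feature": feature, "threshold": threshold,
            "value": value, "left": left, "right": right}

# Toy tree (values are illustrative, not from a real model).
tree = make_node(0, 5.0, 10.0,
                 left=make_node(1, 2.0, 6.0,
                                left=make_node(None, None, 4.0),
                                right=make_node(None, None, 8.0)),
                 right=make_node(None, None, 14.0))

def explain(tree, x):
    """Return (bias, {feature: contribution}, prediction)."""
    node, contribs = tree, {}
    bias = node["value"]
    while node["feature"] is not None:
        child = (node["left"] if x[node["feature"]] <= node["threshold"]
                 else node["right"])
        # Credit the change in mean value to the feature that was split on.
        contribs[node["feature"]] = (contribs.get(node["feature"], 0.0)
                                     + child["value"] - node["value"])
        node = child
    return bias, contribs, node["value"]

bias, contribs, pred = explain(tree, x=[3.0, 1.0])
# The decomposition is exact: bias + contributions == prediction.
assert abs(bias + sum(contribs.values()) - pred) < 1e-9
```

For an ensemble, the same decomposition is averaged over trees; the linked paper argues such path-based attributions can be inconsistent, which motivates the SHAP-style alternatives.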
kmike updated 5 years ago
-
## Abstract
- Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model…
hon9g updated 5 years ago
-
Whilst `AnchorText` works directly on raw text, `IntegratedGradients` works at the token level. One reason for this is that `IntegratedGradients` is use-case agnostic: tabular data, images and text a…
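The mechanics behind `IntegratedGradients` can be sketched without any library, using a toy function with a known analytic gradient (this is not the alibi API; all names here are illustrative). The attribution for feature i is (x_i − b_i) times the average gradient along the straight line from the baseline b to the input x, approximated with a Riemann sum:

```python
# Minimal Integrated Gradients sketch on a toy differentiable function.
def f(x):                       # toy model: f(x) = x0^2 + 2*x1
    return x[0] ** 2 + 2 * x[1]

def grad_f(x):                  # its analytic gradient
    return [2 * x[0], 2.0]

def integrated_gradients(x, baseline, steps=1000):
    """IG_i = (x_i - b_i) * mean_k dF/dx_i, evaluated at points
    interpolated between the baseline and x (Riemann sum)."""
    n = len(x)
    total = [0.0] * n
    for k in range(1, steps + 1):
        alpha = k / steps
        point = [b + alpha * (xi - b) for xi, b in zip(x, baseline)]
        g = grad_f(point)
        for i in range(n):
            total[i] += g[i]
    return [(x[i] - baseline[i]) * total[i] / steps for i in range(n)]

x, baseline = [1.0, 3.0], [0.0, 0.0]
attrs = integrated_gradients(x, baseline)
# Completeness axiom: attributions sum to f(x) - f(baseline) = 7.0.
assert abs(sum(attrs) - (f(x) - f(baseline))) < 0.01
```

For text, x would be the token embeddings and the baseline typically a zero or padding embedding, which is why the method operates at the token level rather than on raw strings.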
-
Hi everyone, I have translated a few terms in section 5.8 as follows. The first four terms are taken from this passage:
Like its predecessor, the anchors approach deploys a perturbation-based strategy to generate local e…
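The perturbation-based strategy in that passage can be illustrated with a minimal sketch (toy model and data, not the paper's algorithm): to score a candidate anchor, hold the anchored features fixed, resample the remaining features, and measure how often the black-box prediction stays the same. That fraction is the anchor's estimated precision.

```python
import random

def model(x):                      # toy black box: class 1 iff x0 + x1 > 1
    return int(x[0] + x[1] > 1.0)

def anchor_precision(x, anchor, n_samples=5000, seed=0):
    """Estimated precision of the rule "features in `anchor` keep their
    values": P(model(z) == model(x)) over perturbations z that fix the
    anchored features and resample the rest uniformly in [0, 1]."""
    rng = random.Random(seed)
    target = model(x)
    hits = 0
    for _ in range(n_samples):
        z = [x[i] if i in anchor else rng.random() for i in range(len(x))]
        hits += model(z) == target
    return hits / n_samples

x = [0.9, 0.9, 0.2]
# Anchoring x0 and x1 fully determines the prediction -> precision 1.0.
p_full = anchor_precision(x, anchor={0, 1})
# Anchoring only x0 leaves x1 free -> precision ~ P(x1 > 0.1) = 0.9.
p_partial = anchor_precision(x, anchor={0})
print(p_full, p_partial)
```

The real algorithm searches over candidate anchors (with a bandit-style sampling scheme) for the shortest rule whose precision exceeds a threshold; this sketch only shows the precision estimate at its core.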
-
https://scrapbox.io/nikkie-memos/%22Why_Should_I_Trust_You%3F%22:_Explaining_the_Predictions_of_Any_Classifier
## Summary
The paper proposes LIME (Local Interpretable Model-agnostic Explanations), a method that can explain the predictions of any classifier…
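The LIME idea summarized above can be sketched in a few lines (one feature, illustrative names only): sample points around the instance, weight them by a proximity kernel, and fit a weighted linear surrogate; the surrogate's slope is the local explanation.

```python
import math, random

def black_box(x):                 # toy model to be explained: f(x) = x^2
    return x * x

def lime_slope(x0, n_samples=5000, kernel_width=0.3, seed=0):
    """Slope of a locally weighted linear surrogate of black_box at x0."""
    rng = random.Random(seed)
    xs = [x0 + rng.gauss(0, 1) for _ in range(n_samples)]
    ys = [black_box(x) for x in xs]
    # Exponential proximity kernel: nearby samples dominate the fit.
    ws = [math.exp(-((x - x0) ** 2) / kernel_width ** 2) for x in xs]
    # Weighted least squares for y ~ a + b*x (closed form).
    sw = sum(ws)
    xbar = sum(w * x for w, x in zip(ws, xs)) / sw
    ybar = sum(w * y for w, y in zip(ws, ys)) / sw
    num = sum(w * (x - xbar) * (y - ybar) for w, x, y in zip(ws, xs, ys))
    den = sum(w * (x - xbar) ** 2 for w, x in zip(ws, xs))
    return num / den

# Near x0 = 2 the surrogate slope approximates f'(2) = 4.
slope = lime_slope(2.0)
print(round(slope, 2))
```

Real LIME additionally maps inputs to interpretable representations (e.g. word presence for text) and uses sparse regression, but the sample-weight-fit loop is the same.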
-
[Why should I trust you?: Explaining the predictions of any classifier](https://arxiv.org/abs/1602.04938)
Despite widespread adoption, machine learning models remain mostly black boxes. Understandi…
-
We use SHAP explanations for our models. It would be nice to have explanations for each prediction.
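Per-prediction SHAP explanations rest on Shapley values. As a hedged illustration of what the library computes, here is an exact Shapley calculation that enumerates all feature coalitions, feasible only for a handful of features; the model and the zero baseline (used to stand in for "absent" features) are made up:

```python
from itertools import combinations
from math import factorial

def model(x):                     # toy model: f(x) = 2*x0 + x1*x2
    return 2 * x[0] + x[1] * x[2]

def shapley_values(x, baseline):
    """Exact Shapley values of each feature for one prediction."""
    n = len(x)
    phi = [0.0] * n
    features = list(range(n))

    def v(subset):                # features outside `subset` sit at baseline
        z = [x[i] if i in subset else baseline[i] for i in features]
        return model(z)

    for i in features:
        others = [j for j in features if j != i]
        for size in range(n):
            # Classic Shapley coalition weight: |S|! (n-|S|-1)! / n!
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for subset in combinations(others, size):
                s = set(subset)
                phi[i] += weight * (v(s | {i}) - v(s))
    return phi

x, baseline = [1.0, 2.0, 3.0], [0.0, 0.0, 0.0]
phi = shapley_values(x, baseline)
# Efficiency axiom: attributions sum to f(x) - f(baseline) = 8.0.
assert abs(sum(phi) - (model(x) - model(baseline))) < 1e-9
```

The SHAP library avoids this exponential enumeration with model-specific approximations (e.g. TreeExplainer for tree ensembles), but each per-prediction explanation it returns has this additive form.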