Closed — elboyran closed this issue 3 years ago
According to the "taxonomy paper", post-hoc XAI methods can be grouped into a few categories (see Section 2.5.2, page 88; Figure 4 on page 89; and Section 4 on page 92). Table 2 on page 90 already limits the scope of ML models that need post-hoc explainability — Tree Ensembles, Support Vector Machines, Multi-layer Neural Networks, Convolutional Neural Networks, Recurrent Neural Networks — as well as the types of post-hoc methods. Section 4.1, "Model-agnostic techniques for post-hoc explainability", narrows the scope further to model simplification, feature relevance estimation, and visualization techniques. Section 4.3, "Explainability in Deep Learning", claims the most-used methods are local explanations and feature relevance.
Initially, in the proposal and at the kick-off, I considered only relevance estimation. After Yang's suggestion, I propose to (maybe) consider the following categories:
| Yes | Maybe | No |
| --- | --- | --- |
| Feature relevance (visually) | Explanation by simplification | Text explanations |
| Local explanations | Explanation by example | |
| | | Visual explanations (as a separate category) |
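To make the "feature relevance" and "local explanations" categories concrete, here is a minimal sketch of a model-agnostic, occlusion-based relevance method. This is a hypothetical toy example (the function names and the baseline-replacement scheme are my own illustration, not taken from any of the toolkits discussed here): each feature of one input is scored by how much the model's output changes when that feature is replaced with a baseline value.

```python
def relevance_by_occlusion(model, x, baseline=0.0):
    """Score each feature of input x by the drop in model output
    when that feature is replaced with a baseline value.
    This is a local explanation: scores are specific to this x."""
    base_out = model(x)
    scores = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline  # "occlude" feature i
        scores.append(base_out - model(perturbed))
    return scores

# Toy "model": a weighted sum, so the relevance of feature i
# should recover w_i * x_i exactly.
def toy_model(x):
    weights = [2.0, -1.0, 0.5]
    return sum(w * v for w, v in zip(weights, x))

print(relevance_by_occlusion(toy_model, [1.0, 1.0, 1.0]))
# -> [2.0, -1.0, 0.5]
```

Methods such as LIME or SHAP refine this basic perturbation idea with sampling over many perturbations and theoretically grounded weighting, but the model-agnostic core — query the model, perturb the input, attribute the output change to features — is the same.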
Tables 1 and 2 on pages 27 and 28 of this recent report on the state of the art of XAI could help populate the initial list of XAI methods. What is useful in Table 1 is the data types supported (we prefer "Any"), and in Table 2, which methods the popular toolkits prefer (similar to my own collection, but more recent).
Shall we limit to Deep ML models only?
> Shall we limit to Deep ML models only?
Yes, I think we can focus on Deep ML. It will also be easier for evaluation, since we only focus on one category.
Found a new publication, "Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications", which serves as the latest synthesis of XAI for deep ML. Appendix C provides a list of XAI methods (Table 3) and a short list of XAI software packages (Table 2). This seems very useful for our task.
I'm now going to have a look at "Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications" as suggested by Yang. I'll add what I find to the result document.
We split this issue into a couple of smaller ones. Please assign yourself there. Closing this one.
Create an initial list of XAI methods to choose from.
**Input.** Possible starting points to get references from:
- DIANNA proposal, section 6b eScience
- Arrieta et al., "Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI", Information Fusion 58 (2020), pp. 82–115, aka the "taxonomy paper"
- My preliminary work overview summarizing some OSS libraries and which methods they explain
- Recent deliverable on "State of the Art on Validation Techniques for ML", especially Tables 1 and 2 on pages 27 and 28
- New publication on XAI for deep ML, "Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications", also in our repo, including a list of XAI methods (Table 3) and a short list of XAI software packages (Table 2)
**Scope.** Maybe we should not limit ourselves to feature-relevance methods only and choose the top N from them, but instead take the top M (M < N) from each category of post-hoc model-agnostic methods?
**Output.** Create a document to help decide on the initial list.