Houda summarized: [According to the authors, there are three approaches to explaining neural networks: making parts of the DNN transparent, learning semantic graphs from existing DNNs, and generating visual explanations that can be easily interpreted by humans.]
This description appears in Section II-B (Importance of XAI).
Houda commented: I do not agree with this classification.
How do we address this in this section?