dais-ita / interpretability-papers

Papers on interpretable deep learning, for review

Explaining the Unexplained: A CLass-Enhanced Attentive Response (CLEAR) Approach to Understanding Deep Neural Networks #49

Open richardtomsett opened 6 years ago

richardtomsett commented 6 years ago

In this work, we propose CLass-Enhanced Attentive Response (CLEAR): an approach to visualize and understand the decisions made by deep neural networks (DNNs) given a specific input. CLEAR facilitates the visualization of attentive regions and levels of interest of DNNs during the decision-making process. It also enables the visualization of the most dominant classes associated with these attentive regions of interest. As such, CLEAR can mitigate some of the shortcomings of heatmap-based methods associated with decision ambiguity, and allows for better insights into the decision-making process of DNNs. Quantitative and qualitative experiments across three different datasets demonstrate the efficacy of CLEAR for gaining a better understanding of the inner workings of DNNs during the decision-making process.
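The mechanics the abstract describes (per-class attentive response maps, reduced to a per-pixel dominant class and attention level) can be sketched briefly. The snippet below is a minimal illustration, not the authors' implementation: it uses input gradients as a stand-in for the paper's deconvolution-based back-projection, and the names `clear_maps`, `model`, and `num_classes` are hypothetical.

```python
# Sketch of CLEAR-style visualization. Assumption: input gradients stand in
# for the paper's deconvolution-based attentive response maps.
import torch

def clear_maps(model, x, num_classes):
    """Per-class response maps, reduced to dominant-class and intensity maps.

    x: input image tensor of shape (1, C, H, W).
    Returns (dominant_class, intensity), each of shape (H, W).
    """
    model.eval()
    maps = []
    for c in range(num_classes):
        x_in = x.clone().requires_grad_(True)
        score = model(x_in)[0, c]  # logit for class c
        score.backward()
        # Collapse channels into one spatial map per class (assumption: the
        # paper back-projects layer responses rather than using gradients).
        maps.append(x_in.grad.abs().sum(dim=1).squeeze(0))
    stacked = torch.stack(maps)            # (num_classes, H, W)
    intensity, dominant_class = stacked.max(dim=0)
    return dominant_class, intensity       # per-pixel class id and attention level
```

In the paper's figures, the dominant-class map is rendered with a categorical colormap and modulated by the attention intensity, so the attentive regions and their associated classes are visible in a single image.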

Bibtex:

@INPROCEEDINGS{8014949,
  author={D. Kumar and A. Wong and G. W. Taylor},
  booktitle={2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)},
  title={Explaining the Unexplained: A CLass-Enhanced Attentive Response (CLEAR) Approach to Understanding Deep Neural Networks},
  year={2017},
  pages={1686-1694}
}

richardtomsett commented 6 years ago

From previous review: Kumar et al. (2017) present an alternative heat-mapping method that not only shows the image regions the model attended to most strongly, but also allows multiple classes to be associated with these attentive regions; LRP*, in contrast, assumes all features make either a zero or positive contribution to the single predicted class (see the sketch below).

*see issues #44 #45 #46 #47 #48
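As a hedged illustration of that contrast (random tensors stand in for real per-class response maps, and all names here are hypothetical): an LRP-style view reduces to one non-negative map for the predicted class, while a CLEAR-style view keeps every class's map and reveals pixels where a different class dominates.

```python
# Contrast sketch: single-class non-negative heatmap vs. per-pixel dominant
# class. Assumption: `per_class` is a stack of per-class response maps.
import torch

per_class = torch.randn(10, 32, 32)            # stand-in maps for 10 classes

# LRP-like view: one map for the predicted class, negative evidence discarded.
predicted = per_class.sum(dim=(1, 2)).argmax()
lrp_style = per_class[predicted].clamp(min=0)  # (32, 32), non-negative

# CLEAR-like view: per-pixel dominant class, so ambiguity becomes visible.
intensity, dominant = per_class.max(dim=0)
ambiguous = dominant != predicted              # pixels where another class wins
```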