dais-ita / interpretability-papers

Papers on interpretable deep learning, for review

Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps #33

Open richardtomsett opened 6 years ago

richardtomsett commented 6 years ago

Abstract: This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [Erhan et al., 2009], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [Zeiler et al., 2013].
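
A minimal sketch of the second technique (image-specific class saliency maps), assuming a PyTorch classifier. The names `model`, `image`, and `target_class` are placeholders for illustration, not identifiers from the paper or this repo:

```python
import torch

def class_saliency_map(model, image, target_class):
    """Gradient of the class score w.r.t. the input image, reduced to a 2-D saliency map."""
    model.eval()
    image = image.clone().detach().requires_grad_(True)  # shape (1, C, H, W)

    scores = model(image)                # un-normalised class scores (logits)
    score = scores[0, target_class]      # S_c(I) for the class of interest
    model.zero_grad()
    score.backward()                     # dS_c / dI

    grad = image.grad.detach().abs()     # gradient magnitude per pixel and channel
    saliency, _ = grad.max(dim=1)        # max over colour channels -> (1, H, W)
    return saliency[0]
```

As the paper notes, the gradient is taken with respect to the un-normalised class score rather than the softmax output, since the softmax probability can be increased simply by suppressing the scores of the other classes.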

Bibtex:

@misc{1312.6034,
  Author = {Karen Simonyan and Andrea Vedaldi and Andrew Zisserman},
  Title = {Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps},
  Year = {2013},
  Eprint = {arXiv:1312.6034},
}

richardtomsett commented 6 years ago

From previous review: Several groups have taken an alternative approach to understanding CNNs: generating the network's preferred input image for each class it has learned. Simonyan et al. (2013) provide an early example, synthesising an image for each class in turn by gradient ascent on that class's output score. The resulting images qualitatively illustrate the input features the network most strongly associates with each class.
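
A minimal sketch of this class-model visualisation by gradient ascent on the input, again assuming a PyTorch classifier. The function name, image size, step count, learning rate, and L2 weight below are illustrative assumptions, not values from the paper:

```python
import torch

def visualise_class(model, target_class, image_shape=(1, 3, 224, 224),
                    steps=200, lr=1.0, l2_weight=1e-4):
    """Find an input that maximises the L2-regularised class score S_c(I) - lambda * ||I||^2."""
    model.eval()
    image = torch.zeros(image_shape, requires_grad=True)  # start from a blank image
    optimizer = torch.optim.SGD([image], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        score = model(image)[0, target_class]             # un-normalised class score
        loss = -score + l2_weight * image.norm() ** 2     # ascend the score, keep the image bounded
        loss.backward()
        optimizer.step()

    return image.detach()
```

The L2 penalty and the zero-image initialisation mirror the regularised optimisation described in the paper; in practice the optimiser, step size, and regularisation strength would need tuning for a given network.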