dais-ita / interpretability-papers

Papers on interpretable deep learning, for review

Understanding Deep Image Representations by Inverting Them #30

Open richardtomsett opened 6 years ago

richardtomsett commented 6 years ago

Understanding Deep Image Representations by Inverting Them

Image representations, from SIFT and Bag of Visual Words to Convolutional Neural Networks (CNNs), are a crucial component of almost any image understanding system. Nevertheless, our understanding of them remains limited. In this paper we conduct a direct analysis of the visual information contained in representations by asking the following question: given an encoding of an image, to which extent is it possible to reconstruct the image itself? To answer this question we contribute a general framework to invert representations. We show that this method can invert representations such as HOG more accurately than recent alternatives while being applicable to CNNs too. We then use this technique to study the inverse of recent state-of-the-art CNN image representations for the first time. Among our findings, we show that several layers in CNNs retain photographically accurate information about the image, with different degrees of geometric and photometric invariance.

Bibtex:

@InProceedings{Mahendran_2015_CVPR,
  author    = {Mahendran, Aravindh and Vedaldi, Andrea},
  title     = {Understanding Deep Image Representations by Inverting Them},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2015}
}
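For intuition, here is a minimal sketch of the inversion-by-optimization idea the abstract describes: given a target representation Phi(x0), optimize an image x so that Phi(x) matches it, with image priors (total variation and an alpha-norm) to keep the reconstruction natural. This assumes PyTorch/torchvision; the layer choice, weights, and step counts below are illustrative placeholders, not the paper's exact settings.

```python
import torch
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen CNN used as the representation Phi: features up to an intermediate layer (illustrative cut-off).
cnn = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16].to(device).eval()
for p in cnn.parameters():
    p.requires_grad_(False)

def tv_loss(x):
    # Total-variation regularizer encouraging piecewise-smooth reconstructions.
    dh = (x[..., 1:, :] - x[..., :-1, :]).pow(2).mean()
    dw = (x[..., :, 1:] - x[..., :, :-1]).pow(2).mean()
    return dh + dw

def invert(target_image, steps=400, lr=0.05, tv_weight=1e-4, alpha_weight=1e-6):
    # target_image: (1, 3, H, W) tensor; compute its fixed representation Phi(x0).
    with torch.no_grad():
        phi0 = cnn(target_image.to(device))
    # Optimize a random image x so that Phi(x) matches Phi(x0), plus regularizers.
    x = torch.randn_like(target_image, device=device, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        feat_loss = (cnn(x) - phi0).pow(2).sum() / phi0.pow(2).sum()  # normalized feature reconstruction loss
        loss = feat_loss + tv_weight * tv_loss(x) + alpha_weight * x.pow(6).mean()  # TV + alpha-norm priors
        loss.backward()
        opt.step()
    return x.detach()
```

Running invert on a preprocessed image and visualizing the result shows which visual details the chosen layer preserves; repeating this for deeper layers illustrates the increasing invariance the paper reports.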

richardtomsett commented 6 years ago

From previous review: Mahendran and Vedaldi (2015) investigated the visual information retained by image representations at different CNN layers, showing that deeper layers encode increasingly abstract representations of the image content and are therefore more invariant to changes in the input image.