dais-ita / interpretability-papers

Papers on interpretable deep learning, for review

Visualizing and Understanding Convolutional Networks #27

Open richardtomsett opened 6 years ago

richardtomsett commented 6 years ago

Visualizing and Understanding Convolutional Networks

Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark (Krizhevsky et al. [18]). However, there is no clear understanding of why they perform so well, or how they might be improved. In this paper we explore both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. Used in a diagnostic role, these visualizations allow us to find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark. We also perform an ablation study to discover the performance contribution from different model layers. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on the Caltech-101 and Caltech-256 datasets.

Bibtex:

@Inbook{Zeiler2014,
  author="Zeiler, Matthew D. and Fergus, Rob",
  editor="Fleet, David and Pajdla, Tomas and Schiele, Bernt and Tuytelaars, Tinne",
  title="Visualizing and Understanding Convolutional Networks",
  bookTitle="Computer Vision -- ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part I",
  year="2014",
  pages="818--833",
  url="https://doi.org/10.1007/978-3-319-10590-1_53"
}
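
The abstract's Caltech-101/256 result corresponds to standard feature transfer: freeze the convolutional trunk trained on ImageNet and refit only the final softmax layer on the target dataset. Here is a minimal sketch under assumed names, not the authors' setup: it uses torchvision's AlexNet as a stand-in for their own trained model, and a dummy batch as a placeholder for a real Caltech-101 loader.

```python
# Sketch of "retrain the softmax classifier" transfer: freeze ImageNet-trained
# features and train only a fresh final linear layer. torchvision's AlexNet is
# a stand-in for the authors' model; the dummy batch replaces a real DataLoader.
import torch
import torchvision

model = torchvision.models.alexnet(weights="IMAGENET1K_V1")
for p in model.parameters():
    p.requires_grad = False  # keep all pretrained layers fixed

# Swap in a new final layer, e.g. for Caltech-101 (101 classes + background).
model.classifier[6] = torch.nn.Linear(4096, 102)

optimizer = torch.optim.SGD(model.classifier[6].parameters(), lr=1e-3, momentum=0.9)
loss_fn = torch.nn.CrossEntropyLoss()

# Placeholder data standing in for a Caltech-101 DataLoader.
target_loader = [(torch.randn(8, 3, 224, 224), torch.randint(0, 102, (8,)))]

model.train()
for images, labels in target_loader:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```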

richardtomsett commented 6 years ago

From previous review: Zeiler and Fergus (2014) extended this idea* to (supervised) convolutional neural networks (CNNs) by using deconvolutional networks [8], which map feature activations back to the input pixel space, to analyse higher-layer units. They used their visualizations to guide modifications to the CNN that improved its accuracy, and showed that a minimum model depth was crucial to its performance. This work provides an important example of how increased transparency is not just important for understanding model behaviour; it can also guide us to build better models.
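
To make the mechanism concrete, here is a minimal one-layer sketch of the deconvnet projection, assuming PyTorch (the original work predates it and used the authors' own implementation): activations from a chosen feature channel are mapped back to pixel space by unpooling with the stored max-pooling switches, rectifying, and applying the transposed filters. In the paper this sequence is repeated layer by layer down from wherever the feature of interest lives.

```python
# Minimal deconvnet-style projection for one conv/ReLU/max-pool layer,
# in the spirit of Zeiler & Fergus (2014). Not the authors' code.
import torch
import torch.nn.functional as F

conv = torch.nn.Conv2d(3, 16, kernel_size=7, stride=2, padding=3)
pool = torch.nn.MaxPool2d(kernel_size=3, stride=2, return_indices=True)
unpool = torch.nn.MaxUnpool2d(kernel_size=3, stride=2)

def visualize_channel(image, channel):
    # Forward pass: conv -> ReLU -> max-pool, keeping the pooling "switches"
    # (argmax locations) that the deconvnet needs for unpooling.
    feats = F.relu(conv(image))
    pooled, switches = pool(feats)

    # Keep only the chosen feature channel; zero all other activations.
    projected = torch.zeros_like(pooled)
    projected[:, channel] = pooled[:, channel]

    # Reverse pass: unpool with the stored switches -> ReLU -> transposed
    # convolution with the same filters (conv_transpose2d applies the
    # flipped filters, as the deconvnet prescribes).
    unpooled = unpool(projected, switches, output_size=feats.shape)
    rectified = F.relu(unpooled)
    return F.conv_transpose2d(rectified, conv.weight, stride=2,
                              padding=3, output_padding=1)

# With a trained network you would take `conv` from the model itself;
# the random filters here just keep the sketch self-contained.
reconstruction = visualize_channel(torch.randn(1, 3, 224, 224), channel=0)
print(reconstruction.shape)  # torch.Size([1, 3, 224, 224])
```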