dais-ita / interpretability-papers

Papers on interpretable deep learning, for review

Synthesizing the preferred inputs for neurons in neural networks via deep generator networks #34

richardtomsett opened this issue 6 years ago

richardtomsett commented 6 years ago

Synthesizing the preferred inputs for neurons in neural networks via deep generator networks

Deep neural networks (DNNs) have demonstrated state-of-the-art results on many pattern recognition tasks, especially vision classification problems. Understanding the inner workings of such computational brains is both fascinating basic science that is interesting in its own right---similar to why we study the human brain---and will enable researchers to further improve DNNs. One path to understanding how a neural network functions internally is to study what each of its neurons has learned to detect. One such method is called activation maximization, which synthesizes an input (e.g. an image) that highly activates a neuron. Here we dramatically improve the qualitative state of the art of activation maximization by harnessing a powerful, learned prior: a deep generator network. The algorithm (1) generates qualitatively state-of-the-art synthetic images that look almost real, (2) reveals the features learned by each neuron in an interpretable way, (3) generalizes well to new datasets and somewhat well to different network architectures without requiring the prior to be relearned, and (4) can be considered as a high-quality generative method (in this case, by generating novel, creative, interesting, recognizable images).
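To make the idea concrete, below is a minimal sketch of activation maximization through a generator prior, in the spirit of the paper's approach. This is not the authors' released implementation: `generator`, `classifier`, `code_dim`, the step count, learning rate, and the small L2 penalty on the latent code are all placeholder assumptions standing in for the learned image prior, the network under inspection, and its hyperparameters.

```python
# Hedged sketch of activation maximization with a generator prior.
# Instead of optimizing raw pixels, we optimize a latent code and let a
# (pretrained, assumed) generator map it to a realistic image.
import torch


def synthesize_preferred_input(generator, classifier, neuron_idx,
                               code_dim=4096, steps=200, lr=0.05):
    """Optimize a latent code so the generated image maximally activates
    one output neuron of the classifier."""
    code = torch.randn(1, code_dim, requires_grad=True)
    optimizer = torch.optim.SGD([code], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        image = generator(code)                      # prior: code -> image
        activation = classifier(image)[0, neuron_idx]
        # Maximize the neuron's activation; lightly regularize the code
        # (the regularizer and its weight are illustrative assumptions).
        loss = -activation + 1e-3 * code.norm()
        loss.backward()
        optimizer.step()
    return generator(code).detach()
```

The key design point the paper argues for is that optimizing in the generator's latent space, rather than in pixel space, constrains the search to natural-looking images and so produces visualizations that are far easier to interpret.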

Bibtex:

@incollection{NIPS2016_6519,
  title     = {Synthesizing the preferred inputs for neurons in neural networks via deep generator networks},
  author    = {Nguyen, Anh and Dosovitskiy, Alexey and Yosinski, Jason and Brox, Thomas and Clune, Jeff},
  booktitle = {Advances in Neural Information Processing Systems 29},
  editor    = {D. D. Lee and M. Sugiyama and U. V. Luxburg and I. Guyon and R. Garnett},
  pages     = {3387--3395},
  year      = {2016}
}

richardtomsett commented 6 years ago

From previous review: Nguyen et al. (2016) use a deep generator network to synthesize preferred images for particular neurons in a CNN. The resulting synthetic images are very realistic, which the authors claim makes their visualizations easier to interpret when trying to understand what a CNN has learned.