dais-ita / interpretability-papers

Papers on interpretable deep learning, for review

Visualizing and Understanding Recurrent Networks #29

Open richardtomsett opened 6 years ago

richardtomsett commented 6 years ago

Visualizing and Understanding Recurrent Networks

Recurrent Neural Networks (RNNs), and specifically a variant with Long Short-Term Memory (LSTM), are enjoying renewed interest as a result of successful applications in a wide range of machine learning problems that involve sequential data. However, while LSTMs provide exceptional results in practice, the source of their performance and their limitations remain rather poorly understood. Using character-level language models as an interpretable testbed, we aim to bridge this gap by providing an analysis of their representations, predictions and error types. In particular, our experiments reveal the existence of interpretable cells that keep track of long-range dependencies such as line lengths, quotes and brackets. Moreover, our comparative analysis with finite horizon n-gram models traces the source of the LSTM improvements to long-range structural dependencies. Finally, we provide analysis of the remaining errors and suggest areas for further study.
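For concreteness, here is a minimal sketch of the kind of character-level LSTM language model the paper uses as its testbed. This is not the authors' implementation; the CharLSTM name, layer sizes and vocabulary size are illustrative assumptions.

    # Minimal character-level LSTM language model (PyTorch).
    # Hyperparameters are illustrative, not the paper's.
    import torch
    import torch.nn as nn

    class CharLSTM(nn.Module):
        def __init__(self, vocab_size, hidden_size=128, num_layers=2):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, hidden_size)
            self.lstm = nn.LSTM(hidden_size, hidden_size, num_layers,
                                batch_first=True)
            self.head = nn.Linear(hidden_size, vocab_size)

        def forward(self, x, state=None):
            # x: (batch, seq_len) tensor of character indices
            h = self.embed(x)
            out, state = self.lstm(h, state)
            logits = self.head(out)  # next-character logits at every step
            return logits, state

    model = CharLSTM(vocab_size=96)  # e.g. the printable ASCII characters
    logits, _ = model(torch.randint(0, 96, (1, 50)))

Training such a model with cross-entropy on the next character gives a network whose hidden and cell states can then be inspected one character at a time.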

Bibtex:

@article{DBLP:journals/corr/KarpathyJL15,
  author    = {Andrej Karpathy and Justin Johnson and Fei{-}Fei Li},
  title     = {Visualizing and Understanding Recurrent Networks},
  journal   = {CoRR},
  volume    = {abs/1506.02078},
  year      = {2015},
  url       = {http://arxiv.org/abs/1506.02078},
  timestamp = {Wed, 07 Jun 2017 14:42:54 +0200},
  biburl    = {http://dblp.uni-trier.de/rec/bib/journals/corr/KarpathyJL15},
  bibsource = {dblp computer science bibliography, http://dblp.org}
}

richardtomsett commented 6 years ago

Karpathy et al. (2015) provided similar insights for recurrent neural networks (RNNs), specifically Long Short-Term Memory (LSTM) RNNs. They trained an LSTM RNN one character at a time on different texts, and developed a method to show the activation of individual units as the network generated new text. They showed that some cells learned easily interpretable features of the text that spanned a long time-range, for example keeping track of quotations or line lengths. Other units, though, produced less easily interpretable outputs, switching on and off with no easily discernible pattern.
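The visualization idea itself can be sketched simply: step the network through text one character at a time, record the cell state, and colour each character by a chosen cell's tanh activation (the quantity the paper's heat-map figures show). The toy snippet below illustrates the mechanics; the untrained LSTMCell stands in for a trained model, and cell_trace, render and the choice of cell index 0 are all hypothetical, not from the paper.

    # Trace one cell's activation across a text and render it as a
    # red (negative) to blue (positive) heat map via ANSI colours.
    import torch
    import torch.nn as nn

    def cell_trace(text, vocab, lstm_cell, embed, cell_idx=0):
        h = torch.zeros(1, lstm_cell.hidden_size)
        c = torch.zeros(1, lstm_cell.hidden_size)
        trace = []
        for ch in text:
            x = embed(torch.tensor([vocab[ch]]))
            h, c = lstm_cell(x, (h, c))
            trace.append(torch.tanh(c[0, cell_idx]).item())  # in [-1, 1]
        return trace

    def render(text, trace):
        out = []
        for ch, a in zip(text, trace):
            r = int(255 * max(-a, 0.0))  # red for negative activation
            b = int(255 * max(a, 0.0))   # blue for positive activation
            out.append(f"\x1b[48;2;{r};0;{b}m{ch}\x1b[0m")
        print("".join(out))

    text = 'He said "keep track of the quotes" and left.'
    vocab = {ch: i for i, ch in enumerate(sorted(set(text)))}
    embed = nn.Embedding(len(vocab), 16)
    cell = nn.LSTMCell(16, 32)
    render(text, cell_trace(text, vocab, cell, embed))

With a trained model, a cell such as the paper's quote-detector would show up here as a block of uniform colour between the opening and closing quotation marks, while the less interpretable units described above would flicker without an obvious pattern.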