dais-ita / interpretability-papers

Papers on interpretable deep learning, for review

Why are deep nets reversible: A simple theory, with implications for training #57

Open richardtomsett opened 6 years ago

richardtomsett commented 6 years ago

Why are deep nets reversible: A simple theory, with implications for training

Generative models for deep learning are promising both for improving understanding of the model and for yielding training methods that require fewer labeled samples. Recent works use generative-model approaches to produce the deep net's input given the value of a hidden layer several levels above. However, there is no accompanying "proof of correctness" for the generative model, showing that the feedforward deep net is the correct inference method for recovering the hidden layer given the input. Furthermore, these models are complicated. The current paper takes a more theoretical tack. It presents a very simple generative model for ReLU deep nets, with the following characteristics: (i) The generative model is just the reverse of the feedforward net: if the forward transformation at a layer is A, then the reverse transformation is A^T. (This can be seen as an explanation of the old weight-tying idea for denoising autoencoders.) (ii) Its correctness can be proven under a clean theoretical assumption: the edge weights in real-life deep nets behave like random numbers. Under this assumption - which is experimentally tested on real-life nets like AlexNet - it is formally proved that the feedforward net is a correct inference method for recovering the hidden layer. The generative model suggests a simple modification for training: use the generative model to produce synthetic data with labels and include it in the training set. Experiments are presented that support this theory of random-like deep nets and show that the synthetic data helps training.
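To make the reversibility claim concrete, here is a minimal NumPy sketch (not the authors' code) of the one-layer case under the paper's random-weights assumption: with i.i.d. Gaussian weights W and a sparse non-negative hidden vector h, the feedforward map ReLU(W x) approximately recovers h, up to a 2/n scaling, from the input x = ReLU(W^T h) produced by the transpose-based generative direction. The dimensions and the sparsity level below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 4000   # input (visible) dimension
m = 200    # hidden dimension
k = 20     # number of active hidden units (sparse, non-negative h)

# Random "weight-tied" layer: forward pass uses W, generative direction uses W^T.
W = rng.standard_normal((m, n))

# Sparse non-negative hidden vector h.
h = np.zeros(m)
h[rng.choice(m, size=k, replace=False)] = rng.uniform(0.5, 1.5, size=k)

relu = lambda z: np.maximum(z, 0.0)

# Generative direction: produce the input from the hidden layer.
x = relu(W.T @ h)

# Inference direction: the feedforward layer, with a 2/n correction that
# follows from E[W relu(W^T h)] = (n/2) h for i.i.d. N(0,1) weights.
h_hat = (2.0 / n) * relu(W @ x)

rel_err = np.linalg.norm(h_hat - h) / np.linalg.norm(h)
print(f"relative recovery error: {rel_err:.3f}")  # small when n >> k
```

Running this gives a relative error of a few percent, shrinking as n grows relative to the sparsity k, which is the flavor of the formal result the abstract describes.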

Bibtex:

@misc{1511.05653,
  Author = {Sanjeev Arora and Yingyu Liang and Tengyu Ma},
  Title = {Why are deep nets reversible: A simple theory, with implications for training},
  Year = {2015},
  Eprint = {arXiv:1511.05653},
}