# Deep Bottleneck: Understanding learning in deep neural networks with the help of information theory
This repository contains code to reproduce and expand on the results of
Shwartz-Ziv and Tishby and of Saxe et al.
It is used to investigate the role that compression plays in learning in deep neural networks.
## Features
- Plotting learning dynamics in the information plane
- Plotting activation histograms and single-neuron activations
- Multiple datasets and mutual information estimators
- Experiment logging with Sacred
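As a rough illustration of what a mutual information estimator in this setting does, the sketch below implements the simple binning estimator from the information-plane literature: activations are discretized into equal-width bins, and for a deterministic network with distinct, equiprobable inputs, I(X; T) reduces to the entropy of the binned activation patterns. Function names and the bin count are illustrative, not this repository's API.

```python
import numpy as np

def discretize(activations, n_bins=30):
    """Bin activations into equal-width bins over the tanh range [-1, 1]."""
    edges = np.linspace(-1, 1, n_bins + 1)
    return np.digitize(activations, edges)

def information_with_input(binned_acts):
    """Estimate I(X; T) via binning.

    For a deterministic network, H(T | X) = 0, so I(X; T) = H(T):
    the entropy of the discretized activation patterns.
    """
    _, counts = np.unique(binned_acts, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

# Example: random stand-in "activations" for 1000 samples, 5 hidden units.
rng = np.random.default_rng(0)
acts = np.tanh(rng.normal(size=(1000, 5)))
mi = information_with_input(discretize(acts))  # bounded above by log2(1000)
```

Coarser binning (fewer bins) merges more activation patterns and lowers the estimate, which is exactly the knob that makes the compression phase visible or invisible in information-plane plots.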
## Documentation
Extensive documentation, including theoretical background and an API reference, is
available on Read the Docs.