In the thesis "Hierarchical Neural Networks for Image Interpretation", published as LNCS 2766 by Springer in 2003, I propose a hierarchical, recurrent, convolutional architecture for learning image interpretation. Many features of deep learning architectures popular today were already present, including output at the original image resolution, supervised training through rectifying nonlinearities, and iterative refinement of interpretations. The main contribution is the use of horizontal and vertical feedback loops (via recurrent connections) that allow contextual information to be used flexibly for resolving local ambiguities.
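To illustrate the idea of combining bottom-up, lateral (horizontal), and top-down (vertical) signals in an iteratively refined hierarchy, here is a minimal NumPy sketch. The shapes, mixing weights, pooling, and update rule are illustrative assumptions for exposition, not the exact formulation from the thesis:

```python
import numpy as np

# Hedged sketch: a two-level pyramid whose feature maps are updated
# over discrete time steps from bottom-up, lateral, and top-down
# contributions. All weights and operations are assumptions.

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def downsample(x):
    # 2x2 average pooling (bottom-up path to the coarser level)
    return 0.25 * (x[::2, ::2] + x[1::2, ::2] + x[::2, 1::2] + x[1::2, 1::2])

def upsample(x):
    # nearest-neighbour expansion (top-down feedback path)
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

image = rng.random((8, 8))        # input at original resolution
low   = np.zeros((8, 8))          # fine-level feature map
high  = np.zeros((4, 4))          # coarse-level feature map

w_bu, w_lat, w_td = 0.5, 0.3, 0.2  # mixing weights (assumed)

for t in range(10):               # iterative refinement over time
    low  = relu(w_bu * image + w_lat * low + w_td * upsample(high))
    high = relu(w_bu * downsample(low) + w_lat * high)

# The interpretation stays at the original image resolution and is
# progressively refined using context carried by the feedback loops.
output = low
```

Running the loop longer lets contextual evidence from the coarse level feed back into the fine level, which is the mechanism the thesis uses to resolve local ambiguities.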