THIS IS VERY MUCH A WORK IN PROGRESS, BUT HOPEFULLY IS INTERESTING NONETHELESS!!
This is a repository for alternative learning rules to backpropagation. Most of the implemented methods use no backpropagation at all; a few (fast weight programmers and reservoir computers) use it, but are not completely reliant on it.
The following methods are implemented:
In general, the rules are implemented and tested on MNIST.
All of the fully implemented algorithms are compared by their sample efficiency (how accurate are they after N samples?) and clock-time efficiency (how accurate are they after X seconds of training?). Architectures were kept as identical as possible, though there are some caveats to that:
The following plot shows sample efficiency:
Backprop and the reservoir computer required optimizers; SGD was used with a learning rate of 0.01. Note that forward-forward does not function well with continuous pixel values (i.e. values in [0, 1]). The paper uses binary pixel values, which I did not do, for comparability with the other backpropagation alternatives.
Sample efficiency is not the whole story: predictive coding relies on a slow equilibration phase involving hundreds of forward passes per sample. For this reason, time taken is also important:
Checked means started, not done (it wouldn't be on here if it were done!). Unchecked means not started.
Hebbian learning is a classic theory of neural network learning, inspired by neuroscience. In the simplest case, neurons update their weights according to the rule "neurons that fire together, wire together", i.e. based on the correlation between input and output activity. This turns out to essentially compute a low-rank version of the principal component matrix. There are many more variants of Hebbian learning, which capture things like nonlinearities, regularization, and clustering.
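To make the basic rule concrete, here is a minimal NumPy sketch of an Oja-style Hebbian update (a normalized Hebbian rule whose weight vector converges to the top principal component of the data). The toy data, sizes, and learning rate are illustrative assumptions, not values from this repo.

```python
import numpy as np

# Minimal sketch of Oja's rule: a Hebbian update y*x with a decay term -y^2*w
# that keeps the weight norm bounded. Under mild conditions the weight vector
# converges to the leading principal component of the (centered) inputs.

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))          # toy data: 1000 samples, 20 features
X -= X.mean(axis=0)                      # center so PCA is well-defined

w = rng.normal(size=20)
w /= np.linalg.norm(w)
lr = 0.01

for epoch in range(10):
    for x in X:
        y = w @ x                        # postsynaptic activation
        w += lr * y * (x - y * w)        # Hebbian term with Oja's normalization

# w now approximates the leading eigenvector of the data covariance
top_pc = np.linalg.eigh(np.cov(X.T))[1][:, -1]
print(abs(w @ top_pc))                   # close to 1 if aligned
```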
Here, the variant of Hebbian learning that resembles "independent component analysis" is implemented, as well as the "soft winner-take-all" method (which has also gone under the name "competitive learning"). Hebbian learning was implemented using techniques from several papers:
Implementing convolutional Hebbian learning is a to-do.
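For reference, here is a minimal sketch of the soft winner-take-all idea described above, assuming a softmax-based competition between units; the layer sizes, temperature, and learning rate are illustrative and not the repo's settings.

```python
import numpy as np

# Minimal sketch of soft winner-take-all ("competitive") Hebbian learning.
# Each unit's weight vector drifts toward the inputs it wins on; a softmax
# over the activations makes the competition "soft".

rng = np.random.default_rng(0)
n_in, n_units, lr, temp = 784, 16, 0.05, 0.1

W = rng.normal(scale=0.01, size=(n_units, n_in))

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def wta_step(x):
    """One unsupervised update on a single input vector x (shape: n_in,)."""
    global W
    y = softmax(W @ x / temp)                  # soft competition between units
    # each unit moves toward x in proportion to how strongly it "won"
    W += lr * y[:, None] * (x[None, :] - W)

# usage: for x in mnist_images: wta_step(x.ravel())
```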
Predictive coding is a neuroscience-inspired learning method that avoids some of the implausibility of backpropagation without sacrificing global error signals. The essential step is that only errors from predictions are passed "upwards" in the neural network. The error is calculated either via comparison with a backward prediction (in the bidirectional case) or via an energy-based relaxation phase that follows an energy gradient to reduce the errors, after which the weights are updated to make that relaxed state more likely in the future (in the feedforward case; see the linked paper for details). Note that predictive coding often has many Hebbian components, but it is listed separately because there are major differences and non-Hebbian aspects to the learning.
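As a rough illustration of the feedforward, energy-based variant, here is a minimal NumPy sketch of the relax-then-update loop on a single sample. The network sizes, relaxation schedule, and learning rates are illustrative assumptions, not the repo's actual code; see the linked paper for the real formulation.

```python
import numpy as np

# Minimal sketch of energy-based (feedforward) predictive coding:
# clamp input and target, relax the hidden activities down the energy
# gradient, then apply a local weight update on the prediction errors.

rng = np.random.default_rng(0)
sizes = [784, 128, 10]                        # input, hidden, output
Ws = [rng.normal(scale=0.05, size=(sizes[i + 1], sizes[i])) for i in range(2)]

f = np.tanh
df = lambda a: 1 - np.tanh(a) ** 2

def pc_train_step(x, y, n_relax=50, dt=0.1, lr=0.01):
    # initialize the hidden activity with a forward prediction,
    # then keep the input and target clamped during relaxation
    acts = [x, Ws[0] @ f(x), y]
    for _ in range(n_relax):
        errs = [acts[l + 1] - Ws[l] @ f(acts[l]) for l in range(2)]
        # only the hidden layer relaxes; this is gradient descent on the
        # squared-error energy with respect to the hidden activity
        acts[1] += dt * (-errs[0] + df(acts[1]) * (Ws[1].T @ errs[1]))
    # local weight updates from the equilibrated prediction errors
    errs = [acts[l + 1] - Ws[l] @ f(acts[l]) for l in range(2)]
    for l in range(2):
        Ws[l] += lr * np.outer(errs[l], f(acts[l]))
```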
Three variants are implemented here (with some subvariants):
This is based on a recent paper from the Flatiron Institute. They derive online, non-Hebbian learning rules from the idea that pyramidal neurons in the cerebral cortex are multicompartmental: they have distinct (apical, proximal) dendrites that integrate information from discrete neural populations. The rule is therefore based on learning by integrating two separate multidimensional signals.
TODO!
I was inspired by this paper to implement reservoir computing as well. In reservoir computing, a large, randomly connected (generally not feedforward!) neural network, the reservoir, is initialized. These networks are typically highly recurrent. They have read-in weights, which are the channel by which data can causally influence the network, and state-update weights, which describe how the neural dynamical system evolves in time. Neither set of weights is updated by any learning rule.
The reservoir essentially performs a nonlinear signal analysis of the data. To make use of this, a linear readout network is trained on the reservoir state (so that for e.g. MNIST it predicts the class from the reservoir state). In the ideal case, the reservoir separates signals in the data so well that the simple linear readout can make strong predictions. The linear readout network here can be trained either with backprop (default) or with Hebbian WTA learning.
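To make this concrete, here is a minimal echo-state-style sketch of the idea. For brevity the readout below is fit with ridge regression rather than the backprop or Hebbian WTA readouts used in this repo, and all sizes and scalings are illustrative assumptions.

```python
import numpy as np

# Minimal echo-state-style sketch: fixed random read-in and recurrent weights
# drive a nonlinear state, and only a linear readout on that state is trained.

rng = np.random.default_rng(0)
n_in, n_res, n_out = 784, 500, 10

W_in = rng.normal(scale=0.1, size=(n_res, n_in))          # fixed read-in weights
W_res = rng.normal(size=(n_res, n_res))                   # fixed recurrent weights
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))   # keep the dynamics stable

def reservoir_state(x, n_steps=10):
    """Drive the fixed recurrent reservoir with a (flattened) input."""
    h = np.zeros(n_res)
    for _ in range(n_steps):
        h = np.tanh(W_in @ x + W_res @ h)
    return h

def fit_readout(X, Y, ridge=1e-3):
    """Train only the linear readout on collected reservoir states."""
    H = np.stack([reservoir_state(x) for x in X])          # (n_samples, n_res)
    return np.linalg.solve(H.T @ H + ridge * np.eye(n_res), H.T @ Y)

# usage: W_out = fit_readout(images, one_hot_labels)
#        prediction = reservoir_state(x) @ W_out
```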
TODO! There are some preliminary implementations of CMA-ES and a custom GA, but they need work.
This implements the forward-forward algorithm by Hinton, based on his Matlab source (note, this link is a zip file that will download). Additionally, I implemented layer collaboration based on this paper, which partially ameliorates the greedy layerwise aspect of the algorithm.
The forward-forward algorithm is essentially a contrastive learning framework. In the image + label case, each image is concatenated with its label (the method is not supervised in the usual sense; concatenation lets it learn the joint distribution in an unsupervised way). A negative sample is then generated from the same image by concatenating a random label. Each layer then modifies its weights with a layer-local gradient update that maximizes an energy-like "goodness" function for positive samples and minimizes it for negative samples.
Unlike the original/official implementation, I do not force images to be binary; pixel values are treated as continuous numbers.
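For reference, here is a minimal PyTorch sketch of a single forward-forward layer with its layer-local goodness objective. The threshold, layer sizes, and optimizer settings are illustrative assumptions, not Hinton's or this repo's exact values.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of one forward-forward layer: its "goodness" is the sum of
# squared activations, pushed above a threshold for positive (image + true
# label) inputs and below it for negative (image + random label) inputs.
# The gradient step is local to this layer only.

class FFLayer(torch.nn.Module):
    def __init__(self, n_in, n_out, threshold=2.0, lr=0.03):
        super().__init__()
        self.linear = torch.nn.Linear(n_in, n_out)
        self.threshold = threshold
        self.opt = torch.optim.SGD(self.parameters(), lr=lr)

    def forward(self, x):
        # normalize so only the "direction" of the activity carries forward
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return torch.relu(self.linear(x))

    def train_step(self, x_pos, x_neg):
        g_pos = self.forward(x_pos).pow(2).sum(dim=1)   # goodness of positive batch
        g_neg = self.forward(x_neg).pow(2).sum(dim=1)   # goodness of negative batch
        # push positive goodness above the threshold, negative goodness below it
        loss = F.softplus(torch.cat([self.threshold - g_pos,
                                     g_neg - self.threshold])).mean()
        self.opt.zero_grad()
        loss.backward()                                  # gradient stays within this layer
        self.opt.step()
        # pass activations on detached, so no gradient flows between layers
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()
```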
I've also implemented some interesting graph-theoretic tools from papers that I thought might help analyze the backprop alternatives. A potentially incomplete list: