nengo / nengo-loihi

Run Nengo models on Intel's Loihi chip
https://www.nengo.ai/nengo-loihi/

Binary spike encoding #158

Open arvoelke opened 5 years ago

arvoelke commented 5 years ago

This is a work-in-progress / proof-of-concept that seeks to improve the accuracy of a single-layer encode/decode by minimizing the noise introduced when encoding the input into spikes.

This idea is courtesy of @tcstewar who suggested that the spike generator could be implemented using a binary code where each spike represents a bit of information in the binary representation of the node's input vector. This uses 2*d*k spike generators to represent any input vector from the (-1, +1)^d-cube with 2^k precision. There is no variability from the ideal PSC, because the synapse is None and the code is transmitted precisely every time-step. All of the error (on the encoding side) comes from quantizing the signed input values to k bits.
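To make the scheme concrete, here is a toy sketch of the idea (the names and details here are mine, not the actual implementation in this PR): a signed scalar in (-1, +1) is split into positive and negative channels, each quantized to k bits, so each bit corresponds to one spike generator. Repeating this per dimension gives the 2\*d\*k generators mentioned above, and the only error is the k-bit quantization.

```python
def binary_spike_code(x, k=8):
    """Encode a signed scalar x in (-1, 1) as 2*k binary 'spikes':
    k bits for the positive channel and k for the negative channel.
    At most one channel is nonzero, matching a 2*d*k generator layout.
    (Illustrative sketch only, not the nengo-loihi implementation.)
    """
    pos = max(x, 0.0)
    neg = max(-x, 0.0)
    # Quantize each channel's magnitude to k bits
    q_pos = int(round(pos * (2**k - 1)))
    q_neg = int(round(neg * (2**k - 1)))
    bits_pos = [(q_pos >> i) & 1 for i in range(k)]
    bits_neg = [(q_neg >> i) & 1 for i in range(k)]
    return bits_pos, bits_neg

def decode(bits_pos, bits_neg, k=8):
    """Invert the code; bit i carries weight 2**i / (2**k - 1)."""
    q_pos = sum(b << i for i, b in enumerate(bits_pos))
    q_neg = sum(b << i for i, b in enumerate(bits_neg))
    return (q_pos - q_neg) / (2**k - 1)
```

Decoding recovers the input to within one quantization step (about 2^-k), which is the "all of the error comes from quantizing to k bits" point above.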

The trade-off is that this sends O(k) spikes to each input neuron every time-step, whereas the previous on/off encoding scheme was sparse in time (i.e., trading between spike density, dt, tau, and input frequency). To further explore this trade-off, the on/off code should be extended to support k heterogeneous and independent spike trains (i.e., inflate the generator's total spike count by a factor of k to reduce variance by a factor of sqrt(k)).
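For intuition on the sqrt(k) claim, here is a toy numpy model (entirely illustrative; not nengo-loihi code, and the parameter names are mine) where a value is estimated by averaging k independent Bernoulli spike trains. The standard error of the estimate shrinks roughly as 1/sqrt(k):

```python
import numpy as np

rng = np.random.default_rng(0)

def onoff_rate_estimate(x, k, n_steps=1000, dt=0.001, max_rate=500.0):
    """Estimate a value x in (0, 1) by averaging k independent
    Bernoulli spike trains whose per-step firing probability encodes x.
    (Toy model of the heterogeneous on/off scheme.)
    """
    p = x * max_rate * dt  # spike probability per time-step
    spikes = rng.random((n_steps, k)) < p  # k independent trains
    return spikes.mean() / (max_rate * dt)

# Spread of the estimate over repeated runs, for k = 1 vs. k = 16;
# the larger k should show markedly less trial-to-trial variability.
errs = [np.std([onoff_rate_estimate(0.5, k) for _ in range(200)])
        for k in (1, 16)]
```

With 16 trains the spread is roughly a quarter of the single-train case, at the cost of 16x the spike count, which is exactly the density-vs-variance trade being described.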

The implementation is a work-in-progress; I took the path-of-least-resistance in order to test this out as quickly as possible. At the very least this should help provide a starting template for what different encoding schemes might look like in code.

TODO:

arvoelke commented 5 years ago

I refactored the code and added a unit test that demonstrates this works perfectly when using the Nengo simulator to do the same math. But sometimes things go wrong with the Loihi emulator, and I can't figure out why. The improvement in accuracy seems to depend on some combination of the neuron model, the stimulus, and the use of the defaults:

```python
nengo.Ensemble.max_rates.default = nengo.dists.Uniform(100, 120)
nengo.Ensemble.intercepts.default = nengo.dists.Uniform(-1, 0.5)
```

The most favourable condition I've found on this branch is a linear ramping input, with the above defaults, and the default neuron model.

[Figure: linear ramping input]

In this case the nengo_loihi and nengo simulators are nearly equal in accuracy! The master branch is also close, but not as close (not shown). For other conditions, however, things fall apart or become even worse than on master, and I'm finding it difficult to debug / isolate the possible sources of error. The mean values look right, but there is often some pretty crazy variability. For example, in the simulation below, the only difference from the above is that the input is now a constant 1.

[Figure: wild variability with constant input]

Curiously, the variability is only this extreme on this branch. It's much less variable on master (shown below), which is counter-intuitive. Could it be something to do with homogeneity in the response curves?

[Figure: same simulation on master]

arvoelke commented 5 years ago

Ping. I think we should still consider this, as the spike generators are a significant source of error, and this approach reduces that error by a factor of 2^k for a chosen k (see the unit test that verifies this).
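As a quick sanity check on the 2^k scaling (an illustration only, not the PR's unit test), the RMS error of quantizing a signed value to k bits halves with each added bit:

```python
import numpy as np

def quantization_rmse(k, n=10001):
    """RMS error of quantizing x in [-1, 1] to k magnitude bits.
    Each added bit roughly halves the error, i.e. error ~ 2**-k.
    """
    x = np.linspace(-1, 1, n)
    levels = 2**k - 1
    xq = np.round(x * levels) / levels
    return np.sqrt(np.mean((x - xq) ** 2))
```

Comparing, e.g., `quantization_rmse(8)` against `quantization_rmse(7)` gives a ratio near 0.5, consistent with the error shrinking exponentially in k.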