nengo / nengo-loihi

Run Nengo models on Intel's Loihi chip
https://www.nengo.ai/nengo-loihi/

Decoding weights require scaling by dt for use as a transform #153

Open arvoelke opened 5 years ago

arvoelke commented 5 years ago

Discovered while trying to find a work-around to #152.

When taking the encoders and decoders and using them to form a full weight matrix (given as a transform on a Neurons -> Neurons connection), an additional scaling factor of dt is required.

From what I understand, this is a consequence of sim.data[*].weights being divided by sim.dt, in relation to:

https://github.com/nengo/nengo-loihi/blob/f02fcebc957451559e7a05addc1e481831fc9105/nengo_loihi/builder.py#L570-L572

The weights therefore need to be multiplied by dt to get them back into the expected range for use as a transform. If there is no way to keep this consistent, it should at least be documented, since it changes the interpretation of the decoding weights and creates a significant difference between specifying connection weights in nengo versus nengo_loihi.
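A minimal numpy sketch of the scaling in question (the shapes, names, and random values here are illustrative only, not taken from the actual builder):

```python
import numpy as np

dt = 0.001  # default simulator timestep

# Illustrative shapes: 200 post-neurons, a 1-D represented value, 300 pre-neurons
rng = np.random.default_rng(seed=0)
encoders = rng.standard_normal((200, 1))
decoders = rng.standard_normal((1, 300))

# The builder stores decoders divided by dt, so sim.data[conn].weights
# effectively comes back as decoders / dt ...
probed_weights = decoders / dt

# ... and multiplying by dt recovers the decoders before embedding them
# in a full weight matrix for a Neurons -> Neurons transform
full_weights = encoders.dot(probed_weights * dt)

assert np.allclose(full_weights, encoders.dot(decoders))
```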

[Plot: loihi_noscale]

[Plot: nengo_noscale]

[Plot: loihi_scaling]

(Note the last simulation's Sim 1 is noisier due to interneurons.)

This turned out to be very difficult to debug when embedding the decoders within a full weight matrix between two ensembles, as it causes the downstream ensemble to constantly saturate (not shown).

```python
%matplotlib inline

import numpy as np
import matplotlib.pyplot as plt

import nengo
import nengo_loihi

simulator = nengo_loihi.Simulator
scaling = 0.001  # = dt; the extra factor needed on the probed weights

nengo.Ensemble.max_rates.default = nengo.dists.Uniform(100, 120)
nengo.Ensemble.intercepts.default = nengo.dists.Uniform(-1, 0.5)

# Reference model: a standard decoded connection from x1 to y1
with nengo.Network() as model1:
    u1 = nengo.Node(output=lambda t: np.sin(2*np.pi*t))
    x1 = nengo.Ensemble(300, 1)
    y1 = nengo.Ensemble(200, 1)

    nengo.Connection(u1, x1, synapse=None)
    conn1 = nengo.Connection(x1, y1)

    p1 = nengo.Probe(y1, synapse=0.05)

with simulator(model1) as sim1:
    sim1.run(1.0)

# Second model: rebuild x1's tuning curves exactly, then apply the
# probed decoding weights directly as a transform from x2.neurons
with nengo.Network() as model2:
    u2 = nengo.Node(output=u1.output)
    x2 = nengo.Ensemble(x1.n_neurons, x1.dimensions,
                        encoders=sim1.data[x1].encoders,
                        max_rates=sim1.data[x1].max_rates,
                        intercepts=sim1.data[x1].intercepts)
    y2 = nengo.Node(size_in=1)

    nengo.Connection(u2, x2, synapse=None)
    conn2 = nengo.Connection(
        x2.neurons, y2, transform=sim1.data[conn1].weights * scaling)

    p2 = nengo.Probe(y2, synapse=p1.synapse)

with simulator(model2) as sim2:
    sim2.run(1.0)

plt.figure()
plt.title("simulator = %s, scaling = %g" % (
    simulator.__module__.split('.')[0], scaling))
plt.plot(sim1.trange(), sim1.data[p1], alpha=0.7, label="Sim 1")
plt.plot(sim2.trange(), sim2.data[p2], alpha=0.7, label="Sim 2")
plt.legend()
plt.show()

# Sanity check: both models share identical tuning parameters
assert np.allclose(sim1.data[x1].encoders, sim2.data[x2].encoders)
assert np.allclose(sim1.data[x1].max_rates, sim2.data[x2].max_rates)
assert np.allclose(sim1.data[x1].intercepts, sim2.data[x2].intercepts)
```
arvoelke commented 5 years ago

Wanted to make a note somewhere that probed spikes also have amplitude 1 and are not scaled by 1/dt. I assume this is related to this issue, but it caused some confusion for me when trying to debug differences between nengo.Simulator and nengo_loihi.Simulator, because filtering the spike trains gives different results.
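A toy illustration of the amplitude mismatch (synthetic spike trains, not actual probe output; the 1/dt convention for nengo core is the assumption here):

```python
import numpy as np

dt = 0.001    # simulator timestep
steps = 1000  # one second of simulation

# Probed spikes with amplitude 1 (as described above for nengo_loihi) ...
spikes_amp_one = np.zeros(steps)
spikes_amp_one[::100] = 1.0

# ... versus spikes with amplitude 1/dt (nengo core's convention)
spikes_amp_inv_dt = np.zeros(steps)
spikes_amp_inv_dt[::100] = 1.0 / dt

# Any linear filter applied to these trains will disagree by a factor of dt,
# which is why filtering the two simulators' spike probes gives different results
assert np.allclose(spikes_amp_inv_dt * dt, spikes_amp_one)
```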