arvoelke opened 6 years ago
Definitely. There was some discussion of this before when support for `radius != 1` was added, but that was in the pre-GitHub days, so it's not as accessible. Here's what I wrote then:
Added support for radius!=1
We now support radius values other than 1!
The main thing that's different here from normal nengo is that we have to keep the intermediate values that are being passed around somewhere near 1, because they are being represented by interneurons. This is true for inputs (node->ens), outputs (ens->probe or node), and for ens->ens connections with a solver with weights=False. (The one exception is ens->ens with weights=True).
So, the easiest way to do this was to have scaled_encoders not include the radius. Then, for inputs we scale by the radius of the ensemble we are sending to, for probes we scale by the radius of the ensemble we are reading from, and for ens-ens we scale by the post-ensemble's radius.
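The scaling rules above can be sketched in plain Python (the function names and radius values here are illustrative only, not nengo_loihi's actual API):

```python
# Hypothetical radii for illustration.
pre_radius = 2.0   # radius of the ensemble we read from
post_radius = 4.0  # radius of the ensemble we send to

def scale_input(x, post_radius):
    # node -> ens: divide by the radius of the receiving ensemble,
    # so the interneuron-level value stays near [-1, 1]
    return x / post_radius

def scale_probe(x, pre_radius):
    # ens -> probe/node: multiply back by the source ensemble's radius
    return x * pre_radius

def scale_ens_to_ens(x, post_radius):
    # ens -> ens (weights=False): divide by the post-ensemble's radius
    return x / post_radius

# An input of 3.0 into an ensemble with radius 4 is carried as 0.75,
# within the range the interneurons can represent:
assert scale_input(3.0, post_radius) == 0.75
assert scale_probe(0.5, pre_radius) == 1.0
```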
The one situation where this may cause problems is if you're decoding out a function that has a very different range of values than the ensemble you're reading from. For example:
```python
a = nengo.Ensemble(n_neurons=100, dimensions=1)
out = nengo.Node(None, size_in=1)
nengo.Connection(a, out, transform=1000)
```
It currently tries to use a.radius for that scaling, and it's not smart enough to figure out some other scaling (which is a pretty tricky problem, in general....) I think it's fine for now, but a future PR could look into a better option....
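Numerically, the mismatch looks like this (a plain-Python sketch with made-up values; `interneuron_value` is an illustrative helper, not part of nengo_loihi):

```python
a_radius = 1.0      # radius of the ensemble being decoded from
transform = 1000.0  # the connection's transform

def interneuron_value(decoded, source_radius):
    # The intermediate value is scaled by the source ensemble's radius,
    # assuming the decoded output lies within [-radius, radius].
    return decoded / source_radius

decoded = 0.8 * transform  # the function output, ~800
v = interneuron_value(decoded, a_radius)
assert abs(v) > 1.0  # far outside the ~[-1, 1] range the interneurons handle
```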
I wonder if something like making that saturation point configurable (via the config system) would be a good starting point... although I don't quite know what the right syntax would be.
I found that the following two steps in combination serve as a work-around for the second scenario above:

1. Set `inter_tau` on the Loihi model to the desired synapse's `tau`, and set the connection's synapse to `None`.
2. Split the connection into `k` identical copies of the same connection, each with weight `1/k`.

The first step unfortunately affects all other spike generators and any interneurons in the model as well, and so this solution has limited applicability. The second step is necessary because the transform is on the connection to the spike generator, and there currently seems to be no way to specify a different transform/radius on that connection/generator versus the one injecting/receiving spikes. However, with an improvement to the spike generator, this splitting may have a side-effect of keeping the accuracy scale-invariant for larger `k` (see asterisk).
Other issues with this approach: `k` is the ceiling of the radius. The `init_generators` function below might solve this problem by "jittering" the `2k` spike generators:

```python
import numpy as np
import matplotlib.pyplot as plt

import nengo
import nengo_loihi

nengo_loihi.set_defaults()

radius = 1  # try radius = 3 for the sanity check described below
synapse = nengo.Lowpass(0.1)
freq = 5
sim_t = 1. / freq
amp = 1. / np.abs(synapse.evaluate(freq))
split = np.ceil(radius).astype(int)

def init_generators(sim, name='host_pre', rng=np.random):
    """Randomize the initial voltages of the NIF spike generators."""
    sim_host = sim.sims[name]
    i = 0
    for a in sim_host.model.sig:
        if isinstance(a, nengo.Ensemble) and isinstance(
                a.neuron_type, nengo_loihi.neurons.NIF):
            sim_host.signals[sim_host.model.sig[a.neurons]['voltage']] = \
                rng.rand(a.n_neurons)
            i += 1
    return i

plt.figure()
plt.title("High-Frequency Input (%d Hz)" % freq)
for loihi in (False, True):
    with nengo.Network() as model:
        u = nengo.Node(output=lambda t: np.sin(2*np.pi*freq*t) * amp)
        y = nengo.Ensemble(100, 1, radius=radius)
        for _ in range(split):
            nengo.Connection(u, y, synapse=None if loihi else synapse,
                             transform=1. / split)
        p_y = nengo.Probe(y, synapse=0.1)
    if loihi:
        loihi_model = nengo_loihi.builder.Model()
        loihi_model.inter_tau = synapse.tau  # used by spike generator
        sim = nengo_loihi.Simulator(
            model, model=loihi_model, precompute=True)
    else:
        sim = nengo.Simulator(model)
    with sim:
        if loihi:
            print("Jittered", init_generators(sim), "generators")
        sim.run(sim_t)
    plt.plot(sim.trange(), sim.data[p_y], label="Loihi" if loihi else "Nengo")

plt.legend()
plt.xlabel("Time (s)")
plt.show()
```
Note the above code no longer does the right thing as of v0.5.0, as it relied on a hidden feature of v0.4.0 (setting `inter_tau`) that changed in #132. Related discussion is in #97.

You should be able to change `inter_tau` to `decode_tau` and get the same behaviour as before.
(Preamble: I'm looking at this from the perspective of your average user, and in the context of `Node -> Ensemble` connections. I haven't checked `Ensemble -> Ensemble`, but there's probably something similar due to interneurons.)

In the standard Nengo reference simulator, the radius is post-synapse (and post-addition), conceptually taking place within the neuron model. In nengo_loihi, the radius is effectively pre-synapse, and applied individually to each connection. Another way of thinking of this is that it's as though each input saturates at `[-radius, radius]` before being filtered and added together.

I understand this has to do with the way the spike generators are set up, but this kind of difference is important to consider, because it changes the way that you have to think about building models, especially when you have multiple connections and dynamics (higher frequencies). Here are two simple networks to illustrate why this is important to consider:
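The pre- versus post-synapse saturation can be sketched numerically in plain Python (the first-order lowpass and the 3.3x attenuation factor below are stand-ins for `nengo.Lowpass(0.1)` driven at 5 Hz, not taken from nengo_loihi itself):

```python
import math

# Discrete first-order lowpass (tau = 0.1 s, dt = 1 ms).
dt, tau, radius = 0.001, 0.1, 1.0
alpha = 1.0 - math.exp(-dt / tau)

def lowpass(xs):
    y, out = 0.0, []
    for x in xs:
        y += alpha * (x - y)
        out.append(y)
    return out

def clip(xs, r):
    return [max(-r, min(r, x)) for x in xs]

# A 5 Hz sine scaled so that the *filtered* signal peaks near 1
# (the filter attenuates 5 Hz by roughly 3.3x).
freq, amp = 5.0, 3.3
t = [i * dt for i in range(1000)]
u = [amp * math.sin(2 * math.pi * freq * ti) for ti in t]

post_clip = clip(lowpass(u), radius)  # reference Nengo: radius after synapse
pre_clip = lowpass(clip(u, radius))   # nengo_loihi: radius before synapse

# Saturating before filtering loses amplitude: the pre-clipped result
# peaks well below the post-clipped one.
assert max(pre_clip) < max(post_clip)
```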
The first example implements `2 - 1`. The answer we get should be `1`, but the radius is applied to the `2` to get `1 - 1 = 0` instead.

The second example takes a `5 Hz` input (I call this "high frequency", but really this is quite low in the context of Principle 3) and filters it, so that the post-filtered variable is within the range `[-1, 1]`. However, with Loihi it saturates at `[-1, 1]` before being filtered.

As a sanity check, change the `radius = 1` at the top to `radius = 3`, to see that both simulators end up equal in this case. This shows that the issue here is not due to any sort of model / quantization effects, but really in essence related to different interpretations of radius.
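The mismatch in the first example can be mimicked without any hardware by applying the per-connection saturation by hand (a conceptual plain-Python sketch; `saturate` is an illustrative helper, and the real example presumably feeds two inputs into one ensemble):

```python
radius = 1.0

def saturate(x, r):
    # nengo_loihi effectively clips each input connection at [-radius, radius]
    return max(-r, min(r, x))

inputs = [2.0, -1.0]

# Reference Nengo: sum first; the radius applies post-addition.
reference = saturate(sum(inputs), radius)         # 2 - 1 = 1, within radius

# nengo_loihi: each connection saturates individually before the sum.
loihi = sum(saturate(x, radius) for x in inputs)  # 1 - 1 = 0

assert reference == 1.0 and loihi == 0.0
```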