nengo / nengo-loihi

Run Nengo models on Intel's Loihi chip
https://www.nengo.ai/nengo-loihi/

fixup! Add Keras-to-Loihi example #288

Closed: studywolf closed this 4 years ago

studywolf commented 4 years ago

The conversion using scale_firing_rates for each layer was giving 2% accuracy; this was caused by not scaling the additional dense0 layer that was added due to Loihi constraints on spike probes. In fixing this, it made sense to add another subplot with the output from this layer, which then prompted increasing the height of the plots.

Fixed a couple of typos and an error that arose in plotting when there was no activity in any neurons in an ensemble.

Also added a line mentioning that SNNs are better suited to temporal problems, such as video processing, than to still frames.
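
For context, here is a minimal sketch of the kind of per-layer scaling the fix refers to, using nengo_dl.Converter's scale_firing_rates argument. The model and layer names are stand-ins, not the notebook's actual network; the point is that the extra dense layer added for the spike probes needs its own entry too.

```python
import nengo
import nengo_dl
import tensorflow as tf

# Toy stand-in for the example's Keras network; layer names are illustrative.
inp = tf.keras.Input(shape=(28, 28, 1))
conv0 = tf.keras.layers.Conv2D(4, 3, activation=tf.nn.relu)
x = tf.keras.layers.Flatten()(conv0(inp))
dense0 = tf.keras.layers.Dense(10, activation=tf.nn.relu)  # extra layer added for the spike probes
model = tf.keras.Model(inputs=inp, outputs=dense0(x))

# Every layer needs its own scale factor, including dense0;
# leaving dense0 out of this dict is what produced the 2% accuracy.
converter = nengo_dl.Converter(
    model,
    swap_activations={tf.nn.relu: nengo.SpikingRectifiedLinear()},
    scale_firing_rates={conv0: 100, dense0: 100},
)
```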

studywolf commented 4 years ago

Sigh, that's a debugging holdover. That should be 100! Not 1000!

On Mon, Apr 20, 2020, 5:48 PM Eric Hunsberger <notifications@github.com> wrote:

@hunse commented on this pull request.

In docs/examples/keras-to-loihi.ipynb https://github.com/nengo/nengo-loihi/pull/288#discussion_r411714974:

@@ -453,11 +456,12 @@ "metadata": {}, "outputs": [], "source": [
-    "target_mean = 100\n",
+    "target_mean = 1000\n",

Why is this increased? It seems like these rates will be too high.


hunse commented 4 years ago

In the generated example notebook, the accuracy was still bad for the example with individual layer scaling. The later layers still had quite low firing rates, so I bumped the target rate up to 200 Hz instead of 100.
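
Roughly, a target_mean like the ones discussed here turns into per-layer scale factors along these lines; the measured rates below are invented for illustration, and real code would key the dict by the actual Keras layer objects rather than strings.

```python
# Mean firing rates (Hz) measured per layer from a non-spiking (ReLU) run;
# these numbers are made up for illustration.
mean_rates = {"conv0": 12.0, "conv1": 8.0, "dense0": 5.0}

target_mean = 200  # the value this comment settles on (100 originally, 1000 by mistake)

# Scale each layer so its mean firing rate lands near the target.
scale_firing_rates = {name: target_mean / rate for name, rate in mean_rates.items()}
print(scale_firing_rates)
```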

Unfortunately, there's still some variability in the notebook, despite setting both the TensorFlow and NumPy seeds. @drasmuss, I'm not sure if you know any other ways to set seeds that would get us identical results from run to run, but on my machine I'm seeing variance (that's using a GPU; maybe it's different on the CPU).

I also suppressed the NengoDL warning about not having a GPU, since we don't want that in the rendered notebook.
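
For reference, a sketch of the seeding and warning suppression described above; the warning filter's message pattern is a guess, not the notebook's actual code.

```python
import warnings

import numpy as np
import tensorflow as tf

# Seed both RNGs up front; as noted, some run-to-run variance remained anyway.
np.random.seed(0)
tf.random.set_seed(0)

# Keep the "no GPU" warning out of the rendered notebook. The message pattern
# here is an assumption about how NengoDL words it.
warnings.filterwarnings("ignore", message=".*GPU.*")
```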

drasmuss commented 4 years ago

Seeding the Simulator might help? It pulls the Simulator seed from the numpy random generator if one isn't set, so you'd think that seeding the numpy generator would have the same effect, but maybe there's some non-determinism in the ordering or something.

drasmuss commented 4 years ago

I also notice there's a seed on the NengoImageGenerator that isn't being used. Again, you'd expect that to be controlled under the hood by the base numpy/tensorflow seeds, but perhaps not.
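
A minimal sketch of passing the seed to the Simulator directly, as suggested; the tiny network here is just a placeholder for the example's converted network.

```python
import nengo
import nengo_dl

# Placeholder network standing in for the converted Keras model.
with nengo.Network(seed=0) as net:
    nengo.Ensemble(10, 1)

# Setting seed= here fixes the build seed explicitly rather than leaving it
# to be drawn from numpy's global generator.
with nengo_dl.Simulator(net, seed=0) as sim:
    sim.run_steps(10)
```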

hunse commented 4 years ago

So I looked at the generated notebook, and it's definitely better (find it in this branch). The results it gives are a bit different from what I get when I run on my machine, but I haven't looked into it enough to tell whether that's just because I'm running on a GPU, or whether there's also variability from run to run on the CPU. Anyway, I think it's good enough (the results are close enough that the main points stand).

@studywolf, let me know if you're good with everything and I'll merge.

studywolf commented 4 years ago

LGTM!