nengo / nengo-loihi

Run Nengo models on Intel's Loihi chip
https://www.nengo.ai/nengo-loihi/

Investigate why higher amplitude improves keyword_spotting #50

Open tbekolay opened 6 years ago

tbekolay commented 6 years ago

In #39, @hunse and I found that increasing the amplitude from 0.002 (which is what the network was trained on) to 0.005 improves performance. This could be due to the post-discretization tuning curves being lower than the Nengo tuning curves. In any case, we should investigate this more rigorously.
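A back-of-envelope sketch of the hypothesized mechanism: if discretization lowers the mean firing rates of the tuning curves, then scaling the output amplitude by the ratio of pre- to post-discretization rates keeps the effective output (amplitude × rate) near its trained value. The rates below are assumed for illustration only; they are not measured values from the model.

```python
# Illustrative only -- the rates are hypothetical, chosen so the rate ratio
# (2.5x) matches the empirical 0.002 -> 0.005 amplitude change from #39.
rate_nengo = 120.0  # assumed mean rate of the Nengo tuning curves (Hz)
rate_loihi = 48.0   # assumed mean rate after discretization (Hz)

trained_amplitude = 0.002  # amplitude the network was trained with

# Compensate for lowered rates by rescaling the amplitude.
compensated = trained_amplitude * rate_nengo / rate_loihi
print(compensated)  # 0.005
```

If the mechanism is right, measuring the actual pre- and post-discretization rates and checking whether their ratio is near 2.5 would be one way to investigate this rigorously.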

drasmuss commented 5 years ago

I know there's been a lot of work on the keyword spotting demo since this came up; do you know if this is still an issue, @hunse?

hunse commented 5 years ago

I think this is resolved in some of the networks we trained for the keyword spotting paper. However, that fix has not been incorporated into this repository, as evidenced by the comment in the current example: "network was trained with amplitude of 0.002, scaling up improves performance on Loihi".

I think we should take the code from the keyword spotting paper and use it for the example here. It also includes the training in NengoDL, which the current example does not.

pblouw commented 5 years ago

I'll add a vote in favor of using the version of the keyword spotter from the paper as the demo example: it's up to date and performs much better. Happy to help with the updating if others are busy.

drasmuss commented 5 years ago

Yeah that sounds like a good plan, I'll add it to the backlog for a future sprint.