Open n-getty opened 6 years ago
We would expect a loss of accuracy when directly translating a nengo model to nengo-loihi. The Loihi hardware (and emulator) works with a lot more constraints than a standard, floating-point simulation (discretized weights, discretized voltages, specific neuron models, etc.). That means that a network that works well in an unconstrained system may not work as well when we translate it to the constrained system. If we want a network to translate well, we need to take care when designing/training the network so that we end up with a system that will still work when we add the additional constraints. You can see an example of what that looks like here: https://github.com/nengo/nengo-loihi/blob/conv2d-mnist/sandbox/dl/mnist_convnet.py (although that is still a work-in-progress).
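To build intuition for why discretized weights alone cost accuracy, here is a small illustrative sketch. The uniform 8-bit rounding below is a crude stand-in for Loihi's actual weight discretization (the real scheme differs); the weight and input values are random placeholders, not from any trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder weights for a hypothetical 32 -> 16 dense layer.
w = rng.normal(scale=0.1, size=(16, 32))
x = rng.normal(size=(32,))

# Crude symmetric 8-bit uniform quantization: snap each weight to one of
# 255 evenly spaced levels. Loihi's real constraints are stricter and
# also discretize voltages and neuron dynamics.
scale = np.abs(w).max() / 127.0
w_q = np.round(w / scale) * scale

y = w @ x          # full-precision output
y_q = w_q @ x      # output with discretized weights
rel_err = np.linalg.norm(y - y_q) / np.linalg.norm(y)
```

For a single layer the relative error is small, but it compounds across layers and interacts with spiking dynamics, which is why networks are best trained with the constraints in the loop rather than converted after the fact.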
The simulation speed issues are predominantly because you are running with `precompute=False`, as mentioned in the other issue. That will slow things down a lot. Since you are just doing a three-layer dense network, you could implement that directly with nengo Ensembles and Connections, rather than using `nengo-extras` and Keras. That would allow you to run the network with `precompute=True`. Note that you can use `nengo-dl` if you want to train that network in Nengo.
`snip_max_spikes_per_step` isn't currently exposed easily (it's on our TODO list). You will need to do a developer installation and then modify this line: https://github.com/nengo/nengo-loihi/blob/master/nengo_loihi/loihi_interface.py#L388.
Thanks Daniel, I appreciate the explanations!
Also note that this network is not doing at all what you would expect. `nengo-loihi` has not been set up to use the syntax of defining convolutional connections with Nodes, as `SequentialNetwork` does. Rather, we're creating a newer syntax, which you can see in the `mnist_convnet.py` file @drasmuss pointed to.
So what your network is actually doing is defining a number of Ensembles on Loihi, and then running the nodes in between them (which do the convolution) off Loihi. So none of your weights are actually being mapped to the chip, which is likely both why you have poor performance and why it's so slow.
Thanks Eric. I am not using any convolutional layers, only dense layers; does the same problem apply to dense connections defined this way?
Actually, if I use convolution on the raw images I can achieve ~80% accuracy on MNIST and ~70% accuracy on Fashion-MNIST. The bottleneck is that the number of synapses is so huge that on the input conv layer I can only use a heavily strided, large-window kernel with very few filters (at most 6). Looking forward to convolution implemented with populations.
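Some back-of-envelope synapse arithmetic shows the pressure the stride and filter count relieve. The 28x28 MNIST input size is standard; the 7x7 kernel and stride of 3 below are illustrative choices, and the count assumes every output unit needs its own fan-in synapses (no weight sharing on the hardware):

```python
def conv_synapses(in_h, in_w, in_ch, kernel, stride, filters):
    """Synapse count for a valid-padding conv layer when weights
    cannot be shared across output positions in hardware."""
    out_h = (in_h - kernel) // stride + 1
    out_w = (in_w - kernel) // stride + 1
    fan_in = kernel * kernel * in_ch
    return out_h * out_w * filters * fan_in

# Fully dense 784 -> 32 layer on flattened MNIST:
dense = 784 * 32

# Heavily strided conv on 28x28 MNIST: 7x7 kernel, stride 3, 6 filters.
conv = conv_synapses(28, 28, 1, kernel=7, stride=3, filters=6)

# The same conv with stride 1 would need many more synapses,
# which is why large strides and few filters are forced here.
conv_stride1 = conv_synapses(28, 28, 1, kernel=7, stride=1, filters=6)
```

Even this strided configuration lands in the tens of thousands of synapses for a single layer, which is why the filter count is capped so low.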
I have defined a fairly simple MNIST model in Keras (the input image is 16-dimensional; 3 dense layers: 32 -> 16 -> 10) and convert it to a nengo network like so:
Accuracy is good on the nengo simulator:
Accuracy drops significantly on the nengo-loihi simulator:
Accuracy drops even more on the board, and it seems to take a long time:
One source of loss is likely pointed to by this warning:
`UserWarning: Too many spikes (73) sent in one time step. Increase the value of snip_max_spikes_per_step (currently set to 50)`
Where do I define this parameter? 50 spikes per step seems like a fairly low default.