You may start by making the time constants in a similar range. The hyperparameters are going to be different because the neuron model and dynamics are different. In addition, the Loihi model is a fixed-precision model, so your convergence is going to be slow.
On padding, just add a `padding` field in the layer description:
```python
'layer' : [
    {'dim': '34x34x1'},
    {'dim': '16c5z', 'padding': 1},
    {'dim': '2a'},
    {'dim': '32c3'},
    {'dim': '2a'},
    {'dim': '64c3'},
    {'dim': 10}
],
```
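For intuition, the `padding` field maps onto the padding of an ordinary 2-D convolution. A quick shape check in plain PyTorch (not SLAYER-specific code, just to show what the value does to the spatial dims):

```python
import torch
import torch.nn as nn

x = torch.zeros(1, 1, 34, 34)  # one 34x34x1 input frame

# 16 output channels, 5x5 kernel, as in the '16c5z' layer above
conv_nopad = nn.Conv2d(1, 16, kernel_size=5, padding=0)
conv_pad1  = nn.Conv2d(1, 16, kernel_size=5, padding=1)

print(conv_nopad(x).shape)  # torch.Size([1, 16, 30, 30])
print(conv_pad1(x).shape)   # torch.Size([1, 16, 32, 32])
```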
Thanks for the reply!
Adding the 'pad' field doesn't seem to work, though; when calling the assistant to build the net I'm still getting:
Looking through the source code, I found that if I use 'padding' : 1 instead of 'pad' : 1, it works!
When you talk about the time constants, do you mean the simulation time, sample time, target regions, and tau/scale Rho? If so, those are all already set to the same values!
Does it make sense to fix the Loihi threshold (vThMant) to be the same as theta, and then work with the voltage/current params?
Thanks again!
ETA:
For reference, my Loihi net parameters:
```python
netDesc = {
    'simulation' : {'Ts': 1.0, 'tSample': 300, 'nSample': 12},
    'neuron' : {
        'type'     : 'LOIHI',
        'vThMant'  : 10,
        'vDecay'   : 128,
        'iDecay'   : 1024,
        'refDelay' : 1,
        'wgtExp'   : 0,
        'tauRho'   : 1,
        'scaleRho' : 1,
    },
}
```
And SRM params:
```python
netDesc = {
    'simulation' : {'Ts': 1.0, 'tSample': 300, 'nSample': 12},
    'neuron' : {
        'type'     : 'SRMALPHA', # Neuron Model
        'theta'    : 10,
        'tauSr'    : 10.0,
        'tauRef'   : 1.0,
        'scaleRef' : 2,          # Relative to theta
        'tauRho'   : 1,          # Relative to theta
        'scaleRho' : 1,
    }
}
```
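For what it's worth, here is a rough back-of-the-envelope comparison of the time scales in the two descriptions above. I'm assuming the usual Loihi fixed-point CUBA dynamics, where current and voltage decay by a factor of (4096 - decay)/4096 per timestep, so treat the exact numbers as approximate:

```python
import math

def effective_tau(decay, bits=12):
    """Timesteps for the state to fall to 1/e, given a 12-bit Loihi decay constant."""
    factor = (2**bits - decay) / 2**bits  # assumed per-timestep decay factor
    return -1.0 / math.log(factor)

print(effective_tau(1024))  # iDecay=1024 -> ~3.5 steps (synaptic current)
print(effective_tau(128))   # vDecay=128  -> ~31.5 steps (membrane voltage)
# vs. tauSr = 10.0 and tauRef = 1.0 in the SRM description
```

If that reading is right, my Loihi current decays about 3x faster, and the voltage about 3x slower, than the SRM tauSr = 10, which would fit the advice above about bringing the time constants into a similar range.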
Whatever is in the `neuron` field needs to be tuned.
Hi,
I was wondering if there is some way to replicate the SRM neuron behaviour with the Loihi parameters?
Training a couple of SNNs using the SRM model on MNIST/F-MNIST, I got to 95%+ accuracy, but trying to do the same with the Loihi model, the learning happens at a much slower rate (in terms of epochs).
Whereas with the SRM model accuracy goes above 90% very quickly, after ~3-5 epochs (depending on the coding), with the Loihi neuron model (and the 'default' parameters, the ones you're using for NMNIST in the tutorial) it sits at around 50-60%.
Regarding SLAYERAuto, I'm still not 100% sure about its syntax.
I have this network:
When trying to rewrite it with SLAYERAuto like this:
I end up with this:
(The layers are conv1 -> pool1 -> conv2 -> pool2 -> conv3 -> dense)
In particular, the padding in the first conv layer is not the same (which might impact the learning a bit?).
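One (hypothetical) way I could check where the two builds differ: since both nets are nn.Modules, the padding attributes of their layers can just be diffed. Here manual_net / auto_net are placeholder names for the hand-written and SLAYERAuto-built networks:

```python
import torch.nn as nn

def paddings(net: nn.Module):
    """Map of sub-module name -> padding, for every sub-module that has one."""
    return {name: m.padding for name, m in net.named_modules()
            if hasattr(m, 'padding')}

# print(paddings(manual_net))  # hand-written net
# print(paddings(auto_net))    # SLAYERAuto-built net
```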
Thanks!