NeuromorphicProcessorProject / snn_toolbox

Toolbox for converting analog to spiking neural networks (ANN to SNN), and running them in a spiking neuron simulator.

Activation function for SLAYER on Loihi #92

Closed albertopolito closed 3 years ago

albertopolito commented 3 years ago

Hi,

I don't know if this is the right place to ask this question. I saw your example of SLAYER on Loihi with the NxTF layers on link. I don't understand why you use different activation functions for different layers. I know that SLAYER is a direct learning method for SNNs and that every neuron follows the CUBA LIF model described on link.

So why do you use the 'relu' and 'linear' activation functions? Do these parameters (the activation functions) have any correlation with other values, such as Vth or Vdecay? And if we use values very different from those in the example, can we still use the 'relu' and 'linear' activation functions for the NxTF layers?

Thanks in advance for your time. Best regards.

Alberto Viale.

rbodo commented 3 years ago

Hi Alberto,

The neuron properties (leak, threshold, etc.) used in NxTF layers are determined by the compartment_kwargs dict here. The activation argument in the NxTF layer constructor is not used to modify any neuron properties. It is only relevant if you are running the NxModel instance in non-spiking mode, i.e. as a normal Keras model.
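
For concreteness, here is a minimal sketch of that separation. The import path and the parameter names in compartment_kwargs (vThMant, compartmentVoltageDecay, etc.) are assumptions based on NxSDK conventions, not taken from the example; check them against your NxSDK version.

```python
# Sketch only: the neuron dynamics come from compartment_kwargs, not
# from the `activation` argument. The import path and parameter names
# below are assumptions, not a verified NxSDK signature.
from nxsdk_modules_ncl.dnn.src.dnn_layers import NxConv2D  # assumed path

compartment_kwargs = {
    'vThMant': 512,                  # assumed: firing threshold mantissa
    'compartmentVoltageDecay': 128,  # assumed: voltage leak (0 = non-leaky IF)
    'compartmentCurrentDecay': 4096, # assumed: synaptic current decay
}

layer = NxConv2D(
    filters=16,
    kernel_size=3,
    activation='relu',     # only used when the NxModel runs in non-spiking mode
    **compartment_kwargs,  # how these are passed may differ in your version
)
```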

If our neurons were non-leaky IF neurons, then relu would be the correct activation function to use in all the layers (including the Dense layers). But since the neurons are leaky, neither linear nor relu is an exact match for the neuron transfer function. So to avoid confusion it would have been better not to set the activation argument at all in this example. But it won't affect the behavior here.
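
To see why relu is exact for a non-leaky IF neuron but only approximate once a leak is added, here is a small self-contained simulation sketch (plain Python, no toolbox dependencies). With leak=0 the firing rate is max(0, input)/threshold, i.e. a scaled ReLU; with leak>0 the transfer function bends below that line.

```python
def spike_rate(inp, threshold=1.0, leak=0.0, n_steps=1000):
    """Firing rate of a (possibly leaky) IF neuron under constant input.

    With leak=0 the rate equals max(0, inp) / threshold, a scaled ReLU;
    with leak > 0 the transfer function falls below that line.
    """
    v = 0.0
    spikes = 0
    for _ in range(n_steps):
        v = (1.0 - leak) * v + inp  # leaky integration of a constant input
        if v >= threshold:          # threshold crossing -> emit a spike
            spikes += 1
            v -= threshold          # reset by subtraction
    return spikes / n_steps

for inp in (-0.5, 0.0, 0.25, 0.5):
    print(f"input={inp:5.2f}  IF rate={spike_rate(inp):.3f}  "
          f"leaky rate={spike_rate(inp, leak=0.1):.3f}")
```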

albertopolito commented 3 years ago

Hi Bodo,

Thanks for your answer. So if I want exactly the LIF neuron behavior, should I simply not set the activation argument, or is it not possible to get exactly this behavior with the NxTF layers? I remember that by default the activation argument of an NxTF layer is set to 'linear'. Is that right?

Or is the activation parameter only relevant when we use snntoolbox to convert an IF or LIF model described in Keras, in which case we must use the appropriate values ('relu' or 'linear')? And is the activation parameter of the NxTF layers then unused after the conversion?

For my problem it is very important that the offline LIF model matches the behavior of the Loihi neurons very closely.

Sorry for this stupid question. Thanks in advance for your time. Best regards,

Alberto Viale.

rbodo commented 3 years ago

NxTF layers are just enhanced Keras layers, so you can in principle set their activation function to anything you like (see here). You'd have to define your nonlinearity such that it matches the transfer function of your particular neuron model. NxTF won't try to parse the activation argument of your NxLayer to determine the compartment_kwargs; you have to specify those yourself.
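
As a sketch of what "anything you like" means in plain Keras terms (the transfer function below is a placeholder assumption, not the actual Loihi neuron response):

```python
import tensorflow as tf

def neuron_transfer(x):
    # Placeholder standing in for the rate response of your leaky neuron
    # model; the actual shape depends on threshold, decay, etc.
    return tf.nn.relu(x - 0.1)  # assumed example: ReLU with a rate offset

# Plain Keras for illustration; an NxTF layer accepts the same kind of
# callable since it subclasses the corresponding Keras layer.
layer = tf.keras.layers.Dense(10, activation=neuron_transfer)
```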

If you use snntoolbox, the activation function plays a role if you enable weight or threshold normalization: the toolbox will normalize your parameters based on activation statistics computed with your activation function.
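
For reference, normalization is switched on through the toolbox's config file. A minimal sketch of writing such a config in Python; the section and key names ('tools' / 'normalize') are assumptions to be checked against the snntoolbox documentation for your version:

```python
import configparser

# Sketch of a snntoolbox config enabling parameter normalization.
# Section/key names are assumptions; verify against the toolbox docs.
config = configparser.ConfigParser()
config['tools'] = {'normalize': 'True'}
with open('config.ini', 'w') as f:
    config.write(f)
```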

If you don't use Loihi as the backend but the built-in SNN simulator of the toolbox (INIsim), you will have less flexibility to adapt the neuron model; you'll basically have relu neurons.

albertopolito commented 3 years ago

Hi Bodo,

Thanks a lot for the clarification.

Best regards,

Alberto Viale.