Closed — jimzhou112 closed this issue 3 years ago
`Total params` is the number of weights in the model, which is different from the number of neurons. So the model has 20,820 parameters, not 20,820 activations/neurons. As you can see from the Keras `model.summary()` printout, the parameter count is dominated by the Dense layer: it is simply the number of neurons in the Dense layer multiplied by the number in the preceding Conv layer, plus biases (864 * 24 + 24 = 20,760). When the toolbox prints the `Number of neurons`, it simply adds up the 864 neurons of the Conv layer and the 24 neurons of the Dense layer (the input layer is not counted), giving 888. This neuron count has nothing to do with the parameter count reported by Keras above.
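The arithmetic above can be sketched as follows (the 864 and 24 are taken from the model discussed in this issue; everything else is plain counting):

```python
# Two different counts over the same two layers:
# a Conv layer producing 864 activations, feeding a Dense layer of 24 units.
conv_neurons = 864   # output activations of the Conv layer
dense_neurons = 24   # units of the Dense layer

# Keras parameter count for the Dense layer: one weight per
# input-output pair, plus one bias per unit.
dense_params = conv_neurons * dense_neurons + dense_neurons
print(dense_params)  # 20760

# SNN-Toolbox "Number of neurons": just the sum of the layer sizes
# (the input layer is not counted).
num_neurons = conv_neurons + dense_neurons
print(num_neurons)   # 888
```

The gap between 20,760 and 888 is exactly the point: one number counts weights, the other counts units.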
By the way, the `Number of synapses` (31,320) also differs from the number of parameters, because it does not account for weight sharing: it counts every connection ("synapse") in the network, regardless of whether the connection strength is shared, as it is in a conv layer. That is why the number of synapses is larger than the number of parameters.
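To make the weight-sharing point concrete, here is a minimal sketch with a hypothetical 1-D conv layer (the channel counts, kernel size, and output length below are made up for illustration and are not the layer from this issue):

```python
# Hypothetical 1-D conv layer: illustrates why the synapse count
# exceeds the parameter count when weights are shared.
in_channels, out_channels, kernel_size = 1, 4, 3
out_length = 30  # assumed spatial length of the output feature map

# Trainable parameters: one kernel per (in, out) channel pair, plus biases.
# The same kernel weights are reused at every output position.
params = out_channels * in_channels * kernel_size + out_channels
print(params)    # 16

# Synapses: every output neuron has kernel_size * in_channels incoming
# connections, and each is counted separately even though the weights
# are shared across positions.
out_neurons = out_channels * out_length
synapses = out_neurons * in_channels * kernel_size
print(synapses)  # 360
```

In a Dense layer the two counts coincide (minus biases), because no weights are shared; in a conv layer the synapse count scales with the output size while the parameter count does not.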
Hello,
I've been using SNN-Toolbox to convert an ANN into an SNN. The resulting SNN model has fewer neurons than the original ANN has activations. The ANN model architecture is:
As you can see, there are 20,820 activations. However, when the model is converted to an SNN, the output reports 888 neurons.
My understanding is that each neuron in the SNN should correspond to one activation in the original model, so in theory there should be 20,820 neurons. I noticed my results are consistent with the examples, which also show fewer neurons than original activations. What am I misunderstanding about how the conversion is done?
Thank you!!