NeuromorphicProcessorProject / snn_toolbox

Toolbox for converting analog to spiking neural networks (ANN to SNN), and running them in a spiking neuron simulator.
MIT License

How to implement a converted SNN model in C #105

Closed jimzhou112 closed 3 years ago

jimzhou112 commented 3 years ago

Hello!

I would like to re-implement the converted Loihi SNN model in C. The model only needs to handle inference, for future implementation on an FPGA using High-Level Synthesis. Do you have any pointers or resources you could point me to that show the operations performed to go from input to output in SNN Toolbox? Specifically the inner workings of the Convolutional and FC layers. I stepped through the code with a debugger but could not find where the calculations for the SNN output are done.

The model architecture I am using is:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (NxInputLayer)       (1, 16, 16, 1)            0         
_________________________________________________________________
0Conv2D_14x14x6 (NxConv2D)   (1, 14, 14, 6)            60        
_________________________________________________________________
1Flatten_1176 (NxFlatten)    (1, 1176)                 0         
_________________________________________________________________
2Dense_24 (NxDense)          (1, 24)                   28248     
=================================================================
Total params: 28,308
Trainable params: 28,308
Non-trainable params: 0

Thank you so much!!

rbodo commented 3 years ago

Not sure if you want to know about the toolbox simulator or Loihi. If you look at the pipeline here, SNN Toolbox provides a link to several simulation / execution frameworks to run the model after conversion. One of them is the built-in INIsim simulator, which runs on the CPU or GPU and basically implements each simulation time step as a forward pass through the network. The membrane potential dynamics are coded up here. The conv/fc layers are just normal Keras layers with this added state and a thresholding behavior.
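
In C-like terms, that added state and thresholding behavior boils down to a per-layer update along these lines (an illustrative sketch only, not the toolbox's actual Keras code; the names v_mem, input_current, spikes are made up):

/* Integrate the input current into a persistent membrane potential,
 * spike when the threshold is crossed, and reset by subtraction. */
void layer_step(float *v_mem, const float *input_current,
                float *spikes, int n, float threshold)
{
    for (int i = 0; i < n; ++i) {
        v_mem[i] += input_current[i];     /* integrate */
        if (v_mem[i] >= threshold) {
            spikes[i] = 1.0f;             /* emit a spike */
            v_mem[i] -= threshold;        /* reset by subtraction */
        } else {
            spikes[i] = 0.0f;
        }
    }
}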

If you want to re-implement the Loihi SNN, then you won't find anything within SNN Toolbox itself, because it only provides an entry point to the Intel Loihi NxSDK. I'd recommend looking at the new Lava framework or the NengoLoihi simulator.

jimzhou112 commented 3 years ago

Thanks, I had a look and I now understand the added state and thresholding behavior in the INIsim simulator. I currently have a C implementation of my own model, but the final output spikes at each time step are unfortunately different from the toolbox's. Could you check whether my implementation idea is correct?

My current network topology is a Conv layer followed by an FC layer.

Here is a high-level overview of my implementation; the config settings are attached at the end.

During each time step:

1. Add the pixel values of the input (mine are pre-normalized to be between 0 and 1) as the input current to the input membranes.
2. If any membrane passes the threshold (threshold = 1), that input neuron fires a spike and the membrane is decreased by the threshold (reset by subtraction).
3. These spikes (an array of 0s and 1s) are fed as the input to the first convolutional layer and convolved with the CNN weights and biases of that layer.
4. Results from the above convolutional layer are added as the input current to the 1st hidden layer neuron membranes.
5. Repeat the same firing behavior as in step 2, using the 1st hidden layer membranes.
6. These spikes are fed into the FC layer with the CNN weights and biases of the same layer.
7. Results from the above FC layer are added as the input current to the output layer membranes.
8. Repeat the same firing behavior as in step 2.
9. The spikes generated in step 8 are the output spikes for this time step, with 1 per output label.
10. The membranes of the input, hidden, and output layers carry over to the next time step.

Spike trains are accumulated across all time steps, and the output label with the greatest number of spikes is the prediction (sketched in C below).
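
Concretely, one time step of my C implementation looks roughly like this (the 3x3 kernel, channels-last flatten order, and the parameter names conv_w, conv_b, fc_w, fc_b are my own assumptions based on the model summary above; layer_step is the reset-by-subtraction update from your sketch):

/* One simulation time step for the network above: 16x16x1 input ->
 * Conv2D(6 filters, 3x3, valid padding assumed) -> Flatten -> Dense(24).
 * Membrane arrays v_in, v_conv, v_out persist across calls (step 10). */

void layer_step(float *v_mem, const float *input_current,
                float *spikes, int n, float threshold);  /* see sketch above */

/* Weights and biases exported from the trained ANN; the layout
 * (channels-last, row-major flatten) is an assumption on my part. */
extern float conv_w[6][3][3], conv_b[6];
extern float fc_w[24][1176], fc_b[24];

void snn_timestep(const float pixels[256],
                  float v_in[256], float v_conv[1176], float v_out[24],
                  float spike_counts[24])
{
    float s_in[256], i_conv[1176], s_conv[1176], i_out[24], s_out[24];

    /* Steps 1-2: pixel values as constant input current, threshold = 1 */
    layer_step(v_in, pixels, s_in, 256, 1.0f);

    /* Steps 3-4: convolve the input spikes, add bias -> hidden input current */
    for (int k = 0; k < 6; ++k)
        for (int y = 0; y < 14; ++y)
            for (int x = 0; x < 14; ++x) {
                float acc = conv_b[k];
                for (int dy = 0; dy < 3; ++dy)
                    for (int dx = 0; dx < 3; ++dx)
                        acc += conv_w[k][dy][dx] * s_in[(y + dy) * 16 + (x + dx)];
                i_conv[(y * 14 + x) * 6 + k] = acc;   /* channels-last flatten */
            }

    /* Step 5: threshold the hidden (conv) membranes */
    layer_step(v_conv, i_conv, s_conv, 1176, 1.0f);

    /* Steps 6-7: fully connected layer on the conv spikes -> output current */
    for (int j = 0; j < 24; ++j) {
        float acc = fc_b[j];
        for (int i = 0; i < 1176; ++i)
            acc += fc_w[j][i] * s_conv[i];
        i_out[j] = acc;
    }

    /* Steps 8-9: threshold the output membranes, one neuron per label */
    layer_step(v_out, i_out, s_out, 24, 1.0f);

    /* Accumulate spikes; argmax over spike_counts after the last time step */
    for (int j = 0; j < 24; ++j)
        spike_counts[j] += s_out[j];
}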

Could you let me know if I have missed anything?

Since I'm not able to step through the execution and see intermediate layer results, I haven't been able to debug the problem. Do you happen to know where I can trace through in order to display intermediate results for the INIsim simulator?

Thanks!!

Config settings:

config['paths'] = {
    'path_wd': path_wd,             # Path to model.
    'dataset_path': path_wd,        # Path to dataset.
    'filename_ann': model_name      # Name of input model.
}

config['tools'] = {
    'evaluate_ann': True,           # Test ANN on dataset before conversion.
    'normalize': True             
}

config['simulation'] = {
    'simulator': 'INI',           # Chooses execution backend of SNN toolbox.
    'duration': 32,                # Number of time steps to run each sample.
    'num_to_test': 3000,             # How many test samples to run.
    'batch_size': 1,
    'keras_backend': 'tensorflow'   # Which keras backend to use.
}
rbodo commented 3 years ago

> 6. These spikes are fed into the FC layer with the CNN weights and biases of the same layer.

I assume here you mean FC (not CNN) weights.

> 8. Repeat the same firing behavior as in step 2.

Are you using softmax or ReLU in the output layer? Softmax would require special treatment; I'd recommend using plain ReLU, at least while debugging.

> 9. The spikes generated in step 8 are the output spikes for this time step, with 1 per output label.

I assume you mean one neuron (not spike) per output label.

> Since I'm not able to step through the execution and see intermediate layer results, I haven't been able to debug the problem. Do you happen to know where I can trace through in order to display intermediate results for the INIsim simulator?

You'd have to enable eager execution in TensorFlow for the debugging to work. Alternatively, you could save the spikes and membrane potentials (see the log_vars option in the config) and compare against your C implementation offline. Also, I'd recommend simplifying the problem as much as possible, e.g. to a network with one input and one output neuron, with a fixed integer weight and zero bias, so you know the expected dynamics analytically. Then gradually add neurons, layers, random weights, biases, and so on.
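
For instance (just a sketch of the kind of analytic check I mean; all names are made up): a single neuron with weight 1, zero bias and a constant input of 0.5 against a threshold of 1 should fire exactly every second time step, which you can verify both in your C code and in the saved INIsim traces:

#include <stdio.h>

/* Single-neuron analytic check: constant input 0.5, threshold 1,
 * reset by subtraction -> one spike every second time step. */
int main(void)
{
    float v = 0.0f;
    const float input = 0.5f, threshold = 1.0f;
    for (int t = 1; t <= 8; ++t) {
        v += input;                  /* integrate */
        int spike = v >= threshold;
        if (spike)
            v -= threshold;          /* reset by subtraction */
        printf("t=%d  v=%.2f  spike=%d\n", t, v, spike);
    }
    return 0;                        /* expect spikes at t = 2, 4, 6, 8 */
}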

jimzhou112 commented 3 years ago

Thanks for the help, Bodo, I found my error.