A deep learning library for spiking neural networks that is based on PyTorch, focuses on fast training, and supports inference on neuromorphic hardware.
The `reset_states` method of `DynapcnnCompatibleNetwork` in the dev/0.3 branch does not set the neuron states to zero, but to random integers. I suspect this causes a discrepancy between on-chip and simulation performance, because we usually reset the states of the simulated SNN model to zero between samples.
In my experiments, once I reset the on-chip neuron states to zero between test samples, exactly as in simulation, the gap between on-chip and simulation results disappeared.
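To illustrate the per-sample reset convention described above, here is a minimal sketch of a simulated integrate-and-fire layer with a zeroing reset between samples. The `IAFLayer` class and its method names are hypothetical, simplified stand-ins; sinabs' actual layer classes differ.

```python
import torch

# Hypothetical, minimal integrate-and-fire layer; not the sinabs implementation.
class IAFLayer:
    def __init__(self, n, threshold=1.0):
        self.threshold = threshold
        self.v_mem = torch.zeros(n)  # membrane state, carried across time steps

    def forward(self, x):
        self.v_mem = self.v_mem + x
        spikes = (self.v_mem >= self.threshold).float()
        self.v_mem = self.v_mem - spikes * self.threshold  # subtract on spike
        return spikes

    def reset_states(self):
        # Zero the state between samples, as is usual in simulation.
        self.v_mem = torch.zeros_like(self.v_mem)

layer = IAFLayer(4)
for sample in torch.rand(10, 5, 4):  # 10 samples, 5 time steps each
    layer.reset_states()             # without this, state leaks across samples
    for t in sample:
        layer.forward(t)
```

If the on-chip reset leaves random values in the neuron states instead of zeros, the hardware effectively skips the `reset_states()` step that the simulation performs, which would explain the performance gap.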
The original reset_states method for the on-chip DynapcnnCompatibleNetwork model is shown below.
```python
def reset_states(self):
    """
    Reset the states of the network.
    """
    if isinstance(self.device, str):
        device_name, _ = _parse_device_string(self.device)
        if device_name in ChipFactory.supported_devices:
            self.samna_device.get_model().apply_configuration(self.samna_config)
            return
    raise NotImplementedError
```
When I checked the `neurons_initial_value` fields of `self.samna_config`, I found that the values are not zero but random numbers, which means that even after re-applying the config to the chip, the neuron states remain random.
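The observation above can be reproduced with a small check that scans the config for non-zero initial states. The helper name is mine; it only assumes the `config.cnn_layers[i].neurons_initial_value` structure already used in this report.

```python
import torch

# Hypothetical diagnostic: returns True if any layer in the config
# carries a non-zero initial neuron state.
def has_nonzero_initial_values(config):
    return any(
        torch.tensor(lyr.neurons_initial_value).any().item()
        for lyr in config.cnn_layers
    )
```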
Solution
We need to manually set the `neurons_initial_value` of the samna config to zeros in the `self.make_config()` method.
```python
for idx, lyr in enumerate(config.cnn_layers):
    shape = torch.tensor(lyr.neurons_initial_value).shape
    zero_state = torch.zeros(shape, dtype=torch.int)
    zero_state = zero_state.tolist()
    config.cnn_layers[idx].neurons_initial_value = zero_state
```
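The loop above can be wrapped in a small standalone helper, e.g. to call from `make_config()` before the configuration is applied to the chip. The function name is mine; it only relies on the `config.cnn_layers[i].neurons_initial_value` structure shown above.

```python
import torch

# Hypothetical wrapper around the zeroing loop: replace every layer's
# neurons_initial_value with a zero tensor of the same shape.
def zero_initial_values(config):
    for idx, lyr in enumerate(config.cnn_layers):
        shape = torch.tensor(lyr.neurons_initial_value).shape
        config.cnn_layers[idx].neurons_initial_value = torch.zeros(
            shape, dtype=torch.int
        ).tolist()
    return config
```

With the initial values zeroed in the stored config, re-applying that config in `reset_states` actually restores a clean all-zero state, matching what the simulation does between samples.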