Closed CloudyDory closed 4 months ago
So many thanks. It's very interesting.
However, I have to suggest using the taichi interface in `brainpy.math.XLACustomOp`
for performance. This is because purely heterogeneous delay retrieval is very expensive on GPU. Instead, we can use the taichi custom-op interface to merge the delay retrieval and the sparse computation into a single kernel, minimizing the memory-indexing overhead.
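To make the fusion idea concrete, here is a minimal numpy sketch of the logic such a fused kernel would implement. The function name, argument layout, and rotating-buffer indexing are illustrative assumptions, not BrainPy's actual API; a real taichi kernel would run the synapse loop in parallel instead of in Python:

```python
import numpy as np

def fused_delayed_sparse_update(spike_buffer, head, pre_ids, post_ids,
                                delays, weights, num_post):
    """For each synapse, read the pre-synaptic spike at that synapse's own
    delay and accumulate its weighted contribution into the post-synaptic
    current in one pass, without materializing the per-synapse spike array.

    spike_buffer : [max_delay_length, num_pre_neurons] rotating history,
                   where row (head + d) % max_delay_length holds the spikes
                   emitted d steps ago (assumed layout).
    """
    max_len = spike_buffer.shape[0]
    post_current = np.zeros(num_post)
    for k in range(len(pre_ids)):
        # Delay retrieval and sparse accumulation fused in one loop body.
        delayed_spike = spike_buffer[(head + delays[k]) % max_len, pre_ids[k]]
        if delayed_spike:
            post_current[post_ids[k]] += weights[k]
    return post_current
```

The point of the fusion is that the intermediate 1-d per-synapse spike array never exists in memory; each delayed spike is read once and consumed immediately.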
Thank you for the very valuable suggestion. Performance is indeed an issue here. I am very unfamiliar with `XLACustomOp`
and the taichi language, so it may take some time to optimize the code.
Description
Currently, the synaptic delay variable in BrainPy only supports adding a single delay to a group of synapses. In practice, we may encounter situations where a large number of synapses are governed by similar dynamics, but with different delay lengths. This PR introduces a new class called `HeteroLengthDelay` in file `brainpy/_src/math/delayvars.py`. It is modified from the `LengthDelay` class in the same file, but with the following changes:

- `__init__()` requires as input the delay length of each synapse and the number of synapses each pre-synaptic neuron has. The array of delay lengths should be sorted according to the pre-synaptic neuron index.
- The output of the `retrieve()` function is a 1-d array of spikes delivered to each synapse. The length of the array is the number of synapses, not the number of post-synaptic neurons.
- `numpy` and `brainpy.math` are now imported at the start of the file.

The new class internally stores the previous spikes in a matrix with dimension `[max_delay_length, num_pre_neurons]`. It should work as long as this matrix does not exceed memory constraints.

How Has This Been Tested
TO DO
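One possible starting point for a unit test is to check the retrieval against a plain numpy reference. Below is a simplified sketch of the mechanism described above; the class name, buffer layout, and method signatures are assumptions for illustration, not the actual `HeteroLengthDelay` implementation:

```python
import numpy as np

class HeteroDelaySketch:
    """Numpy reference for per-synapse delay retrieval (illustration only)."""

    def __init__(self, delays, pre_ids):
        # delays:  delay length (in steps) of each synapse.
        # pre_ids: pre-synaptic neuron index of each synapse.
        self.delays = np.asarray(delays)
        self.pre_ids = np.asarray(pre_ids)
        self.max_len = int(self.delays.max()) + 1
        num_pre = int(self.pre_ids.max()) + 1
        # History matrix with dimension [max_delay_length, num_pre_neurons].
        self.buffer = np.zeros((self.max_len, num_pre))
        self.head = 0  # row holding the most recent spikes

    def update(self, spikes):
        # Rotate the head backwards and write the newest spikes there,
        # so row (head + d) % max_len holds the spikes from d steps ago.
        self.head = (self.head - 1) % self.max_len
        self.buffer[self.head] = spikes

    def retrieve(self):
        # One delayed spike per synapse, each at its own delay:
        # the result has one entry per synapse, not per post neuron.
        rows = (self.head + self.delays) % self.max_len
        return self.buffer[rows, self.pre_ids]
```

The `retrieve()` call is a single fancy-indexing operation, which is exactly the scattered memory access that the taichi fusion suggested above would aim to hide inside the sparse-computation kernel.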