thesamovar closed this issue 6 years ago.
Good catch, I think I never used big enough `SpikeGeneratorGroup`s to notice this. Probably there's a faster implementation of the uniqueness check as well?
Working on this now. Incidentally, the code I was working on had around 60M spikes being sent to `SpikeGeneratorGroup`, so it's not surprising we didn't really notice this before!
Cool. BTW: are your 60M spikes unsorted in the first place? If not, don't forget the `sorted=True` option :)
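If the spikes aren't already ordered, they can be sorted once up front so the flag applies. A minimal NumPy sketch (the arrays here are illustrative, not from the code in the issue):

```python
import numpy as np

# Hypothetical unsorted spike data: neuron indices and spike times (seconds)
indices = np.array([2, 0, 1, 0])
times = np.array([0.003, 0.001, 0.002, 0.004])

# Sort primarily by time, breaking ties by neuron index;
# the last key passed to lexsort is the primary sort key
order = np.lexsort((indices, times))
indices, times = indices[order], times[order]
```

After this, the arrays can be passed along with `sorted=True` so the group skips re-sorting them.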
The problem is in `SpikeGeneratorGroup.__init__` and `before_run` (and also in duplicated code in `set_spikes`).
Huge amounts of time are spent in `ndarray.sort()` and `unique`. The problem appears to be the use of recarrays, which are apparently very slow. The recarray-based sorting can be replaced with plain-array code that is much faster: in some code I was working on, the time drops from 90s to about 17s.
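This is not the exact code from the fix, but a sketch of the idea: replace the recarray sort with `np.lexsort` on the plain arrays, and note that once the data is sorted, the duplicate check reduces to comparing adjacent entries, avoiding `np.unique` entirely. The array sizes and names below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
indices = rng.integers(0, 1000, n)
times = rng.random(n)

# Slow variant (the reported bottleneck): sorting via a record array
rec = np.rec.fromarrays([times, indices], names=['time', 'index'])
rec.sort(order=['time', 'index'])

# Fast variant: lexsort on the plain arrays (primary key goes last)
order = np.lexsort((indices, times))
sorted_times, sorted_indices = times[order], indices[order]

# Duplicate check without np.unique: after sorting, any duplicate
# (time, index) pair must sit next to its twin
pairs_equal = (np.diff(sorted_times) == 0) & (np.diff(sorted_indices) == 0)
has_duplicates = bool(pairs_equal.any())
```

Both variants produce the same ordering; the plain-array version just avoids the per-element overhead of the structured dtype.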
I will submit a PR soon since it shouldn't take long to implement this.