Closed: CloudyDory closed this issue 9 months ago.
I guess the difference lies in the compilation time of `brainpy.math.for_loop`. But I will perform more experiments to see what is going on behind such a difference.
Actually, `brainpy.math.for_loop` will be faster if we increase the number of simulation time steps from 10000 to 100000.
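One way to gauge how much of a run is compilation rather than execution is to time the first call of a jitted function against a later call. This is only a minimal sketch with a toy function standing in for the compiled simulation step, not the actual model from this issue:

```python
import time
import brainpy.math as bm

# Toy jitted function standing in for the compiled simulation step.
f = bm.jit(lambda x: bm.sum(x ** 2))

x = bm.random.rand(1_000_000)

t0 = time.time()
bm.as_numpy(f(x))  # first call: traced + compiled + executed
t1 = time.time()
bm.as_numpy(f(x))  # second call: executed only, reusing the compiled code
t2 = time.time()

print('first call  (compile + run): %.4f s' % (t1 - t0))
print('second call (run only):      %.4f s' % (t2 - t1))
```

The fixed compilation cost stays the same while the execution cost grows with the number of steps, so its relative share shrinks as the simulation gets longer.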
In the documentation on "monitor every multiple steps", two methods are provided: one using `brainpy.math.for_loop` and the other using `model.jit_step_run`. I have profiled the running speed of the two given examples and found that `model.jit_step_run` consistently runs faster than `brainpy.math.for_loop` (at least on my platform, on both CPU and GPU).

I am a bit surprised by this result, since using `model.jit_step_run` requires writing an explicit Python for-loop, which I would expect to be slow. What might be the reason behind the performance difference?

Profile code:
Outputs:
Even if I reverse the order of the two methods, the results are almost the same, so the difference is not caused by the JIT compilation time during the first run.