Add benchmarks on the following standard models to the ReadTheDocs documentation:
brunel_alpha_nest.py: 12,500-node balanced network
...
Measure the runtime of NEST vs. NESTML for the same neuron model, i.e. the NEST native implementation vs. the NESTML-generated code.
Detailed benchmarking can use std::chrono, for example:
#include <chrono>
// take timestamps before and after the code of interest
auto t1 = std::chrono::steady_clock::now();
// ... code to benchmark ...
auto t2 = std::chrono::steady_clock::now();
// elapsed wall-clock time in microseconds
double duration = std::chrono::duration_cast<std::chrono::microseconds>( t2 - t1 ).count();
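Per-phase totals such as the "Total time in update()" figures reported below can be obtained by accumulating such measurements across calls. The following is only a minimal sketch of that idea; the class and member names (BenchmarkedNeuron, update_time_total_ms, spike_time_total_ms) are illustrative and not part of the actual NEST or NESTML sources:

#include <chrono>
#include <cstdio>

// Illustrative stand-in for a neuron model class; not the real NEST/NESTML code.
class BenchmarkedNeuron
{
public:
  void update()
  {
    const auto t1 = std::chrono::steady_clock::now();
    // ... state update for one simulation step would go here ...
    const auto t2 = std::chrono::steady_clock::now();
    update_time_total_ms += std::chrono::duration_cast< std::chrono::microseconds >( t2 - t1 ).count() / 1000.0;
  }

  void handle_spike()
  {
    const auto t1 = std::chrono::steady_clock::now();
    // ... spike delivery / queueing would go here ...
    const auto t2 = std::chrono::steady_clock::now();
    spike_time_total_ms += std::chrono::duration_cast< std::chrono::microseconds >( t2 - t1 ).count() / 1000.0;
  }

  void print_timers() const
  {
    std::printf( "Total time handling spikes: %.3e ms\n", spike_time_total_ms );
    std::printf( "Total time in update(): %.3e ms\n", update_time_total_ms );
  }

private:
  double update_time_total_ms = 0.0; // accumulated over all calls to update()
  double spike_time_total_ms = 0.0;  // accumulated over all calls to handle_spike()
};

In practice such instrumentation would be guarded by a compile-time switch so that the timing calls do not themselves distort the production numbers.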
Initial benchmarks show poor performance of the NESTML-generated code compared to the native NEST implementation (iaf_psc_alpha model):
NESTML
Total time handling spikes: 4.795e+03 ms
Total time in update(): 7.898e+02 ms
Simulation time: 293.35 s
NEST
Total time handling spikes: 2.553e+03 ms
Total time in update(): 2.219e+02 ms
Simulation time: 184.71 s