scidash / neuronunit

A package for data-driven validation of neuron and ion channel models using SciUnit
http://neuronunit.scidash.org

Is parallel rheobase faster than binary search rheobase? #192

Closed russelljjarvis closed 5 years ago

russelljjarvis commented 5 years ago

In this notebook I compare the speed of both algorithms https://github.com/russelljjarvis/neuronunit/blob/dev/neuronunit/unit_test/druckman_tests.ipynb

4.450975932982318× speed up for NEURON
0.5183659191052845× speed up (i.e. a slow down) for rawpy

These results show that parallel rheobase is roughly 3.5-7× faster with the NEURON backend, but slower than binary search with the numba JIT (rawpy) backend, depending on the model.

This makes sense: numba JIT evaluations finish so quickly (especially with a smaller dt, thanks Justas) that a single evaluation rivals the interprocessor-communication time incurred by making the job parallel. This is not so with NEURON simulations, where a single simulation takes a long time (~1 s).
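To make that concrete, here is a minimal back-of-envelope cost model. All numbers and names here are hypothetical illustrations, not measurements from the notebook:

```python
import math

# Rough cost model (hypothetical numbers, not taken from the notebook):
#   t_sim  = wall-clock time of one simulation
#   t_ipc  = interprocess-communication overhead per parallel job
#   k      = iterations a serial binary search needs
#   n_cpus = workers available to the parallel search

def serial_time(t_sim, k):
    # binary search: k sequential simulations, no IPC cost
    return k * t_sim

def parallel_time(t_sim, t_ipc, k, n_cpus):
    # probing n_cpus amplitudes per round shrinks the bracketing
    # interval by ~(n_cpus + 1) per round, so fewer rounds are needed,
    # but every round pays the IPC overhead
    rounds = math.ceil(k / math.log2(n_cpus + 1))
    return rounds * (t_sim + t_ipc)

# NEURON-like regime: t_sim >> t_ipc, so parallel wins
print(serial_time(1.0, 12))                 # 12.0 s
print(parallel_time(1.0, 0.05, 12, 8))      # ~4.2 s

# JIT regime: t_sim is comparable to t_ipc, so parallel loses
print(serial_time(0.01, 12))                # 0.12 s
print(parallel_time(0.01, 0.05, 12, 8))     # ~0.24 s
```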

The reason parallel is faster when interprocessor communication time is much less than simulation evaluation time comes down to differences between the algorithms (see the sketch after this list):

In the case of binary search: each iteration runs a single simulation, and that result determines the next current amplitude to try, so the evaluations are strictly sequential.

In the parallel case using 8 CPUs: each round dispatches 8 candidate amplitudes simultaneously, so the bracketing interval shrinks far more per round of wall-clock time, at the cost of paying interprocess-communication overhead on every job.
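A minimal sketch of the two strategies, assuming a hypothetical `n_spikes(amp)` that runs one simulation at injection amplitude `amp` and returns a spike count (here a toy threshold function stands in for the real model call):

```python
import numpy as np
from multiprocessing import Pool

def n_spikes(amp):
    # Toy stand-in for a real simulation: spikes iff amp exceeds an
    # arbitrary "true" rheobase of 0.42. The real call would run the
    # model at this amplitude and count spikes.
    return int(amp >= 0.42)

def rheobase_binary(lo=0.0, hi=2.0, tol=1e-3):
    # one simulation per iteration; each result decides the next
    # amplitude, so the evaluations cannot be overlapped
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if n_spikes(mid) > 0:
            hi = mid   # spiked: rheobase is at or below mid
        else:
            lo = mid   # silent: rheobase is above mid
    return hi

def rheobase_parallel(lo=0.0, hi=2.0, tol=1e-3, n_cpus=8):
    # n_cpus simulations per round; the grid of results tightens the
    # bracket by a factor of ~(n_cpus + 1) per round of wall-clock time
    with Pool(n_cpus) as pool:
        while hi - lo > tol:
            amps = np.linspace(lo, hi, n_cpus + 2)[1:-1]
            spiked = pool.map(n_spikes, amps)
            for amp, s in zip(amps, spiked):
                if s > 0:
                    hi = amp
                    break
                lo = amp
    return hi

if __name__ == "__main__":   # guard required for multiprocessing
    print(rheobase_binary())     # -> ~0.42
    print(rheobase_parallel())   # -> ~0.42, in far fewer rounds
```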

The upshot of all this, @JustasB, is that we are both right: if my GA uses a NEURON backend model, then parallel rheobase is faster; if it uses forward Euler and JIT compilation, then binary search is faster, and it's the outermost loop of gene evaluation that should be parallelized instead.
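A hedged sketch of that alternative, with a toy `evaluate` standing in for building a JIT model from a gene, running the serial binary-search rheobase on it, and scoring it:

```python
from multiprocessing import Pool

def evaluate(params):
    # Toy fitness function. In the real GA this would construct a jit
    # model from `params`, find its rheobase with the serial binary
    # search, and score it against the experimental measurements.
    return sum(p * p for p in params)

if __name__ == "__main__":
    population = [(0.1 * i, 0.2 * i) for i in range(16)]
    with Pool() as pool:
        # IPC cost is paid once per gene rather than once per rheobase
        # iteration, so cheap jit simulations stay cheap
        fitness = pool.map(evaluate, population)
    print(fitness)
```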

That interprocessor communication incurs its own performance cost is also described here: https://www.anaconda.com/blog/developer-blog/parallel-python-with-numba-and-parallelaccelerator/

JustasB commented 5 years ago

Very interesting results!

russelljjarvis commented 5 years ago

Later in the same notebook I examine whether the DM tests execute faster in parallel. Using the rawpy backend they execute twice as fast in parallel; I assume they would be significantly faster still in parallel with NEURON.
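For completeness, a minimal sketch of that pattern, where a toy `run_test` with a fixed sleep stands in for judging one DM test against a model:

```python
import time
from concurrent.futures import ProcessPoolExecutor

def run_test(i):
    # Toy stand-in for one DM test evaluation; the real call would
    # judge a neuronunit test against the model (simulation-bound)
    time.sleep(0.5)
    return i

if __name__ == "__main__":
    t0 = time.time()
    with ProcessPoolExecutor() as ex:
        results = list(ex.map(run_test, range(8)))
    # well under the ~4 s a serial loop over 8 such tests would take
    print(results, f"{time.time() - t0:.2f} s")
```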