dbrnz / opencor

OpenCOR is a cross-platform modelling environment which can be used to organise, edit, simulate and analyse CellML files.
http://www.opencor.ws/
GNU General Public License v3.0

Python scripts run slower after multiple executions inside GUI #23

Open dbrnz opened 5 years ago

dbrnz commented 5 years ago

From Gonzalo Talou:

I found another issue when multiple executions of a model are performed. In the attached script (example_loop.py), a model is called iteratively and each new execution takes more time than the previous one. In my case I have to perform ~1 million executions, and this cumulative effect makes the computational time intractable.

Is there any way to pin down this issue?

Attachments: example_loop.py.gz, pv_type.cellml.gz
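
The attached script isn't reproduced here; as a rough sketch of the pattern being described, using the OpenCOR Python API calls that appear later in this thread and a placeholder model path:

import time
import OpenCOR as oc  # OpenCOR's Python module, available in its GUI console

s = oc.openSimulation('pv_type.cellml')  # placeholder path to the attached model

# Run the same simulation over and over; on affected setups each
# iteration takes longer than the previous one.
for i in range(100):
    s.resetParameters()  # restore the model's initial parameter values
    s.clearResults()     # drop the previous run's results
    t0 = time.perf_counter()
    s.run()
    print(i, time.perf_counter() - t0)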

This is an issue when running inside the OpenCOR GUI, but not when running the same script from a Jupyter notebook or in batch from the command line. The OpenCOR GUI becomes significantly slower after multiple executions of the model. (Jupyter is slightly faster than the CLI, as the CLI has the initial overhead of starting a Jupyter notebook server.)

dbrnz commented 5 years ago

@agarny, this is something to keep in mind when running demos for, say, parameter estimation.

agarny commented 5 years ago

Yes, Gonzalo mentioned it to me. Do you have any idea of what might be going on here? I mean, I can't see what could be causing this problem from a pure OpenCOR viewpoint...

dbrnz commented 5 years ago

Hmm, another quick test (with a lot more simulation steps) has successive simulations in the GUI all taking about the same time and Jupyter slowing down...

import OpenCOR as oc
import time

filename = '/Users/dave/Documents/OpenCOR/python_script_slowdown/pv_type.cellml'
s = oc.openSimulation(filename)
s.data().setEndingPoint(10000000)

def r():
    s.resetParameters()
    s.clearResults()
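    # NB: time.clock() was deprecated in Python 3.3 and removed in Python 3.8;
    # it measures wall-clock time on Windows but processor time on Unix, so
    # time.perf_counter() is the modern choice for elapsed time.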
    t1 = time.clock()
    s.run()
    t2 = time.clock()
    return t2 - t1

for i in range(10):
    print(i, r())

Which in the GUI produces:

0 3.7100829999999974
1 3.613687000000006
2 3.647003000000005
3 3.617575000000002
4 3.662610000000001
5 3.6118939999999995
6 3.621793000000004
7 3.642975000000007
8 3.680886000000001
9 3.674117999999993

and Jupyter:

0 3.460616
1 3.769762
2 3.7733159999999994
3 3.5495540000000005
4 3.6004119999999986
5 3.740630000000003
6 4.099632999999997
7 3.823600000000006
8 3.9997909999999948
9 4.104401000000003

Repeating the loop in the GUI gives much the same times. However, in Jupyter, after a few repeats we are getting:

0 4.1978909999999985
1 4.120165
2 4.116837000000004
3 4.346628999999993
4 4.162823000000003
5 4.099050000000005
6 4.3687710000000095
7 4.160547000000008
8 4.1949010000000015
9 4.975065000000001

agarny commented 5 years ago

Getting weirder by the minute...

dbrnz commented 5 years ago

Then of course there is elapsed time (as reported above) versus process time. The times for the last five runs of a test in the GUI are reported both as:

5 3.6048319999999876
6 3.622029999999995
7 3.6679730000000177
8 3.6405180000000144
9 3.6577589999999987

and

Simulation time: 2s 585ms using CVODE.
Simulation time: 2s 648ms using CVODE.
Simulation time: 2s 624ms using CVODE.
Simulation time: 2s 627ms using CVODE.
Simulation time: 3s 686ms using CVODE.

so clearly OpenCOR's reported simulation time is not the same as elapsed clock time...
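
For what it's worth, the two clocks can be separated explicitly with Python's standard library; a minimal sketch, reusing the simulation object s opened in the script above:

import time

t_wall = time.perf_counter()  # wall-clock (elapsed) time
t_cpu = time.process_time()   # CPU time of this process only

s.run()  # the simulation opened earlier in this thread

print('elapsed:', time.perf_counter() - t_wall)
print('process:', time.process_time() - t_cpu)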

dbrnz commented 5 years ago

"Getting weirder by the minute..."

I'll second that!

agarny commented 5 years ago

FWIW, here is how simulation time is computed in OpenCOR. As you can see, it really just measures the time taken to run the simulation. Actually, it also includes the time taken to keep track of simulation data, as well as to slow the simulation down (using the delay set by the user via the wheel). Indeed, to exclude those times, we would need to retrieve the elapsed time and restart the timer many times, which would result in no elapsed time in the end (been there, done that... :)).
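
To illustrate the point (this is not OpenCOR's actual code; the helpers are hypothetical stand-ins): timing only the solver would mean reading and restarting a timer around every step, with the data-tracking and user-delay segments falling outside it.

import time

# Hypothetical stand-ins for OpenCOR's internals, for illustration only.
def take_solver_step():
    sum(i * i for i in range(10000))  # stands in for a CVODE step

def store_results():
    pass  # stands in for keeping track of simulation data

user_delay = 0.0  # stands in for the wheel-set delay in the GUI

solver_time = 0.0
for step in range(100):
    t0 = time.perf_counter()
    take_solver_step()
    solver_time += time.perf_counter() - t0  # timer restarted every step
    store_results()          # excluded from solver_time
    time.sleep(user_delay)   # excluded from solver_time

print('solver-only time:', solver_time)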

nickerso commented 5 years ago

I am also starting to see this problem (or a very similar problem at least). An example demonstrating my version of this issue is available here: https://models.physiomeproject.org/w/andre/SAN-ORd/file/e65b98c32226f05047cd3ca5ed6bbf22919887a1/python-testing.py

I should mention that, when running that script in @dbrnz's 2018-12-04 snapshot release, the first action potential runs instantly whereas the second one takes "forever". This is when running in the Python console in the GUI.