ModellingWebLab / project_issues

An issues-only repository for issues that cut across multiple repositories

Speed up model generation and simulation #52

Closed MichaelClerx closed 4 years ago

MichaelClerx commented 5 years ago

Generating a model takes 3.7 seconds at the moment, on my laptop. Waaaaay too long.

Once we have cellmlmanip and fccodegen split up a bit better we should profile this and find out what's taking so long. This should be well under 100ms.

MichaelClerx commented 5 years ago

Another thing to look into w.r.t. templating is that jinja seems to be designed for long-running server use. It has lots of options for precompiling, setting a maximum number of cached templates, etc. So perhaps it's not suitable as a single-use templating engine? @jonc125 ?

jonc125 commented 5 years ago

While jinja2 is often used as the templating engine for web servers (e.g. Flask, Django) it can be used standalone too, and does get used by other Python projects needing template functionality (e.g. Ansible, Terraform). Particularly given that 2 other parts of the Web Lab infrastructure use it, I think I’d want a strong reason to switch to something else.
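
For what it's worth, the server-oriented caching can actually work in our favour here: a standalone `Environment` can persist compiled templates to a bytecode cache on disk, so repeated short-lived code-generation runs skip the compile step. A minimal sketch (the `templates/` directory and `model.j2` name are made up for illustration):

```python
import os

from jinja2 import Environment, FileSystemLoader, FileSystemBytecodeCache

# Standalone use: no web framework involved. The bytecode cache persists
# compiled templates on disk across processes, so short-lived
# code-generation runs skip the template-compilation cost.
os.makedirs(".jinja_cache", exist_ok=True)
env = Environment(
    loader=FileSystemLoader("templates"),  # hypothetical template directory
    bytecode_cache=FileSystemBytecodeCache(".jinja_cache"),
    auto_reload=False,  # templates are static; skip mtime checks
)

template = env.get_template("model.j2")  # hypothetical template name
print(template.render(model_name="example"))
```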

MichaelClerx commented 5 years ago

Two things that are known to slow WL down:

  1. Parsing the protocol file (exceeds the recursion limit). This could be made faster by caching parsed libraries (see the sketch after this list). However, even with the libraries it's only reading 3 or 4 files, which should be much faster than it currently is.
  2. Pre-pacing could be a lot faster if we (1) didn't have a maximum step size, and (2) avoided logging during pre-pacing (not sure if this happens already!).
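
On point 1, a content-hash cache of the parsed libraries might look like the following sketch; `parse_protocol_library` (passed in as `parse_fn`) stands in for the real, slow parser:

```python
import hashlib
import pickle
from pathlib import Path

CACHE_DIR = Path(".protocol_cache")
CACHE_DIR.mkdir(exist_ok=True)

def cached_parse(path, parse_fn):
    """Return the parsed library for `path`, reusing a pickled result
    while the file's contents are unchanged."""
    source = Path(path).read_bytes()
    key = hashlib.sha256(source).hexdigest()
    cache_file = CACHE_DIR / (key + ".pickle")
    if cache_file.exists():
        return pickle.loads(cache_file.read_bytes())
    result = parse_fn(path)  # the existing (slow) parser
    cache_file.write_bytes(pickle.dumps(result))
    return result
```
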
MichaelClerx commented 4 years ago

Point 1 has been partially addressed with some caching. Cellmlmanip itself is still very slow though, so that could definitely use some work speeding it up.

jonc125 commented 4 years ago

Should start with some profiling. I think there are some relevant issues already in the cellmlmanip repo too.
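
As a starting point, something like the standard-library profiler run over the generation entry point would show where the time goes. A quick sketch (`generate_model` is a placeholder for the real cellmlmanip/codegen call):

```python
import cProfile
import pstats

def generate_model():
    """Placeholder for the real entry point (e.g. cellmlmanip parse +
    code generation for one CellML model)."""
    ...

cProfile.run("generate_model()", "model_gen.prof")
pstats.Stats("model_gen.prof").sort_stats("cumulative").print_stats(20)
```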

We've also discussed in the past speeding up simulation by removing the maximum step size and instead analysing the stimulus current so we can stop CVODE at the point where it triggers. This would be much easier to do now we've replaced pycml, I think. We already avoid (most) logging during pre-pacing, though that's the choice of the protocol author.
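
For reference, the "stop at the stimulus" idea looks roughly like this with event detection. A minimal sketch using SciPy's `solve_ivp` as a stand-in for CVODE (with CVODE itself, rootfinding or a `tstop` serves the same purpose; `STIM_START` and the toy `rhs` are made up):

```python
from scipy.integrate import solve_ivp

STIM_START = 50.0  # ms; assumed stimulus onset (read from the model/protocol)

def rhs(t, y):
    """Toy right-hand side standing in for a cell model."""
    return -0.1 * y

def stimulus_onset(t, y):
    return t - STIM_START  # crosses zero exactly at the stimulus

stimulus_onset.terminal = True  # halt integration at the event

# First leg: no max_step, so the solver takes steps as large as accuracy
# allows, but it cannot step over the stimulus discontinuity...
leg1 = solve_ivp(rhs, (0.0, 100.0), [1.0], events=stimulus_onset, rtol=1e-6)

# ...then restart cleanly from the discontinuity.
leg2 = solve_ivp(rhs, (leg1.t[-1], 100.0), leg1.y[:, -1], rtol=1e-6)
```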

MichaelClerx commented 4 years ago

Yeah, let's close this as a general issue :-)

mirams commented 4 years ago

> We've also discussed in the past speeding up simulation by removing the maximum step size and instead analysing the stimulus current so we can stop CVODE at the point where it triggers. This would be much easier to do now we've replaced pycml, I think. We already avoid (most) logging during pre-pacing, though that's the choice of the protocol author.

Yeah, we would also want to stop/start at sudden voltage changes in voltage-clamp experiments, but my hunch is that this is low down the list of bottlenecks!