CovertLab / arrow

Stochastic simulations in Python
MIT License

Refactored propensity calculations and cached reaction stoichiometries #29

Closed · jmason42 closed this 5 years ago

jmason42 commented 5 years ago

It was bugging me that propensities were called 'distributions' and combinations were called 'propensities', so I reorganized that logic (and moved all of the pure math into a math submodule). Once that was done, it was easy to start caching the reaction/reactant relationships. This work might collide with any attempt to make the representation more sparse, but it might also do a good chunk of that work.
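
Not the actual arrow internals, just a minimal sketch of the kind of caching described here, assuming a dense (reactions × species) stoichiometric matrix; all names are hypothetical:

```python
import numpy as np

def cache_reactants(stoichiometry):
    # Precompute, per reaction, which species are consumed and in what
    # numbers, so propensity evaluation never rescans the full matrix.
    reactants = []
    for row in stoichiometry:
        species = np.where(row < 0)[0]
        reactants.append((species, -row[species]))
    return reactants

def propensities(rates, state, reactants):
    # Propensity = rate constant * number of distinct reactant combinations,
    # i.e. a falling factorial over each consumed species.
    result = np.zeros(len(rates))
    for index, (species, numbers) in enumerate(reactants):
        combinations = 1.0
        for s, n in zip(species, numbers):
            for k in range(n):
                combinations *= state[s] - k
        result[index] = rates[index] * combinations
    return result
```

With the per-reaction indices cached up front, each propensity evaluation only touches the species that actually participate.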

I also flattened some logic and replaced some defaults with None.

jmason42 commented 5 years ago

Forgot to mention - this is about a 3x speed-up on my machine.

prismofeverything commented 5 years ago

@jmason42 Looks great, I'll test this out!

prismofeverything commented 5 years ago

Hey @jmason42, thanks for this. It did result in a significant speedup (one generation went from 22 minutes to 18), but that is still far longer than the current simulation time of 8.5 minutes. I know that we are okay with adding time to the simulation if it results in a better system overall, but I feel this slowdown may still be outside the realm of reason, especially if we want to expand the algorithm to other processes.

I'll try the sparse approach next. By the way, any thoughts on an approach like this: https://www.ncbi.nlm.nih.gov/pubmed/19162916? It's an FPGA implementation of Gillespie with 100 million time steps per second, which seems like something that would be valuable to us. Who are the FPGA people at Stanford?
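
For what it's worth, here is one shape the sparse approach could take (a sketch using scipy, not a claim about how arrow should do it): keep the stoichiometry in CSR form so a firing reaction only touches the species it actually changes.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Toy network: three reactions over four species; most entries are zero,
# which is the typical shape for large reaction networks.
dense = np.array([
    [-1,  1,  0,  0],
    [ 0, -1,  1,  0],
    [ 0,  0, -2,  1],
])
stoichiometry = csr_matrix(dense)

def apply_reaction(state, stoichiometry, reaction):
    # Slice the CSR row directly: only the nonzero entries of the chosen
    # reaction are read or written, instead of a full dense row.
    start = stoichiometry.indptr[reaction]
    stop = stoichiometry.indptr[reaction + 1]
    state[stoichiometry.indices[start:stop]] += stoichiometry.data[start:stop]

state = np.array([100, 0, 10, 0])
apply_reaction(state, stoichiometry, 0)  # state is now [99, 1, 10, 0]
```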

jmason42 commented 5 years ago

Yeah, this doesn't really solve the performance issues to our desired level. As far as the FPGA goes, I'm not sure we need that level of performance; I think getting this properly compiled will be enough. That said, I don't see Numba working out going forward (although I don't really understand where the performance hangups are), and Cython has its own frustrations.
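
For reference, the kind of compilation being weighed here would look roughly like this with Numba (a hypothetical kernel, assuming numba is installed; no claim this is where the hangups were):

```python
import numpy as np
from numba import njit

@njit
def total_propensity(rates, state, stoichiometry):
    # The same combination counting as the pure-Python version, but compiled;
    # plain scalar loops like these are what Numba handles best.
    total = 0.0
    for j in range(stoichiometry.shape[0]):
        combinations = 1.0
        for s in range(stoichiometry.shape[1]):
            count = -stoichiometry[j, s]
            for k in range(count):  # empty range for products and bystanders
                combinations *= state[s] - k
        total += rates[j] * combinations
    return total
```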

prismofeverything commented 5 years ago

Yeah, I got the Cython version working, but that makes it hard to distribute (PyPI rejected my build due to compiled artifacts... there is a way around it, but it's moderately cumbersome). Could be worth it if it gave us the needed speedup, but currently it doesn't.
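
The workaround alluded to here is, as far as I know, shipping a source distribution, so PyPI receives .pyx/.c sources that compile at install time rather than prebuilt binaries; something like (module name hypothetical):

```python
# setup.py
from setuptools import setup, Extension
from Cython.Build import cythonize

setup(
    name="arrow",
    ext_modules=cythonize(
        [Extension("arrow.gillespie", ["arrow/gillespie.pyx"])]
    ),
)
```

`python setup.py sdist` then produces an upload PyPI accepts, at the cost of requiring a compiler on every install target.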

The other option is to implement it directly in C, without NumPy or anything (probably using the sparse matrix approach). I went through a few of the existing native libraries (like StochSim: https://lenoverelab.org/perso/lenov/stochsim.html), but the interface is so bad (systems can only be specified through SBML???) that I think it could be worth implementing exactly what we need with the interface we want. I might break out the old C++ compiler just to see what the performance difference would be.
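
As a reference point for what a native port would replace, the direct-method inner loop is small; given propensities computed as in the sketch further up, one step is just (hypothetical names):

```python
import numpy as np

def direct_method_step(state, propensities, stoichiometry, random_state):
    # One Gillespie direct-method step given precomputed propensities:
    # draw the waiting time, choose a reaction in proportion to its
    # propensity, and apply its net change to the state in place.
    total = propensities.sum()
    if total <= 0.0:
        return None  # nothing left that can fire
    interval = random_state.exponential(1.0 / total)
    choice = np.searchsorted(
        np.cumsum(propensities), random_state.uniform(0.0, total))
    state += stoichiometry[choice]
    return interval
```

A C version would be this same handful of operations over raw arrays, which is why hand-writing it seems tractable.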

jmason42 commented 5 years ago

I'd be fumbling badly through C/C++, so you'd need to take the lead on that.