jmason42 opened this issue 5 years ago
I'm curious how this affects performance. Currently I have a version of complexation that uses this library, but it is a bit slower than the previous non-Gillespie approach. I think performance may make or break this library. Perhaps we should also take a look at the previous issues with Numba and see if we can get compilation working.
Yes, I was thinking about checking out Numba today. Actually, I'm right in the middle of writing a performance issue; I also have a small PR ready that should increase performance by maybe 25%.
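For reference, a minimal sketch of what Numba compilation could look like here; this is not arrow's actual code, the function name and the sign convention (reactants stored as negative stoichiometric coefficients) are assumptions, but the inner loop is the kind of thing `@njit` handles well:

```python
import numpy as np
from numba import njit

@njit
def propensities(stoichiometry, state, rates):
    # stoichiometry: (reactions x species) integer matrix, reactants negative (assumed)
    # state: current species counts
    # rates: per-reaction rate constants
    n_reactions, n_species = stoichiometry.shape
    out = np.empty(n_reactions)
    for r in range(n_reactions):
        a = rates[r]
        for s in range(n_species):
            coeff = stoichiometry[r, s]
            if coeff < 0:
                # falling factorial over the molecules consumed by this reaction
                for k in range(-coeff):
                    a *= state[s] - k
        out[r] = a
    return out
```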
I suspect that our major performance bottleneck right now is https://github.com/CovertLab/arrow/blob/d419c374ac2de2feb2ecc577fbed8d49ed2c996b/arrow/arrow.py#L13
which a sparse representation would alleviate. We could also just pre-compute that np.where call, effectively accomplishing the same thing.
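Roughly what I have in mind (a sketch, not arrow's actual API; the class and method names are hypothetical, and I'm assuming reactants appear as negative entries in the stoichiometric matrix): do the `np.where` once at construction time and keep only the per-reaction reactant indices around for the propensity loop.

```python
import numpy as np

class PrecomputedReactants:
    def __init__(self, stoichiometry):
        # stoichiometry: (reactions x species) integer matrix
        self.stoichiometry = stoichiometry
        # one (species_indices, consumed_counts) pair per reaction, computed once
        self.reactants = [
            (np.where(row < 0)[0], -row[row < 0])
            for row in stoichiometry
        ]

    def propensities(self, state, rates):
        out = np.empty(len(rates))
        for r, (species, counts) in enumerate(self.reactants):
            a = rates[r]
            for s, n in zip(species, counts):
                # falling factorial for n identical reactant molecules
                for k in range(n):
                    a *= state[s] - k
            out[r] = a
        return out
```

The point is just that the index search is paid once per system instead of once per propensity evaluation.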
Okay, back from break, this is the next thing I'm going to try : )
SSAs, at scale, usually represent sparsely connected reaction networks. As such, it's often more effective to store and access information on a per-reaction basis rather than storing a full matrix of reaction stoichiometries. This may be unproductive or even counter-productive for small systems; what we need is a large production system to benchmark against (e.g. WCM complexation).
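To make the per-reaction idea concrete, here is a small self-contained sketch (names and layout are mine, not arrow's): each reaction keeps only the species it touches, so applying a reaction scales with its number of participants rather than the total number of species.

```python
import numpy as np

def to_sparse(stoichiometry):
    """Convert a dense (reactions x species) matrix into per-reaction
    (species_indices, coefficients) pairs."""
    sparse = []
    for row in stoichiometry:
        species = np.where(row != 0)[0]
        sparse.append((species, row[species]))
    return sparse

def apply_reaction(state, sparse, reaction):
    """Update state in place for one firing of `reaction`."""
    species, coefficients = sparse[reaction]
    state[species] += coefficients

# Example: 3 species, 2 reactions (A + B -> C, C -> A)
dense = np.array([
    [-1, -1,  1],
    [ 1,  0, -1],
])
state = np.array([10, 10, 0])
sparse = to_sparse(dense)
apply_reaction(state, sparse, 0)   # fire A + B -> C once
print(state)                       # [9 9 1]
```

For a network like WCM complexation, where each reaction involves a handful of species out of thousands, this kind of layout avoids touching (or even storing) the mostly-zero columns.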