Ignoring the alternative propensity calculations for the moment: it would take some refactoring, but there is a way to do this exactly. The total propensity is (obviously) the sum of all propensities, and each propensity is some float (a positive rate constant) times the number of ways in which molecules can possibly interact with each other, which is always a non-negative integer. So the only way for the overall propensity (a float) to be zero is if all possible interaction counts for all reactions are also zero, which can be checked without requiring any arbitrary threshold.
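For illustration, here is a rough sketch of that exact check in C. The function and argument names and the flattened stoichiometry layout are hypothetical, not this repository's actual data structures; the point is just that the combination counts are integers, so "no reaction can fire" can be detected exactly rather than through a float threshold.

```c
#include <stdbool.h>
#include <stdint.h>

/* n choose k, exact in integer arithmetic: the number of distinct ways to
 * pick k reactant molecules out of n available. Returns 0 when n < k. */
static int64_t choose(int64_t n, int64_t k) {
    if (n < k) {
        return 0;
    }
    int64_t result = 1;
    for (int64_t i = 0; i < k; i++) {
        result = result * (n - i) / (i + 1);  /* always divides evenly */
    }
    return result;
}

/* True if at least one reaction still has a positive number of possible
 * reactant combinations. stoichiometry is a reactions x species matrix of
 * non-negative reactant stoichiometries, flattened row-major. */
bool any_reaction_possible(
        int n_reactions,
        int n_species,
        const int64_t *stoichiometry,
        const int64_t *counts) {
    for (int r = 0; r < n_reactions; r++) {
        int64_t combinations = 1;
        for (int s = 0; s < n_species; s++) {
            int64_t k = stoichiometry[r * n_species + s];
            if (k > 0) {
                combinations *= choose(counts[s], k);
            }
        }
        if (combinations > 0) {
            return true;
        }
    }
    return false;
}
```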
Now, as far as the alternative propensity calculations go, things become more complicated. There's no strict enforcement of what value the alternative propensities can take, so one could be nonzero even when a reaction really shouldn't happen (@prismofeverything is there handling for this?). Furthermore, there's no prior calculation of the number of possible interactions between molecules, so there are no integer values to check for zeros, and we're back to checking an arbitrary threshold. However, as I mentioned, we ought to be making sure that there are enough molecules for one reaction to occur anyway, otherwise there's the risk that copy numbers go negative. If that logic were introduced - which would be as simple as checking that the number of reactant molecules of each type is at least as great as its stoichiometry - then, again, we can do exact comparisons and no thresholding is necessary (see the sketch below).
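A sketch of that per-reaction check, under the same hypothetical layout and made-up names as above; the comparison is between integers, so it is exact:

```c
#include <stdbool.h>
#include <stdint.h>

/* True if one firing of the reaction is possible, i.e. every reactant is
 * present in at least its stoichiometric count. This is also exactly the
 * guard that keeps copy numbers from going negative. */
bool reaction_viable(
        int n_species,
        const int64_t *reactant_stoichiometry,  /* one row of the matrix */
        const int64_t *counts) {
    for (int s = 0; s < n_species; s++) {
        if (counts[s] < reactant_stoichiometry[s]) {
            return false;
        }
    }
    return true;
}
```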
In short - the question isn't "Is the total propensity greater than zero?", it's "Can any more reactions possibly occur?". They're equivalent in a mathematical sense but not in a computational sense.
Awesome, thanks for that rundown @jmason42. Based on these considerations and my subsequent testing, I am going to leave the condition as is, `if (total <= 0.0)`, since we are currently only calculating propensities in one way; as we add new forms we can revisit what this threshold should be.
The artifacts I was seeing (infinitesimals) were a temporary consequence of some memory access issues I was having while debugging this, and I have not seen them since those were fixed. All of my tests currently reach a total of exactly 0.0.
A report that a multi-gen simulation run took more memory to complete than before the complexation change led to the discovery of a memory leak in this repository. Ultimately it turned out to be quite an adventure into the low-level internals of numpy and Python/C interaction.
I used valgrind to find the leak; it's a pretty amazing tool if anyone ever needs to debug memory issues again (works for segfaults too): http://valgrind.org/
As it turns out, if you want to return an array from C as a numpy array, you can't free that pointer until after numpy is done with it. But when is numpy done with it? We don't know. I took this to mean that numpy would free it afterwards, but this assumption turned out to be incorrect.
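For anyone curious, the leaky pattern looks roughly like this (an illustrative sketch with made-up names, not the actual code from this repo; it assumes the numpy C API has been initialized with `import_array()` in the module init):

```c
#include <Python.h>
#include <numpy/arrayobject.h>
#include <stdint.h>

/* Wrap a malloc'd C buffer as a numpy array. numpy only borrows the
 * pointer: it never takes ownership, so it will neither free the buffer
 * itself nor signal when it is safe for us to free it. If the caller
 * never frees `values`, the buffer leaks. */
static PyObject *leaky_result(int64_t *values, npy_intp length) {
    npy_intp dims[1] = {length};
    return PyArray_SimpleNewFromData(1, dims, NPY_INT64, values);
}
```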
The fix is to create empty numpy arrays of the right shape from C and copy the values from the C arrays into them; then the original pointers can be freed safely. There were a couple of other improvements (I found a bug in the reaction selection process) and additions to the tests.
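Concretely, the fixed pattern is something like the sketch below (same caveats as above: made-up names, `import_array()` assumed):

```c
#include <Python.h>
#include <numpy/arrayobject.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Let numpy allocate and own the array, copy the C results into it, then
 * free the C buffer right away: no ambiguity about who owns what. */
static PyObject *copied_result(int64_t *values, npy_intp length) {
    npy_intp dims[1] = {length};
    PyObject *array = PyArray_SimpleNew(1, dims, NPY_INT64);
    if (array == NULL) {
        free(values);
        return NULL;
    }
    memcpy(PyArray_DATA((PyArrayObject *) array), values,
           length * sizeof(int64_t));
    free(values);  /* safe: numpy has its own copy now */
    return array;
}
```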
Happy to answer any questions, let me know.