SciML / Catalyst.jl

Chemical reaction network and systems biology interface for scientific machine learning (SciML). High performance, GPU-parallelized, and O(1) solvers in open source software.
https://docs.sciml.ai/Catalyst/stable/

Alternative ReactionNetwork Lowering #792

Open ChrisRackauckas opened 4 years ago

ChrisRackauckas commented 4 years ago

It would be nice to support automatically generating this set of ODEs from the ReactionNetwork.

isaacsas commented 4 years ago

Tau-leaping too.

isaacsas commented 4 years ago

What does "the master equation as a PDESystem" refer to here? Do you mean a PDE for the corresponding generating function, or for spatial particle systems?

ChrisRackauckas commented 4 years ago

For the probability distribution

isaacsas commented 4 years ago

Ahh, OK, so you are thinking of it as a differential-difference equation. Could we use some of the infinite-dimensional linear algebra tooling to represent the adjoint of the generator in the master equation? (I know next to nothing about whether it would be worthwhile, but that would presumably allow us to "exactly" represent the equation prior to trying to solve it numerically.)

ChrisRackauckas commented 4 years ago

We can represent it via an InfiniteArray of equations, and then find ways to deal with it. Closed systems would be finite of course.

isaacsas commented 4 years ago

Yup, it would just be nice to have a representation that is "exact", so that different methods can truncate / approximate as they see fit.

vnminin commented 3 years ago

@ChrisRackauckas, UCI-MCSB PhD student @yushang-lai is trying to implement LNA inference in Turing. We have run into a problem with the covariance matrix ODE: https://discourse.julialang.org/c/domain/probprog/48

Do you have any advice for us? Thanks!

ChrisRackauckas commented 3 years ago

@vnminin oh, someone just put up a whole library of moment closures: https://github.com/augustinas1/MomentClosure.jl

vnminin commented 3 years ago

Thanks, @ChrisRackauckas. Maybe helpful, but it's hard to tell, because there is almost no documentation.

Our main problem is that Turing does not like our way of enforcing positive definiteness of the covariance matrix -- the ODE solve returns a matrix that is only nearly positive definite.

ChrisRackauckas commented 3 years ago

https://augustinas1.github.io/MomentClosure.jl/dev/

ChrisRackauckas commented 3 years ago

For undocumented libraries, look at their tests: https://github.com/augustinas1/MomentClosure.jl/blob/main/test/moment_convert_tests.jl

vnminin commented 3 years ago

Thanks, @ChrisRackauckas. I am still not sure how moment closures will help us (LNA is a different technique unless I am missing some equivalence). Maybe you are hinting that the above library enforces the positive-definiteness constraint gracefully, so Turing does not get mad and we can copy that... We'll try to investigate.

ChrisRackauckas commented 3 years ago

LNA is also known as "Normal Closure" or "Zero Closure" to second order. I pointed to a file which makes use of those conversions.

vnminin commented 3 years ago

I am not an expert on moment closures, so I could be wrong, but at least this paper contrasts LNA and normal closure: https://arxiv.org/pdf/1409.1096.pdf

So maybe not the same?

ChrisRackauckas commented 3 years ago

Yeah Normal Closure is the LNA with the covariance adjustment, and Zero Closure is the simple LNA.

yushang-lai commented 3 years ago

Hi Christopher and Vladimir,

I checked, and MomentClosure.jl is an alternative framework for formulating the ODE systems, but the solver is still OrdinaryDiffEq.jl, based on the three examples in this new package. It seems to help solve the ODEs in less time and with less memory allocation, but it does not show that the results are more precise. I will test whether it gives results with smaller numerical error, so that I don't need to manually force the output matrix to be symmetric and positive definite.

I'm not sure what they check in https://github.com/augustinas1/MomentClosure.jl/blob/main/test/moment_convert_tests.jl. Could you explain more about this test?

Best, Yushang

yushang-lai commented 3 years ago

https://discourse.julialang.org/t/forwarddiff-dual-typeerror-while-solving-ode-with-3-by-3-matrix-input-and-proximaloperators-package-which-keep-the-output-positive-definite/56039

I think this issue could be solved in another way instead. The error is raised because Turing's automatic differentiation changes the data into its own type during inference. For example, if I set the backend to ForwardDiff, then the ODE solution's data is automatically changed into the ForwardDiff.Dual type, which I cannot apply symmetry() to, since symmetry() only takes Float64 input. Moreover, the ForwardDiff.Dual output cannot be forced into Float64 by convert(). The reason might be that it has been stored as a dense matrix which cannot be changed.

Could you give me some idea about this? Thanks a lot in advance!

ChrisRackauckas commented 3 years ago

I could not apply symmetry() on it

symmetry isn't a Base Julia function. Is there a reason you don't just make it generic? Have you discussed an MWE (minimal working example) in the Slack?

It seems it helps solving ODE with less time and less memory allocation but not showing that the results could be more precise.

What it does is generate the moment expressions under different closures. Under the right choice of closure, it's just a transformation from a ReactionSystem to an ODE for the LNA, which you can solve and thus get the LNA means and variances.
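A rough sketch of that workflow, for anyone following along. The toy network is made up for illustration, and the MomentClosure.jl calls (generate_raw_moment_eqs, moment_closure, deterministic_IC) follow its documentation as I recall it, so exact signatures and the way initial conditions and parameters are passed may differ between versions:

```julia
using Catalyst, MomentClosure, OrdinaryDiffEq

# Toy birth-death network (older Catalyst versions require listing the
# parameters after `end`, i.e. `end k1 k2`)
rn = @reaction_network begin
    k1, 0 --> X
    k2, X --> 0
end

# Raw moment equations up to order 2, then close with the "normal" closure
raw_eqs    = generate_raw_moment_eqs(rn, 2)
closed_eqs = moment_closure(raw_eqs, "normal")

# Map a deterministic initial molecule count onto the moment variables
u0map = deterministic_IC([10.0], closed_eqs)
pmap  = [:k1 => 2.0, :k2 => 0.2]   # some versions may want the symbolic parameters instead

# The closed moment system is just a set of ODEs: solve as usual
oprob = ODEProblem(closed_eqs, u0map, (0.0, 50.0), pmap)
sol   = solve(oprob, Tsit5())
```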

yushang-lai commented 3 years ago

Sorry, I didn't type it correctly in the previous email. I used Symmetric() from the LinearAlgebra package. Here is the code:

[code attached as an image; not included]

prox() is from the ProximalOperators.jl package and is used to make the matrix positive semi-definite.

I haven't discussed an MWE in the Slack yet. Could you send me a tutorial on creating an MWE? I could not find one on Google.

I will read MomentClosure.jl carefully and write the code in the next few days.

Thanks a lot in advance! Best, Yushang

ChrisRackauckas commented 3 years ago

I haven't discussed an MWE in the Slack yet. Could you send me a tutorial on creating an MWE? I could not find one on Google.

I mean, it's no different from normal. Just simplify until you can't anymore.

vnminin commented 3 years ago

Sorry for the pause, and thanks @ChrisRackauckas for replying to Yushang's and my questions.

I am still not sure I understand all the equivalences between moment closures and the LNA. Chris, if you can point us to some references, it would be great. When I do a quick search, I keep finding papers that specifically compare and contrast moment closure methods and the LNA. For example, this paper (https://arxiv.org/pdf/1711.07383.pdf) says "[LNA] agrees with the two-moment approximation in the macroscopic limit of large volumes [27], namely in the limit of small noise" and "The advantages of [LNA] are numerous: (i) the mean and variance are always guaranteed to be positive and hence physically meaningful (unlike moment-closure approximations)..."

ChrisRackauckas commented 3 years ago

IIRC the derivation of the LNA is simply: expand out the moments, declare moments higher than 2 equal to zero, then reduce and solve for the ODEs that give the evolution of the first two moments. That zeroing process is a moment closure, what MomentClosure.jl calls "zero moment closure". It's only one of many moment closure methods, and there are dozens, so a statement like "the mean and variance are always guaranteed to be positive and hence physically meaningful (unlike moment-closure approximations)" doesn't even make sense without reference to what or how the moment closure approximation is done. So I think it's just a terminology thing where they are referring to one specific way of doing it, instead of the whole class of methods (of which the LNA is one).

Maybe @augustinas1 can chime in and clarify a bit.

But anyways, I'm not opposed to having an LNA transformation in the library. I haven't gotten to it because it's a good starting project for a new contributor (or undergrad project), to take a description like https://tanhevg.github.io/GpABC.jl/latest/overview-lna/ and code up the transformation. At the end of the day I think we need something called linear_noise_approximation because it's common enough, so I'm willing to take a PR on it. However, it would still be good to clear up the terminology.
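For reference, the ODEs that such a linear_noise_approximation transformation would generate (in the notation of the GpABC overview linked above, with explicit system-size factors omitted) are, in sketch form,

\frac{d\phi}{dt} = S\,a(\phi), \qquad \frac{d\Sigma}{dt} = J(\phi)\,\Sigma + \Sigma\,J(\phi)^\top + S\,\operatorname{diag}\!\big(a(\phi)\big)\,S^\top,

where S is the stoichiometry matrix, a(\phi) is the vector of propensities evaluated at the macroscopic mean \phi, J(\phi) = \partial\big(S\,a(\phi)\big)/\partial\phi is the drift Jacobian, and \Sigma is the covariance of the Gaussian fluctuations about \phi.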

vnminin commented 3 years ago

I agree with you @ChrisRackauckas that this is probably a terminology issue. Here is an example of a paper that seems to include the LNA among moment closure methods, though I don't think they use the name "zero moment closure" for the LNA: https://ieeexplore.ieee.org/abstract/document/4537208?casa_token=2gyVgM4MwOUAAAAA:x12m4gWaEhfvgTNtZaHy99m301eDrxPU7muqO5BmcR9eWhfOaUlHXB5TPfkXMxad6yoWdeSts-A

Sorry for all the pestering about terminology. It is important to understand, because if the LNA is already implemented under the name "zero moment closure" and(!) plays nicely with Turing, it would be super helpful to @yushang-lai, who has been struggling to make his own LNA implementation play nicely with Turing.

As is almost always the case, implementing these techniques to simulate from the process is much, much easier than performing inference from noisy data.

ChrisRackauckas commented 3 years ago

What's his issue with the zero moment closure? It should be relatively straightforward since it just becomes ODEs. None of the transformation needs to be done within Turing, so it shouldn't be any different from the ODE demo.

vnminin commented 3 years ago

You would think, right? :) @yushang-lai has a reproducible example of what goes wrong: https://discourse.julialang.org/t/forwarddiff-dual-typeerror-while-solving-ode-with-3-by-3-matrix-input-and-proximaloperators-package-which-keep-the-output-positive-definite/56039

ChrisRackauckas commented 3 years ago

That has nothing to do with the ODE part. You can delete the ODE and still see the issue. Of course Symmetric(convert(Array{Float64,2}, phi_new)) is guaranteed to fail; use Symmetric(convert(Array{T}, phi_new)) or similar instead. Minimal reproducible examples are more helpful.

vnminin commented 3 years ago

Well, that requires more knowledge than we have :) Thanks for replying to silly questions!

ChrisRackauckas commented 3 years ago

Dual numbers are not floating point numbers. See https://mitmath.github.io/18337/lecture8/automatic_differentiation.html for how it works. The documentation of ForwardDiff.jl says:

https://juliadiff.org/ForwardDiff.jl/latest/user/limitations.html

The target function must be written generically enough to accept numbers of type T<:Real as input (or arrays of these numbers). The function doesn't require a specific type signature, as long as the type signature is generic enough to avoid breaking this rule. This also means that any storage assigned or used within the function must be generic as well (see this comment for an example).

As in, don't require Float64.
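A small self-contained illustration of that point (f_strict and f_generic are hypothetical names; the failure mirrors the convert(Array{Float64,2}, ...) call discussed above):

```julia
using ForwardDiff, LinearAlgebra

# Hard-coding Float64 blocks ForwardDiff, which calls the function with Dual numbers:
f_strict(x::Vector{Float64}) = sum(Symmetric(convert(Matrix{Float64}, x * x')))

# Written generically over any Real element type, the same computation differentiates fine:
f_generic(x::AbstractVector{<:Real}) = sum(Symmetric(x * x'))

ForwardDiff.gradient(f_generic, [1.0, 2.0, 3.0])   # works
# ForwardDiff.gradient(f_strict, [1.0, 2.0, 3.0]) # fails with a MethodError, because the
#                                                 # input is a Vector of ForwardDiff.Dual,
#                                                 # not Float64
```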

ChrisRackauckas commented 3 years ago

I would highly recommend asking this kind of question in the Julia Slack (JuliaLang.org/slack), in the #autodiff or #diffeq-bridged channels. But to get help quicker, I would recommend trying to minimize your problem first. It was clear there hadn't been an effort to minimize, so I put a one-week reminder on when to respond to it, hoping your student would narrow it down first. Basically, delete stuff until you find the real issue. If that had been done, the convert(Array{Float64}, x) under ForwardDiff would've been found immediately. But if you need help doing that kind of thing, the Slack channels are where to go.

vnminin commented 3 years ago

Thanks Chris! This is our first project in Julia, so we are often confused and lost. Joining Slack is a good idea. Thanks again for all your advice!

yushang-lai commented 3 years ago

Thanks very much Chris and Vladimir!

I tried GpABC a few months ago. The LNA method in GpABC is not the one we can use: it creates the whole ODE trajectory first and applies an MvNormal to every save point at once, which is not what we want (https://tanhevg.github.io/GpABC.jl/latest/overview-lna/). The LNA algorithm we use splits the whole time span into several intervals: we solve the ODE over the first interval and apply an MvNormal at the end of it, then go to the second interval, and so on. Depending on what initial condition we use for each interval, this is further classified as restart LNA or non-restart LNA. I will look into their code again to see whether it has a tactic for keeping the covariance matrix symmetric and positive definite.
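For illustration, a minimal sketch of the interval-wise "restart" scheme just described, using a hypothetical 1-D birth-death model in molecule-number units (no explicit system-size factor); lna_rhs!, restart_lna_loglik, and the example data are made up for this sketch:

```julia
using OrdinaryDiffEq, Distributions

# Hypothetical birth-death model: 0 --k1--> X, X --k2--> 0.
# LNA mean m and variance v evolve by
#   dm/dt = k1 - k2*m
#   dv/dt = -2*k2*v + k1 + k2*m
function lna_rhs!(du, u, p, t)
    m, v = u
    k1, k2 = p
    du[1] = k1 - k2 * m
    du[2] = -2k2 * v + k1 + k2 * m
end

# Restart-LNA log-likelihood for observations `y` at times `tobs`:
# propagate (m, v) over each inter-observation interval, score the observation
# with Normal(m, sqrt(v)), then restart the next interval at the observed value
# with zero variance (a non-restart variant would carry (m, v) forward unchanged).
function restart_lna_loglik(y, tobs, p; m0 = y[1])
    ll = 0.0
    m = m0
    for i in 2:length(tobs)
        prob = ODEProblem(lna_rhs!, [m, 0.0], (tobs[i-1], tobs[i]), p)
        sol = solve(prob, Tsit5(), save_everystep = false)
        m_end, v_end = sol.u[end]
        ll += logpdf(Normal(m_end, sqrt(v_end)), y[i])
        m = y[i]          # restart: condition on the observation
    end
    return ll
end

# Example usage with made-up data:
tobs = collect(0.0:1.0:5.0)
y = [10.0, 12.0, 11.5, 13.0, 12.2, 12.8]
restart_lna_loglik(y, tobs, [2.0, 0.2])
```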

I have googled all the related issues and discussions. I tried Symmetric(convert(Array{T}, phi_new)) with T set to Real at the beginning of the function, following a link I found earlier. In that discussion the asker says this worked for his case (no ODE involved, but code like Symmetric(P'DP)), though he also says that using the Real type degrades performance a lot. But when I tried my ODE example with Real types in Turing, it still did not work.

In fact, prox() is from the ProximalOperators.jl package; it tries to make the input matrix positive semi-definite and automatically applies Symmetric() to the input, so I don't need to call Symmetric() again explicitly. The real issue is that phi_new is a Dual type, and the Symmetric() call inside prox() can only be applied to Float64.

I will try to minimize the problem as much as I can and ask in the Julia Slack (JuliaLang.org/slack), in the #autodiff or #diffeq-bridged channels. I'm currently learning about moment closures and will finish reading the document you sent me about the Dual data type.

Best, Yushang

ChrisRackauckas commented 3 years ago

In fact, prox() is from the ProximalOperators.jl package; it tries to make the input matrix positive semi-definite and automatically applies Symmetric() to the input, so I don't need to call Symmetric() again explicitly. The real issue is that phi_new is a Dual type, and the Symmetric() call inside prox() can only be applied to Float64.

Yes, I mentioned on Discourse that the real issue is ProximalOperators.jl. Just dev the package and remove all of the restrictions like https://github.com/kul-forbes/ProximalOperators.jl/blob/1f8d02bbe18eda6b9ec611ea946fca8c444c4d9d/src/functions/indPSD.jl#L94-L135 . They specifically wrote a non-differentiable library, so you just need to revert that (and open a PR).
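For anyone unfamiliar with that workflow, "dev" refers to Pkg's develop mode, which checks out an editable copy of the package (a minimal sketch; the path shown is the default and may differ on your machine):

```julia
using Pkg
Pkg.develop("ProximalOperators")   # or, in the REPL package mode: ] dev ProximalOperators
# An editable clone is placed under ~/.julia/dev/ProximalOperators/ by default;
# edits there (e.g. relaxing ::Matrix{Float64} annotations to generic Real element
# types) take effect after restarting Julia so the package recompiles.
```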

augustinas1 commented 3 years ago

Hi all, sorry I'm late to the party, but hopefully I can still help clear up some of the confusion between the LNA and moment closure approximations. In short, they are two distinct methods and no moment closure scheme is equivalent to the LNA.

LNA. Using Van Kampen's system size expansion one can write down the moments of the chemical master equation as a series in powers of the inverse system size parameter (e.g. volume of the compartment or number of molecules in the system). Zeroth order truncation gives the LNA and the resulting time-evolution equations of the means and covariances are decoupled from higher order moments. Such expansion is systematic and so can be expected to be valid for (relatively) large system sizes.

Moment closure approximations. The starting point is to derive the time-evolution equations for the moments directly from the master equation (see here for example) which is less involved than the system size expansion. In the general case, the moment equations will depend on higher order moments. However, no assumptions about the size of the system have been made and so we cannot make any systematic argument about truncating the moment hierarchy at some arbitrary order. For this reason, we have to resort to moment closure approximations based on a variety of ad hoc assumptions.
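(For concreteness, a sketch of the first level of that hierarchy, in standard notation with stoichiometry matrix S and propensities a_r:

\frac{d\langle x_i\rangle}{dt} = \sum_r S_{ir}\,\langle a_r(x)\rangle,

and whenever a_r is nonlinear in x, e.g. for bimolecular reactions, \langle a_r(x)\rangle involves second- and higher-order moments, whose own equations involve still higher moments -- hence the need for a closure.)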

The moment equations obtained using the two methods will generally differ. Both approaches have pros and cons, some relevant discussion along these lines can be found in the papers linked by Vladimir and also https://doi.org/10.1063/1.3702848.

ChrisRackauckas commented 3 years ago

Thanks! So then it would be good to get an LNA here.

one can write down the moments of the chemical master equation as a series in powers of the inverse system size parameter (e.g. volume of the compartment or number of molecules in the system). Zeroth order truncation

I see, so the LNA expands in the system size instead of the moments. Good to know!

augustinas1 commented 3 years ago

I might get around to adding the LNA at some point myself (if nobody does it before then). In general it would be nice to be able to generate the moment equations from the system-size expansion up to arbitrary order, which would also be potentially useful for comparing the two methods.

ChrisRackauckas commented 3 years ago

It would be interesting to do a systematic test of the higher order expansions and see the difference on large and small systems then.

yushang-lai commented 3 years ago

Hi Augustinas and Christopher,

Thanks very much for clarifying the difference between the LNA and moment closure approximations. It's very clear.

Augustinas: could you notify me once you or someone else finishes adding the code for the LNA?

Thanks a lot in advance! Best, Yushang

yushang-lai commented 3 years ago

Oh no... I got a permission-denied error while trying to change Float64 to Real in the ProximalOperators.jl package.

I'm going to minimize the problem and post it on Slack.

vnminin commented 3 years ago

Thanks, @augustinas1! If you know of a way to gracefully enforce positive definiteness, so autodiff doesn't get mad, please let us know.

@yushang-lai, I think we should move this discussion elsewhere, because the nitty-gritty of what you are doing may not belong in this issue.

ChrisRackauckas commented 6 months ago

Do all of these have an issue?

isaacsas commented 6 months ago

I don't think so (at least not explicitly).

Many of these might work as a GSoC project for someone who has learned the MTK internals ahead of time (the key being that they need to know them ahead of time, I think, to make progress on this).