ReactionMechanismGenerator / ReactionMechanismSimulator.jl

The amazing Reaction Mechanism Simulator for simulating large chemical kinetic mechanisms
https://reactionmechanismgenerator.github.io/ReactionMechanismSimulator.jl
MIT License

Speed up Adjoint Sensitivities #51

Open mjohnson541 opened 4 years ago

mjohnson541 commented 4 years ago

The current implementation of adjoint sensitivities is functional, but very poorly optimized. Currently it just evaluates the full derivative function for the domain and then extracts the one derivative it needs from that in the function g. We then calculate gradients dgdu and dgdp using forward differentiation.

In this scheme in each evaluation of g we evaluate many reactions that are irrelevant to the calculation of the derivative of the species of interest. Ideally we should determine the reactions involving the species of interest and only evaluate those reactions. Additionally it should be much faster to calculate dgdu and dgdp using reverse mode differentiation with ReverseDiff, Tracker or Zygote than forward mode differentiation.

It should also be possible to do even better than the above for specific domains by using analytic dgdu and dgdp.
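To illustrate the forward- vs reverse-mode tradeoff described above, here is a minimal sketch with a hypothetical scalar functional g standing in for the RMS one (toy example, not RMS code): forward mode needs one sweep per input direction, while reverse mode recovers the whole gradient of a scalar in a single reverse sweep.

```julia
using ForwardDiff, ReverseDiff

# Hypothetical g: maps the full state vector to the single quantity of
# interest (e.g. one species' value), standing in for the RMS functional.
g(u) = sin(u[1]) * u[2] + u[3]^2

u = [0.5, 2.0, 1.5]

# Forward mode: cost scales with length(u), one dual sweep per input.
dgdu_fwd = ForwardDiff.gradient(g, u)

# Reverse mode: one reverse sweep yields the entire gradient of scalar g,
# so the cost is roughly independent of length(u).
dgdu_rev = ReverseDiff.gradient(g, u)

dgdu_fwd ≈ dgdu_rev  # both modes agree; reverse wins for large state vectors
```

For a mechanism with thousands of species, the dgdu computed this way only pays for one reverse pass rather than one forward pass per species.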

jiweiqi commented 4 years ago

Adding a reference comparing different adjoint sensitivity approaches in the Julia ecosystem: https://arxiv.org/abs/1812.01892. I am actually not sure whether reducing the number of parameters affects the cost much. I'd guess the cost is dominated by the dimension of the outputs involved in the loss function g.

jiweiqi commented 4 years ago

Where is the current implementation of adjoint sensitivities you are referring to? @mjohnson541

mjohnson541 commented 4 years ago

Sorry, I still need to add it to the wiki, but the ConstantTP H2 example notebook on master has examples for both forward and adjoint sensitivities.

mjohnson541 commented 4 years ago

The adjoint code is in Simulation.jl

jiweiqi commented 4 years ago

I am not sure why I cannot ping Christopher Rackauckas here. If you can, it would be great to ping him; he might have some suggestions.

mjohnson541 commented 4 years ago

I'm not sure; I can't either. But I think for the moment the path forward is relatively clear: we have some very clear inefficiencies in the implementation that should net us a >100x speedup. We can ping him once we've fixed these.

ChrisRackauckas commented 4 years ago

> In this scheme in each evaluation of g we evaluate many reactions that are irrelevant to the calculation of the derivative of the species of interest. Ideally we should determine the reactions involving the species of interest and only evaluate those reactions. Additionally it should be much faster to calculate dgdu and dgdp using reverse mode differentiation with ReverseDiff, Tracker or Zygote than forward mode differentiation.

Oof, that's not good. Is the issue in the VJPs (vector-Jacobian products), i.e. https://github.com/ReactionMechanismGenerator/ReactionMechanismSimulator.jl/issues/84?

mjohnson541 commented 4 years ago

So I believe these two bits are separate from #84. The first bit is just a programming problem: we need to make a smaller phase object specifically for calculating g. On the second, I've already had some success: switching dgdp to ReverseDiff was ~100x faster (without even fixing the issue with g). I'm working on the infrastructure to run ReverseDiff for dgdu properly, but it seems straightforward, and I think it should be cheaper than dgdp. Put together, I anticipate a large speedup once I have it working.

mjohnson541 commented 3 years ago

@ChrisRackauckas @hwpang has gotten ReverseDiff for the smaller phase object mostly working non-preallocating for dgdu and dgdp; we're checking some aspects. But we're a bit at a loss as to how to use caching to preallocate for ReverseDiff. As you noted earlier, caching might also help avoid allocations in the main function evaluations. Are there good templates for how to do this, generally and for ReverseDiff?
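(For reference, the standard ReverseDiff caching pattern is to prerecord a gradient tape once, compile it, and reuse it with a preallocated output buffer. A minimal sketch with a hypothetical g, not RMS code:)

```julia
using ReverseDiff

# Hypothetical stand-in for the dgdu functional.
g(u) = sum(abs2, u)

u = rand(10)

# Record g's operations once and compile the tape. This assumes g's control
# flow does not branch on the values of u, which ReverseDiff requires for
# prerecorded tapes to remain valid.
tape = ReverseDiff.compile(ReverseDiff.GradientTape(g, u))

# Preallocate the gradient buffer and reuse it across evaluations; each call
# replays the compiled tape instead of re-recording, cutting allocations.
dgdu = similar(u)
ReverseDiff.gradient!(dgdu, tape, u)
```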

ChrisRackauckas commented 3 years ago

You won't be able to eliminate all allocations from reverse-mode AD: allocation is inherent to any reverse-mode implementation that does not rely on reversible computing, which only works on a very small subset of codes (AD via reversible computing is cool but still very research-y: https://github.com/GiggleLiu/NiLang.jl — if you think you can make an RHS that is NiLang-compatible, though, we can get that all hooked up!). The reason is that values have to be cached on the heap to construct the reverse pass, which will be called in the future after function boundaries have been crossed; this is quite fundamental to the method.

That said, what would help a ton is either using out-of-place adjoints or, if in-place, compiling the graph with `autojacvec=ReverseDiffVJP(true)`. There are some assumptions that need to be satisfied for the latter, but it's more than worth it on the kinds of RHS functions you're generating, so it would be good to generate a version which satisfies the assumptions (i.e. no branch changes).
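A hedged sketch of the compiled-tape VJP option on a toy ODE (hypothetical example, not RMS code; assumes the SciMLSensitivity, formerly DiffEqSensitivity, adjoint interface):

```julia
using OrdinaryDiffEq, SciMLSensitivity, Zygote

# Toy in-place RHS: A -> B -> C with rate constants p[1], p[2].
# No value-dependent branches, so a compiled ReverseDiff tape is valid.
function rhs!(du, u, p, t)
    du[1] = -p[1] * u[1]
    du[2] =  p[1] * u[1] - p[2] * u[2]
end

u0 = [1.0, 0.0]
p  = [2.0, 1.0]
prob = ODEProblem(rhs!, u0, (0.0, 1.0), p)

# ReverseDiffVJP(true) precompiles the tape used for the adjoint VJPs.
loss(p) = sum(Array(solve(prob, Tsit5(); p = p, saveat = 0.1,
    sensealg = InterpolatingAdjoint(autojacvec = ReverseDiffVJP(true)))))

grad = Zygote.gradient(loss, p)[1]  # parameter gradient via the adjoint
```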