SciML / DiffEqFlux.jl

Pre-built implicit layer architectures with O(1) backprop, GPUs, and stiff+non-stiff DE solvers, demonstrating scientific machine learning (SciML) and physics-informed machine learning methods
https://docs.sciml.ai/DiffEqFlux/stable
MIT License

sciml_train NeuralODE has Zygote error when processing 2D arrays of batch data #169

Closed cems2 closed 4 years ago

cems2 commented 4 years ago

Update (the crux): I found the single line change that converts this from working to non-working! This is not the fix; it just pinpoints what makes Zygote fail.

The batch of k items to train on is specified as a 2D matrix of shape 2×k:

# contains batch of 4 items  [u0 u1 u2 u3] is a 2D array of shape  2×4
minibatch = [([u0 u1 u2 u3],)]  

When I consider batches containing just a single item, I have the option of writing it either as a 2D matrix of shape 2×1 (as with the batch above) or as a 1D column vector of length 2.

The column vector works in sciml_train, but the 2D matrix fails!

# contains a batch of 1 item as a 2D array of shape 2×1
minibatch = [(reshape(u0,2,1),)]       # this won't work in sciml_train

# contains a batch of 1 item as a 1D array of length 2
minibatch = [(u0,)]         #  this works in sciml_train!!

Note that the dimensions and shapes are all correct for the loss function. The issue is that Zygote fails when taking the gradient in the 2D case.
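For concreteness, here is a quick shape check (my own illustration, plain Julia) of the three batch forms discussed above:

# quick shape check of the three batch forms
u0 = Float32[1.0, 1.0]
size(u0)                  # (2,)   1D vector: works in sciml_train
size(reshape(u0, 2, 1))   # (2, 1) 2D single-column matrix: fails
size([u0 u0 u0 u0])       # (2, 4) 2D batch matrix: fails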

A related observation confirms this: converting the loss to serial processing of the batch, instead of the vectorized form, makes it work, since the batch is then processed as column vectors again.

# vector version of loss fails in sciml_train
loss_vector(p, mb) = sum(abs, pred(mb, p) .- mb)

where mb is a 2D array whose columns are the initial condition vectors.

Changing the vector version to an explicit loop does work in sciml_train:

function loss_serial(p, mb)
   temp = 0.0f0
   for i in 1:size(mb, 2)              # explicitly loop over each initial condition (column)
      j = mb[:, i]
      temp += sum(abs, pred(j, p) .- j)
   end
   temp
end
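For reference, the same column-by-column idea can also be written without the mutating accumulator; this is an untested sketch of mine (not a fix), using the same pred as in the reproducer below:

# untested sketch: sum of per-column losses via a generator instead of a loop
loss_columns(p, mb) = sum(sum(abs, pred(mb[:, i], p) .- mb[:, i]) for i in 1:size(mb, 2))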

Key point: BOTH of these functions work just fine when called manually, and also during the initial calls that sciml_train makes to compute the loss. The vector version fails in sciml_train only when it comes time for Zygote to take the gradient.

  1. The serial version works with sciml_train.
  2. The vector version works in sciml_train only if the batch is a single item in a column vector.
  3. The vector version fails in sciml_train when the batch has multiple training items in a 2D matrix.

So it's something about the 2D shape of the batch data that causes the Zygote gradient problem.

============= Full description of the problem =============

Problem area: batching. The loss function considers multiple training examples at once. I have two ways to do this:

  1. Serially looping over calls to the ODE solver, one per initial condition in a supplied list.
  2. Calling the ODE solver on the entire matrix, so it processes all the initial conditions simultaneously. (That call itself works fine.)

However, sciml_train fails with the second ("vectorized") loss method.

Why I think the error is related to Zygote/sciml_train(): when I add print statements before the returns from loss() and from the model f(x,p,t) to see whether they return, I see that, yes indeed, the loss function returns normally.
However, after the loss function returns, one observes a further call to the model f(x,p,t). This call also finishes cleanly (no NaNs, correct dimensions).

Then the error happens after the return from f(x,p,t).

_Since the only function in my code that calls f(x,p,t) is the loss function, it logically can't be called that final time after the loss function returns. So this has to be the work of Zygote computing the derivatives, I think._
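To isolate this from sciml_train's own bookkeeping, one could call Zygote directly on the same loss (a hypothetical check of mine, using the loss_batch and minibatch from the reproducer below); since sciml_train differentiates the loss with Zygote, I would expect the same error to appear here:

using Zygote
# expected to hit the same vcat dimension-mismatch for the 2D minibatch
Zygote.gradient(p -> loss_batch(p, minibatch[1][1]), p)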

Regression tests:

  1. When called manually, the model, the prediction, and the loss function all work fine for both single- and multiple-initial-condition batches.
  2. These functions work when used in the serial- or vector-mode loss functions.
  3. The serial and vector versions agree numerically (see the sketch after this list).
  4. The numerical output when called by sciml_train agrees with the manual calls.
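A concrete version of item 3 (my own sketch, using the loss_serial and loss_vector functions shown earlier, with u0..u3 and p from the reproducer below):

# check that the serial and vectorized losses agree on a 2×4 batch
mb = [u0 u1 u2 u3]
isapprox(loss_serial(p, mb), loss_vector(p, mb); rtol=1e-5)   # expected: true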

Also, none of the input arrays change dimensions, yet the Zygote error message complains of a dimension mismatch.

The error is not raised from commands in my code; it is raised from within Zygote.

Strawman rejected: the only thing I think is weird in my program here is the setup of prob (the ODE problem). Notably, ODEProblem requires you to supply an initial condition, but later this initial condition is overridden by concrete_solve, so it's just a placeholder. (Why do we need it? I'm guessing for solve()? It would be better to have a prototype signature without this unneeded term.)

However, I tested this by moving the ODE problem setup inside the loss function. I get the same error, so that's not it.
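For what it's worth, here is a sketch of mine that makes the override explicit with remake (not a fix; concrete_solve already replaces u0, so this should be behaviorally equivalent):

# hypothetical variant: rebuild the problem with the batch as u0 before solving
pred_remake(u, p) = concrete_solve(remake(prob, u0 = u), Tsit5(), u, p, saveat = 0.01)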

Code to reproduce:

#=
Status `~/.julia/environments/v1.3/Project.toml`
[c52e3926] Atom v0.12.3
[aae7a2af] DiffEqFlux v1.3.2 #master (https://github.com/JuliaDiffEq/DiffEqFlux.jl.git)
[0c46a032] DifferentialEquations v6.11.0
[587475ba] Flux v0.10.1
[7073ff75] IJulia v1.21.1
[e5e0dc1b] Juno v0.7.2
[429524aa] Optim v0.20.1
[1dea7af3] OrdinaryDiffEq v5.29.0
[91a5bcdd] Plots v0.29.1
[d330b81b] PyPlot v2.8.2
=#

using Flux # for ADAM
using Optim # for BFGS
using OrdinaryDiffEq
using DiffEqFlux

model1 = FastChain(FastDense(2,2))

p = initial_params(model1)

function f(x,p,t)
   println("fx ",x)
   println("fp ",p)
   model1(x,p)  # error traceback: Juno highlights this line in red
end

u0 = Float32[1.0,1.0]
u1 = Float32[0.0,1.0]
u2 = Float32[1.0,0.0]
u3 = Float32[1.0,0.7]

#minibatch = [([u0 u1 u2 u3],)] # multiple training cases, 2D matrix: gives error
minibatch = [(u0[:,:],)] # single training case, 2D matrix: gives error
#minibatch = [(u0,)]  # single training case, 1D array #######  this won't give an error

prob = ODEProblem(f,minibatch[1][1],(0.0f0,1.0f0),p)  # u0 is a placeholder.

pred(u,p) = concrete_solve(prob,Tsit5(),u,p, saveat=0.01)

#validate
pred(u0,p)
pred(minibatch[1][1],p)

function loss_batch(p,mb)
   #prob = ODEProblem(f,mb,(0.0f0,1.0f0),p)  # moving these two lines inside the loop
   #pred(u,p) = concrete_solve(prob,Tsit5(),u,p, saveat=0.01) # but this doesn't fix the problem
   println("==============================")
   println("mb",mb)
   println("p",p)
   sum(abs, pred(mb,p).- mb)
end

#validate
loss_batch(p,minibatch[1]...)

function cb1(args...)
   println("args:",args[1:2])
   false
end

res0 = DiffEqFlux.sciml_train(loss_batch,p,ADAM(0.005),minibatch,maxiters=300,cb=cb1)

output:

julia> res0 = DiffEqFlux.sciml_train(loss_batch,p,ADAM(0.005),minibatch,maxiters=300,cb=cb1)
==============================
mbFloat32[1.0 0.0 1.0 1.0; 1.0 1.0 0.0 0.7]
pFloat32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[1.0 0.0 1.0 1.0; 1.0 1.0 0.0 0.7]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[1.0 0.0 1.0 1.0; 1.0 1.0 0.0 0.7]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.99998975 -2.4023055e-5 1.0000137 0.99999696; 0.9999772 0.9999725 4.6483274e-6 0.6999854]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.9998349 -0.0003867712 1.0002216 0.9999509; 0.9996326 0.99955773 7.483807e-5 0.69976526]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.99966484 -0.0007853762 1.0004503 0.99990046; 0.9992541 0.9991021 0.00015196591 0.69952345]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.99907863 -0.0021607548 1.0012394 0.99972683; 0.99794847 0.9975304 0.00041809405 0.69868934]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.99899685 -0.002352758 1.0013496 0.9997027; 0.99776626 0.99731106 0.00045524587 0.69857293]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.99897647 -0.00240068 1.0013771 0.9996967; 0.99772084 0.9972563 0.00046451823 0.6985439]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.99897647 -0.0024006611 1.0013771 0.9996967; 0.99772084 0.99725634 0.00046451477 0.6985439]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.9985973 -0.0032910816 1.0018884 0.9995847; 0.996876 0.9962392 0.0006368062 0.6980042]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.99820757 -0.004208213 1.0024158 0.99947006; 0.99600655 0.9951923 0.0008142661 0.69744885]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.9968657 -0.0073711826 1.0042368 0.9990771; 0.99301034 0.99158406 0.0014262823 0.69553506]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.9966788 -0.007812499 1.0044913 0.9990226; 0.99259263 0.991081 0.0015116755 0.69526833]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.9966322 -0.007922638 1.0045549 0.999009; 0.9924884 0.9909554 0.0015329856 0.69520175]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.99663234 -0.007922529 1.0045549 0.9990091; 0.99248856 0.99095565 0.0015329653 0.6952019]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.99576837 -0.009963861 1.0057322 0.9987575; 0.9905569 0.98862904 0.0019279517 0.69396824]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.9948837 -0.012063646 1.0069474 0.9985028; 0.988574 0.9862398 0.0023342483 0.69270205]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.9918482 -0.019297238 1.0111455 0.99763733; 0.9817551 0.9780212 0.0037339113 0.6883487]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.99142694 -0.020305347 1.0117323 0.99751854; 0.9808065 0.9768776 0.003928978 0.6877433]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.9913219 -0.020556875 1.0118788 0.997489; 0.98056996 0.9765924 0.003977646 0.68759227]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.9913227 -0.020556122 1.0118788 0.9974895; 0.98057115 0.97659373 0.0039774957 0.68759304]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.9891963 -0.025651673 1.014848 0.9968918; 0.9757796 0.97081625 0.0049634567 0.6845347]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.98704225 -0.030874776 1.017917 0.99630463; 0.9708937 0.9649197 0.005974098 0.6814178]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.9797175 -0.04881777 1.0285352 0.9943628; 0.9541842 0.9447383 0.009445961 0.6707627]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.9787103 -0.051312037 1.0300224 0.9941039; 0.95187247 0.94194394 0.00992859 0.6692893]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.9784598 -0.05193391 1.0303937 0.9940399; 0.9512966 0.94124776 0.01004892 0.6689223]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.97846735 -0.051925562 1.0303929 0.99404496; 0.95130855 0.94126135 0.010047304 0.6689302]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.9738921 -0.0633007 1.0371927 0.99288225; 0.94078374 0.9285355 0.012248329 0.6622231]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.9693676 -0.07487758 1.0442452 0.9918308; 0.9302042 0.91571593 0.014488392 0.65548944]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.9542779 -0.1144521 1.06873 0.9886135; 0.8944158 0.87227 0.022145841 0.6327348]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.95223755 -0.11993821 1.0721757 0.98821896; 0.88950604 0.8662988 0.023207372 0.62961644]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.95173275 -0.12130374 1.0730366 0.98812383; 0.888287 0.8648154 0.023471612 0.62884235]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.95179975 -0.121223405 1.0730232 0.9881667; 0.88839597 0.86494005 0.02345606 0.628914]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.9439907 -0.14250135 1.0864921 0.986741; 0.8694579 0.8418848 0.027573224 0.6168925]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.9366254 -0.16392474 1.1005502 0.98580277; 0.8508875 0.81916904 0.03171853 0.6051368]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.91291493 -0.2367961 1.149711 0.98395365; 0.789063 0.74324423 0.045818754 0.5660897]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.90976936 -0.24693796 1.1567073 0.98385066; 0.7806128 0.7328317 0.047781147 0.5607633]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.9090014 -0.24945468 1.158456 0.9838376; 0.7785286 0.7302604 0.04826812 0.55945045]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.9094132 -0.24893552 1.1583488 0.9840938; 0.77921194 0.7310444 0.04816764 0.5598986]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.8990071 -0.2841703 1.1831775 0.9842581; 0.75037694 0.69539165 0.05498537 0.54175943]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.8901997 -0.31920016 1.2093999 0.9859597; 0.7232478 0.6614845 0.06176345 0.52480245]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.86392915 -0.43852022 1.3024493 0.9954851; 0.63456917 0.54971784 0.084851205 0.46965373]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.8604143 -0.455599 1.3160132 0.9970939; 0.62212163 0.5339657 0.088155866 0.46193182]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.8595908 -0.45981488 1.3194056 0.99753517; 0.619093 0.5301214 0.088971645 0.4600566]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.8614917 -0.4573055 1.3187971 0.9986832; 0.6223063 0.53382033 0.088486046 0.46216014]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.85512847 -0.4966884 1.3518168 1.0041349; 0.59534454 0.49923825 0.09610642 0.44557306]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.85097164 -0.5360558 1.3870274 1.0117882; 0.5705973 0.4668736 0.103723794 0.43053517]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.84159523 -0.67168766 1.5132828 1.0431014; 0.4902811 0.36031345 0.12996778 0.3821871]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.8403732 -0.6915915 1.5319649 1.0478506; 0.4786486 0.34482992 0.13381909 0.37519988]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.84013695 -0.6965036 1.5366406 1.049088; 0.47584316 0.34107384 0.13476957 0.37352115]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.84277695 -0.6927635 1.5355406 1.0506061; 0.48043942 0.34639376 0.13404587 0.37652144]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.84277695 -0.69276327 1.5355403 1.050606; 0.48043957 0.34639397 0.13404581 0.3765215]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
ERROR: ArgumentError: number of columns of each array must match (got (1, 4))
Stacktrace:
 [1] _typed_vcat(::Type{Float32}, ::Tuple{Array{Float32,1},Array{Float32,2}}) at ./abstractarray.jl:1359
 [2] typed_vcat at ./abstractarray.jl:1373 [inlined]
 [3] vcat at /Users/sabae/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.3/SparseArrays/src/sparsevector.jl:1079 [inlined]
 [4] (::DiffEqFlux.var"#FastDense_adjoint#49"{FastDense{typeof(identity),DiffEqFlux.var"#initial_params#48"{typeof(Flux.glorot_uniform),typeof(Flux.zeros),Int64,Int64}},Array{Float32,2},Array{Float32,2},Array{Float32,2},Array{Float32,2}})(::Base.ReshapedArray{Float32,2,SubArray{Float32,1,Array{Float32,1},Tuple{UnitRange{Int64}},true},Tuple{}}) at /Users/cems/.julia/packages/DiffEqFlux/RLKTh/src/fast_layers.jl:53
 [5] #169#back at /Users/cems/.julia/packages/ZygoteRules/6nssF/src/adjoint.jl:49 [inlined]
 [6] applychain at /Users/cems/.julia/packages/DiffEqFlux/RLKTh/src/fast_layers.jl:20 [inlined]
 [7] (::typeof(∂(applychain)))(::Base.ReshapedArray{Float32,2,SubArray{Float32,1,Array{Float32,1},Tuple{UnitRange{Int64}},true},Tuple{}}) at /Users/cems/.julia/packages/Zygote/tJj2w/src/compiler/interface2.jl:0
 [8] FastChain at /Users/cems/.julia/packages/DiffEqFlux/RLKTh/src/fast_layers.jl:21 [inlined]
 [9] (::typeof(∂(λ)))(::Base.ReshapedArray{Float32,2,SubArray{Float32,1,Array{Float32,1},Tuple{UnitRange{Int64}},true},Tuple{}}) at /Users/cems/.julia/packages/Zygote/tJj2w/src/compiler/interface2.jl:0
 [10] f at /Users/cems/Documents/rusty science fair 2019/BFGS_MWE_error.jl:29 [inlined]
 [11] (::typeof(∂(f)))(::Base.ReshapedArray{Float32,2,SubArray{Float32,1,Array{Float32,1},Tuple{UnitRange{Int64}},true},Tuple{}}) at /Users/cems/.julia/packages/Zygote/tJj2w/src/compiler/interface2.jl:0
 [12] (::DiffEqBase.var"#613#back#542"{typeof(∂(f))})(::Base.ReshapedArray{Float32,2,SubArray{Float32,1,Array{Float32,1},Tuple{UnitRange{Int64}},true},Tuple{}}) at /Users/cems/.julia/packages/ZygoteRules/6nssF/src/adjoint.jl:49
 [13] #3 at /Users/cems/.julia/packages/DiffEqSensitivity/ZX2U1/src/derivative_wrappers.jl:112 [inlined]
 [14] (::typeof(∂(λ)))(::SubArray{Float32,1,Array{Float32,1},Tuple{UnitRange{Int64}},true}) at /Users/cems/.julia/packages/Zygote/tJj2w/src/compiler/interface2.jl:0
 [15] (::Zygote.var"#36#37"{typeof(∂(λ))})(::SubArray{Float32,1,Array{Float32,1},Tuple{UnitRange{Int64}},true}) at /Users/cems/.julia/packages/Zygote/tJj2w/src/compiler/interface.jl:38
 [16] #vecjacobian!#1(::SubArray{Float32,1,Array{Float32,1},Tuple{UnitRange{Int64}},true}, ::Nothing, ::typeof(DiffEqSensitivity.vecjacobian!), ::SubArray{Float32,1,Array{Float32,1},Tuple{UnitRange{Int64}},true}, ::SubArray{Float32,1,Array{Float32,1},Tuple{UnitRange{Int64}},true}, ::Array{Float32,1}, ::Float32, ::DiffEqSensitivity.ODEInterpolatingAdjointSensitivityFunction{DiffEqSensitivity.AdjointDiffCache{Nothing,Nothing,Nothing,Nothing,Nothing,Array{Float32,1},Nothing,Nothing,Nothing,Array{Float32,2},Nothing},DiffEqSensitivity.InterpolatingAdjoint{0,true,Val{:central}},Array{Float32,2},ODESolution{Float32,3,Array{Array{Float32,2},1},Nothing,Nothing,Array{Float32,1},Array{Array{Array{Float32,2},1},1},ODEProblem{Array{Float32,2},Tuple{Float32,Float32},false,Array{Float32,1},ODEFunction{false,typeof(f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}},DiffEqBase.StandardODEProblem},Tsit5,OrdinaryDiffEq.InterpolationData{ODEFunction{false,typeof(f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Array{Array{Float32,2},1},Array{Float32,1},Array{Array{Array{Float32,2},1},1},OrdinaryDiffEq.Tsit5ConstantCache{Float32,Float32}},DiffEqBase.DEStats},Nothing,Nothing}) at /Users/cems/.julia/packages/DiffEqSensitivity/ZX2U1/src/derivative_wrappers.jl:114
 [17] #vecjacobian! at ./none:0 [inlined]
 [18] (::DiffEqSensitivity.ODEInterpolatingAdjointSensitivityFunction{DiffEqSensitivity.AdjointDiffCache{Nothing,Nothing,Nothing,Nothing,Nothing,Array{Float32,1},Nothing,Nothing,Nothing,Array{Float32,2},Nothing},DiffEqSensitivity.InterpolatingAdjoint{0,true,Val{:central}},Array{Float32,2},ODESolution{Float32,3,Array{Array{Float32,2},1},Nothing,Nothing,Array{Float32,1},Array{Array{Array{Float32,2},1},1},ODEProblem{Array{Float32,2},Tuple{Float32,Float32},false,Array{Float32,1},ODEFunction{false,typeof(f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}},DiffEqBase.StandardODEProblem},Tsit5,OrdinaryDiffEq.InterpolationData{ODEFunction{false,typeof(f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Array{Array{Float32,2},1},Array{Float32,1},Array{Array{Array{Float32,2},1},1},OrdinaryDiffEq.Tsit5ConstantCache{Float32,Float32}},DiffEqBase.DEStats},Nothing,Nothing})(::Array{Float32,1}, ::Array{Float32,1}, ::Array{Float32,1}, ::Float32) at /Users/cems/.julia/packages/DiffEqSensitivity/ZX2U1/src/local_sensitivity/interpolating_adjoint.jl:79
 [19] ODEFunction at /Users/cems/.julia/packages/DiffEqBase/LNYfU/src/diffeqfunction.jl:229 [inlined]
 [20] initialize!(::OrdinaryDiffEq.ODEIntegrator{Tsit5,true,Array{Float32,1},Float32,Array{Float32,1},Float32,Float32,Float32,Array{Array{Float32,1},1},ODESolution{Float32,2,Array{Array{Float32,1},1},Nothing,Nothing,Array{Float32,1},Array{Array{Array{Float32,1},1},1},ODEProblem{Array{Float32,1},Tuple{Float32,Float32},true,Array{Float32,1},ODEFunction{true,DiffEqSensitivity.ODEInterpolatingAdjointSensitivityFunction{DiffEqSensitivity.AdjointDiffCache{Nothing,Nothing,Nothing,Nothing,Nothing,Array{Float32,1},Nothing,Nothing,Nothing,Array{Float32,2},Nothing},DiffEqSensitivity.InterpolatingAdjoint{0,true,Val{:central}},Array{Float32,2},ODESolution{Float32,3,Array{Array{Float32,2},1},Nothing,Nothing,Array{Float32,1},Array{Array{Array{Float32,2},1},1},ODEProblem{Array{Float32,2},Tuple{Float32,Float32},false,Array{Float32,1},ODEFunction{false,typeof(f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}},DiffEqBase.StandardODEProblem},Tsit5,OrdinaryDiffEq.InterpolationData{ODEFunction{false,typeof(f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Array{Array{Float32,2},1},Array{Float32,1},Array{Array{Array{Float32,2},1},1},OrdinaryDiffEq.Tsit5ConstantCache{Float32,Float32}},DiffEqBase.DEStats},Nothing,Nothing},LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Base.Iterators.Pairs{Symbol,CallbackSet{Tuple{},Tuple{DiscreteCallback{DiffEqCallbacks.var"#33#36"{Base.RefValue{Union{Nothing, Float32}}},DiffEqCallbacks.var"#34#37"{DiffEqSensitivity.var"#40#42"{Base.RefValue{Int64},StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},DiffEqSensitivity.var"#41#43"{DiffEqSensitivity.var"#df#60"{Array{Float32,3},Array{Float32,2}},Bool,StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}},Array{Float32,2},Base.RefValue{Int64},Int64},Base.RefValue{Union{Nothing, Float32}}},DiffEqCallbacks.var"#35#38"{typeof(DiffEqBase.INITIALIZE_DEFAULT),Bool,DiffEqSensitivity.var"#40#42"{Base.RefValue{Int64},StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},Base.RefValue{Union{Nothing, Float32}},DiffEqCallbacks.var"#34#37"{DiffEqSensitivity.var"#40#42"{Base.RefValue{Int64},StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},DiffEqSensitivity.var"#41#43"{DiffEqSensitivity.var"#df#60"{Array{Float32,3},Array{Float32,2}},Bool,StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}},Array{Float32,2},Base.RefValue{Int64},Int64},Base.RefValue{Union{Nothing, Float32}}}}}}},Tuple{Symbol},NamedTuple{(:callback,),Tuple{CallbackSet{Tuple{},Tuple{DiscreteCallback{DiffEqCallbacks.var"#33#36"{Base.RefValue{Union{Nothing, Float32}}},DiffEqCallbacks.var"#34#37"{DiffEqSensitivity.var"#40#42"{Base.RefValue{Int64},StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},DiffEqSensitivity.var"#41#43"{DiffEqSensitivity.var"#df#60"{Array{Float32,3},Array{Float32,2}},Bool,StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}},Array{Float32,2},Base.RefValue{Int64},Int64},Base.RefValue{Union{Nothing, 
Float32}}},DiffEqCallbacks.var"#35#38"{typeof(DiffEqBase.INITIALIZE_DEFAULT),Bool,DiffEqSensitivity.var"#40#42"{Base.RefValue{Int64},StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},Base.RefValue{Union{Nothing, Float32}},DiffEqCallbacks.var"#34#37"{DiffEqSensitivity.var"#40#42"{Base.RefValue{Int64},StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},DiffEqSensitivity.var"#41#43"{DiffEqSensitivity.var"#df#60"{Array{Float32,3},Array{Float32,2}},Bool,StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}},Array{Float32,2},Base.RefValue{Int64},Int64},Base.RefValue{Union{Nothing, Float32}}}}}}}}}},DiffEqBase.StandardODEProblem},Tsit5,OrdinaryDiffEq.InterpolationData{ODEFunction{true,DiffEqSensitivity.ODEInterpolatingAdjointSensitivityFunction{DiffEqSensitivity.AdjointDiffCache{Nothing,Nothing,Nothing,Nothing,Nothing,Array{Float32,1},Nothing,Nothing,Nothing,Array{Float32,2},Nothing},DiffEqSensitivity.InterpolatingAdjoint{0,true,Val{:central}},Array{Float32,2},ODESolution{Float32,3,Array{Array{Float32,2},1},Nothing,Nothing,Array{Float32,1},Array{Array{Array{Float32,2},1},1},ODEProblem{Array{Float32,2},Tuple{Float32,Float32},false,Array{Float32,1},ODEFunction{false,typeof(f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}},DiffEqBase.StandardODEProblem},Tsit5,OrdinaryDiffEq.InterpolationData{ODEFunction{false,typeof(f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Array{Array{Float32,2},1},Array{Float32,1},Array{Array{Array{Float32,2},1},1},OrdinaryDiffEq.Tsit5ConstantCache{Float32,Float32}},DiffEqBase.DEStats},Nothing,Nothing},LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Array{Array{Float32,1},1},Array{Float32,1},Array{Array{Array{Float32,1},1},1},OrdinaryDiffEq.Tsit5Cache{Array{Float32,1},Array{Float32,1},Array{Float32,1},OrdinaryDiffEq.Tsit5ConstantCache{Float32,Float32}}},DiffEqBase.DEStats},ODEFunction{true,DiffEqSensitivity.ODEInterpolatingAdjointSensitivityFunction{DiffEqSensitivity.AdjointDiffCache{Nothing,Nothing,Nothing,Nothing,Nothing,Array{Float32,1},Nothing,Nothing,Nothing,Array{Float32,2},Nothing},DiffEqSensitivity.InterpolatingAdjoint{0,true,Val{:central}},Array{Float32,2},ODESolution{Float32,3,Array{Array{Float32,2},1},Nothing,Nothing,Array{Float32,1},Array{Array{Array{Float32,2},1},1},ODEProblem{Array{Float32,2},Tuple{Float32,Float32},false,Array{Float32,1},ODEFunction{false,typeof(f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}},DiffEqBase.StandardODEProblem},Tsit5,OrdinaryDiffEq.InterpolationData{ODEFunction{false,typeof(f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Array{Array{Float32,2},1},Array{Float32,1},Array{Array{Array{Float32,2},1},1},OrdinaryDiffEq.Tsit5ConstantCache{Float32,Float32}},DiffEqBase.DEStats},Nothing,Nothing},LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},OrdinaryDiffEq.Tsit5Cache{Array{Float32,1},Array{Float32,1},Array{Float32,1},OrdinaryDiffEq.Tsit5ConstantCache{Float32,Float32}}
,OrdinaryDiffEq.DEOptions{Float64,Float64,Float32,Float32,typeof(DiffEqBase.ODE_DEFAULT_NORM),typeof(LinearAlgebra.opnorm),CallbackSet{Tuple{},Tuple{DiscreteCallback{DiffEqCallbacks.var"#33#36"{Base.RefValue{Union{Nothing, Float32}}},DiffEqCallbacks.var"#34#37"{DiffEqSensitivity.var"#40#42"{Base.RefValue{Int64},StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},DiffEqSensitivity.var"#41#43"{DiffEqSensitivity.var"#df#60"{Array{Float32,3},Array{Float32,2}},Bool,StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}},Array{Float32,2},Base.RefValue{Int64},Int64},Base.RefValue{Union{Nothing, Float32}}},DiffEqCallbacks.var"#35#38"{typeof(DiffEqBase.INITIALIZE_DEFAULT),Bool,DiffEqSensitivity.var"#40#42"{Base.RefValue{Int64},StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},Base.RefValue{Union{Nothing, Float32}},DiffEqCallbacks.var"#34#37"{DiffEqSensitivity.var"#40#42"{Base.RefValue{Int64},StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},DiffEqSensitivity.var"#41#43"{DiffEqSensitivity.var"#df#60"{Array{Float32,3},Array{Float32,2}},Bool,StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}},Array{Float32,2},Base.RefValue{Int64},Int64},Base.RefValue{Union{Nothing, Float32}}}}}}},typeof(DiffEqBase.ODE_DEFAULT_ISOUTOFDOMAIN),typeof(DiffEqBase.ODE_DEFAULT_PROG_MESSAGE),typeof(DiffEqBase.ODE_DEFAULT_UNSTABLE_CHECK),DataStructures.BinaryHeap{Float32,DataStructures.LessThan},DataStructures.BinaryHeap{Float32,DataStructures.LessThan},Nothing,Nothing,Int64,Array{Float32,1},Array{Float32,1},Array{Float32,1}},Array{Float32,1},Float32,Nothing}, ::OrdinaryDiffEq.Tsit5Cache{Array{Float32,1},Array{Float32,1},Array{Float32,1},OrdinaryDiffEq.Tsit5ConstantCache{Float32,Float32}}) at /Users/cems/.julia/packages/OrdinaryDiffEq/8Pn99/src/perform_step/low_order_rk_perform_step.jl:623
 [21] #__init#329(::Array{Float32,1}, ::Array{Float32,1}, ::Array{Float32,1}, ::Nothing, ::Bool, ::Bool, ::Bool, ::Bool, ::CallbackSet{Tuple{},Tuple{DiscreteCallback{DiffEqCallbacks.var"#33#36"{Base.RefValue{Union{Nothing, Float32}}},DiffEqCallbacks.var"#34#37"{DiffEqSensitivity.var"#40#42"{Base.RefValue{Int64},StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},DiffEqSensitivity.var"#41#43"{DiffEqSensitivity.var"#df#60"{Array{Float32,3},Array{Float32,2}},Bool,StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}},Array{Float32,2},Base.RefValue{Int64},Int64},Base.RefValue{Union{Nothing, Float32}}},DiffEqCallbacks.var"#35#38"{typeof(DiffEqBase.INITIALIZE_DEFAULT),Bool,DiffEqSensitivity.var"#40#42"{Base.RefValue{Int64},StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},Base.RefValue{Union{Nothing, Float32}},DiffEqCallbacks.var"#34#37"{DiffEqSensitivity.var"#40#42"{Base.RefValue{Int64},StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},DiffEqSensitivity.var"#41#43"{DiffEqSensitivity.var"#df#60"{Array{Float32,3},Array{Float32,2}},Bool,StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}},Array{Float32,2},Base.RefValue{Int64},Int64},Base.RefValue{Union{Nothing, Float32}}}}}}}, ::Bool, ::Bool, ::Float32, ::Float32, ::Float32, ::Bool, ::Bool, ::Rational{Int64}, ::Float64, ::Float64, ::Rational{Int64}, ::Int64, ::Int64, ::Int64, ::Rational{Int64}, ::Bool, ::Int64, ::Nothing, ::Nothing, ::Int64, ::typeof(DiffEqBase.ODE_DEFAULT_NORM), ::typeof(LinearAlgebra.opnorm), ::typeof(DiffEqBase.ODE_DEFAULT_ISOUTOFDOMAIN), ::typeof(DiffEqBase.ODE_DEFAULT_UNSTABLE_CHECK), ::Bool, ::Bool, ::Bool, ::Bool, ::Bool, ::Bool, ::Bool, ::Int64, ::String, ::typeof(DiffEqBase.ODE_DEFAULT_PROG_MESSAGE), ::Nothing, ::Bool, ::Bool, ::Bool, ::Bool, ::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}, ::typeof(DiffEqBase.__init), ::ODEProblem{Array{Float32,1},Tuple{Float32,Float32},true,Array{Float32,1},ODEFunction{true,DiffEqSensitivity.ODEInterpolatingAdjointSensitivityFunction{DiffEqSensitivity.AdjointDiffCache{Nothing,Nothing,Nothing,Nothing,Nothing,Array{Float32,1},Nothing,Nothing,Nothing,Array{Float32,2},Nothing},DiffEqSensitivity.InterpolatingAdjoint{0,true,Val{:central}},Array{Float32,2},ODESolution{Float32,3,Array{Array{Float32,2},1},Nothing,Nothing,Array{Float32,1},Array{Array{Array{Float32,2},1},1},ODEProblem{Array{Float32,2},Tuple{Float32,Float32},false,Array{Float32,1},ODEFunction{false,typeof(f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}},DiffEqBase.StandardODEProblem},Tsit5,OrdinaryDiffEq.InterpolationData{ODEFunction{false,typeof(f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Array{Array{Float32,2},1},Array{Float32,1},Array{Array{Array{Float32,2},1},1},OrdinaryDiffEq.Tsit5ConstantCache{Float32,Float32}},DiffEqBase.DEStats},Nothing,Nothing},LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Base.Iterators.Pairs{Symbol,CallbackSet{Tuple{},Tuple{DiscreteCallback{DiffEqCallbacks.var"#33#36"{Base.RefValue{Union{Nothing, 
Float32}}},DiffEqCallbacks.var"#34#37"{DiffEqSensitivity.var"#40#42"{Base.RefValue{Int64},StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},DiffEqSensitivity.var"#41#43"{DiffEqSensitivity.var"#df#60"{Array{Float32,3},Array{Float32,2}},Bool,StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}},Array{Float32,2},Base.RefValue{Int64},Int64},Base.RefValue{Union{Nothing, Float32}}},DiffEqCallbacks.var"#35#38"{typeof(DiffEqBase.INITIALIZE_DEFAULT),Bool,DiffEqSensitivity.var"#40#42"{Base.RefValue{Int64},StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},Base.RefValue{Union{Nothing, Float32}},DiffEqCallbacks.var"#34#37"{DiffEqSensitivity.var"#40#42"{Base.RefValue{Int64},StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},DiffEqSensitivity.var"#41#43"{DiffEqSensitivity.var"#df#60"{Array{Float32,3},Array{Float32,2}},Bool,StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}},Array{Float32,2},Base.RefValue{Int64},Int64},Base.RefValue{Union{Nothing, Float32}}}}}}},Tuple{Symbol},NamedTuple{(:callback,),Tuple{CallbackSet{Tuple{},Tuple{DiscreteCallback{DiffEqCallbacks.var"#33#36"{Base.RefValue{Union{Nothing, Float32}}},DiffEqCallbacks.var"#34#37"{DiffEqSensitivity.var"#40#42"{Base.RefValue{Int64},StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},DiffEqSensitivity.var"#41#43"{DiffEqSensitivity.var"#df#60"{Array{Float32,3},Array{Float32,2}},Bool,StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}},Array{Float32,2},Base.RefValue{Int64},Int64},Base.RefValue{Union{Nothing, Float32}}},DiffEqCallbacks.var"#35#38"{typeof(DiffEqBase.INITIALIZE_DEFAULT),Bool,DiffEqSensitivity.var"#40#42"{Base.RefValue{Int64},StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},Base.RefValue{Union{Nothing, Float32}},DiffEqCallbacks.var"#34#37"{DiffEqSensitivity.var"#40#42"{Base.RefValue{Int64},StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},DiffEqSensitivity.var"#41#43"{DiffEqSensitivity.var"#df#60"{Array{Float32,3},Array{Float32,2}},Bool,StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}},Array{Float32,2},Base.RefValue{Int64},Int64},Base.RefValue{Union{Nothing, Float32}}}}}}}}}},DiffEqBase.StandardODEProblem}, ::Tsit5, ::Array{Array{Float32,1},1}, ::Array{Float32,1}, ::Array{Any,1}, ::Type{Val{true}}) at /Users/cems/.julia/packages/OrdinaryDiffEq/8Pn99/src/solve.jl:386
 [22] (::DiffEqBase.var"#kw##__init")(::NamedTuple{(:callback, :save_everystep, :save_start, :saveat, :tstops, :abstol, :reltol),Tuple{CallbackSet{Tuple{},Tuple{DiscreteCallback{DiffEqCallbacks.var"#33#36"{Base.RefValue{Union{Nothing, Float32}}},DiffEqCallbacks.var"#34#37"{DiffEqSensitivity.var"#40#42"{Base.RefValue{Int64},StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},DiffEqSensitivity.var"#41#43"{DiffEqSensitivity.var"#df#60"{Array{Float32,3},Array{Float32,2}},Bool,StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}},Array{Float32,2},Base.RefValue{Int64},Int64},Base.RefValue{Union{Nothing, Float32}}},DiffEqCallbacks.var"#35#38"{typeof(DiffEqBase.INITIALIZE_DEFAULT),Bool,DiffEqSensitivity.var"#40#42"{Base.RefValue{Int64},StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},Base.RefValue{Union{Nothing, Float32}},DiffEqCallbacks.var"#34#37"{DiffEqSensitivity.var"#40#42"{Base.RefValue{Int64},StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},DiffEqSensitivity.var"#41#43"{DiffEqSensitivity.var"#df#60"{Array{Float32,3},Array{Float32,2}},Bool,StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}},Array{Float32,2},Base.RefValue{Int64},Int64},Base.RefValue{Union{Nothing, Float32}}}}}}},Bool,Bool,Array{Float32,1},Array{Float32,1},Float64,Float64}}, ::typeof(DiffEqBase.__init), ::ODEProblem{Array{Float32,1},Tuple{Float32,Float32},true,Array{Float32,1},ODEFunction{true,DiffEqSensitivity.ODEInterpolatingAdjointSensitivityFunction{DiffEqSensitivity.AdjointDiffCache{Nothing,Nothing,Nothing,Nothing,Nothing,Array{Float32,1},Nothing,Nothing,Nothing,Array{Float32,2},Nothing},DiffEqSensitivity.InterpolatingAdjoint{0,true,Val{:central}},Array{Float32,2},ODESolution{Float32,3,Array{Array{Float32,2},1},Nothing,Nothing,Array{Float32,1},Array{Array{Array{Float32,2},1},1},ODEProblem{Array{Float32,2},Tuple{Float32,Float32},false,Array{Float32,1},ODEFunction{false,typeof(f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}},DiffEqBase.StandardODEProblem},Tsit5,OrdinaryDiffEq.InterpolationData{ODEFunction{false,typeof(f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Array{Array{Float32,2},1},Array{Float32,1},Array{Array{Array{Float32,2},1},1},OrdinaryDiffEq.Tsit5ConstantCache{Float32,Float32}},DiffEqBase.DEStats},Nothing,Nothing},LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Base.Iterators.Pairs{Symbol,CallbackSet{Tuple{},Tuple{DiscreteCallback{DiffEqCallbacks.var"#33#36"{Base.RefValue{Union{Nothing, Float32}}},DiffEqCallbacks.var"#34#37"{DiffEqSensitivity.var"#40#42"{Base.RefValue{Int64},StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},DiffEqSensitivity.var"#41#43"{DiffEqSensitivity.var"#df#60"{Array{Float32,3},Array{Float32,2}},Bool,StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}},Array{Float32,2},Base.RefValue{Int64},Int64},Base.RefValue{Union{Nothing, Float32}}},DiffEqCallbacks.var"#35#38"{typeof(DiffEqBase.INITIALIZE_DEFAULT),Bool,DiffEqSensitivity.var"#40#42"{Base.RefValue{Int64},StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},Base.RefValue{Union{Nothing, 
Float32}},DiffEqCallbacks.var"#34#37"{DiffEqSensitivity.var"#40#42"{Base.RefValue{Int64},StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},DiffEqSensitivity.var"#41#43"{DiffEqSensitivity.var"#df#60"{Array{Float32,3},Array{Float32,2}},Bool,StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}},Array{Float32,2},Base.RefValue{Int64},Int64},Base.RefValue{Union{Nothing, Float32}}}}}}},Tuple{Symbol},NamedTuple{(:callback,),Tuple{CallbackSet{Tuple{},Tuple{DiscreteCallback{DiffEqCallbacks.var"#33#36"{Base.RefValue{Union{Nothing, Float32}}},DiffEqCallbacks.var"#34#37"{DiffEqSensitivity.var"#40#42"{Base.RefValue{Int64},StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},DiffEqSensitivity.var"#41#43"{DiffEqSensitivity.var"#df#60"{Array{Float32,3},Array{Float32,2}},Bool,StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}},Array{Float32,2},Base.RefValue{Int64},Int64},Base.RefValue{Union{Nothing, Float32}}},DiffEqCallbacks.var"#35#38"{typeof(DiffEqBase.INITIALIZE_DEFAULT),Bool,DiffEqSensitivity.var"#40#42"{Base.RefValue{Int64},StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},Base.RefValue{Union{Nothing, Float32}},DiffEqCallbacks.var"#34#37"{DiffEqSensitivity.var"#40#42"{Base.RefValue{Int64},StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},DiffEqSensitivity.var"#41#43"{DiffEqSensitivity.var"#df#60"{Array{Float32,3},Array{Float32,2}},Bool,StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}},Array{Float32,2},Base.RefValue{Int64},Int64},Base.RefValue{Union{Nothing, Float32}}}}}}}}}},DiffEqBase.StandardODEProblem}, ::Tsit5, ::Array{Array{Float32,1},1}, ::Array{Float32,1}, ::Array{Any,1}, ::Type{Val{true}}) at ./none:0 (repeats 4 times)
 [23] #__solve#328 at /Users/cems/.julia/packages/OrdinaryDiffEq/8Pn99/src/solve.jl:4 [inlined]
 [24] #__solve at ./none:0 [inlined]
 [25] #solve_call#442(::Bool, ::Base.Iterators.Pairs{Symbol,Any,NTuple{6,Symbol},NamedTuple{(:save_everystep, :save_start, :saveat, :tstops, :abstol, :reltol),Tuple{Bool,Bool,Array{Float32,1},Array{Float32,1},Float64,Float64}}}, ::typeof(DiffEqBase.solve_call), ::ODEProblem{Array{Float32,1},Tuple{Float32,Float32},true,Array{Float32,1},ODEFunction{true,DiffEqSensitivity.ODEInterpolatingAdjointSensitivityFunction{DiffEqSensitivity.AdjointDiffCache{Nothing,Nothing,Nothing,Nothing,Nothing,Array{Float32,1},Nothing,Nothing,Nothing,Array{Float32,2},Nothing},DiffEqSensitivity.InterpolatingAdjoint{0,true,Val{:central}},Array{Float32,2},ODESolution{Float32,3,Array{Array{Float32,2},1},Nothing,Nothing,Array{Float32,1},Array{Array{Array{Float32,2},1},1},ODEProblem{Array{Float32,2},Tuple{Float32,Float32},false,Array{Float32,1},ODEFunction{false,typeof(f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}},DiffEqBase.StandardODEProblem},Tsit5,OrdinaryDiffEq.InterpolationData{ODEFunction{false,typeof(f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Array{Array{Float32,2},1},Array{Float32,1},Array{Array{Array{Float32,2},1},1},OrdinaryDiffEq.Tsit5ConstantCache{Float32,Float32}},DiffEqBase.DEStats},Nothing,Nothing},LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Base.Iterators.Pairs{Symbol,CallbackSet{Tuple{},Tuple{DiscreteCallback{DiffEqCallbacks.var"#33#36"{Base.RefValue{Union{Nothing, Float32}}},DiffEqCallbacks.var"#34#37"{DiffEqSensitivity.var"#40#42"{Base.RefValue{Int64},StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},DiffEqSensitivity.var"#41#43"{DiffEqSensitivity.var"#df#60"{Array{Float32,3},Array{Float32,2}},Bool,StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}},Array{Float32,2},Base.RefValue{Int64},Int64},Base.RefValue{Union{Nothing, Float32}}},DiffEqCallbacks.var"#35#38"{typeof(DiffEqBase.INITIALIZE_DEFAULT),Bool,DiffEqSensitivity.var"#40#42"{Base.RefValue{Int64},StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},Base.RefValue{Union{Nothing, Float32}},DiffEqCallbacks.var"#34#37"{DiffEqSensitivity.var"#40#42"{Base.RefValue{Int64},StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},DiffEqSensitivity.var"#41#43"{DiffEqSensitivity.var"#df#60"{Array{Float32,3},Array{Float32,2}},Bool,StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}},Array{Float32,2},Base.RefValue{Int64},Int64},Base.RefValue{Union{Nothing, Float32}}}}}}},Tuple{Symbol},NamedTuple{(:callback,),Tuple{CallbackSet{Tuple{},Tuple{DiscreteCallback{DiffEqCallbacks.var"#33#36"{Base.RefValue{Union{Nothing, Float32}}},DiffEqCallbacks.var"#34#37"{DiffEqSensitivity.var"#40#42"{Base.RefValue{Int64},StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},DiffEqSensitivity.var"#41#43"{DiffEqSensitivity.var"#df#60"{Array{Float32,3},Array{Float32,2}},Bool,StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}},Array{Float32,2},Base.RefValue{Int64},Int64},Base.RefValue{Union{Nothing, 
Float32}}},DiffEqCallbacks.var"#35#38"{typeof(DiffEqBase.INITIALIZE_DEFAULT),Bool,DiffEqSensitivity.var"#40#42"{Base.RefValue{Int64},StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},Base.RefValue{Union{Nothing, Float32}},DiffEqCallbacks.var"#34#37"{DiffEqSensitivity.var"#40#42"{Base.RefValue{Int64},StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},DiffEqSensitivity.var"#41#43"{DiffEqSensitivity.var"#df#60"{Array{Float32,3},Array{Float32,2}},Bool,StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}},Array{Float32,2},Base.RefValue{Int64},Int64},Base.RefValue{Union{Nothing, Float32}}}}}}}}}},DiffEqBase.StandardODEProblem}, ::Tsit5) at /Users/cems/.julia/packages/DiffEqBase/LNYfU/src/solve.jl:44
 [26] (::DiffEqBase.var"#kw##solve")(::NamedTuple{(:save_everystep, :save_start, :saveat, :tstops, :abstol, :reltol),Tuple{Bool,Bool,Array{Float32,1},Array{Float32,1},Float64,Float64}}, ::typeof(solve), ::ODEProblem{Array{Float32,1},Tuple{Float32,Float32},true,Array{Float32,1},ODEFunction{true,DiffEqSensitivity.ODEInterpolatingAdjointSensitivityFunction{DiffEqSensitivity.AdjointDiffCache{Nothing,Nothing,Nothing,Nothing,Nothing,Array{Float32,1},Nothing,Nothing,Nothing,Array{Float32,2},Nothing},DiffEqSensitivity.InterpolatingAdjoint{0,true,Val{:central}},Array{Float32,2},ODESolution{Float32,3,Array{Array{Float32,2},1},Nothing,Nothing,Array{Float32,1},Array{Array{Array{Float32,2},1},1},ODEProblem{Array{Float32,2},Tuple{Float32,Float32},false,Array{Float32,1},ODEFunction{false,typeof(f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}},DiffEqBase.StandardODEProblem},Tsit5,OrdinaryDiffEq.InterpolationData{ODEFunction{false,typeof(f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Array{Array{Float32,2},1},Array{Float32,1},Array{Array{Array{Float32,2},1},1},OrdinaryDiffEq.Tsit5ConstantCache{Float32,Float32}},DiffEqBase.DEStats},Nothing,Nothing},LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Base.Iterators.Pairs{Symbol,CallbackSet{Tuple{},Tuple{DiscreteCallback{DiffEqCallbacks.var"#33#36"{Base.RefValue{Union{Nothing, Float32}}},DiffEqCallbacks.var"#34#37"{DiffEqSensitivity.var"#40#42"{Base.RefValue{Int64},StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},DiffEqSensitivity.var"#41#43"{DiffEqSensitivity.var"#df#60"{Array{Float32,3},Array{Float32,2}},Bool,StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}},Array{Float32,2},Base.RefValue{Int64},Int64},Base.RefValue{Union{Nothing, Float32}}},DiffEqCallbacks.var"#35#38"{typeof(DiffEqBase.INITIALIZE_DEFAULT),Bool,DiffEqSensitivity.var"#40#42"{Base.RefValue{Int64},StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},Base.RefValue{Union{Nothing, Float32}},DiffEqCallbacks.var"#34#37"{DiffEqSensitivity.var"#40#42"{Base.RefValue{Int64},StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},DiffEqSensitivity.var"#41#43"{DiffEqSensitivity.var"#df#60"{Array{Float32,3},Array{Float32,2}},Bool,StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}},Array{Float32,2},Base.RefValue{Int64},Int64},Base.RefValue{Union{Nothing, Float32}}}}}}},Tuple{Symbol},NamedTuple{(:callback,),Tuple{CallbackSet{Tuple{},Tuple{DiscreteCallback{DiffEqCallbacks.var"#33#36"{Base.RefValue{Union{Nothing, Float32}}},DiffEqCallbacks.var"#34#37"{DiffEqSensitivity.var"#40#42"{Base.RefValue{Int64},StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},DiffEqSensitivity.var"#41#43"{DiffEqSensitivity.var"#df#60"{Array{Float32,3},Array{Float32,2}},Bool,StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}},Array{Float32,2},Base.RefValue{Int64},Int64},Base.RefValue{Union{Nothing, Float32}}},DiffEqCallbacks.var"#35#38"{typeof(DiffEqBase.INITIALIZE_DEFAULT),Bool,DiffEqSensitivity.var"#40#42"{Base.RefValue{Int64},StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},Base.RefValue{Union{Nothing, 
Float32}},DiffEqCallbacks.var"#34#37"{DiffEqSensitivity.var"#40#42"{Base.RefValue{Int64},StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},DiffEqSensitivity.var"#41#43"{DiffEqSensitivity.var"#df#60"{Array{Float32,3},Array{Float32,2}},Bool,StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}},Array{Float32,2},Base.RefValue{Int64},Int64},Base.RefValue{Union{Nothing, Float32}}}}}}}}}},DiffEqBase.StandardODEProblem}, ::Tsit5) at ./none:0
 [27] #_adjoint_sensitivities#13(::Float64, ::Float64, ::Array{Float32,1}, ::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}, ::typeof(DiffEqSensitivity._adjoint_sensitivities), ::ODESolution{Float32,3,Array{Array{Float32,2},1},Nothing,Nothing,Array{Float32,1},Array{Array{Array{Float32,2},1},1},ODEProblem{Array{Float32,2},Tuple{Float32,Float32},false,Array{Float32,1},ODEFunction{false,typeof(f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}},DiffEqBase.StandardODEProblem},Tsit5,OrdinaryDiffEq.InterpolationData{ODEFunction{false,typeof(f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Array{Array{Float32,2},1},Array{Float32,1},Array{Array{Array{Float32,2},1},1},OrdinaryDiffEq.Tsit5ConstantCache{Float32,Float32}},DiffEqBase.DEStats}, ::DiffEqSensitivity.InterpolatingAdjoint{0,true,Val{:central}}, ::Tsit5, ::DiffEqSensitivity.var"#df#60"{Array{Float32,3},Array{Float32,2}}, ::StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}, ::Nothing) at /Users/cems/.julia/packages/DiffEqSensitivity/ZX2U1/src/local_sensitivity/sensitivity_interface.jl:16
 [28] _adjoint_sensitivities(::ODESolution{Float32,3,Array{Array{Float32,2},1},Nothing,Nothing,Array{Float32,1},Array{Array{Array{Float32,2},1},1},ODEProblem{Array{Float32,2},Tuple{Float32,Float32},false,Array{Float32,1},ODEFunction{false,typeof(f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}},DiffEqBase.StandardODEProblem},Tsit5,OrdinaryDiffEq.InterpolationData{ODEFunction{false,typeof(f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Array{Array{Float32,2},1},Array{Float32,1},Array{Array{Array{Float32,2},1},1},OrdinaryDiffEq.Tsit5ConstantCache{Float32,Float32}},DiffEqBase.DEStats}, ::DiffEqSensitivity.InterpolatingAdjoint{0,true,Val{:central}}, ::Tsit5, ::Function, ::StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}, ::Nothing) at /Users/cems/.julia/packages/DiffEqSensitivity/ZX2U1/src/local_sensitivity/sensitivity_interface.jl:13 (repeats 2 times)
 [29] #adjoint_sensitivities#12(::DiffEqSensitivity.InterpolatingAdjoint{0,true,Val{:central}}, ::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}, ::typeof(DiffEqSensitivity.adjoint_sensitivities), ::ODESolution{Float32,3,Array{Array{Float32,2},1},Nothing,Nothing,Array{Float32,1},Array{Array{Array{Float32,2},1},1},ODEProblem{Array{Float32,2},Tuple{Float32,Float32},false,Array{Float32,1},ODEFunction{false,typeof(f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}},DiffEqBase.StandardODEProblem},Tsit5,OrdinaryDiffEq.InterpolationData{ODEFunction{false,typeof(f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Array{Array{Float32,2},1},Array{Float32,1},Array{Array{Array{Float32,2},1},1},OrdinaryDiffEq.Tsit5ConstantCache{Float32,Float32}},DiffEqBase.DEStats}, ::Tsit5, ::Vararg{Any,N} where N) at /Users/cems/.julia/packages/DiffEqSensitivity/ZX2U1/src/local_sensitivity/sensitivity_interface.jl:6
 [30] (::DiffEqSensitivity.var"#kw##adjoint_sensitivities")(::NamedTuple{(:sensealg,),Tuple{DiffEqSensitivity.InterpolatingAdjoint{0,true,Val{:central}}}}, ::typeof(DiffEqSensitivity.adjoint_sensitivities), ::ODESolution{Float32,3,Array{Array{Float32,2},1},Nothing,Nothing,Array{Float32,1},Array{Array{Array{Float32,2},1},1},ODEProblem{Array{Float32,2},Tuple{Float32,Float32},false,Array{Float32,1},ODEFunction{false,typeof(f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}},DiffEqBase.StandardODEProblem},Tsit5,OrdinaryDiffEq.InterpolationData{ODEFunction{false,typeof(f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Array{Array{Float32,2},1},Array{Float32,1},Array{Array{Array{Float32,2},1},1},OrdinaryDiffEq.Tsit5ConstantCache{Float32,Float32}},DiffEqBase.DEStats}, ::Tsit5, ::Vararg{Any,N} where N) at ./none:0
 [31] (::DiffEqSensitivity.var"#adjoint_sensitivity_backpass#59"{Tsit5,DiffEqSensitivity.InterpolatingAdjoint{0,true,Val{:central}},Array{Float32,2},Array{Float32,1},Tuple{}})(::Array{Float32,3}) at /Users/cems/.julia/packages/DiffEqSensitivity/ZX2U1/src/local_sensitivity/concrete_solve.jl:67
 [32] #554#back at /Users/cems/.julia/packages/ZygoteRules/6nssF/src/adjoint.jl:55 [inlined]
 [33] pred at /Users/cems/Documents/rusty science fair 2019/BFGS_MWE_error.jl:52 [inlined]
 [34] (::typeof(∂(λ)))(::Array{Float32,3}) at /Users/cems/.julia/packages/Zygote/tJj2w/src/compiler/interface2.jl:0
 [35] loss_batch at /Users/cems/Documents/rusty science fair 2019/BFGS_MWE_error.jl:56 [inlined]
 [36] (::typeof(∂(loss_batch)))(::Float32) at /Users/cems/.julia/packages/Zygote/tJj2w/src/compiler/interface2.jl:0
 [37] #165 at /Users/cems/.julia/packages/Zygote/tJj2w/src/lib/lib.jl:156 [inlined]
 [38] (::Zygote.var"#321#back#167"{Zygote.var"#165#166"{typeof(∂(loss_batch)),Tuple{Tuple{Nothing},Tuple{Nothing}}}})(::Float32) at /Users/cems/.julia/packages/ZygoteRules/6nssF/src/adjoint.jl:49
 [39] #18 at /Users/cems/.julia/packages/DiffEqFlux/RLKTh/src/train.jl:50 [inlined]
 [40] (::typeof(∂(λ)))(::Float32) at /Users/cems/.julia/packages/Zygote/tJj2w/src/compiler/interface2.jl:0
 [41] (::Zygote.var"#46#47"{Zygote.Params,Zygote.Context,typeof(∂(λ))})(::Float32) at /Users/cems/.julia/packages/Zygote/tJj2w/src/compiler/interface.jl:101
 [42] gradient(::Function, ::Zygote.Params) at /Users/cems/.julia/packages/Zygote/tJj2w/src/compiler/interface.jl:47
 [43] macro expansion at /Users/cems/.julia/packages/DiffEqFlux/RLKTh/src/train.jl:49 [inlined]
 [44] macro expansion at /Users/cems/.julia/packages/Juno/oLB1d/src/progress.jl:119 [inlined]
 [45] #sciml_train#16(::Function, ::Int64, ::typeof(DiffEqFlux.sciml_train), ::Function, ::Array{Float32,1}, ::ADAM, ::Array{Tuple{Array{Float32,2}},1}) at /Users/cems/.julia/packages/DiffEqFlux/RLKTh/src/train.jl:48
 [46] (::DiffEqFlux.var"#kw##sciml_train")(::NamedTuple{(:maxiters, :cb),Tuple{Int64,typeof(cb1)}}, ::typeof(DiffEqFlux.sciml_train), ::Function, ::Array{Float32,1}, ::ADAM, ::Array{Tuple{Array{Float32,2}},1}) at ./none:0
 [47] top-level scope at none:0

But running this manually on the exact same numerical data works fine! Here I call f() with the last data f received before the error, and I call the loss function with the initial data that caused the error. Both run cleanly when called manually.

julia> f(Float32[0.84277695 -0.69276327 1.5355403 1.050606; 0.48043957 0.34639397 0.13404581 0.3765215],
       Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0],1.0)
fx Float32[0.84277695 -0.69276327 1.5355403 1.050606; 0.48043957 0.34639397 0.13404581 0.3765215]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
2×4 Array{Float32,2}:
  0.00209987  -0.613109  0.615208   0.186032
 -0.318575    -0.437209  0.118633  -0.187413

julia> loss_batch(Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0], Float32[0.84277695 -0.69276327 1.5355403 1.050606; 0.48043957 0.34639397 0.13404581 0.3765215])
==============================
mbFloat32[0.84277695 -0.69276327 1.5355403 1.050606; 0.48043957 0.34639397 0.13404581 0.3765215]
pFloat32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.84277695 -0.69276327 1.5355403 1.050606; 0.48043957 0.34639397 0.13404581 0.3765215]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.84277695 -0.69276327 1.5355403 1.050606; 0.48043957 0.34639397 0.13404581 0.3765215]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.8428065 -0.70139956 1.5442063 1.0532265; 0.4759521 0.3402354 0.13571689 0.37388158]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.8428132 -0.70333946 1.5461528 1.0538151; 0.47494408 0.33885205 0.13609225 0.3732886]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.84301406 -0.71420026 1.5572144 1.0572742; 0.46946445 0.3312709 0.13819376 0.37008327]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.8441626 -0.7516233 1.5957859 1.0696496; 0.45103955 0.30560485 0.14543489 0.35935822]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.844383 -0.7568574 1.6012405 1.0714403; 0.4485224 0.3020749 0.14644764 0.35790008]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.8444417 -0.7581624 1.602604 1.0718905; 0.4478985 0.30119857 0.14670014 0.35753912]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.8444912 -0.7580933 1.6025846 1.0719192; 0.44798413 0.3012975 0.1466868 0.357595]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.84569496 -0.7827067 1.6284018 1.080507; 0.4363143 0.2848651 0.15144937 0.3508549]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.8477977 -0.8079223 1.6557201 1.0901744; 0.4252285 0.26890022 0.15632844 0.34455854]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.8572321 -0.8952769 1.7525091 1.1258152; 0.3889737 0.21574277 0.17323107 0.324251]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.8587636 -0.9077494 1.7665131 1.1310884; 0.38398176 0.20833743 0.17564443 0.3214806]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.8591671 -0.910852 1.7700192 1.1324227; 0.38276252 0.20651785 0.17624475 0.32080725]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.85967463 -0.91001916 1.7696939 1.1326803; 0.38370553 0.20762202 0.17608361 0.321419]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.86517555 -0.94682896 1.8120046 1.1492242; 0.36995372 0.18674773 0.1832061 0.3139295]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.8726255 -0.98480517 1.8574307 1.168067; 0.35754094 0.16698673 0.19055429 0.307445]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.9026579 -1.1183894 2.0210474 1.2381746; 0.3177046 0.10130256 0.21640204 0.28731385]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.90719426 -1.1381172 2.0453115 1.2486293; 0.31192252 0.09170326 0.22021921 0.2844116]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.9083716 -1.1430199 2.0513918 1.2512776; 0.31053552 0.08936757 0.22116792 0.28372532]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.9096701 -1.1405143 2.0501845 1.2518243; 0.3131447 0.092461705 0.22068311 0.2854063]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.9168766 -1.1663414 2.083218 1.266779; 0.3068428 0.08116238 0.22568053 0.28249416]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.92505944 -1.1932232 2.1182828 1.2830262; 0.30096555 0.070083626 0.23088202 0.27994055]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.95538646 -1.2874444 2.242831 1.3416195; 0.28201193 0.0328987 0.2491133 0.2721423]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.9598864 -1.3010079 2.2608945 1.3501884; 0.27941757 0.027679875 0.2517378 0.27111357]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.96102643 -1.3043925 2.2654192 1.3523438; 0.27878746 0.026394852 0.2523927 0.27086893]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
fx Float32[0.9612871 -1.3037353 2.2650225 1.3524076; 0.27939188 0.02712652 0.25226548 0.27125403]
fp Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0]
125.03084f0

I then modified the program to be less verbose and to report when each function returns.
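The sketch below shows roughly what the instrumented functions look like. The MWE file itself is not reproduced here, so treat this as an assumption: the linear form of f is reconstructed from the printed values, and pred, the time span, and the solver settings are illustrative stand-ins rather than the exact code.

using OrdinaryDiffEq

# Reconstructed model (assumption): a linear map whose 2×2 weight matrix and bias come from p.
function f(x, p, t)
    out = reshape(p[1:4], 2, 2) * x .+ p[5:6]
    println("function f returning now", out)
    return out
end

# Hypothetical prediction: solve the ODE from the batch of initial conditions u0 (2×k matrix or vector).
function pred(u0, p)
    prob = ODEProblem(f, u0, (0.0f0, 1.0f0), p)   # the time span is an assumption
    Array(solve(prob, Tsit5(), saveat = 1.0f0))
end

function loss_batch(p, mb)
    println("==============================")
    println("p,mb", p, ", ", mb)
    l = sum(abs, pred(mb, p) .- mb)
    println("loss_batch returning now", l)
    return l
end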

julia> loss_batch(Float32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0],Float32[1.0 0.0 1.0 1.0; 1.0 1.0 0.0 0.7])
==============================
p,mbFloat32[0.47264105, 0.15957993, -0.8247262, -0.9430232, 0.0, 0.0], Float32[1.0 0.0 1.0 1.0; 1.0 1.0 0.0 0.7]
function f returning nowFloat32[-0.35208517 -0.8247262 0.47264105 -0.1046673; -0.7834433 -0.9430232 0.15957993 -0.5005363]
function f returning nowFloat32[-0.35208517 -0.8247262 0.47264105 -0.1046673; -0.7834433 -0.9430232 0.15957993 -0.5005363]
function f returning nowFloat32[-0.3520712 -0.8247149 0.47264367 -0.10465668; -0.78342336 -0.94300115 0.15957774 -0.50052303]
function f returning nowFloat32[-0.3518602 -0.82454425 0.4726841 -0.10449693; -0.78312314 -0.94266784 0.15954472 -0.50032276]
function f returning nowFloat32[-0.35162842 -0.8243569 0.47272855 -0.10432134; -0.78279334 -0.9423018 0.15950848 -0.5001028]
function f returning nowFloat32[-0.35082868 -0.82371074 0.47288203 -0.10371547; -0.78165567 -0.94103914 0.15938345 -0.4993439]
function f returning nowFloat32[-0.35071707 -0.8236206 0.47290346 -0.103630885; -0.7814969 -0.9408629 0.159366 -0.49923798]
function f returning nowFloat32[-0.35068923 -0.8235981 0.47290882 -0.10360981; -0.7814573 -0.9408189 0.15936165 -0.49921158]
function f returning nowFloat32[-0.35068923 -0.8235981 0.47290882 -0.10360981; -0.7814573 -0.94081897 0.15936165 -0.49921158]
function f returning nowFloat32[-0.3501717 -0.8231801 0.4730084 -0.10321762; -0.7807211 -0.94000185 0.15928076 -0.4987205]
function f returning nowFloat32[-0.34963885 -0.82275015 0.4731113 -0.102813795; -0.7799634 -0.93916094 0.15919757 -0.49821508]
function f returning nowFloat32[-0.34780204 -0.8212693 0.47346726 -0.10142117; -0.77735204 -0.9362631 0.15891102 -0.49647304]
function f returning nowFloat32[-0.34754583 -0.821063 0.47351712 -0.10122694; -0.77698797 -0.9358591 0.15887111 -0.4962302]
function f returning nowFloat32[-0.34748188 -0.8210115 0.47352958 -0.10117844; -0.7768971 -0.93575823 0.15886116 -0.4961696]
function f returning nowFloat32[-0.34748197 -0.8210116 0.4735296 -0.10117851; -0.77689725 -0.9357585 0.15886118 -0.4961697]
function f returning nowFloat32[-0.34629723 -0.82005763 0.47376028 -0.10028001; -0.7752135 -0.93389016 0.15867656 -0.4950465]
function f returning nowFloat32[-0.34508005 -0.8190796 0.47399956 -0.09935615; -0.77348477 -0.93197215 0.15848735 -0.49389312]
function f returning nowFloat32[-0.340891 -0.8157204 0.4748294 -0.09617487; -0.7675388 -0.9253761 0.15783736 -0.4899259]
function f returning nowFloat32[-0.3403078 -0.81525373 0.47494593 -0.09573173; -0.7667115 -0.9244586 0.15774706 -0.48937395]
function f returning nowFloat32[-0.3401623 -0.8151374 0.47497502 -0.09562114; -0.7665051 -0.9242298 0.15772454 -0.48923624]
function f returning nowFloat32[-0.34016293 -0.8151381 0.47497514 -0.09562151; -0.76650614 -0.9242309 0.15772468 -0.4892369]
function f returning nowFloat32[-0.33721623 -0.81278163 0.4755653 -0.09338177; -0.7623269 -0.9195957 0.15726873 -0.4864482]
function f returning nowFloat32[-0.3342048 -0.81038725 0.47618237 -0.09108869; -0.75806314 -0.91486865 0.15680543 -0.4836026]
function f returning nowFloat32[-0.323886 -0.80222374 0.47833765 -0.08321894; -0.74347454 -0.8987005 0.15522581 -0.47386447]
function f returning nowFloat32[-0.32245553 -0.80109805 0.4786425 -0.0821261; -0.7414553 -0.8964634 0.15500802 -0.4725163]
function f returning nowFloat32[-0.32209903 -0.8008178 0.47871876 -0.08185372; -0.74095225 -0.8959061 0.1549538 -0.47218043]
function f returning nowFloat32[-0.3221053 -0.80082506 0.4787197 -0.08185779; -0.74096227 -0.8959176 0.15495518 -0.47218704]
function f returning nowFloat32[-0.31558764 -0.7957061 0.48011833 -0.07687585; -0.7317673 -0.88573205 0.15396468 -0.46604767]
function f returning nowFloat32[-0.30900088 -0.7906052 0.48160422 -0.07181938; -0.72251254 -0.8754903 0.15297769 -0.45986548]
function f returning nowFloat32[-0.28661725 -0.7734787 0.48686138 -0.054573644; -0.6911712 -0.8408351 0.14966382 -0.4389207]
function f returning nowFloat32[-0.28353238 -0.7711471 0.48761454 -0.052188344; -0.6868668 -0.8360796 0.14921263 -0.436043]
function f returning nowFloat32[-0.28276563 -0.7705691 0.4878035 -0.051594906; -0.6857978 -0.83489865 0.14910083 -0.43532822]
function f returning nowFloat32[-0.2828238 -0.7706339 0.48781 -0.05163373; -0.68588984 -0.8350034 0.14911336 -0.43538892]
function f returning nowFloat32[-0.270896 -0.76167643 0.4907804 -0.04239313; -0.669277 -0.81665725 0.14738014 -0.4242799]
function f returning nowFloat32[-0.25906157 -0.75306773 0.4940061 -0.03314135; -0.65294003 -0.7986545 0.14571442 -0.4133437]
function f returning nowFloat32[-0.21927987 -0.72489256 0.50561273 -0.001812116; -0.5984218 -0.7386845 0.14026265 -0.37681645]
function f returning nowFloat32[-0.21379752 -0.72109854 0.507301 0.0025320007; -0.5909551 -0.73048365 0.13952854 -0.37181]
function f returning nowFloat32[-0.21244155 -0.7201674 0.5077259 0.003608575; -0.5891121 -0.72846043 0.13934837 -0.37057403]
function f returning nowFloat32[-0.2128105 -0.72056866 0.5077581 0.0033600465; -0.5896908 -0.729117 0.13942602 -0.37095577]
function f returning nowFloat32[-0.19394788 -0.70781827 0.51387036 0.018397572; -0.5641594 -0.70111835 0.1369589 -0.35382387]
function f returning nowFloat32[-0.17573652 -0.6964107 0.5206741 0.03318669; -0.5399815 -0.67473316 0.1347516 -0.33756152]
function f returning nowFloat32[-0.11501745 -0.6606294 0.54561204 0.083171405; -0.4605477 -0.5883757 0.12782812 -0.2840349]
function f returning nowFloat32[-0.10641288 -0.6557103 0.54929745 0.09030023; -0.44937027 -0.5762465 0.12687628 -0.27649626]
function f returning nowFloat32[-0.104304306 -0.6545324 0.550228 0.092055336; -0.44664562 -0.573294 0.12664832 -0.27465746]
function f returning nowFloat32[-0.10605597 -0.656397 0.5503409 0.0908631; -0.44937247 -0.57638174 0.12700915 -0.27645794]
function f returning nowFloat32[-0.08682744 -0.6464902 0.55966264 0.10711958; -0.42496237 -0.5500548 0.12509225 -0.25994596]
function f returning nowFloat32[-0.06838242 -0.63840485 0.57002234 0.12313901; -0.4022885 -0.5258164 0.12352779 -0.24454355]
function f returning nowFloat32[-0.006575223 -0.6146271 0.6080518 0.17781283; -0.32804474 -0.4469718 0.11892693 -0.19395325]
function f returning nowFloat32[0.0024408372 -0.6112648 0.61370534 0.18582001; -0.31727004 -0.43554673 0.11827634 -0.18660627]
function f returning nowFloat32[0.0046428894 -0.6104887 0.61513144 0.18778938; -0.31466216 -0.43278855 0.11812618 -0.18482572]
function f returning nowFloat32[0.002099994 -0.61310846 0.6152084 0.18603249; -0.31857523 -0.4372085 0.11863309 -0.1874128]
loss_batch returning now168.21149
+++++++++++++++++
168.21149f0


Appendix: Update. I changed the title of this issue because the error does not require using the optional data argument. If you remove the optional data and instead pass the minibatch into the loss function by hard-coding a closure, the vectorized version still does not work.

Code changes to switch from passing the optional data to sciml_train to having the loss function capture the data itself:

# this change removes the optional-data method
loss_vec(p, mb) = sum(abs, pred(mb, p) .- mb)
loss(p) = loss_vec(p, minibatch[1]...)  # closure to embed the data
# no optional data passed to sciml_train
res0 = DiffEqFlux.sciml_train(loss, p, ADAM(0.005), maxiters=300, cb=cb1)

Again, the vectorized loss does not work; the serial version does.
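
For completeness, here is a sketch of how the failing gradient step could be exercised directly with Zygote, bypassing sciml_train entirely. The assumption (not verified here) is that sciml_train's internal Zygote.gradient call is the path that fails; u0, p, minibatch, and loss_vec are as defined above.

using Zygote

mb = minibatch[1][1]                         # the 2×4 batch matrix from above
loss_vec(p, mb)                              # plain forward evaluation succeeds
Zygote.gradient(q -> loss_vec(q, mb), p)     # presumably reproduces the adjoint error above
Zygote.gradient(q -> loss_vec(q, u0), p)     # single column-vector batch; presumably works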

cems2 commented 4 years ago

I am closing this for now, as I have found what I think is a simpler working example. I will open an issue on that one instead.