SciML / DiffEqBase.jl

The lightweight Base library for shared types and functionality for defining differential equation and scientific machine learning (SciML) problems

DiffEqFunction #52

Closed ChrisRackauckas closed 6 years ago

ChrisRackauckas commented 7 years ago

Instead of using trait dispatch, which is inherently unsound after Julia's #265, we should instead create a function-holder type like Optim does. This type would just hold a bunch of functions, and then have the dispatches which we currently look for, in order to help with version compatibility. To help users, we'll auto-convert the dispatch-based form and throw depwarns for a while when they put a function with dispatches into the ODEProblem etc.
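A minimal sketch of such a function-holder type, in the style of Optim (all names here are hypothetical; the eventual DiffEqFunction API differs):

```julia
# Hypothetical sketch: instead of overloading f with Val-based dispatches,
# bundle the components as fields of one type. Names are assumed, not final.
struct FunctionHolder{F,J,A}
    f::F          # f(t,u,du): the ODE right-hand side
    jac::J        # jac(t,u,J): explicit Jacobian, or nothing
    analytic::A   # analytical solution for testing, or nothing
end

# Keyword constructor so optional components can be supplied in any order.
FunctionHolder(f; jac=nothing, analytic=nothing) = FunctionHolder(f, jac, analytic)

# Solvers can then check a field instead of probing method tables.
has_jac(h::FunctionHolder) = h.jac !== nothing
```

This turns "does the user's f have a :jac overload?" into a plain field check, which is sound regardless of method-table state.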

ChrisRackauckas commented 7 years ago

@ScottPJones you around to help with this?

ChrisRackauckas commented 7 years ago

This will require specialization on keyword arguments in order to be useful; otherwise it'll cause a bunch of inferability issues. It needs to use keyword arguments because otherwise it'll be horrific to set specific functions, since the ordering will never be "good".

That'll be a good time to "prune the list" as well. Given that I let myself try as many of these things as possible, the current list is:

f(t,u,du) # Call the function
f(Val{:analytic},t,u,du) # The analytical solution. Used in testing
f(t,u,params,du) # Call the function to calculate with parameters params (vector)
f(Val{:tgrad},t,u,J) # Call the explicit t-gradient function
f(Val{:a},t,u,2.0,du) # Call the explicit parameter function with a=2.0
f(Val{:deriv},Val{:a},t,u,2.0,df) # Call the explicit parameter derivative function with a=2.0
f(Val{:paramjac},t,u,params,J) # Call the explicit parameter Jacobian function
f(Val{:jac},t,u,J) # Call the explicit Jacobian function
f(Val{:expjac},t,u,γ,J) # Call the explicit exponential Jacobian function exp(γJ)
f(Val{:invjac},t,u,iJ) # Call the explicit Inverse Jacobian function
f(Val{:invW},t,u,γ,iW) # Call the explicit inverse Rosenbrock-W function (M - γJ)^(-1)
f(Val{:invW_t},t,u,γ,iW) # Call the explicit transformed inverse Rosenbrock-W function (M/γ - J)^(-1)
f(Val{:hes},t,u,H) # Call the explicit Hessian function
f(Val{:invhes},t,u,iH) # Call the explicit Inverse Hessian function

However, the ones that I have actually found useful are:

f(t,u,du) # Call the function
f(Val{:analytic},t,u,du) # The analytical solution. Used in testing
f(t,u,params,du) # Call the function to calculate with parameters params (vector)
f(Val{:tgrad},t,u,J) # Call the explicit t-gradient function
f(Val{:paramjac},t,u,params,J) # Call the explicit parameter Jacobian function
f(Val{:jac},t,u,J) # Call the explicit Jacobian function
f(Val{:invjac},t,u,iJ) # Call the explicit Inverse Jacobian function
f(Val{:invW},t,u,γ,iW) # Call the explicit inverse Rosenbrock-W function (M - γJ)^(-1)
f(Val{:invW_t},t,u,γ,iW) # Call the explicit transformed inverse Rosenbrock-W function (M/γ - J)^(-1)

Hessians ended up not being useful at all. The optimization problem on the derivative never comes up in a fashion where it's useful, most interesting systems have a non-invertible Hessian, and I was hoping to make a Rosenbrock-like scheme which uses Hessians but found it's actually impossible. Single-parameter derivatives and calls are just too specialized to ever be useful in any sensitivity or estimation setup. The exponential of the Jacobian seems like it would be great, but it's actually almost impossible to compute symbolically for larger than 2x2 Jacobians, so it's not useful. All of the remaining ones have been put to good use, except invjac, though that one could easily be used in a steady-state solver.

So the type would hold the functions for these, and have a keyword argument for each one. In-place choice would be done like https://github.com/JuliaDiffEq/DiffEqBase.jl/issues/51, which will change to instead detect isinplace from the DiffEqFunction. There will also be a "compilation-free" version using FunctionWrappers, i.e. DiffEqFunction{isinplace,Float64,Float64} sets the element type for time and state, and it'll wrap it all via FunctionWrappers to make it concretely typed but not specializable, which will make it so that solve calls won't need to recompile for new functions.

ChrisRackauckas commented 7 years ago

The problem with the in-place calculation is that each problem type needs a different number of arguments to check for. So the top level may just require {inplace}(...), and then there can be specific ones for things like ODEs/SDEs/etc. that have the number to check set. Or should we just require that the user passes this if they are going far enough to define Jacobians and the like?
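The argument-count check can be sketched by inspecting the method table (assumed helper names; the actual DiffEqBase logic differs in detail):

```julia
# Hypothetical sketch of detecting in-place-ness from the number of
# positional arguments a user function accepts. Method.nargs counts the
# function object itself, so we subtract 1.
numargs(f) = maximum(m -> m.nargs, methods(f)) - 1

# For ODEs (in this era's argument order): the in-place form f(t,u,du)
# takes 3 arguments, the out-of-place form f(t,u) takes 2.
isinplace_ode(f) = numargs(f) == 3

f_oop = (t, u) -> -u
f_iip = (t, u, du) -> (du .= -u; nothing)
```

This is exactly why each problem type needs its own count: an SDE or DAE right-hand side has a different arity, so the check can't live at the top level.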

This can't go default until kwargs are fast. Then all of the internals will be changed to DiffEqFunctions. The "automatic" version will still work since the basic problem types will just set isinplace and turn it into a DiffEqFunction if you pass a non-DiffEqFunction.

For the compilation-free version we can take t and u as inputs (and require in-place). Then we have to wrap as (t,u,du) -> (f(t,u,du); nothing) to make sure the in-place form returns nothing, and assume the output matches u when not in place (we can also check Core.Inference.return_type to see if this is necessary). That allows the use of FunctionWrappers.jl. I am unsure whether this form should be what's used as the default when a user doesn't pass a DiffEqFunction; it'll require benchmarking. But it probably will be, because then things won't have to recompile for each new function and things will feel snappier.

ChrisRackauckas commented 7 years ago

Note this is already functional:

DiffEqFunction(f;analytic=nothing,
                 tgrad=nothing,
                 jac=nothing,
                 invjac=nothing,
                 invW=nothing,
                 invW_t=nothing,
                 paramjac = nothing)
ChrisRackauckas commented 7 years ago
using FunctionWrappers

struct CallbackF64 f::FunctionWrappers.FunctionWrapper{Void,Tuple{Array{Float64}}} end
(cb::CallbackF64)(v) = cb.f(v)

f!(u) = (u.=u.^2)
f = CallbackF64(f!)
u = Float64[1,2,3]
f(u) # Errors

g!(u) = (u.=u.^2; nothing)
g = CallbackF64(g!)
g(u)
g!(u)

h! = (u) -> (f!(u);nothing)
h = CallbackF64(h!)
h(u)
h!(u)

using BenchmarkTools
@benchmark f!($u)

BenchmarkTools.Trial: 
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     10.949 ns (0.00% GC)
  median time:      12.083 ns (0.00% GC)
  mean time:        14.726 ns (0.00% GC)
  maximum time:     541.074 ns (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     1000

@benchmark g!($u)

BenchmarkTools.Trial: 
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     10.572 ns (0.00% GC)
  median time:      11.328 ns (0.00% GC)
  mean time:        13.529 ns (0.00% GC)
  maximum time:     722.690 ns (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     1000

@benchmark h!($u)

BenchmarkTools.Trial: 
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     26.053 ns (0.00% GC)
  median time:      28.318 ns (0.00% GC)
  mean time:        37.032 ns (0.00% GC)
  maximum time:     1.934 μs (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     1000

Why is h so much worse? https://discourse.julialang.org/t/closure-which-discards-the-return-is-slower/6583

devmotion commented 7 years ago

My comment is more a question: how do DDEs fit in this proposed DiffEqFunction structure?

During the last week I read a bit about sensitivity analysis for DDEs, and it seems for DDEs with constant delays it is not too complicated and also not too different from sensitivity analysis for ODEs. However, even this simple case requires Jacobians of all deviated arguments and parameter derivatives of both f and h to build an extended DDE of the model together with its sensitivities.

The QS models are simple enough to carry out this sensitivity analysis after estimation in reasonable time, but the number of parameters is already so large that manually taking derivatives is quite painful. Hence I hacked together something similar to ParameterizedFunctions for DDEs (with only the features I currently need; I haven't tried autodifferentiation yet since symbolic results seem to be fine) and noticed that the required functions do not really fit in the current framework of parameter derivatives and Jacobians. Parameter derivatives of DDEs also require the history function as a function argument, and I also need parameter derivatives of the initial function; hence I ended up with functions of the form (::Type{Val{:deriv}}, ::Type{Val{:f}}, ::Type{Val{param}}, t, u, h, param, du) (and similarly for h). Moreover, since I need multiple Jacobians I ended up with functions of the form (::Type{Val{:jac}}, t, u, h, J, ::Type{Val{idx}}), where idx is just some order of the deviated arguments, with Val{0} corresponding to the argument t, i.e. the "regular" Jacobian. I could then put this together into a DDELocalSensitivityProblem, similar to ODELocalSensitivityProblem, but it still feels very rough.

I'm not sure how this should be handled in a better and consistent way, even more so with DiffEqFunction. It seems that DDEs demand different keywords... Regarding parameter derivatives, this could be handled by regarding both f and h as DiffEqFunctions. But the problem with multiple Jacobians and different function signatures still remains. Or is there a much simpler way of fitting DDEs into that framework?

ChrisRackauckas commented 7 years ago

My comment is more a question: how do DDEs fit in this proposed DiffEqFunction structure?

Most of what you discussed is a problem because we're defining this extra stuff by dispatch in a way that doesn't scale. That's being dropped. If everything is in an AbstractDiffEqFunction, then it's clear to use f.jac etc., and each problem can have a well-defined interface for what all of those components are supposed to look like. That is discussed more below.

Regarding parameter derivatives, this could be handled by regarding both f and h to be a DiffEqFunction.

h shouldn't be a DiffEqFunction. It should just be a 1-arg function, h(t) (with extra keywords etc. of course)
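To illustrate the 1-arg form (a hypothetical constant-history example; the real DDE interface carries more arguments and keywords than shown here):

```julia
# Hypothetical sketch of the 1-argument history-function form h(t):
# h supplies the state for t <= t0. A constant history is assumed here.
h(t) = 1.0

# A delay RHS then reads past states through h; e.g. for the scalar DDE
# du/dt = -u(t - 1), sketched with an assumed (t, u, h) signature:
f_dde(t, u, h) = -h(t - 1)
```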

Or is there a much simpler way of fitting DDEs in that framework?

I just came to the conclusion that there will need to be a few different forms. DiffEqFunction will help clear this up too, both in docs and code. Essentially what I have now is the basic form that works for ODEs, SDEs, steady state problems, and some variations on those. But things like DDEs, DAEs, RODEs, second-order ODEs, etc. may need something different. I noticed that this even trips things up when designing the no-specialization versions.

https://github.com/JuliaDiffEq/DiffEqBase.jl/blob/master/src/diffeqfunction.jl#L40

So that's all just for ODEs. Maybe we should clear up the naming and really make this clearly about ODEs, and then we need other specializations.

devmotion commented 7 years ago

Maybe we should clear up the naming and really make this clearly about ODEs, and then we need other specializations

Yes, I think it would be good to make it clear that this is only for ODEs.

ChrisRackauckas commented 7 years ago

Alright, will do. The top-level will be AbstractDiffEqFunction but then it'll have things like ODEFunction, etc. In fact, we kind of already have a few of these, like https://github.com/JuliaDiffEq/DiffEqBase.jl/blob/master/src/problems/ode_problems.jl#L36 . This kind of convergence means we're onto something.

This is still all unreleased and undocumented, so for now we're free to just change it around 👍

ChrisRackauckas commented 7 years ago

Why is h so much worse? https://discourse.julialang.org/t/closure-which-discards-the-return-is-slower/6583

That was a benchmarking mistake. It's the same. So we're doing this for the function wrappers.

devmotion commented 7 years ago

Are there already plans for parameterized functions? If we only want to pass AbstractDiffEqFunction at some point, we also need a specialized version for those, I assume?

ChrisRackauckas commented 7 years ago

I think it would make sense to fold the parameterized function constructors into this.

ChrisRackauckas commented 7 years ago

It's working so I gave the no-specialization a whirl:

using OrdinaryDiffEq
V = x -> x^2
Vp = x -> 2x
f = (t,λ) -> -Vp(λ)
λ₀ = 2.3  # initial location

prob = ODEProblem(f, λ₀, (0.0, 10.0))
λ = solve(prob, Tsit5(); reltol=1E-6)
f2 = NSODEFunction{false}(f,0.0,λ₀)
prob = ODEProblem(f2, λ₀, (0.0, 10.0))
λ = solve(prob, Tsit5(); reltol=1E-6)

prob = ODEProblem(f, λ₀, (0.0, 10.0))
@time for i in 1:10000 λ = solve(prob, Tsit5(); reltol=1E-6) end

f2 = NSODEFunction{false}(f,0.0,λ₀)
prob = ODEProblem(f2, λ₀, (0.0, 10.0))
@time for i in 1:10000 λ = solve(prob, Tsit5(); reltol=1E-6) end

ff2 = (t,λ) -> -Vp(λ)
f3 = NSODEFunction{false}(ff2,0.0,λ₀)
prob = ODEProblem(f3, λ₀, (0.0, 10.0))
@time λ = solve(prob, Tsit5(); reltol=1E-6)
@time for i in 1:10000 λ = solve(prob, Tsit5(); reltol=1E-6) end

Looks like not specializing on the function could have some real performance dips:

  2.222532 seconds (39.72 M allocations: 759.277 MiB, 5.26% gc time)
  3.764759 seconds (51.82 M allocations: 979.767 MiB, 5.18% gc time)
  0.000508 seconds (5.19 k allocations: 100.484 KiB)
  3.664144 seconds (51.82 M allocations: 979.767 MiB, 5.34% gc time)

but then it does work: changing the function doesn't require re-compiling.

ChrisRackauckas commented 6 years ago
using OrdinaryDiffEq
V = x -> x^2
Vp = x -> 2x
f = (λ,p,t) -> -Vp(λ)
λ₀ = 2.3  # initial location

struct BadWrapper
  f
end
(f::BadWrapper)(u,p,t) = f.f(u,p,t)

println("Timings")

prob = ODEProblem(f, λ₀, (0.0, 10.0))
λ = solve(prob, Tsit5(); reltol=1E-6)
f2 = NSODEFunction{false}(f,λ₀,nothing,0.0)
prob = ODEProblem(f2, λ₀, (0.0, 10.0))
λ = solve(prob, Tsit5(); reltol=1E-6)

prob = ODEProblem(f, λ₀, (0.0, 10.0))
@time for i in 1:10000 λ = solve(prob, Tsit5(); reltol=1E-6) end

f2 = NSODEFunction{false}(f,λ₀,nothing,0.0)
prob = ODEProblem(f2, λ₀, (0.0, 10.0))
λ = solve(prob, Tsit5(); reltol=1E-6)
@time for i in 1:10000 λ = solve(prob, Tsit5(); reltol=1E-6) end

f3 = BadWrapper(f)
prob = ODEProblem(f3, λ₀, (0.0, 10.0))
λ = solve(prob, Tsit5(); reltol=1E-6)
@time for i in 1:10000 λ = solve(prob, Tsit5(); reltol=1E-6) end

println("Recompilation test")

ff2 = (λ,p,t) -> -Vp(λ)
f4 = NSODEFunction{false}(ff2,λ₀,nothing,0.0)
prob = ODEProblem(f4, λ₀, (0.0, 10.0))
@time λ = solve(prob, Tsit5(); reltol=1E-6)
@time for i in 1:10000 λ = solve(prob, Tsit5(); reltol=1E-6) end

f5 = BadWrapper(f)
prob = ODEProblem(f5, λ₀, (0.0, 10.0))
@time λ = solve(prob, Tsit5(); reltol=1E-6)
@time for i in 1:10000 λ = solve(prob, Tsit5(); reltol=1E-6) end

Timings
  1.845485 seconds (40.83 M allocations: 961.151 MiB, 6.35% gc time)
  3.364641 seconds (52.91 M allocations: 1.153 GiB, 4.30% gc time)
  1.800266 seconds (45.62 M allocations: 1.009 GiB, 6.49% gc time)
Recompilation test
  0.000421 seconds (5.31 k allocations: 121.984 KiB)
  3.255309 seconds (52.91 M allocations: 1.153 GiB, 4.32% gc time)
  0.000269 seconds (4.57 k allocations: 105.922 KiB)
  1.791620 seconds (45.62 M allocations: 1.009 GiB, 5.87% gc time)

It seems that a non-typed struct has less overhead than a FunctionWrapper, so we can get rid of the FunctionWrapper stuff.

ChrisRackauckas commented 6 years ago

I took the type parameter off of the DiffEqFunction's f and got:

Timings
  1.923516 seconds (40.83 M allocations: 961.151 MiB, 8.79% gc time)
  3.510622 seconds (52.91 M allocations: 1.153 GiB, 5.51% gc time)
  2.028891 seconds (45.62 M allocations: 1.009 GiB, 8.41% gc time)
Recompilation test
  0.000400 seconds (5.31 k allocations: 121.984 KiB)
  3.468768 seconds (52.91 M allocations: 1.153 GiB, 5.63% gc time)
  0.000245 seconds (4.57 k allocations: 105.922 KiB)
  1.925140 seconds (45.62 M allocations: 1.009 GiB, 7.91% gc time)

Maybe we should just default to having it recompilation-free? I wonder if something changed?

ChrisRackauckas commented 6 years ago

I ran this first with type parameters and second without.


using OrdinaryDiffEq
f(x,p,t) = -2x
λ₀ = 2.3  # initial location

struct BadWrapper
  f
end
(f::BadWrapper)(u,p,t) = f.f(u,p,t)

println("Timings")

prob = ODEProblem(f, λ₀, (0.0, 10.0))
λ = solve(prob, Tsit5(); reltol=1E-6)
@time for i in 1:10000 λ = solve(prob, Tsit5(); reltol=1E-6) end

f2 = NSODEFunction{false}(f,λ₀,nothing,0.0)
prob = ODEProblem(f2, λ₀, (0.0, 10.0))
λ = solve(prob, Tsit5(); reltol=1E-6)
@time for i in 1:10000 λ = solve(prob, Tsit5(); reltol=1E-6) end

f3 = BadWrapper(f)
prob = ODEProblem(f3, λ₀, (0.0, 10.0))
λ = solve(prob, Tsit5(); reltol=1E-6)
@time for i in 1:10000 λ = solve(prob, Tsit5(); reltol=1E-6) end

f4 = DiffEqFunction(f)
prob = ODEProblem(f4, λ₀, (0.0, 10.0))
λ = solve(prob, Tsit5(); reltol=1E-6)
@time for i in 1:10000 λ = solve(prob, Tsit5(); reltol=1E-6) end

println("Recompilation test")

ff2(x,p,t) = -2x

prob = ODEProblem(ff2, λ₀, (0.0, 10.0))
@time λ = solve(prob, Tsit5(); reltol=1E-6)
@time λ = solve(prob, Tsit5(); reltol=1E-6)

f5 = NSODEFunction{false}(ff2,λ₀,nothing,0.0)
prob = ODEProblem(f5, λ₀, (0.0, 10.0))
@time λ = solve(prob, Tsit5(); reltol=1E-6)

f6 = BadWrapper(f)
prob = ODEProblem(f6, λ₀, (0.0, 10.0))
@time λ = solve(prob, Tsit5(); reltol=1E-6)

f7 = DiffEqFunction(f)
prob = ODEProblem(f7, λ₀, (0.0, 10.0))
@time λ = solve(prob, Tsit5(); reltol=1E-6)

Timings
  0.454379 seconds (1.63 M allocations: 362.701 MiB, 15.66% gc time)
  2.523341 seconds (43.19 M allocations: 1.009 GiB, 4.83% gc time)
  1.750404 seconds (40.76 M allocations: 958.710 MiB, 5.83% gc time)
  1.758988 seconds (40.76 M allocations: 958.710 MiB, 5.49% gc time)
Recompilation test
  0.820791 seconds (357.63 k allocations: 17.555 MiB, 0.69% gc time)
  0.000116 seconds (165 allocations: 37.250 KiB)
  1.121036 seconds (394.20 k allocations: 18.775 MiB, 0.67% gc time)
  0.000386 seconds (4.32 k allocations: 105.922 KiB)
  0.000287 seconds (4.08 k allocations: 98.328 KiB)

Timings
  0.437701 seconds (1.95 M allocations: 366.058 MiB, 12.88% gc time)
  2.597095 seconds (43.21 M allocations: 1.009 GiB, 5.21% gc time)
  1.754888 seconds (40.82 M allocations: 959.778 MiB, 6.54% gc time)
  1.821740 seconds (40.82 M allocations: 959.778 MiB, 6.58% gc time)
Recompilation test
  0.445639 seconds (343.67 k allocations: 14.675 MiB)
  0.000168 seconds (199 allocations: 37.641 KiB)
  0.702128 seconds (384.60 k allocations: 16.057 MiB)
  0.000347 seconds (4.33 k allocations: 105.969 KiB)
  0.000352 seconds (4.08 k allocations: 98.375 KiB)

so there is a cost, but we should run the benchmarks.

ChrisRackauckas commented 6 years ago
Timings
  1.669102 seconds (40.46 M allocations: 955.811 MiB, 6.27% gc time)
  2.548835 seconds (43.19 M allocations: 1.009 GiB, 4.52% gc time)
  1.797751 seconds (45.58 M allocations: 1.008 GiB, 5.37% gc time)
  1.785501 seconds (45.58 M allocations: 1.008 GiB, 5.35% gc time)
Recompilation test
  1.021186 seconds (344.07 k allocations: 16.643 MiB, 0.50% gc time)
  0.000239 seconds (4.05 k allocations: 97.984 KiB)
  0.940447 seconds (343.20 k allocations: 16.572 MiB, 0.69% gc time)
  0.000350 seconds (4.32 k allocations: 105.922 KiB)
  0.000304 seconds (4.08 k allocations: 98.328 KiB)

That's with also disabling the type parameter in solve.jl of OrdinaryDiffEq.jl. There is some compilation happening somewhere else as well...

But you can see that this does indeed have a large effect on timing, so false alarm @YingboMa, but still no FunctionWrappers.

YingboMa commented 6 years ago

Here is the benchmark that I ran. I removed the specialization by deleting ::F in this line.

With specialization: [work-precision plot "bench1"]

Without specialization: [work-precision plot "bench2"]

using OrdinaryDiffEq, DiffEqDevTools, Plots
f = (du,u,p,t) -> begin
  x = view(u,1:7)   # x
  y = view(u,8:14)  # y
  v = view(u,15:21) # x′
  w = view(u,22:28) # y′
  du[1:7] .= v
  du[8:14].= w
  for i in 14:28
    du[i] = zero(u[1])
  end
  for i=1:7,j=1:7
    if i != j
      r = ((x[i]-x[j])^2 + (y[i] - y[j])^2)^(3/2)
      du[14+i] += j*(x[j] - x[i])/r
      du[21+i] += j*(y[j] - y[i])/r
    end
  end
end

prob = ODEProblem(f,[3.0,3.0,-1.0,-3.0,2.0,-2.0,2.0,3.0,-3.0,2.0,0,0,-4.0,4.0,0,0,0,0,0,1.75,-1.5,0,0,0,-1.25,1,0,0],(0.0,3.0))

sol = solve(prob,Vern9(),abstol=1e-12,reltol=1e-10,maxiters=1000000)
test_sol = TestSolution(sol);
abstols = 1./10.^(4:8)
reltols = 1./10.^(2:6)
solve.(prob, [Vern9(), DP5(), Tsit5(), BS3(), VCABM()])
setups = [
    Dict(:alg=>Vern9())
    Dict(:alg=>DP5())
    Dict(:alg=>BS3())
    Dict(:alg=>Tsit5())
    Dict(:alg=>VCABM())
]
wp = WorkPrecisionSet(prob,abstols,reltols,setups;appxsol=test_sol,save_everystep=false,numruns=20)
plot(wp)
using BenchmarkTools
# With specialization 2.545 ms (2617 allocations: 360.31 KiB)
# Without specialization 2.717 ms (2622 allocations: 360.39 KiB)
# 6% slowdown
@btime solve($prob, $(Vern9()));
# With specialization 1.238 ms (1263 allocations: 153.44 KiB)
# Without specialization 1.279 ms (1268 allocations: 153.52 KiB)
# 3% slowdown
@btime solve($prob, $(DP5())  );
# With specialization 1.327 ms (1558 allocations: 230.11 KiB)
# Without specialization 1.330 ms (1563 allocations: 230.19 KiB)
# 0.2% slowdown
@btime solve($prob, $(Tsit5()));
# With specialization 1.573 ms (1867 allocations: 240.36 KiB)
# Without specialization 1.577 ms (1872 allocations: 240.44 KiB)
# 0.25% slowdown
@btime solve($prob, $(BS3())  );
# With specialization 680.727 μs (906 allocations: 153.73 KiB)
# Without specialization 680.111 μs (911 allocations: 153.81 KiB)
# 0.1% speedup
@btime solve($prob, $(VCABM()));
YingboMa commented 6 years ago

Here is a compile time benchmark.

With specialization:

julia> @time @testset "Convergence Tests" begin include("test/ode/ode_convergence_tests.jl") end
Stiff Solvers
Higher Order
Stiff Solvers
Higher Order
Test Summary:     | Pass  Total
Convergence Tests |   82     82
160.611925 seconds (104.92 M allocations: 5.130 GiB, 1.78% gc time)
Base.Test.DefaultTestSet("Convergence Tests", Any[], 82, false)

julia> @time @testset "Convergence Tests" begin include("test/ode/ode_convergence_tests.jl") end
Stiff Solvers
Higher Order
Stiff Solvers
Higher Order
Test Summary:     | Pass  Total
Convergence Tests |   82     82
  1.133072 seconds (2.59 M allocations: 147.105 MiB, 24.35% gc time)
Base.Test.DefaultTestSet("Convergence Tests", Any[], 82, false)

Without specialization:

julia> @time @testset "Convergence Tests" begin include("test/ode/ode_convergence_tests.jl") end
Stiff Solvers
Higher Order
Stiff Solvers
Higher Order
Test Summary:     | Pass  Total
Convergence Tests |   82     82
161.215670 seconds (106.48 M allocations: 5.228 GiB, 1.82% gc time)
Base.Test.DefaultTestSet("Convergence Tests", Any[], 82, false)

julia> @time @testset "Convergence Tests" begin include("test/ode/ode_convergence_tests.jl") end
Stiff Solvers
Higher Order
Stiff Solvers
Higher Order
Test Summary:     | Pass  Total
Convergence Tests |   82     82
  1.170612 seconds (2.59 M allocations: 147.050 MiB, 24.10% gc time)
Base.Test.DefaultTestSet("Convergence Tests", Any[], 82, false)
ChrisRackauckas commented 6 years ago

What happens when the recompile flag is flipped? https://github.com/JuliaDiffEq/OrdinaryDiffEq.jl/blob/master/src/solve.jl#L269-L277

YingboMa commented 6 years ago

After

diff --git a/src/solve.jl b/src/solve.jl
index cfd5821..9b9b85b 100644
--- a/src/solve.jl
+++ b/src/solve.jl
@@ -266,7 +266,7 @@ function init{algType<:OrdinaryDiffEqAlgorithm,recompile_flag}(
                       calculate_error = false)
   end

-  if recompile_flag == true
+  if recompile_flag == false
     FType = typeof(f)
     SolType = typeof(sol)
     cacheType = typeof(cache)

and

diff --git a/src/problems/ode_problems.jl b/src/problems/ode_problems.jl
index 276cb86..5080335 100644
--- a/src/problems/ode_problems.jl
+++ b/src/problems/ode_problems.jl
@@ -1,9 +1,9 @@
 struct StandardODEProblem end

 # Mu' = f
-struct ODEProblem{uType,tType,isinplace,P,F,J,C,MM,PT} <:
+struct ODEProblem{uType,tType,isinplace,P,J,C,MM,PT} <:
                AbstractODEProblem{uType,tType,isinplace}
-  f::F
+  f
   u0::uType
   tspan::Tuple{tType,tType}
   p::P
@@ -21,7 +21,7 @@ struct ODEProblem{uType,tType,isinplace,P,F,J,C,MM,PT} <:
       _mm = mass_matrix
     end
     new{typeof(u0),promote_type(map(typeof,tspan)...),
-       iip,typeof(p),typeof(f),typeof(jac_prototype),
+       iip,typeof(p),typeof(jac_prototype),
        typeof(callback),typeof(_mm),
        typeof(problem_type)}(
        f,u0,tspan,p,jac_prototype,callback,_mm,problem_type)

I have

julia> @time @testset "Convergence Tests" begin include("test/ode/ode_convergence_tests.jl") end
Stiff Solvers
Higher Order
Stiff Solvers
Higher Order
Test Summary:     | Pass  Total
Convergence Tests |   82     82
165.114885 seconds (108.85 M allocations: 5.270 GiB, 1.83% gc time)
Base.Test.DefaultTestSet("Convergence Tests", Any[], 82, false)

julia> @time @testset "Convergence Tests" begin include("test/ode/ode_convergence_tests.jl") end
Stiff Solvers
Higher Order
Stiff Solvers
Higher Order
Test Summary:     | Pass  Total
Convergence Tests |   82     82
  1.504599 seconds (5.84 M allocations: 197.521 MiB, 20.86% gc time)
Base.Test.DefaultTestSet("Convergence Tests", Any[], 82, false)

[work-precision plot "bench3"]

julia> @btime solve($prob, $(Vern9())); # slowdown 4.9%
  2.676 ms (3652 allocations: 377.38 KiB)
julia> @btime solve($prob, $(DP5())  ); # slowdown 8.6%
  1.354 ms (1800 allocations: 162.72 KiB)
julia> @btime solve($prob, $(Tsit5())); # slowdown 8.4%
  1.448 ms (2209 allocations: 241.17 KiB)
julia> @btime solve($prob, $(BS3())  ); # slowdown 8.2%
  1.713 ms (2774 allocations: 255.42 KiB)
julia> @btime solve($prob, $(VCABM())); # slowdown 24%
  899.333 μs (3383 allocations: 193.33 KiB)
ChrisRackauckas commented 6 years ago

It looks like something is still recompiling though?

ChrisRackauckas commented 6 years ago

Tests show that DiffEqFunctions infer with kwargs on v0.7. So this was made the default. There are fallbacks so the previous method of overloading warns and automatically builds a DiffEqFunction. In reality, it's an AbstractODEFunction for ODEProblems, and the standard is an ODEFunction. This addresses @devmotion 's point that these functions may/will need to be slightly different for different problem types. We will need to go and create the others, and this will give a nice way to document how to do all of the available overloads for a given problem type.

Additionally, this lets us easily set a non-compiling form by simply making the type parameters on the ODEFunction Any. We can allow users to set a package-wide default for the less specializing form. The tests above show that the cost is about 3.5x for small enough problems, but this is too much of a hit for a performance-focused package. With https://github.com/JuliaLang/Pkg.jl/issues/458, we could just have the user set whether they want faster startups or more specialization as the default. For now, it's just set by a constant RECOMPILE_BY_DEFAULT.
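A rough sketch of how an Any-typed field avoids per-function specialization (assumed names; the real ODEFunction and RECOMPILE_BY_DEFAULT machinery differ):

```julia
# Sketch: storing the user function behind an abstractly-typed field means
# call sites can't specialize on the function's concrete type, so solver
# internals compile once and are reused for any new function, at the cost
# of a dynamic dispatch on each call.
struct NonSpecializedFunction
    f::Any   # Any-typed field: no specialization on f's concrete type
end
(nf::NonSpecializedFunction)(args...) = nf.f(args...)

nf = NonSpecializedFunction((u, p, t) -> -u)
```

Swapping in a different function rebuilds only the wrapper, not the solver code compiled against it, which is the faster-startup trade-off described above.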

So there's work left to do with non-ODEs, but in general this worked out.