SciML / SciMLSensitivity.jl

A component of the DiffEq ecosystem for enabling sensitivity analysis for scientific machine learning (SciML). Optimize-then-discretize, discretize-then-optimize, adjoint methods, and more for ODEs, SDEs, DDEs, DAEs, etc.
https://docs.sciml.ai/SciMLSensitivity/stable/

Better tspan promotion default in ForwardDiffSensitivity #204

Closed: ChrisRackauckas closed this issue 4 years ago

ChrisRackauckas commented 4 years ago

The current default causes failures, like:

https://stackoverflow.com/questions/60572989/stochastic-differential-equation-sensitivity-analysis-with-specified-noise

```julia
using StochasticDiffEq, DiffEqBase, DiffEqNoiseProcess, DiffEqSensitivity, Zygote

function lotka_volterra(du,u,p,t)
  x, y = u
  α, β, δ, γ = p
  du[1] = dx = α*x - β*x*y
  du[2] = dy = -δ*y + γ*x*y
end
function lotka_volterra_noise(du,u,p,t)
  du[1] = 0.1u[1]
  du[2] = 0.1u[2]
end
dt = 1//2^(4)
u0 = [1.0,1.0]
p = [2.2, 1.0, 2.0, 0.4]
prob1 = SDEProblem(lotka_volterra,lotka_volterra_noise,u0,(0.0,10.0),p)
sol1 = solve(prob1,EM(),dt=dt,save_noise=true)

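# Reuse the noise realization from sol1 so prob2 is driven by the same Brownian path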
W2 = NoiseWrapper(sol1.W)
prob2 = SDEProblem(lotka_volterra,lotka_volterra_noise,u0,(0.0,10.0),p,noise=W2)
sol2 = solve(prob2,EM(),dt=dt)

function predict_sde1(p)
  Array(concrete_solve(remake(prob1,p=p),EM(),dt=dt,sensealg=ForwardDiffSensitivity(convert_tspan=false),saveat=0.1))
end
loss_sde1(p) = sum(abs2,x-1 for x in predict_sde1(p))

loss_sde1(p)

# This gradient is successfully calculated
Zygote.gradient(loss_sde1,p)

function predict_sde2(p)
  Array(concrete_solve(prob2,EM(),prob2.u0,p,dt=dt,sensealg=ForwardDiffSensitivity(convert_tspan=false),saveat=0.1))
end
loss_sde2(p) = sum(abs2,x-1 for x in predict_sde2(p))

# This loss is successfully calculated
loss_sde2(p)

# This gradient calculation raises an error
Zygote.gradient(loss_sde2,p)
```
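For context, here is a minimal illustration of what the tspan promotion in question does (an assumption-level sketch, not the library's code): `ForwardDiffSensitivity` differentiates by seeding the parameters as `ForwardDiff.Dual` numbers, and converting `tspan` to match promotes the solver's time type to `Dual` as well, which can break components that only handle `Float64` time, such as a wrapped noise process.

```julia
using ForwardDiff

# Illustration only: under ForwardDiffSensitivity the parameters carry Dual numbers.
p = [2.2, 1.0, 2.0, 0.4]
pdual = ForwardDiff.Dual.(p, 1.0)

# Promoting tspan to the parameters' type makes the time type Dual too.
tspan = (0.0, 10.0)
tspan_dual = promote(tspan[1], tspan[2], pdual[1])[1:2]
typeof(tspan_dual[1])  # a ForwardDiff.Dual, no longer Float64
```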

I think we can make the default depend on whether the known problem case is present, i.e. whether a continuous callback is supplied; a sketch of that logic follows.
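A minimal sketch of that proposed default, with hypothetical helper names (the merged fix may differ): default `convert_tspan` to `nothing`, and when the user hasn't set it explicitly, convert only if a continuous callback is passed.

```julia
using DiffEqBase: ContinuousCallback, CallbackSet

# Hypothetical sketch: only continuous callbacks make event times depend on the
# differentiated parameters, so only then does tspan need promotion.
_has_continuous_callback(cb) = false
_has_continuous_callback(cb::ContinuousCallback) = true
_has_continuous_callback(cb::CallbackSet) = !isempty(cb.continuous_callbacks)

# kwargs here stands for the keyword arguments forwarded to solve.
function _default_convert_tspan(convert_tspan, kwargs)
    # An explicit user setting always wins; otherwise key off the callback kwarg.
    convert_tspan === nothing || return convert_tspan
    return _has_continuous_callback(get(kwargs, :callback, nothing))
end
```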

ChrisRackauckas commented 4 years ago

Fixed.