Closed ClaudMor closed 3 years ago
This is an example doing 1 initial condition and all parameters: https://diffeqflux.sciml.ai/dev/examples/feedback_control/
Thanks for the reference.
Unfortunately, I'm not yet comfortable enough with neural ODEs, so I was planning to use simpler tools like DiffEqParamEstim (or others?).
Anyway, from what I saw in the reference, I thought I could probably write a custom function like `predict_univ`, pass it to `build_loss_objective`, and then use Optim, as described here. But `build_loss_objective` takes a `::DEProblem`, so I'm not sure it will work this way.
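For what it's worth, `build_loss_objective` accepts a `prob_generator` keyword that remaps the optimization vector onto a fresh problem, so the concatenated `[u0; p]` vector can be split inside it. A rough sketch, assuming the older DiffEqParamEstim interface in which the returned objective is directly callable by Optim; the `data` matrix and time grid `t` here are made-up placeholders, not real measurements:

```julia
using DifferentialEquations, DiffEqParamEstim, Optim

function lotka_volterra!(du, u, p, t)
    x, y = u
    α, β, δ, γ = p
    du[1] = α*x - β*x*y
    du[2] = -δ*y + γ*x*y
end

prob = ODEProblem(lotka_volterra!, [1.0, 1.0], (0.0, 10.0), [1.5, 1.0, 3.0, 1.0])
t = collect(0.0:0.1:10.0)
data = ones(2, length(t))  # fake "measurements", just for illustration

# θ holds [u0; p]; prob_generator rebuilds the problem from the full vector,
# so both initial conditions and parameters are optimized together
cost = build_loss_objective(prob, Tsit5(), L2Loss(t, data);
                            prob_generator = (prob, θ) -> remake(prob, u0 = θ[1:2], p = θ[3:end]))

θ0  = [1.0, 1.0, 1.5, 1.0, 3.0, 1.0]
res = optimize(cost, θ0, NelderMead(), Optim.Options(iterations = 200))
```

Gradient-free `NelderMead` is used here only to keep the sketch simple; the keyword names and return types may differ across DiffEqParamEstim versions.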
Does a working variation of something like this exist? Or would you have any other suggestion?
You're trying to make a parameter estimation system that doesn't support this do it, instead of using a parameter estimation system that does.
```julia
using DifferentialEquations, Flux, Optim, DiffEqFlux, DiffEqSensitivity, Plots

function lotka_volterra!(du, u, p, t)
    x, y = u
    α, β, δ, γ = p
    du[1] = dx = α*x - β*x*y
    du[2] = dy = -δ*y + γ*x*y
end

# Initial condition
u0 = [1.0, 1.0]

# Simulation interval and intermediary points
tspan = (0.0, 10.0)
tsteps = 0.0:0.1:10.0

# LV equation parameter. p = [α, β, δ, γ]
p = [1.5, 1.0, 3.0, 1.0]

# Concatenate initial conditions and parameters into one optimization vector
theta = [u0; p]

# Setup the ODE problem, then solve
prob = ODEProblem(lotka_volterra!, u0, tspan, p)
sol = solve(prob, Tsit5())

# Plot the solution
plot(sol)
savefig("LV_ode.png")

function loss(theta)
    _prob = remake(prob,u0=theta[1:2],p=[3:end])
    sol = solve(_prob, Tsit5(), saveat = tsteps)
    # Squared deviation of the solution from 1 (hard-coded "data")
    loss = sum(abs2, sol .- 1)
    return loss, sol
end

callback = function (p, l, pred)
    display(l)
    plt = plot(pred, ylim = (0, 6))
    display(plt)
    # Tell sciml_train to not halt the optimization. If return true, then
    # optimization stops.
    return false
end

result_ode = DiffEqFlux.sciml_train(loss, theta,
                                    ADAM(0.1),
                                    cb = callback,
                                    maxiters = 100)
```
This is a relatively straightforward example. I would highly suggest just using the right tool.
Thank you very much,
I just noticed that the line:

```julia
_prob = remake(prob,u0=theta[1:2],p=[3:end])
```

should maybe be changed to:

```julia
_prob = remake(prob,u0=theta[1:2],p=theta[3:end])
```
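Right: `theta[3:end]` selects the parameter entries of the combined vector, while `[3:end]` on its own would not even parse, since `end` is only meaningful inside an indexing expression. A quick pure-Julia check of the intended split, with values taken from the example above:

```julia
# Combined optimization vector: first two entries are u0, the rest are p
theta = [1.0, 1.0, 1.5, 1.0, 3.0, 1.0]

u0_part = theta[1:2]    # initial conditions
p_part  = theta[3:end]  # model parameters [α, β, δ, γ]

println(u0_part)  # [1.0, 1.0]
println(p_part)   # [1.5, 1.0, 3.0, 1.0]
```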
Hello,
I was wondering if it was possible to simultaneously optimize an ODE's parameters and initial conditions w.r.t. some given dataset.
The approach I followed so far was:
I know I could probably use Turing.jl, but I can't make it scale very well with the number of parameters. I also read that the `prob_generator` function switches the roles of initial conditions and model parameters, but I didn't really understand whether it could help in this mixed situation. So I'd like to ask which ways (if any) there are to calibrate initial conditions and parameter values simultaneously.
If you'd like to give an explicit example, I'll post below a simple model with hard-coded calibration data:
Alternatively, if you'd like to use a more complex model already integrated with Optim, Turing's NUTS and ADVI, you may take a look at the MWE from here.
Thanks in advance