SciML / DiffEqParamEstim.jl

Easy scientific machine learning (SciML) parameter estimation with pre-built loss functions
https://docs.sciml.ai/DiffEqParamEstim/stable/

Build loss function for ensemble of ODE's, but with different sampling times for each simulation in the ensemble #218

Closed TorkelE closed 1 year ago

TorkelE commented 1 year ago

I have a system (modelled as an ODE), for which I have measurements using different initial conditions. I want to find its parameters. I have looked at this example: https://docs.sciml.ai/DiffEqParamEstim/stable/tutorials/ensemble/ and it is pretty much just what I want to do. However, there's one problem:

The various experiments are not sampled at the same timepoints. How do I handle this? In the example, in:

obj = build_loss_objective(enprob,Tsit5(),loss,Optimization.AutoForwardDiff(),trajectories=N,
                           saveat=data_times)

we build the loss function, but also set the options for the ODE solver (saveat=data_times). Here, I would need saveat=data_times to have different values for each run of the ensemble. Is there a good way to do this?

(I have tried setting up my own version, which builds several non-ensemble loss functions and then sums them all up. However, AD does not work on it, and it generally does not seem to work well.)

Vaibhavdixit02 commented 1 year ago

Unless I am wrong, you can't provide separate saveats in the solve for an EnsembleProblem, so I don't think this would be possible with this interface.

What issue were you facing with the separate solves and summing approach?
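For reference, the "separate solves and summing" approach can be written without any custom loss machinery: solve the same ODEProblem once per experiment with that experiment's own saveat, and sum squared errors. A minimal sketch, assuming a base ODEProblem prob and hypothetical per-experiment containers u0s[i], data_times[i], and data[i] (names not from the thread), with the total wrapped in an OptimizationFunction so AD works:

```julia
using DifferentialEquations, Optimization

# Hypothetical inputs: `prob` is the base ODEProblem, and for experiment i
# we have initial condition u0s[i], sampling times data_times[i], and
# measurements data[i] (a matrix matching the solution layout).
function total_loss(p, _)
    l = zero(eltype(p))
    for i in eachindex(data)
        # Each experiment gets its own initial condition and saveat.
        prob_i = remake(prob; u0 = u0s[i], p = p)
        sol = solve(prob_i, Tsit5(); saveat = data_times[i])
        l += sum(abs2, Array(sol) .- data[i])
    end
    return l
end

# Wrapping in an OptimizationFunction is what enables ForwardDiff here.
optf = OptimizationFunction(total_loss, Optimization.AutoForwardDiff())
optprob = OptimizationProblem(optf, p0)  # p0: initial parameter guess
```

Because saveat is a keyword to solve (not a property shared across an ensemble), each experiment can use different sampling times this way.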

TorkelE commented 1 year ago

Currently, I do something like this:

function make_optimization_problem(m::Model, exps::Vector{Experiments})
    cost_functions = [get_cost_function(m, exp) for exp in exps]
    function cost_function(u, p)
        sum(cf(u) for cf in cost_functions)
    end
    lb = get_lb(m)
    ub = get_ub(m)
    return Optimization.OptimizationProblem(cost_function, init_p(m); lb = lb, ub = ub)
end

however, for a start I get an ERROR: Use OptimizationFunction to pass the derivatives or automatically generate them with one of the autodiff backends error when I try to run solve using BFGS(), so I presume some of the AD stuff doesn't work through this.

Also, the results of the optimisations are really bad (while optimising on only a single experiment works). Might just be a natural thing, but as a step in figuring out why, I figured I should avoid adding as much custom code as possible and just use the standard SciML tools where possible.

Vaibhavdixit02 commented 1 year ago

You haven't created an OptimizationFunction to pass to OptimizationProblem there, which is what the error is saying; it is necessary for the AD stuff to work.

Vaibhavdixit02 commented 1 year ago
function make_optimization_problem(m::Model, exps::Vector{Experiments})
    cost_functions = [get_cost_function(m, exp) for exp in exps]
    function cost_function(u, p)
        sum(cf(u) for cf in cost_functions)
    end
    lb = get_lb(m)
    ub = get_ub(m)
    optf = OptimizationFunction(cost_function, AutoForwardDiff())
    return Optimization.OptimizationProblem(optf, init_p(m); lb = lb, ub = ub)
end
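For completeness, the returned problem can then be solved; a hedged sketch, assuming OptimizationOptimJL is available and that the lb/ub bounds call for a box-constrained method such as Fminbox(BFGS()):

```julia
using Optimization, OptimizationOptimJL  # OptimizationOptimJL provides the Optim.jl solvers

# `m` and `exps` as in the thread (Model and Vector{Experiments}).
optprob = make_optimization_problem(m, exps)

# Box-constrained BFGS, since the problem carries lower/upper bounds.
sol = solve(optprob, Fminbox(BFGS()))
sol.u  # the estimated parameters
```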
TorkelE commented 1 year ago

Thanks, I'll try this and it hopefully should work :)