Martyn-R closed this issue 6 years ago.
I never set it up to check before the first iteration because I didn't know there was a use case. In all of my uses, I've always just changed the initial condition. In cases where something like this was necessary, I found that I needed something different from the standard callback call, which is what the `initialize` function is for.
But I can see this being useful. PK/PD simulations technically start with an event, so this would make them easier to initialize. However, I can also see this causing issues. A `ContinuousCallback` would definitely have issues here, so that wouldn't be included, so it would be special-casing to call discrete callbacks at the start. But this would violate the "end of interval" assumption which is used in many discrete callbacks, like manifold projection and uncertainty quantification.
I almost want to say that using `initialize` is the proper way to do it. Though what about a keyword argument for allowing the callback at the start?
It would not be my preference if the first event has to be treated as a special case (and only so if the problem is defined over a time span that starts exactly at that first event), and I don't think it matters much whether it has to be done manually (either via `initialize` or by adapting `u0`) or via a keyword option.
For me it would be OK to just let the time span begin right before the first event. This solution just has to be mentioned in the documentation, as I guess most users will expect the first event to be picked up in case it coincides with the start of the time span.
An additional benefit of this approach is that (the variability of) baseline values can be assessed as well.
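A minimal sketch of the start-slightly-earlier workaround, using the current DifferentialEquations.jl API; the model, the dose size, and the event at t = 0 are illustrative assumptions, not from this thread:

```julia
using DifferentialEquations

# Hypothetical one-compartment model: exponential elimination with a
# "dose" event at t = 0; `dose` and the rate constant are made-up numbers.
f(u, p, t) = -0.5 * u

dose = 10.0
condition(u, t, integrator) = t == 0.0          # the event sits exactly at t = 0
affect!(integrator) = integrator.u += dose

# Start the time span slightly before the first event, so the callback
# machinery sees t = 0 as the right end of a completed step.
prob = ODEProblem(f, 0.0, (-0.1, 10.0))
sol = solve(prob, Tsit5(); callback = DiscreteCallback(condition, affect!),
            tstops = [0.0])
```

With the span starting at -0.1, the step ending at t = 0 triggers the callback that a span starting at exactly 0.0 would miss.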
Let me document what the issue is here so it's searchable in the future. Both `ContinuousCallback`s and `DiscreteCallback`s are applied after the steps. I'll go back through the documentation to make sure that's clear. This mechanism is required for most uses. A `ContinuousCallback` by definition needs to have a successful step because it has to have an interval to interpolate over to pull back in time, so that for sure has no wiggle room. A `DiscreteCallback`, on the other hand, can in theory apply to both ends of the interval. However, there are a lot of use cases that need the assumption that a step has been completed. In fact, I think all of the ones in the callback library do: the manifold projection takes the step and projects it back to the manifold, the uncertainty quantification uses the error estimate to perturb the solution by a random amount on the order of the error, etc.

Breaking this assumption would break a lot of callbacks, and the only real gain would be that the user could apply a callback at the start of the simulation (since every other left end of an interval is the right end of the previous step; `save_positions` was created to take care of this issue). So the question then becomes: is it worthwhile to have special initialization steps?
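To make the after-the-step behavior concrete, here is a hypothetical sketch (current `condition(u, t, integrator)` argument order) that records when a `DiscreteCallback` condition is evaluated:

```julia
using DifferentialEquations

checked = Float64[]   # times at which the condition was evaluated

function condition(u, t, integrator)
    push!(checked, t)
    return false      # never fire; we only observe when checks happen
end
cb = DiscreteCallback(condition, integrator -> nothing)

prob = ODEProblem((u, p, t) -> -u, 1.0, (0.0, 1.0))
solve(prob, Tsit5(); callback = cb)

# `checked` now contains only the right ends of completed steps; t = 0.0,
# the left end of the first interval, is never among them.
```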
I looked into the code and it is possible, but the implementation would not be nice. The implementation assumes that your `u0` is supposed to be correct, so using `initialize` to modify the `integrator` like I mentioned above actually doesn't work, since the `integrator` has already placed `u0` in the saved solution array (because this is how that array is initialized!). So to allow modification of the initial condition in `initialize`, I would need to change the way the `u` arrays are built, add a check to `initialize` that `integrator.u` changed, add a special updating phase in that case, and then possibly update `prob.u0` because the true initial condition no longer matches the problem's initial condition.
But I'm not sure anything is gained. The `initialize` setup of the callback is quite obscure and was put in there for things like manifold projection and jump callbacks to do some fancy things. Most users won't know about it or touch it. So I think the simplest way for users to do this would be to just modify `u0` themselves. And the parameter estimation and Monte Carlo tools give you straightforward ways to modify `u0`. Thus I think it's actually easier for users to not use `initialize`, and I probably shouldn't be recommending it.
So with the workaround to make it work being messy and breaking a lot of assumptions, and the fact that it's much easier for the user to just modify `u0` (or start earlier in time), I'm going to take those as the correct ways to do this. If you do start earlier in time, note that you can set `save_start=false` to make it not save the initial condition, and you can then have it actually do its first save at the event by setting `dt` appropriately. That will give you the nice solution object that you want.
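A sketch of that workaround (the model and numbers are hypothetical; fixed stepping is used here so the first step lands exactly on the event):

```julia
using DifferentialEquations

f(u, p, t) = -0.5 * u
prob = ODEProblem(f, 1.0, (-0.1, 10.0))   # start a bit before the t = 0 event

# Skip saving the artificial u(-0.1); with fixed dt = 0.1 the first step
# ends exactly at t = 0, so the first saved point is the event time.
sol = solve(prob, Tsit5(); save_start = false, adaptive = false, dt = 0.1)
```

The resulting `sol.t` then begins at the event time rather than at the artificial earlier start.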
> An additional benefit of this approach is that (the variability of) baseline values can be assessed as well.
As noted above, that can be done by modifying `u0` in the Monte Carlo `prob_func`, or in the parameter estimation `prob_generator`, or in the loop you're using to call `solve`, etc. So there is no feature benefit to adding this. It would just be a matter of convenience, and I'm not convinced creating a workaround here is convenient for anyone.
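For illustration, a hedged sketch of modifying `u0` per trajectory, using the current ensemble API (`EnsembleProblem` and `remake`; the Monte Carlo interface at the time of this thread was named differently, and the model and dose are hypothetical):

```julia
using DifferentialEquations

f(u, p, t) = -0.5 * u
base = ODEProblem(f, 0.0, (0.0, 10.0))

# Apply the "event at t = 0" by hand: each trajectory starts from a
# (randomly perturbed) post-dose state instead of firing a callback.
dose = 10.0
prob_func(prob, i, repeat) = remake(prob; u0 = dose + 0.1 * randn())

ensemble = EnsembleProblem(base; prob_func = prob_func)
sim = solve(ensemble, Tsit5(), EnsembleSerial(); trajectories = 10)
```

This also gives the baseline variability mentioned in the quote, since each trajectory draws its own starting value.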
I'll make sure the "after step" assumption is appropriately documented.
Thanks for the background and explanation.
Two proper ways to deal with it are therefore (1) modify `u0` manually, or (2) start a bit earlier in time.
Re-opening, I think I can implement this.
FYI: for the specific use case in the issue description, the newly added `PeriodicCallback` could come in handy. I made sure that the first time step is handled correctly using an `initialize` method. See https://github.com/JuliaDiffEq/DiffEqCallbacks.jl/blob/d0992033b68338a7351e2588aab912dcf226b3af/src/periodic.jl#L22-L28.
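A hypothetical usage sketch; note that in current DiffEqCallbacks the application at the initial time is opt-in via the `initial_affect` keyword (an assumption about the present-day API; the dose model is made up):

```julia
using DifferentialEquations, DiffEqCallbacks

f(u, p, t) = -0.5 * u
dose = 10.0   # hypothetical dose size

# Dose every 5 time units; `initial_affect = true` also applies the affect
# at t = tspan[1], which the version from this thread handled through its
# `initialize` method.
cb = PeriodicCallback(integrator -> integrator.u += dose, 5.0;
                      initial_affect = true)

prob = ODEProblem(f, 0.0, (0.0, 100.0))
sol = solve(prob, Tsit5(); callback = cb)
```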
The reordering of https://github.com/JuliaDiffEq/OrdinaryDiffEq.jl/pull/212 definitely is a big step forward (I don't know why I didn't think of that!), but there are a few other things that have to be done. It will still get FSAL wrong, because the user will likely only change `integrator.u` instead of `integrator.uprev`, which is what is actually used. But I think we can have it change those automatically by checking `u_modified!` and just `recursivecopy!`ing it all over if the user did make a modification (defaulting to assuming they did if the `initialize!` is not the trivial default). After some fixes like that, you should be able to change the state in `initialize!`, which will be a good option to allow users to start at the first point if they want.
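A sketch of what that enables, assuming the current `initialize` signature `(cb, u, t, integrator)` and the exported `u_modified!`; the dose model and names here are hypothetical:

```julia
using DifferentialEquations

dose = 10.0   # hypothetical

# `initialize` runs once at the start of integration; modify the state
# there and flag it with `u_modified!` so FSAL caches are refreshed.
function init_dose!(cb, u, t, integrator)
    integrator.u += dose
    u_modified!(integrator, true)
end

cb = DiscreteCallback((u, t, integrator) -> false, integrator -> nothing;
                      initialize = init_dose!)

prob = ODEProblem((u, p, t) -> -0.5 * u, 0.0, (0.0, 10.0))
sol = solve(prob, Tsit5(); callback = cb)
```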
We'll probably want this PK/PD example in the docs as well.
BTW, we should add `PeriodicCallback` to http://docs.juliadiffeq.org/latest/features/callback_library.html
I got some PRs started on this. There are a few details to work through:

(1) `u0` is already saved. What do we do? Modify the saved value? Save another value? Require the user to do `savevalues!` if they want to save?

With this, my suggestion for people who want the callback on the left side of the intervals would be to simply make `initialize` do `if condition then affect!`.
Re: `PeriodicCallback` docs: see https://github.com/JuliaDiffEq/DiffEqDocs.jl/pull/53/files.
As for (3), I think I got it capturing all of the relevant cases where a user goes to modify `u` or the cache.
For (1), I think I am going to go with forcing the save to match the callback, i.e. treating it like a discontinuity at `u0`. Doing this only if the user set `save_positions` to do so is a little difficult, because in a callback set each individual callback has its own `callback.save_positions[2]` that could be different. The other reasonable behavior would be to check all of those and, if any are true, do a save. The reason is that if the user doesn't save here, then the interpolation in the first interval will be off due to not accounting for the discontinuity. We have to be very careful about forcing a save when a user may not want it (i.e. for PDEs with large memory usage), but it seems the user would only be setting this to true if they wanted the discontinuities to be handled correctly "automatically", so it would be silly if we didn't do that in the first interval as well. In fact, I think this works well with `save_start`, since if someone wants to discard a "fake initial condition" that gets modified by the callback, they can just do `save_start=false` and then the discontinuity gets handled and it will save the "correct" initial condition. Somewhere down the line someone might ask for an option to turn off this save and we can add an option for it, but it seems like such an edge case that I am probably just worrying too much.
For (2), I'll have @devmotion help me finish up DelayDiffEq, but OrdinaryDiffEq and StochasticDiffEq should be done pretty soon.
I went ahead and added the option for turning that save off. It'll go down as one of the misc arguments, which hopefully isn't an instance of common-interface arg bloat and will have a good use case.
It all requires a DiffEqBase.jl tag, which is already in motion, and 3 working PRs are in place for the `*DiffEq` libraries. They need some tests though, and I'll be off for a bit.
Not super well tested (though I am testing it in a private repo), but it works now. The simple case of

```julia
function initialize!(cb, t, u, integrator)
    if cb.condition(t, u, integrator)
        cb.affect!(integrator)
    end
end
```

does what you think it should.
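For completeness, here is that idea wired up with the current argument order `condition(u, t, integrator)` and `initialize = (cb, u, t, integrator)` (the snippet above uses the older `(t, u, ...)` order); the model, dose, and event times are hypothetical, and the function is named `check_first!` to avoid clashing with DiffEqBase internals:

```julia
using DifferentialEquations

tpoints = 0:5:100
dose = 10.0   # hypothetical event: add a dose at each point in `tpoints`
condition(u, t, integrator) = t in tpoints
affect!(integrator) = integrator.u += dose

# Check the condition once at the left end of the first interval, so an
# event exactly at t = tspan[1] is not missed.
function check_first!(cb, u, t, integrator)
    if cb.condition(u, t, integrator)
        cb.affect!(integrator)
        u_modified!(integrator, true)
    end
end

cb = DiscreteCallback(condition, affect!; initialize = check_first!)
prob = ODEProblem((u, p, t) -> -0.5 * u, 0.0, (0.0, 100.0))
sol = solve(prob, Tsit5(); callback = cb, tstops = collect(tpoints))
```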
I have an ODE system in which the state of the integrator is affected at certain time points. The first 'event' is at t = 0, but when I set up the problem with `ODEProblem()` and let the time span start at t = 0.0, this first event is missed. If I set up the problem with a time span starting before t = 0, say at -0.1, this first discrete condition is picked up.

What is the correct way to ensure the event at the start of the time span is picked up?
As the discrete condition I use a check against `tpoints`, with `tpoints` equal to `0:5:100`.
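A condition of that shape might look like the following sketch (a hypothetical reconstruction, since the original snippet was not included in this thread):

```julia
tpoints = 0:5:100

# 2017-era argument order `condition(t, u, integrator)`, as used in this
# issue; the solver must also be given these times as `tstops` so that
# steps land on them exactly.
condition(t, u, integrator) = t in tpoints
```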