ChrisRackauckas opened 8 years ago
Could you please add a more explicit minimal example?
Here's one:
```julia
using ODE

t0 = 0.0
y0 = rand(4,2)
ode = ODE.ExplicitODE(t0, y0, (t,y,dy)->dy[1]=y[1])
opts = Dict(:initstep=>0.1,
            :tstop=>1.0,
            # :tout=>[0.,0.5,1.],
            # :points=>:all,
            :reltol=>1e-5,
            :abstol=>1e-5)
stepper = ODE.ModifiedRosenbrockIntegrator
sol = ODE.solve(ode, stepper; opts...)

for (t,y) in sol # iterate over the solution
    println((t,y))
end
```
Our constraint on that is `F!(t,y,dy)`, which expects the input `y` and the output `dy` to be of the same type as the initial data `y0`. We don't want to change that. The issue now is how we deal with `J!(t,y,J)`. It would be consistent to have `y` of the same type as in `F!`, but then what is `J` going to be? I see several options:

1. `y::AbstractArray{T}` and `J::AbstractMatrix{T}`, relying on user reshapes between `y` and `J`. For this to work we also have to reshape `y` inside the integrator, and both reshapes (the user's and the integrator's) have to be consistent.
2. `y::AbstractVector{T}` and `J::AbstractMatrix{T}`, where `y` is reshaped to a vector inside the integrator. Maybe we could provide a reshape method as an option to `ode23s`? The downside is that we become inconsistent with the signature of `F!`, but on the plus side we could easily generate the Jacobian automatically, as we do now.
3. Restrict everything to `y::AbstractVector{T}`.

Of course, if we allow `y::AbstractArray` we will have to do some reshapes inside the integrator anyway if we want to solve any linear system. Can this be done without a loss of performance in the simplest case of dense arrays?
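To make the "reshape inside the integrator" idea concrete, here is a minimal sketch in Base Julia of solving a linear system on the flattened state and viewing the result back in the matrix shape. `W` is a hypothetical stand-in for the iteration matrix a Rosenbrock method would build, not a variable from ODE.jl:

```julia
using LinearAlgebra

y0 = rand(4, 2)                 # matrix-shaped state, as in the example above
n  = length(y0)                 # 8 unknowns once flattened
W  = I + 0.1 * rand(n, n)       # hypothetical stand-in for the solver's W matrix

rhs = rand(size(y0)...)         # residual with the same shape as y0
# solve on the flat vector, then view the solution in the original shape
dy  = reshape(W \ vec(rhs), size(y0))
```

The only shape-dependent pieces are the `vec`/`reshape` calls at the boundary; the linear algebra itself stays vector-based.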
This is related to https://github.com/JuliaLang/ODE.jl/pull/102 . We should probably merge that test into this branch. Note that there, `ode23s` is one of the solvers that doesn't work with this. I'll look into it.
I think we should treat the matrix as a scalar: anything that is not an `AbstractVector` is a scalar as far as ODE is concerned.
But we also need some discussion/documentation on scalar vs. vector and how to handle the different kinds of input.
@pwl reshapes don't require a copy; they just create a new view. So there really shouldn't be a performance loss at all if you `reshape(A\vec(b), size(b))`.
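A quick check of that claim in Base Julia: `vec` and `reshape` on a dense array return views of the same memory, so writes through one are visible through the other and no data is copied.

```julia
A = rand(4, 2)
v = vec(A)               # flat view of A's data, not a copy
v[1] = 42.0              # write through the vector...
# ...is visible in the original matrix: A[1, 1] is now 42.0

B = reshape(v, size(A))  # reshaping back is also copy-free
B[2, 1] = -1.0           # likewise visible as A[2, 1]
```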
@ChrisRackauckas it's not about performance but about consistency with API for other solvers. If you have an idea on how to realize this please let us know.
I was responding to:

> Can this be done without the loss of performance in simplest cases of dense arrays?

The answer is yes. When done correctly with views (and `reshape`/`vec`, which are essentially shorthands for common views), you won't have a measurable loss of performance.
As for the API, I don't see why you would have a restriction at all. Julia's iteration lets you iterate through any array using `eachindex(A)`. There are some type-inference issues it can have (the boxing issue, mostly only a problem when multithreading, but these can all be avoided with the right type declarations or by isolating things to a function). Doing it like this is also safer for things like `SubArray`s or non-standard-indexed arrays, and in odd cases it will even be faster, when the linear-fast indexing differs from the standard indexing.
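For reference, an in-place operation written with `eachindex` works unchanged for vectors, matrices, and `SubArray`s. This is a hedged sketch of the pattern, not code from ODE.jl:

```julia
# dy .= c .* y, written generically over array shape and indexing style
function scale!(dy::AbstractArray, y::AbstractArray, c)
    for i in eachindex(dy, y)   # shared index set; errors if shapes disagree
        dy[i] = c * y[i]
    end
    return dy
end

y  = rand(4, 2)          # matrix-shaped state
dy = similar(y)
scale!(dy, y, 2.0)

v  = view(vec(y), 1:4)   # a SubArray goes through the same code path
dv = similar(v)
scale!(dv, v, 2.0)
```

Because the loop never assumes 1-based linear indexing, the same function covers the "matrix as state" case without any reshapes.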
Sorry, I didn't notice this line.
You want to have `F!(t,y::AbstractArray,dy::AbstractArray)` and reshape `y` inside the integrator, which would be totally fine by itself. The issue is that we need an interface for providing a custom Jacobian: `J!(t,y::AbstractArray,J)`. For consistency, we would have to force the user to use the same reshape we use inside the integrator.

While this is in principle possible, this kind of solution is not too far from restricting ourselves to `F!(t,y::AbstractVector,dy::AbstractVector)` and having the user reshape the vectors `y` and `dy` into an array inside `F!` (so you can reinterpret a vector as an array for all the purposes of defining the equation). This approach makes for a simpler implementation of the integrator and less arbitrariness in the specification of `J!`. It also leaves the particular choice of the reshape in the hands of the user.
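Under that vector-only restriction, the user-side reshape would look something like the sketch below. The `4×2` shape and the equation are taken from the example at the top of the thread; the function body is illustrative, not an ODE.jl interface:

```julia
# F! receives flat vectors; the user views them as 4x2 matrices to
# state the equation in its natural shape. reshape makes views, not copies,
# so writing into dY mutates dy directly.
function F!(t, y::AbstractVector, dy::AbstractVector)
    Y  = reshape(y, 4, 2)
    dY = reshape(dy, 4, 2)
    dY .= Y          # e.g. dy/dt = y, written in matrix form
    return nothing
end

y  = vec(rand(4, 2))
dy = similar(y)
F!(0.0, y, dy)
```

The integrator then only ever sees vectors, while the user keeps full control of how the flat state maps onto the matrix.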
I don't have a way of passing the Jacobian yet, but I am going to document that it needs to work on the `vec`'d version of the input. I think that in most cases where the user can hand-calculate the Jacobian, they could also easily have written it against a vector anyway, so this is more of a fringe case. And when the user doesn't pass the Jacobian, the generalization still works without a hitch. I can see someone making a different choice, though.
`ode23s` fails with 2D matrices as the `y0`. Here's the traceback:

The code is generated from DifferentialEquations.jl's wrapper. I'll be making that public ASAP, so you'll be able to use it to track this down if needed. I know that it's this method, since all of the other methods worked just fine on the same test case.