dlfivefifty opened this issue 7 years ago
Note that if you just target the DiffEq common interface, the same code will work for DifferentialEquations.jl, Sundials.jl, ODEInterface.jl, and ODE.jl. That's why I'm suggesting doing that.
All that would be required is that you be able to build a Problem type:

https://juliadiffeq.github.io/DiffEqDocs.jl/latest/types/ode_types.html

If there's a function that builds the problem from an ApproxFun function, then it should all just work.
Ah, OK. Note that the code here is derelict, and is of the variety of "discretize in time, ∞-dimensional in space", whereas your previous request was the more traditional pseudo-spectral approach of "∞-dimensional in time, discretize in space".
I think the latter approach of "∞-dimensional in time, discretize in space" is the right way to go, as it means nothing special has to be done to DiffEq.
For periodic problems it should be really easy.
How do you support constraints? For example, with a Dirichlet heat equation

$$u_t = u_{xx},\qquad u(t,-1) = u(t,1) = 0$$

it's necessary that we constrain the value at ±1.
Also, is it necessary that `u0` is an `AbstractArray`? Maybe a more elegant approach is to allow `u0` to be a `Fun`, and as long as the time stepper is written in Julia, it should work.
> Also, is it necessary that u0 is an AbstractArray?

That can be relaxed. Will a `Fun` index like an array?
No. Why would you need indexing like an array?
Let me rephrase that. A vector-valued `Fun` behaves like a vector:

```julia
f = Fun(x->[exp(x); sin(x)], [-1,1])
f[1]  # is Fun(exp, [-1,1])
f[2]  # is Fun(sin, [-1,1])
```

A scalar-valued `Fun` doesn't at the moment support indexing, but `f[1]` could return `f`.
For the update steps which do a linear combination of the partial steps, like in a Runge–Kutta method. For example, say we took two partial RK steps and stored them in `k[1]` and `k[2]`; the next step would compute something like:

```julia
for i in eachindex(u)
    tmp[i] = a31*k[1][i] + a32*k[2][i]
end
```
Is it better, then, to treat a `Fun` like a scalar?
Well, here's the big question: is a `Fun` mutable? Is it something that you'd update in-place?
Yes, a `Fun` should be treated like a scalar. As I understand it, the following code should work to solve the heat equation with periodic boundary conditions for t in [0,1] if everything is designed correctly:

```julia
u0 = Fun(θ->cos(cos(θ-0.1)), PeriodicInterval())
prob = ODEProblem((t,u) -> u'', u0, (0,1))  # ' is overridden for a Fun to mean derivative
```

(In practice, there should be a `chop` to make sure the length doesn't blow up, but that's a secondary consideration.)
`Fun` is `immutable`, but for no particular reason. (My policy is to always use `immutable` to force me to think before I change to `type`.)
I think that immutable vs. mutable is a better way of making the distinction. The reason is that we accept two different kinds of functions: `du = f(t,u)` and `f(t,u,du)`, which updates `du` in-place. The first doesn't require mutability. It seems like it should all work out if the types turn out all right. I don't know if this'll work with Sundials/ODEInterface though; I'm pretty sure they need arrays of `Float64`.
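For concreteness, a minimal sketch of the two conventions (the `-u` right-hand side is just illustrative):

```julia
# Out-of-place form: returns du; works for scalars and other immutable types
f(t, u) = -u

# In-place form: writes into du; requires a mutable container such as an Array
function f!(t, u, du)
    for i in eachindex(u)
        du[i] = -u[i]
    end
    return nothing
end
```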
I think it's better to leave it immutable in this context: the number of coefficients of a `Fun` can change with each time step, and so it's hard to see off-the-shelf code handling this.
> I think it's better to leave it immutable in this context: the number of coefficients of a Fun can change with each time step, and so it's hard to see off-the-shelf code handling this.
OrdinaryDiffEq.jl (the native DifferentialEquations.jl methods) actually already handles that. See the cell model:
https://juliadiffeq.github.io/DiffEqDocs.jl/latest/features/callback_functions.html
But would they change size with every function call? That would be too difficult to handle.
Yes. There is another way we could go about this: send only matrices and arrays to DiffEq.jl, and have this package pre- and post-process the data...
Would they still be changing size every function call? Because then it might be too difficult to handle anyways, and if you're always allocating to resize I don't think that there'd be too much of a performance difference trying to just chop a few of them out.
Question: what does broadcasting do to a `Fun`?
Chris, something of interest for the time evolution of PDEs is the use of IMEX Runge–Kutta schemes, i.e. schemes with multiple tableaus to handle multi-physics problems. These would work well with ApproxFun, because one can cache partial QR factorizations of linear operators (whose inverses are required in DIRK schemes, for example). Should we consider these in DifferentialEquations.jl?

Here are a few IMEX RK schemes: http://www.cs.ubc.ca/~ascher/papers/ars.pdf
These are already being considered, just haven't been implemented yet. It'll look pretty much the same as the `ODEProblem`:

```julia
prob = ODEIMEXProblem(f, g, u0, tspan)
```

where you give the two functions, stiff and non-stiff, and then

```julia
solve(prob, alg; kwargs...)
```

solves the problem with the chosen algorithm. That's the interface we're looking for. FYI, I'm going to be concentrating on more Rosenbrock and exponential RK methods first though.
I'm going to take a crack at getting equations on `Fun`s working with RK methods first. If I can get that going and have some tests on it, then I'll be able to plan accordingly for the other methods.
> These would work well with ApproxFun, because one can cache partial QR factorizations of linear operators (whose inverses are required in DIRK schemes, for example).

Is that something that's done automatically? Can you show the docs for that?
OK. We have an ETDRK4 scheme built here somewhere. It uses some special functions from ApproxFun's `specialfunctions.jl`, like `expα`, `expβ`, and `expγ`, for numerical stability.
I think you just call `qrfact` on an `Operator` and it stores the rest. Sheehan implemented this.
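If so, the usage would presumably look something like this sketch (assuming `qrfact` accepts an `Operator` as described; the right-hand sides are just examples):

```julia
using ApproxFun

S = Chebyshev()
D = Derivative(S)
A = [Evaluation(S,-1);
     Evaluation(S,1);
     D^2 + I]

QR = qrfact(A)                    # factor once; the factorization is cached
u1 = QR \ [0.; 0.; Fun(exp, S)]   # first solve builds up the factorization
u2 = QR \ [0.; 0.; Fun(cos, S)]   # later solves reuse the cached factors
```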
> Question: what does broadcasting do to a Fun?

```julia
f.([1,2,3]) == [f(1), f(2), f(3)]
```
I'll add `qrfact` to the docs.
> Question: what does broadcasting do to a Fun?

Sorry, maybe this is more helpful:

```julia
f = Fun(exp, [-1,1])
g = Fun(cos, [-1,1])
h = Fun(x->exp(cos(x)), [-1,1])
f.(g) == h
```

Or also:

```julia
g = Fun(cos, [-1,1])
h = Fun(x->exp(cos(x)), [-1,1])
exp.(g) == h
```
This code works for the heat equation with periodic boundary conditions:

```julia
using ApproxFun, DifferentialEquations, Plots

S = Fourier()
u0 = Fun(θ->cos(cos(θ-0.1)), S)
tspan = (0., 1.)
f(t, u) = (Fun(u, S)'').coefficients  # coefficients -> Fun, differentiate twice, back to coefficients
prob = ODEProblem(f, u0.coefficients, tspan)
u = solve(prob)
plot(Fun(u(1.0), S))
```
I just pushed an update that supports the following code to solve u_t = u_xx + (cos θ + 1) u_x, by truncating at each stage to the same number of coefficients as `u0`:

```julia
using ApproxFun, DifferentialEquations, SpectralTimeStepping, Plots

S = Fourier()
u0 = Fun(θ->cos(cos(θ-0.1)), S)
c = Fun(cos, S)
prob = SpectralTimeSteppingProblem((t,u)->u'' + (c+1)*u', u0, (0.,1.))
@time u = solve(prob);  # 4.087185 seconds (5.85 M allocations: 226.447 MB, 2.34% gc time)

@gif for t in linspace(0., 1., 100)
    plot(u(t); ylims=(0,1))
end
```
Changing the equation to `(t,u)->0.001u'' + (c+1)*u'` demonstrates the limitations of a fixed discretization size. It can be improved by giving `u0` extra coefficients:

```julia
S = Fourier()
u0 = Fun(θ->cos(cos(θ-0.1)), S, 200)  # construct u0 with 200 coefficients
c = Fun(cos, S)
prob = SpectralTimeSteppingProblem((t,u)->0.001u'' + (c+1)*u', u0, (0.,10.))
@time u = solve(prob);  # 76.336780 seconds (239.30 M allocations: 9.011 GB, 4.10% gc time)
```

but the timing is not very impressive, probably due to the CFL restriction. It would be good to incorporate implicit methods here.
> but the timing is not very impressive, probably due to the CFL restriction. It would be good to incorporate implicit methods here.
I set it up to work with the algorithm choices:

```julia
@time u = solve(prob, CVODE_BDF());
```

That then uses Sundials' adaptive BDF method and runs in 12 seconds, and

```julia
@time u = solve(prob, Trapezoid(), dt=1/4);
```

makes it do the Trapezoid (Crank–Nicolson) method in 40 seconds (this is on OrdinaryDiffEq.jl master; there'll be a tag that solidifies that syntax). Our stiff native solvers are still being worked on, but this allows any new algorithm we make/optimize to be used. So this should be a good long-term strategy.
That said:

> the limitations of a fixed discretization size.

This is only a limitation because I did not have enough foresight. I knew something like this would come up, which is why I wanted to hammer out what it would be.
DifferentialEquations.jl offers two ways to declare a function: `f(t,u)` for `Number` and `f(t,u,du)` for `AbstractArray`. In general, most packages only support the latter syntax (only OrdinaryDiffEq.jl's solvers support the first), and what actually happens is that the two-argument function is transformed into the three-argument function. You should do this yourself for performance, but it's an ease-of-use thing.
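As a sketch, the automatic transformation amounts to something like the following (illustrative only, not the actual internal code), which shows why writing the in-place form yourself avoids an allocation per call:

```julia
# User-supplied out-of-place function
f(t, u) = -u

# Naive wrapper into the in-place form: f allocates a new array on
# every call, which the wrapper then copies into du
f!(t, u, du) = (du .= f(t, u); nothing)
```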
However, the distinction from before was based off of `AbstractArray` and `Number`. So you probably tried to give a `Fun` directly, and it failed, so you made it use a fixed-size array. Note that the first syntax should actually work as long as `+`, `-`, `*`, `/`, and `abs` are defined for a `Fun`. So really, what's wrong is that I should've been dispatching on mutable vs. immutable (`f(t,u)` treats all immutable types as scalars, `f(t,u,du)` handles any mutable type). The mutable form could then only change the sizes if `+`, `-`, `*`, `/` can deal with the different sizes.

Does that make sense? So I think I can play with it a bit and get it to work directly on changing discretization sizes. That will only make native Julia solvers compatible with it (which, if over time our growth matches that of the non-stiff solvers, should give better performance than other packages), so it would be nice to have the problem object offer the ability for both fixed and changing sizes, since the fixed size already is (and will remain) compatible with any method, even those that are from C/Fortran.
While this won't be a priority for a bit (I have a few other things first, say a few months before I'd really get back to this), I'd like to start hacking away at this. Do you think this should stay in ApproxFun or should this (or something based off of it) go to JuliaDiffEq? I'd like to make some solid problem and solution types to match the full interfaces, and then a bunch of "pre-made PDEs" which are discretized via this method. Let me know what you think, but I'd like to at least get push access, and will likely want things like separate docs for it. I think we can really get a good piece of software out of this combo.
I gave you write access to this repo. Let me know if you need access to ApproxFun.jl as well.
I also had a go at getting it to work with `Fun` instead of `Number` by overriding routines:

```julia
RecursiveArrayTools.recursive_one(f::Fun) = ones(space(f))
OrdinaryDiffEq.ODE_DEFAULT_NORM(f::Fun) = OrdinaryDiffEq.ODE_DEFAULT_NORM(f.coefficients)
OrdinaryDiffEq.ODE_DEFAULT_NORM{F<:Fun}(f::Array{F}) =
    OrdinaryDiffEq.ODE_DEFAULT_NORM(map(OrdinaryDiffEq.ODE_DEFAULT_NORM, f))
```

but I got stuck pretty quickly.
> Do you think this should stay in ApproxFun or should this (or something based off of it) go to JuliaDiffEq?
No preference. I think the current plan is to rename the organization JuliaApproximation so that it can be a more general home.
> I also had a go at getting it to work with Fun instead of Number by overriding routines:
Yeah, there are a few things there that will probably need to be worked out:

- Dispatching on `Fun` where I was dispatching on `Number` before (probably changing to traits; see the sketch after this list).
- Getting the `Fun` / `Vector{Fun}` part of the solution to "work right" with interpolations when sizes are changing.
- Letting `abstol` and `reltol` be `Fun`s, which I can see causing problems.

But in the end, making it work this way will probably be much better (for the algorithms that can handle it).
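For what it's worth, the trait idea in the first point might look something like this sketch (hypothetical names and 2016-era syntax, not the actual DiffEq internals):

```julia
# Hypothetical trait distinguishing scalar-like (immutable) states from
# mutable container states, so a Fun could opt in without subtyping Number
abstract StateStyle
immutable ScalarLike  <: StateStyle end
immutable MutableLike <: StateStyle end

statestyle(u::Number)        = ScalarLike()
statestyle(u::AbstractArray) = MutableLike()
# ApproxFun could then opt in with: statestyle(u::Fun) = ScalarLike()
```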
> No preference. I think the current plan is to rename the organization JuliaApproximation so that it can be a more general home.
Then I think what I'd like to do is make an ApproxFunDiffEq.jl in JuliaDiffEq which will be just for making ApproxFun types compatible with DifferentialEquations.jl and the component solvers. From this small amount of playing around, it looks like that work is pretty much done on your side, but I'll have to do some finagling to get the interaction up to speed, so most of the code will probably be DiffEq-specific. I think ApproxFunDiffEq.jl is a good, clear name for the interaction (it matches both naming schemes). And if it's in JuliaDiffEq, you won't get mad when I fill it up with a bunch of really JuliaDiffEq-specific stuff 👍.
I'll have to learn a bit more about ApproxFun to really make this go though, so it'll take some time.
My last question is then: is there anything that your timestepping methods like `ETDRK4` are doing that is "ApproxFun-specific"? Any optimizations which can only be done on `Fun` types which I should know about? I'm not quite sure what you're doing with the `cache` and `chop` operations.
BTW, I tried:

```julia
@time u = solve(prob, radau());
@time u = solve(prob, rk45());
```

as well, so I can confirm this (fixed discretization size) method works with ODE.jl and ODEInterface.jl too through the DifferentialEquations.jl common interface.
The big question is how best to incorporate boundary conditions. Usually I would incorporate them into an implicit linear time step, but I don't see how that fits into JuliaDiffEq.

For fixed discretizations, I suppose it could be accomplished as a DAE...
Aren't they usually imposed by the discretization? I know for FEM/FDM it's done by the discretization of the operators, and for periodic problems it's done by the choice of basis.
For spectral methods they're usually imposed by extra constraints on the differential equation. For example, consider the ODE `u'' + u = f` with `u(±1) = 0`. The operator will have two extra rows corresponding to evaluation:

```julia
S = Chebyshev()
D = Derivative(S)
A = [Evaluation(S,-1);
     Evaluation(S,1);
     D^2 + I]
```
and so the resulting operator has the form:

```
1.0  -1.0   1.0                  -1.0    1.0                  …
1.0   1.0   1.0                   1.0    1.0                  …
1.0   0.0   3.3333333333333335    0.0    0.16666666666666666  …
      0.25  0.0                   5.625  0.0                  ⋱
            0.16666666666666666   0.0    7.733333333333333    ⋱
                                  ⋱      ⋱                    ⋱
```
where the first two rows are boundary rows. The constraints are then imposed on the right-hand side, so one would solve:

```julia
u = A \ [0.; 0.; f]
```

so that `Evaluation(S,-1)*u ≈ 0`, `Evaluation(S,1)*u ≈ 0`, and `(D^2 + I)*u ≈ f`.
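Putting that together as a complete runnable example (choosing `f = Fun(exp, S)` purely for illustration):

```julia
using ApproxFun

S = Chebyshev()
D = Derivative(S)
A = [Evaluation(S,-1);
     Evaluation(S,1);
     D^2 + I]

f = Fun(exp, S)
u = A \ [0.; 0.; f]   # first two RHS entries are the boundary values

u(-1), u(1)           # both ≈ 0: the Dirichlet conditions hold
```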
In the context of PDEs, suppose we solve the heat equation with Dirichlet conditions: u_t = u_xx, u(t,±1) = 0, u(0,x) = u0. If we use backward Euler, we include the extra constraints in the implicit solve:

```julia
h = 0.01
A = [Evaluation(S,-1);
     Evaluation(S,1);
     I - h*D^2]
u[1] = u0                      # u holds the time steps
u[k+1] = A \ [0.; 0.; u[k]]    # each step imposes the boundary values on the RHS
```
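For concreteness, a runnable version of that recurrence might look like this sketch (the initial condition and step count are just illustrative):

```julia
using ApproxFun

S = Chebyshev()
D = Derivative(S)
h = 0.01
A = [Evaluation(S,-1);
     Evaluation(S,1);
     I - h*D^2]

u = Fun(x -> 1 - x^2, S)   # an initial condition satisfying u(±1) = 0
for k = 1:100
    u = A \ [0.; 0.; u]    # one backward Euler step per solve
end
```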
I'm not sure how to fit this into DifferentialEquations.jl
Note that there is some auto-differentiation support implemented for nonlinear solvers, which could be used to suss out the structure from an anonymous function; e.g., the heat equation with Dirichlet conditions could be specified via

```julia
(t,u,du) -> [u(-1); u(1); du - u'']
```
Is this then not a differential-algebraic equation (DAE)? This is supported by the Sundials IDA and the DASSL.jl solvers.
> Is this then not a differential-algebraic equation (DAE)? This is supported by the Sundials IDA and the DASSL.jl solvers.
Yeah, that's probably overkill though, and will limit the methods which can be used.
> we include the extra constraints in the implicit solve
Are you doing this in `timeevolution.jl`? If so, how so? Do you just need to be able to inject a function somewhere?
Yes, I think so, for fixed discretizations. Ideally one could leverage the fact that `\` is O(N), which will be lost in a general DAE solver, I think, though that might be too hard.
In `BDF2` on line 15, the argument `B` is a vector of functional constraints, with `bcs` their values. So in the example above, one could take:

```julia
B = [Evaluation(S,-1);
     Evaluation(S,1)]
bcs = [0.; 0.]
```

`SBDF2` is then the operator including the boundary conditions. `g` contains the nonlinear part, which is handled explicitly using RK4. `glp` was to support live plotting with Reactive.jl.
I'm working on the docs a bit:
> SBDF2 is then the operator including the boundary conditions. g contains the nonlinear part
Wouldn't these two things be what you would use to make the DiffEq `f`? Put the bcs into `A` and then `f = A*u + g`, and solve that `f` using whatever method? (We don't have IMEX methods yet, but when we do you would just give the two parts.) Maybe I need to play with ApproxFun to see why this could/couldn't be the case.
This should be a home for integrating ApproxFun with standard time-stepping codes, e.g.:

- https://github.com/JuliaDiffEq/ODE.jl
- https://github.com/luchr/ODEInterface.jl

This follows up on conversations at TU Munich with @luchr and @al-Khwarizmi, and a Discourse discussion with @ChrisRackauckas: https://discourse.julialang.org/t/pros-cons-of-approxfun/396/10