SciML / OrdinaryDiffEq.jl

High performance ordinary differential equation (ODE) and differential-algebraic equation (DAE) solvers, including neural ordinary differential equations (neural ODEs) and scientific machine learning (SciML)
https://diffeq.sciml.ai/latest/

Added Runge–Kutta pairs of orders 9(8) for use in quadruple precision computations #2166

Closed: Theozeud closed this 3 months ago

Theozeud commented 3 months ago


Additional context

I added the Runge–Kutta method from https://doi.org/10.1007/s11075-023-01632-8 . The method seems to work, but I have not yet managed to get a test file producing good results; I need to investigate this further. The naming of the algorithm may not be right and the functions may not be in the right place, as I do not know the full genealogy of the methods.

#2159

ChrisRackauckas commented 3 months ago

Show plot(sim) on the convergence simulation. It has a standard plot which can be quite helpful.

Theozeud commented 3 months ago

> Show plot(sim) on the convergence simulation. It has a standard plot which can be quite helpful.

I get this for the ODE u'(t) = cos(t) with u0 = 0.0:

[convergence plot from plot(sim)]

So I suppose there is a small mistake somewhere

ChrisRackauckas commented 3 months ago

No, you're just at the barrier of what Float64 can compute. Bump up dt a bit and see, or change to the BigFloat version of the problem.

Theozeud commented 3 months ago

> No, you're just at the barrier of what Float64 can compute. Bump up dt a bit and see, or change to the BigFloat version of the problem.

OK, I see. Now I get this:

[convergence plot]

Is there a better way than putting BigFloat everywhere?

```julia
using OrdinaryDiffEq, DiffEqDevTools, Plots

f = (du, u, p, t) -> du[1] = BigFloat(cos(BigFloat(t)))
u0 = [BigFloat(0.0)]
tspan = (0.0, 10.0)
prob_ode_sin = ODEProblem(
    ODEFunction(f; analytic = (u0, p, t) -> [BigFloat(sin(BigFloat(t)))]),
    u0, tspan)
alg = QPRK98()
dts = BigFloat.(1 .// 2 .^ (12:-1:1))
sim = test_convergence(dts, prob_ode_sin, alg)
plot(sim)
```

ChrisRackauckas commented 3 months ago

`prob_ode_bigfloat2Dlinear` is the one set up to do that.

`tspan = (0.0, 10.0)`: you still had time as Float64. Set that to `big.((0.0, 10.0))`.
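(For concreteness, a minimal sketch of the corrected setup, using the same packages as the snippet above; only u0 and tspan need to be BigFloat, since cos(t) and sin(t) then promote automatically.)

```julia
using OrdinaryDiffEq, DiffEqDevTools, Plots

# Only u0 and tspan need to be BigFloat; cos(t)/sin(t) promote automatically.
f = (du, u, p, t) -> du[1] = cos(t)
u0 = [big"0.0"]
tspan = big.((0.0, 10.0))               # time in BigFloat as well, not Float64

prob_ode_sin = ODEProblem(
    ODEFunction(f; analytic = (u0, p, t) -> [sin(t)]),
    u0, tspan)

dts = BigFloat.(1 .// 2 .^ (12:-1:1))
sim = test_convergence(dts, prob_ode_sin, QPRK98())
plot(sim)
```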

```julia
julia> eps(BigFloat)
1.727233711018888925077270372560079914223200072887256277004740694033718360632485e-77
```

If all is right, it should get about 70 digits before it stalls. This looks pretty good, but I would guess that you might have something in a later digit that's off.

ChrisRackauckas commented 3 months ago

Use https://github.com/SciML/DiffEqDevTools.jl/blob/master/src/ode_tableaus.jl#L1-L9 to get the tableau out (make it BigFloat) and check the accuracy of the first- and second-order conditions, i.e. sum(b) = 1 and sum_j(a_ij) = c_i for each i.
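(A minimal sketch of that check, assuming the tableau has already been extracted into a BigFloat matrix A and vectors b and c; the variable names are illustrative, not the ones used in DiffEqDevTools.)

```julia
# A, b, c: Butcher tableau coefficients, already converted to BigFloat.
# Weight condition: the b weights must sum to one.
err_b = abs(sum(b) - 1)

# Node condition: each row sum of A must equal the corresponding entry of c.
err_c = maximum(abs.(vec(sum(A, dims = 2)) .- c))

# Both should be on the order of eps(BigFloat) if the coefficients are exact.
@show err_b err_c
```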

Theozeud commented 3 months ago

Ok, thanks for the details. Using the deduce_Butcher_tableau function, I found that the conditions do not hold exactly: the error is of the same magnitude as the stall (1e-33 to 1e-34). I checked the b coefficients, for example, and found no transcription mistakes compared with those written in the paper or the links given therein.

Theozeud commented 3 months ago

> If all is right, it should get about 70 digits before it stalls. This looks pretty good, but I would guess that you might have something in a later digit that's off.

Isn't it normal not to have better precision since the paper gives a method for quadruple precision? If we want better precision, I guess we should recompute the coefficients with better precision.

ChrisRackauckas commented 3 months ago

> Isn't it normal not to have better precision since the paper gives a method for quadruple precision? If we want better precision, I guess we should recompute the coefficients with better precision.

Ahh, I saw that the coefficients were rational, so I thought they rationalized to exact values.

```julia
julia> using Quadmath

julia> eps(Float128)
1.92592994438723585305597794258492732e-34
```

This looks pretty spot on then!

Remove the lowest dts, where the error is saturated, to fix the convergence test.
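(For example, something along these lines, reusing the convergence setup sketched earlier; the exact cut-off is a judgment call based on where the error plateaus.)

```julia
# Keep only step sizes above the plateau at ~eps(Float128) ≈ 2e-34,
# so the fitted convergence order is not polluted by the saturated points.
dts = BigFloat.(1 .// 2 .^ (9:-1:1))    # illustrative range; was (12:-1:1)
sim = test_convergence(dts, prob_ode_sin, QPRK98())
```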

ChrisRackauckas commented 3 months ago

Given that these will be a bit more bespoke to run at higher precision, I think it's fine to keep them in a separate file. But you need to add a line in https://github.com/Theozeud/OrdinaryDiffEq.jl/blob/Quadruple_precision_RK/test/runtests.jl#L147 so they actually run.
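(Roughly the kind of entry that gets added there, following the @time @safetestset pattern used elsewhere in runtests.jl; the test-set name and file path below are placeholders, not necessarily what this PR uses.)

```julia
# Hypothetical entry in test/runtests.jl; the name and path are placeholders.
@time @safetestset "Quadruple Precision Runge-Kutta Tests" begin
    include("algconvergence/ode_quadruple_precision_tests.jl")
end
```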

oscardssmith commented 3 months ago

Would it be easy to tweak the coefficients so that at least the first-order condition holds for BigFloat?

ChrisRackauckas commented 3 months ago

You'd need to do it for all conditions. I think it's fine to do the paper as-is.

ChrisRackauckas commented 3 months ago

Just run the formatter.

https://github.com/SciML/SciMLStyle?tab=readme-ov-file#juliaformatter
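(Concretely, that amounts to something like the following from the repository root, assuming the repo's .JuliaFormatter.toml already selects the SciML style.)

```julia
using JuliaFormatter
format(".")   # picks up .JuliaFormatter.toml, which sets style = "sciml"
```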

Theozeud commented 3 months ago

> Just run the formatter.
>
> https://github.com/SciML/SciMLStyle?tab=readme-ov-file#juliaformatter

Done

ChrisRackauckas commented 3 months ago

Codecov is being annoying, but diving in, it looks like there aren't any regressions here, so done.