Closed by Theozeud 3 months ago
Show `plot(sim)` on the convergence simulation. It has a standard plot which can be quite helpful.
I get this for the ODE u'(t) = cos(t) with u0 = 0.0. So I suppose there is a small mistake somewhere.
No you're just at the barrier of what Float64 can compute. Bump up dt a bit and see, or change to the BigFloat version of the problem.
Ok, I see. Now I get this:
Is there a better way than putting `BigFloat` everywhere?
```julia
f = (du, u, p, t) -> du[1] = BigFloat(cos(BigFloat(t)))
u0 = [BigFloat(0.0)]
tspan = (0.0, 10.0)
prob_ode_sin = ODEProblem(ODEFunction(f; analytic = (u0, p, t) -> [BigFloat(sin(BigFloat(t)))]), u0,
    tspan)
alg = QPRK98()
dts = BigFloat.(1 .// 2 .^ (12:-1:1))
sim = test_convergence(dts, prob_ode_sin, alg)
plot(sim)
```
`prob_ode_bigfloat2Dlinear` is the one set up to do that.
You still had time as `Float64` in `tspan = (0.0, 10.0)`. Set that to `big.((0.0, 10.0))`.
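To see why the conversion matters: wrapping a value that was already rounded to `Float64` in `BigFloat` cannot recover the lost digits, while `big.(...)` on exactly representable endpoints like `0.0` and `10.0` is harmless. A minimal stand-alone sketch:

```julia
# Converting an already-rounded Float64 does not recover precision:
x64  = 1/3                    # rounded to ~16 digits in Float64
xbig = big(1)/3               # true 1/3 at BigFloat precision
err  = abs(big(x64) - xbig)   # the Float64 rounding error survives, ~1e-17

# 0.0 and 10.0 are exactly representable, so big.(...) is safe here:
tspan = big.((0.0, 10.0))
println(typeof(tspan))        # Tuple{BigFloat, BigFloat}
```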
```julia
julia> eps(BigFloat)
1.727233711018888925077270372560079914223200072887256277004740694033718360632485e-77
```
If all is right it should get about 70 digits before it stalls. This looks pretty good, but I would guess that you might have something in a later digit that's off.
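A quick sanity check of that digit count, assuming the default 256-bit `BigFloat` precision:

```julia
# Reliable decimal digits at the default 256-bit BigFloat precision:
decimal_digits = -log10(eps(BigFloat))
println(round(decimal_digits; digits = 1))   # ≈ 76.8 digits in principle
```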
Use https://github.com/SciML/DiffEqDevTools.jl/blob/master/src/ode_tableaus.jl#L1-L9 to get the tableau out (make it `BigFloat`) and check the accuracy of the first- and second-order conditions, i.e. `sum(b) = 1` and `sum_j a_ij = c_i`.
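As a sketch of what that check looks like, here it is on the classic RK4 tableau as a stand-in (the QPRK98 coefficients are only in the paper), built from rationals so the `BigFloat` conversion is as exact as possible:

```julia
# Order-condition sanity check on a Butcher tableau (classic RK4 stand-in).
A = BigFloat.([0 0 0 0; 1//2 0 0 0; 0 1//2 0 0; 0 0 1 0])
b = BigFloat.([1//6, 1//3, 1//3, 1//6])
c = BigFloat.([0, 1//2, 1//2, 1])

# First-order condition: sum(b) == 1 (defect should be ~eps(BigFloat) or 0)
@show abs(sum(b) - 1)
# Row-sum (consistency) condition: sum_j a_ij == c_i for every stage i
@show maximum(abs.(sum(A; dims = 2) .- c))
```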
Ok, thanks for the details. Using the `deduce_Butcher_tableau` function, I found that the conditions are not satisfied, with an error on the order of the stall (1e-33 to 1e-34). I checked the `b` coefficients, for example, and they match those written in the paper and in the links given therein, so I am not making a transcription mistake.
> if all is right it should get about 70 digits before it stalls. This looks pretty good but I would guess that you might have something in a later digit that's off.
Isn't it normal not to have better precision since the paper gives a method for quadruple precision? If we want better precision, I guess we should recompute the coefficients with better precision.
Ahh, I saw that the coefficients were rational, so I thought they rationalized to exact values.
```julia
julia> using Quadmath

julia> eps(Float128)
1.92592994438723585305597794258492732e-34
```
This looks pretty spot on then!
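For reference, that epsilon can be reproduced without Quadmath: IEEE binary128 (quadruple precision) carries a 113-bit significand, so its machine epsilon is 2^-112, right where the error stalls.

```julia
# Machine epsilon of IEEE binary128, computed directly:
quad_eps = big(2.0)^-112
println(Float64(quad_eps))   # ≈ 1.9259299443872359e-34
```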
Remove the lowest `dt`s where it's saturated to fix the convergence test.
Given that these will be a bit more bespoke to run at higher precision, I think it's fine to keep them in a separate file. But you need to add a line in https://github.com/Theozeud/OrdinaryDiffEq.jl/blob/Quadruple_precision_RK/test/runtests.jl#L147 so they actually run.
Would it be easy to tweak the coefficients so it's at least first order for `BigFloat`?
You'd need to do it for all conditions. I think it's fine to do the paper as-is.
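As an illustration of what fixing even a single condition would involve (hypothetical weights, not the QPRK98 ones), the defect in `sum(b)` can be absorbed into one weight; a dyadic perturbation is used below so every operation is exact in 256-bit `BigFloat`:

```julia
# Hypothetical weights whose sum is off by 2^-130 ≈ 7.35e-40,
# mimicking published coefficients truncated at quadruple precision.
b = [big"0.25", big"0.25", big"0.25", big"0.25" + big(2.0)^-130]
defect = sum(b) - 1          # exactly 2^-130 here
b[end] -= defect             # absorb the defect into the last weight
println(sum(b) == 1)         # first-order condition now holds exactly
```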
Just run the formatter.
https://github.com/SciML/SciMLStyle?tab=readme-ov-file#juliaformatter
Done
Codecov is being annoying, but diving in, it looks like there aren't any regressions here, so done.
Additional context
I added the Runge-Kutta method from https://doi.org/10.1007/s11075-023-01632-8 . The method seems to work, but I have not yet succeeded in getting a test file to work with good results. I need to investigate this further. The nomenclature of the algorithm is not necessarily right and the functions are not in the right place, as I do not know the whole genealogy of the methods.