Open zhaofeng-shu33 opened 2 years ago
For example, there is a treatment to shrink the new step when the last try is rejected in Hairer's implementation, which he did not explain in his textbook.
No, the treatment to shrink the time step is here, which corresponds to the step reject controller. But you are pointing out something interesting: instead, that's a stabilizer, where if the previous step is a rejection, then the next step takes the minimum of the new step and the previous one upon accept. Though that's a weird way to do it. Indeed we aren't doing that piece, but we could add it. @YingboMa thoughts?
We do have tests that things match Hairer's implementation up to floating point error, though there is a drift that can happen on some very large problems and this might explain that. It might finally be the answer to one of the oldest issues in the repo, https://github.com/SciML/OrdinaryDiffEq.jl/issues/64
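For reference, the stabilizer described above (on the first accepted step after a rejection, cap the newly proposed step at the current one, as in Hairer's DOPRI5 Fortran code) could be sketched roughly like this. The function name, defaults, and clipping bounds are illustrative, not OrdinaryDiffEq internals:

```python
def next_dt(dt, EEst, order, last_rejected,
            gamma=0.9, qmin=0.2, qmax=10.0):
    """I-controller step-size update with a Hairer-style rejection stabilizer.

    Hypothetical sketch: names and defaults are illustrative, not
    OrdinaryDiffEq internals.
    """
    # Standard I-controller factor gamma * EEst^(-1/(order+1)),
    # clipped to [qmin, qmax].
    q = min(qmax, max(qmin, gamma * EEst ** (-1.0 / (order + 1))))
    dt_new = dt * q
    accept = EEst <= 1.0
    if accept and last_rejected:
        # Stabilizer: right after a rejection, never grow the step.
        dt_new = min(dt_new, dt)
    return dt_new, accept

# After a rejection, the step is capped at the current dt on accept:
print(next_dt(0.1, 0.5, 4, last_rejected=True))
print(next_dt(0.1, 0.5, 4, last_rejected=False))
```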
`scipy.integrate.solve_ivp` makes this feature compulsory for `DP5`, `BS3`, and `DP853`. I made a PR on this issue.
By the way, I could not get exactly the same solution results for `BS3` or `DP5` with `IController` between the Python implementation and this Julia package, even when I used the PR mentioned above. What is the cause of this difference? Is it related to the slightly different implementation of the local error estimation?
I found that the accuracy defect comes from OrdinaryDiffEq. The critical line is here. In the `IController`, the power operation is computed using Float32 numbers. Though it could speed up the computation, I think it produces a noticeable numerical difference compared with other packages. By changing this line to use the normal power `EEst ^ expo`, I can obtain almost the same numerical results as `scipy.integrate.solve_ivp` for `BS3` and `DP5`.
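To give a sense of the scale of this difference, the snippet below compares the controller power computed in double vs. single precision, with NumPy's `float32` standing in for Julia's `Float32`; the values of `EEst` and `expo` are illustrative:

```python
import numpy as np

EEst = 0.7        # illustrative error estimate
expo = 1.0 / 5.0  # illustrative I-controller exponent for a 4th-order method

# Double-precision power, as in the normal EEst ^ expo
exact = EEst ** expo

# Single-precision power, standing in for the Float32 fast path
fast = float(np.float32(EEst) ** np.float32(expo))

rel = abs(fast - exact) / exact
print(f"float64 pow: {exact:.17g}")
print(f"float32 pow: {fast:.17g}")
print(f"relative difference: {rel:.3g}")
```

The relative difference sits roughly at the single-precision rounding level, far below the tolerances the controller works with, but it does perturb every subsequent step size.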
> I found that the accuracy defects […] I can obtain almost the same numerical results with `scipy.integrate.solve_ivp` for `BS3` and `DP5`.
That's not a measure of accuracy, that's a measure of same-ness. If you check the accuracy, the solvers are fine.
We could add an option to swap back the floating point pow implementation in the controllers, though using a same-accuracy default which is faster is fine. IIRC, it's a 3ulp version, so the floating point error of the solver itself is much larger than the `^` error, and the `^` computation feeds a heuristic which is not even close to 3ulp accurate anyway. So that's what I am saying about accuracy: if you check the actual work-precision accuracy, it does not affect the solver's ability to produce accurate solutions, but yes, it will give a floating point difference in the low-end values of the stepping behavior (which can grow over time, but washes out in terms of an actual difference).
> I found that the accuracy defects […] I can obtain almost the same numerical results with `scipy.integrate.solve_ivp` for `BS3` and `DP5`.

> That's not a measure of accuracy, that's a measure of same-ness. If you check the accuracy, the solvers are fine.

Yes, it is the sameness.
> We could add an option to swap back the floating point pow implementation in the controllers, though using a same-accuracy default which is faster is fine. IIRC, it's a 3ulp version, so the floating point error of the solver itself is much larger than the `^` error, and the `^` computation is a heuristic which is not even close to 3ulp accurate anyways. So that's what I am saying about accuracy: if you check the actual work-precision accuracy it does not affect the solver's ability to produce accurate solutions, but yes it will give a floating point difference in the low end values of the stepping behavior (which can grow over time, but washes out in terms of an actual difference)
It's indeed a good acceleration technique for ODE solvers using the Float64 type. Though it does not impact the final accuracy, it makes some noticeable difference in the intermediate values. For example, the time points produced differ at a level of about 1e-5 in this comparison.
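The "washes out" point can be made concrete with a small self-contained experiment: a toy Bogacki–Shampine 3(2) integrator with an I-controller (an illustrative sketch, not OrdinaryDiffEq's code), run once with the controller power in double precision and once in single precision. The accepted time grids can drift apart slightly, while both final values stay accurate:

```python
import numpy as np

def bs3_solve(t0, tf, y0, rtol=1e-6, atol=1e-9, pow32=False):
    """Toy adaptive Bogacki-Shampine 3(2) integrator with an I-controller.

    Illustrative sketch only, not OrdinaryDiffEq's implementation.
    When pow32=True, the controller power EEst^expo is evaluated in
    single precision, mimicking the Float32 fast path discussed above.
    """
    f = lambda t, y: -y                      # test problem y' = -y
    t, y, dt = t0, y0, 1e-3
    ts = [t]
    while t < tf:
        dt = min(dt, tf - t)
        k1 = f(t, y)
        k2 = f(t + 0.5 * dt, y + 0.5 * dt * k1)
        k3 = f(t + 0.75 * dt, y + 0.75 * dt * k2)
        y_new = y + dt * (2/9 * k1 + 1/3 * k2 + 4/9 * k3)
        k4 = f(t + dt, y_new)
        # Embedded 2nd-order solution gives the local error estimate.
        y_hat = y + dt * (7/24 * k1 + 1/4 * k2 + 1/3 * k3 + 1/8 * k4)
        sc = atol + rtol * max(abs(y), abs(y_new))
        EEst = max(abs(y_new - y_hat) / sc, 1e-16)
        expo = 1/3                           # 1/(estimator order + 1)
        if pow32:
            p = float(np.float32(EEst) ** np.float32(expo))
        else:
            p = EEst ** expo
        fac = min(5.0, max(0.2, 0.9 / p))
        if EEst <= 1.0:                      # accept the step
            t, y = t + dt, y_new
            ts.append(t)
        dt *= fac
    return ts, y

ts64, y64 = bs3_solve(0.0, 10.0, 1.0)
ts32, y32 = bs3_solve(0.0, 10.0, 1.0, pow32=True)
exact = np.exp(-10.0)
print("steps:", len(ts64), len(ts32))
print("final error, float64 pow:", abs(y64 - exact))
print("final error, float32 pow:", abs(y32 - exact))
```

Both runs land well within tolerance of exp(-10); only the individual accepted time points (and possibly their count) differ.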
From my comparison, I think the solution results of `DP5` and `DP8` produced with OrdinaryDiffEq.jl differ from the original Fortran code of Hairer. For example, there is a treatment to shrink the new step when the last try is rejected in Hairer's implementation, which he did not explain in his textbook. I could not find the corresponding line of code in this package. If some experimental justification of my conjecture is needed, I think the following Julia code could be used. I wonder how this difference is evaluated.
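A comparison along those lines can also be sketched with a toy integrator (illustrative Python, not the Julia snippet referred to above and not either real implementation): run the same adaptive Bogacki–Shampine 3(2) loop with the rejected-step treatment switched on and off, and compare the resulting time grids and final values:

```python
import math

def integrate(stabilize, rtol=1e-6, atol=1e-9, tf=10.0):
    """Toy Bogacki-Shampine 3(2) run with or without the Hairer-style
    rejected-step stabilizer; an illustrative sketch only."""
    f = lambda t, y: math.cos(t) * y         # exact solution: exp(sin t)
    t, y, dt = 0.0, 1.0, 1e-2
    rejected = False
    ts = [t]
    while t < tf:
        dt = min(dt, tf - t)
        k1 = f(t, y)
        k2 = f(t + 0.5 * dt, y + 0.5 * dt * k1)
        k3 = f(t + 0.75 * dt, y + 0.75 * dt * k2)
        y_new = y + dt * (2/9 * k1 + 1/3 * k2 + 4/9 * k3)
        k4 = f(t + dt, y_new)
        y_hat = y + dt * (7/24 * k1 + 1/4 * k2 + 1/3 * k3 + 1/8 * k4)
        sc = atol + rtol * max(abs(y), abs(y_new))
        EEst = max(abs(y_new - y_hat) / sc, 1e-16)
        dt_new = dt * min(5.0, max(0.2, 0.9 * EEst ** (-1/3)))
        if EEst <= 1.0:
            if stabilize and rejected:
                dt_new = min(dt_new, dt)     # the treatment in question
            t, y = t + dt, y_new
            ts.append(t)
            rejected = False
        else:
            rejected = True
        dt = dt_new
    return ts, y

ts_on, y_on = integrate(stabilize=True)
ts_off, y_off = integrate(stabilize=False)
print("steps with/without stabilizer:", len(ts_on), len(ts_off))
print("final values:", y_on, y_off, "exact:", math.exp(math.sin(10.0)))
```

Comparing `ts_on` against `ts_off` (or either against the Fortran code's accepted time points) would show whether the treatment changes the stepping sequence while leaving the final accuracy intact.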