SciML / DiffEqGPU.jl

GPU-acceleration routines for DifferentialEquations.jl and the broader SciML scientific machine learning ecosystem
https://docs.sciml.ai/DiffEqGPU/stable/
MIT License

Fix automated conversion in adaptive solve #317

Open utkarsh530 opened 7 months ago

utkarsh530 commented 7 months ago

@ChrisRackauckas, I just wanted your comment here. Currently, StepRangeLen promotes its ref type to FP64, as done here: https://github.com/JuliaLang/julia/blob/2fb06a7c25fa2b770a8f6e8a45fec48c002268e4/base/twiceprecision.jl#L369

From the comment in that source file itself:

# Necessary for creating nicely-behaved ranges like r = 0.1:0.1:0.3
# that return r[3] == 0.3.  Otherwise, we have roundoff error due to
#     0.1 + 2*0.1 = 0.30000000000000004

So by default it creates FP64 types inside the StepRangeLen, like this:

julia> saveat = 0.1f0:0.1f0:10.0f0
0.1f0:0.1f0:10.0f0

julia> typeof(saveat)
StepRangeLen{Float32, Float64, Float64, Int64}

This causes issues with backends that do not fully support double precision (Apple Metal, Intel oneAPI).
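
For example, a user can already work around this by collecting the range into a plain vector before passing it as saveat; the values are still computed with the FP64-backed range machinery on the host and only then rounded to Float32, so they match the range element-for-element:

julia> saveat = 0.1f0:0.1f0:10.0f0;

julia> saveat_vec = collect(saveat);  # host-side collection; only Float32 values remain

julia> typeof(saveat_vec)
Vector{Float32} (alias for Array{Float32, 1})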

The current PR explicitly constructs the StepRangeLen entirely from the types of the range argument, so the whole range stays FP32 when its endpoints and step are FP32. The tests fail because they still expect the FP64-backed ranges, which produce slightly different values due to round-off error. What should we do in this case: update the tests, or remove our explicit cast and instead give warnings when saveat is passed as a range on backends with limited double-precision support?
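
For concreteness, a minimal sketch of the kind of explicit construction described above (not the exact PR code) would force every type parameter to the range's element type:

julia> r = 0.1f0:0.1f0:10.0f0;

julia> T = eltype(r);  # Float32

julia> r32 = StepRangeLen{T, T, T}(T(first(r)), T(step(r)), length(r));

julia> typeof(r32)
StepRangeLen{Float32, Float32, Float32, Int64}

With only Float32 arithmetic backing the range, indexing it reintroduces exactly the round-off the TwicePrecision machinery avoids, which is why values near the endpoints differ from the FP64-backed range and the tests fail.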

ChrisRackauckas commented 7 months ago

Give nice errors when saveat is used as a range on backends with limited double-precision support.
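
A rough sketch of what such a check could look like, where backend_supports_fp64 and check_saveat_precision are illustrative placeholders rather than existing DiffEqGPU functions:

# Illustrative sketch only: `backend_supports_fp64` is a hypothetical predicate,
# not part of the DiffEqGPU API.
function check_saveat_precision(backend, saveat)
    # Catch the case shown above: a range whose internal ref/step types are Float64.
    if saveat isa StepRangeLen{<:AbstractFloat, Float64} && !backend_supports_fp64(backend)
        error("`saveat` was passed as a range whose internal reference and step are Float64, " *
              "but $(backend) does not support double precision. " *
              "Pass `saveat` as a vector instead, e.g. `collect(Float32, saveat)`.")
    end
    return saveat
end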