```julia
# Necessary for creating nicely-behaved ranges like r = 0.1:0.1:0.3
# that return r[3] == 0.3. Otherwise, we have roundoff error due to
# 0.1 + 2*0.1 = 0.30000000000000004
```
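The roundoff that comment refers to is easy to reproduce: naive Float64 accumulation drifts, while the twice-precision range lands exactly on `0.3`:

```julia
# Naive Float64 accumulation: 0.1 is not exactly representable in
# binary, so stepping twice drifts off 0.3.
println(0.1 + 2 * 0.1)          # 0.30000000000000004
println(0.1 + 2 * 0.1 == 0.3)   # false

# The twice-precision StepRangeLen corrects for this:
r = 0.1:0.1:0.3
println(r[3])                   # 0.3
println(r[3] == 0.3)            # true
```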
@ChrisRackauckas, I just wanted your comment here. Currently `StepRangeLen` promotes the `ref` type in it to FP64, as done here: https://github.com/JuliaLang/julia/blob/2fb06a7c25fa2b770a8f6e8a45fec48c002268e4/base/twiceprecision.jl#L369. The comment quoted at the top is from that code file itself.
So by default it creates FP64 types inside the `StepRangeLen`. This causes issues with backends that do not fully support double precision (Apple, Intel).
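For illustration (the exact type parameters can vary by Julia version), a range built from Float32 endpoints still carries Float64 `ref`/`step` fields internally:

```julia
# A Float32 range: the element type is Float32, but the internal
# high-precision ref/step types are promoted to Float64.
r = 0.1f0:0.1f0:0.3f0

println(typeof(r))   # StepRangeLen{Float32, Float64, Float64, ...}
println(eltype(r))   # Float32
```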
The current PR explicitly constructs the `StepRangeLen` entirely from the types of the range argument, so the internal types stay FP32 when the range endpoints are FP32. The tests fail because they still expect FP64 internals, which generate slightly different values within the range due to roundoff. What should we do in this case: update the tests, remove our explicit cast, or emit a warning when `saveat` is passed as a range on backends with limited double-precision support?
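As a hypothetical sketch of the kind of construction the PR does (the actual PR code may differ), one can build the `StepRangeLen` directly from the FP32 arguments so no Float64 types appear:

```julia
# Hypothetical sketch: construct the StepRangeLen directly from
# Float32 ref/step/len instead of letting the colon syntax promote
# the internals to Float64.
r32 = StepRangeLen(0.1f0, 0.1f0, 3)

println(typeof(r32))   # all type parameters are Float32
println(eltype(r32))   # Float32
```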