JeffreySarnoff opened 5 years ago
Thanks! I should definitely use these if they exist elsewhere and have been optimized.
On a related note, what do you think of using double-precision arithmetic to define specialized versions of EFTs for single-precision floats? I tend to write them this way because I expect it to be more efficient than algorithms such as TwoSum or TwoProd, but I have never actually timed both versions to confirm that.
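For reference, the classic self-contained EFT algorithms being compared against look like this (a sketch; `two_sum`/`two_prod` are my names for them, and `two_prod` assumes a hardware `fma`):

```julia
# Knuth's TwoSum: recovers the exact rounding error of a + b in 6 flops,
# for any exponent spread, without widening the precision.
function two_sum(a::T, b::T) where {T<:AbstractFloat}
    hi = a + b
    v  = hi - a
    lo = (a - (hi - v)) + (b - v)
    return hi, lo
end

# FMA-based TwoProd: fma(a, b, -hi) computes a*b - hi with a single rounding,
# so lo is exactly the rounding error of the product.
function two_prod(a::T, b::T) where {T<:AbstractFloat}
    hi = a * b
    lo = fma(a, b, -hi)
    return hi, lo
end
```

With these, `hi + lo` reconstructs the exact sum or product; the question above is whether widening to `Float64` beats them for `Float32` arguments.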
The function definitions I listed above automatically specialize for Float64 and Float32 arguments. Using Float64 to emulate error-free transformations of Float32 could work because a Float64 has more than twice as many significand bits as a Float32 (53 vs. 24). Note that this argument is only airtight for multiplication, where the 48-bit exact product always fits in 53 bits; for addition and subtraction the exact result can need more than 53 bits when the exponents are far apart, and for division the exact quotient is generally not finitely representable, so in those cases `lo` is an approximation of the true rounding error rather than the exact value.
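The bit-count argument can be checked directly:

```julia
# Significand precisions (including the implicit bit).
@assert precision(Float64) == 53
@assert precision(Float32) == 24
# 24 + 24 = 48 <= 53, so the product of two Float32 values is exact in Float64.
@assert 2 * precision(Float32) <= precision(Float64)
```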
```julia
function twosum(a::Float32, b::Float32)
    hilo = Float64(a) + Float64(b)
    hi = Float32(hilo)
    lo = Float32(hilo - hi)
    return hi, lo
end

function twodiff(a::Float32, b::Float32)
    hilo = Float64(a) - Float64(b)
    hi = Float32(hilo)
    lo = Float32(hilo - hi)
    return hi, lo
end

function twoprod(a::Float32, b::Float32)
    hilo = Float64(a) * Float64(b)
    hi = Float32(hilo)
    lo = Float32(hilo - hi)
    return hi, lo
end

function twodiv(a::Float32, b::Float32)
    hilo = Float64(a) / Float64(b)
    hi = Float32(hilo)
    lo = Float32(hilo - hi)
    return hi, lo
end
```
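As a quick sanity check of the widened `twoprod` (restated below so the snippet runs standalone), the returned pair reconstructs the exact product, since the 48-bit exact product of two Float32 values fits in Float64's 53 significand bits:

```julia
# Widened twoprod, as defined above.
function twoprod(a::Float32, b::Float32)
    hilo = Float64(a) * Float64(b)
    hi = Float32(hilo)
    lo = Float32(hilo - hi)
    return hi, lo
end

# The Float64 product of two Float32 values is exact, and the (hi, lo) split
# loses nothing, so hi + lo equals a*b computed exactly.
a, b = Float32(pi), Float32(sqrt(2.0))
hi, lo = twoprod(a, b)
@assert Float64(hi) + Float64(lo) == Float64(a) * Float64(b)
```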
I benchmark these as 10%-12% faster.
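For reproducibility, a rough dependency-free harness along these lines can compare the widened version against Knuth's TwoSum (the function names and the loop below are mine, not from the thread; BenchmarkTools.jl's `@btime` would give more trustworthy numbers):

```julia
# Widened version, as above.
function twosum_wide(a::Float32, b::Float32)
    hilo = Float64(a) + Float64(b)
    hi = Float32(hilo)
    return hi, Float32(hilo - hi)
end

# Classic 6-flop TwoSum, no widening.
function twosum_knuth(a::Float32, b::Float32)
    hi = a + b
    v  = hi - a
    lo = (a - (hi - v)) + (b - v)
    return hi, lo
end

# Time f over paired inputs; accumulate the results so the loop is not optimized away.
function bench(f, xs, ys)
    s = 0.0f0
    t = time_ns()
    @inbounds for i in eachindex(xs, ys)
        hi, lo = f(xs[i], ys[i])
        s += hi + lo
    end
    return (time_ns() - t) / length(xs), s
end

xs, ys = rand(Float32, 10^6), rand(Float32, 10^6)
bench(twosum_wide, xs, ys); bench(twosum_knuth, xs, ys)  # warm-up / JIT compile
t_wide,  _ = bench(twosum_wide, xs, ys)
t_knuth, _ = bench(twosum_knuth, xs, ys)
println("wide: $(t_wide) ns/op, knuth: $(t_knuth) ns/op")
```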
I have spent a good amount of time optimizing error-free transformations in Julia; see in particular https://github.com/JeffreySarnoff/ErrorfreeArithmetic.jl.