dzhang314 / MultiFloats.jl

Fast, SIMD-accelerated extended-precision arithmetic for Julia
MIT License

Conversion to/from other extended-precision types #5

Closed · GregPlowman closed 4 years ago

GregPlowman commented 4 years ago

This is really a question rather than an issue. I hope that's OK.

I would like to convert between various extended-precision types. Are these conversions for MultiFloats correct and idiomatic?

using DoubleFloats
using MultiFloats
using Quadmath

# Double64 already stores (hi, lo) components; add them into a Float64x2.
@inline MultiFloat{Float64,2}(x::DoubleFloats.Double64) = Float64x2(x.hi) + x.lo

# Split a Float128 into its nearest Float64 and the rounding residual.
@inline function MultiFloat{Float64,2}(x::Quadmath.Float128)
    hi = Float64(x)
    lo = Float64(x - hi)
    return Float64x2(hi) + lo
end

# Reassemble a Float128 from the two limbs of a Float64x2.
@inline Quadmath.Float128(x::Float64x2) = Quadmath.Float128(x._limbs[1]) + x._limbs[2]

function test(x)
    d = Double64(x)
    m = Float64x2(x)
    q = Float128(x)
    b = BigFloat(x)
    return d, m, q, b
end

test(Double64(1.0))
test(Float64x2(2.0))
test(Float128(3.0))
test(BigFloat(4.0))
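
As a rough sanity check, I also round-trip each type through a high-precision BigFloat; the 1e-30 tolerance below is just a loose bound, a couple of orders of magnitude above Float64x2's ~1e-32 unit roundoff:

# Loose round-trip check: convert BigFloat -> T -> BigFloat and bound the error.
setprecision(256) do
    x = 1 / BigFloat(3)
    for T in (Double64, Float64x2, Float128)
        err = abs(BigFloat(T(x)) - x)
        @assert err < 1e-30 "round-trip through $T lost too much precision"
    end
end
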
dzhang314 commented 4 years ago

Hi @GregPlowman, thanks for your continued interest in my project! Yes, those conversion functions look totally fine. In fact, my Float64x2 representation should be just about bit-for-bit identical to DoubleFloats.Double64 (modulo some minor rounding/normalization differences), so you should be able to write

@inline MultiFloat{Float64,2}(x::DoubleFloats.Double64) = Float64x2((x.hi, x.lo))

to avoid the minor overhead of performing an addition.
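
The same limb-level idea should work in the other direction too; the sketch below assumes Double64 exposes a two-argument (hi, lo) constructor, which is worth double-checking against DoubleFloats:

# Sketch of the reverse conversion via the limbs, assuming Double64
# accepts (hi, lo) components directly.
@inline DoubleFloats.Double64(x::Float64x2) =
    DoubleFloats.Double64(x._limbs[1], x._limbs[2])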

Are you perhaps interested in performing some benchmarks between MultiFloats.jl and other extended-precision Julia packages? If so, I would be happy to see the results and learn how MultiFloats.jl could be further improved.
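
For example, a minimal timing sketch with BenchmarkTools could look like the following; the sumsq kernel and vector length are placeholders rather than a definitive benchmark:

using BenchmarkTools
using DoubleFloats, MultiFloats, Quadmath

# Simple compute-bound reduction kernel.
function sumsq(v)
    s = zero(eltype(v))
    for x in v
        s += x * x
    end
    return s
end

# Time the same kernel over each extended-precision element type.
for T in (Float64x2, Double64, Float128)
    v = T.(rand(1024))
    print(rpad(string(T), 32))
    @btime sumsq($v)
end
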

GregPlowman commented 4 years ago

Thanks David.