dzhang314 / MultiFloats.jl

Fast, SIMD-accelerated extended-precision arithmetic for Julia
MIT License

Suboptimal performance of multiplication by an integer #34

Open nsajko opened 1 year ago

nsajko commented 1 year ago

Just a little note on performance:

using BenchmarkTools, MultiFloats

# Apply `binop(e, f)` to each element of `v` in place.
f(v::Vector{<:Real}, binop::F, f::Real) where {F <: Function} =
  map!(
    let f = f  # explicitly capture `f` in the closure
      e -> binop(e, f)
    end,
    v,
    v)

@benchmark f(v, *, Float64x4(3)) setup=(v = rand(Float64x4, 2^14))  # median time: 260.556 μs

@benchmark f(v, *, 3.0) setup=(v = rand(Float64x4, 2^14))  # median time: 169.787 μs

@benchmark f(v, *, 0x3) setup=(v = rand(Float64x4, 2^14))  # median time: 250.470 μs

@benchmark f(v, *, Float32(3)) setup=(v = rand(Float64x4, 2^14))  # median time: 249.599 μs

Comparing the first and second benchmarks, it seems that multiplication with Float64 is special-cased, leading to very good performance.

The third and fourth benchmarks are disappointing, though. I guess the UInt8 and Float32 are first converted to Float64x4 before the multiplication?

Perhaps converting to Float64 instead of to Float64x4 would lead to better performance? Or maybe generating special cases for multiplication with small integers or with Float32?
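One quick way to check this hypothesis is to ask which method each mixed-type product dispatches to (a sketch; assumes MultiFloats.jl is installed, and that the non-Float64 cases fall through to Base's generic promoting method):

```julia
# Inspect the dispatch target for each mixed-type multiplication.
# Assumes MultiFloats.jl is installed; `Float64x4` is its 4-limb type.
using MultiFloats

x = Float64x4(3)
println(@which x * 3.0)  # should print a MultiFloats-specific method
println(@which x * 0x3)  # likely Base's promoting fallback *(::Number, ::Number)
println(promote_type(Float64x4, UInt8))  # the type the UInt8 would be widened to
```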

dzhang314 commented 1 year ago

Hey @nsajko, you're exactly right! The only special-cased mixed-type arithmetic operation in MultiFloats.jl is Float64xN * Float64; everything else is handled by Julia's numeric promotion system.

I can certainly add special cases for types like (U)Int8, ..., (U)Int32 which can be losslessly converted to Float64, but I wonder if there's a better way to handle this within Julia's promotion system. Will look into this the next time I work on MultiFloats.jl.