JuliaLang / julia

The Julia Programming Language
https://julialang.org/

Allow `@simd` to use vectorised math operations when available #15265

Open · simonbyrne opened this issue 8 years ago

simonbyrne commented 8 years ago

It would be nice if we could write something like

function foo(X)
    u = zero(eltype(X))
    @simd for x in X
        u += log(x)
    end
    u
end

and have it automatically use a vector math function such as _mm_log_ps from MKL VML, or its Accelerate or Yeppp equivalents (we do have packages for those libraries, but they only expose the direct array operations, which require memory allocation).

It seems that LLVM has some optimisations for Accelerate, such that the following works on master on a Mac:

function llog(x::Float32)
    Base.llvmcall(("declare float @llvm.log.f32(float  %Val)",
                   """%2 = call float @llvm.log.f32(float %0)
                   ret float %2"""),
                  Float32,Tuple{Float32},x)
end

function foo(X)
    u = zero(eltype(X))
    @simd for x in X
        u += llog(x)
    end
    u
end

X = map(Float32,randexp(100))
@code_llvm foo(X)

but ideally this shouldn't require custom LLVM, and should work across multiple libraries. Perhaps we may be able to take advantage of developments in #15244 to get this to work more generally.

cc: @SimonDanisch, @eschnett, @ArchRobison

nalimilan commented 8 years ago

No, what would really be cool is getting the same feature with this. :-)

foo(X) = @simd sum(log(x) for x in X)

(Anyway I guess having the longer form work is a step towards this.)

eschnett commented 8 years ago

I have a C++ library, Vecmathlib (https://bitbucket.org/eschnett/vecmathlib/wiki/Home), that goes very much in this direction. It is implemented in C++ using various vector intrinsics; a long-standing plan of mine is to rewrite it in Julia, targeting LLVM directly.

This would essentially provide a new implementation of log et al. that (a) can be tuned (so that it's faster when @fastmath is in effect), and that (b) can be efficiently vectorized. That may mean it's a bit slower in the scalar case, but the 8x speedup due to vectorization should still be a clear advantage.

The SIMD package https://github.com/eschnett/SIMD.jl is a first step in this direction.
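
For a taste of the explicit style it enables, here is a minimal sketch against the package's current API (VecRange indexing); the function name vsum is illustrative, and it assumes length(X) is a multiple of 4:

using SIMD

function vsum(X::Vector{Float32})
    lane = VecRange{4}(0)
    u = Vec{4,Float32}(0f0)
    @inbounds for i in 1:4:length(X)
        u += X[lane + i]   # loads X[i:i+3] as a single Vec{4,Float32}
    end
    sum(u)                 # horizontal reduction over the four lanes
end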

vtjnash commented 8 years ago

I'm going to close this as a non-actionable feature-wish. With https://github.com/JuliaLang/julia/pull/15244, I think the content here should be implementable in a package.

ArchRobison commented 8 years ago

I think #15244 is unrelated, since it is addressing tuple vectorization. What we want for this request is for Julia transcendentals to be lowered to LLVM intrinsics, and vectorization of those to target whatever vector library is available. That seems like pure LLVM work. The Apple Accelerate patch seems like a good reference.

eschnett commented 8 years ago

This feature request is actionable. The transformations required for this are equivalent to those performed by @fastmath. We'd need to introduce additional functions vlog etc., and make @simd replace log by vlog.
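
As a toy sketch of that substitution (vlog and the @vmath macro name here are made up for illustration; @fastmath performs an analogous rewrite via its own substitution table):

vlog(x) = log(x)  # placeholder; a real vlog would be branch-free and tuned for vectorization

# Recursively rewrite calls to log into calls to vlog in an expression tree.
function subst_log!(ex)
    if ex isa Expr
        if ex.head == :call && ex.args[1] == :log
            ex.args[1] = :vlog
        end
        foreach(subst_log!, ex.args)
    end
    return ex
end

macro vmath(ex)
    esc(subst_log!(ex))
end

With this, @vmath u += log(x) expands to u += vlog(x), which is the shape of rewrite @simd would need to perform.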

StefanKarpinski commented 8 years ago

It must be spring time – the weather is nice and Jameson is on his annual issue closing spree!

ArchRobison commented 8 years ago

@simd does not do vectorization. It just marks loops as not having cross-iteration dependences, and leaves the vectorization to LLVM. It's the LLVM vectorizer that has to replace scalar transcendentals with vector transcendentals, and hence has to be told how the mapping is done (like the Apple Accelerate patch does).

ArchRobison commented 8 years ago

The closing worked to make me pay attention :-)

eschnett commented 8 years ago

I was imagining a vlog function that still takes a scalar input, but which contains no opaque code (e.g. no calls to libm), so that LLVM can inline and vectorize it.
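
A minimal sketch of the shape such a function could take: an @inline, branch-free polynomial stand-in (the coefficients below are placeholders, not a real log kernel) that LLVM can inline into an @simd loop and vectorize:

@inline kernel(x::Float32) = muladd(x, muladd(x, muladd(x, -0.25f0, 0.333f0), -0.5f0), 1.0f0)

function sumkernel(X)
    u = zero(eltype(X))
    @simd for x in X
        u += kernel(x)  # straight-line code after inlining, so LLVM can vectorize it
    end
    u
end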

ArchRobison commented 8 years ago

I'm worried about code bloat (and compilation time) from inlining stuff as complicated as log. The LLVM vectorizer evidently has support for vectorizing transcendentals; we just have to figure out how to cater to it. (I have horror stories from a previous life about putting transforms in the wrong place for non-technical reasons.)

simonbyrne commented 8 years ago

As a rough idea, I was thinking that when the vectorizer sees a call to foo(x::T), it would look for a method foo(x::SIMD{T}), which would call the vectorized library function.
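
In SIMD.jl terms (vlog is a hypothetical name), that lookup would amount to something like:

using SIMD

vlog(x::Float64) = log(x)  # scalar method

# Vector method the vectorizer would be taught to look up. Here it just
# applies log lane-wise; a real implementation would call a batch kernel
# from a vector math library instead.
vlog(x::Vec{4,Float64}) = Vec{4,Float64}(ntuple(i -> log(x[i]), 4))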

ArchRobison commented 8 years ago

That's what the table VecFuncs in https://github.com/llvm-mirror/llvm/blob/246537be50cedeb0eb52c26439d201faf9be933a/lib/Analysis/TargetLibraryInfo.cpp#L511-L551 is essentially specifying. I suppose that if we wanted to call our own versions, maybe we could add a hook that lets Julia specify the table.

I think a similar discussion came up years ago, and there was some stumbling block on Windows: if we mapped transcendentals to LLVM intrinsics, we were stuck with them being mapped to the system libm in the backend. I don't remember the details.

simonbyrne commented 8 years ago

> I suppose that if we wanted to call our own versions, maybe we could add a hook that lets Julia specify the table.

That would be awesome, though I don't know how easy that would be.

> where if we mapped transcendentals to LLVM intrinsics, we were stuck with them being mapped to the system libm in the backend

I think that this is still a problem.

ViralBShah commented 3 years ago

Is it reasonable to close this issue given all the progress in the SIMD ecosystem?

cc @chriselrod

vchuravy commented 3 years ago

IIUC this is still an open issue that is only solved in special cases.

oscardssmith commented 3 years ago

LoopVectorization does a pretty good job solving this, but it would be nice if the compiler got good enough to make it often happen automatically.
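
For reference, a sketch of what that looks like today with LoopVectorization's @turbo (the successor to @avx), which substitutes a vectorizable log implementation at the call site:

using LoopVectorization

function foo_turbo(X)
    u = zero(eltype(X))
    @turbo for i in eachindex(X)
        u += log(X[i])  # log is swapped for a SIMD-friendly implementation
    end
    u
end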

chriselrod commented 3 years ago

I don't think it's solved. A couple of possibilities that are within reach:

  1. @simd won't vectorize sqrt at the moment, while @fastmath will (as does using Base.FastMath.sqrt_fast or Base.sqrt_llvm directly); see the sketch after this list.
  2. @simd ivdep would vectorize the current exp(::Float64) implementation if it were inlined, but it is not.
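
A sketch of point 1: the same reduction with plain sqrt (whose DomainError check for negative inputs blocks vectorization) versus @fastmath sqrt (which lowers to the llvm.sqrt intrinsic and does vectorize):

function sumsqrt(X)
    u = zero(eltype(X))
    @simd for x in X
        u += sqrt(x)  # not vectorized: sqrt carries a domain-check branch
    end
    u
end

function sumsqrt_fast(X)
    u = zero(eltype(X))
    @simd for x in X
        u += @fastmath sqrt(x)  # vectorizes: lowers to the llvm.sqrt intrinsic
    end
    u
end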

I think applying the vector function ABI to call sites when using @simd, plus shipping a SIMD library supporting that ABI, would probably be the best way to solve this issue. But ideally, we could implement the vector variants in pure Julia just like our scalar variants (especially when, as in @oscardssmith's exp, the implementation is good for both vector and scalar but lack of inlining prevents the autovectorizer from handling it). I don't know enough about the LLVM side of things to speculate on what that would or should look like, but it seems like a possibility.

@avx of course works through multiple dispatch to call different scalar vs. vector implementations, which is the Julian approach.