[Open] simonbyrne opened this issue 8 years ago
No, what would really be cool is getting the same feature with this. :-)
`foo(X) = @simd sum(log(x) for x in X)`
(Anyway, I guess having the longer form work is a step towards this.)
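For context, a sketch of the "longer form" that does work today: an explicit `@simd` loop (assuming an array of floats whose element type supports `log`):

```julia
# The explicit-loop form referenced above: @simd marks the loop as
# free of cross-iteration dependences so LLVM may vectorize it.
function sumlog(X)
    s = zero(eltype(X))
    @simd for i in eachindex(X)
        @inbounds s += log(X[i])
    end
    return s
end

sumlog([1.0, 2.0, 4.0])  # ≈ log(8.0)
```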
I have a C++ library, Vecmathlib (https://bitbucket.org/eschnett/vecmathlib/wiki/Home), that goes very much in this direction. It is implemented in C++ using various vector intrinsics; a long-standing plan of mine is to rewrite it in Julia, targeting LLVM directly.
This would essentially provide a new implementation of `log` et al. that (a) can be tuned (so that it's faster when `@fastmath` is in effect), and (b) can be efficiently vectorized. That may mean it's a bit slower in the scalar case, but the 8x speedup from vectorization should still be a clear advantage.
The SIMD package (https://github.com/eschnett/SIMD.jl) is a first step in this direction.
I'm going to close this as a non-actionable feature wish. With https://github.com/JuliaLang/julia/pull/15244, I think the content here should be implementable in a package.
I think #15244 is unrelated, since it is addressing tuple vectorization. What we want for this request is for Julia transcendentals to be lowered to LLVM intrinsics, and vectorization of those to target whatever vector library is available. That seems like pure LLVM work. The Apple Accelerate patch seems like a good reference.
This feature request is actionable. The transformations required for this are equivalent to those performed by `@fastmath`. We'd need to introduce additional functions `vlog` etc., and make `@simd` replace `log` with `vlog`.
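As a point of reference, `@fastmath` already performs exactly this kind of call substitution today via a fixed name mapping; a hypothetical `vlog` could be swapped in by `@simd` the same way. A minimal illustration:

```julia
# Inside @fastmath, the call to `log` is rewritten to
# `Base.FastMath.log_fast`; both agree up to rounding.
flog(x) = @fastmath log(x)

flog(2.0)                    # same value as log(2.0) up to rounding
Base.FastMath.log_fast(2.0)  # the function @fastmath substitutes
```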
It must be spring time – the weather is nice and Jameson is on his annual issue closing spree!
`@simd` does not do vectorization. It just marks loops as not having cross-iteration dependences, and leaves the vectorization to LLVM. It's the LLVM vectorizer that has to replace scalar transcendentals with vector transcendentals, and hence has to be told how the mapping is done (as the Apple Accelerate patch does).
The closing worked to make me pay attention :-)
I was imagining a `vlog` function that still takes a scalar input, but which contains no opaque code (e.g. no calls to `libm`), so that LLVM can inline and vectorize it.
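A toy sketch of that idea, with an illustrative name and a deliberately short (and therefore inaccurate) polynomial: a scalar function built only from arithmetic, with no opaque library call, so LLVM can inline it into a `@simd` loop and auto-vectorize it.

```julia
# Toy stand-in: a truncated series for log1p, accurate only for
# small |x|. No libm call, so it inlines and vectorizes.
@inline poly_log1p(x) = x * (1 + x * (-1/2 + x * (1/3 + x * (-1/4))))

function sum_log1p(X)
    s = zero(eltype(X))
    @simd for i in eachindex(X)
        @inbounds s += poly_log1p(X[i])
    end
    return s
end
```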
I'm worried about code bloat (and compilation time) from inlining stuff as complicated as `log`. The LLVM vectorizer evidently has support for vectorizing transcendentals; we just have to figure out how to cater to it. (I have horror stories from a previous life about putting transforms in the wrong place for non-technical reasons.)
As a rough idea, I was thinking that when the vectorizer sees a call to `foo(x::T)`, it would look for a method `foo(x::SIMD{T})`, which would call the vectorized library function.
That's essentially what the `VecFuncs` table in https://github.com/llvm-mirror/llvm/blob/246537be50cedeb0eb52c26439d201faf9be933a/lib/Analysis/TargetLibraryInfo.cpp#L511-L551 specifies. I suppose that if we wanted to call our own versions, maybe we could add a hook that lets Julia specify the table.
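A hypothetical sketch of what such a hook might expose from the Julia side: a table mapping a scalar function name and vector width to a vector routine's symbol. The symbol names below follow glibc's libmvec mangling and are illustrative only; a real hook would be negotiated with LLVM's `TargetLibraryInfo`.

```julia
# Illustrative (scalar name, vector width) -> vector symbol table,
# in the style of LLVM's VecFuncs entries. Symbols are examples only.
const VEC_FUNCS = Dict(
    ("log", 2) => "_ZGVbN2v_log",  # SSE,  2 x Float64
    ("log", 4) => "_ZGVdN4v_log",  # AVX2, 4 x Float64
    ("exp", 4) => "_ZGVdN4v_exp",
)

get(VEC_FUNCS, ("log", 4), nothing)
```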
I think a similar discussion came up years ago, and there was some stumbling block (on Windows) where if we mapped transcendentals to LLVM intrinsics, we were stuck with them being mapped to the system libm in the backend, but I don't remember the details.
> I suppose that if we wanted to call our own versions, maybe we could add a hook that lets Julia specify the table.

That would be awesome, though I don't know how easy that would be.

> where if we mapped transcendentals to LLVM intrinsics, we were stuck with them being mapped to the system libm in the backend

I think that this is still a problem.
Is it reasonable to close this issue given all the progress in the SIMD ecosystem?
cc @chriselrod
IIUC this is still an open issue that is only solved in special-cases.
`LoopVectorization` does a pretty good job solving this, but it would be nice if the compiler got good enough to make it happen automatically more often.
I don't think it's solved. A couple of possibilities that are within reach:

- `@simd` won't vectorize `sqrt` at the moment, while `@fastmath` will (as does using `Base.FastMath.sqrt_fast` or `Base.sqrt_llvm` directly).
- `@simd ivdep` would vectorize the current `exp(::Float64)` implementation if it were inlined, but it is not.

I think applying the vector function ABI to call sites under `@simd`, plus shipping a SIMD library supporting that ABI, would probably be the best way to solve this issue.
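To make the `sqrt` point concrete, a minimal sketch (assuming `Float64` inputs): with `@fastmath`, the call lowers to `Base.FastMath.sqrt_fast`, a form the vectorizer can handle, so this loop can vectorize where the plain-`sqrt` version would not.

```julia
# @simd alone won't vectorize plain sqrt here, but under @fastmath
# the call lowers to Base.FastMath.sqrt_fast, which vectorizes.
function fast_sumsqrt(X)
    s = zero(eltype(X))
    @simd for i in eachindex(X)
        @inbounds s += @fastmath sqrt(X[i])
    end
    return s
end

fast_sumsqrt([1.0, 4.0, 9.0])  # 1 + 2 + 3 = 6.0
```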
But ideally, we could implement the vector variants in pure Julia, just as our scalar variants are (especially when, as in @oscardssmith's `exp`, the implementation is good for both vector and scalar, but the lack of inlining prevents the autovectorizer from handling it). I don't know enough about the LLVM side of things to speculate on what that would or should look like, but it seems like a possibility.
`@avx`, of course, works through multiple dispatch to call different scalar vs. vector implementations, which is the Julian approach.
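A minimal sketch of that dispatch-based approach; the `Vec4` type and `mylog` name are stand-ins (packages like SIMD.jl provide a real `Vec` type backed by LLVM vectors):

```julia
# Stand-in 4-wide vector type; a real package would wrap an LLVM
# vector and call an actual vector kernel in the Vec method.
struct Vec4
    data::NTuple{4,Float64}
end

mylog(x::Float64) = log(x)               # scalar method
mylog(v::Vec4) = Vec4(map(log, v.data))  # vector method via dispatch

mylog(Vec4((1.0, 2.0, 4.0, 8.0)))
```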
It would be nice if we could write something like

and have it automatically use a vector math function such as `_mm_log_ps` from MKL VML, or its Accelerate or Yeppp equivalents (we do have packages for those libraries, but they only expose the direct array operations, which require memory allocation). It seems that LLVM has some optimisations for Accelerate, such that the following works on master on a Mac:

but ideally this shouldn't require custom LLVM, and should work across multiple libraries. Perhaps we can take advantage of developments in #15244 to get this to work more generally.
cc: @SimonDanisch, @eschnett, @ArchRobison