novacrazy opened 3 years ago
How do you plan to support NEON? Would you be willing to help stabilize `std::arch`'s NEON API? Also, are you aware of the `stdsimd` effort to provide an MVP for backend-agnostic SIMD in `std`?
@dragostis I'm planning to use the `arm` and `aarch64` NEON intrinsics behind a feature flag until they are stabilized.
As for `stdsimd`, I find the design unusable in practice. The lack of dynamic dispatch, or even consistent static dispatch, is a deal-breaker. If anything causes an `stdsimd` function to de-inline from the `#[target_feature(...)]` scope, it will fall back to scalar, because that's just how `#[target_feature(...)]` works in Rust (and C++ as well, I think). The only way to use it correctly is by setting `target-feature` or `target-cpu` for the entire binary and giving up dispatch entirely, which is unacceptable in a real application. It's the same problem `packed_simd` had.
Thermite, on the other hand, uses its `Simd` trait alongside the `#[dispatch]` macro to ensure all functions are properly monomorphized for the correct instruction set, regardless of whether they were inlined, without any extra machinery on the user's part.
Thanks for the detailed reply. I see what you mean about `stdsimd`. Dynamic dispatch sounds like something a mature version of `std::simd` would eventually have.
I was actually more curious about how they're using LLVM intrinsics for the ARM part. Since this is not what you want to do, do you have plans to move forward the stabilization of the ARM part of `core::arch`? I've read a bit about it, and it seems it will need quite a bit more of a push before it's close to stabilization.
I am entirely unaffiliated with Rust core or the stabilization efforts. I'm not familiar with what it would take to advance stabilization, either.
Regarding the LLVM intrinsics (platform intrinsics), they are both great and annoying at the same time. LLVM implements some great codegen algorithms for a variety of tasks, but it's missing some operations that do exist as dedicated instructions, and the code it generates can be somewhat rigid and overly safe at times (shuffles and selects/blends come to mind).
After having used `packed_simd` for a couple of years, I prefer to stay away from the platform intrinsics. Individual instruction intrinsics are far more predictable.
However, at the same time, Rust's internal use of platform intrinsics with arbitrary types leads to a lot of extra LLVM bitcode being generated where I'd expect just a simple intrinsic call, which has led to small deoptimizations in isolated cases, mostly centered around const-folding (not Rust `const`, but LLVM constants) and algebraic simplification. I've tried to minimize that as much as possible in Thermite, but it probably doesn't matter much on a larger scale anyway. Just a nitpick.
Also, while I'm here, I'm going to find some time soon to continue on the other backends. Scalar is mostly complete, but I need to be careful with select/blend ops to ensure good codegen with those abstractions. SSE4.2 will be next.
- Backends
- Extra data types (these can use 128-bit registers even on AVX/AVX2, and 256-bit registers on AVX512)
- Polyfills
- Iterator library
- Vectorized math library
Currently fully implemented for single and double precision: `sin`, `cos`, `tan`, `asin`, `acos`, `atan`, `atan2`, `sinh`, `cosh`, `tanh`, `asinh`, `acosh`, `atanh`, `exp`, `exp2`, `exph` (0.5 * `exp`), `exp10`, `exp_m1`, `cbrt`, `powf`, `ln`, `ln_1p`, `ln2`, `ln10`, `erf`, `erfinv`, `tgamma`, `lgamma`, `next_float`, `prev_float`
Precision-agnostic implementations: `lerp`, `scale`, `fmod`, `powi` (single and vector exponents), `poly`, `poly_f`, `poly_rational`, `summation_f`, `product_f`, `smoothstep`, `smootherstep`, `smootheststep`, `hermite` (single and vector degrees), `jacobi`, `legendre`, `bessel_y`
TODO:
- Bessel functions:
- Complex and Dual number libraries
- Precision Improvements
  - `lgamma` where possible. `ln(tgamma(x))` when we know it won't overflow? (`sin(x*π)`, etc.)
- Performance improvements:
  - `1 - (1 - x)` is the trick.
- Policy improvements:
  - `Size` policy, especially when WASM support is added (both scalar and SIMD)
- Testing