WebAssembly / relaxed-simd

Relax the strict determinism requirements of SIMD operations.

Relaxed Integer Dot Product instructions #52

Open · Maratyszcza opened 2 years ago

Maratyszcza commented 2 years ago

What are the instructions being proposed?

I propose relaxed 8-bit versions of the Dot Product instructions introduced in WebAssembly/simd#127. These instructions expose multiplication of 8-bit (unsigned or signed) elements by 7-bit (treated as unsigned) elements with accumulation of adjacent products. They are designed to expose, in a portable way, the performance benefits of the corresponding native dot-product instructions on x86 and ARM.

The discussion on issue #9 explains at great length the performance benefits of these native instructions.

I suggest i16x8.dot_i8x16_i7x16_s, i16x8.dot_i8x16_i7x16_u, i32x4.dot_i8x16_i7x16_add_s, and i32x4.dot_i8x16_i7x16_add_u as the tentative names for the proposed instructions.

What are the semantics of these instructions?

Both x86 and ARM provide variants of Dot Product instructions on SIMD vectors of 8-bit elements, but they differ in the signedness of the input elements: the x86 instructions multiply signed elements by unsigned elements, while the ARM instructions multiply either signed by signed or unsigned by unsigned elements.

The proposed instructions resolve this incompatibility by guaranteeing the result only when elements of the second input SIMD vector have at most 7 non-zero bits, as in this case there is no difference between signed and unsigned representation.

i16x8.dot_i8x16_i7x16_s is a 2-element dot product instruction consuming signed 8-bit integer elements as the first input SIMD vector and 7-bit integer elements (treated as unsigned) as the second input SIMD vector, producing signed 16-bit integer output elements. The 2-element dot product never overflows, as the worst-case output, 2 × (-128 × 127) = -32512, fits into a signed 16-bit integer.

i16x8.dot_i8x16_i7x16_u is a 2-element dot product instruction consuming unsigned 8-bit integer elements as the first input SIMD vector and 7-bit integer elements (treated as unsigned) as the second input SIMD vector, producing unsigned 16-bit integer output elements. The 2-element dot product never overflows, as the worst-case output, 2 × (255 × 127) = 64770, fits into an unsigned 16-bit integer.

i32x4.dot_i8x16_i7x16_add_s is a 4-element dot product with accumulation instruction consuming signed 8-bit integer elements in the first input SIMD vector, 7-bit integer elements (treated as unsigned) in the second input SIMD vector, and 32-bit integer elements (signedness-agnostic) in the third input SIMD vector, producing (signedness-agnostic) 32-bit integer output elements. The 4-element dot product producing a 32-bit result never overflows (the worst case, 4 × 128 × 127 = 65024 in magnitude, is far below the 32-bit range), and the addition of the third input SIMD vector is performed in modular (wrapping) arithmetic.

i32x4.dot_i8x16_i7x16_add_u is a 4-element dot product with accumulation instruction consuming unsigned 8-bit integer elements in the first input SIMD vector, 7-bit integer elements (treated as unsigned) in the second input SIMD vector, and 32-bit integer elements (signedness-agnostic) in the third input SIMD vector, producing (signedness-agnostic) 32-bit integer output elements. The 4-element dot product producing a 32-bit result never overflows (the worst case, 4 × 255 × 127 = 129540, is far below the 32-bit range), and the addition of the third input SIMD vector is performed in modular (wrapping) arithmetic.
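For illustration, a minimal scalar sketch of the intended i32x4.dot_i8x16_i7x16_add_s semantics, assuming the second operand's elements fit in 7 bits; the function name and array-based signature here are made up, not part of the proposal:

```c
#include <stdint.h>

/* Scalar model of i32x4.dot_i8x16_i7x16_add_s: for each 32-bit output lane,
   a 4-element dot product of signed 8-bit and (7-bit, zero-extended) elements,
   accumulated into the third operand with wrapping 32-bit addition. */
static void dot_i8x16_i7x16_add_s_scalar(const int8_t a[16], const uint8_t b[16],
                                         const int32_t c[4], int32_t out[4]) {
  for (int lane = 0; lane < 4; lane++) {
    int32_t dot = 0;
    for (int k = 0; k < 4; k++) {
      /* Each product fits in 16 bits; the sum of four cannot overflow 32 bits. */
      dot += (int32_t)a[4 * lane + k] * (int32_t)b[4 * lane + k];
    }
    /* Modular (wrapping) accumulation with the 32-bit accumulator lane. */
    out[lane] = (int32_t)((uint32_t)c[lane] + (uint32_t)dot);
  }
}
```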

How will these instructions be implemented?

- x86/x86-64 processors with AVX2-VNNI or AVX512-VNNI instruction set
- x86/x86-64 processors with XOP instruction set
- x86/x86-64 processors with AVX instruction set
- x86/x86-64 processors with SSSE3 instruction set
- ARM64 processors with Dot Product extension
- ARM64 processors
- Reference lowering through the Wasm SIMD128 instruction set (see the intrinsics sketch below)
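For illustration, a possible shape of such a Wasm SIMD128 fallback for i32x4.dot_i8x16_i7x16_add_s using the clang wasm_simd128.h intrinsics; this is a sketch, not necessarily the proposal's reference lowering, and the helper name is made up:

```c
#include <wasm_simd128.h>

/* Fallback sketch: widen-and-multiply, pairwise-add twice, then wrapping add.
   Sign-extending b is fine because its elements are assumed to fit in 7 bits. */
static inline v128_t dot_i8x16_i7x16_add_s_simd128(v128_t a, v128_t b, v128_t c) {
  v128_t prod_lo = wasm_i16x8_extmul_low_i8x16(a, b);   /* products of lanes 0..7  */
  v128_t prod_hi = wasm_i16x8_extmul_high_i8x16(a, b);  /* products of lanes 8..15 */
  v128_t pair_lo = wasm_i32x4_extadd_pairwise_i16x8(prod_lo); /* p0+p1, ..., p6+p7   */
  v128_t pair_hi = wasm_i32x4_extadd_pairwise_i16x8(prod_hi); /* p8+p9, ..., p14+p15 */
  /* Gather the two pair sums that belong to each output lane and add them. */
  v128_t evens = wasm_i32x4_shuffle(pair_lo, pair_hi, 0, 2, 4, 6);
  v128_t odds  = wasm_i32x4_shuffle(pair_lo, pair_hi, 1, 3, 5, 7);
  return wasm_i32x4_add(wasm_i32x4_add(evens, odds), c);
}
```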

How does behavior differ across processors? What new fingerprinting surfaces will be exposed?

As the native equivalents of the proposed Dot Product instructions on x86 perform signed-by-unsigned multiplication and the native equivalents on ARM perform either signed-by-signed or unsigned-by-unsigned multiplication, it is possible to distinguish these architectures from the results on out-of-bounds inputs (i.e. when the high bit of an element in the second input SIMD vector is set). x86/x86-64 can already be distinguished from ARM/ARM64 based on NaN behavior, so this aspect doesn't expose any new fingerprinting surfaces.

However, it is also possible to distinguish processors with AVX2-VNNI or AVX512-VNNI instruction sets on x86 from processors without these instruction sets by detecting saturation of intermediate results (in PMADDUBSW instruction), and distinguish ARM processors with Dot Product extension from ARM processors without this extension by detecting wrapping of intermediate results (in ADDP instructions). WebAssembly engines have three options to manage exposure of this fingerprinting surface:

  1. Wait it out, as new processors tend to support the AVX2-VNNI / AVX512-VNNI extension on the x86 and the NEON Dot Product extension on ARM. 2022 processor cores from Intel (Golden Cove, Gracemont), AMD (Zen 4), ARM (Cortex-X2, Cortex-A710, Cortex-A510), and Apple (A15) all support these instruction set extensions.

  2. Mimic the behaviour of the AVX2-VNNI / AVX512-VNNI VPDPBUSD instruction on x86 processors without this extension, and the behaviour of the NEON Dot Product instructions on ARM processors without that extension. This option comes at a performance cost on older processors.

  3. Avoid the AVX2-VNNI / AVX512-VNNI VPDPBUSD instruction on x86 and the NEON Dot Product instructions on ARM, and use the same instruction sequences as the older processors. This option comes at a performance cost on the newer processors.

What use cases are there?

The motivating use case discussed in this thread is 8-bit integer matrix multiplication for quantized neural-network inference (see the intgemm code linked below).

lars-t-hansen commented 2 years ago

I'm in favor of trying to make this instruction work, but in reference to #9 I think we need to check whether that performance number is realizable when the more relaxed semantics are applied, or whether intermediate results need to be fixed up to account for the relaxed semantics. The experiment @kpu references was x86-only and the semantics were explicitly those of PMADDUBSW. @yurydelendik looked into whether the experiment could use the dot products on arm64 and ran into some trouble, IIRC.

yurydelendik commented 2 years ago

FWIW the code that benefited from introducing PMADDUBSW can be found at https://github.com/kpu/intgemm/blob/d3657687c9b84d2ea2ea39b2fac7b89597bde848/intgemm/multiply.h#L305-L360

ngzhian commented 2 years ago

This instruction is the first one that is not just a relaxed version of an existing SIMD proposal instruction. I wonder if it should be i16x8.relaxed_dot_i8x16_s instead. It's a dot product of i8x16 elements, where "relaxed" means that the results are only guaranteed if the top bit of each element of the second input is not set. Anyway, I have added these to the overview with the proposed names used in this issue. We can change them at a later time.

yurydelendik commented 2 years ago

During implementation I encountered this case: i16x8.dot_i8x16_i7x16_u (v128.const i8x16 129, 192, ...) (v128.const i8x16 65, 127, ...), which generates v128.const i16x8 32767, ... on Intel. I wonder if the OP's analysis is incorrect.

PMADDUBSW produces intermediate signed 16-bit integers, and the saturated result of adding them is packed into the destination. In the example above, 129 × 65 + 192 × 127 == 32769, which was saturated to 32767.

Maratyszcza commented 2 years ago

@yurydelendik You are right, I missed the saturating behavior of PMADDUBSW. I suggest we remove the i16x8.dot_i8x16_i7x16_u and i32x4.dot_i8x16_i7x16_add_u instructions from the proposal, as I don't see a way to implement them equally efficiently across both x86 and ARM64.

kpu commented 2 years ago

The end goal is matrix multiply. Native matrix multiply libraries have highly optimized implementations that are separate for x86 and ARM. If I'm honest about how I would use these instructions, I would:

  1. Compile both x86 and ARM versions to WASM using the proposed instructions.
  2. At runtime, reverse engineer which CPU is actually running via saturation behavior.
  3. Run the relevant native implementation by exploiting the unofficial arch-dependent behavior of using these instructions outside the documented bounds.

Models quantized for the dominant case of native code will use all 8 bits and be prepared for Intel's saturation (some scale both arguments down by sqrt(2) to avoid saturation); restricting to 7 bits would require new models to be distributed and validated, probably with quantization loss.
There's also the issue of register allocation: multiply routines use all the registers they can, and spilling to the stack slows things down. Here, sensing the underlying architecture and then calling an arch-specific multiply will correctly optimize register allocation, provided the JIT is written well enough.

So really I'm asking for instructions whose behavior can be exploited to implement 8-bit multiply.

kpu commented 2 years ago

What about just having an ARM USDOT / x86 VNNI wrapper that always does unsigned * signed?

On pre-VNNI x86 it lowers to pmaddubsw (which produces saturated 16-bit results) followed by pmaddwd against a register of 1s to accomplish a horizontal add into a 32-bit signed result. This is the main idiom used for 8-bit multiply on pre-VNNI x86, and yes, it saturates.
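For illustration, a minimal sketch of that idiom with SSE intrinsics (the helper name is made up; the first operand is assumed to be the unsigned one, as pmaddubsw requires):

```c
#include <tmmintrin.h>  /* SSSE3: _mm_maddubs_epi16 */

/* Pre-VNNI idiom: pmaddubsw gives saturating pairwise i16 sums of u8*s8 products,
   then pmaddwd against a vector of ones horizontally adds them into i32 lanes. */
static inline __m128i dot_u8_s8_add_i32_sse(__m128i a_u8, __m128i b_s8, __m128i acc) {
  __m128i prod16 = _mm_maddubs_epi16(a_u8, b_s8);           /* may saturate      */
  __m128i dot32  = _mm_madd_epi16(prod16, _mm_set1_epi16(1)); /* pairs -> i32    */
  return _mm_add_epi32(acc, dot32);
}
```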

Disadvantage is older ARM with only SDOT (no USDOT) is slower than necessary.

Maratyszcza commented 2 years ago

USDOT is pretty much non-existent on ARM (there are two SoCs on the market that support it), while SDOT is widespread.

The specification of i32x4.dot_i8x16_i7x16_add_s enables both x86 VNNI and ARM SDOT to be used efficiently.

Maratyszcza commented 2 years ago

Updated the proposal to reflect the removal of i16x8.dot_i8x16_i7x16_u and i32x4.dot_i8x16_i7x16_add_u instructions.

ngzhian commented 1 year ago

For the Wasm SIMD128 lowering, I assume wasm_i16x8_extmul_low_i8x16 refers to the signed extmul, right?

Maratyszcza commented 1 year ago

Yes, wasm_i16x8_extmul_low_i8x16 is the signed extending multiplication.