WebAssembly / wide-arithmetic

WebAssembly proposal for wide arithmetic
https://webassembly.github.io/spec/

Alternative or addition: instructions with overflow/carry flags #6

Open alexcrichton opened 1 month ago

alexcrichton commented 1 month ago

Perhaps the primary alternative to this proposal that would try to solve the same original problem would be to add instructions that manipulate/expose the overflow flag directly from native hardware. This is lower level than the 128-bit operations themselves, and the theory is that 128-bit operations could be built from these lower-level instructions. The original proposal explicitly did not propose this due to the complexity of optimizing these instructions and the ease of achieving performance on the desired benchmarks with 128-bit operations. In the CG meeting, however, it was brought up that these lower-level primitives might be a better choice.

This issue is intended to be a discussion location for continuing to flesh out the flags-vs-128-bit-ops story. More thorough rationale will be needed in later phases to justify either adopting flag-based ops or explicitly rejecting them. This will likely be informed by implementation experience in other engines in addition to performance numbers on relevant benchmarks.
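
As a rough illustration of the two directions being compared (this sketch is not taken from the proposal text), here is what "building 128-bit operations from lower-level carry instructions" could look like, written as Rust on u64 limbs. `add_with_carry` stands in for a hypothetical flag-exposing instruction along the lines of `i64.add_with_carry_u`; all names and shapes here are illustrative only.

```rust
// Stand-in for a hypothetical flag-exposing wasm instruction: add two
// limbs plus a carry-in, returning the sum and the carry-out.
fn add_with_carry(a: u64, b: u64, carry_in: bool) -> (u64, bool) {
    let (s1, c1) = a.overflowing_add(b);
    let (s2, c2) = s1.overflowing_add(carry_in as u64);
    (s2, c1 || c2)
}

// 128-bit addition built out of the carry primitive: two chained steps.
fn add128_from_carries(a_lo: u64, a_hi: u64, b_lo: u64, b_hi: u64) -> (u64, u64) {
    let (lo, carry) = add_with_carry(a_lo, b_lo, false);
    let (hi, _) = add_with_carry(a_hi, b_hi, carry);
    (lo, hi)
}

// The proposal as written instead exposes the wide operation directly,
// which in Rust terms is just arithmetic on u128.
fn add128_direct(a: u128, b: u128) -> u128 {
    a.wrapping_add(b)
}
```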

alexcrichton commented 1 month ago

I've opened https://github.com/WebAssembly/128-bit-arithmetic/pull/10 with some more discussion of the background of this proposal. It notably outlines the possible shape of these instructions plus why they performed poorly in Wasmtime with the initial implementation.

alexcrichton commented 1 month ago

One concern I might have with going exclusively with overflow/carry flags is that platforms like RISC-V don't have the same instructions as x86/aarch64 and might be slower. For example, implementing i64.add_with_carry_u on RISC-V to achieve 128-bit addition might end up being slower than what RISC-V already generates today. I'm not aware of other platforms missing these instructions, though, so RISC-V may be the only one affected. I additionally do not have a machine to benchmark on to confirm this hypothesis one way or another.
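
For context on why RISC-V is different: it has no flags register, so the carry out of a 64-bit add is typically recovered with an unsigned compare (the `sltu` idiom). Below is a hedged Rust sketch of that pattern, purely for illustration; the helper names are hypothetical and not from the proposal.

```rust
// The flag-less pattern: the add wrapped iff the result is smaller than
// an input, so the carry is recomputed with a compare (sltu-style).
fn add64_carry_out(a: u64, b: u64) -> (u64, u64) {
    let sum = a.wrapping_add(b);
    let carry = (sum < a) as u64;
    (sum, carry)
}

// With a carry-in the pattern needs two wrapped adds and two compares,
// which is where a flag-less ISA pays extra relative to a single `adc`.
fn add64_with_carry_in(a: u64, b: u64, carry_in: u64) -> (u64, u64) {
    let t = a.wrapping_add(b);
    let c1 = (t < a) as u64;
    let sum = t.wrapping_add(carry_in);
    let c2 = (sum < t) as u64;
    (sum, c1 | c2)
}
```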

waterlens commented 1 month ago

> One concern I might have with going exclusively with overflow/carry flags is that platforms like RISC-V don't have the same instructions as x86/aarch64 and might be slower. For example, implementing i64.add_with_carry_u on RISC-V to achieve 128-bit addition might end up being slower than what RISC-V already generates today. I'm not aware of other platforms missing these instructions, though, so RISC-V may be the only one affected. I additionally do not have a machine to benchmark on to confirm this hypothesis one way or another.

It is more likely to be a problem with the design of the RISC-V ISA itself, as suggested by this discussion. It would be a big sacrifice for Wasm to give up operations that are common on other hardware platforms just because RISC-V currently doesn't care about them.

alexcrichton commented 1 month ago

That's a good point! I should clarify that it's something I think is worth measuring, but I don't think it's a showstopper for an alternative such as this.


Orthogonally, I wanted to flesh out some more numbers on what I'm measuring locally. Using the benchmarks in https://github.com/alexcrichton/wasm-benchmark-i128 I measured the slowdown relative to native of wasm today, of this proposal as-is with 128-bit ops, and of this alternative of overflow/carry flags (plus i64.mul_wide_*) instead. Each column in the table is a different wasm binary being benchmarked and each row is a benchmark. Lower is better as it means the result is closer to native. This was collected with Wasmtime 25ish on Linux x86_64.

| Benchmark | Wasm Today | Wasm + 128-bit-arithmetic | Wasm + overflow/carry/mul_wide |
|---|---|---|---|
| blind-sig | 617% | 74% | 79% |
| lehmer | 362% | 12% | -7% |
| mul-bignum | 391% | 30% | 111% |
| fib_10000 | 122% | 15% | 100% |
| shl | 93% | 92% | 93% |
| shr | 98% | 98% | 95% |
| div-bignum | 345% | 160% | 234% |
| sort signed | 63% | 64% | 64% |
| sort unsigned | 72% | 72% | 69% |

Summary:

The most notable result is that add128 performs much better on addition operations than the overflow/carry operations do, which may be counterintuitive. I've outlined the reason in the explainer a bit, but to summarize here: the most general lowerings of overflow/carry ops require moving the carry flag out of the EFLAGS register and back into it. That in turn causes a significant slowdown if it's not optimized away, and Wasmtime/Cranelift do not have a means of optimizing it away. In the 128-bit op case the instructions more closely match the original program semantics, so that flag shuffling doesn't arise.
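
To make the EFLAGS round trip concrete, here is a hedged sketch of a multi-limb addition with comments describing a plausible generic x86-64 lowering of a flag-style instruction whose carry is an ordinary value. This is illustrative only, not actual Wasmtime/Cranelift output, and `add192` is a hypothetical helper, not anything from the proposal.

```rust
// Hedged sketch: a 3-limb (192-bit) addition where the carry crosses each
// step as an ordinary value, the shape a flag-style wasm op would have.
fn add192(a: [u64; 3], b: [u64; 3]) -> [u64; 3] {
    let mut out = [0u64; 3];
    let mut carry = 0u64;
    for i in 0..3 {
        // A plausible generic x86-64 lowering of this step:
        //   neg  <carry-reg>   ; re-derive CF from the 0/1 carry value
        //   adc  <a>, <b>      ; the add that actually consumes CF
        //   setc <carry-reg>   ; move CF back into a register for the next step
        // i.e. extra instructions per limb unless the backend can keep CF
        // live across steps, which Cranelift currently has no way to do.
        let (t, c1) = a[i].overflowing_add(b[i]);
        let (s, c2) = t.overflowing_add(carry);
        out[i] = s;
        carry = (c1 | c2) as u64;
    }
    out
}
// With the proposal's 128-bit ops the carry inside each wide add never
// becomes a separate value, so the lowering can stay as a plain add/adc
// pair with no round trip through a general-purpose register.
```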

alexcrichton commented 2 weeks ago

I've posted some more thoughts to record in the overview in https://github.com/WebAssembly/128-bit-arithmetic/pull/18

waterlens commented 2 weeks ago

Basically, I agree with your opinion. Correct modeling of how the carry flag works is rare in compilers. There is a trade-off between implementation effort and optimal performance, and the instructions to be added cannot be semantically perfect, especially considering there aren't many people dedicated to the work. The current speedup is already considerable, so please go ahead and push this proposal upstream!