sysprog21 / rv32emu

Compact and Efficient RISC-V RV32I[MAFC] emulator
MIT License

Investigate interpreter dispatch methods #97

Closed jserv closed 1 year ago

jserv commented 1 year ago

It would still make sense to consolidate the existing interpreter as the foundation of tiered compilation before we actually develop a JIT compiler (#81). See A look at the internals of 'Tiered JIT Compilation' in .NET Core for context. Although #95 uses tail-call optimization (TCO) to reduce interpreter dispatch cost, we still need to investigate several interpreter dispatch techniques before deciding how to move forward with further performance improvements and code maintenance.

The author of wasm3 provides an interesting project interp, which implements the following methods:

Preliminary experiments on an Intel Xeon CPU E5-2650 v4 @ 2.20GHz with bench.

[ Calls Loop ]

time                 2.782 s    (1.765 s .. 3.482 s)
                     0.985 R²   (0.949 R² .. 1.000 R²)
mean                 2.743 s    (2.623 s .. 2.903 s)
std dev              167.7 ms   (43.46 ms .. 225.4 ms)
variance introduced by outliers: 19% (moderately inflated)

[ Switching ]

time                 2.430 s    (2.135 s .. 2.684 s)
                     0.998 R²   (0.994 R² .. 1.000 R²)
mean                 2.550 s    (2.461 s .. 2.682 s)
std dev              135.7 ms   (23.52 ms .. 176.6 ms)
variance introduced by outliers: 19% (moderately inflated)

[ Direct Threaded Code ]

time                 2.058 s    (1.242 s .. 2.725 s)
                     0.974 R²   (0.964 R² .. 1.000 R²)
mean                 1.756 s    (1.571 s .. 1.920 s)
std dev              191.4 ms   (108.1 ms .. 268.4 ms)
variance introduced by outliers: 23% (moderately inflated)

[ Token (Indirect) Threaded Code ]

time                 1.912 s    (1.376 s .. 3.088 s)
                     0.957 R²   (0.931 R² .. 1.000 R²)
mean                 1.564 s    (1.456 s .. 1.762 s)
std dev              193.0 ms   (12.64 ms .. 237.9 ms)
variance introduced by outliers: 23% (moderately inflated)

[ Tail Calls ]

time                 1.414 s    (1.027 s .. 1.736 s)
                     0.987 R²   (0.985 R² .. 1.000 R²)
mean                 1.131 s    (1.020 s .. 1.239 s)
std dev              139.4 ms   (2.226 ms .. 168.8 ms)
variance introduced by outliers: 23% (moderately inflated)

[ Machine Code Inlining ]

time                 344.6 ms   (57.24 ms .. 478.0 ms)
                     0.923 R²   (NaN R² .. 1.000 R²)
mean                 383.3 ms   (342.6 ms .. 412.6 ms)
std dev              42.76 ms   (23.80 ms .. 53.86 ms)
variance introduced by outliers: 23% (moderately inflated)

After #95 is merged, we are concerned about

Reference:

jserv commented 1 year ago

Mike Pall, creator of LuaJIT, talked about writing a fast interpreter with control-flow graph optimization.

The control-flow graph of an interpreter with C switch-based dispatch looks like this:

      .------.
      V      |
      |      |
      L      |  L = instruction load
      D      |  D = instruction dispatch
   / /|\ \   |
  / / | \ \  |
  C C C C C  |  C = operand decode
  X X X X X  |  X = instruction execution
  \ \ | / /  |
   \ \|/ /   |
      |      |
      V      |
      `------'

Each individual instruction execution looks like this:

  ......V......
  :X    |     :
  :     |\ \  :
  :     F S S :  F = fast path
  :     |/ /  :  S = slow paths
  :     |     :
  :.....V.....:

We're talking here about dozens of instructions and hundreds of slow paths. The compiler has to deal with the whole mess and gets into trouble:

We can use a direct- or indirect-threaded interpreter even in C, e.g. with GCC's computed goto ("labels as values", `&&label`) feature:

  * * * * *
  | | | | |
  C C C C C    C = operand decode
  X X X X X    X = instruction execution
  L L L L L    L = next instruction load
  D D D D D    D = next instruction dispatch
  | | | | |
  V V V V V

This effectively replicates the load and the dispatch, which helps the CPU branch predictors. But it has its own share of problems:

If you write an interpreter loop in assembler, you can do much better:

Here's how this would look:

  *  *  *  *  *
  |  |  |  |  |
  C  C  C  C  C    C = partial operand decode for this instruction
  F> F> F> F> F>   F = fast path, > = exit to slow path
  L  L  L  L  L    L = next instruction load
  C  C  C  C  C    C = partial operand decode for the next instruction
  D  D  D  D  D    D = next instruction dispatch
  |  |  |  |  |
  V  V  V  V  V

You can get this down to just a few machine code instructions. LuaJIT's interpreter is fast, because

jserv commented 1 year ago

Fast VMs without assembly - speeding up the interpreter loop: threaded interpreter, Duff's device, JIT, Nostradamus distributor, by the author of the Bochs x86 emulator.

jserv commented 1 year ago

Virtual Machine Dispatch Experiments in Rust

Computed gotos or tail calls may give a worthwhile advantage on older or low-power architectures when implementing an FSM or a VM dispatch loop. There are a lot of these around, ARM processors being ubiquitous. The performance improvement over a single match statement could be up to 20%.

jserv commented 1 year ago

JamVM was an efficient interpreter-only Java virtual machine that used a code-copying technique.

qwe661234 commented 1 year ago

Our experimental results show that the greater the number of instructions in a basic block, the greater the impact TCO has. We also discovered that if we end a basic block at any branch instruction, each block contains only a few instructions. To enlarge the number of instructions per block, we relax the definition of a basic block and end a block only at a jump or call instruction, rather than at any branch. The related implementation is in the branch wip/enlarge_insn_in_block.

CoreMark Result

| Model | Compiler | TCO | Enlarged Basic Block | Speedup |
|-------|----------|-----|----------------------|---------|
| Core i7-8700 | clang-15 | 971.951 | 1035.826899 | +6.6% |
| Core i7-8700 | gcc-12 | 963.336 | 1123.895132 | +16.6% |
| eMAG 8180 | clang-15 | 335.396 | 383.819427 | +14.4% |
| eMAG 8180 | gcc-12 | 332.561 | 374.303071 | +12.5% |

Compare the number of instructions in a basic block

Model: Core i7-8700, Compiler: clang-15

jserv commented 1 year ago

> JamVM was an efficient interpreter-only Java virtual machine that used a code-copying technique.

I reworked JamVM to make it work with OpenJDK 8. See the revised jamvm, which supports both x86-64 and AArch64 for GNU/Linux.

jserv commented 1 year ago

Web49 claims to be a faster WebAssembly interpreter, using "one big function" with computed goto. HackerNews discussion

qwe661234 commented 1 year ago

This discussion of Web49 shows some problems with threaded-code jumps and their solutions. The first problem is poor register allocation, also mentioned in Re: Suggestions on implementing an efficient instruction set simulator in LuaJIT2:

> The register allocator can only treat each of these segments separately and will do a real bad job. There's just no way to give it a goal function like "I want the same register assignment before each goto".

The first solution mentioned in the discussion is to group instructions that use the same registers into their own functions (arithmetic expressions tend to generate such sequences).

The second limitation of computed gotos is the inability to derive the address of a label from outside the function. You always end up with some amount of superfluous conditional code for selecting the address inside the function, or indexing through a table.

One solution from the discussion is to export goto labels directly using inline assembly. Further, inline assembly can now represent control flow, so you can define the labels in inline assembly and the computed jump at the end of an opcode. That is fairly robust against compiler transforms.

qwe661234 commented 1 year ago

> JamVM was an efficient interpreter-only Java virtual machine that used a code-copying technique.

To investigate the machine code inlining dispatch method, I intend to follow the machine code inlining technique in JamVM and rewrite the dispatch function of rv32emu.

jserv commented 1 year ago

Superinstructions are a well-known technique for improving interpreter performance, eliminating jumps between virtual machine operations (interpreter dispatch) and enabling more optimizations in the merged code. Quote from Towards Superinstructions for Java Interpreters:

See also: Threaded code and quick instructions for Kaffe benchmark

Using the SPEC Client98 benchmark suite. Configuration: Pentium 130 (no L2 cache); seconds needed for 10 runs (real):

|       | check | mtrt  | jess  | compress | db   | mpegaudio | jack  | javac |
|-------|-------|-------|-------|----------|------|-----------|-------|-------|
| intrp | 0.65  | 529.9 | 192.5 | 1035.1   | 85.8 | 803.4     | 442.1 | 141.5 |
| quick | 0.35  | 163.6 | 51.4  | 282.5    | 26.7 | 358.5     | 156.8 | 42.9  |

US Patent USRE36204E, Method and apparatus for resolving data references in generated code, filed by Sun Microsystems, has expired.

qwe661234 commented 1 year ago

This branch is for investigating code-copying dispatch. However, there are some issues at the moment: we cannot yet reuse the copied page during emulation. Without reusing copied pages, it passes some fundamental tests, such as hello.elf, coremark.elf, and puzzle.elf.

jserv commented 1 year ago

> This branch is for investigating code-copying dispatch. However, there are some issues at the moment: we cannot yet reuse the copied page during emulation. Without reusing copied pages, it passes some fundamental tests, such as hello.elf, coremark.elf, and puzzle.elf.

Can you use GCC's __attribute__((always_inline)) to forcibly inline functions rather than introducing function-like macros? The former is better for type checking.

qwe661234 commented 1 year ago

> Can you use GCC's __attribute__((always_inline)) to forcibly inline functions rather than introducing function-like macros? The former is better for type checking.

Only the function insn_is_misaligned needs to be changed into a function-like macro after testing. I attempted to forcibly inline insn_is_misaligned using GCC's __attribute__((always_inline)); however, it fails because calling a function from the copied code can push an incorrect return address.

Using inline assembly to push the right return address and jump to insn_is_misaligned is another option, but I don't think it's a good one.

qwe661234 commented 1 year ago

We can reuse the copied page to emulate some fundamental tests as of the most recent commit of this branch, but some tests still fail.

There is an important issue with the memory page size: some basic blocks in our arch-test are so large that they require approximately 95537 bytes of memory. Increasing the page size beyond 95537 bytes would solve this, but it wastes memory, because most basic blocks need less than 8152 bytes.

qwe661234 commented 1 year ago

> We can reuse the copied page to emulate some fundamental tests as of the most recent commit of this branch, but some tests still fail.
>
> There is an important issue with the memory page size: some basic blocks in our arch-test are so large that they require approximately 95537 bytes of memory. Increasing the page size beyond 95537 bytes would solve this, but it wastes memory, because most basic blocks need less than 8152 bytes.

The problem was resolved by roughly estimating how many memory pages a basic block needs before allocating them.

qwe661234 commented 1 year ago

This branch is used to investigate code-copying dispatch, and the latest commit, compiled with clang-15 and gcc-12, now passes all arch-tests on an x86 machine. However, it still has issues on an aarch64 machine. Worse, code-copying dispatch performs worse than TCO, as the comparison below shows.

| Model | Compiler | a304446 | 0e1333a | Speedup |
|-------|----------|---------|---------|---------|
| Core i7-8700 | clang-15 | 971.951 | 715.869 | -26.3% |
| Core i7-8700 | gcc-12 | 963.336 | 672.198 | -30.0% |

In the latest commit, I turned all functions called from the copied code into function pointers stored in the rv structure, so that the copied code computes relative addresses correctly.

jserv commented 1 year ago

> This branch is used to investigate code-copying dispatch, and the latest commit, compiled with clang-15 and gcc-12, now passes all arch-tests on an x86 machine. However, it still has issues on an aarch64 machine. Worse, code-copying dispatch performs worse than TCO.

It looks reasonably fine, because there are many small blocks. Next, extend the scope of a block by changing how blocks are separated.

jserv commented 1 year ago

The Cacao interpreter is accompanied by several novel research papers in addition to the open source work. See also: Interpreter Research

jserv commented 1 year ago

Close in favor of baseline JIT compiler.