jserv closed this issue 1 year ago.
Mike Pall, creator of LuaJIT, talked about writing a fast interpreter with control-flow graph optimization.
The control-flow graph of an interpreter with C switch-based dispatch looks like this:
```
   .------.
   V      |
   |      |
   L      |     L = instruction load
   D      |     D = instruction dispatch
 / /|\ \  |
/ / | \ \ |
C C C C C |     C = operand decode
X X X X X |     X = instruction execution
 \ \ | / /|
  \ \|/ / |
   |      |
   V      |
   `------'
```
Each individual instruction execution looks like this:
```
......V......
:X    |     :
:     |\ \  :
:     F S S :   F = fast path
:     |/ /  :   S = slow paths
:     |     :
:.....V.....:
```
We're talking here about dozens of instructions and hundreds of slow paths. The compiler has to deal with this whole mess and gets into trouble.
We can use a direct- or indirect-threaded interpreter even in C, e.g. with GCC's computed goto feature (`goto *` on addresses taken with `&&label`):
```
*   *   *   *   *
|   |   |   |   |
C   C   C   C   C     C = operand decode
X   X   X   X   X     X = instruction execution
L   L   L   L   L     L = next instruction load
D   D   D   D   D     D = next instruction dispatch
|   |   |   |   |
V   V   V   V   V
```
This effectively replicates the load and the dispatch, which helps the CPU's branch predictors. But it has its own share of problems.
If you write an interpreter loop in assembler, you can do much better:
Here's what this looks like:
```
*   *   *   *   *
|   |   |   |   |
C   C   C   C   C     C = partial operand decode for this instruction
F>  F>  F>  F>  F>    F = fast path, > = exit to slow path
L   L   L   L   L     L = next instruction load
C   C   C   C   C     C = partial operand decode for the next instruction
D   D   D   D   D     D = next instruction dispatch
|   |   |   |   |
V   V   V   V   V
```
You can get this down to just a few machine code instructions, which is part of why LuaJIT's interpreter is fast.
Fast VMs without assembly - speeding up the interpreter loop: threaded interpreter, Duff's device, JIT, and the Nostradamus distributor, by the author of the Bochs x86 emulator.
Virtual Machine Dispatch Experiments in Rust
Computed gotos or tail calls may give a worthwhile advantage on older or low-power architectures when implementing an FSM or a VM dispatch loop. There are a lot of these around, ARM processors being ubiquitous. The performance improvement over a single match statement could be up to 20%.
Our experimental results show that the more instructions a basic block contains, the greater the impact of TCO. We also discovered that if we treat any branch instruction as the end of a basic block, each block contains only a few instructions. To enlarge the number of instructions per block, we relax the definition of a basic block and end a block only at a jump or call instruction rather than at any branch. The related implementation is in the branch wip/enlarge_insn_in_block.
Model | Compiler | TCO | Enlarged basic block | Speedup
---|---|---|---|---
Core i7-8700 | clang-15 | 971.951 | 1035.826899 | +6.6%
Core i7-8700 | gcc-12 | 963.336 | 1123.895132 | +16.6%
eMAG 8180 | clang-15 | 335.396 | 383.819427 | +14.4%
eMAG 8180 | gcc-12 | 332.561 | 374.303071 | +12.5%
```
Instruction | Count | Invoked Times
1 | 257 [ 14.91 %]| 67811665 [ 0.00 %]
2 | 326 [ 18.91 %]| 128953458 [ 0.00 %]
3 | 287 [ 16.65 %]| 113370334 [ 0.00 %]
4 | 150 [ 8.70 %]| 28792048 [ 0.00 %]
5 | 178 [ 10.32 %]| 87276451 [ 0.00 %]
6 | 102 [ 5.92 %]| 42771905 [ 0.00 %]
7 | 74 [ 4.29 %]| 94368206793558 [100.00 %]
...
```
```
Instruction | Count | Invoked Times
0 | 0 [ 0.00 %]| 0 [ 0.00 %]
1 | 43 [ 3.48 %]| 221728 [ 0.02 %]
2 | 76 [ 6.15 %]| 46721545 [ 4.35 %]
3 | 109 [ 8.83 %]| 7935500 [ 0.74 %]
4 | 78 [ 6.32 %]| 29186819 [ 2.72 %]
5 | 79 [ 6.40 %]| 24730008 [ 2.30 %]
6 | 66 [ 5.34 %]| 98268483 [ 9.15 %]
7 | 54 [ 4.37 %]| 93258283 [ 8.68 %]
8 | 36 [ 2.91 %]| 15350793 [ 1.43 %]
9 | 42 [ 3.40 %]| 1424689 [ 0.13 %]
10 | 32 [ 2.59 %]| 9619665 [ 0.90 %]
11 | 41 [ 3.32 %]| 1017886 [ 0.09 %]
12 | 32 [ 2.59 %]| 63336493 [ 5.90 %]
13 | 36 [ 2.91 %]| 27690567 [ 2.58 %]
14 | 39 [ 3.16 %]| 87729161 [ 8.17 %]
15 | 27 [ 2.19 %]| 16175213 [ 1.51 %]
16 | 32 [ 2.59 %]| 2152239 [ 0.20 %]
17 | 23 [ 1.86 %]| 41594781 [ 3.87 %]
18 | 31 [ 2.51 %]| 23869232 [ 2.22 %]
19 | 25 [ 2.02 %]| 91090239 [ 8.48 %]
20 | 21 [ 1.70 %]| 1145062 [ 0.11 %]
21 | 23 [ 1.86 %]| 331961 [ 0.03 %]
22 | 13 [ 1.05 %]| 10018980 [ 0.93 %]
23 | 13 [ 1.05 %]| 10111048 [ 0.94 %]
24 | 12 [ 0.97 %]| 2596 [ 0.00 %]
25 | 12 [ 0.97 %]| 19443780 [ 1.81 %]
26 | 5 [ 0.40 %]| 17 [ 0.00 %]
27 | 16 [ 1.30 %]| 28597552 [ 2.66 %]
28 | 12 [ 0.97 %]| 28894686 [ 2.69 %]
29 | 6 [ 0.49 %]| 2834075 [ 0.26 %]
30 | 11 [ 0.89 %]| 124481 [ 0.01 %]
31 | 3 [ 0.24 %]| 10 [ 0.00 %]
32 | 13 [ 1.05 %]| 49620 [ 0.00 %]
33 | 7 [ 0.57 %]| 935059 [ 0.09 %]
34 | 4 [ 0.32 %]| 7939518 [ 0.74 %]
35 | 5 [ 0.40 %]| 4060710 [ 0.38 %]
36 | 6 [ 0.49 %]| 3733205 [ 0.35 %]
37 | 2 [ 0.16 %]| 12 [ 0.00 %]
38 | 6 [ 0.49 %]| 75 [ 0.00 %]
39 | 4 [ 0.32 %]| 93365 [ 0.01 %]
40 | 3 [ 0.24 %]| 4242 [ 0.00 %]
41 | 8 [ 0.65 %]| 6495489 [ 0.60 %]
42 | 15 [ 1.21 %]| 494 [ 0.00 %]
43 | 6 [ 0.49 %]| 182707629 [ 17.01 %]
...
```
Web49 claims to be a faster WebAssembly interpreter, using "one big function" with computed goto. See the HackerNews discussion.
This discussion of Web49 shows some problems of threaded-code jumps and their solutions. The first problem is poor register allocation, also mentioned in Re: Suggestions on implementing an efficient instruction set simulator in LuaJIT2:

> The register allocator can only treat each of these segments separately and will do a real bad job. There's just no way to give it a goal function like "I want the same register assignment before each goto".

The first solution mentioned in the discussion is to group instructions that use the same registers into their own functions (arithmetic expressions tend to generate such sequences).
The second limitation of computed gotos is the inability to derive the address of a label from outside the function. You always end up with some amount of superfluous conditional code for selecting the address inside the function, or with indexing through a table.
One solution in this discussion is to export the goto labels directly using inline assembly. Furthermore, inline assembly can now represent control flow, so you can define the labels in inline assembly and place the computed jump at the end of each opcode handler; that approach is fairly robust against compiler transforms.
JamVM was an efficient interpreter-only Java virtual machine with code-copying technique.
- VM Rumble - JamVM by Dr. Robert Lougher
- JamVM interpreter
- JamVM code-copying
- JamVM techniques
- Implementation and Evaluation of Fast Untyped Memory in a Java Virtual Machine, Chapter 3
- Java On Raspberry Pi Performance: comparisons with JIT-enabled JVM implementations
To investigate the machine-code inlining dispatch method, I intend to follow the inlining technique used in JamVM and rewrite the dispatch function of rv32emu.
Superinstructions are a well-known technique for improving interpreter performance: they eliminate jumps between virtual machine operations (interpreter dispatch) and enable more optimizations in the merged code. See Towards Superinstructions for Java Interpreters.
See also: Threaded code and quick instructions for Kaffe, benchmarked with the SPEC Client98 benchmark suite. Configuration: Pentium 130 (no L2 cache); seconds needed for 10 runs (real):

| | check | mtrt | jess | compress | db | mpegaudio | jack | javac |
|---|---|---|---|---|---|---|---|---|
| intrp | 0.65 | 529.9 | 192.5 | 1035.1 | 85.8 | 803.4 | 442.1 | 141.5 |
| quick | 0.35 | 163.6 | 51.4 | 282.5 | 26.7 | 358.5 | 156.8 | 42.9 |
US Patent USRE36204E, "Method and apparatus for resolving data references in generated code", filed by Sun Microsystems, has expired.
This branch is for investigating code-copying dispatch. However, there are some issues now, as we cannot reuse the copied page for emulation. Without reusing copied pages, it can pass some fundamental tests such as hello.elf, coremark.elf, and puzzle.elf.
> This branch is for investigating code-copying dispatch. However, there are some issues now, as we cannot reuse the copied page for emulation. Without reusing copied pages, it can pass some fundamental tests such as hello.elf, coremark.elf, and puzzle.elf.

Can you use GCC's `__attribute__((always_inline))` to forcibly inline functions rather than introducing function-like macros? The former is better for type checking.
> Can you use GCC's `__attribute__((always_inline))` to forcibly inline functions rather than introducing function-like macros? The former is better for type checking.
After testing, only the function insn_is_misaligned needs to be changed into a function-like macro. I attempted to forcibly inline insn_is_misaligned using GCC's `__attribute__((always_inline))`; however, it fails because calling a function from the copied code can push an incorrect return address. Using inline assembly to push the right return address and then jump to insn_is_misaligned is another option, but I don't think it's a good one.
We can reuse the copied page to emulate some fundamental tests as of the most recent commit of this branch, but some tests still need to be fixed.
There is an important issue with the memory page size: some basic blocks in our arch-test are so large that they require approximately 95537 bytes of memory. This problem can be solved by increasing the memory page size beyond 95537 bytes, but that wastes memory, because most basic blocks require less than 8152 bytes.
> There is an important issue with the memory page size: some basic blocks in our arch-test are so large that they require approximately 95537 bytes of memory. This problem can be solved by increasing the memory page size beyond 95537 bytes, but that wastes memory, because most basic blocks require less than 8152 bytes.

The problem was resolved by roughly estimating how many memory pages a basic block needs before allocating them.
This branch is used to investigate code-copying dispatch, and the latest commit, compiled with clang-15 and gcc-12, can now pass all arch-tests on an x86 machine. However, it still has some issues when running on an aarch64 machine. Worse, the performance of code-copying dispatch is lower than that of TCO, as shown in the analysis below.
Model | Compiler | TCO (a304446) | Code-copying (0e1333a) | Change
---|---|---|---|---
Core i7-8700 | clang-15 | 971.951 | 715.869 | -26.3%
Core i7-8700 | gcc-12 | 963.336 | 672.198 | -30.0%
In the latest commit, I converted all functions called from the copied code into function pointers stored in the structure rv, so that the copied code calculates relative addresses correctly.
> This branch is used to investigate code-copying dispatch, and the latest commit, compiled with clang-15 and gcc-12, can now pass all arch-tests on an x86 machine. However, it still has some issues when running on an aarch64 machine. Worse, the performance of code-copying dispatch is lower than that of TCO.

It looks reasonably fine because there are many small blocks. Next, extend the scope of a block by changing the way blocks are separated.
The Cacao interpreter comes with several research papers alongside the open-source work. See also: Interpreter Research
Closed in favor of the baseline JIT compiler.
It would still make sense to consolidate the existing interpreter as the foundation of tiered compilation before we actually develop a JIT compiler (#81). See A look at the internals of 'Tiered JIT Compilation' in .NET Core for context. Although #95 uses tail-call optimization (TCO) to reduce interpreter dispatch cost, we still need to investigate several interpreter dispatch techniques before deciding how to move forward with further performance improvements and code maintenance.
The author of wasm3 provides an interesting project interp, which implements the following methods:
Preliminary experiments on an Intel Xeon CPU E5-2650 v4 @ 2.20GHz with bench:
- Calls Loop
- Switching
- Direct Threaded Code
- Token (Indirect) Threaded Code
- Tail Calls
- Machine Code Inlining
After #95 is merged, we are concerned about
Reference: