Binaryen

Binaryen is a compiler and toolchain infrastructure library for WebAssembly, written in C++. It aims to make compiling to WebAssembly easy, fast, and effective:

Toolchains using Binaryen as a component (typically running wasm-opt) include:

For more on how some of those work, see the toolchain architecture parts of the V8 WasmGC porting blogpost.

Compilers using Binaryen as a library include:

Binaryen also provides a set of toolchain utilities that can parse and emit WebAssembly, so wasm can be loaded, transformed and optimized, and re-emitted.

Consult the contributing instructions if you're interested in participating.

Binaryen IR

Binaryen's internal IR is designed to be flexible and fast for optimization, while staying close to WebAssembly itself.

There are a few differences between Binaryen IR and the WebAssembly language:

As a result, you might notice that round-trip conversions (wasm => Binaryen IR => wasm) change code a little in some corner cases.
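
For example, you can round-trip a file through Binaryen without running any passes and compare the before and after (a minimal sketch; the file names are placeholders):

bin/wasm-opt input.wasm -o roundtrip.wasm

# Or inspect the result as text:
bin/wasm-opt input.wasm -S -o roundtrip.wat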

Notes when working with Binaryen IR:

Intrinsics

Binaryen intrinsic functions look like calls to imports, e.g.,

(import "binaryen-intrinsics" "foo" (func $foo))

Implementing them that way allows them to be read and written by other tools, and it avoids the confusing binary format errors those tools would emit if we used a custom binary format extension.

An intrinsic may be optimized away by the optimizer. If it is not, it must be lowered before shipping the wasm, as otherwise it will look like a call to an import that does not exist, and VMs will report an error about the missing import. That final lowering is not done automatically: a user of intrinsics must run the lowering pass explicitly, because the tools cannot know when the user intends to finish optimizing. The user may have a pipeline of multiple optimization steps, may be doing local experimentation, or may be fuzzing/reducing, etc.; only the user knows when the wasm is "final" and ready to be shipped. Note that, in general, some additional optimizations may become possible after the final lowering, so a useful pattern is to optimize once normally with intrinsics, then lower them away, then optimize again, e.g.:

wasm-opt input.wasm -o output.wasm -O --intrinsic-lowering -O

Each intrinsic defines its semantics, which includes what the optimizer is allowed to do with it and what the final lowering will turn it to. See intrinsics.h for the detailed definitions. A quick summary appears here:

Tools

This repository contains code that builds the following tools in bin/ (see the building instructions):

All of the Binaryen tools are deterministic, that is, given the same inputs you should always get the same outputs. (If you see a case that behaves otherwise, please file an issue.)

Usage instructions for each are below.

Binaryen Optimizations

Binaryen contains a lot of optimization passes to make WebAssembly smaller and faster. You can run the Binaryen optimizer by using wasm-opt, and the same passes can also be run from other tools, like wasm2js and wasm-metadce.
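
For example, here is a sketch of typical wasm-opt invocations (the file names are placeholders; -Oz is the size-focused counterpart to -O3):

# Optimize mainly for speed:
bin/wasm-opt input.wasm -O3 -o output.wasm

# Optimize mainly for code size:
bin/wasm-opt input.wasm -Oz -o output.wasm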

See each optimization pass for details of what it does, but here is a quick overview of some of the relevant ones:

"LTO" in the above means an optimization is Link Time Optimization-like in that it works across multiple functions, but in a sense Binaryen is always "LTO" as it usually is run on the final linked wasm.

Advanced optimization techniques in the Binaryen optimizer include SSAification, Flat IR, and Stack/Poppy IR.

See the Optimizer Cookbook wiki page for more on how to use the optimizer effectively.

Binaryen also contains various passes that do other things than optimizations, like legalization for JavaScript, Asyncify, etc.
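
As an illustrative sketch, such passes are invoked from wasm-opt by name, for example (the file names are placeholders):

# Run the Asyncify transform on a module:
bin/wasm-opt --asyncify input.wasm -o output.wasm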

Building

Binaryen uses git submodules (at time of writing just for gtest), so before you build you will have to initialize the submodules:

git submodule init
git submodule update

After that you can build with CMake:

cmake . && make

A C++17 compiler is required. On macOS, you need to install cmake, for example, via brew install cmake. Note that you can also use ninja as your generator: cmake -G Ninja . && ninja.

To avoid the gtest dependency, you can pass -DBUILD_TESTS=OFF to cmake.
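
For example, combining that with the build command above:

cmake . -DBUILD_TESTS=OFF && make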

Binaryen.js can be built using Emscripten, which can be installed via the SDK.
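
A rough sketch of such a build, assuming the emsdk is installed and activated so that emcmake and emmake are on your PATH (check the CMake files for the exact JS targets):

emcmake cmake . && emmake make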

Visual C++

  1. Using the Microsoft Visual Studio Installer, install the "Visual C++ tools for CMake" component.

  2. Generate the projects:

    mkdir build
    cd build
    "%VISUAL_STUDIO_ROOT%\Common7\IDE\CommonExtensions\Microsoft\CMake\CMake\bin\cmake.exe" ..

    Substitute VISUAL_STUDIO_ROOT with the path to your Visual Studio installation. If you are using the Visual Studio Build Tools, the path will be "C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools".

  3. From the Developer Command Prompt, build the desired projects:

    msbuild binaryen.vcxproj

    CMake generates a project named "ALL_BUILD.vcxproj" for conveniently building all the projects.

Releases

Builds are distributed by the various toolchains that use Binaryen, like Emscripten, wasm-pack, etc. There are also official releases on GitHub:

https://github.com/WebAssembly/binaryen/releases

Currently builds of the following platforms are included:

Running

wasm-opt

Run

bin/wasm-opt [.wasm or .wat file] [options] [passes, see --help] [--help]

The wasm optimizer receives WebAssembly as input, and can run transformation passes on it, as well as print it (before and/or after the transformations). For example, try

bin/wasm-opt test/lit/passes/name-types.wast -all -S -o -

That will output one of the test cases in the test suite. To run a transformation pass on it, try

bin/wasm-opt test/lit/passes/name-types.wast --name-types -all -S -o -

The name-types pass ensures each type has a name and renames exceptionally long type names. You can see the change the transformation causes by comparing the output of the two commands.

It's easy to add your own transformation passes to the shell: just add .cpp files into src/passes and rebuild the shell. For example code, take a look at the name-types pass.

Some more notes:

wasm2js

Run

bin/wasm2js [input.wasm file]

This will print out JavaScript to the console.

For example, try

bin/wasm2js test/hello_world.wat

That output contains

 function add(x, y) {
  x = x | 0;
  y = y | 0;
  return x + y | 0 | 0;
 }

as a translation of

 (func $add (; 0 ;) (type $0) (param $x i32) (param $y i32) (result i32)
  (i32.add
   (local.get $x)
   (local.get $y)
  )
 )

wasm2js's output is in ES6 module format - basically, it converts a wasm module into an ES6 module (to run on older browsers and Node.js versions you can use Babel etc. to convert it to ES5). Let's look at a full example of calling that hello world wat; first, create the main JS file:

// main.mjs
import { add } from "./hello_world.mjs";
console.log('the sum of 1 and 2 is:', add(1, 2));

Then run this (note that you need a new enough Node.js with ES6 module support):

$ bin/wasm2js test/hello_world.wat -o hello_world.mjs
$ node --experimental-modules main.mjs
the sum of 1 and 2 is: 3

Things to keep in mind about wasm2js's output:

wasm-ctor-eval

wasm-ctor-eval executes functions, or parts of them, at compile time. After doing so it serializes the runtime state into the wasm, which is like taking a "snapshot". When the wasm is later loaded and run in a VM, it will continue execution from that point, without re-doing the work that was already executed.

For example, consider this small program:

(module
 ;; A global variable that begins at 0.
 (global $global (mut i32) (i32.const 0))

 (import "import" "import" (func $import))

 (func "main"
  ;; Set the global to 1.
  (global.set $global
   (i32.const 1))

  ;; Call the imported function. This *cannot* be executed at
  ;; compile time.
  (call $import)

  ;; We will never get to this point, since we stop at the
  ;; import.
  (global.set $global
   (i32.const 2))
 )
)

We can evaluate part of it at compile time like this:

wasm-ctor-eval input.wat --ctors=main -S -o -

This tells it that there is a single function that we want to execute ("ctor" is short for "global constructor", a name that comes from code that is executed before a program's entry point) and then to print it as text to stdout. The result is this:

trying to eval main
  ...partial evalling successful, but stopping since could not eval: call import: import.import
  ...stopping
(module
 (type $none_=>_none (func))
 (import "import" "import" (func $import))
 (global $global (mut i32) (i32.const 1))
 (export "main" (func $0_0))
 (func $0_0
  (call $import)
  (global.set $global
   (i32.const 2)
  )
 )
)

The logging shows us managing to eval part of main(), but not all of it, as expected: we can eval the first global.set, but then we stop at the call to the imported function (because we don't know what that function will be when the wasm is actually run in a VM later). Note how in the output wasm the global's value has been updated from 0 to 1, and that the first global.set has been removed: the wasm is now in a state that, when we run it in a VM, will seamlessly continue to run from the point at which wasm-ctor-eval stopped.

In this tiny example we just saved a small amount of work. How much work can be saved depends on your program. (It can help to do pure computation up front, and leave calls to imports to as late as possible.)

Note that wasm-ctor-eval's name is related to global constructor functions, as mentioned earlier, but there is no limitation on what you can execute here. Any export from the wasm can be executed, if its contents are suitable. For example, in Emscripten wasm-ctor-eval is even run on main() when possible.
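
For example, --ctors accepts a comma-separated list, so several exports can be evaluated in order (a sketch; the export names here are hypothetical):

# Try to eval setup and then main, in that order, and write the snapshotted
# module to output.wasm:
wasm-ctor-eval input.wasm --ctors=setup,main -o output.wasm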

wasm-merge

wasm-merge combines wasm files together. For example, imagine you have a project that uses wasm files from multiple toolchains. Then it can be helpful to merge them all into a single wasm file before shipping, since in a single wasm file the calls between the modules become just normal calls inside a module, which allows them to be inlined, dead code eliminated, and so forth, potentially improving speed and size.

wasm-merge operates on normal wasm files. It differs from wasm-ld in that respect, as wasm-ld operates on wasm object files. wasm-merge can help in multi-toolchain situations where at least one of the toolchains does not use wasm object files.

For example, imagine we have these two wasm files:

;; a.wasm
(module
  (import "second" "bar" (func $second.bar))

  (export "main" (func $func))

  (func $func
    (call $second.bar)
  )
)
;; b.wasm
(module
  (import "outside" "log" (func $log (param i32)))

  (export "bar" (func $func))

  (func $func
    (call $log
      (i32.const 42)
    )
  )
)

The filenames on your local drive are a.wasm and b.wasm, but for merging / bundling purposes let's say that the first is known as "first" and the second as "second". That is, we want the first module's import of "second.bar" to call the function $func in the second module. Here is a wasm-merge command for that:

wasm-merge a.wasm first b.wasm second -o output.wasm

We give it the first wasm file, then its name, and then the second wasm file and then its name. The merged output is this:

(module
  (import "outside" "log" (func $log (param i32)))

  (export "main" (func $func))
  (export "bar" (func $func_2))

  (func $func
    (call $func_2)
  )

  (func $func_2
    (call $log
      (i32.const 42)
    )
  )
)

wasm-merge combined the two files into one, merging their functions, imports, etc., all while fixing up name conflicts and connecting corresponding imports to exports. In particular, note how $func calls $func_2, which is exactly what we wanted: $func_2 is the function from the second module (renamed to avoid a name collision).

Note that the wasm output in this example could benefit from additional optimization. First, the call to $func_2 can now be easily inlined, so we can run wasm-opt -O3 to do that for us. Also, we may not need all the imports and exports, for which we can run wasm-metadce. A good workflow could be to run wasm-merge, then wasm-metadce, then finish with wasm-opt.
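
A sketch of that workflow, omitting the wasm-metadce step (which needs a description of which imports and exports are actually used):

# Merge, then optimize so the now-internal calls can be inlined and
# dead code removed:
wasm-merge a.wasm first b.wasm second -o merged.wasm
wasm-opt -O3 merged.wasm -o merged.opt.wasm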

wasm-merge is kind of like a bundler for wasm files, in the sense of a "JS bundler" but for wasm. That is, with the wasm files above, imagine that we had this JS code to instantiate and connect them at runtime:

// Compile the first module.
var first = await fetch("a.wasm");
first = new WebAssembly.Module(first);

// Compile the second module.
var second = await fetch("b.wasm");
second = new WebAssembly.Module(second);

// Instantiate the second, with a JS import.
second = new WebAssembly.Instance(second, {
  outside: {
    log: (value) => {
      console.log('value:', value);
    }
  }
});

// Instantiate the first, importing from the second.
first = new WebAssembly.Instance(first, {
  second: second.exports
});

// Call the main function.
first.exports.main();

What wasm-merge does is basically what that JS does: it hooks up imports to exports, resolving names using the module names you provided. That is, by running wasm-merge we are moving the work of connecting the modules from runtime to compile time. As a result, after running wasm-merge we need a lot less JS to get the same result:

// Compile the single module.
var merged = await fetch("merged.wasm");
merged = new WebAssembly.Module(merged);

// Instantiate it with a JS import.
merged = new WebAssembly.Instance(merged, {
  outside: {
    log: (value) => {
      console.log('value:', value);
    }
  }
});

// Call the main function.
merged.exports.main();

We still need to fetch and compile the merged wasm, and to provide it the JS import, but the work to connect two wasm modules is not needed any more.

Handling exports

By default wasm-merge errors if there are overlapping export names. That is, wasm-merge will automatically handle overlapping function names and so forth, because those are not externally visible (the code still behaves the same), but if we renamed exports then the outside would need to be modified to expect the new export names, and so we error instead on such name conflicts.

If you do want exports to be renamed, run wasm-merge with --rename-export-conflicts. Later exports will have a suffix appended to them to ensure they do not overlap with previous exports. The suffixes are deterministic, so once you see what they are you can call them from the outside.

Another option is to use --skip-export-conflicts which will simply skip later exports that have conflicting names. For example, this can be useful in the case where the first module is the only one that interacts with the outside and the later modules just interact with the first module.
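
For example, reusing the earlier command (a sketch):

# Rename later conflicting exports deterministically:
wasm-merge a.wasm first b.wasm second -o output.wasm --rename-export-conflicts

# Or drop later conflicting exports entirely:
wasm-merge a.wasm first b.wasm second -o output.wasm --skip-export-conflicts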

Features

wasm-merge uses the multi-memory and multi-table features. That is, if multiple input modules each have a memory then the output wasm will have several memories, and will depend on the multi-memory feature, which means that older wasm VMs might not be able to run the wasm. (As a workaround for such older VMs you can run wasm-opt --multi-memory-lowering to lower multiple memories into a single one.)
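
For example (a sketch; -all enables the features the merged module may use):

# Lower multiple memories into a single memory for older VMs:
wasm-opt -all --multi-memory-lowering merged.wasm -o merged.lowered.wasm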

Testing

./check.py

(or python check.py) will run wasm-shell, wasm-opt, etc. on the testcases in test/, and verify their outputs.

The check.py script supports some options:

./check.py [--interpreter=/path/to/interpreter] [TEST1] [TEST2]..

Note that we are trying to gradually port the legacy wasm-opt tests to use lit and filecheck as we modify them. For passes tests that output wast, this can be done automatically with scripts/port_passes_tests_to_lit.py and for non-passes tests that output wast, see https://github.com/WebAssembly/binaryen/pull/4779 for an example of how to do a simple manual port.

For lit tests the test expectations (the CHECK lines) can often be automatically updated as changes are made to binaryen. See scripts/update_lit_checks.py.

Non-lit tests can also be automatically updated in most cases. See scripts/auto_update_tests.py.

Setting up dependencies

./third_party/setup.py [mozjs|v8|wabt|all]

(or python third_party/setup.py) installs required dependencies like the SpiderMonkey JS shell, the V8 JS shell and WABT in third_party/. Other scripts automatically pick these up when installed.

Run pip3 install -r requirements-dev.txt to get the requirements for the lit tests. Note that the location pip installs to must be in your $PATH (on Linux, ~/.local/bin).

Fuzzing

./scripts/fuzz_opt.py [--binaryen-bin=build/bin]

(or python scripts/fuzz_opt.py) will run various fuzzing modes on random inputs with random passes until it finds a possible bug. See the wiki page for all the details.

Design Principles

Debug Info Support

Source Maps

Binaryen can read and write source maps (see the -ism and -osm flags to wasm-opt). It can also read and write source map annotations in the text format, that is,

;;@ src.cpp:100:33
(i32.const 42)

That 42 constant is annotated as appearing in a file called src.cpp at line 100 and column 33. Source maps and text format annotations are interchangeable, that is, they both lead to the same IR representation, so you can start with an annotated wat and have Binaryen write that to a binary + a source map file, or read a binary + source map file and print text which will contain those annotations.
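
For example, a sketch of optimizing while carrying a source map through (the -ism and -osm flags are the ones mentioned above; the file names are placeholders):

bin/wasm-opt input.wasm -ism input.wasm.map -O -o output.wasm -osm output.wasm.map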

The IR representation of source map info is simple: in each function we have a map of expressions to their locations. Optimization passes should update the map as relevant. Often this "just works" because the optimizer tries to reuse nodes when possible, so they keep the same debug info.

Shorthand notation

The text format annotations support a shorthand in which repeated annotations are not necessary. For example, children are tagged with the debug info of the parent, if they have no annotation of their own:

;;@ src.cpp:100:33
(i32.add
  (i32.const 41)      ;; This receives an annotation of src.cpp:100:33
  ;;@ src.cpp:111:44
  (i32.const 1)
)

The first const will have debug info identical to the parent, because it has none specified, and generally such nesting indicates a "bundle" of instructions that all implement the same source code.

Note that text printing will not emit such repeated annotations, which can be confusing. To print out all the annotations, set BINARYEN_PRINT_FULL=1 in the environment. That will print this for the above add:

[i32] ;;@ src.cpp:100:33
(i32.add
 [i32] ;;@ src.cpp:100:33
 (i32.const 41)
 [i32] ;;@ src.cpp:111:44
 (i32.const 1)
)

(full print mode also adds a [type] for each expression, right before the debug location).
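
For example, a sketch of printing a module with its full annotations (the input file is a placeholder):

BINARYEN_PRINT_FULL=1 bin/wasm-opt input.wat -S -o -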

The debug information is also propagated from an expression to its next sibling:

;;@ src.cpp:100:33
(local.set $x
 (i32.const 0)
)
(local.set $y ;; This receives an annotation of src.cpp:100:33
 (i32.const 0)
)

You can prevent the propagation of debug info by explicitly marking an expression as having no debug info, using the annotation ;;@ with nothing else:

;;@ src.cpp:100:33
(local.set $x
 ;;@
 (i32.const 0) ;; This does not receive any annotation
)
;;@
(local.set $y ;; This does not receive any annotation
 (i32.const 7)
)

This stops the propagation to children and siblings as well. So the expression (i32.const 7) does not have any debug info either.

There is no shorthand in the binary format. That is, roundtripping (writing and reading) through a binary + source map should not change which expressions have debug info on them or the contents of that info.

Implementation Details

The source maps format defines a mapping using segments, that is, if a segment starts at binary offset 10 then it applies to all instructions at that offset and until another segment begins (or the end of the input is reached). Binaryen's IR represents a mapping from expressions to locations, as mentioned, so we need to map to and from the segment-based format when writing and reading source maps.

That is mostly straightforward, but one thing we need to do is to handle the lack of debug info in between things that have it. If we have A B C where B lacks debug info, then just emitting segments for A and C would lead A's segment to also cover B, since in source maps segments do not have a size; rather, they end when a new segment begins. To avoid B getting smeared in this manner, we emit a source map entry for B that marks only the binary offset it has, without the later 3 fields of the source file, line number, and column. (This appears to be the intent of the source maps spec, and works in browsers and tools.)

DWARF

Binaryen also has optional support for DWARF. This primarily just tracks the locations of expressions and rewrites the DWARF's locations accordingly; it does not handle things like re-indexing of locals, and so passes that might break DWARF are disabled by default. As a result, this mode is not suitable for a fully optimized release build, but it can be useful for local debugging.

FAQ

Binaryen's name was inspired by Emscripten's: Emscripten's name suggests it converts something into a script - specifically JavaScript - and Binaryen's suggests it converts something into a binary - specifically WebAssembly. Binaryen began as Emscripten's WebAssembly generation and optimization tool, so the name fit as it moved Emscripten from something that emitted the text-based format JavaScript (as it did from its early days) to the binary format WebAssembly (which it has done since WebAssembly launched).

"Binaryen" is pronounced in the same manner as "Targaryen".

Binaryen does compile under Windows. Here's a step-by-step tutorial on how to compile it under Windows 10 x64 with CMake and Visual Studio 2015. However, Visual Studio 2017 may now be required. Help would be appreciated on Windows and OS X as most of the core devs are on Linux.