Closed: radu-matei closed this 3 years ago
Following up with a larger module: a 10M module built with SwiftWasm.
Without the cache:

```
$ cargo run --release -- --config examples/modules.toml
    Finished release [optimized] target(s) in 0.67s
     Running `target/release/wagi --config examples/modules.toml`
=> Starting server
instantiation time for module examples/swift-wagi.wasm: 996.081268ms
instantiation time for module examples/swift-wagi.wasm: 968.473457ms
instantiation time for module examples/swift-wagi.wasm: 970.542515ms
```
With the cache (the first run populates the cache, which is why it is slower; subsequent runs hit it):

```
$ cargo run --release -- --config examples/modules.toml --cache examples/cache.toml
    Finished release [optimized] target(s) in 0.13s
     Running `target/release/wagi --config examples/modules.toml --cache examples/cache.toml`
=> Starting server
instantiation time for module examples/swift-wagi.wasm: 1.268687194s
instantiation time for module examples/swift-wagi.wasm: 279.28753ms
instantiation time for module examples/swift-wagi.wasm: 299.680852ms
instantiation time for module examples/swift-wagi.wasm: 286.663018ms
```
As a side note, CPU performance and file access speed will affect the gains (or losses) seen from caching at this level, so testing is recommended before making decisions here.
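For reference, a minimal `cache.toml` could look like the following. This is a sketch based on the format described in the Wasmtime cache documentation (https://docs.wasmtime.dev/cli-cache.html); the directory path and size limit here are illustrative, not the values used in these benchmarks:

```toml
[cache]
enabled = true
# Hypothetical cache location; Wasmtime falls back to a
# platform-specific default directory if this is omitted.
directory = "/tmp/wasmtime-cache"
cleanup-interval = "1d"
files-total-size-soft-limit = "512Mi"
```

Programmatically, such a file is loaded into Wasmtime's `Config` (via `cache_config_load`, or `cache_config_load_default` for the default location) before the `Engine` is created.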
ref #3
This commit updates WAGI to allow using the Wasmtime cache to speed up the execution of modules (more info on Wasmtime caching: https://docs.wasmtime.dev/cli-cache.html).
Specifically, for large modules this has a noticeable impact on instantiation time. I don't have extensive benchmarks yet, but for a 2.1M module (built from https://github.com/deislabs/env_wagi):
I suspect the performance improvements are greater for larger modules, but this is something we should test further.
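For further testing, the per-module timings logged above can be reproduced with a small helper around `std::time::Instant`. This is a self-contained sketch: `timed` is a hypothetical helper (not part of WAGI), and the sleep stands in for the actual wasmtime module instantiation call:

```rust
use std::time::{Duration, Instant};

// Measure how long `f` takes to run. In WAGI, the closure would wrap
// the wasmtime module instantiation (module load + instance creation).
fn timed<F: FnOnce()>(f: F) -> Duration {
    let start = Instant::now();
    f();
    start.elapsed()
}

fn main() {
    // Hypothetical stand-in for instantiating a Wasm module.
    let elapsed = timed(|| std::thread::sleep(Duration::from_millis(10)));
    println!("instantiation time for module: {:?}", elapsed);
}
```

Running the same measurement with and without `--cache` across several requests is enough to see whether the cache helps on a given machine.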