Open yusuf-ilgun opened 1 year ago
What is WASM (Native) here as compared to Native (no WASM)?
Sorry for not being clearer: with "WASM (Native)" I meant Node.js. Let me rephrase all 3:
Native (C++ on Ubuntu, no WASM): using gcc as the compiler, running directly on Ubuntu.
WASM on Node.js (Local): compiled with -sENVIRONMENT='node', running on Node v20.
WASM (Web): compiled with -sENVIRONMENT='web', running in browsers. I used 3 browsers so far, and the results were all the same (Chrome, Firefox, Safari; the browsers are always kept up to date, so the latest version of each).
Hmm, it is very strange that you are seeing perf differences between Node.js and Chrome, since in both cases we are dealing with the same V8 engine. Perhaps a version difference might account for it.
Can you try building with -sENVIRONMENT=node,web so you can literally run the exact same build/binary in those two environments?
Description:
I have been working on a C++ inference project using onnxruntime. The goal is to compile the C++ code into WebAssembly (WASM) for use in jitsi-meet.
During my testing, I noticed a significant performance degradation in inference time depending on how I compiled with Emscripten:
Native (no WASM): inference runs in an average of 1 ms.
WASM (Native): average inference time increases to about 5 ms.
WASM (Web): average inference time further increases to about 11 ms.
I've tried multiple versions of Emscripten. Version 3.1.41 is the latest that works for me due to an atob issue in subsequent versions. I believe solving the native compilation problem might also address the web issue.
Environment Information:
Command line in full:
Tried with and without optimizations, -msimd128, and -flto.
Code might not be relevant but here you go:
Steps to Reproduce:
1. Compile the C++ inference code with the above options (or any other ONNX inference code).
2. Test the inference time in a native environment.
3. Compile the code for WASM (both native and web).
4. Compare the inference times.
(I can provide the full project files if needed.)
Expected Result: The performance difference between native and WASM-compiled versions should be minimal.
Actual Result: Inference times increased significantly when compiling with Emscripten for WASM.
Additional Notes: This is not rnnoise, as the function names might suggest; I am just implementing my own solution in place of rnnoise and kept the function names for convenience.
Any assistance or insight into resolving this performance discrepancy would be greatly appreciated. Any advice about compilation options, the Emscripten version, or even a better language for the WASM side would also be welcome.