[Open] CanyonWind opened this issue 2 years ago
Hello community, any help on this? Thanks a lot
For a single function, it is true that WebAssembly can give you near-native speed on the web. However, the inference process of some DNN models calls a long sequence of functions corresponding to the layers that compose the network. Since JavaScript function call overhead is not negligible, there can be a noticeable hit to the whole inference time of the network. You may profile your JS inference using devtools (Ctrl + Shift + I), Performance tab.
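Building on that suggestion, here is a minimal timing sketch (not from the thread) for measuring average per-run latency of an onnxruntime-web session before reaching for the devtools Performance tab. The model path `model.onnx`, the input name `input`, and the 1x3x224x224 shape are placeholders you would replace with your own model's values.

```ts
// Minimal latency measurement for an onnxruntime-web session.
// "model.onnx", the input name "input", and the shape are placeholders.
import * as ort from 'onnxruntime-web';

async function benchmark(runs = 50): Promise<void> {
  const session = await ort.InferenceSession.create('model.onnx');

  // Dummy NCHW float32 input; adjust name/shape to match your model.
  const data = new Float32Array(1 * 3 * 224 * 224);
  const feeds = { input: new ort.Tensor('float32', data, [1, 3, 224, 224]) };

  // Warm-up run so wasm compilation / first-run setup is not counted.
  await session.run(feeds);

  const start = performance.now();
  for (let i = 0; i < runs; i++) {
    await session.run(feeds);
  }
  const avgMs = (performance.now() - start) / runs;
  console.log(`average latency over ${runs} runs: ${avgMs.toFixed(2)} ms`);
}

benchmark().catch(console.error);
```

Averaging over many runs after a warm-up pass avoids counting wasm compilation and session initialization against steady-state inference time.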
(Added) The current ort-web implementation consists of:
I think items 2-3 should be merged into the wasm part to get more performance, but there are possible issues like:
Describe the bug
Hi, the onnxruntime-web blog claims near-native speed on the web. I tested mobilenetv2 as a benchmark, as well as our own panoptic segmentation model. They run 11x and 17x slower than native inference for mobilenetv2 and our model, respectively. I wonder whether this is expected or whether some inference configs are messed up on our side.
Just for reference, tensorflow-js with SIMD and multi-thread enabled runs mobilenetv2 in 12 ms, while onnxruntime-web takes about 45 ms. Native inference with onnxruntime takes 4 ms on my 2019 MacBook Pro. We would like to use onnxruntime-web as the inference engine because of the easy portability of our existing onnx models, but the speed difference compared to tf-js is quite significant. Help would be appreciated. (See the wasm settings sketch after the repro section below.)
Urgency: High
System information
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): MacOS 12.0.1
- ONNX Runtime web installed from (source or binary): binary from npm
- ONNX Runtime web version: 1.11 (latest from https://www.npmjs.com/package/onnxruntime-web)
To Reproduce
For mobilenetv2:
For our model:
- cannot share because of confidentiality.
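Since the numbers above hinge on whether SIMD and multithreading are actually active in the wasm backend, here is a hedged configuration sketch using the `ort.env.wasm` flags. The thread count and model filename are example values, and a thread count above 1 only takes effect on a cross-origin-isolated page (COOP/COEP headers).

```ts
// Wasm backend knobs this benchmark depends on (values are examples).
import * as ort from 'onnxruntime-web';

ort.env.wasm.simd = true;     // load the SIMD-enabled wasm binary
ort.env.wasm.numThreads = 4;  // >1 requires a cross-origin-isolated page (COOP/COEP)

const session = await ort.InferenceSession.create('mobilenetv2.onnx', {
  executionProviders: ['wasm'],
  graphOptimizationLevel: 'all',
});
```

If threading silently falls back to a single thread (for example because the page is not cross-origin isolated), the comparison against a multithreaded tf-js run is not apples to apples.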
Same problem here. I'd like to know your final decision or solution, please.
- tf-js: used this demo
The tfjs demo may have a bug: if you select multi-thread, the speed becomes very slow and doesn't recover even after you unselect it.
I suspect your wasm test result may not be correct; tfjs was actually using webgl because of that bug.
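To rule that out, a quick check of which tf.js backend is actually active keeps the comparison wasm-vs-wasm. This sketch assumes the standard `@tensorflow/tfjs` and `@tensorflow/tfjs-backend-wasm` packages.

```ts
// Verify which tf.js backend is actually running before comparing numbers.
import * as tf from '@tensorflow/tfjs';
import '@tensorflow/tfjs-backend-wasm'; // registers the wasm backend

await tf.setBackend('wasm'); // resolves to false if the backend failed to initialize
await tf.ready();
console.log('tfjs backend in use:', tf.getBackend()); // e.g. 'wasm' or 'webgl'
```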
Hey @CanyonWind, did you manage to find any solution to make onnxruntime-web at least on par with tensorflow.js?
Thanks
I am also running into this issue: I find that onnxruntime-web is ~10x slower for inference than onnxruntime-node and onnxruntime in Python (which both have comparable performance) when using the same model and input data. The web profiler indicates that all of the time consumed during onnxruntime-web inference is in wasm functions. This issue appears to have been around for a while; is it simply an accepted performance limitation when using onnxruntime-web? It's not obvious why the performance of the onnxruntime-web implementation should be so slow, particularly compared to the node implementation.
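One way to see where that wasm time goes, beyond the browser profiler, is ONNX Runtime's own operator-level profiler exposed through the common `SessionOptions` API. This is only a sketch: the model path and input name are placeholders, and it assumes the web backend emits the collected profiling events to the console/log the way the native builds write a profile file.

```ts
// Sketch: enable ORT's built-in per-node profiler for one run.
// "model.onnx" and the input name "input" are placeholders.
import * as ort from 'onnxruntime-web';

const session = await ort.InferenceSession.create('model.onnx', {
  executionProviders: ['wasm'],
  enableProfiling: true, // collect per-node timing events
});

const data = new Float32Array(1 * 3 * 224 * 224);
await session.run({ input: new ort.Tensor('float32', data, [1, 3, 224, 224]) });

session.endProfiling(); // flush whatever profile data the backend collected
```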
Any progress on this?
I am trying to run web inference on a transformer model, with its modules separately exported to onnx. I can confirm that this issue still exists for onnxruntime-web. Web inference is much slower than Python inference using the same onnx modules.
@kabyanil There are multiple reasons that may cause a perf difference from native, and it's case by case. Could you please share your model and how you run it?
@gyagp Please refer to this issue I opened yesterday for more info #21535