microsoft / onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
https://onnxruntime.ai

onnxruntime-web is 11-17x slower than native inference #11181

Open CanyonWind opened 2 years ago

CanyonWind commented 2 years ago

Describe the bug Hi, the onnxruntime-web blog claims near-native speed on the web. I tested mobilenetv2 as a benchmark, as well as our own panoptic segmentation model. They run 11x and 17x slower than native inference, respectively. I wonder whether this is expected or whether some inference configs are misconfigured on our side?

Just for reference, tensorflow-js with SIMD and multi-threading enabled runs at 12ms for mobilenetv2, while onnxruntime-web takes about 45ms. Native inference with onnxruntime takes 4ms on my 2019 MacBook Pro.

We would like to use onnxruntime-web as the inference engine because of the easy portability of our existing onnx models. But the speed difference compared to tf-js is quite significant. Help would be appreciated.
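
For reference, this is roughly how the wasm backend is configured on our side; a minimal sketch, where the model path, thread count, and input shape are placeholders (multi-threading only takes effect on a cross-origin-isolated page):

```ts
// Minimal benchmark sketch for onnxruntime-web's wasm backend.
// 'mobilenetv2.onnx', the thread count, and the input shape are placeholders.
import * as ort from 'onnxruntime-web';

async function benchmarkMobilenet() {
  // Multi-threading needs SharedArrayBuffer (COOP/COEP headers) to take effect.
  ort.env.wasm.numThreads = 4;
  ort.env.wasm.simd = true;

  const session = await ort.InferenceSession.create('mobilenetv2.onnx', {
    executionProviders: ['wasm'],
    graphOptimizationLevel: 'all',
  });

  // Dummy NCHW input; real preprocessing omitted for brevity.
  const input = new ort.Tensor(
    'float32',
    new Float32Array(1 * 3 * 224 * 224),
    [1, 3, 224, 224]
  );
  const feeds = { [session.inputNames[0]]: input };

  await session.run(feeds); // warm-up
  const runs = 50;
  const t0 = performance.now();
  for (let i = 0; i < runs; i++) {
    await session.run(feeds);
  }
  console.log(`average latency: ${((performance.now() - t0) / runs).toFixed(2)} ms`);
}

benchmarkMobilenet();
```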

Urgency High

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS 12.0.1
  • ONNX Runtime web installed from (source or binary): binary from npm
  • ONNX Runtime web version: 1.11 (latest from https://www.npmjs.com/package/onnxruntime-web)

To Reproduce For mobilenetv2:

  • onnxruntime-web: used this repo, with the latest onnxruntime-web version
  • tf-js: used this demo

For our model:

  • cannot share because of confidentiality.

CanyonWind commented 2 years ago

Hello community, any help on this? Thanks a lot.

ncianeo commented 2 years ago

For a single function, it is true that WebAssembly can give you near-native speed on the web. However, the inference process of a DNN model calls a sequence of functions corresponding to the layers that compose the network. Since JavaScript function call overheads are not negligible, this can become a performance issue for the network's overall inference time. You may profile your JS inference using devtools (Ctrl + Shift + I), Performance tab.
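
Beyond devtools, the runtime's own profiler can also be turned on through the session options; a minimal sketch, assuming the JS API's enableProfiling option (the model URL and feeds are placeholders):

```ts
// Sketch: enable ONNX Runtime's built-in per-node profiler for a session.
import * as ort from 'onnxruntime-web';

async function profiledRun(modelUrl: string, feeds: Record<string, ort.Tensor>) {
  const session = await ort.InferenceSession.create(modelUrl, {
    executionProviders: ['wasm'],
    enableProfiling: true, // collect per-node timings
    logSeverityLevel: 0,   // verbose runtime logging
  });

  await session.run(feeds);

  // Flush the collected profiling data through the runtime's logger.
  session.endProfiling();
}
```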

ncianeo commented 2 years ago

(Addendum) The current ort-web implementation consists of:

  1. onnx model parsing & loading weights: JavaScript (TypeScript)
  2. inferring each layer (a wasm function call): JavaScript (TypeScript)
  3. actual computation of each layer: wasm (WebAssembly)

I think steps 2-3 should be merged into the wasm part to get more performance, but there are possible issues, such as:

  1. the model weights would have to be sent into the wasm buffer up front (memory usage issue)
  2. it would not be compatible with the webgl backend (the whole webgl backend is written in JavaScript (TypeScript) + GLSL). At this stage of development that would reduce the productivity of the library, because the webgl backend itself is not even complete yet; see the sketch below for how the two backends can be compared today.
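
Both backends can already be selected through the same API, so it is easy to check whether the gap is specific to the wasm path; a minimal sketch, with the model path and feeds as placeholders:

```ts
// Sketch: time the same model on the wasm and webgl execution providers.
import * as ort from 'onnxruntime-web';

async function timeBackend(ep: 'wasm' | 'webgl', feeds: Record<string, ort.Tensor>) {
  const session = await ort.InferenceSession.create('model.onnx', {
    executionProviders: [ep],
  });

  await session.run(feeds); // warm-up
  const runs = 20;
  const t0 = performance.now();
  for (let i = 0; i < runs; i++) {
    await session.run(feeds);
  }
  console.log(`${ep}: ${((performance.now() - t0) / runs).toFixed(2)} ms/run`);
}
```
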
vacing commented 1 year ago

Same problem here. I would like to know your final decision or solution, please.

vacing commented 1 year ago

The tfjs demo may have a bug: if you select multi-thread, the speed becomes very slow and doesn't recover even after you deselect it.

I suspect your wasm test result may not be correct; tfjs used webgl because of that bug.
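
To keep the comparison apples-to-apples, the tf.js backend can be pinned explicitly instead of relying on the demo's toggle; a minimal sketch, assuming the @tensorflow/tfjs-backend-wasm package is installed:

```ts
// Sketch: force tf.js onto its wasm backend before benchmarking,
// so the result is not silently coming from webgl.
import * as tf from '@tensorflow/tfjs';
import '@tensorflow/tfjs-backend-wasm';

async function pinWasmBackend() {
  await tf.setBackend('wasm');
  await tf.ready();
  console.log('active tf.js backend:', tf.getBackend()); // should print 'wasm'
}
```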

francis2tm commented 1 year ago

Hey @CanyonWind, did you manage to find any solution to make onnxruntime-web at least on par with tensorflow.js?

Thanks

sebastian-east commented 11 months ago

I am also running into this issue: I find that onnxruntime-web is ~10x slower for inference than onnxruntime-node and onnxruntime in Python (which both have comparable performance) when using the same model and input data. The web profiler indicates that all of the time consumed during onnxruntime-web inference is spent in wasm functions. This issue appears to have been around for a while; is it simply an accepted performance limitation when using onnxruntime-web? It's not obvious why the onnxruntime-web implementation should be so slow, particularly compared to the node implementation.
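
For what it's worth, onnxruntime-node and onnxruntime-web expose the same InferenceSession API, so one timing helper can be pointed at either package; a minimal sketch, with the model path and input shape as placeholders:

```ts
// Sketch: the same timing code works for onnxruntime-node and onnxruntime-web,
// since both implement the common InferenceSession API. Swap the import to
// compare the two runtimes on identical inputs.
import * as ort from 'onnxruntime-node'; // or 'onnxruntime-web'

async function averageLatencyMs(modelPath: string, runs = 100): Promise<number> {
  const session = await ort.InferenceSession.create(modelPath);
  const input = new ort.Tensor(
    'float32',
    new Float32Array(1 * 3 * 224 * 224), // placeholder shape
    [1, 3, 224, 224]
  );
  const feeds = { [session.inputNames[0]]: input };

  await session.run(feeds); // warm-up
  const start = Date.now();
  for (let i = 0; i < runs; i++) {
    await session.run(feeds);
  }
  return (Date.now() - start) / runs;
}
```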

chinmayakcv commented 7 months ago

Any progress on this?

kabyanil commented 1 month ago

I am trying to run web inference on a transformer model, with its modules exported separately to onnx. I can confirm that this issue still exists for ONNX Runtime Web: web inference is much slower than Python inference using the same onnx modules.
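
A quick way to narrow this down is to time each exported module separately and see which one dominates the web latency; a minimal sketch, where the module file names and feeds are placeholders:

```ts
// Sketch: time each exported transformer module on its own.
// 'encoder.onnx' / 'decoder.onnx' and the feeds are placeholders.
import * as ort from 'onnxruntime-web';

async function timedRun(
  label: string,
  session: ort.InferenceSession,
  feeds: Record<string, ort.Tensor>
) {
  const t0 = performance.now();
  const outputs = await session.run(feeds);
  console.log(`${label}: ${(performance.now() - t0).toFixed(2)} ms`);
  return outputs;
}

async function runTransformer(
  encoderFeeds: Record<string, ort.Tensor>,
  decoderFeeds: Record<string, ort.Tensor>
) {
  const encoder = await ort.InferenceSession.create('encoder.onnx', { executionProviders: ['wasm'] });
  const decoder = await ort.InferenceSession.create('decoder.onnx', { executionProviders: ['wasm'] });

  await timedRun('encoder', encoder, encoderFeeds);
  await timedRun('decoder', decoder, decoderFeeds);
}
```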

gyagp commented 1 month ago

@kabyanil There are multiple reasons that may cause a perf difference from native, and it's case by case. Could you please share your model and how you run it?
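
In the meantime, a few settings commonly account for web-vs-native gaps and are worth double-checking; a minimal sketch, with illustrative values only:

```ts
// Sketch: settings that often explain wasm-vs-native performance gaps.
import * as ort from 'onnxruntime-web';

// Multi-threading requires SharedArrayBuffer, i.e. a cross-origin-isolated page.
ort.env.wasm.numThreads = navigator.hardwareConcurrency || 1;
// SIMD is enabled by default in recent builds, but can be set explicitly.
ort.env.wasm.simd = true;
// Optionally run the wasm backend inside a worker to keep the main thread responsive.
ort.env.wasm.proxy = true;

const session = await ort.InferenceSession.create('model.onnx', {
  executionProviders: ['wasm'],
  graphOptimizationLevel: 'all', // confirm graph optimizations are not disabled
});
```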

kabyanil commented 1 month ago

@gyagp Please refer to the issue I opened yesterday for more info: #21535