kabyanil opened 1 month ago
Weight quantization may save I/O, but it may not noticeably reduce inference time, since the underlying compute is still FP32. If you need more performance, can you try the WebGPU EP? If it doesn't work as well as expected, please share the model and the web app that runs it.
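For reference, switching to the WebGPU EP in ONNX Runtime Web is roughly the sketch below (the `onnxruntime-web/webgpu` bundle is the one shipped for this purpose; the model path is a placeholder, not the reporter's actual file):

```javascript
// Sketch: load the WebGPU-enabled bundle and request the WebGPU execution provider.
import * as ort from 'onnxruntime-web/webgpu';

// 'encoder.onnx' is an illustrative path for the exported module.
const session = await ort.InferenceSession.create('encoder.onnx', {
  executionProviders: ['webgpu'],
});
```

Note that WebGPU availability still depends on the browser and the underlying GPU drivers.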
My target environment may not have a GPU available, so I cannot resort to WebGPU. What is your opinion on ONNX Runtime Web vs. TFJS in terms of CPU performance?
I think you mean WASM (TFJS has a CPU backend written in TypeScript in addition to its WASM backend written in C++), but I don't have a concrete idea of how their perf compares. BTW, SIMD and multi-threading usually bring a lot of perf gain for WASM.
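The SIMD and multi-threading gains mentioned above are controlled through flags on `ort.env.wasm`; a sketch of enabling them (flag names as in onnxruntime-web around v1.18 — they must be set before the first session is created):

```javascript
import * as ort from 'onnxruntime-web';

// Must be set before the first InferenceSession.create() call.
ort.env.wasm.simd = true;      // use the SIMD-enabled WASM binary (default in recent builds)
ort.env.wasm.numThreads = 4;   // values > 1 require cross-origin isolation (COOP/COEP headers)
```

Without cross-origin isolation the runtime silently falls back to a single thread, which is a common reason WASM perf looks flat.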
This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.
Describe the issue
I have a transformer model from which I'm exporting all the modules (i.e. source embedding, positional encoding, encoder, decoder, projection layer, etc.) separately to ONNX. For simplicity, I am going to focus on just one module, the encoder. The non-quantized encoder module was 75.7 MB and took around 110 milliseconds per inference in ONNX Runtime Web. I used the following code to quantize the module -
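(The quantization code itself did not come through in the report. A typical dynamic-quantization call with `onnxruntime.quantization` looks like the sketch below; the file names are placeholders, not the reporter's actual paths.)

```python
from onnxruntime.quantization import quantize_dynamic, QuantType

# Illustrative paths for the exported encoder module.
quantize_dynamic(
    model_input="encoder.onnx",
    model_output="encoder_quantized.onnx",
    weight_type=QuantType.QInt8,  # weights stored as int8; activations remain FP32
)
```

With dynamic quantization, only the weights are stored in int8; on EPs that lack int8 kernels for the relevant ops, they are dequantized back to FP32 before the matmuls run, which is consistent with the smaller file but unchanged latency described below.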
The generated quantized model is 19.2 MB. However, web inference still takes roughly the same time, meaning the quantization has had no impact on inference time.
This is the inference code -
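(The inference code was also omitted. A minimal ONNX Runtime Web run looks roughly like this; the input name `src` and the shape `[1, seqLen, dModel]` are assumptions, not taken from the reporter's model.)

```javascript
// Illustrative feed: 'src', seqLen, and dModel are assumed names/sizes.
const seqLen = 32, dModel = 512;
const data = new Float32Array(seqLen * dModel);
const feeds = { src: new ort.Tensor('float32', data, [1, seqLen, dModel]) };

const start = performance.now();
const results = await session.run(feeds);
console.log(`inference took ${performance.now() - start} ms`);
```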
This is the session configuration -
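(The session configuration is likewise missing from the report. A typical WASM-EP configuration sketch, with an illustrative model path:)

```javascript
// Session options for the WebAssembly CPU execution provider.
const session = await ort.InferenceSession.create('encoder_quantized.onnx', {
  executionProviders: ['wasm'],
  graphOptimizationLevel: 'all',
});
```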
Why is the quantized model smaller, yet no faster to infer than the non-quantized model?
To reproduce
Unfortunately, the onnx files are too big to upload here.
Urgency
No response
ONNX Runtime Installation
Built from Source
ONNX Runtime Version or Commit ID
ONNX Runtime Web v1.18.0
Execution Provider
'wasm'/'cpu' (WebAssembly CPU)