microsoft / onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
https://onnxruntime.ai
MIT License

[Web] Quantized model decreases in size, but takes same amount of inference time as non-quantized model #21535

Open kabyanil opened 1 month ago

kabyanil commented 1 month ago

Describe the issue

I have a transformer model whose modules (i.e. source embedding, positional encoding, encoder, decoder, projection layer, etc.) I export separately to ONNX. For simplicity, I am going to focus on just one module - the encoder. The non-quantized encoder module is 75.7 MB and takes around 110 milliseconds per inference in ONNX Runtime Web (JavaScript). I used the following code to quantize the module -

# encoder
from onnxruntime.quantization import quantize_dynamic, QuantType

# dynamically quantize the encoder's weights to uint8
quantize_dynamic(
    model_input=f'{common_dir}/encoder.onnx',
    model_output=f'{common_dir}/quantized/encoder.onnx',
    weight_type=QuantType.QUInt8,
)

The generated quantized model is 19.2 MB. However, web inference still takes roughly the same time, meaning the quantization has had no impact on inference latency.

This is the inference code -

const res = await session.src_encode.run({
   input_1: src_pos_out,
   input_2: src_mask,
})
const src_encoder_out = res[871]

This is the session configuration -

const sessionOptions = {
   executionProviders: ['wasm'],
   enableCpuMemArena: true,
   // enableGraphCapture: true,
   executionMode: "parallel",
   enableMemPattern: true,
   intraOpNumThreads: 4,
   graphOptimizationLevel: "extended"
}

// create the session variable
const session = {
   ...
   src_encode: await ort.InferenceSession.create("./models/encoder.onnx", sessionOptions),
   ...
}

Why does the quantized model become smaller but take the same time to run inference as the non-quantized model?
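For reference, a minimal sketch of how the two sessions could be timed side by side; the quantized model path and the run count are illustrative:

// Hypothetical benchmark: average over several runs after a warm-up,
// using the same feeds as in the inference snippet above.
const opts = { executionProviders: ['wasm'] };
const fp32Session = await ort.InferenceSession.create("./models/encoder.onnx", opts);
const int8Session = await ort.InferenceSession.create("./models/quantized/encoder.onnx", opts);

async function avgRunMs(sess, feeds, runs = 20) {
   await sess.run(feeds);                       // warm-up run
   const start = performance.now();
   for (let i = 0; i < runs; i++) {
      await sess.run(feeds);
   }
   return (performance.now() - start) / runs;   // average ms per run
}

const feeds = { input_1: src_pos_out, input_2: src_mask };
console.log("fp32 encoder:", await avgRunMs(fp32Session, feeds), "ms");
console.log("uint8 encoder:", await avgRunMs(int8Session, feeds), "ms");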

To reproduce

Unfortunately, the onnx files are too big to upload here.

Urgency

No response

ONNX Runtime Installation

Built from Source

ONNX Runtime Version or Commit ID

ONNX Runtime Web v1.18.0

Execution Provider

'wasm'/'cpu' (WebAssembly CPU)

gyagp commented 1 month ago

Weight quantization may save I/O, but it may not noticeably reduce inference time, since the underlying compute is still FP32. If you need more performance, can you try the WebGPU EP? If it doesn't work as well as expected, please share the model and the web app to run it.
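A minimal sketch of requesting the WebGPU EP with a WASM fallback, assuming the WebGPU-enabled onnxruntime-web bundle (e.g. ort.webgpu.min.js) is loaded as ort:

// Try WebGPU first; fall back to WASM if the browser/device lacks support.
const session = await ort.InferenceSession.create("./models/encoder.onnx", {
   executionProviders: ['webgpu', 'wasm'],
   graphOptimizationLevel: "extended"
});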

kabyanil commented 1 month ago

My target environment may not have a GPU available, so I cannot rely on WebGPU. What is your opinion on ONNX Runtime Web vs. TF.js in terms of CPU performance?

gyagp commented 1 month ago

I think you mean WASM (TFJS also has a CPU backend written in TypeScript, in addition to its WASM backend written in C++), but I don't have concrete data comparing their performance. BTW, SIMD and multi-threading usually bring a large performance gain for WASM.
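A sketch of the relevant onnxruntime-web flags, assuming they are set before the first session is created; multi-threading only takes effect on a cross-origin-isolated page:

// Enable SIMD kernels and a worker-thread pool for the WASM EP.
// numThreads > 1 requires SharedArrayBuffer, i.e. COOP/COEP headers on the page;
// otherwise the runtime falls back to a single thread.
ort.env.wasm.simd = true;                                  // on by default in recent builds
ort.env.wasm.numThreads = navigator.hardwareConcurrency;   // or a fixed number such as 4

const session = await ort.InferenceSession.create("./models/encoder.onnx", {
   executionProviders: ['wasm']
});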

github-actions[bot] commented 6 days ago

This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.