usefulsensors / qc_npu_benchmark

Code sample showing how to run and benchmark models on Qualcomm's Windows PCs
Apache License 2.0

There's a useless DQ node in matmul_model_quant_io.onnx #1

Open HectorSVC opened 5 days ago

HectorSVC commented 5 days ago

There's a useless DQ node in matmul_model_quant_io.onnx. [screenshot: useless_dq_node]

Also have some questions:

  1. The model has 2 inputs and 1 output with large data sizes, which means a huge IO cost for the NPU. Maybe you can try something different: make the 2nd input an initializer, and change the inputs to [1, 6, 256, 1500] * [1, 6, 1500, 1500], so the output is [1, 6, 256, 256].
  2. In your benchmark script, the measured time includes the 1st inference run. Normally we would skip the 1st inference run as warmup.
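The warmup advice in point 2 is a common benchmarking pattern. A minimal sketch of it, where `run` is a hypothetical stand-in for a single inference call (e.g. an onnxruntime `session.run`), not the repo's actual script:

```python
import time

def benchmark(run, iterations=20, warmup=2):
    """Average latency of `run`, discarding warm-up iterations.

    The first runs of an NPU/accelerator session often pay one-time
    graph compilation and cache-population costs, so they are
    executed but not timed here.
    """
    for _ in range(warmup):
        run()  # warm-up: executed, never timed
    times = []
    for _ in range(iterations):
        start = time.perf_counter()
        run()
        times.append(time.perf_counter() - start)
    return sum(times) / len(times)
```

Whether warmup runs should be discarded at all is exactly what the next comment disputes.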
nonnull-ca commented 5 days ago

Regarding #1, I will note what the readme says:

This benchmark is designed to resemble some real world models we depend on

Regarding #2, Whisper (and most other models) doesn't run the same matrix multiplication over and over again. Instead it runs a bunch of different (large) multiplications in a row. This tends to push weights out of cache, and as such I'd argue that cold-cache performance for a single layer's operations is, if anything, more important than warm-cache performance.
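One way to sidestep this disagreement is to report both numbers instead of discarding either. A sketch (with `run` again a hypothetical single-inference callable, not code from this repo):

```python
import time

def timed_runs(run, n=10):
    """Time every run individually so cold-start (first run) and
    steady-state (remaining runs) latencies can be reported
    side by side, rather than silently dropping the first run."""
    per_run = []
    for _ in range(n):
        start = time.perf_counter()
        run()
        per_run.append(time.perf_counter() - start)
    cold = per_run[0]
    warm = sum(per_run[1:]) / (len(per_run) - 1)
    return cold, warm
```

Reporting both lets readers apply whichever number matches their workload: cold for a pipeline of many distinct large layers, warm for a tight loop over one kernel.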

HectorSVC commented 5 days ago

Do your real-world models have the same IO sizes? It doesn't make sense to just extract part of a model and test it separately. It makes more sense to test a full model instead.

HectorSVC commented 5 days ago

Also, the benchmark script compares a QDQ model on the NPU against an fp32 model on the CPU, which is not an apples-to-apples comparison.