microsoft / onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
https://onnxruntime.ai

Enable QNN HTP spill fill buffer setting to save RAM usage. #22853

Open HectorSVC opened 6 days ago

HectorSVC commented 6 days ago

Description

Enable the QNN HTP spill fill buffer setting to save RAM usage. This feature is available in QNN 2.28 and later, and requires re-generating the QNN context binary. https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/htp_backend.html#qnn-htp-backend-api
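For context, here is a minimal sketch of how a context created from a binary could be registered into a spill fill group. The struct and field names (`QnnHtpContext_CustomConfig_t`, `groupRegistration.firstGroupHandle`, `groupRegistration.maxSpillFillBuffer`) follow my reading of the linked QNN HTP backend docs and should be treated as assumptions rather than a verified implementation:

```cpp
// Sketch only: assumes the QNN SDK headers and that the pre-generated
// context binary has already been read into memory.
#include <cstdint>
#include "QnnContext.h"
#include "HTP/QnnHtpContext.h"

Qnn_ContextHandle_t CreateContextWithSpillFillGroup(
    const QNN_INTERFACE_VER_TYPE& qnn_interface,
    Qnn_BackendHandle_t backend, Qnn_DeviceHandle_t device,
    const void* binary_buffer, uint64_t binary_size,
    uint64_t max_spill_fill_size,           // max across all contexts
    Qnn_ContextHandle_t first_group_handle  // 0 for the first context loaded
) {
  // HTP custom config that registers this context into a spill fill group.
  QnnHtpContext_CustomConfig_t custom_config = {};
  custom_config.option = QNN_HTP_CONTEXT_CONFIG_OPTION_REGISTER_MULTI_CONTEXTS;
  custom_config.groupRegistration.firstGroupHandle = first_group_handle;
  custom_config.groupRegistration.maxSpillFillBuffer = max_spill_fill_size;

  QnnContext_Config_t context_config = {};
  context_config.option = QNN_CONTEXT_CONFIG_OPTION_CUSTOM;
  context_config.customConfig = &custom_config;
  const QnnContext_Config_t* configs[] = {&context_config, nullptr};

  // Deserialize the pre-generated QNN context binary with the config applied.
  Qnn_ContextHandle_t context = nullptr;
  qnn_interface.contextCreateFromBinary(backend, device, configs,
                                        binary_buffer, binary_size,
                                        &context, /*profile=*/nullptr);
  return context;
}
```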

HectorSVC commented 6 days ago

@chiwwang, could you help to take a look?

chiwwang commented 4 days ago

Hi Hector, this looks good to me, but let me ping others and see if they can also take a look.

HectorSVC commented 5 hours ago

Comments from QC: The approach has the limitation that it always gets the max spill fill buffer size from the 1st QNN context. The max spill fill buffer size should be the maximum across all QNN contexts. To fill the gap, we need to go through all QNN contexts to (see the sketch after this list):

  1. Load the QNN context binary buffer and extract the max spill fill buffer size for each QNN context
  2. Compare the max spill fill buffer sizes across all QNN contexts and track the index of the QNN context with the largest one
  3. Load and deserialize the QNN context which has the max spill fill buffer size first (to get the graph info for future execution), also set the max spill fill buffer size and set the group handle to 0
  4. Load and deserialize the other QNN contexts, set the max spill fill buffer size, and set the group handle to the context from step 3
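
A minimal sketch of that orchestration, under stated assumptions: `ExtractMaxSpillFillSize` and `CreateContextWithSpillFillGroup` are hypothetical placeholders for the real QNN calls (binary metadata parsing, and `contextCreateFromBinary` with the group-registration config sketched earlier), not actual API names:

```cpp
// Sketch of the two-pass load order described above. Both helpers are
// hypothetical stand-ins for real QNN calls, not actual API names.
#include <cstddef>
#include <cstdint>
#include <vector>

using Qnn_ContextHandle_t = void*;  // QNN handles are opaque pointers

struct ContextBinary {
  const void* data;
  uint64_t size;
};

// Placeholder: would parse the context binary metadata to recover that
// context's max spill fill buffer size.
uint64_t ExtractMaxSpillFillSize(const ContextBinary& binary);

// Placeholder: would call contextCreateFromBinary with the HTP
// group-registration custom config (see the earlier sketch).
Qnn_ContextHandle_t CreateContextWithSpillFillGroup(
    const ContextBinary& binary, uint64_t max_spill_fill_size,
    Qnn_ContextHandle_t first_group_handle);

std::vector<Qnn_ContextHandle_t> LoadAllContexts(
    const std::vector<ContextBinary>& binaries) {
  // Steps 1 & 2: scan every binary and remember which context owns the
  // overall max spill fill buffer size.
  uint64_t max_size = 0;
  size_t max_index = 0;
  for (size_t i = 0; i < binaries.size(); ++i) {
    const uint64_t size = ExtractMaxSpillFillSize(binaries[i]);
    if (size > max_size) {
      max_size = size;
      max_index = i;
    }
  }

  std::vector<Qnn_ContextHandle_t> contexts(binaries.size(), nullptr);

  // Step 3: load the largest context first with group handle 0 (nullptr),
  // so it establishes the shared spill fill buffer for the group.
  contexts[max_index] = CreateContextWithSpillFillGroup(
      binaries[max_index], max_size, /*first_group_handle=*/nullptr);

  // Step 4: load the remaining contexts into the same group by passing the
  // first context's handle.
  for (size_t i = 0; i < binaries.size(); ++i) {
    if (i == max_index) continue;
    contexts[i] = CreateContextWithSpillFillGroup(binaries[i], max_size,
                                                  contexts[max_index]);
  }
  return contexts;
}
```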

Considering this feature mostly targets large models with large context binary sizes, steps 1 & 2 would add significant overhead. Another approach is to dump the max spill fill buffer size for each QNN context into the EPContext node when we generate the model, so this information is ready ahead of time instead of being computed during normal session creation. We can then read the sizes from all EPContext nodes, find the max, and load that context first.
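
A sketch of the session-creation side of that alternative, assuming each EPContext node carries the size recorded at model-generation time in a hypothetical attribute (represented here by the `max_spill_fill_buffer_size` field; the real attribute name and node accessors would come from the ONNX Runtime graph API):

```cpp
// Sketch: choose the load order from sizes dumped into EPContext nodes at
// model-generation time, skipping the per-binary scan of steps 1 & 2.
// EpContextNode is a stand-in, not an actual ONNX Runtime type.
#include <cstddef>
#include <cstdint>
#include <string>
#include <utility>
#include <vector>

struct EpContextNode {
  std::string name;
  uint64_t max_spill_fill_buffer_size;  // recorded at model-generation time
};

// Returns the overall max spill fill size and the index of the EPContext
// node whose context should be loaded first (with group handle 0).
std::pair<uint64_t, size_t> PickFirstContext(
    const std::vector<EpContextNode>& ep_context_nodes) {
  uint64_t max_size = 0;
  size_t max_index = 0;
  for (size_t i = 0; i < ep_context_nodes.size(); ++i) {
    if (ep_context_nodes[i].max_spill_fill_buffer_size > max_size) {
      max_size = ep_context_nodes[i].max_spill_fill_buffer_size;
      max_index = i;
    }
  }
  return {max_size, max_index};
}
```

The trade-off is a small piece of metadata written once at generation time in exchange for not having to load every large context binary just to read a single size during session creation.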