Open HectorSVC opened 6 days ago
@chiwwang, could you help take a look?
Hi Hector, this looks good to me, but let me ping others to see if they can also take a look.
Comments from QC: This approach has the limitation that it always gets the max spill fill buffer size from the 1st QNN context. The max spill fill buffer size should be taken across all QNN contexts. To close the gap, we need to go through all QNN contexts to:
Since this feature mostly targets large models, which have large context binaries, steps 1 and 2 would add significant overhead. Another approach is to dump the max spill fill buffer size for each QNN context into its EPContext node when we generate the model, so that this information is available ahead of time instead of being computed during normal session creation. We can then read all EPContext nodes, take the max size, and load that context first.
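The "read all EPContext nodes and take the max" step could be sketched as below. This is a minimal illustration, not the actual implementation: the attribute name `max_spill_fill_buffer_size` is an assumption, and graph nodes are modeled as plain dicts rather than real ONNX protobuf nodes.

```python
# Hypothetical sketch: pick the QNN context with the largest recorded
# spill fill buffer size, so that context can be loaded first.
# "EPContext" / "max_spill_fill_buffer_size" are assumed names.

def max_spill_fill_context(nodes):
    """Return (index, size) for the EPContext node with the largest
    spill fill buffer size, or None if no EPContext node is found."""
    best = None
    for i, node in enumerate(nodes):
        if node.get("op_type") != "EPContext":
            continue
        size = node.get("max_spill_fill_buffer_size", 0)
        if best is None or size > best[1]:
            best = (i, size)
    return best

graph_nodes = [
    {"op_type": "EPContext", "max_spill_fill_buffer_size": 1 << 20},
    {"op_type": "EPContext", "max_spill_fill_buffer_size": 4 << 20},
    {"op_type": "Cast"},  # non-EPContext nodes are skipped
]
print(max_spill_fill_context(graph_nodes))  # (1, 4194304)
```

Because the sizes are plain node attributes, this scan avoids deserializing every context binary just to find the maximum.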
Description
Enable the QNN HTP spill fill buffer setting to save RAM usage. This feature requires QNN 2.28 or later, and the QNN context binary needs to be re-generated. https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/htp_backend.html#qnn-htp-backend-api
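Enabling the setting would presumably be done through QNN EP provider options at session creation, along these lines. This is a hedged sketch: the option key `enable_htp_spill_fill_buffer` is an assumption, so check the onnxruntime QNN EP documentation for the exact spelling.

```python
# Hypothetical provider options enabling the spill fill buffer for the
# QNN HTP backend. The option key name is an assumption for illustration.
qnn_provider_options = {
    "backend_path": "QnnHtp.dll",          # HTP backend library
    "enable_htp_spill_fill_buffer": "1",   # assumed option name
}

# Usage (requires onnxruntime built with the QNN EP and a context-binary
# model regenerated with QNN 2.28+):
#   import onnxruntime as ort
#   session = ort.InferenceSession(
#       "model_ctx.onnx",
#       providers=[("QNNExecutionProvider", qnn_provider_options)],
#   )
print(qnn_provider_options["enable_htp_spill_fill_buffer"])  # 1
```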