ARM-software / ComputeLibrary

The Compute Library is a set of computer vision and machine learning functions optimised for both Arm CPUs and GPUs using SIMD technologies.
MIT License

ACL operators need to be made stateless to avoid runtime initialization overhead #1085

Open · snadampal opened this issue 6 months ago

snadampal commented 6 months ago

Output of 'strings libarm_compute.so | grep arm_compute_version':
arm_compute_version=v23.11
Build options: {'Werror': '0', 'debug': '0', 'neon': '1', 'opencl': '0', 'embed_kernels': '0', 'os': 'linux', 'arch': 'armv8a', 'build': 'native', 'multi_isa': '1', 'fixed_format_kernels': '1', 'openmp': '1', 'cppthreads': '0'}
Git hash=b'add70ace1e57f65d1ae4d0cedaec6e4578cf87ff'

Platform: AWS c7g.16xl

Operating System: Ubuntu 22.04

Problem description: One of the important optimizations for better inference performance is cutting down kernel initialization overhead. This can be achieved by caching an operator after its first initialization and reusing it across operations with the same tensor shapes. Today it is not possible to cache an ACL operator, because the operator holds workspace state created during initialization, and that workspace is specific to a single GEMM operation. The request is to make the operators stateless, so that they can be initialized once and reused across multiple GEMM operations of the same shapes. More details are in this oneDNN discussion: https://github.com/oneapi-src/oneDNN/pull/1455#discussion_r979043207
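For illustration, a minimal sketch of the difference, assuming the public `NEGEMM` function interface and, in the trailing comments, the non-public `arm_compute::cpu::CpuGemm` operator interface the discussion refers to. The stateless signatures shown are approximate and may differ between releases:

```cpp
// Stateful (public) path: the function object binds to specific tensors and
// owns its workspace, so it cannot be shared across GEMM calls on other tensors.
#include "arm_compute/runtime/NEON/functions/NEGEMM.h"
#include "arm_compute/runtime/Tensor.h"

void stateful_gemm(arm_compute::Tensor &a, arm_compute::Tensor &b, arm_compute::Tensor &d)
{
    arm_compute::NEGEMM gemm;
    gemm.configure(&a, &b, nullptr, &d, 1.0f, 0.0f); // binds to a, b, d
    gemm.run();                                      // uses the internal workspace
}

// Stateless pattern (internal API, illustrative only; not in the public headers):
//
//   arm_compute::cpu::CpuGemm gemm;
//   gemm.configure(a.info(), b.info(), nullptr, d.info(), 1.0f, 0.0f, gemm_info);
//
//   // Per call: pass the actual tensors (plus caller-provided workspace buffers
//   // described by gemm.workspace()), so one configured operator can serve many
//   // executions of the same shape.
//   arm_compute::ITensorPack pack;
//   pack.add_tensor(arm_compute::ACL_SRC_0, &a);
//   pack.add_tensor(arm_compute::ACL_SRC_1, &b);
//   pack.add_tensor(arm_compute::ACL_DST, &d);
//   gemm.run(pack);
```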

morgolock commented 5 months ago

Hi @snadampal

Thanks for raising this. We will discuss the feature request with the team.

milpuz01 commented 4 months ago

We are about to start exploratory work for the 24.05 release to integrate two oneDNN primitives, convolution and matrix multiplication, with the existing non-public ACL API for stateless objects, in order to better understand what requirements (if any) arise on the ACL side. Once that work is done, we plan to address any resulting changes in ACL for the 24.08 release and port the rest of the oneDNN primitives. We expect the work to be complete by 24.11.

As most of the initial work will be oneDNN-specific, we will link PRs from oneDNN to this issue to track progress.
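As a rough, self-contained sketch of the reuse pattern a stateless operator would enable on the integration side: configure once per problem shape, then reuse the cached operator for every execution with matching shapes. `StatelessGemm`, `get_or_create`, and the shape-key string below are placeholders, not real ACL or oneDNN symbols.

```cpp
#include <map>
#include <memory>
#include <string>

struct StatelessGemm
{
    // In a real stateless operator, kernel selection and blocking decisions
    // would happen here, based only on tensor shapes and data types.
    explicit StatelessGemm(const std::string &shape_key) : key(shape_key) {}
    // run() would take the actual tensors (and workspace) on each call.
    void run() {}
    std::string key;
};

std::shared_ptr<StatelessGemm> get_or_create(const std::string &shape_key)
{
    // Process-wide cache keyed by problem shape: initialization cost is paid
    // once per key. Not thread-safe as written; a real integration would
    // guard the cache with a mutex.
    static std::map<std::string, std::shared_ptr<StatelessGemm>> cache;
    auto it = cache.find(shape_key);
    if (it == cache.end())
    {
        it = cache.emplace(shape_key, std::make_shared<StatelessGemm>(shape_key)).first;
    }
    return it->second;
}

int main()
{
    // First call pays the initialization cost; the second reuses the cached operator.
    auto g1 = get_or_create("f32:M=128,N=256,K=64");
    auto g2 = get_or_create("f32:M=128,N=256,K=64");
    g1->run();
    g2->run(); // same object as g1
    return 0;
}
```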