Contains the changes to the build system required to compile and link code referring to MKL functions, along with various scripts etc. to exercise those functions and gather data on the impact they may have.

## Tools and packages needed for testing MKL settings with `pytorch_inference`

- Start with the latest `ml_linux_build` Docker image (30), which provides the compiler toolchain in `/usr/local/gcc103`
- `yum install python3` (for running the scripts for testing inference)
- `intel-oneapi-mkl-devel-2024.0`, installed as per the `linux_image` Dockerfile
- `pprof` (see https://gperftools.github.io/gperftools/heapprofile.html)
- `yum install ghostscript`
- `yum install graphviz`
- `yum install libunwind` (for `heaptrack`)
## Compiling the code

Check out the code in this PR on a Linux x86_64 machine and configure CMake as normal, but ensure that `pytorch_inference` is linked against `libtcmalloc`.
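The PR's exact linker change is not shown in this description. As a purely hypothetical sketch of one generic way to force the link in CMake (the target name and approach here are assumptions, not the PR's actual change):

```cmake
# Hypothetical sketch only - not the PR's actual build change.
# Adding tcmalloc to the executable's link libraries forces the link:
target_link_libraries(pytorch_inference PRIVATE tcmalloc)
```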
## Running `pytorch_inference`

There are several Python scripts in the `bin/pytorch_inference` directory that are capable of running `pytorch_inference` on various models.

These scripts can be tweaked in various ways before running. In the case of `evaluate.py`, edit the script to:

- use either `heapprof` (from gperftools) or `heapcheck`
- alter how many inferences are requested and in how many batches
- choose how frequently to send the `mkl_free_buffers` control request
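The knobs above can be sketched as follows. This is a hypothetical illustration, not `evaluate.py`'s actual code, and the control-message field names are assumptions made for the example:

```python
# Hypothetical sketch of the kind of tweaks evaluate.py allows.
# The JSON field names below are illustrative assumptions, not
# pytorch_inference's actual wire format.
import json

NUM_INFERENCES = 100    # how many inferences to request in total
BATCH_SIZE = 10         # how many inferences per batch
FREE_BUFFERS_EVERY = 5  # send mkl_free_buffers after every N batches

def control_requests():
    """Yield a placeholder mkl_free_buffers control request after every
    FREE_BUFFERS_EVERY batches."""
    num_batches = NUM_INFERENCES // BATCH_SIZE
    for batch in range(1, num_batches + 1):
        if batch % FREE_BUFFERS_EVERY == 0:
            yield json.dumps({"control": "mkl_free_buffers",
                              "after_batch": batch})

for request in control_requests():
    print(request)
```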
## Viewing results

If running `pytorch_inference` under `heapprof`, a reasonably large number of output files will be generated, e.g. `/tmp/heapprof.0040.heap`. These files need to be post-processed by a tool called `pprof` to generate a PDF of the `heapprof` results (other output formats are available). `Heapcheck` has its own GUI for viewing results - https://github.com/KDE/heaptrack?tab=readme-ov-file#heaptrack_gui - but can also display results as plain text.
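As a sketch of typical gperftools `pprof` usage (the binary and profile paths are illustrative examples from the text, and `pprof` is assumed to be on the PATH):

```shell
# Illustrative gperftools pprof usage; guarded so the snippet is a
# no-op on machines where pprof is not installed.
if command -v pprof >/dev/null 2>&1; then
  # render the heap profile as a PDF call graph
  pprof --pdf ./pytorch_inference /tmp/heapprof.0040.heap > heapprof.pdf
else
  echo "pprof not found - install gperftools first"
fi
```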
Checkpointing current status for visibility.