intel/tiny-dpcpp-nn

SYCL implementation of Fused MLPs for Intel GPUs
BSD 3-Clause "New" or "Revised" License


Introduction

This repository implements a GPU-accelerated tiny neural network framework for Intel hardware, based on the original CUDA implementation. It uses the Intel DPC++ compiler and relies on the SYCL language with optional ESIMD acceleration.

The network is optimized to load both the activation matrices and the weight matrices into the GPU's fast L1 memory and registers. Matrix multiplications are computed with Intel's joint_matrix extension, a high-level wrapper for systolic array operations.
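
Conceptually, "fused" means the whole MLP is evaluated inside a single kernel, so intermediate activations never round-trip through global memory between layers. The following NumPy sketch illustrates the per-tile data flow only; it is not the actual device code, and the tile sizes are illustrative:

import numpy as np

def fused_mlp_tile(x_tile, weights):
    # In the real kernel this loop runs inside one SYCL kernel: the
    # activation tile stays in registers/shared local memory and each
    # matmul maps to a joint_matrix (systolic array) operation, so no
    # intermediate activation is written back to global memory.
    act = x_tile
    for W in weights:
        act = np.maximum(act @ W, 0.0)  # matmul + ReLU, kept "on chip"
    return act

rng = np.random.default_rng(0)
# A width-64 MLP with 4 hidden layers, applied to a 64-row input tile.
weights = [0.1 * rng.standard_normal((64, 64)).astype(np.float32) for _ in range(4)]
x_tile = rng.standard_normal((64, 64)).astype(np.float32)
print(fused_mlp_tile(x_tile, weights).shape)  # (64, 64)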

Performance

We benchmarked the throughput of our network for training and inference on both the Intel Data Center GPU Max Series (Ponte Vecchio) and the Intel Arc Series, and compared it with PyTorch.

To replicate the performance of the dpcpp code, please set BUILD_BENCHMARK=ON in tiny-dpcpp-nn/CMakeLists.txt, build benchmark-all, and run the benchmark from the build/ folder using

I_MPI_DEBUG=3 I_MPI_OFFLOAD=1 I_MPI_OFFLOAD_DOMAIN=[1,2] mpirun -n 2 ./benchmarks/benchmark-all

To replicate the performance of the PyTorch code, please run

cd python/ && python benchmark_pytorch.py

Finally, plot the results using

python benchmarks/plot_results.py

Performance on PVC

We reach up to a 60x speed-up compared to PyTorch:

[Figures: Training Throughput Comparison and Inference Throughput Comparison]

Performance on Arc 770

We reach up to a 20x speed-up compared to PyTorch:

[Figures: Training Throughput Comparison and Inference Throughput Comparison]

Features

Documentation

For detailed documentation, please refer to the tiny-dpcpp-nn documentation; for a detailed description of our fully-fused algorithm, please refer to our paper.

Build

To build the tiny-nn library, clone the GitHub repo onto your machine and put your code in the source folder. After cloning, if you choose to use the pybindings, please recursively pull the pybind11 repository via

git submodule update --init -- extern/pybind11

If you also want to pull the reference unit-test data in test/tiny_dpcpp_data, which amounts to ~500 MB of reference inputs, outputs, and weights, you can also run git submodule update --init. Note that if BUILD_REF_TEST=ON in CMakeLists.txt, then test/tiny_dpcpp_data will be cloned as well.

Then you can build the library using:

source /opt/intel/oneapi/setvars.sh
mkdir build && cd build/
cmake -D<options>=<ON/OFF> ..
make

where <options> are the build options that can be toggled ON or OFF; see Build Options.

Note: To use the network on PVC, you have to disable implicit scaling; this can be done by uncommenting the portion of code indicated in the sample when creating the queue.

PyTorch extension

Installation

We provide a pybind wrapper of our tiny-dpcpp-nn implementation for seamless integration into PyTorch. Please refer to tiny-dpcpp-nn pybind documentation for details.

Please recursively pull the pybind11 library:

git submodule update --init -- extern/pybind11

[Optional] - Load the correct drivers, i.e., ensure that the oneAPI and agama versions match the required IPEX version

module load intel-comp-rt/agama-ci-devel/803.29 intel/oneapi/2024.1 cmake/3.26.0

[Optional] - Create a conda environment

conda create -n tiny-dpcpp-nn python=3.10 -y
conda activate tiny-dpcpp-nn

Install the latest IPEX via

python -m pip install torch==2.1.0.post2 torchvision==0.16.0.post2 torchaudio==2.1.0.post2 intel-extension-for-pytorch==2.1.30+xpu oneccl_bind_pt==2.1.300+xpu --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/

Note: please ensure that the IPEX version (2.1.30 in this example) matches IPEX_VERSION in tiny-dpcpp-nn/CMakeLists.txt.
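
Before building the bindings, a quick sanity check of the IPEX install can save debugging time. The snippet below is a suggestion, not part of the repository; it assumes IPEX exposes the XPU device through torch.xpu:

import torch
import intel_extension_for_pytorch as ipex  # registers the 'xpu' device

# Both lines should succeed and report at least one XPU device when the
# driver, oneAPI, and IPEX versions are compatible.
print(torch.__version__, ipex.__version__)
print(torch.xpu.is_available(), torch.xpu.device_count())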

Install the module (if no TARGET_DEVICE is set, target_device in setup.py defaults to ARC; currently, PVC and ARC are supported):

cd dpcpp_bindings
TARGET_DEVICE=ARC pip install -e .

Finally, to test the sample scripts and tests, install the requirements:

cd python && pip install -r requirements.txt

Test the install

To test that the installation was successful, you can run the following tests.

cd test/python/ && pytest

and run the two python sample scripts:

cd samples && python benchmark_pytorch.py
cd samples && python mlp_learning_an_image_pytorch.py
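
Beyond the shipped scripts, a minimal training step with the bindings looks roughly like the sketch below. The module name tiny_dpcpp_nn comes from this repository, but the constructor signature and config keys shown are assumptions modeled on the tiny-cuda-nn style PyTorch API; consult the tiny-dpcpp-nn pybind documentation for the authoritative interface.

import torch
import intel_extension_for_pytorch  # noqa: F401, registers the 'xpu' device
import tiny_dpcpp_nn as tnn

# Hypothetical config in the tiny-cuda-nn style; exact key names may differ.
network_config = {
    "otype": "FullyFusedMLP",
    "activation": "ReLU",
    "output_activation": "None",
    "n_neurons": 64,
    "n_hidden_layers": 4,
}
model = tnn.Network(n_input_dims=64, n_output_dims=64, network_config=network_config)

x = torch.rand(1024, 64, device="xpu")
target = torch.rand(1024, 64, device="xpu")
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One optimization step: forward, MSE loss, backward, update.
loss = torch.nn.functional.mse_loss(model(x), target)
optimizer.zero_grad()
loss.backward()
optimizer.step()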

Tests

When setting the additional flag BUILD_REF_TEST=ON, additional data from tiny-dpcpp-data will be downloaded.

When setting the additional flag BUILD_TORCH_TEST=ON, the libtorch tests (tnn_api.h) will be built.

To enable all tests, run:

cmake -DTARGET_DEVICE="PVC" -DBUILD_REF_TEST="ON" -DBUILD_TORCH_TEST="ON" ..

After all tests are built, you can run bash test/run_tests.sh to verify that the setup is correct. Please note that we provide tests for both the core dpcpp implementation and the libtorch wrapper implementation.

To test whether the pytorch bindings were installed correctly, please run the pytest suite in test/python/ (see Test the install above).

Acknowledgement

Citation

If you found this work useful, please consider citing it as:

@software{tiny-dpcpp-nn,
    author = {Bauinger, Christoph and Yuan, Kai},
    license = {BSD-3-Clause},
    month = {3},
    title = {{tiny-dpcpp-nn}},
    url = {https://github.com/intel/tiny-dpcpp-nn/},
    version = {0.1},
    year = {2024}
}