NNPACK is an acceleration package for neural network computations. NNPACK aims to provide high-performance implementations of convnet layers for multi-core CPUs.
NNPACK is not intended to be directly used by machine learning researchers; instead it provides low-level performance primitives leveraged in leading deep learning frameworks, such as PyTorch, Caffe2, MXNet, tiny-dnn, Caffe, Torch, and Darknet.
Environment | Architecture | CPU requirements |
---|---|---|
Linux | x86-64 | AVX2 and 3-level cache hierarchy |
Linux | ARM | NEON |
Linux | ARM64 | |
macOS | x86-64 | AVX2 and 3-level cache hierarchy |
Android | ARM | NEON |
Android | ARM64 | |
Android | x86 | |
Android | x86-64 | |
iOS | ARM | |
iOS | ARM64 | |
Emscripten | Asm.js | |
Emscripten | WebAssembly |
Supported layer types:

- Convolutional layer
  - inference-optimized forward propagation (`nnp_convolution_inference`)
  - training-optimized forward propagation (`nnp_convolution_output`)
  - training-optimized backward input gradient update (`nnp_convolution_input_gradient`)
  - training-optimized backward kernel gradient update (`nnp_convolution_kernel_gradient`)
- Fully-connected layer
  - inference-optimized forward propagation (`nnp_fully_connected_inference` and `nnp_fully_connected_inference_f16f32` version for FP16 weights)
  - training-optimized forward propagation (`nnp_fully_connected_output`)
- Max pooling layer
  - forward propagation, both for training and inference (`nnp_max_pooling_output`)
- ReLU layer
  - forward propagation, both for training and inference (`nnp_relu_output`)
  - backward input gradient update (`nnp_relu_input_gradient`)
- Softmax layer
  - forward propagation, both for training and inference (`nnp_softmax_output`)
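For illustration, here is a minimal sketch of calling one of these primitives from C. The array sizes are made up for the example, and the `nnp_initialize`, `nnp_relu_output`, and `nnp_deinitialize` signatures follow `nnpack.h`; the header shipped with your NNPACK version is the authoritative reference.

```c
#include <stdio.h>
#include <nnpack.h>

int main(void) {
    /* nnp_initialize() must be called before any other NNPACK function. */
    enum nnp_status status = nnp_initialize();
    if (status != nnp_status_success) {
        fprintf(stderr, "NNPACK initialization failed: %d\n", (int) status);
        return 1;
    }

    /* Hypothetical sizes: a batch of 1 with 4096 activations. */
    enum { batch_size = 1, channels = 4096 };
    static float input[batch_size * channels];
    static float output[batch_size * channels];

    /* Forward ReLU with negative slope 0 (plain ReLU). Passing NULL as the
     * pthreadpool runs the computation on the calling thread. */
    status = nnp_relu_output(batch_size, channels, input, output, 0.0f, NULL);
    if (status != nnp_status_success) {
        fprintf(stderr, "nnp_relu_output failed: %d\n", (int) status);
    }

    nnp_deinitialize();
    return 0;
}
```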
For most users, the recommended way to build NNPACK is through CMake:
mkdir build
cd build
cmake -G Ninja ..
ninja
Note: if `ninja` is not available on your system, configure without `-G Ninja`, and use `make` instead of `ninja`.
You can download and install NNPACK using the vcpkg dependency manager:
git clone https://github.com/Microsoft/vcpkg.git
cd vcpkg
./bootstrap-vcpkg.sh
./vcpkg integrate install
./vcpkg install nnpack
The NNPACK port in vcpkg is kept up to date by Microsoft team members and community contributors. If the version is out of date, please create an issue or pull request on the vcpkg repository.
To cross-compile for Android, add extra configuration options for `cmake`: `-DCMAKE_TOOLCHAIN_FILE=$ANDROID_NDK/build/cmake/android.toolchain.cmake` (where `$ANDROID_NDK` is the path to the Android NDK directory, e.g. `/opt/android-ndk-r15c`) and the arguments from the table below.
ABI | Extra cmake args | Restrictions |
---|---|---|
armeabi | `-DANDROID_ABI=armeabi -DANDROID_TOOLCHAIN=gcc` | Requires CPU with ARM NEON
armeabi-v7a | `-DANDROID_ABI=armeabi-v7a -DANDROID_TOOLCHAIN=gcc` | Requires CPU with ARM NEON
arm64-v8a | `-DANDROID_ABI=arm64-v8a -DANDROID_TOOLCHAIN=clang` | Requires clang toolchain
x86 | `-DANDROID_ABI=x86` |
x86_64 | `-DANDROID_ABI=x86_64` |
Notes:

- `nnp_initialize` will fail with `nnp_status_unsupported_hardware` if the mobile CPU does not support ARM NEON. Don't set `-DANDROID_ARM_NEON=1` for NNPACK compilation, as it can make `nnp_initialize` crash on CPUs without ARM NEON; a runtime check is sketched below.
- Caffe with NNPACK integration is available in the `nnpack-pr` branch in ajtulloch/caffe.
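A minimal sketch of such a runtime check, using only `nnp_initialize` and its `nnp_status_unsupported_hardware` return code; the surrounding fallback logic is hypothetical and up to the application:

```c
#include <nnpack.h>

/* Returns 1 if NNPACK can be used on this device, 0 otherwise. */
static int nnpack_available(void) {
    enum nnp_status status = nnp_initialize();
    if (status == nnp_status_unsupported_hardware) {
        /* e.g. an armeabi/armeabi-v7a device without ARM NEON:
         * take a non-NNPACK code path instead of crashing. */
        return 0;
    }
    return status == nnp_status_success;
}
```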
The library is developed by Marat Dukhan of Georgia Tech with extensive advice from Nicolas Vasilache and Soumith Chintala of Facebook Artificial Intelligence Research. Andrew Tulloch of Facebook Artificial Intelligence Research contributed Caffe integration. We thank Andrew Lavin for fruitful discussions on Winograd transform-based implementations. NNPACK is a research project at Richard Vuduc's HPC Garage lab in the Georgia Institute of Technology, College of Computing, School of Computational Science and Engineering.
This material is based upon work supported by the U.S. National Science Foundation (NSF) Award Number 1339745. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of NSF.