This is the repository of the Deep Learning Inference benchmark (DLI). DLI is a benchmark for deep learning inference on various hardware. The goal of the project is to develop software for measuring the performance of a wide range of deep learning models inferred on popular frameworks and various hardware, and to publish the obtained measurements regularly.
The main advantage of DLI over existing benchmarks is the availability of performance results for a large number of deep models inferred on Intel platforms (Intel CPUs, Intel Processor Graphics, Intel Movidius Neural Compute Stick).
DLI supports inference using the following frameworks:

- Intel® Distribution of OpenVINO™ Toolkit;
- Intel® Optimization for Caffe;
- Intel® Optimizations for TensorFlow;
- TensorFlow Lite;
- ONNX Runtime;
- MXNet (GluonCV);
- OpenCV DNN;
- PyTorch (TorchVision);
- Apache TVM;
- ncnn;
- PaddlePaddle;
- Spektral.
More information about DLI is available on the website (here (in Russian) or here (in English)) or on the Wiki page.
This project is licensed under the terms of the Apache 2.0 license.
Please consider citing the following papers:
Kustikova V., Vasilyev E., Khvatov A., Kumbrasiev P., Rybkin R., Kogteva N. DLI: Deep Learning Inference Benchmark // Communications in Computer and Information Science. V.1129. 2019. P. 542-553.
Sidorova A.K., Alibekov M.R., Makarov A.A., Vasiliev E.P., Kustikova V.D. Automation of collecting performance indicators for the inference of deep neural networks in Deep Learning Inference Benchmark // Mathematical modeling and supercomputer technologies. Proceedings of the XXI International Conference (N. Novgorod, November 22–26, 2021). – Nizhny Novgorod: Nizhny Novgorod State University Publishing House, 2021. – 423 p. https://hpc-education.unn.ru/files/conference_hpc/2021/MMST2021_Proceedings.pdf. (In Russian)
Alibekov M.R., Berezina N.E., Vasiliev E.P., Kustikova V.D., Maslova Z.A., Mukhin I.S., Sidorova A.K., Suchkov V.N. Performance analysis methodology of deep neural networks inference on the example of an image classification problem // Russian Supercomputing Days (RSD-2023). - 2023. (In Russian)
Alibekov M.R., Berezina N.E., Vasiliev E.P., Vikhrev I.B., Kamelina Yu.D., Kustikova V.D., Maslova Z.A., Mukhin I.S., Sidorova A.K., Suchkov V.N. Performance analysis methodology of deep neural networks inference on the example of an image classification problem // Numerical Methods and Programming. - 2024. - Vol. 25(2). - P. 127-141. - https://num-meth.ru/index.php/journal/article/view/1332/1264. (In Russian)
The repository is organized as follows:

- `demo` is a directory that contains demos for different frameworks and operating systems.
  - `OpenVINO_DLDT` is a directory that contains demos for Intel® Distribution of OpenVINO™ Toolkit.
- `docker` is a directory that contains Dockerfiles.
  - `Dockerfile` is the main Dockerfile.
  - `Caffe` is a directory of Dockerfiles for Intel® Optimization for Caffe.
  - `MXNet` is a directory of Dockerfiles for MXNet.
  - `ONNXRuntime` is a directory of Dockerfiles for ONNX Runtime.
  - `OpenCV` is a directory of Dockerfiles for OpenCV.
  - `OpenVINO_DLDT` is a directory of Dockerfiles for Intel® Distribution of OpenVINO™ Toolkit.
  - `PyTorch` is a directory of Dockerfiles for PyTorch.
  - `TVM` is a directory of Dockerfiles for Apache TVM.
  - `TensorFlow` is a directory of Dockerfiles for Intel® Optimizations for TensorFlow.
  - `PaddlePaddle` is a directory of Dockerfiles for PaddlePaddle.
- `docs` is a directory that contains auxiliary documentation. Please find the complete documentation at the Wiki page.
- `results` is a directory that contains benchmarking and validation results.
  - `accuracy` contains accuracy results in HTML and XLSX formats.
  - `benchmarking` contains benchmarking results in HTML and XLSX formats.
  - `validation` contains tables that confirm the correctness of the inference implementation for the benchmarked models.
    - `validation_results_caffe.md` is a table that confirms the correctness of the inference implementation based on Intel® Optimization for Caffe for several public models.
    - `validation_results_mxnet_gluon_modelzoo.md` is a table that confirms the correctness of the inference implementation based on MXNet for GluonCV models.
    - `validation_results_ncnn.md` is a table that confirms the correctness of the inference implementation based on ncnn for the available models.
    - `validation_results_onnxruntime.md` is a table that confirms the correctness of the inference implementation based on ONNX Runtime.
    - `validation_results_opencv.md` is a table that confirms the correctness of the inference implementation based on OpenCV DNN.
    - `validation_results_openvino_public_models.md` is a table that confirms the correctness of the inference implementation based on Intel® Distribution of OpenVINO™ Toolkit for public models.
    - `validation_results_openvino_intel_models.md` is a table that confirms the correctness of the inference implementation based on Intel® Distribution of OpenVINO™ Toolkit for models trained by Intel engineers and available in Open Model Zoo.
    - `validation_results_pytorch.md` is a table that confirms the correctness of the inference implementation based on PyTorch for TorchVision models.
    - `validation_results_spektral.md` is a table that confirms the correctness of the inference implementation based on Spektral.
    - `validation_results_tensorflow.md` is a table that confirms the correctness of the inference implementation based on Intel® Optimizations for TensorFlow for several public models.
    - `validation_results_tflite.md` is a table that confirms the correctness of the inference implementation based on TensorFlow Lite for public models.
    - `validation_results_tvm.md` is a table that confirms the correctness of the inference implementation based on Apache TVM for several public models.
  - `mxnet_models_checklist.md` contains a list of deep models inferred by MXNet checked in the DLI benchmark.
  - `ncnn_models_checklist.md` contains a list of deep models inferred by the ncnn framework checked in the DLI benchmark.
  - `onnxruntime_models_checklist.md` contains a list of deep models inferred by ONNX Runtime checked in the DLI benchmark.
  - `opencv_models_checklist.md` contains a list of deep models inferred by OpenCV DNN checked in the DLI benchmark.
  - `openvino_models_checklist.md` contains a list of deep models inferred by the OpenVINO™ Toolkit checked in the DLI benchmark.
  - `pytorch_models_checklist.md` contains a list of deep models inferred by PyTorch checked in the DLI benchmark.
  - `tensorflow_models_checklist.md` contains a list of deep models inferred by TensorFlow checked in the DLI benchmark.
  - `tflite_models_checklist.md` contains a list of deep models inferred by TensorFlow Lite checked in the DLI benchmark.
  - `tvm_models_checklist.md` contains a list of deep models inferred by Apache TVM checked in the DLI benchmark.
- `src` is a directory that contains the benchmark sources.
  - `accuracy_checker` contains scripts to check deep model accuracy using the Accuracy Checker of Intel® Distribution of OpenVINO™ Toolkit.
  - `benchmark` is a set of scripts to estimate the inference performance of different models on a single local computer (a minimal measurement sketch follows this list).
  - `build_scripts` is a directory of scripts to build inference frameworks for different platforms.
  - `config_maker` contains a GUI application for creating configuration files for the benchmark components.
  - `configs` contains template configuration files.
  - `cpp_dl_benchmark` contains C++ tools that measure the inference performance of deep learning models using the C++ APIs of ONNX Runtime, OpenCV DNN, PyTorch, and TensorFlow Lite. This implementation takes the OpenVINO™ Benchmark C++ tool as a reference and sticks to its measurement methodology, thus providing consistent performance results.
  - `csv2html` is a set of scripts to convert performance and accuracy tables from CSV to HTML.
  - `csv2xlsx` is a set of scripts to convert performance and accuracy tables from CSV to XLSX.
  - `deployment` is a set of deployment tools.
  - `inference` contains the Python inference implementation.
  - `model_converters` contains converters of deep models (an illustrative export sketch appears after the repository layout below).
  - `node_info` contains a set of functions to get information about a computational node.
  - `quantization` contains scripts to quantize models to INT8 precision using the Post-Training Optimization Tool (POT) of Intel® Distribution of OpenVINO™ Toolkit.
  - `remote_control` contains scripts to execute the benchmark remotely.
  - `tvm_autotuning` contains scripts to optimize Apache TVM models.
  - `utils` is a package of auxiliary utilities.
  - `test` contains smoke tests.
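To give a sense of the measurement methodology mentioned above (warm-up runs excluded from timing, then timed iterations yielding latency and FPS), here is a minimal, framework-agnostic Python sketch. It is not the DLI implementation; the `infer` callable and the iteration counts are illustrative placeholders.

```python
import statistics
import time

def measure_performance(infer, num_warmup=10, num_iterations=100):
    """Estimate latency and throughput of a zero-argument inference callable."""
    # Warm-up runs exclude one-time costs (lazy initialization, caching).
    for _ in range(num_warmup):
        infer()

    # Timed runs: collect per-iteration latencies in seconds.
    latencies = []
    for _ in range(num_iterations):
        start = time.perf_counter()
        infer()
        latencies.append(time.perf_counter() - start)

    return {
        "latency_median_ms": statistics.median(latencies) * 1000.0,
        "latency_avg_ms": statistics.mean(latencies) * 1000.0,
        "fps": num_iterations / sum(latencies),
    }

if __name__ == "__main__":
    # Trivial stand-in workload so the sketch runs as-is; in practice `infer`
    # would wrap a framework call such as an ONNX Runtime session.run(...).
    print(measure_performance(lambda: sum(x * x for x in range(100_000))))
```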
- `requirements.txt` is a list of requirements for the DLI benchmark itself, without inference frameworks.
- `requirements_ci.txt` is a list of requirements for continuous integration.
- `requirements_frameworks.txt` is a list of requirements for checking the inference of deep neural networks using different frameworks with smoke tests.
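As an illustration of the kind of work the `model_converters` scripts automate, the following sketch (not DLI code) exports a TorchVision model to ONNX so that it can be benchmarked, for example, with ONNX Runtime. The model choice, input shape, opset version, and output file name are assumptions made for the example.

```python
import torch
import torchvision.models as models

# Illustrative choices, not DLI defaults: model, input shape, output path.
model = models.resnet50(weights=None)  # random weights suffice for an export test
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)  # one RGB image in NCHW layout

# Trace the model and serialize it to ONNX; opset 13 is a widely supported baseline.
torch.onnx.export(
    model,
    dummy_input,
    "resnet50.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=13,
)
print("Saved resnet50.onnx")
```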
The latest documentation for the Deep Learning Inference Benchmark (DLI) is available here. This documentation contains detailed information about the DLI components and provides step-by-step guides to build and run the DLI benchmark on your own test infrastructure.
See the DLI Wiki to get more information.
See the DLI Wiki to get more information about benchmarking results on available hardware.
Report questions, issues, and suggestions using: