hclhkbu / dlbench

Benchmarking State-of-the-Art Deep Learning Software Tools
http://dlbench.comp.hkbu.edu.hk/
MIT License

Deep learning software tools benchmark

A benchmark framework for measuring the performance of different deep learning tools. Please refer to http://dlbench.comp.hkbu.edu.hk/ for our testing results and more details. Benchmarking with newer versions of the frameworks is on the way:

| Tool       | Version             |
|------------|---------------------|
| Caffe      | 1.0rc5 (39f28e4)    |
| CNTK       | 2.0Beta10 (1ae666d) |
| MXNet      | 0.93 (32dc3a2)      |
| TensorFlow | 1.0 (4ac9c09)       |
| Torch      | 7 (748f5e3)         |

This project is licensed under MIT License.

Introduction

Overview of dlbench

| Directory        | Description                                                              |
|------------------|--------------------------------------------------------------------------|
| configs/         | Configuration files for running the benchmark                             |
| network-configs/ | Descriptions of our tested models                                         |
| synthetic/       | Our benchmark tests with fake data                                        |
| tools/           | Running scripts and network configurations for each deep learning tool    |
| logs/            | Generated by running benchmark.py; running logs should be put here        |

Run benchmark

Prepare Data

Prepare the data for the tools you want to run and put it under $HOME/data. Note that, for convenience, each data directory should have the same name as the corresponding tool.
You can download the data we used for our benchmark through the following links:

For synthetic data generation, please refer to the scripts at http://dlbench.comp.hkbu.edu.hk/s/html/v5/index.html.
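Before launching a run, it can help to confirm the expected data layout. The sketch below checks that a directory exists under $HOME/data for each tool; the tool list is an assumption — substitute the tools you actually benchmark:

```python
import os

# Tool names double as data directory names under $HOME/data.
# Hypothetical list -- adjust to the tools you actually run.
TOOLS = ["caffe", "cntk", "mxnet", "tensorflow", "torch"]

def missing_data_dirs(data_root, tools=TOOLS):
    """Return the tools whose expected data directory does not exist."""
    return [t for t in tools if not os.path.isdir(os.path.join(data_root, t))]

if __name__ == "__main__":
    root = os.path.join(os.path.expanduser("~"), "data")
    for tool in missing_data_dirs(root):
        print("missing data directory: %s" % os.path.join(root, tool))
```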

Prepare .config file

There are sample configuration files in configs/. Choose one of them as a template and change the value of each item according to your needs and environment.
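If you want to inspect or generate config files programmatically, a minimal parser sketch follows. It assumes a simple "key: value" line format with `#` comments — this layout and any keys are assumptions; the actual format and keys are whatever the samples in configs/ and benchmark.py use:

```python
def parse_config(path):
    """Parse a 'key: value' style config file into a dict.

    A minimal sketch: one pair per line, '#' starts a comment,
    blank lines are ignored. Check the samples in configs/ for
    the exact format benchmark.py expects.
    """
    config = {}
    with open(path) as f:
        for line in f:
            line = line.split("#", 1)[0].strip()
            if not line or ":" not in line:
                continue
            key, value = line.split(":", 1)
            config[key.strip()] = value.strip()
    return config
```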

Run

To run a benchmark test, just execute:

python benchmark.py -config configs/<config file>.config
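To sweep several configurations in one go, a small driver can invoke benchmark.py once per config file. This is a convenience sketch assuming benchmark.py accepts the -config flag exactly as shown above:

```python
import glob
import subprocess
import sys

def benchmark_command(config):
    """Build the command line for one config file."""
    return [sys.executable, "benchmark.py", "-config", config]

def run_benchmarks(config_glob="configs/*.config"):
    """Run benchmark.py for every config file matching the glob."""
    for config in sorted(glob.glob(config_glob)):
        print("running benchmark with", config)
        result = subprocess.run(benchmark_command(config))
        if result.returncode != 0:
            print("benchmark failed for", config)

if __name__ == "__main__":
    run_benchmarks()
```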

Add new tools

Follow the instructions in tools/Readme.md to prepare the running scripts and network configurations. Note that training data should be put in $HOME/data/ so that we can test new tools on our machines and publish the benchmarking results on our website.

Update log

May 24, 2017: