XiaoMi / mobile-ai-bench

Benchmarking Neural Network Inference on Mobile Devices
Apache License 2.0

ARM Compute Library #3

Open bubbles1990 opened 6 years ago

bubbles1990 commented 6 years ago

Hi, thanks for this benchmark. Are you planning to include the ARM Compute Library in a future version?

llhe commented 6 years ago

We haven't tested the ARM Compute Library thoroughly since it's not our focus, but contributions are strongly welcome.

Actually, we found the benchmark task to be not only time-consuming and tedious but also challenging (any bug or improper setting can lead to unfair benchmark results).

We hope this code base can be shared among AI engineers to ease the benchmarking task.
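
For anyone considering such a contribution, here is a minimal sketch of what an Arm Compute Library (NEON backend) convolution micro-benchmark could look like. It is not part of mobile-ai-bench; the tensor shapes, iteration count, and warm-up policy are illustrative assumptions, and a real integration would have to follow the project's existing executor interface.

```cpp
// Hypothetical ACL NEON convolution micro-benchmark (illustrative only).
#include "arm_compute/core/Types.h"
#include "arm_compute/runtime/Tensor.h"
#include "arm_compute/runtime/NEON/NEFunctions.h"

#include <chrono>
#include <iostream>

int main()
{
    using namespace arm_compute;

    // Illustrative shapes: 224x224x3 input, 32 filters of 3x3, stride 1, no padding.
    Tensor input, weights, biases, output;
    input.allocator()->init(TensorInfo(TensorShape(224U, 224U, 3U), 1, DataType::F32));
    weights.allocator()->init(TensorInfo(TensorShape(3U, 3U, 3U, 32U), 1, DataType::F32));
    biases.allocator()->init(TensorInfo(TensorShape(32U), 1, DataType::F32));
    output.allocator()->init(TensorInfo(TensorShape(222U, 222U, 32U), 1, DataType::F32));

    // Configure the layer before allocating tensor memory (standard ACL pattern).
    NEConvolutionLayer conv;
    conv.configure(&input, &weights, &biases, &output, PadStrideInfo(1, 1, 0, 0));

    input.allocator()->allocate();
    weights.allocator()->allocate();
    biases.allocator()->allocate();
    output.allocator()->allocate();

    // Warm-up run so one-time setup (e.g. weight reshaping) is not timed.
    conv.run();

    const int iterations = 50;
    const auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i)
    {
        conv.run();
    }
    const auto end = std::chrono::steady_clock::now();

    const double ms =
        std::chrono::duration<double, std::milli>(end - start).count() / iterations;
    std::cout << "NEConvolutionLayer average latency: " << ms << " ms" << std::endl;
    return 0;
}
```

The warm-up run before the timed loop matters: measuring one-time weight reshaping or kernel preparation alongside steady-state inference is exactly the kind of improper setting that can make cross-library comparisons unfair.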

psyhtest commented 6 years ago

@bubbles1990 We have recently released CK-NNTest, a dedicated suite of micro-tests for the Arm Compute Library (plus TensorFlow, Caffe CPU, and Caffe GPU) based on the open-source Collective Knowledge framework (CK) and developed in collaboration with Arm. We have successfully used CK-NNTest to optimize the Arm Compute Library for Arm's latest GPU architecture (Bifrost), achieving up to 10x kernel-level, up to 5x operator-level, and up to 3x network-level speedups (see, for example, our ReQuEST@ASPLOS'18 paper). Moreover, we keep using it to detect performance anomalies and regressions. Or is your question about network-level benchmarking?

Actually, we found the benchmark task to be not only time-consuming and tedious but also challenging (any bug or improper setting can lead to unfair benchmark results).

@llhe I totally agree with that. That's why we are developing CK together with a growing community across industry and academia: to avoid duplication of effort and fragmentation. In this way, we can support many platforms, models, and datasets, while improving a common methodology.