idstein / ADAM

Advanced Driving Assistance on A Mobile

OpenCL backend on tflite #13

Closed · scocoyash closed this 4 years ago

scocoyash commented 4 years ago

Hi Paul, sorry to raise an issue here, but I had some doubts about enabling the MACE OpenCL backend on tflite. I found a comment of yours on the tensorflow repo saying that using MACE as the GPU backend gave around a 30% speedup in execution. Can you show me how to enable the same for the GPU pipeline, or point me to a repo where this has been done? Thanks a lot for your help :)

idstein commented 4 years ago

MACE is an entirely different inference engine for neural network models. This repo does not use any neural network model at all.

The benchmarks were conducted in January 2019, i.e. before tflite open-sourced its OpenCL GPU delegate. If you benchmark tflite with OpenCL vs. tflite with OpenGL on the same device (one with OpenCL support, so not a Google Pixel), you'll likely see the same effect.
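
For reference, on recent tflite releases the GPU delegate can be asked to prefer its OpenCL path via the experimental options flags. Roughly a sketch like the following (header and flag names are taken from recent TensorFlow sources and may differ between versions; the model path is a placeholder):

```cpp
// Sketch: run a .tflite model through the GPU delegate, preferring OpenCL.
// Flag/header names are from recent TFLite releases and may differ by version.
#include <memory>
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"
#include "tensorflow/lite/delegates/gpu/delegate.h"

int main() {
  auto model = tflite::FlatBufferModel::BuildFromFile("mobilenet_v2.tflite");
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);

  TfLiteGpuDelegateOptionsV2 options = TfLiteGpuDelegateOptionsV2Default();
  // Ask the delegate for its OpenCL backend instead of the OpenGL one.
  options.experimental_flags |= TFLITE_GPU_EXPERIMENTAL_FLAGS_CL_ONLY;
  TfLiteDelegate* delegate = TfLiteGpuDelegateV2Create(&options);
  interpreter->ModifyGraphWithDelegate(delegate);

  interpreter->AllocateTensors();
  interpreter->Invoke();

  TfLiteGpuDelegateV2Delete(delegate);
  return 0;
}
```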

scocoyash commented 4 years ago

Sorry to trouble you again; I raised an issue here because I did not find your contact details anywhere. May I ask where and how you benchmarked MACE and tflite? What changes did you make in order to run the MACE backend on tflite instead of the standard GL delegate?

idstein commented 4 years ago

You cannot use MACE in tflite. But you can benchmark how fast, for instance, MobileNet V2 runs on MACE versus tflite. There exist multiple different neural network runtimes like MACE, TensorFlow, Caffe, PyTorch and a couple of others (also one from Amazon or Baidu which looks promising).
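
If it helps: a crude way to compare runtimes is simply to time repeated invocations of the same model on each engine. A minimal tflite timing sketch under those assumptions (model path and iteration count are placeholders; MACE has its own benchmark tooling that reports comparable timings):

```cpp
// Sketch: crude wall-clock benchmark of repeated tflite inferences.
#include <chrono>
#include <cstdio>
#include <memory>
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  auto model = tflite::FlatBufferModel::BuildFromFile("mobilenet_v2.tflite");
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);
  interpreter->AllocateTensors();

  // Warm-up run so one-time setup costs do not skew the measurement.
  interpreter->Invoke();

  constexpr int kIterations = 50;
  auto start = std::chrono::steady_clock::now();
  for (int i = 0; i < kIterations; ++i) {
    interpreter->Invoke();
  }
  auto end = std::chrono::steady_clock::now();
  double ms = std::chrono::duration<double, std::milli>(end - start).count();
  std::printf("avg latency: %.2f ms\n", ms / kIterations);
  return 0;
}
```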

scocoyash commented 4 years ago

Thanks a lot Paul :)