zldrobit / onnx_tflite_yolov3

A conversion tool to convert YOLO v3 Darknet weights to a TF Lite model (YOLO v3 PyTorch > ONNX > TensorFlow > TF Lite), and to TensorRT (YOLO v3 PyTorch > ONNX > TensorRT).
GNU General Public License v3.0

An Issue when loading OpenGL backend TFLite GPU delegate in Python #15

Closed: rose-jinyang closed this issue 3 years ago

rose-jinyang commented 3 years ago

🐛 Bug

I want to use the TFLite GPU runtime on an aarch64 platform, so I built an OpenGL backend TFLite GPU delegate on an Ubuntu 18.04 x86_64 PC. I then tried to use the built delegate in Python on RK3399 Ubuntu 18.04 LTS aarch64, but ran into the following issue when loading it:

(screenshot of the error)

So I set the environment variable LD_PRELOAD as follows: `export LD_PRELOAD="/usr/lib/aarch64-linux-gnu/libEGL.so /usr/lib/aarch64-linux-gnu/libGLESv2.so"`. The first issue disappeared, but a second one appeared:

(screenshot of the second error)
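For context, loading the delegate in Python looks roughly like the sketch below. The library and model paths are illustrative, and with `tflite_runtime` the same calls exist as `tflite.load_delegate` / `tflite.Interpreter`.

```python
import tensorflow as tf

# Illustrative paths; the delegate .so is the one built in the steps below.
DELEGATE_PATH = "libtensorflowlite_gpu_gl.so"
MODEL_PATH = "yolov3.tflite"

# load_delegate() dlopen()s the shared library; this is where the
# EGL/GLESv2 symbol errors above surface if the GL libraries are not preloaded.
gpu_delegate = tf.lite.experimental.load_delegate(DELEGATE_PATH)

interpreter = tf.lite.Interpreter(
    model_path=MODEL_PATH,
    experimental_delegates=[gpu_delegate])
interpreter.allocate_tensors()
```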

Environment

- Host PC (build machine): Ubuntu 18.04 x86_64, Python 3.7, TensorFlow r2.4 built from source
- Target device: RK3399, Ubuntu 18.04 LTS aarch64

To Reproduce

Steps to reproduce the behavior: I ran the following on my host PC.

```
sudo apt update
sudo apt-get install software-properties-common
sudo apt update
sudo apt install git curl
sudo apt install python3.7 python3.7-dev python3.7-venv python3.7-distutils
sudo apt install mesa-common-dev libegl1-mesa-dev libgles2-mesa-dev
```

```
cd ~
python3.7 -m venv py37
source ~/py37/bin/activate
pip install cython
pip install wheel
pip install numpy
```

```
git clone -b r2.4 https://github.com/tensorflow/tensorflow.git tensorflow_r2.4
cd tensorflow_r2.4
./configure
```

(screenshot of the configure session)

```
bazel build -s -c opt --config=elinux_aarch64 --copt="-DMESA_EGL_NO_X11_HEADERS" --copt="-DEGL_NO_X11" tensorflow/lite/delegates/gpu:libtensorflowlite_gpu_gl.so
```

(screenshot of the build output)

zldrobit commented 3 years ago

It seems that you have to add a tflite_plugin_destroy_delegate function. Try adding

```
// Destroy hook that TFLite's Python load_delegate() resolves when
// releasing the delegate.
TFL_CAPI_EXPORT void tflite_plugin_destroy_delegate(TfLiteDelegate* delegate) {
  TfLiteGpuDelegateDelete(delegate);
}
```

to tensorflow/lite/delegates/gpu/gl_delegate.cc and recompile TFLite.
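As a sanity check after recompiling, a small ctypes probe can confirm that the rebuilt .so exports both plugin hooks that TFLite's Python loader resolves (the library path below is illustrative):

```python
import ctypes

# Illustrative path to the delegate library built with bazel above.
lib = ctypes.CDLL("libtensorflowlite_gpu_gl.so")

# tf.lite.experimental.load_delegate() resolves these two C symbols from
# the shared library; if one is missing, loading the delegate fails.
for symbol in ("tflite_plugin_create_delegate", "tflite_plugin_destroy_delegate"):
    try:
        getattr(lib, symbol)
        print(symbol, "found")
    except AttributeError:
        print(symbol, "MISSING")
```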

rose-jinyang commented 3 years ago

With the OpenGL backend GPU delegate, the following issue appears on the RK3399 aarch64 platform:

(screenshot of the error)

The OpenCL backend GPU delegate works well, but its inference time is almost the same as the CPU backend TFLite runtime with 2 threads. Thank you very much.
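For reference, a minimal way to compare the two paths looks like the sketch below. This is not the exact benchmark used here; it assumes the full TensorFlow package on the board and an illustrative yolov3.tflite model and delegate path.

```python
import time
import numpy as np
import tensorflow as tf

MODEL = "yolov3.tflite"  # illustrative model path

def benchmark(interpreter, runs=50):
    """Average inference time in milliseconds over `runs` invocations."""
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    dummy = np.zeros(inp["shape"], dtype=inp["dtype"])
    # Warm-up run, then time the rest.
    interpreter.set_tensor(inp["index"], dummy)
    interpreter.invoke()
    start = time.perf_counter()
    for _ in range(runs):
        interpreter.set_tensor(inp["index"], dummy)
        interpreter.invoke()
    return (time.perf_counter() - start) / runs * 1000.0

# CPU backend with 2 threads.
cpu = tf.lite.Interpreter(model_path=MODEL, num_threads=2)
print("CPU (2 threads): %.1f ms" % benchmark(cpu))

# GPU delegate (path to the built .so is illustrative).
delegate = tf.lite.experimental.load_delegate("libtensorflowlite_gpu_gl.so")
gpu = tf.lite.Interpreter(model_path=MODEL, experimental_delegates=[delegate])
print("GPU delegate   : %.1f ms" % benchmark(gpu))
```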

zldrobit commented 3 years ago

Thanks for sharing the performance. I am closing this issue.