Open · spaul13 opened this issue 5 years ago
It depends on whether your GPU supports a deep learning framework.
Thanks a lot for replying. I have already run the TensorFlow Lite GPU delegate example (https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/examples/android/app) and it does run on the GPU, as I can observe GPU utilization. But here, when I add a GPU delegate as a TFLite option and recreate the interpreter, it gets stuck at the mInterpreter.run() call.
Can @wics1224 please tell me how to resolve this issue?
Hi, I ran TinyYolo-v3.tflite on the GPU and I see the following:
Have you seen the same results? Is there anything I can do to get better performance?
If you replace the resize=nearest with resize=bilinear and remove the final reshape, it runs fine on the TFLite GPU delegate at around 75-100 ms. This does not appear to affect accuracy.
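For reference, here is a minimal sketch of what that change looks like on the Keras side. The layer sizes, channel counts, and tensor names are only illustrative, not the ones used in this repo's model:

```python
import tensorflow as tf

# Illustrative toy graph -- shapes and names are placeholders.
def upsample(x):
    # bilinear instead of nearest-neighbor, so the resize stays on the GPU delegate
    return tf.keras.layers.UpSampling2D(size=(2, 2), interpolation="bilinear")(x)

inputs = tf.keras.Input(shape=(416, 416, 3))
x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
x = tf.keras.layers.MaxPooling2D()(x)
head = tf.keras.layers.Conv2D(255, 1)(upsample(x))

# Export the raw head tensor directly; do not add the final Reshape that
# flattens the predictions -- decode the boxes in the app code instead.
model = tf.keras.Model(inputs, head)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
open("demo.tflite", "wb").write(converter.convert())
```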
First of all, a really great project. Can you please tell me how to enable GPU support on the Android platform? I saw that the GPU is not used at all and the CPU is only partially used, so could you please tell/guide me what modifications I have to make to this project to enable GPU support on Android?
Have you solved your problem enabling the GPU for YOLOv3 on Android? I have encountered the same problem. I added the GPU delegate as a TFLite option, but when I run the code the detection results are totally wrong and the GPU gives no speed-up, even though I can see "Created TensorFlow Lite delegate for GPU." in the console. So now I don't know: 1) whether I have started the GPU on my Android phone correctly, and 2) if the GPU was started, whether there are some ops that the TFLite GPU delegate does not support.
PS: I can run the tflite-gpu demo successfully.
I think when you convert it tells you if any of the nodes need to be run on the CPU. If you see any at all, it will probably fall back to performance about as slow as you see on CPU only. I believe I needed to replace the final reshape, and I switched the nearest-neighbor upsampling to bilinear (and did not notice a drop in accuracy). So sadly you may need to hack on the converter code.
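If your TensorFlow build ships the TFLite analyzer (tf.lite.experimental.Analyzer), a quick way to list the ops that ended up in the converted model, and to flag the ones the GPU delegate cannot take, is something like this (the model path is a placeholder):

```python
import tensorflow as tf

# Prints every op in the .tflite graph and marks GPU-delegate-incompatible ones.
# Assumes a TensorFlow build that includes tf.lite.experimental.Analyzer.
tf.lite.experimental.Analyzer.analyze(
    model_path="yolov3.tflite",  # placeholder path
    gpu_compatibility=True,
)
```

Otherwise, opening the .tflite in Netron and looking for the RESIZE_NEAREST_NEIGHBOR and trailing RESHAPE ops works just as well.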
Can you please explain how to accomplish that (replacing the resize and removing the reshape)?
It depends on how you are converting your weights. I used onnx2keras and had to make edits to its Python code to skip later nodes in the graph. Netron is practically essential for visualizing the graph. It's not easy or obvious, and you need to understand what those nodes are doing. Good luck!
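As a rough sketch, the happy path of that conversion looks like the following; the file and input-tensor names are placeholders, and the actual node skipping has to be done by editing onnx2keras' converter code, which is not shown here:

```python
import onnx
import tensorflow as tf
from onnx2keras import onnx_to_keras

# Placeholder file and input names -- inspect your own .onnx in Netron to find
# the real input tensor name and the nodes (final Reshape, nearest-neighbor
# Resize) you want to skip or swap while editing onnx2keras.
onnx_model = onnx.load("yolov3.onnx")
k_model = onnx_to_keras(onnx_model, ["input_1"])

converter = tf.lite.TFLiteConverter.from_keras_model(k_model)
open("yolov3.tflite", "wb").write(converter.convert())
```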
Thank you for the clarification