gasgallo opened this issue 5 years ago
I will check this later. Have you ever benchmarked another model like MobileNet? BTW, which backend of MNN did you use? OpenCL or Vulkan?
Well, my model has a MobileNet backbone with just a few feature-extraction layers on top.
I've tried both the Vulkan and OpenCL backends in MNN, but OpenCL is faster in my case, so the time in the initial post is the OpenCL one.
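For reference, a minimal sketch of how the two GPU backends can be compared with MNN's C++ Interpreter API (not the poster's actual benchmark code). The model path, warm-up count, and loop count are placeholders; the calls follow the public MNN API, but header locations and recommended ownership idioms may differ between MNN releases.

```cpp
// Minimal sketch: timing one model on MNN's OpenCL and Vulkan backends.
// "model.mnn" and the loop counts are placeholders.
#include <MNN/Interpreter.hpp>
#include <MNN/MNNForwardType.h>
#include <MNN/Tensor.hpp>

#include <chrono>
#include <cstdio>
#include <memory>

static double averageMsPerFrame(const char* modelPath, MNNForwardType type, int loops) {
    std::shared_ptr<MNN::Interpreter> net(MNN::Interpreter::createFromFile(modelPath));
    MNN::ScheduleConfig config;
    config.type = type;  // MNN_FORWARD_OPENCL or MNN_FORWARD_VULKAN
    MNN::Session* session = net->createSession(config);

    MNN::Tensor* output = net->getSessionOutput(session, nullptr);
    // Host-side copy of the output; reading it back after each run forces the
    // GPU work to complete (assumption: GPU backends may execute asynchronously).
    MNN::Tensor hostOutput(output, output->getDimensionType());

    // Warm-up so shader compilation / kernel tuning does not skew the timing.
    for (int i = 0; i < 5; ++i) {
        net->runSession(session);
        output->copyToHostTensor(&hostOutput);
    }

    auto t0 = std::chrono::high_resolution_clock::now();
    for (int i = 0; i < loops; ++i) {
        net->runSession(session);
        output->copyToHostTensor(&hostOutput);
    }
    auto t1 = std::chrono::high_resolution_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count() / loops;
}

int main() {
    const char* model = "model.mnn";  // placeholder path
    double clMs = averageMsPerFrame(model, MNN_FORWARD_OPENCL, 50);
    double vkMs = averageMsPerFrame(model, MNN_FORWARD_VULKAN, 50);
    std::printf("OpenCL: %.2f ms/frame (%.1f FPS)\n", clMs, 1000.0 / clMs);
    std::printf("Vulkan: %.2f ms/frame (%.1f FPS)\n", vkMs, 1000.0 / vkMs);
    return 0;
}
```

Running the same binary on an Adreno device and a Mali device gives per-backend numbers that can be compared directly.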
@lydoc have you had the chance to investigate yet?
Sorry for the late reply. Would it be possible for you to share your model?
We are having the same issue:
| Device | GPU | FPS |
| --- | --- | --- |
| SM-G960U (Samsung Galaxy S9 GLOBAL) | Adreno 630 | 10.31 |
| SM-N960F (Samsung Galaxy S9 EU) | Mali-G72 | 5.68 |
On the same phone model, we get about half the FPS on the Mali GPU compared to the Adreno one.
System information
Model deploy file (*.yml)
Describe the problem
Inference on Mali GPUs is very slow compared to other frameworks, and much slower than the same model running on Adreno GPUs.
To Reproduce
Steps to reproduce the problem:
Error information / logs
Additional context
For example, the model running with the above yml file takes: