Closed — yxchng closed this issue 1 year ago
@610265158 Because it is not fast enough on a mobile phone. I am currently testing on a Qualcomm Snapdragon 730 and it runs at about 20 ms per face on the CPU. That is problematic when there are many faces. Hopefully it can run faster on the GPU. Ideally, <5 ms would be good.
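To make the "many faces" concern concrete, here is a back-of-envelope check using the numbers from this thread (20 ms measured, 5 ms target) against a 30 FPS frame budget; the function name and the 30 FPS assumption are illustrative, not from the repo:

```python
# Back-of-envelope latency budget using the numbers from this thread.
PER_FACE_MS = 20.0               # measured on Snapdragon 730 CPU
TARGET_MS = 5.0                  # desired per-face latency
FRAME_BUDGET_MS = 1000.0 / 30.0  # ~33.3 ms per frame at 30 FPS (assumed)

def max_faces(per_face_ms: float, frame_budget_ms: float = FRAME_BUDGET_MS) -> int:
    """How many faces fit in one frame budget at a given per-face cost."""
    return int(frame_budget_ms // per_face_ms)

print(max_faces(PER_FACE_MS))  # 1 face fits at 20 ms/face
print(max_faces(TARGET_MS))    # 6 faces fit at 5 ms/face
```

So at 20 ms/face, even two faces already drop below 30 FPS, while the 5 ms target keeps a handful of faces within budget.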
TFLite is not fast, bro.
It is an engineering problem: please modify the model and then try MNN or NCNN.
Are MNN and NCNN fast on GPU, or just faster on CPU?
The model structure was not designed primarily for speed. The best approach is to tune the model structure for your device and your inference framework. If you want to work with TFLite, MobileNet is a better backbone.
There is an MNN runtime analysis of the model; as it shows, the pack ops in the channel-shuffle operator cost the most time. Anybody who wants to deploy the model should tune it (it is very easy) and then deploy it.
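For readers unfamiliar with why the shuffle operator triggers pack ops: a ShuffleNet-style channel shuffle is a reshape-transpose-reshape, and the transpose is a memory-layout change that inference engines typically lower to expensive pack/transpose kernels. A minimal NumPy sketch of the operation (names are illustrative, not from this repo):

```python
import numpy as np

def channel_shuffle(x: np.ndarray, groups: int) -> np.ndarray:
    """ShuffleNet-style channel shuffle on an NCHW tensor.

    The reshape + transpose below is the memory-layout change that
    engines like MNN often lower to costly pack/transpose ops.
    """
    n, c, h, w = x.shape
    assert c % groups == 0, "channels must divide evenly into groups"
    # Split channels into groups, swap the group axis with the
    # per-group channel axis, then flatten back to C channels.
    x = x.reshape(n, groups, c // groups, h, w)
    x = x.transpose(0, 2, 1, 3, 4)  # the expensive layout change
    return x.reshape(n, c, h, w)

# Example: 6 channels in 2 groups interleave to order 0,3,1,4,2,5.
x = np.arange(6).reshape(1, 6, 1, 1)
print(channel_shuffle(x, groups=2).flatten())  # [0 3 1 4 2 5]
```

Tuning the model to reduce or fuse these transposes (or replacing the shuffle block entirely, as with a MobileNet backbone) is what removes the pack-op cost from the profile.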
And later I will train a MobileNetV3 model and upload it.
Why do you want to run the TFLite model on the GPU?