tensorflow / models

Models and examples built with TensorFlow

Increasing speed of detection in Android #6339

Closed Amanpradhan closed 5 years ago

Amanpradhan commented 5 years ago

Please go to Stack Overflow for help and support:

http://stackoverflow.com/questions/tagged/tensorflow

Also, please understand that many of the models included in this repository are experimental and research-style code. If you open a GitHub issue, here is our policy:

  1. It must be a bug, a feature request, or a significant problem with documentation (for small docs fixes please send a PR instead).
  2. The form below must be filled out.

Here's why we have that policy: TensorFlow developers respond to issues. We want to focus on work that benefits the whole community, e.g., fixing bugs and adding features. Support only helps individuals. GitHub also notifies thousands of people when issues are filed. We want them to see you communicating an interesting problem, rather than being redirected to Stack Overflow.


System information

You can collect some of this information using our environment capture script:

https://github.com/tensorflow/tensorflow/tree/master/tools/tf_env_collect.sh

You can obtain the TensorFlow version with

python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"

Describe the problem

I am running a MobileNet SSD object detector, and while its detection accuracy is fine, its speed is on the lower side. What can I do to increase the detection speed on Android?

rootkitchao commented 5 years ago

Here are some suggestions that might be useful:

  1. If your target device supports the Android NNAPI and you can use SoC hardware acceleration, you can get higher speed by using TensorFlow Lite instead of TensorFlow Mobile. Quantizing to an FP16 or INT8 model will lose some precision but gives a great speed boost (see the conversion sketch below).
  2. Use a newer model. SSD MobileNet V2 is faster than SSD MobileNet V1, and SSD MnasNet is faster than SSD MobileNet V2, with close accuracy.
  3. Use SSDLite instead of SSD.
  4. Lowering the input resolution and depth_multiplier will reduce accuracy, but will also increase speed.
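
A minimal sketch of the FP16 post-training quantization from point 1, assuming the TF2-style tf.lite.TFLiteConverter API and a hypothetical exported SavedModel directory ssd_saved_model (at the time of this thread the TF1 from_frozen_graph path would be used instead):

import tensorflow as tf

# Hypothetical path to an exported SSD detection SavedModel.
converter = tf.lite.TFLiteConverter.from_saved_model("ssd_saved_model")

# Enable default optimizations, then restrict weights to float16.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]

tflite_model = converter.convert()
with open("ssd_fp16.tflite", "wb") as f:
    f.write(tflite_model)

FP16 halves the model size and lets GPU/NNAPI delegates run float16 kernels; on CPU the weights are dequantized back to float32, so the gain there is mostly in size.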

Amanpradhan commented 5 years ago

Can I quantize my model using TensorFlow Mobile instead of TensorFlow Lite?

derekjchow commented 5 years ago

Only the TensorFlow Lite runtime has support for running quantized operations. TensorFlow Mobile is slated for deprecation soon.
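
For reference, a minimal sketch of full INT8 post-training quantization with the TensorFlow Lite converter, in line with this answer; the representative_dataset generator, the random calibration data, and the 300x300 input shape are assumptions for illustration (real calibration should use a sample of actual training images):

import numpy as np
import tensorflow as tf

def representative_dataset():
    # Yield calibration batches shaped like the model input
    # (hypothetical 1x300x300x3 float input for an SSD detector).
    for _ in range(100):
        yield [np.random.rand(1, 300, 300, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("ssd_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset

# Force integer-only kernels so NNAPI-backed accelerators can run them.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("ssd_int8.tflite", "wb") as f:
    f.write(converter.convert())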

ymodak commented 5 years ago

Closing this issue since it's resolved. Feel free to reopen if you have any further questions. Thanks!

zychen2016 commented 4 years ago

Will SSDLite be faster than a quantized SSD model?
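
One way to settle this empirically is to time both converted models. A minimal sketch using the TensorFlow Lite Python interpreter, assuming hypothetical ssdlite.tflite and ssd_int8.tflite files (for realistic on-device numbers, TFLite's benchmark_model tool on the target phone is the better measurement):

import time

import numpy as np
import tensorflow as tf

def benchmark(model_path, runs=50):
    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    # Random input matching the model's expected shape and dtype.
    dummy = np.random.rand(*inp["shape"]).astype(inp["dtype"])
    interpreter.set_tensor(inp["index"], dummy)
    interpreter.invoke()  # warm-up
    start = time.time()
    for _ in range(runs):
        interpreter.invoke()
    return (time.time() - start) / runs * 1000.0  # ms per inference

for path in ("ssdlite.tflite", "ssd_int8.tflite"):
    print(path, "%.1f ms" % benchmark(path))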