ultralytics / yolov5

YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

*.tflite export #453

Closed: Mostafa-elgendy closed this issue 3 years ago

Mostafa-elgendy commented 4 years ago

❔Question

Additional context

I have followed the instructions to train your model on a custom dataset and obtained the weight file "last_yolov5s_results.pt". My question is: how do I use this file to run the YOLOv5 model on an Android device? For example, how do I convert it to TFLite?

github-actions[bot] commented 4 years ago

Hello @Mostafa-elgendy, thank you for your interest in our work! Please visit our Custom Training Tutorial to get started, and see our Jupyter Notebook, Docker Image, and Google Cloud Quickstart Guide for example environments.

If this is a bug report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom model or data training question, please note that Ultralytics does not provide free personal support. As a leader in vision ML and AI, we do offer professional consulting, from simple expert advice up to delivery of fully customized, end-to-end production solutions for our clients.

For more information please visit https://www.ultralytics.com.

glenn-jocher commented 4 years ago

@Mostafa-elgendy the export pipeline is PyTorch to ONNX to TFLite. You can export to ONNX by following the export tutorial: https://docs.ultralytics.com/yolov5
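For reference, a minimal sketch of the PyTorch-to-ONNX step, assuming a YOLOv5 training checkpoint that stores the model under the 'model' key (as the training script does); the file names, 640x640 input size and opset below are placeholders rather than the tutorial's exact settings:

```python
import torch

# Load the trained checkpoint and pull out the model (saved in half precision).
ckpt = torch.load("last_yolov5s_results.pt", map_location="cpu")
model = ckpt["model"].float().eval()

# Dummy input: batch of 1, 3-channel, 640x640 image.
dummy = torch.zeros(1, 3, 640, 640)

# Export to ONNX, naming the input/output tensors for downstream converters.
torch.onnx.export(
    model,
    dummy,
    "last_yolov5s_results.onnx",
    opset_version=12,
    input_names=["images"],
    output_names=["output"],
)
```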

bartvollebregt commented 4 years ago

I tried to convert the ONNX file to TFLite. With converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS], the org.tensorflow.lite.Interpreter throws "The TensorFlow library was compiled to use SSE instructions, but these aren't available on your machine." (on a Pixel XL emulator running API 29).

When converting with converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS] only, it throws: error: 'tf.AddV2' op is neither a custom op nor a flex op

These issues seem related:
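For context, a sketch of the two converter configurations described above, assuming the ONNX model has first been converted to a TensorFlow SavedModel (for example with onnx-tf); the SavedModel path is a placeholder:

```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("yolov5s_saved_model")

# Variant 1: fall back to select TF ops for anything the TFLite builtins can't express.
# The resulting model needs the Flex delegate bundled into the Android app.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]

# Variant 2: builtins only -- this is the configuration that fails with
# "'tf.AddV2' op is neither a custom op nor a flex op".
# converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]

tflite_model = converter.convert()
with open("yolov5s.tflite", "wb") as f:
    f.write(tflite_model)
```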

glenn-jocher commented 4 years ago

@bartvollebregt I suggest you raise an issue on the relevant repo and supply them with code to reproduce.

zldrobit commented 4 years ago

@bartvollebregt I wrote a PR to convert YOLOv5 to a TFLite model with TensorFlow 2.3; try https://github.com/ultralytics/yolov5/pull/959 if you are interested.
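For orientation, a minimal sketch of the TF 2.3 conversion step that PR relies on. As I understand the PR, it builds a Keras re-implementation of YOLOv5 and loads the PyTorch weights into it; here a tf.keras.models.load_model call on a placeholder path stands in for that model:

```python
import tensorflow as tf  # TF 2.3, per the PR

# Stand-in for the Keras YOLOv5 model produced by the PR's conversion script.
keras_model = tf.keras.models.load_model("yolov5s_keras_model", compile=False)

converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
tflite_model = converter.convert()

with open("yolov5s.tflite", "wb") as f:
    f.write(tflite_model)
```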

BernardinD commented 4 years ago

@zldrobit do you have a script for running the TFLite graph that you could also share?
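For readers with the same question, a generic sketch of running a *.tflite file with the TFLite Python Interpreter (not the PR's actual detect script); the model path is a placeholder, and decoding/NMS of the raw output is omitted:

```python
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="yolov5s.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input with the exact shape and dtype the model expects;
# replace with a letterboxed, normalized image in practice.
img = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])

interpreter.set_tensor(input_details[0]["index"], img)
interpreter.invoke()

pred = interpreter.get_tensor(output_details[0]["index"])  # raw predictions, pre-NMS
print(pred.shape)
```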

BernardinD commented 4 years ago

@zldrobit Did you get the ValueError: Didn't find op for builtin opcode 'RESIZE_NEAREST_NEIGHBOR' version '3' issue?

How did you get around it?

zldrobit commented 4 years ago

@BernardinD I ran into this problem when I used different TF versions for conversion and inference. Which version of TF are you using? For the PR, the only TF version tested successfully with TFLite is 2.3.

BernardinD commented 4 years ago

@zldrobit I used 1.15.3 in the hope of avoiding an update to my edgetpu version.

zldrobit commented 4 years ago

@BernardinD I haven't used an edgetpu. I fail to run a TFLite model on TF 1.15 when it was converted with TF 2.3. With TF 1.15, the saved_model and graph_def formats can be inferred correctly. Which format are you using: TF saved_model, graph_def, or TFLite?

BernardinD commented 4 years ago

I tried running the TF 2.3 TFLite model on TF 1.x, and also tried running the TF 1.15.3 TFLite model with your detect script, and got the same op error with both.

Update: it turns out I was using the TF 2.3 graph in both attempts. I falsely assumed your conversion script had overwritten the previous file, but it actually threw an error during the TFLite conversion because TF 1.15.3 does not have the from_keras_model function for the TFLiteConverter, only from_keras_model_file.

Update: after editing your script to use tf.lite.TFLiteConverter.from_keras_model_file(path) I was able to export the TFLite graph, but when I try to run detection on it I get:

RuntimeError: tensorflow/lite/kernels/strided_slice.cc ellipsis_mask is not implemented yet.Node number 211 (STRIDED_SLICE) failed to prepare.
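For context on the from_keras_model / from_keras_model_file difference mentioned above, a hedged sketch of the version-dependent converter entry points; the .h5 path is a placeholder, and as the thread concludes, the TF 1.15 path runs into unimplemented-op limits, so TF 2.3 is the combination that worked:

```python
import tensorflow as tf

if tf.__version__.startswith("1."):
    # TF 1.x: converts from a Keras .h5 file on disk.
    converter = tf.lite.TFLiteConverter.from_keras_model_file("yolov5s_keras.h5")
else:
    # TF 2.x: converts from an in-memory tf.keras model object.
    keras_model = tf.keras.models.load_model("yolov5s_keras.h5", compile=False)
    converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)

tflite_model = converter.convert()
with open("yolov5s.tflite", "wb") as f:
    f.write(tflite_model)
```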

zldrobit commented 4 years ago

@BernardinD I think this may be a bug in TFLite 1.15. I recommend you use TFLite 2.3, because the TFLite 2.0, 2.1 and 2.2 GPU delegates produce incorrect results for YOLOv3 (https://github.com/tensorflow/tensorflow/issues/40613).

github-actions[bot] commented 3 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

lijianyu1985 commented 2 years ago

@zldrobit Regarding the ValueError: Didn't find op for builtin opcode 'RESIZE_NEAREST_NEIGHBOR' version '3' issue raised above: did you fix this error?

zldrobit commented 2 years ago

@lijianyu1985 We fixed this problem long ago. Please use the latest YOLOv5 v6.1 release and try again.
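For readers landing here now: on recent YOLOv5 releases (v6.x) the TFLite path is built into export.py. A sketch of invoking it from Python, run from the repo root; the checkpoint path is a placeholder and the flags may differ slightly between releases:

```python
import subprocess

subprocess.run(
    [
        "python", "export.py",
        "--weights", "runs/train/exp/weights/best.pt",  # your trained checkpoint
        "--include", "tflite",                          # request a *.tflite artifact
    ],
    check=True,
)
```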