hunglc007 / tensorflow-yolov4-tflite

YOLOv4, YOLOv4-tiny, YOLOv3, YOLOv3-tiny Implemented in Tensorflow 2.0, Android. Convert YOLO v4 .weights tensorflow, tensorrt and tflite
https://github.com/hunglc007/tensorflow-yolov4-tflite
MIT License

Clarify this Repo to official TensorFlow Object Detection API #459

Open Petros626 opened 1 year ago

Petros626 commented 1 year ago

Hello folks,

I would like to start a topic clarifying questions that some of you have already asked. So far, in the OD API, only SSD models can be converted into TensorFlow Lite models (according to the authors), and these models must be fully quantised. The TF1 Model Zoo already offers quantised models for this; the TF2 Model Zoo does not yet. People have tried to train other models with quantisation via the `graph_rewriter` in TF1 — some failed, others succeeded. For TF2 there are no quantised models, and no one has managed to quantise one there.
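For context, TF2 does support post-training full-integer quantisation outside the OD API, via `tf.lite.TFLiteConverter` with a representative dataset. Below is a minimal sketch; the tiny Keras model and the random calibration data are stand-ins for a real trained detector and real input images:

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in model (hypothetical; replace with your trained model).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2),
])

def representative_dataset():
    # Yield a handful of calibration samples shaped like real inputs;
    # in practice these should be real preprocessed images.
    for _ in range(100):
        yield [np.random.rand(1, 4).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full integer quantisation: int8 ops and int8 input/output.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```

Whether a given architecture converts cleanly depends on all of its ops having int8 TFLite kernels, which is exactly where many non-SSD models used to fail.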

Now I came across this repo (and a few others) and see that other models can also be quantised and converted into TensorFlow Lite models. As I understand it, we should be able to convert any trained model into a TensorFlow Lite model using the TensorFlow framework directly (note: not the OD API).

My questions are:

  1. Can any model be quantised with the TensorFlow framework (in an IDE) and converted into a TensorFlow Lite model?
  2. Do I need only the .pb file, or the weights of the model, to do this?
  3. What is the pipeline for this repo: train the model in an IDE -> save the weights or .pb file -> convert to a TensorFlow model (SavedModel or HDF5?) -> convert to a TFLite model?
  4. Can a model converted to TFLite this way then be used with the OD API?
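Regarding the pipeline in question 3, the last two steps can be sketched with standard TF2 APIs: export a SavedModel, convert it with `TFLiteConverter.from_saved_model`, and sanity-check the result with the TFLite interpreter. The model and paths below are placeholders, not this repo's actual training code:

```python
import tensorflow as tf

# 1. Train (or load weights into) a model, then export a SavedModel.
#    Stand-in model; replace with the actual trained network.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(3,)),
    tf.keras.layers.Dense(1),
])
tf.saved_model.save(model, "exported_savedmodel")

# 2. Convert the SavedModel directory to a TFLite flatbuffer.
converter = tf.lite.TFLiteConverter.from_saved_model("exported_savedmodel")
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)

# 3. Sanity-check: load the converted model in the TFLite interpreter.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_shape = interpreter.get_input_details()[0]["shape"]
```

Quantisation (as in question 1) would be added in step 2 via `converter.optimizations` and a representative dataset; without it the conversion above produces a float32 TFLite model.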

I hope for clarifying answers so that beginners can understand what is possible :)