hunglc007 / tensorflow-yolov4-tflite

YOLOv4, YOLOv4-tiny, YOLOv3, YOLOv3-tiny implemented in TensorFlow 2.0 and Android. Convert YOLOv4 .weights to TensorFlow, TensorRT, and TFLite.
https://github.com/hunglc007/tensorflow-yolov4-tflite
MIT License

yolov4 quantize float16 #436

Open programmer-huda123 opened 2 years ago

programmer-huda123 commented 2 years ago

Is it important to quantize YOLOv4 into either float16 or int8? What is the difference between them? Also, when I run the command ( python convert_tflite.py --weights ./checkpoints/yolov4-416 --output ./checkpoints/yolov4-416-fp16.tflite --quantize_mode float16 ) I get this error: OSError: SavedModel file does not exist at: ./checkpoints/yolov4-416/{saved_model.pbtxt|saved_model.pb}

Lin1007 commented 2 years ago
  1. You don't have to quantize the model; quantization just makes inference faster and the model smaller. For the differences between the modes, see the TensorFlow post-training quantization docs.
  2. The "model does not exist" error means the path you passed doesn't contain a saved_model.pb. That file is generated first by running: python save_model.py --weights ./data/yolov4.weights --output ./checkpoints/yolov4-416 --input_size 416 --model yolov4 --framework tflite
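To make the float16 vs. int8 trade-off concrete, here is a small NumPy sketch (not code from this repo, and independent of the TFLite converter) showing what each representation does to a tensor of weights: float16 halves storage and keeps high precision, while int8 quarters storage but maps every value onto 256 levels via a scale factor, so its rounding error is larger.

```python
import numpy as np

# Fake "weights" roughly in the range a trained conv layer might have.
weights = np.random.default_rng(0).normal(0.0, 0.1, 1000).astype(np.float32)

# float16: half the storage of float32, ~3 significant decimal digits kept.
fp16 = weights.astype(np.float16)
fp16_err = np.abs(weights - fp16.astype(np.float32)).max()

# int8 (symmetric quantization sketch): a quarter of the storage, but
# every value is snapped to one of 255 levels spaced by `scale`.
scale = np.abs(weights).max() / 127.0
int8 = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
int8_err = np.abs(weights - int8.astype(np.float32) * scale).max()

print(fp16.nbytes, int8.nbytes)  # 2000 1000 (float32 would be 4000 bytes)
print(fp16_err < int8_err)       # True: float16 reconstructs more closely
```

This is why float16 is the usual "safe" choice (smaller, faster, near-lossless), while int8 gives the biggest size/speed win at the cost of extra quantization error, which is also why full int8 conversion normally needs a representative calibration dataset.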