YOLOv4, YOLOv4-tiny, YOLOv3, YOLOv3-tiny Implemented in Tensorflow 2.0, Android. Convert YOLO v4 .weights tensorflow, tensorrt and tflite
Duplicated quantization flag assignment makes it impossible to create a fully int8 quantized model #439
Open
juandoso opened 2 years ago
https://github.com/hunglc007/tensorflow-yolov4-tflite/blob/9f16748aa3f45ff240608da4bd9b1216a29127f5/convert_tflite.py#L39-L41
The second assignment to `converter.target_spec.supported_ops` supersedes the first one, which contains the int8 flag required for full integer quantization.
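To make the shadowing concrete, here is a minimal, self-contained sketch of the bug. It uses a stand-in class instead of the real `tf.lite.TFLiteConverter` (so it runs without TensorFlow installed), and the op-set names are illustrative; the exact flags in `convert_tflite.py` may differ:

```python
class TargetSpec:
    """Stand-in for tf.lite.TFLiteConverter's target_spec attribute."""
    def __init__(self):
        self.supported_ops = []

class FakeConverter:
    """Stand-in for tf.lite.TFLiteConverter, to illustrate the shadowing."""
    def __init__(self):
        self.target_spec = TargetSpec()

converter = FakeConverter()

# First assignment: request full integer quantization.
converter.target_spec.supported_ops = ["TFLITE_BUILTINS_INT8"]

# Second assignment (the reported bug): rebinds the attribute to a new
# list, silently dropping the int8 flag set above.
converter.target_spec.supported_ops = ["TFLITE_BUILTINS", "SELECT_TF_OPS"]
assert "TFLITE_BUILTINS_INT8" not in converter.target_spec.supported_ops

# Fix: merge everything into a single assignment so the int8 op set
# survives and the converter can emit a fully int8 quantized model.
converter.target_spec.supported_ops = [
    "TFLITE_BUILTINS_INT8",
    "SELECT_TF_OPS",
]
assert "TFLITE_BUILTINS_INT8" in converter.target_spec.supported_ops
```

The same fix applies to the real converter: keep one assignment per quantization mode (or append to the list) instead of assigning `supported_ops` twice.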