hunglc007 / tensorflow-yolov4-tflite

YOLOv4, YOLOv4-tiny, YOLOv3, and YOLOv3-tiny implemented in TensorFlow 2.0 and Android. Converts YOLOv4 .weights to TensorFlow, TensorRT, and TFLite.
https://github.com/hunglc007/tensorflow-yolov4-tflite
MIT License

Problem when quantizing models #485

Open CdAB63 opened 11 months ago

CdAB63 commented 11 months ago

Trying to run inference with a quantized model returns:

$ python detect_video.py --video 0 --weights ./checkpoints/yolov4-tflite-416 --framework tflite

Weights: ./checkpoints/yolov4-tflite-416
Traceback (most recent call last):
  File "detect_video.py", line 125, in <module>
    app.run(main)
  File "/home/ubuntu/.local/lib/python3.8/site-packages/absl/app.py", line 308, in run
    _run_main(main, args)
  File "/home/ubuntu/.local/lib/python3.8/site-packages/absl/app.py", line 254, in _run_main
    sys.exit(main(argv))
  File "detect_video.py", line 40, in main
    interpreter = tf.lite.Interpreter(model_path=FLAGS.weights)
  File "/home/ubuntu/.local/lib/python3.8/site-packages/tensorflow/lite/python/interpreter.py", line 464, in __init__
    self._interpreter = _interpreter_wrapper.CreateWrapperFromFile(
ValueError: Mmap of '4' at offset '0' failed with error '19'.

Weights set with:

$ python save_model.py --weights ./data/yolov4.weights --output ./checkpoints/yolov4-416 --input_size 416 --model yolov4

and then:

$ python convert_tflite.py --quantize_mode int8 --weights ./checkpoints/yolov4-416 --output ./checkpoints/yolov4-416-int8.tflite