PINTO0309 / PINTO_model_zoo

A repository for storing models that have been inter-converted between various frameworks. Supported frameworks are TensorFlow, PyTorch, ONNX, OpenVINO, TFJS, TFTRT, TensorFlowLite (Float32/16/INT8), EdgeTPU, CoreML.
https://qiita.com/PINTO
MIT License

SSD MobileNet V3 - Not loading in TFLite interpreter #274

Closed · mrtpk123 closed this issue 2 years ago

mrtpk123 commented 2 years ago

Issue Type

Support

OS

Ubuntu

OS architecture

x86_64

Programming Language

Python

Framework

TensorFlowLite

Model name and Weights/Checkpoints URL

MobileNetV3 Small trained on COCO - full integer quantized.

https://github.com/PINTO0309/PINTO_model_zoo/blob/main/002_mobilenetv3-ssd/01_mobilenetv3_small/01_coco/04_full_integer_quantization/download.sh

Description

I'm trying to run the full-integer-quantized SSD MobileNetV3 (from your model zoo) with the TFLite interpreter, but I could not load it. I have no clue where to start. If you could direct me on how to debug this, it would be great. Thank you for any help.

Relevant Log Output

Aborted (core dumped)

Sorry, this is the only log that I got. When I tried it on a different machine, the Python script was silently killed.

URL or source code for simple inference testing code

from tensorflow.lite.python.interpreter import Interpreter

# Loading the full-integer-quantized model crashes at allocate_tensors()
interpreter = Interpreter(model_path='ssd_mobilenet_v3_small_coco_full_integer_quant_sp.tflite')
interpreter.allocate_tensors()
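
In case it helps others debug similar crashes, here is a minimal sketch, assuming CPython 3.3+, that uses the standard-library faulthandler module to print the Python traceback even when the process dies on a native signal such as SIGABRT:

import faulthandler
faulthandler.enable()  # dump the Python traceback on fatal signals (SIGSEGV, SIGABRT, ...)

from tensorflow.lite.python.interpreter import Interpreter

interpreter = Interpreter(model_path='ssd_mobilenet_v3_small_coco_full_integer_quant_sp.tflite')
interpreter.allocate_tensors()

The same can be enabled without editing the script by running python -X faulthandler script.py.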

ghost commented 2 years ago

What version of TensorFlow Lite do you have? Version 2 models can't run in TensorFlow Lite version 1.
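
A quick sanity check, assuming the TF 2.x Python package (which bundles a TFLite runtime of the same version):

import tensorflow as tf

# The bundled TFLite interpreter matches the installed TF version.
print(tf.__version__)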

ghost commented 2 years ago

https://repo1.maven.org/maven2/org/tensorflow/tensorflow-lite/

mrtpk123 commented 2 years ago

Hi @DonkeySmall, thank you for commenting.

I have used TF 2.8. Is there a way to export the pretrained model so that I can run it on TF 2.8?

ghost commented 2 years ago

TFLite models from both version 1 and version 2 can run in the TensorFlow Lite version 2 interpreter; only version 2 models will not run in the version 1 interpreter. Since you are on TF 2.8, the cause is apparently something else.

Sorry, English is not my native language.

PINTO0309 commented 2 years ago

import tensorflow as tf

print(f'TF Ver: {tf.__version__}')
interpreter = tf.lite.Interpreter(
    model_path='ssd_mobilenet_v3_small_coco_full_integer_quant.tflite',
    num_threads=4,
)
interpreter.allocate_tensors()

Output:

TF Ver: 2.9.0
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
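
For reference, once allocate_tensors() succeeds, a minimal end-to-end smoke test with dummy input could look like this (shapes and dtypes are queried from the model rather than assumed):

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(
    model_path='ssd_mobilenet_v3_small_coco_full_integer_quant.tflite',
    num_threads=4,
)
interpreter.allocate_tensors()

# Query input/output metadata instead of hardcoding shapes and dtypes.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

dummy = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()

for out in output_details:
    print(out['name'], interpreter.get_tensor(out['index']).shape)
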
mrtpk123 commented 2 years ago

Thank you for your help.

The code I used:

import tensorflow as tf

print(f'TF Ver: {tf.__version__}')
interpreter = tf.lite.Interpreter(
    model_path='ssd_mobilenet_v3_small_coco_full_integer_quant.tflite',
    num_threads=4,
)
interpreter.allocate_tensors()
print("success!!")

System 1:

2022-08-01 11:59:06.584306: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /work/test_cortione/cortione/software/libs:/work/test_cortione/cortione/software/distribute/cortiapps/::/work/demo_test/corti_img_processing/lib
2022-08-01 11:59:06.584344: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
TF Ver: 2.6.2
Aborted (core dumped)

System 2:

2022-08-01 17:26:28.951793: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
2022-08-01 17:26:28.952062: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
TF Ver: 2.9.1
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.

Note that the print statement at the end was not executed in either case.

Thank you for looking into this.
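
A possible next step, assuming TF 2.7+ is available, is the experimental model analyzer, which parses the .tflite flatbuffer without allocating tensors and can at least confirm whether the file itself is readable:

import tensorflow as tf

# Prints the ops and tensors in the flatbuffer without running the model.
tf.lite.experimental.Analyzer.analyze(
    model_path='ssd_mobilenet_v3_small_coco_full_integer_quant.tflite'
)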

PINTO0309 commented 2 years ago

https://github.com/PINTO0309/PINTO_model_zoo/blob/main/002_mobilenetv3-ssd/01_mobilenetv3_small/01_coco/01_float/download.sh

docker run --gpus all -it --rm \
-v `pwd`:/home/user/workdir \
ghcr.io/pinto0309/openvino2tensorflow:latest

pb_to_tflite \
--pb_file_path tflite_graph/tflite_graph.pb \
--inputs normalized_input_image_tensor \
--outputs raw_outputs/class_predictions,raw_outputs/box_encodings
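
For comparison, roughly the same conversion should be possible with stock TensorFlow's compat.v1 converter; a sketch, assuming the same frozen graph and node names as in the command above:

import tensorflow as tf

# Convert the frozen graph directly to a float32 TFLite model.
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file='tflite_graph/tflite_graph.pb',
    input_arrays=['normalized_input_image_tensor'],
    output_arrays=['raw_outputs/class_predictions', 'raw_outputs/box_encodings'],
)
tflite_model = converter.convert()
with open('model_from_pb_float32.tflite', 'wb') as f:
    f.write(tflite_model)

The converted float32 model then loads cleanly: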
import tensorflow as tf

print(f'TF Ver: {tf.__version__}')
interpreter = tf.lite.Interpreter(
    model_path='saved_model_from_pb/model_from_pb_float32.tflite',
    num_threads=4,
)
interpreter.allocate_tensors()
print("success!!")

Output:

TF Ver: 2.9.0
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
success!!
PINTO0309 commented 2 years ago

docker run --gpus all -it --rm \
-v `pwd`:/home/user/workdir \
ghcr.io/pinto0309/openvino2tensorflow:latest

pb_to_saved_model \
--pb_file_path tflite_graph/tflite_graph.pb \
--inputs normalized_input_image_tensor:0 \
--outputs raw_outputs/class_predictions:0,raw_outputs/box_encodings:0
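
Roughly the same step with stock TensorFlow, sketched under the assumption that the frozen graph imports cleanly under compat.v1:

import tensorflow.compat.v1 as tf1

# Wrap the frozen graph in a SavedModel with an explicit serving signature.
with tf1.Session(graph=tf1.Graph()) as sess:
    graph_def = tf1.GraphDef()
    with open('tflite_graph/tflite_graph.pb', 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf1.import_graph_def(graph_def, name='')
    g = sess.graph
    tf1.saved_model.simple_save(
        sess,
        'saved_model_from_pb',
        inputs={'normalized_input_image_tensor':
                g.get_tensor_by_name('normalized_input_image_tensor:0')},
        outputs={'raw_outputs/class_predictions':
                 g.get_tensor_by_name('raw_outputs/class_predictions:0'),
                 'raw_outputs/box_encodings':
                 g.get_tensor_by_name('raw_outputs/box_encodings:0')},
    )

Inspecting the resulting SavedModel: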
saved_model_cli show --dir saved_model_from_pb --all

MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['normalized_input_image_tensor'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 320, 320, 3)
        name: normalized_input_image_tensor:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['raw_outputs/box_encodings'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 2034, 4)
        name: raw_outputs/box_encodings:0
    outputs['raw_outputs/class_predictions'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 2034, 91)
        name: raw_outputs/class_predictions:0
  Method name is: tensorflow/serving/predict
saved_model_to_tflite \
--saved_model_dir_path saved_model_from_pb \
--output_no_quant_float32_tflite \
--output_dynamic_range_quant_tflite \
--output_weight_quant_tflite \
--output_float16_quant_tflite \
--output_integer_quant_tflite \
--output_full_integer_quant_tflite \
--output_integer_quant_type 'uint8' \
--string_formulas_for_normalization 'data / 255.0' \
--output_tfjs \
--output_coreml \
--output_onnx \
--onnx_opset 11 \
--output_edgetpu
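
The full-integer option corresponds roughly to the standard TFLite post-training quantization flow; a sketch with a random-data calibration set (real images should be used in practice, matching the 'data / 255.0' normalization above):

import numpy as np
import tensorflow as tf

def representative_dataset():
    # Random stand-in for real calibration images; shape (1, 320, 320, 3), values in [0, 1].
    for _ in range(100):
        yield [np.random.rand(1, 320, 320, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model('saved_model_from_pb')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open('model_full_integer_quant.tflite', 'wb') as f:
    f.write(converter.convert())

Verifying the model produced by the toolkit: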
import tensorflow as tf

print(f'TF Ver: {tf.__version__}')
interpreter = tf.lite.Interpreter(
    model_path='tflite_from_saved_model/model_full_integer_quant.tflite',
    num_threads=4,
)
interpreter.allocate_tensors()
print("success!!")

Output:

TF Ver: 2.9.0
success!!

mrtpk123 commented 2 years ago

Thank you for your advice and help. Really appreciate it.