PINTO0309 / PINTO_model_zoo

A repository for storing models that have been inter-converted between various frameworks. Supported frameworks are TensorFlow, PyTorch, ONNX, OpenVINO, TFJS, TFTRT, TensorFlowLite (Float32/16/INT8), EdgeTPU, CoreML.
https://qiita.com/PINTO
MIT License

question: what is the previous step to quantize the nanodet model? #55

Closed. wwdok closed this issue 3 years ago.

wwdok commented 3 years ago

Hi @PINTO0309, thank you for providing these great samples. I am interested in using the NanoDet TFLite model. You have provided the conversion Python scripts, but I have a question: what file is used to export the TFLite model in saved_model_320x320? Is it model_float32.pb? If so, how is it generated? Could you please give me some instructions? Many thanks!

PINTO0309 commented 3 years ago

If I try to explain it properly, it will take a long time. Please wait a moment while I prepare it.

wwdok commented 3 years ago

Oh, sorry that this will take you a long time. Let me narrow down the scope of the question: does model_float32.pb come from format conversion or from transfer retraining? You can just explain the key points and I will spend time diving into it, thanks!

PINTO0309 commented 3 years ago

Environment

  1. PyTorch
  2. ONNX Runtime
  3. OpenVINO 2021.1+
  4. tf-nightly

Conversion Workflow

PyTorch -> ONNX -> OpenVINO IR -> openvino2tensorflow -> TFLite
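
PyTorch -> ONNX

This first step is not shown as a command in the thread; below is a minimal sketch of the export, where build_nanodet() and the checkpoint filename are hypothetical placeholders rather than the actual NanoDet repository API:

import torch

# Placeholder: build the NanoDet network and load trained weights
# (in practice, use the model definition and checkpoint from the nanodet repo).
model = build_nanodet()
state = torch.load("nanodet_m.ckpt", map_location="cpu")
model.load_state_dict(state["state_dict"])
model.eval()

# NCHW dummy input matching --input_shape [1,3,320,320] used with mo.py below
dummy_input = torch.randn(1, 3, 320, 320)
torch.onnx.export(
    model, dummy_input, "nanodet_320x320.onnx",
    input_names=["i"],   # matches the --input i flag passed to mo.py
    opset_version=11,
)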

ONNX -> OpenVINO IR

$ python3 ${INTEL_OPENVINO_DIR}/deployment_tools/model_optimizer/mo.py \
  --input_model nanodet_320x320.onnx \
  --input i \
  --input_shape [1,3,320,320] \
  --output_dir openvino/nanodet_320x320/FP32 \
  --data_type FP32

Three types of files will be generated: .xml, .bin, and .mapping.
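
Before editing, the IR can be sanity-checked with the OpenVINO 2021.x Python API. A small sketch, with the paths assumed from the --output_dir above:

from openvino.inference_engine import IECore

# Load the generated IR and print its input/output shapes.
ie = IECore()
net = ie.read_network(
    model="openvino/nanodet_320x320/FP32/nanodet_320x320.xml",
    weights="openvino/nanodet_320x320/FP32/nanodet_320x320.bin",
)
print("inputs :", {name: info.input_data.shape for name, info in net.input_info.items()})
print("outputs:", {name: data.shape for name, data in net.outputs.items()})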

Edit and optimize the .xml to remove meaningless leftover layers

1. Before (screenshot)

2. After (screenshot)

OpenVINO IR -> TFLite, saved_model, pb, quantized tflite, h5

https://github.com/PINTO0309/openvino2tensorflow.git

$ openvino2tensorflow \
  --model_path onnx/openvino/nanodet_320x320/FP32/nanodet_320x320.xml \
  --output_saved_model True \
  --output_h5 True \
  --output_pb True \
  --output_no_quant_float32_tflite True \
  --output_weight_quant_tflite True \
  --output_float16_quant_tflite True
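
The flags above produce the float32, weight-quantized, and float16 tflite files. If a fully INT8-quantized tflite is also needed (the quantization the issue title asks about), one option is post-training quantization of the generated saved_model with the TensorFlow Lite converter. A minimal sketch, where the saved_model path is an assumption and representative_images() is a hypothetical generator that should yield real preprocessed frames (NHWC [1,320,320,3], since openvino2tensorflow converts the layout):

import numpy as np
import tensorflow as tf

def representative_images():
    # Replace the random data with real preprocessed frames from the target domain.
    for _ in range(100):
        yield [np.random.rand(1, 320, 320, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_320x320")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_images
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_model = converter.convert()

with open("model_integer_quant.tflite", "wb") as f:
    f.write(tflite_model)
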
PINTO0309 commented 3 years ago

@wwdok

"does model_float32.pb come from format conversion or transfer retraining?"

Format conversion:

--output_pb True

wwdok commented 3 years ago

Wow, the conversion process is longer than I thought. I am really thankful for this helpful guide!