PINTO0309 / PINTO_model_zoo

A repository for storing models that have been inter-converted between various frameworks. Supported frameworks are TensorFlow, PyTorch, ONNX, OpenVINO, TFJS, TFTRT, TensorFlowLite (Float32/16/INT8), EdgeTPU, CoreML.
https://qiita.com/PINTO
MIT License

Is there an easy way to convert ONNX or PB from (NCHW) to (NHWC)? #15

AlexeyAB closed this issue 3 years ago

AlexeyAB commented 4 years ago

@PINTO0309 Hi, Nice work with YOLOv4 / tiny!

As I see you use:

I have several questions:

PINTO0309 commented 3 years ago

@itsmasabdi

Since output_weight_quant_tflite only quantizes the weights, it cannot be converted to an EdgeTPU model. The EdgeTPU requires Full Integer Quantization, in which all OPs are quantized, so the model must be converted with the following procedure.

```bash
$ openvino2tensorflow \
  --model_path={model_path} \
  --output_saved_model True
```

```python
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds

raw_test_data, info = tfds.load(name="voc/2007", with_info=True,
                                split="validation", data_dir="~/TFDS",
                                download=True)

def representative_dataset_gen():
    for data in raw_test_data.take(10):
        image = data['image'].numpy()
        image = tf.image.resize(image, (416, 416))
        image = image[np.newaxis, :, :, :]
        image = image / 255.
        yield [image]
```

Full Integer Quantization - Input/Output=uint8

```python
converter = tf.lite.TFLiteConverter.from_saved_model('saved_model')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
converter.representative_dataset = representative_dataset_gen
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_quant_model = converter.convert()
with open('yolov4_416_full_integer_quant.tflite', 'wb') as w:
    w.write(tflite_quant_model)
print("Full Integer Quantization complete! - yolov4_416_full_integer_quant.tflite")
```

```bash
$ python3 quantization.py
$ edgetpu_compiler -s yolov4_416_full_integer_quant.tflite
```

You should adjust the normalization in representative_dataset_gen to match the preprocessing your model was trained with.
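For instance, if the model was trained with ImageNet-style per-channel mean/std normalization rather than a simple divide-by-255, the yielded samples must reflect that. A minimal NumPy sketch of both variants (the function names and the mean/std constants here are illustrative, not from the repository):

```python
import numpy as np

def normalize_simple(image_uint8):
    # Scale uint8 pixels to [0, 1] -- matches the `image / 255.` used above.
    return image_uint8.astype(np.float32) / 255.

def normalize_mean_std(image_uint8,
                       mean=(0.485, 0.456, 0.406),
                       std=(0.229, 0.224, 0.225)):
    # ImageNet-style per-channel normalization; substitute whatever
    # mean/std your training pipeline actually used.
    x = image_uint8.astype(np.float32) / 255.
    return (x - np.array(mean, np.float32)) / np.array(std, np.float32)
```

Whichever variant you pick, the same function should be applied inside representative_dataset_gen so the calibration data matches inference-time preprocessing.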

itsmasabdi commented 3 years ago

Hi @PINTO0309

Thanks for your response. I am having trouble converting the model and was wondering if you could help.

This time I:

The error I get is:

```
Traceback (most recent call last):
  File "quant.py", line 21, in <module>
    converter = tf.lite.TFLiteConverter.from_saved_model('saved_model')
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/lite.py", line 1069, in from_saved_model
    saved_model = _load(saved_model_dir, tags)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/load.py", line 859, in load
    return load_internal(export_dir, tags, options)["root"]
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/load.py", line 909, in load_internal
    root = load_v1_in_v2.load(export_dir, tags)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/load_v1_in_v2.py", line 279, in load
    return loader.load(tags=tags)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/load_v1_in_v2.py", line 204, in load
    meta_graph_def = self.get_meta_graph_def_from_tags(tags)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/load_v1_in_v2.py", line 87, in get_meta_graph_def_from_tags
    tags)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/loader_impl.py", line 358, in get_meta_graph_def_from_tags
    "\navailable_tags: " + str(available_tags))
RuntimeError: MetaGraphDef associated with tags {'serve'} could not be found in SavedModel. To inspect available tag-sets in the SavedModel, please use the SavedModel CLI: `saved_model_cli`
available_tags: []
```
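Since the error reports `available_tags: []`, the SavedModel directory apparently contains no tag-sets at all, which suggests the export itself went wrong rather than a tag mismatch. One way to confirm this is the SavedModel CLI that the error message itself points to (assuming `saved_model` is the export directory from the earlier step):

```shell
# List the tag-sets present in the SavedModel directory.
saved_model_cli show --dir saved_model

# Dump all MetaGraphs, signatures, and tensor shapes for a closer look.
saved_model_cli show --dir saved_model --all
```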

I tried different TensorFlow versions but can't get past this error.

I would appreciate your help.

PINTO0309 commented 3 years ago

@itsmasabdi You should use the Docker environment. https://github.com/PINTO0309/openvino2tensorflow#4-setup
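For reference, entering that Docker environment typically looks like the following; the image name and mount path here are assumptions based on the linked README, so verify them against it before running:

```shell
# Pull the prebuilt conversion environment (image name assumed from the
# linked README; check the README for the current tag).
docker pull ghcr.io/pinto0309/openvino2tensorflow:latest

# Start a container with the current directory mounted as the workspace,
# then run openvino2tensorflow and quantization.py inside it.
docker run -it --rm \
  -v `pwd`:/home/user/workdir \
  ghcr.io/pinto0309/openvino2tensorflow:latest
```

Running the conversion inside this container pins the TensorFlow and OpenVINO versions the tool was built against, which avoids version-mismatch errors like the one above.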

Fafa87 commented 2 years ago

Just some info for anyone who prefers to avoid the OpenVINO path: