PINTO0309 / PINTO_model_zoo

A repository for storing models that have been inter-converted between various frameworks. Supported frameworks are TensorFlow, PyTorch, ONNX, OpenVINO, TFJS, TFTRT, TensorFlowLite (Float32/16/INT8), EdgeTPU, CoreML.
https://qiita.com/PINTO
MIT License

MoveNet: error on loading model with Openvino on GPU #150

Closed fan2tamo closed 2 years ago

fan2tamo commented 2 years ago

1. OS you are using

Windows 10

2. OS Architecture

x86_64

3. Version of OpenVINO

openvino_2021.0.4.689

9. URL of the repository from which the transformed model was taken

https://github.com/PINTO0309/PINTO_model_zoo/tree/main/115_MoveNet

model : MoveNet thunder v4(FP16)

10. URL or source code for simple inference testing code


from openvino.inference_engine import IECore
import numpy as np
import cv2

XML_PATH = "thunder_v4_saved_model/openvino/FP16/saved_model.xml"
BIN_PATH = "thunder_v4_saved_model/openvino/FP16/saved_model.bin"

ie = IECore()
net = ie.read_network(model=XML_PATH, weights=BIN_PATH)
input_blob = next(iter(net.input_info))   # name of the single input
output_blob = next(iter(net.outputs))     # name of the single output
exec_net = ie.load_network(net, device_name='GPU', num_requests=1)  # <- RuntimeError occurs here
inference_request = exec_net.requests[0]
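
For context, once load_network succeeds (it does with device_name='CPU'), the network can be driven as sketched below, continuing from the snippet above. The NCHW 1x3x256x256 input layout and the [1, 1, 17, 3] keypoint output (y, x, score per joint) are assumptions about the converted MoveNet IR, not something confirmed in this report.

exec_net_cpu = ie.load_network(net, device_name='CPU', num_requests=1)   # CPU load works
frame = cv2.imread("person.jpg")                                         # any test image
n, c, h, w = net.input_info[input_blob].input_data.shape                 # NCHW layout assumed
blob = cv2.resize(frame, (w, h)).transpose(2, 0, 1)[np.newaxis, ...].astype(np.float32)
result = exec_net_cpu.infer({input_blob: blob})
keypoints = result[output_blob]                                          # [1, 1, 17, 3]: y, x, score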

11. Issue Details

Thank you for publishing a nice model.

I can run MoveNet (thunder_v4) on the CPU, but not on the GPU. On the GPU, load_network() raises a runtime error. The error is as follows.

RuntimeError: Error has occured for: floor:floordiv
Requested activation is not supported for integer type.

Is this related to this issue? https://github.com/PINTO0309/PINTO_model_zoo/issues/100

PINTO0309 commented 2 years ago

I dealt with this problem a long time ago. It occurs because the OpenVINO IR files in PINTO_model_zoo were generated before the conversion tool was fixed. If you convert the model again from the tflite file, the error should no longer occur. The underlying problem is in the GPU and Myriad implementations of OpenVINO.

$ docker run -it --rm \
  -v `pwd`:/home/user/workdir \
  ghcr.io/pinto0309/tflite2tensorflow:latest

$ tflite2tensorflow \
  --model_path xxxx.tflite \
  --flatc_path ../flatc \
  --schema_path ../schema.fbs \
  --output_pb \
  --optimizing_for_openvino_and_myriad \
  --rigorous_optimization_for_myriad

$ tflite2tensorflow \
  --model_path xxxx.tflite \
  --flatc_path ../flatc \
  --schema_path ../schema.fbs \
  --output_openvino_and_myriad
fan2tamo commented 2 years ago

Thank you for your reply. I see, so it was an OpenVINO problem; sorry for my misunderstanding.

I tried to convert the model following the procedure you described. I ran this command in your Docker container, but it failed.

tflite2tensorflow \
  --model_path model.tflite \
  --flatc_path ../flatc \
  --schema_path ../schema.fbs \
  --output_pb \
  --optimizing_for_openvino_and_myriad \
  --rigorous_optimization_for_myriad

The model I used is this one: https://tfhub.dev/google/lite-model/movenet/singlepose/thunder/tflite/float16/4?lite-format=tflite

The errors that occurred are as follows.

ERROR: The name 'serving_default_input:0:0' looks a like a Tensor name, but is not a valid one. Tensor names must be of the form "<op_name>:<output_index>".
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3926, in _as_graph_element_locked
    op_name, out_n = name.split(":")
ValueError: too many values to unpack (expected 2)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/bin/tflite2tensorflow", line 5747, in main
    inputs= {re.sub(':0*', '', t): graph.get_tensor_by_name(t) for t in input_node_names},
  File "/usr/local/bin/tflite2tensorflow", line 5747, in <dictcomp>
    inputs= {re.sub(':0*', '', t): graph.get_tensor_by_name(t) for t in input_node_names},
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 4071, in get_tensor_by_name
    return self.as_graph_element(name, allow_tensor=True, allow_operation=False)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3895, in as_graph_element
    return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3929, in _as_graph_element_locked
    raise ValueError("The name %s looks a like a Tensor name, but is "
ValueError: The name 'serving_default_input:0:0' looks a like a Tensor name, but is not a valid one. Tensor names must be of the form "<op_name>:<output_index>".

Please let me know if my usage is wrong. This was my first time using tflite2tensorflow, so I may have made a mistake.

PINTO0309 commented 2 years ago

This is caused by the :0 suffix at the end of the input/output tensor names in the tflite file.
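
To make the failure concrete: the tflite tensor name already ends in :0, and it appears another :0 is appended when the tensor is looked up, so TensorFlow rejects the resulting serving_default_input:0:0 because it splits on ":" and expects exactly two parts. A tiny illustration (not the tool's actual code):

# Why the trailing ":0" breaks the lookup: TensorFlow expects "<op_name>:<output_index>".
name = "serving_default_input:0" + ":0"   # the tflite name already carries ":0"
try:
    op_name, out_n = name.split(":")      # three parts -> ValueError, as in the traceback above
except ValueError as e:
    print(e)                              # too many values to unpack (expected 2)

# Once the ":0" is removed from the tflite name, the lookup form is valid again:
op_name, out_n = "serving_default_input:0".split(":")
print(op_name, out_n)                     # serving_default_input 0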

$ ls -l
total 12292
-rw-rw-r-- 1 xxx xxx 12584128 11 10 08:42 lite-model_movenet_singlepose_thunder_tflite_float16_4.tflite
$ docker run -it --rm \
-v `pwd`:/home/user/workdir \
ghcr.io/pinto0309/tflite2tensorflow:latest

[setupvars.sh] OpenVINO environment initialized
user@d0270a51fd2a:~/workdir$ 

$ ls -l
total 12292
-rw-rw-r-- 1 user user 12584128 Nov  9 23:42 lite-model_movenet_singlepose_thunder_tflite_float16_4.tflite

$ tflite2tensorflow \
--model_path lite-model_movenet_singlepose_thunder_tflite_float16_4.tflite \
--flatc_path ../flatc \
--schema_path ../schema.fbs \
--output_pb \
--optimizing_for_openvino_and_myriad \
--rigorous_optimization_for_myriad

saved_model / .pb output started ====================================================
ERROR: The name 'serving_default_input:0:0' looks a like a Tensor name, but is not a valid one. Tensor names must be of the form "<op_name>:<output_index>".
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3926, in _as_graph_element_locked
    op_name, out_n = name.split(":")
ValueError: too many values to unpack (expected 2)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/bin/tflite2tensorflow", line 5747, in main
    inputs= {re.sub(':0*', '', t): graph.get_tensor_by_name(t) for t in input_node_names},
  File "/usr/local/bin/tflite2tensorflow", line 5747, in <dictcomp>
    inputs= {re.sub(':0*', '', t): graph.get_tensor_by_name(t) for t in input_node_names},
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 4071, in get_tensor_by_name
    return self.as_graph_element(name, allow_tensor=True, allow_operation=False)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3895, in as_graph_element
    return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3929, in _as_graph_element_locked
    raise ValueError("The name %s looks a like a Tensor name, but is "
ValueError: The name 'serving_default_input:0:0' looks a like a Tensor name, but is not a valid one. Tensor names must be of the form "<op_name>:<output_index>".

Open lite-model_movenet_singlepose_thunder_tflite_float16_4.json with a text editor.

Remove all of the :0 suffixes, then overwrite and save the JSON.

Change the input layer type from UINT8 to FLOAT32. From:

  "subgraphs": [
    {
      "tensors": [
        {
          "shape": [
            1,
            256,
            256,
            3
          ],
          "type": "UINT8",
          "buffer": 1,
          "name": "serving_default_input:0",
          "quantization": {
            "details_type": "NONE",
            "quantized_dimension": 0
          },
          "is_variable": false
        },

To:

  "subgraphs": [
    {
      "tensors": [
        {
          "shape": [
            1,
            256,
            256,
            3
          ],
          "type": "FLOAT32",
          "buffer": 1,
          "name": "serving_default_input",
          "quantization": {
            "details_type": "NONE",
            "quantized_dimension": 0
          },
          "is_variable": false
        },
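
For anyone who prefers not to edit the JSON by hand, the same two edits can be scripted; below is a minimal Python sketch. The layout of the flatc-generated JSON (subgraphs, tensors, signature_defs) is assumed from schema.fbs rather than shown in this thread, so adjust the field names if your dump differs. The edited JSON is then fed back to flatc exactly as in the next step.

import json
import re

JSON_PATH = "lite-model_movenet_singlepose_thunder_tflite_float16_4.json"

with open(JSON_PATH) as f:
    model = json.load(f)

def strip_zero_suffix(name):
    # Drop every trailing ":0" (e.g. "serving_default_input:0" -> "serving_default_input")
    return re.sub(r"(:0)+$", "", name)

for subgraph in model.get("subgraphs", []):
    for tensor in subgraph.get("tensors", []):
        tensor["name"] = strip_zero_suffix(tensor["name"])
        # The input tensor is declared UINT8 in this tflite; switch it to FLOAT32
        if tensor["name"] == "serving_default_input":
            tensor["type"] = "FLOAT32"

# Signature definitions reference the same names, so clean them too (structure assumed)
for sig in model.get("signature_defs", []):
    for entry in sig.get("inputs", []) + sig.get("outputs", []):
        if "name" in entry:
            entry["name"] = strip_zero_suffix(entry["name"])

with open(JSON_PATH, "w") as f:
    json.dump(model, f, indent=2)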

Delete lite-model_movenet_singlepose_thunder_tflite_float16_4.tflite, then regenerate the .tflite from the JSON.

$ ../flatc -o . -b ../schema.fbs lite-model_movenet_singlepose_thunder_tflite_float16_4.json


Delete lite-model_movenet_singlepose_thunder_tflite_float16_4.json.

Rerun.

$ tflite2tensorflow \
--model_path lite-model_movenet_singlepose_thunder_tflite_float16_4.tflite \
--flatc_path ../flatc \
--schema_path ../schema.fbs \
--output_pb \
--optimizing_for_openvino_and_myriad \
--rigorous_optimization_for_myriad

  : 
  :
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ op: CONCATENATION
{'builtin_options': {'axis': 3, 'fused_activation_function': 'NONE'},
 'builtin_options_type': 'ConcatenationOptions',
 'custom_options_format': 'FLEXBUFFERS',
 'inputs': [315, 314],
 'opcode_index': 16,
 'outputs': [316]}
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
outputs:
{'dtype': <class 'numpy.float32'>,
 'index': 316,
 'name': 'StatefulPartitionedCall',
 'quantization': (0.0, 0),
 'quantization_parameters': {'quantized_dimension': 0,
                             'scales': array([], dtype=float32),
                             'zero_points': array([], dtype=int32)},
 'shape': array([ 1,  1, 17,  3], dtype=int32),
 'shape_signature': array([ 1,  1, 17,  3], dtype=int32),
 'sparsity_parameters': {}}
TensorFlow/Keras model building process complete!
saved_model / .pb output started ====================================================
saved_model / .pb output complete!


$ tflite2tensorflow \
--model_path lite-model_movenet_singlepose_thunder_tflite_float16_4.tflite \
--flatc_path ../flatc \
--schema_path ../schema.fbs \
--output_openvino_and_myriad

  :
  :
Model Optimizer version:    2021.4.0-3839-cd81789d294-releases/2021/4
[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: /home/user/workdir/saved_model/openvino/FP16/saved_model.xml
[ SUCCESS ] BIN file: /home/user/workdir/saved_model/openvino/FP16/saved_model.bin
[ SUCCESS ] Total execution time: 15.68 seconds. 
[ SUCCESS ] Memory consumed: 1035 MB. 
It's been a while, check for a new version of Intel(R) Distribution of OpenVINO(TM) toolkit here https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html?cid=other&source=prod&campid=ww_2021_bu_IOTG_OpenVINO-2021-4-LTS&content=upg_all&medium=organic or on the GitHub*

OpenVINO IR FP16 convertion complete! - saved_model/openvino/FP16
Myriad Inference Engine blob convertion started ============================================
Inference Engine: 
    IE version ......... 2021.4.0
    Build ........... 2021.4.0-3839-cd81789d294-releases/2021/4
[Warning][VPU][Config] Deprecated option was used : VPU_MYRIAD_PLATFORM
Done

Myriad Inference Engine blob convertion complete! - saved_model/openvino/myriad


  1. For use with GPU or Myriad
    $ rm saved_model.xml
    $ cp saved_model_myriad.xml saved_model.xml
  2. For use with CPU
    $ rm saved_model.xml
    $ cp saved_model_vino.xml saved_model.xml
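
After copying the appropriate XML over saved_model.xml, the regenerated IR can be sanity-checked on the GPU with the same snippet as in the original report; the paths below assume the workdir layout shown above.

from openvino.inference_engine import IECore

XML_PATH = "saved_model/openvino/FP16/saved_model.xml"
BIN_PATH = "saved_model/openvino/FP16/saved_model.bin"

ie = IECore()
net = ie.read_network(model=XML_PATH, weights=BIN_PATH)
# With the regenerated IR, this call should no longer raise the floor/floordiv RuntimeError
exec_net = ie.load_network(net, device_name='GPU', num_requests=1)
print("Loaded on GPU:", list(net.input_info), "->", list(net.outputs))
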
fan2tamo commented 2 years ago

Thank you for the guidance. The conversion was successful, and the model now works on the GPU!

Thank you for your detailed explanation. If a similar issue has already been posted, I apologize for not noticing it.