PINTO0309 / PINTO_model_zoo

A repository for storing models that have been inter-converted between various frameworks. Supported frameworks are TensorFlow, PyTorch, ONNX, OpenVINO, TFJS, TFTRT, TensorFlowLite (Float32/16/INT8), EdgeTPU, CoreML.
https://qiita.com/PINTO
MIT License

"091_gaze-estimation-adas-0002" failed #341

Closed. HzHzHzHzHz closed this issue 1 year ago.

HzHzHzHzHz commented 1 year ago

Issue Type

Bug

OS

Windows

OS architecture

x86_64

Programming Language

Python

Framework

TensorFlow, OpenVINO

Model name and Weights/Checkpoints URL

FP32/gaze-estimation-adas-0002.xml, FP32/gaze-estimation-adas-0002.bin

Description

Conversion with openvino2tensorflow fails. Command used:

openvino2tensorflow \
--model_path openvino/gaze-estimation-adas-0002/gaze-estimation-adas-0002.xml \
--output_saved_model True \
--output_weight_quant_tflite True \
--string_formulas_for_normalization 'data / 255.0'
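
For reference, the IR's input names and shapes can be checked with the OpenVINO Python runtime before conversion. A minimal sketch, assuming the FP32 IR files above are in the working directory and an openvino.runtime-style API (OpenVINO 2022.1 or later):

# Minimal sketch: inspect the OpenVINO IR inputs before running openvino2tensorflow.
# Assumes gaze-estimation-adas-0002.xml/.bin are in the current directory.
from openvino.runtime import Core

core = Core()
model = core.read_model("gaze-estimation-adas-0002.xml")  # matching .bin is located automatically

# The three inputs named later in this thread (right_eye_image, left_eye_image,
# head_pose_angles) should show up here with their shapes and element types.
for inp in model.inputs:
    print(inp.get_any_name(), inp.partial_shape, inp.get_element_type())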

Relevant Log Output

ERROR: ConcatOp : Ranks of all input tensors should match: shape[0] = [1,1] vs. shape[1] = [1] [Op:ConcatV2] name: concat
ERROR: model_path  : E:\Htestp\PINTO_model_zoo-main\091_gaze-estimation-adas-0002\H\FP32\gaze-estimation-adas-0002.xml
ERROR: weights_path: E:\Htestp\PINTO_model_zoo-main\091_gaze-estimation-adas-0002\H\FP32\gaze-estimation-adas-0002.bin
ERROR: layer_id    : 109
ERROR: input_layer0 layer_id=107: tf.Tensor([[1]], shape=(1, 1), dtype=int64)
ERROR: input_layer1 layer_id=108: Const(ndarray).shape  (1,)
array([-1])
ERROR: The trace log is below.
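
For reference, the rank mismatch in the log can be reproduced in isolation: layer 109 tries to concatenate the rank-2 tensor [[1]] (layer_id=107) with the rank-1 constant [-1] (layer_id=108), which tf.concat rejects. A minimal sketch of the same failure:

# Minimal sketch reproducing the ConcatV2 rank mismatch from the log above.
import tensorflow as tf

a = tf.constant([[1]], dtype=tf.int64)  # shape (1, 1), as in input_layer0 (layer_id=107)
b = tf.constant([-1], dtype=tf.int64)   # shape (1,),   as in input_layer1 (layer_id=108)

try:
    tf.concat([a, b], axis=0)
except tf.errors.InvalidArgumentError as e:
    print(e)  # ConcatOp : Ranks of all input tensors should match ...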

URL or source code for simple inference testing code

No response

PINTO0309 commented 1 year ago
pip install -U onnx2tf onnxruntime onnx==1.13.1 psutil

onnx2tf \
-i gaze_estimation_adas_0002_zero_remove.onnx \
-kat right_eye_image left_eye_image head_pose_angles \
-osd \
-coion \
-oiqt \
-qt per-tensor
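
If the conversion succeeds, the generated TFLite file can be smoke-tested with the TensorFlow Lite interpreter. A minimal sketch, assuming onnx2tf wrote its output to saved_model/ and the float32 TFLite file name below (both assumptions; point the path at whatever was actually produced):

# Minimal sketch: run one inference on a TFLite file produced by onnx2tf.
# The model path is an assumption; adjust it to the file onnx2tf actually wrote.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(
    model_path="saved_model/gaze_estimation_adas_0002_zero_remove_float32.tflite"
)
interpreter.allocate_tensors()

# Feed random data with the right shape/dtype into every input, then run once.
for detail in interpreter.get_input_details():
    dummy = np.random.random_sample(tuple(detail["shape"])).astype(detail["dtype"])
    interpreter.set_tensor(detail["index"], dummy)
interpreter.invoke()

for detail in interpreter.get_output_details():
    print(detail["name"], interpreter.get_tensor(detail["index"]).shape)
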
HzHzHzHzHz commented 1 year ago

May I ask how gaze_estimation_adas_0002_zero_remove.onnx was converted?