PaddlePaddle / Paddle2ONNX

ONNX Model Exporter for PaddlePaddle
Apache License 2.0

Chinese table recognition model ch_ppstructure_mobile_v2.0_SLANet_infer: ONNX half-precision inference error #1347

Closed JIANG3330 closed 4 weeks ago

JIANG3330 commented 3 months ago

Please fill in the information below completely so we can resolve the issue quickly, thank you!

Problem description (please describe the error in detail here)

The Chinese table recognition model ch_ppstructure_mobile_v2.0_SLANet_infer (https://paddleocr.bj.bcebos.com/ppstructure/models/slanet/ch_ppstructure_mobile_v2.0_SLANet_infer.tar) converts to an FP32 ONNX model with the paddle2onnx tool and runs inference correctly. However, if I export an FP16 model with --export_fp16_model=True, onnxruntime inference fails with the following error:

    ort_session = ort.InferenceSession(model_path, providers=['CPUExecutionProvider'])
  File "E:\anaconda3\envs\onnx\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 419, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "E:\anaconda3\envs\onnx\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 472, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from SLANetv1_fp16.onnx failed:Node (p2o.Loop.0) Op (Loop) [TypeInferenceError] Graph attribute inferencing failed: Node:p2o.Loop.0 Tensor element type mismatch. 1 != 10
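(In the ONNX tensor type enum, 1 is FLOAT and 10 is FLOAT16, so the Loop subgraph appears to mix FP32 and FP16 tensors.)

For reference, the FP16 export presumably used a paddle2onnx command along these lines (the model directory and file names are assumptions; the flags are standard paddle2onnx options):

paddle2onnx --model_dir ./ch_ppstructure_mobile_v2.0_SLANet_infer \
            --model_filename inference.pdmodel \
            --params_filename inference.pdiparams \
            --save_file SLANetv1_fp16.onnx \
            --export_fp16_model True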

If I instead export the FP32 ONNX model first and then convert it to FP16 with the code below, onnxruntime inference fails with:

  ort_session = ort.InferenceSession(model_path, providers=['CPUExecutionProvider'])
  File "E:\anaconda3\envs\onnx\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 419, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "E:\anaconda3\envs\onnx\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 472, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from ch_ppstructure_mobile_v2_SLANet_fp16.onnx failed:Node (p2o.Resize.0) Op (Resize) [ShapeInferenceError] Either `sizes` or `scales` must be provided, but not both of them

FP32-to-FP16 conversion code:

import onnx
from onnxconverter_common import float16

# Load the FP32 ONNX model exported by paddle2onnx
model = onnx.load("ch_ppstructure_mobile_v2.0_SLANet_infer.onnx")
# Convert float32 tensors and initializers in the graph to float16
model_fp16 = float16.convert_float_to_float16(model=model)
onnx.save(model_fp16, "ch_ppstructure_mobile_v2.0_SLANet_infer_fp16.onnx")
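For what it's worth, convert_float_to_float16 also accepts keep_io_types and op_block_list arguments, which keep the graph inputs/outputs and the listed op types in FP32. Keeping shape-sensitive ops such as Resize and Loop out of the conversion is a sketch worth trying here (the op list is an assumption, not a verified fix):

import onnx
from onnxconverter_common import float16

model = onnx.load("ch_ppstructure_mobile_v2.0_SLANet_infer.onnx")
# keep_io_types=True leaves the graph inputs/outputs as float32;
# op_block_list keeps the listed op types (assumed culprits) in FP32.
model_fp16 = float16.convert_float_to_float16(
    model,
    keep_io_types=True,
    op_block_list=["Resize", "Loop"],
)
onnx.save(model_fp16, "ch_ppstructure_mobile_v2.0_SLANet_infer_fp16.onnx")

Note that with keep_io_types=True the session input in the inference code below would need to stay float32 rather than float16.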

Inference code:

import onnxruntime as ort
import numpy as np

model_path = 'ch_ppstructure_mobile_v2.0_SLANet_infer_fp16.onnx'

# Dummy NCHW input matching the model's expected shape and FP16 dtype
input_data = np.random.randn(1, 3, 488, 488).astype(np.float16)

# Session creation is where the load errors above are raised
ort_session = ort.InferenceSession(model_path, providers=['CPUExecutionProvider'])
input_name = ort_session.get_inputs()[0].name
output_name = ort_session.get_outputs()[0].name
pred_onnx = ort_session.run([output_name], {input_name: input_data})

print(pred_onnx)

Additional information:

I'm not sure whether this is an onnxruntime problem or a paddle2onnx problem; I found a similar issue in the onnxruntime repository: https://github.com/microsoft/onnxruntime/discussions/17210
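A workaround sometimes suggested for the Resize error is to drop the redundant scales input when a node carries both scales and sizes (ONNX marks an omitted optional input with an empty string). A minimal sketch, not verified against this model:

import onnx

model = onnx.load("ch_ppstructure_mobile_v2.0_SLANet_infer_fp16.onnx")

for node in model.graph.node:
    # Resize inputs are (X, roi, scales, sizes); scales and sizes are both
    # optional, but onnxruntime rejects nodes that populate both at once.
    if node.op_type == "Resize" and len(node.input) == 4 and node.input[2] and node.input[3]:
        node.input[2] = ""  # empty string marks an omitted optional input

onnx.save(model, "ch_ppstructure_mobile_v2.0_SLANet_infer_fp16_fixed.onnx")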

Could someone familiar with this please take a look? Thanks!

Zheng-Bicheng commented 4 weeks ago

Support for the while operator has been added. Once CI finishes, install Paddle2ONNX v1.2.10 and use it:
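(Presumably via pip once the release is published; the exact command is an assumption:)

pip install paddle2onnx==1.2.10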

Zheng-Bicheng commented 4 weeks ago

Closing this issue for now; if the problem persists, please reopen it and @Zheng-Bicheng.