PaddlePaddle / Paddle2ONNX

ONNX Model Exporter for PaddlePaddle

support fp16 resnet #1157

Closed: Zheng-Bicheng closed this issue 8 months ago

Zheng-Bicheng commented 11 months ago

1 Resolved issues

2 Issues for discussion

2.1 Model buffer overflow issue (resolved)

uie-x-base can now be successfully converted to an ONNX model, and the model check passes with no errors. However, the following error occurs when loading the model:

Traceback (most recent call last):
  File "onnx_inference_nlp.py", line 24, in <module>
    predict_by_onnx(create_input())
  File "onnx_inference_nlp.py", line 17, in predict_by_onnx
    sess = rt.InferenceSession('inference_fp16.onnx', providers=['CPUExecutionProvider'])
  File "/Users/zhengbicheng/miniconda3/envs/paddle2onnx/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 419, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/Users/zhengbicheng/miniconda3/envs/paddle2onnx/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 463, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: /Users/runner/work/1/s/onnxruntime/core/providers/cpu/generator/constant_of_shape_base.h:125 void onnxruntime::ConstantOfShapeBase<onnxruntime::TypeList<long long, onnxruntime::MLFloat16, float, double, signed char, short, int, unsigned char, unsigned short, unsigned int, unsigned long long, bool>>::SetValueFromTensorProto(const onnx::TensorProto &) [EnabledOutputTypeList = onnxruntime::TypeList<long long, onnxruntime::MLFloat16, float, double, signed char, short, int, unsigned char, unsigned short, unsigned int, unsigned long long, bool>] [ONNXRuntimeError] : 1 : FAIL : endian_utils.cc:57 CopyLittleEndian source and destination buffer size mismatch

@jiangjiajun have you run into this error before?
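
In ONNX Runtime, this kind of "buffer size mismatch" typically means a TensorProto whose declared data type no longer matches the byte length of its raw_data, for example a ConstantOfShape value retagged to FLOAT16 while still carrying 4 bytes of FP32 payload. A minimal diagnostic sketch for locating such nodes (the suspected cause is an assumption; only the file name comes from the traceback):

import onnx

# Expected raw_data size in bytes for a one-element tensor of each dtype.
ELEM_BYTES = {
    onnx.TensorProto.FLOAT16: 2,
    onnx.TensorProto.FLOAT: 4,
    onnx.TensorProto.DOUBLE: 8,
    onnx.TensorProto.INT64: 8,
}

model = onnx.load("inference_fp16.onnx")
for node in model.graph.node:
    if node.op_type != "ConstantOfShape":
        continue
    for attr in node.attribute:
        if attr.name != "value":
            continue
        t = attr.t  # one-element TensorProto holding the fill value
        expected = ELEM_BYTES.get(t.data_type)
        if expected is not None and t.raw_data and len(t.raw_data) != expected:
            print(node.name, "value dtype:", t.data_type,
                  "raw_data:", len(t.raw_data), "bytes, expected:", expected)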

Zheng-Bicheng commented 11 months ago

2.2 Issue: ONNX FP16 operator precision is not aligned with Paddle Inference

By trimming down the ResNet18 model, we can obtain a model containing only a single convolution layer, as shown in the figure below:

(figure: the trimmed model with a single convolution layer)
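
A model like this can also be built directly for reproduction; a minimal sketch, with channel counts, kernel size, and stride chosen to match ResNet18's first convolution (illustrative only, not the actual trimming step):

import paddle
from paddle.static import InputSpec

# Single-convolution model standing in for the trimmed ResNet18.
class SingleConv(paddle.nn.Layer):
    def __init__(self):
        super().__init__()
        self.conv = paddle.nn.Conv2D(3, 64, kernel_size=7, stride=2, padding=3)

    def forward(self, x):
        return self.conv(x)

model = SingleConv()
model.eval()
paddle.jit.save(model, "single_conv/inference",
                input_spec=[InputSpec([1, 3, 224, 224], "float32", name="x")])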

With the first commit, this model can be converted to an ONNX model using Paddle2ONNX.

However, testing shows that the model's precision cannot be aligned within an absolute error of 0.005, whereas under FP32 the model converted from Paddle Inference to ONNX aligns within an absolute error of 0.000001. How should we go about finding the cause of this error?
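
A sketch of the kind of comparison that produces the report below; the file names, input shape, and FP16 input dtype are assumptions (the output shape 1x64x112x112 matches the 802816 elements reported):

import numpy as np
import onnxruntime as rt
from paddle.inference import Config, create_predictor

# Random input for the single-conv model; shape and FP16 dtype are assumptions.
data = np.random.rand(1, 3, 224, 224).astype(np.float16)

# Paddle Inference side (file names are assumptions).
config = Config("inference.pdmodel", "inference.pdiparams")
predictor = create_predictor(config)
inp = predictor.get_input_handle(predictor.get_input_names()[0])
inp.copy_from_cpu(data)
predictor.run()
x = predictor.get_output_handle(predictor.get_output_names()[0]).copy_to_cpu()

# ONNX Runtime side.
sess = rt.InferenceSession("inference_fp16.onnx",
                           providers=["CPUExecutionProvider"])
y = sess.run(None, {sess.get_inputs()[0].name: data})[0]

np.testing.assert_allclose(x, y, rtol=1e-07, atol=0.005)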

Not equal to tolerance rtol=1e-07, atol=0.005

Mismatched elements: 23 / 802816 (0.00286%)
Max absolute difference: 0.007812
Max relative difference: 584.
 x: array([[[[-6.4160e-01, -2.7441e+00, -2.9570e+00, ..., -2.3359e+00,
          -2.3594e+00, -3.6172e+00],
         [ 1.9912e+00,  4.6924e-01, -1.4661e-01, ...,  2.6660e-01,...
 y: array([[[[-6.4160e-01, -2.7441e+00, -2.9570e+00, ..., -2.3359e+00,
          -2.3594e+00, -3.6172e+00],
         [ 1.9912e+00,  4.6899e-01, -1.4661e-01, ...,  2.6587e-01,...

An error of this size is essentially negligible in a ResNet model, but in an NLP model the accumulation of such errors can make the final model error very large.
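
For a sense of scale: the reported max absolute difference, 0.007812, is one float16 ULP (2**-7) for values of magnitude in [8, 16), and the element-level diffs above (4.6924e-01 vs 4.6899e-01) are likewise adjacent float16 values, which points to last-bit rounding rather than a systematic conversion bug. A quick check with numpy:

import numpy as np

# One float16 ULP (gap to the next representable value) at the relevant magnitudes.
print(np.spacing(np.float16(0.469)))  # 0.000244 == 2**-12; 0.46924 and 0.46899 are adjacent fp16 values
print(np.spacing(np.float16(8.0)))    # 0.0078125 == 2**-7; matches the max absolute difference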