linghu8812 / tensorrt_inference


Issue with the RetinaFace mnet.25 model #75

Open pango99 opened 3 years ago

pango99 commented 3 years ago

Hi, I used your method to convert the RetinaFace mnet.25 model to ONNX format and then loaded it into TensorRT, but the parseFromFile() call failed. The TRT library reported the following messages:

TensorRT_WARNING: onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
TensorRT_ERROR: builtin_op_importers.cpp:2593 In function importResize: [8] Assertion failed: (mode != "nearest" || nearest_mode == "floor") && "This version of TensorRT only supports floor nearest_mode!"

Have you run into this problem on your side? Which TRT version did you test with? I tested with the latest v7.2.3.4 and hit the error above.
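
In case it helps others: the importResize assertion comes from TensorRT 7.x only supporting nearest_mode = "floor" for nearest Resize nodes, while newer opsets default to round_prefer_floor. A possible workaround (a sketch, not the repo's official flow, and assuming the exported file is named mnet.25.onnx) is to patch the attribute in the ONNX graph before parsing; changing the rounding mode can shift Resize outputs slightly, so the detections should be re-checked afterwards:

import onnx
from onnx import helper

model = onnx.load("mnet.25.onnx")  # hypothetical path to the exported model
for node in model.graph.node:
    if node.op_type == "Resize":
        existing = [a for a in node.attribute if a.name == "nearest_mode"]
        if existing:
            existing[0].s = b"floor"  # overwrite e.g. round_prefer_floor
        else:
            # attribute absent: the opset-11+ default is round_prefer_floor,
            # which TensorRT 7.x also rejects, so set it explicitly
            node.attribute.extend([helper.make_attribute("nearest_mode", "floor")])
onnx.save(model, "mnet.25_floor.onnx")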

linghu8812 commented 3 years ago

What opset did you use when converting the ONNX model? The TensorRT version I used is 7.1.3.4.

pango99 commented 3 years ago

What opset did you use when converting the ONNX model? The TensorRT version I used is 7.1.3.4.


Input filename: G:\AI\PretrainedModel\InsightFace_Models\mnet.25\mnet.25.onnx
ONNX IR version: 0.0.7
Opset version: 13
Producer name:
Producer version:
Domain:
Model version: 0
Doc string:

Is this the information you're asking about?

pango99 commented 3 years ago

One more question: will the following warning affect the inference accuracy of the ONNX model? TRT is casting the 64-bit weight values down to 32-bit.

TensorRT_WARNING: onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
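
For reference, the INT64 tensors in a model converted from MXNet are usually shape/axis constants rather than learned weights, so the cast to INT32 normally does not change inference accuracy unless some value actually exceeds the INT32 range. A quick sanity check (a sketch, assuming the same mnet.25.onnx file name) is to scan the initializers:

import numpy as np
import onnx
from onnx import numpy_helper

model = onnx.load("mnet.25.onnx")  # hypothetical path to the exported model
int32_max = np.iinfo(np.int32).max
for init in model.graph.initializer:
    if init.data_type == onnx.TensorProto.INT64:
        values = numpy_helper.to_array(init)
        if values.size and np.abs(values).max() > int32_max:
            print("INT64 initializer outside INT32 range:", init.name)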

HYL-Dave commented 3 years ago

Have you tested the converted ONNX model? Along the way I wanted to try running inference with ONNX to compare results, but got an error:

File "onnx_checker.py", line 50, in <module>
    ort_session = ort.InferenceSession(onnx_file)
  File "/home/hyl/.local/bin/.virtualenvs/mxnet_copy/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 280, in __init__
    self._create_inference_session(providers, provider_options)
  File "/home/hyl/.local/bin/.virtualenvs/mxnet_copy/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 307, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from mnet12.onnx failed:/onnxruntime_src/onnxruntime/core/graph/model_load_utils.h:47 void onnxruntime::model_load_utils::ValidateOpsetForDomain(const std::unordered_map<std::basic_string<char>, int>&, const onnxruntime::logging::Logger&, bool, const string&, int) ONNX Runtime only *guarantees* support for models stamped with official released onnx opset versions. Opset 14 is under development and support for this is limited. The operator schemas and or other functionality may change before next ONNX release and in this case ONNX Runtime will not guarantee backward compatibility. Current official support for domain ai.onnx is till opset 13.
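
For what it's worth, this failure is onnxruntime rejecting a model stamped with an opset newer than it officially supports. One option (a sketch, using the file name from the error above) is to inspect the declared opsets and try converting the graph down; the version converter can fail on individual ops, in which case re-exporting with an older onnx release, as discussed below, is the more reliable fix:

import onnx
from onnx import version_converter

model = onnx.load("mnet12.onnx")  # file name taken from the error message above
print([(imp.domain, imp.version) for imp in model.opset_import])  # declared opsets
# Rewrite the graph against an older, officially released opset (e.g. 11).
downgraded = version_converter.convert_version(model, 11)
onnx.save(downgraded, "mnet12_opset11.onnx")
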
pango99 commented 3 years ago

Have you tested the converted ONNX model? Along the way I wanted to try running inference with ONNX to compare results, but got an error.

File "onnx_checker.py", line 50, in <module>
    ort_session = ort.InferenceSession(onnx_file)
  File "/home/hyl/.local/bin/.virtualenvs/mxnet_copy/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 280, in __init__
    self._create_inference_session(providers, provider_options)
  File "/home/hyl/.local/bin/.virtualenvs/mxnet_copy/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 307, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from mnet12.onnx failed:/onnxruntime_src/onnxruntime/core/graph/model_load_utils.h:47 void onnxruntime::model_load_utils::ValidateOpsetForDomain(const std::unordered_map<std::basic_string<char>, int>&, const onnxruntime::logging::Logger&, bool, const string&, int) ONNX Runtime only *guarantees* support for models stamped with official released onnx opset versions. Opset 14 is under development and support for this is limited. The operator schemas and or other functionality may change before next ONNX release and in this case ONNX Runtime will not guarantee backward compatibility. Current official support for domain ai.onnx is till opset 13.

I haven't tested the exported ONNX model, because TRT couldn't load it. My guess is that the ONNX version installed on my machine is wrong. In any case, I'm no longer using this model for face detection, so I've let it go.

HYL-Dave commented 3 years ago

Have you tested the converted ONNX model? Along the way I wanted to try running inference with ONNX to compare results, but got an error.

File "onnx_checker.py", line 50, in <module>
    ort_session = ort.InferenceSession(onnx_file)
  File "/home/hyl/.local/bin/.virtualenvs/mxnet_copy/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 280, in __init__
    self._create_inference_session(providers, provider_options)
  File "/home/hyl/.local/bin/.virtualenvs/mxnet_copy/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 307, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from mnet12.onnx failed:/onnxruntime_src/onnxruntime/core/graph/model_load_utils.h:47 void onnxruntime::model_load_utils::ValidateOpsetForDomain(const std::unordered_map<std::basic_string<char>, int>&, const onnxruntime::logging::Logger&, bool, const string&, int) ONNX Runtime only *guarantees* support for models stamped with official released onnx opset versions. Opset 14 is under development and support for this is limited. The operator schemas and or other functionality may change before next ONNX release and in this case ONNX Runtime will not guarantee backward compatibility. Current official support for domain ai.onnx is till opset 13.

I haven't tested the exported ONNX model, because TRT couldn't load it. My guess is that the ONNX version installed on my machine is wrong. In any case, I'm no longer using this model for face detection, so I've let it go.

Thanks. After switching to version 1.5.0, the converted ONNX model can be run with onnxruntime. Since the output shape is (1, 16800, 15), I assume output[0, :, 0] should be the scores? But the output values are all very low. Do you know what might cause this, or which part I might have gotten wrong? Also, what model do you use for face detection these days?

pango99 commented 3 years ago

Have you tested the converted ONNX model? Along the way I wanted to try running inference with ONNX to compare results, but got an error.

File "onnx_checker.py", line 50, in <module>
    ort_session = ort.InferenceSession(onnx_file)
  File "/home/hyl/.local/bin/.virtualenvs/mxnet_copy/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 280, in __init__
    self._create_inference_session(providers, provider_options)
  File "/home/hyl/.local/bin/.virtualenvs/mxnet_copy/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 307, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from mnet12.onnx failed:/onnxruntime_src/onnxruntime/core/graph/model_load_utils.h:47 void onnxruntime::model_load_utils::ValidateOpsetForDomain(const std::unordered_map<std::basic_string<char>, int>&, const onnxruntime::logging::Logger&, bool, const string&, int) ONNX Runtime only *guarantees* support for models stamped with official released onnx opset versions. Opset 14 is under development and support for this is limited. The operator schemas and or other functionality may change before next ONNX release and in this case ONNX Runtime will not guarantee backward compatibility. Current official support for domain ai.onnx is till opset 13.

I haven't tested the exported ONNX model, because TRT couldn't load it. My guess is that the ONNX version installed on my machine is wrong. In any case, I'm no longer using this model for face detection, so I've let it go.

Thanks. After switching to version 1.5.0, the converted ONNX model can be run with onnxruntime. Since the output shape is (1, 16800, 15), I assume output[0, :, 0] should be the scores? But the output values are all very low. Do you know what might cause this, or which part I might have gotten wrong? Also, what model do you use for face detection these days?

I haven't seen your code, so I can't tell what went wrong; you could try the code from the tensorrt_inference project first. I generally use MTCNN for large-scale face detection, and on embedded or other compute-constrained machines I use the Ultra-Light-Fast-Generic-Face-Detector-1MB model.

HYL-Dave commented 3 years ago

I haven't seen your code, so I can't tell what went wrong; you could try the code from the tensorrt_inference project first. I generally use MTCNN for large-scale face detection, and on embedded or other compute-constrained machines I use the Ultra-Light-Fast-Generic-Face-Detector-1MB model.

Thanks, I do plan to try it, but I found that building the project's Dockerfile with docker build fails.

E: Unable to locate package libjasper-dev
The command '/bin/sh -c rm /etc/apt/sources.list.d/cuda.list /etc/apt/sources.list.d/nvidia-ml.list &&     apt-get update && apt-get -y upgrade && apt-get -y install ssh vim build-essential cmake git libgtk2.0-dev pkg-config     libavcodec-dev libavformat-dev libswscale-dev python-dev python-numpy libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev     libjasper-dev libdc1394-22-dev qtbase5-dev qtdeclarative5-dev python3-pip zip' returned a non-zero code: 100

Right now I also don't have a spare machine with a GPU where I can set up TensorRT directly, which is why I wanted to try ONNX first. The code is very simple; through netron I can see that the converted model's outputs are already concatenated, so I just print them directly. The code is roughly as follows; since it's only a test, I didn't bother wrapping it in a class.

import cv2
import numpy as np
import onnxruntime as ort

onnx_file = 'mnet12.onnx'  # path to the converted model (file name from the error above)
image_path = '975.jpg'
img2 = cv2.imread(image_path)
# print(img2.shape)  # (1280, 720, 3); the face is centered, so no padding is done
img2 = img2[240:960, :]                    # crop to a 720x720 square
img2 = cv2.resize(img2, dsize=(640, 640))  # network input size
img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2RGB)
img2 = np.transpose(img2, (2, 0, 1))  # HWC -> CHW
ort_session = ort.InferenceSession(onnx_file)
input_name = ort_session.get_inputs()[0].name  # 'data'
output_name = ort_session.get_outputs()[0].name
input_blob = np.expand_dims(img2, axis=0).astype(np.float32)  # NCHW
out = ort_session.run([output_name], input_feed={input_name: input_blob})
print(out[0].shape)
print(out[0][0, :, 0])  # I assume the first column is the score; all values are positive, but the largest is below 0.05

Output:

(1, 16800, 15)
[0.00039523 0.00053454 0.00060787 ... 0.00087765 0.00100299 0.00085114]

May I ask what the reason is for choosing MTCNN for deployment?

HYL-Dave commented 3 years ago

Sorry, my questions were a bit scattered. What I'm actually unclear about is how to do the postprocessing after everything has been concatenated.
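
For anyone with the same question: 16800 rows match the standard RetinaFace anchor grid for a 640x640 input (strides 8/16/32 with two anchors per cell: 2 x (80x80 + 40x40 + 20x20) = 16800), and the 15 columns are commonly [score, 4 box offsets, 10 landmark offsets]. The sketch below decodes that layout with plain NumPy under assumptions that should be verified against the tensorrt_inference C++ postprocessing: the stride order of the concatenation, the anchor sizes per stride, and whether the offsets use the classic no-variance RetinaFace parameterization (some exports bake in 0.1/0.2 variance scaling). Here pred would be out[0][0] from the snippet above, and NMS still has to be applied to the surviving boxes.

import numpy as np

def make_anchors(input_size=640, strides=(8, 16, 32),
                 min_sizes=((16, 32), (64, 128), (256, 512))):
    # (cx, cy, w, h) in pixels, one row per anchor; order assumed to match the export
    anchors = []
    for stride, sizes in zip(strides, min_sizes):
        cells = input_size // stride
        for y in range(cells):
            for x in range(cells):
                for s in sizes:
                    anchors.append([(x + 0.5) * stride, (y + 0.5) * stride, s, s])
    return np.array(anchors, dtype=np.float32)

def decode(pred, anchors, conf_thresh=0.5):
    # pred: (16800, 15), assumed layout [score, dx, dy, dw, dh, 10 landmark offsets]
    scores = pred[:, 0]
    keep = scores > conf_thresh
    pred, anchors, scores = pred[keep], anchors[keep], scores[keep]
    cx = anchors[:, 0] + pred[:, 1] * anchors[:, 2]
    cy = anchors[:, 1] + pred[:, 2] * anchors[:, 3]
    w = anchors[:, 2] * np.exp(pred[:, 3])
    h = anchors[:, 3] * np.exp(pred[:, 4])
    boxes = np.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], axis=1)
    landmarks = anchors[:, None, :2] + pred[:, 5:15].reshape(-1, 5, 2) * anchors[:, None, 2:4]
    return boxes, landmarks, scores

# usage with the snippet above: boxes, landmarks, scores = decode(out[0][0], make_anchors())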

cuongngm commented 3 years ago

I had the same issue, can anyone suggest a solution? (ONNX opset version 14, TensorRT 7)

cuongngm commented 3 years ago

I fixed it by downgrading the onnx version to 1.5.0.

TalhaUsuf commented 3 years ago

I had the same issue, can anyone suggest a solution? (ONNX opset version 14, TensorRT 7)

I was also having issues, but I found it only works with onnx version 1.5.0, so try using that version of ONNX. I also built it with TensorRT 8.0.1.6 and it works perfectly fine.