DataXujing opened this issue 3 years ago (Open)
@DataXujing Please check whether the ONNX file path in the config.yaml file is correct; try an absolute path.
myuser@ubuntu:~/xujing/tensorRT_apply/scaleyolov4/build$ ./ScaledYOLOv4_trt ../config-p7.yaml ../samples/
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:604] Reading dangerously large protocol message. If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:81] The total number of bytes read was 1144532591
----------------------------------------------------------------
Input filename: /home/myuser/xujing/tensorRT_apply/scaleyolov4/best.onnx
ONNX IR version: 0.0.6
Opset version: 12
Producer name: pytorch
Producer version: 1.6
Domain:
Model version: 0
Doc string:
----------------------------------------------------------------
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:604] Reading dangerously large protocol message. If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:81] The total number of bytes read was 1144532591
ERROR: ModelImporter.cpp:92 In function parseGraph:
[8] Assertion failed: convertOnnxWeights(initializer, &weights, ctx)
[12/08/2020-15:39:42] [E] Failure while parsing ONNX file
start building engine
[12/08/2020-15:39:42] [E] [TRT] Network must have at least one output
[12/08/2020-15:39:42] [E] [TRT] Network validation failed.
build engine done
ScaledYOLOv4_trt: /home/myuser/xujing/tensorRT_apply/scaleyolov4/./includes/common/common.hpp:138: void onnxToTRTModel(const string&, const string&, nvinfer1::ICudaEngine*&, const int&): Assertion `engine' failed.
Aborted (core dumped)
I'm still getting this error.
@DataXujing Which version of TensorRT are you using? Please try this ONNX file: https://pan.baidu.com/s/1Sp-sOT_mYYVXgXE9uN_ShQ (extraction code: hytj)
I'm using TensorRT 7.0.0.11.
When exporting the ONNX model, try opset=10; TensorRT 7.0.0.11 does not support opset=12.
@linghu8812 the same error in yolov5s.onnx, any suggestions?
When exporting the ONNX model, try opset=10; TensorRT 7.0.0.11 and below do not support opset=12.
@linghu8812 Tried that; it still doesn't seem to work.
@sporterman Try the following code to test whether yolov5s.onnx is correct:
import onnxruntime
import numpy as np
sess_options = onnxruntime.SessionOptions()
sess = onnxruntime.InferenceSession('./yolov5s.onnx', sess_options)
data = [np.random.rand(1, 3, 640, 640).astype(np.float32)]
input_names = sess.get_inputs()
feed = zip(sorted(i_.name for i_ in input_names), data)
result = sess.run(None, dict(feed))
print(result[0].shape)
The output should be:
(1, 25200, 85)
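As a sanity check on those numbers (assuming the standard YOLOv5 head: 3 anchors per grid cell on strides 8/16/32, and 85 = 4 box coordinates + 1 objectness score + 80 COCO classes), the expected shape can be reproduced with plain arithmetic:

```python
# Reproduce the expected YOLOv5s output shape (1, 25200, 85) for a 640x640
# input, assuming the standard head layout described above.
img_size = 640
strides = [8, 16, 32]          # the three detection scales
anchors_per_cell = 3

num_predictions = sum(anchors_per_cell * (img_size // s) ** 2 for s in strides)
num_attributes = 4 + 1 + 80    # box coords + objectness + COCO classes

print((1, num_predictions, num_attributes))  # (1, 25200, 85)
```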
@linghu8812 Tried it; I had used the wrong file. I should have run the export_onnx.py script, but I kept copying the `python export.py` command from the top of the file, so the resulting ONNX file was wrong. Thanks for the correction!
@linghu8812 Hi, the same error appears when parsing an ONNX model exported from RetinaFace (MXNet). Do you have any ideas?
@sporterman Try the following code to test whether yolov5s.onnx is correct:
import onnxruntime
import numpy as np
sess_options = onnxruntime.SessionOptions()
sess = onnxruntime.InferenceSession('./yolov5s.onnx', sess_options)
data = [np.random.rand(1, 3, 640, 640).astype(np.float32)]
input_names = sess.get_inputs()
feed = zip(sorted(i_.name for i_ in input_names), data)
result = sess.run(None, dict(feed))
print(result[0].shape)
The output should be:
(1, 25200, 85)
@IGnoredBird Use this method to test whether the ONNX file's output is correct.
Traceback (most recent call last):
File "test.py", line 1349, in
@linghu8812 The model was converted with export_onnx.py and there were no errors, but running the ONNX test above failed, as shown.
@IGnoredBird Which TensorRT and onnx versions are you using? Also, what opset value did you set while exporting?
@chandu1263 TensorRT 7.1.3.4, onnx 1.8.0. When I test the ONNX model, I get the message: 2021-01-27 13:56:34.505019382 [W:onnxruntime:Default, upsample.h:73 UpsampleBase] tf_half_pixel_for_nn
is deprecated since opset 13, yet this opset 13 model uses the deprecated attribute.
But I haven't found how to set the opset: the export_onnx.py script uses mxnet.contrib's export_model API to export the ONNX model. Thanks.
@IGnoredBird Unlike PyTorch, changing the opset of an ONNX model exported from MXNet requires modifying the internals of mxnet.contrib. Try onnx==1.5.0.
@chandu1263 @linghu8812 thanks , problem solved with onnx==1.5.0
TensorRT 7.0.0.11; I tried both onnx 1.5.0 and 1.6.0 and neither works, although ONNX Runtime can produce the output. The error message is:
ERROR: ModelImporter.cpp:92 In function parseGraph:
[8] Assertion failed: convertOnnxWeights(initializer, &weights, ctx)
[02/26/2021-10:04:39] [E] Failure while parsing ONNX file
start building engine
[02/26/2021-10:04:39] [E] [TRT] Network must have at least one output
[02/26/2021-10:04:39] [E] [TRT] Network validation failed.
build engine done
ScaledYOLOv4_trt: /home/work/deep_learning/detection/inference/tensorrt_inference/ScaledYOLOv4/../includes/common/common.hpp:138: void onnxToTRTModel(const string&, const string&, nvinfer1::ICudaEngine*&, const int&): Assertion `engine' failed.
Aborted (core dumped)
@linghu8812
@deep-practice 7.0.0.11 use opset=10 when export onnx models
@deep-practice 7.0.0.11 use opset=10 when export onnx models
Are there any requirements on the onnx version?
When exporting the ONNX model, try opset=10; TensorRT 7.0.0.11 and below do not support opset=12.
Where can I check which opsets each TensorRT version supports?
@DaChaoXc TensorRT 7.1 and above support opset 12; 6.0 and 7.0 support opset 10. A newer onnx version can still export a lower opset.
@deep-practice 7.0.0.11 use opset=10 when export onnx models
That's right!!
@sporterman try the following code to test whether yolov5s.onnx is correct
import onnxruntime
import numpy as np
sess_options = onnxruntime.SessionOptions()
sess = onnxruntime.InferenceSession('./yolov5s.onnx', sess_options)
data = [np.random.rand(1, 3, 640, 640).astype(np.float32)]
input_names = sess.get_inputs()
feed = zip(sorted(i_.name for i_ in input_names), data)
result = sess.run(None, dict(feed))
print(result[0].shape)
The output result should be
(1, 25200, 85)
I created the ONNX file using export_onnx.py and even tested its output with the code above, and it is correct, yet I still receive the error `engine->getNbBindings() == 2' failed. I am using a Jetson Nano and running yolov5s. Any advice?
While parsing node number 81 [Resize]:
ERROR: ModelImporter.cpp:124 In function parseGraph:
[5] Assertion failed: ctx->tensors().count(inputName)
[08/02/2022-21:57:55] [E] Failure while parsing ONNX file
start building engine
[08/02/2022-21:57:55] [E] [TRT] Network must have at least one output
[08/02/2022-21:57:55] [E] [TRT] Network validation failed.
build engine done
mmpose_trt: /home/pcb/Algorithm/tensorrt_inference/code/src/model.cpp:46: void Model::OnnxToTRTModel(): Assertion `engine' failed.
Aborted (core dumped)
I tried opset=10, but mmpose says it only supports opset 11.
python3 tools/deployment/pytorch2onnx.py /home/pcb/Algorithm/tensorrt_inference/project/mmpose/mmpose-master/configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/hrnet_w48_coco_256x192.py hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth --output-file hrnet_w48_coco_256x192.onnx
Traceback (most recent call last):
File "tools/deployment/pytorch2onnx.py", line 134, in
After a successful setup following the ScaledYOLOv4 configuration, running YOLOv4-p7 produces the error shown in the title!