Melody-Zhou / tensorRT_Pro-YOLOv8

This repository is based on shouxieai/tensorRT_Pro, with adjustments to support YOLOv8.
MIT License

make yolo error #16

Open WeiZixu-HIT opened 4 months ago

WeiZixu-HIT commented 4 months ago

TensorRT does not support 64-bit precision ONNX models. `make yolo` reports the following error:

[2024-04-01 20:28:03][error][trt_infer.cpp:23]:NVInfer: src/tensorRT/onnx_parser/ModelImporter.cpp:739: --- End node ---
[2024-04-01 20:28:03][error][trt_infer.cpp:23]:NVInfer: src/tensorRT/onnx_parser/ModelImporter.cpp:741: ERROR: src/tensorRT/onnx_parser/builtin_op_importers.cpp:3248 In function importRange:
[8] Assertion failed: inputs.at(0).isInt32() && "For range operator with dynamic inputs, this version of TensorRT only supports INT32!"
[2024-04-01 20:28:03][error][trt_builder.cpp:609]:Can not parse OnnX file: yolov9c_modified.onnx
[2024-04-01 20:28:03][error][yolo.cpp:201]:Engine yolov9c_modified.FP32.trtmodel load failed
[2024-04-01 20:28:03][error][app_yolo.cpp:53]:Engine is nullptr

Melody-Zhou commented 4 months ago

This looks like an operator-parsing problem in TensorRT. Could you share the versions of the libraries you are using? The source code defaults to TensorRT 8.x.

WeiZixu-HIT commented 4 months ago

CUDA 12.2
TensorRT 8.6.1.6
cuDNN: cudnn-linux-x86_64-8.9.7.29_cuda12-archive
protobuf 3.11.4
torch 2.2.1
torchaudio 2.2.1
torchvision 0.17.1
Thanks for the reply!

Melody-Zhou commented 4 months ago

The software versions all look fine. You can try the following steps:

  1. Use onnx-simplifier to optimize your yolov9c_modified.onnx, then regenerate the engine and see whether it succeeds. Reference code:
# pip install onnxsim

import onnx
from onnxsim import simplify

onnx_model = onnx.load("yolov9c_modified.onnx")
model_simp, check = simplify(onnx_model)
assert check, "Simplified ONNX model could not be validated"
onnx.save(model_simp, "yolov9c_modified.sim.onnx")
  2. If the step above fails, update the onnx-parser with the following commands:
cd tensorRT_Pro-YOLOv8
bash onnx_parser/use_tensorrt_8.6.sh
  3. If the steps above fail, try exporting an ONNX model with a static batch dimension.