Monday-Leo / YOLOv7_Tensorrt

A simple implementation of Tensorrt YOLOv7
108 stars · 15 forks

ONNX-to-engine conversion error #9

Open simple123456T opened 1 year ago

simple123456T commented 1 year ago

Environment: CUDA 10.2, cuDNN 7.6.5, TensorRT 7.0.0.11, PyTorch 1.8.2 LTS

Error message as follows:

D:\ProgramFiles\TensorRT-7.0.0.11.Windows10.x86_64.cuda-10.2.cudnn7.6\TensorRT-7.0.0.11\bin>trtexec --onnx=./yolov7.onnx --saveEngine=./yolov7_fp16.engine --fp16 --workspace=200
&&&& RUNNING TensorRT.trtexec # trtexec --onnx=./yolov7.onnx --saveEngine=./yolov7_fp16.engine --fp16 --workspace=200
[11/02/2022-10:53:02] [I] === Model Options ===
[11/02/2022-10:53:02] [I] Format: ONNX
[11/02/2022-10:53:02] [I] Model: ./yolov7.onnx
[11/02/2022-10:53:02] [I] Output:
[11/02/2022-10:53:02] [I] === Build Options ===
[11/02/2022-10:53:02] [I] Max batch: 1
[11/02/2022-10:53:02] [I] Workspace: 200 MB
[11/02/2022-10:53:02] [I] minTiming: 1
[11/02/2022-10:53:02] [I] avgTiming: 8
[11/02/2022-10:53:02] [I] Precision: FP16
[11/02/2022-10:53:02] [I] Calibration:
[11/02/2022-10:53:02] [I] Safe mode: Disabled
[11/02/2022-10:53:02] [I] Save engine: ./yolov7_fp16.engine
[11/02/2022-10:53:02] [I] Load engine:
[11/02/2022-10:53:02] [I] Inputs format: fp32:CHW
[11/02/2022-10:53:02] [I] Outputs format: fp32:CHW
[11/02/2022-10:53:02] [I] Input build shapes: model
[11/02/2022-10:53:02] [I] === System Options ===
[11/02/2022-10:53:02] [I] Device: 0
[11/02/2022-10:53:02] [I] DLACore:
[11/02/2022-10:53:02] [I] Plugins:
[11/02/2022-10:53:02] [I] === Inference Options ===
[11/02/2022-10:53:02] [I] Batch: 1
[11/02/2022-10:53:02] [I] Iterations: 10
[11/02/2022-10:53:02] [I] Duration: 3s (+ 200ms warm up)
[11/02/2022-10:53:02] [I] Sleep time: 0ms
[11/02/2022-10:53:02] [I] Streams: 1
[11/02/2022-10:53:02] [I] ExposeDMA: Disabled
[11/02/2022-10:53:02] [I] Spin-wait: Disabled
[11/02/2022-10:53:02] [I] Multithreading: Disabled
[11/02/2022-10:53:02] [I] CUDA Graph: Disabled
[11/02/2022-10:53:02] [I] Skip inference: Disabled
[11/02/2022-10:53:02] [I] Input inference shapes: model
[11/02/2022-10:53:02] [I] Inputs:
[11/02/2022-10:53:02] [I] === Reporting Options ===
[11/02/2022-10:53:02] [I] Verbose: Disabled
[11/02/2022-10:53:02] [I] Averages: 10 inferences
[11/02/2022-10:53:02] [I] Percentile: 99
[11/02/2022-10:53:02] [I] Dump output: Disabled
[11/02/2022-10:53:02] [I] Profile: Disabled
[11/02/2022-10:53:02] [I] Export timing to JSON file:
[11/02/2022-10:53:02] [I] Export output to JSON file:
[11/02/2022-10:53:02] [I] Export profile to JSON file:
[11/02/2022-10:53:02] [I]

Input filename: ./yolov7.onnx
ONNX IR version: 0.0.6
Opset version: 12
Producer name: pytorch
Producer version: 1.8
Domain:
Model version: 0
Doc string:

[11/02/2022-10:53:04] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
(the warning above is repeated 13 times in total)
While parsing node number 167 [Resize]:
ERROR: ModelImporter.cpp:124 In function parseGraph:
[5] Assertion failed: ctx->tensors().count(inputName)
[11/02/2022-10:53:05] [E] Failed to parse onnx file
[11/02/2022-10:53:05] [E] Parsing model failed
[11/02/2022-10:53:05] [E] Engine creation failed
[11/02/2022-10:53:05] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # trtexec --onnx=./yolov7.onnx --saveEngine=./yolov7_fp16.engine --fp16 --workspace=200
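For context: `Assertion failed: ctx->tensors().count(inputName)` on a `Resize` node is a known limitation of TensorRT 7.0's ONNX parser. Opset-11+ `Resize` nodes may carry an empty-string optional input (e.g. `roi`), which the old parser tries to look up as a real tensor and fails. Common workarounds are re-exporting the model with a lower opset or running it through onnx-simplifier first (e.g. `python -m onnxsim yolov7.onnx yolov7-sim.onnx`, assuming that package is installed). The following stdlib-only sketch illustrates the pattern to look for; the `(op_type, inputs)` tuples are a simplified stand-in for the `node.op_type` / `node.input` fields you would read with the `onnx` package:

```python
def find_bad_resize_nodes(nodes):
    """Return indices of Resize nodes that reference an empty input name.

    `nodes` is a list of (op_type, input_names) tuples -- a simplified view
    of an ONNX graph. An empty input name is legal ONNX (it marks an omitted
    optional input such as Resize's `roi`), but TensorRT 7.0's parser
    asserts when it cannot resolve that name to a tensor.
    """
    return [i for i, (op, inputs) in enumerate(nodes)
            if op == "Resize" and any(name == "" for name in inputs)]

# Toy graph: the second node mirrors the failing node 167 from the log.
graph = [
    ("Conv",   ["x", "w"]),
    ("Resize", ["y", "", "scales"]),  # empty roi input -> parser assertion
]
print(find_bad_resize_nodes(graph))  # -> [1]
```

If such nodes exist, onnx-simplifier typically folds them into a form the TRT 7.0 parser accepts; alternatively, upgrading to TensorRT 8.x avoids the assertion entirely.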

Monday-Leo commented 1 year ago

You could first try the ONNX file shared in the QQ group, to rule out a problem with your TensorRT environment.