onnx / onnx-tensorrt

ONNX-TensorRT: TensorRT backend for ONNX
Apache License 2.0

While parsing node number 7 [Loop]: ERROR: ModelImporter.cpp:92 In function parseGraph: #633

vilmara opened this issue 3 years ago

vilmara commented 3 years ago

Description

I converted the TF Model Zoo pre-trained Faster-RCNN model to ONNX in order to run it with TensorRT, but got the following error:

ERROR: ModelImporter.cpp:92 In function parseGraph:
[8] Assertion failed: convertOnnxWeights(initializer, &weights, ctx)

Environment

TensorRT Version: 7.0
ONNX-TensorRT Version / Branch: using onnx/tensorflow-onnx master branch 1.9
GPU Type: T4
Nvidia Driver Version: 450.51.06
CUDA Version: 10.2
CUDNN Version: 7.6.5
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6
TensorFlow + TF2ONNX Version (if applicable): TF-1.15.2 | tf2onnx=1.9.0/72fb20
PyTorch Version (if applicable): N/A
Baremetal or Container (if container which image + tag): nvcr.io/nvidia/tensorflow:20.02-tf1-py3

Steps To Reproduce

1- Git clone the onnx/tensorflow-onnx repo

$ git clone https://github.com/onnx/tensorflow-onnx.git
$ cd tensorflow-onnx

2- Convert the TF model to ONNX

$ python tests/run_pretrained_models.py --tests faster_rcnn_inception_v2_coco --opset 11 --debug

3- Patch the converted ONNX model to change the input data type ("onnx expects input image to be INT8 but TensorRT uses Float32"), as explained in this post: https://forums.developer.nvidia.com/t/exporting-tensorflow-models-to-jetson-nano/154185/10?u=virsg

4- Run the updated model with trtexec

$ trtexec --onnx=/workspace/triton_blog/faster_rcnn_inception_v2_coco_updated.onnx --explicitBatch

Trace log error:

&&&& RUNNING TensorRT.trtexec # trtexec --onnx=/workspace/faster_rcnn_inception_v2_coco.onnx --explicitBatch
[01/22/2021-19:37:38] [I] === Model Options ===
[01/22/2021-19:37:38] [I] Format: ONNX
[01/22/2021-19:37:38] [I] Model: /workspace/faster_rcnn_inception_v2_coco_guenther_updated_opset11_onnx1.6.onnx
[01/22/2021-19:37:38] [I] Output:
[01/22/2021-19:37:38] [I] === Build Options ===
[01/22/2021-19:37:38] [I] Max batch: explicit
[01/22/2021-19:37:38] [I] Workspace: 16 MB
[01/22/2021-19:37:38] [I] minTiming: 1
[01/22/2021-19:37:38] [I] avgTiming: 8
[01/22/2021-19:37:38] [I] Precision: FP32
[01/22/2021-19:37:38] [I] Calibration:
[01/22/2021-19:37:38] [I] Safe mode: Disabled
[01/22/2021-19:37:38] [I] Save engine:
[01/22/2021-19:37:38] [I] Load engine:
[01/22/2021-19:37:38] [I] Inputs format: fp32:CHW
[01/22/2021-19:37:38] [I] Outputs format: fp32:CHW
[01/22/2021-19:37:38] [I] Input build shapes: model
[01/22/2021-19:37:38] [I] === System Options ===
[01/22/2021-19:37:38] [I] Device: 0
[01/22/2021-19:37:38] [I] DLACore:
[01/22/2021-19:37:38] [I] Plugins:
[01/22/2021-19:37:38] [I] === Inference Options ===
[01/22/2021-19:37:38] [I] Batch: Explicit
[01/22/2021-19:37:38] [I] Iterations: 10
[01/22/2021-19:37:38] [I] Duration: 3s (+ 200ms warm up)
[01/22/2021-19:37:38] [I] Sleep time: 0ms
[01/22/2021-19:37:38] [I] Streams: 1
[01/22/2021-19:37:38] [I] ExposeDMA: Disabled
[01/22/2021-19:37:38] [I] Spin-wait: Disabled
[01/22/2021-19:37:38] [I] Multithreading: Disabled
[01/22/2021-19:37:38] [I] CUDA Graph: Disabled
[01/22/2021-19:37:38] [I] Skip inference: Disabled
[01/22/2021-19:37:38] [I] Inputs:
[01/22/2021-19:37:38] [I] === Reporting Options ===
[01/22/2021-19:37:38] [I] Verbose: Disabled
[01/22/2021-19:37:38] [I] Averages: 10 inferences
[01/22/2021-19:37:38] [I] Percentile: 99
[01/22/2021-19:37:38] [I] Dump output: Disabled
[01/22/2021-19:37:38] [I] Profile: Disabled
[01/22/2021-19:37:38] [I] Export timing to JSON file:
[01/22/2021-19:37:38] [I] Export output to JSON file:
[01/22/2021-19:37:38] [I] Export profile to JSON file:
[01/22/2021-19:37:38] [I]
----------------------------------------------------------------
Input filename:   /workspace/faster_rcnn_inception_v2_coco.onnx
ONNX IR version:  0.0.6
Opset version:    11
Producer name:
Producer version:
Domain:
Model version:    0
Doc string:
----------------------------------------------------------------
[01/22/2021-19:37:38] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[01/22/2021-19:37:38] [W] [TRT] onnx2trt_utils.cpp:222: One or more weights outside the range of INT32 was clamped
[... the two warnings above repeat many more times, elided for brevity ...]
While parsing node number 7 [Loop]:
ERROR: ModelImporter.cpp:92 In function parseGraph:
[8] Assertion failed: convertOnnxWeights(initializer, &weights, ctx)
[01/22/2021-19:37:38] [E] Failed to parse onnx file
[01/22/2021-19:37:38] [E] Parsing model failed
[01/22/2021-19:37:38] [E] Engine creation failed
[01/22/2021-19:37:38] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # trtexec --onnx=/workspace/faster_rcnn_inception_v2_coco.onnx --explicitBatch
kevinch-nv commented 3 years ago

Thanks for the report. We'll take a look at this issue.

zxcvbml commented 2 years ago

Have you solved it? I ran into the same issue. Thank you for your answer.