linghu8812 / tensorrt_inference


Retinaface: Invalid Node - face_rpn_cls_score_reshape_stride32 #119

Closed cyrusbehr closed 2 years ago

cyrusbehr commented 2 years ago

I followed the instructions for converting the mxnet RetinaFace model to ONNX format, then built the RetinaFace_trt executable, linking against TensorRT-8.0.3.4.

When I try to run the executable, I get the following crash:

(base)  cyrus:~/work/repos/tensorrt_inference/RetinaFace/build[master !] ./RetinaFace_trt ../config.yaml ../samples
[11/17/2021-17:55:09] [I] [TRT] [MemUsageChange] Init CUDA: CPU +533, GPU +0, now: CPU 539, GPU 885 (MiB)
[11/17/2021-17:55:09] [I] [TRT] ----------------------------------------------------------------
[11/17/2021-17:55:09] [I] [TRT] Input filename:   ../R50.onnx
[11/17/2021-17:55:09] [I] [TRT] ONNX IR version:  0.0.8
[11/17/2021-17:55:09] [I] [TRT] Opset version:    15
[11/17/2021-17:55:09] [I] [TRT] Producer name:    
[11/17/2021-17:55:09] [I] [TRT] Producer version: 
[11/17/2021-17:55:09] [I] [TRT] Domain:           
[11/17/2021-17:55:09] [I] [TRT] Model version:    0
[11/17/2021-17:55:09] [I] [TRT] Doc string:       
[11/17/2021-17:55:09] [I] [TRT] ----------------------------------------------------------------
[11/17/2021-17:55:09] [W] [TRT] onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[11/17/2021-17:55:09] [E] [TRT] ModelImporter.cpp:720: While parsing node number 189 [Reshape -> "face_rpn_cls_score_reshape_stride32"]:
[11/17/2021-17:55:09] [E] [TRT] ModelImporter.cpp:721: --- Begin node ---
[11/17/2021-17:55:09] [E] [TRT] ModelImporter.cpp:722: input: "face_rpn_cls_score_stride32"
input: "reshape_attr_tensor485"
output: "face_rpn_cls_score_reshape_stride32"
name: "face_rpn_cls_score_reshape_stride32"
op_type: "Reshape"

[11/17/2021-17:55:09] [E] [TRT] ModelImporter.cpp:723: --- End node ---
[11/17/2021-17:55:09] [E] [TRT] ModelImporter.cpp:726: ERROR: ModelImporter.cpp:162 In function parseGraph:
[6] Invalid Node - face_rpn_cls_score_reshape_stride32
Attribute not found: allowzero
[11/17/2021-17:55:09] [E] Failure while parsing ONNX file
start building engine
[11/17/2021-17:55:09] [E] [TRT] 4: [network.cpp::validate::2410] Error Code 4: Internal Error (Network must have at least one output)
build engine done
RetinaFace_trt: /home/cyrus/work/repos/tensorrt_inference/RetinaFace/../includes/common/common.hpp:138: void onnxToTRTModel(const string&, const string&, nvinfer1::ICudaEngine*&, const int&): Assertion `engine' failed.
Aborted (core dumped)
cyrusbehr commented 2 years ago

When I use TensorRT-7.1.3.4 instead, I get the following crash message:

(base)  cyrus:~/work/repos/tensorrt_inference/RetinaFace/build[master !]  ./RetinaFace_trt ../config.yaml ../samples
----------------------------------------------------------------
Input filename:   ../R50.onnx
ONNX IR version:  0.0.8
Opset version:    15
Producer name:    
Producer version: 
Domain:           
Model version:    0
Doc string:       
----------------------------------------------------------------
[11/17/2021-18:10:53] [W] [TRT] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
ERROR: builtin_op_importers.cpp:2523 In function importResize:
[8] Assertion failed: (mode != "nearest" || nearest_mode == "floor") && "This version of TensorRT only supports floor nearest_mode!"
[11/17/2021-18:10:53] [E] Failure while parsing ONNX file
start building engine
[11/17/2021-18:10:53] [E] [TRT] Network must have at least one output
[11/17/2021-18:10:53] [E] [TRT] Network validation failed.
build engine done
RetinaFace_trt: /home/cyrus/work/repos/tensorrt_inference/RetinaFace/../includes/common/common.hpp:138: void onnxToTRTModel(const string&, const string&, nvinfer1::ICudaEngine*&, const int&): Assertion `engine' failed.
Aborted (core dumped)
linghu8812 commented 2 years ago

Per https://github.com/linghu8812/tensorrt_inference/blob/master/RetinaFace/requirements.txt#L2, please use onnx==1.5.0.

cyrusbehr commented 2 years ago

That resolved the issue. Thank you.

buaaxiejun commented 2 years ago

Hi~ How did you end up solving this problem?

cyrusbehr commented 1 year ago

@buaaxiejun the issue is resolved by ensuring you have onnx version 1.5.0 installed before running the export_onnx.py script.