rahulsharma11 opened this issue 2 years ago (status: Open)
The TensorRT versions must match between environments.
I'm having the same problem. I was running the conversion in the same Docker environment: nvcr.io/nvidia/tensorrt:22.05-py3
[09/12/2022-09:00:16] [E] [TRT] parsers/onnx/ModelImporter.cpp:791: While parsing node number 279 [Resize -> "onnx::Concat_510"]:
[09/12/2022-09:00:16] [E] [TRT] parsers/onnx/ModelImporter.cpp:792: --- Begin node ---
[09/12/2022-09:00:16] [E] [TRT] parsers/onnx/ModelImporter.cpp:793: input: "input.300"
input: "onnx::Resize_509"
input: "onnx::Resize_969"
output: "onnx::Concat_510"
name: "Resize_279"
op_type: "Resize"
attribute {
name: "coordinate_transformation_mode"
s: "asymmetric"
type: STRING
}
attribute {
name: "cubic_coeff_a"
f: -0.75
type: FLOAT
}
attribute {
name: "mode"
s: "nearest"
type: STRING
}
attribute {
name: "nearest_mode"
s: "floor"
type: STRING
}
[09/12/2022-09:00:16] [E] [TRT] parsers/onnx/ModelImporter.cpp:794: --- End node ---
[09/12/2022-09:00:16] [E] [TRT] parsers/onnx/ModelImporter.cpp:796: ERROR: parsers/onnx/builtin_op_importers.cpp:3526 In function importResize:
[8] Assertion failed: scales.is_weights() && "Resize scales must be an initializer!"
[09/12/2022-09:00:16] [E] Failed to parse onnx file
[09/12/2022-09:00:16] [I] Finish parsing network model
[09/12/2022-09:00:16] [E] Parsing model failed
[09/12/2022-09:00:16] [E] Failed to create engine from model.
[09/12/2022-09:00:16] [E] Engine set up failed
I'm not sure why you posted that issue; I tried both TensorRT 8.0 and 8.2 and I'm still unable to convert.
Hi, I tried the suggested command to convert the .pt model to .trt, but it gives this error:
ONNX export success, saved as yolov5s-face.onnx
Export complete (5.96s). Visualize with https://github.com/lutzroeder/netron.
pred's shape is (1, 25200, 16)
max(|torch_pred - onnx_pred|) = 0.0016937256
Starting TensorRT... onnx_model_path yolov5s-face.onnx
[08/19/2022-05:37:41] [TRT] [W] onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
Traceback (most recent call last):
  File "export.py", line 111, in <module>
    ONNX_to_TRT(onnx_model_path=f,trt_engine_path=f.replace('.onnx', '.trt'),fp16_mode=opt.fp16_trt)
  File "/opt/yolov5-face/torch2trt/trt_model.py", line 31, in ONNX_to_TRT
    assert parser.parse(model.read())
AssertionError
Any suggestion? Thanks.
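For what it's worth, the bare `assert parser.parse(model.read())` in trt_model.py hides the parser's actual diagnostics. A hedged sketch of a small wrapper that prints them instead (`report_parse` is a hypothetical name; `num_errors` and `get_error` are the standard accessors on TensorRT's `OnnxParser`):

```python
def report_parse(parser, serialized_onnx: bytes) -> bool:
    """Call parser.parse(); on failure, print every recorded error
    (TensorRT's OnnxParser exposes num_errors and get_error) and
    return False instead of raising a bare AssertionError."""
    if parser.parse(serialized_onnx):
        return True
    for i in range(parser.num_errors):
        print(f"[TRT parser error {i}] {parser.get_error(i)}")
    return False
```

In `ONNX_to_TRT`, replacing the assert with `if not report_parse(parser, model.read()): raise RuntimeError("ONNX parse failed")` should reveal which node the parser rejects, likely the same Resize issue reported above.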