Closed: 1icas closed this issue 3 years ago.
Hello @1icas, sorry for the delayed response.
It seems you have not called addOptimizationProfile
in buildPredictionEngine;
could you take a look? Thanks!
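For reference, attaching an optimization profile before building generally looks like the sketch below. This is a minimal illustration against the TensorRT C++ API; the tensor name "input" and the 1/8/16 batch shapes are hypothetical placeholders, not taken from the model in this issue.

```cpp
#include "NvInfer.h"

using namespace nvinfer1;

// Sketch: register min/opt/max shapes for a dynamic input before building.
// "input" and the batch sizes 1/8/16 are placeholder values.
void attachProfile(IBuilder* builder, IBuilderConfig* config)
{
    IOptimizationProfile* profile = builder->createOptimizationProfile();
    profile->setDimensions("input", OptProfileSelector::kMIN, Dims4{1, 3, 480, 640});
    profile->setDimensions("input", OptProfileSelector::kOPT, Dims4{8, 3, 480, 640});
    profile->setDimensions("input", OptProfileSelector::kMAX, Dims4{16, 3, 480, 640});
    config->addOptimizationProfile(profile);
    // Without this call, building a network with dynamic inputs fails with
    // "Network has dynamic or shape inputs, but no optimization profile has been defined."
}
```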
Closing since there has been no activity for more than 3 weeks; please reopen if you still have questions, thanks!
@ttyio @rajeevsrao, I am hitting the same problem:
[09/07/2021-15:54:19] [W] [TRT] /home/nvidia/TensorRT/parsers/onnx/onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[09/07/2021-15:54:19] [W] [TRT] Tensor DataType is determined at build time for tensors not marked as input or output.
[09/07/2021-15:54:19] [E] [TRT] Network has dynamic or shape inputs, but no optimization profile has been defined.
[09/07/2021-15:54:19] [E] [TRT] Network validation failed.
[09/07/2021-15:54:19] [E] Engine creation failed
[09/07/2021-15:54:19] [E] Engine set up failed
However, the same model works well on a PC; on TX2 and NX, trtexec fails with this error.
This is my model:
https://drive.google.com/file/d/1s36DFVYA0xftf_ihDaVhsxZjEXIaf7g-/view?usp=sharing
Hello @chenjun2hao, what is your command line for running this model? Try:
trtexec --onnx=*.onnx --best --optShapes='inputx':1x3x480x640
@ttyio, the command line is:
./trtexec --onnx=DDRNet23_OCR_17Class_stable_BN2_dynamic.onnx --saveEngine=DDRNet23_OCR_17Class_stable_BN2_dynamicf16.trt --workspace=64 --minShapes=inputx:1x3x480x640 --optShapes=inputx:16x3x480x640 --maxShapes=inputx:32x3x480x640 --fp16
It works fine on a PC with a 3080 and TensorRT 7.2.2.3,
but on TX2 and NX, with the same model and command, it prints the error:
Network has dynamic or shape inputs, but no optimization profile has been defined.
I checked the source code, and it does seem to configure the profile.
@ttyio I just tested your command on TX2; it hits the same problem...
@chenjun2hao, I am not sure whether this is a bug in trtexec; TRT is 7.1.3 on Jetson. Have you tried the TRT API to build the engine? Thanks.
@ttyio, I only use trtexec, not the TRT API. How do I use the TRT API to build an engine from ONNX? Thanks.
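For anyone landing here with the same question, building from ONNX through the C++ API looks roughly like the sketch below. This is a minimal outline against the TensorRT 7 API, reusing the inputx name and the 1/16/32 x 3x480x640 shape range from the trtexec command above; error handling and resource cleanup are omitted, and the filename is only the one mentioned in this thread.

```cpp
#include "NvInfer.h"
#include "NvOnnxParser.h"
#include <iostream>

using namespace nvinfer1;

// Minimal logger required by the TensorRT API.
class Logger : public ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    IBuilder* builder = createInferBuilder(gLogger);

    // ONNX models must be imported into an explicit-batch network.
    const auto flags = 1U << static_cast<uint32_t>(NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    INetworkDefinition* network = builder->createNetworkV2(flags);

    auto parser = nvonnxparser::createParser(*network, gLogger);
    if (!parser->parseFromFile("DDRNet23_OCR_17Class_stable_BN2_dynamic.onnx",
                               static_cast<int>(ILogger::Severity::kWARNING)))
        return 1;

    IBuilderConfig* config = builder->createBuilderConfig();
    config->setMaxWorkspaceSize(64 << 20); // 64 MiB, matching --workspace=64
    config->setFlag(BuilderFlag::kFP16);   // matching --fp16

    // The crucial step: define min/opt/max shapes for the dynamic input.
    IOptimizationProfile* profile = builder->createOptimizationProfile();
    profile->setDimensions("inputx", OptProfileSelector::kMIN, Dims4{1, 3, 480, 640});
    profile->setDimensions("inputx", OptProfileSelector::kOPT, Dims4{16, 3, 480, 640});
    profile->setDimensions("inputx", OptProfileSelector::kMAX, Dims4{32, 3, 480, 640});
    config->addOptimizationProfile(profile);

    ICudaEngine* engine = builder->buildEngineWithConfig(*network, *config);
    return engine != nullptr ? 0 : 1;
}
```

The serialized engine can then be written to disk with engine->serialize(), which is what trtexec does for --saveEngine.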
@chenjun2hao
I have just tried a TX2 and cannot reproduce the failure. Maybe you are using an old JetPack version; there was once a bug that needed --explicitBatch
as a workaround. Could you try adding --explicitBatch
to your command? Thanks!
@ttyio Still the same error! Here are my error output and my command:
[09/09/2021-16:02:40] [W] [TRT] /home/nvidia/TensorRT/parsers/onnx/onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[09/09/2021-16:02:40] [W] [TRT] Tensor DataType is determined at build time for tensors not marked as input or output.
[09/09/2021-16:02:40] [E] [TRT] Network has dynamic or shape inputs, but no optimization profile has been defined.
[09/09/2021-16:02:40] [E] [TRT] Network validation failed.
[09/09/2021-16:02:40] [E] Engine creation failed
[09/09/2021-16:02:40] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # ./trtexec_debug --onnx=/home/nvidia/chenjun/model/DDRNet23_OCR_17Class_stable_BN2_dynamic.onnx --saveEngine=/home/nvidia/chenjun/model/DDRNet23_OCR_17Class_stable_BN2_dynamicf16.trt --minShapes=inputx:1x3x244x244 --optShapes=inputx:16x3x244x244 --maxShapes=inputx:32x3x244x244 --workspace=32 --explicitBatch
My JetPack version is: JetPack 4.4 [L4T 32.4.3].
@chenjun2hao, sorry, I cannot tell from your log. Could you try the trtexec shipped with the system instead of your debug build? If that still fails, you may need to upgrade your JetPack.
@ttyio OK, I will try.
@ttyio, I have solved this problem.
The solution is to use the trtexec
in the /usr/src/tensorrt/bin
folder on TX2 or NX. I had built the project myself, and that build produced the error.
Environment
TensorRT Version: 7.2.2.3
NVIDIA GPU: 1080TI
NVIDIA Driver Version: 450.102.04
CUDA Version: 11.0
CUDNN Version: 11.0
Operating System: Ubuntu
Python Version (if applicable): 3.8
Tensorflow Version (if applicable):
PyTorch Version (if applicable): 1.8
Baremetal or Container (if so, version):
Relevant Files
Steps To Reproduce and Description
I modified the code from https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/sampleDynamicReshape . I don't know why the code has a bug:
[03/12/2021-17:53:57] [E] [TRT] Network has dynamic or shape inputs, but no optimization profile has been defined.
[03/12/2021-17:53:57] [E] [TRT] Network validation failed.
[03/12/2021-17:53:57] [E] Prediction engine build failed.
PS: detect_test.onnx is generated using the following code:
I can use detect_test.onnx in Python TensorRT with dynamic input, but the C++ TensorRT build fails. I am very confused.