dusty-nv / jetson-inference

Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
https://developer.nvidia.com/embedded/twodaystoademo
MIT License

Dynamic Shapes Issue on Running ONNX model in detectnet #1301

Closed dig19998 closed 1 year ago

dig19998 commented 2 years ago

I am facing an issue when running inference on a custom ONNX model with the detectnet command. The model was obtained from https://www.customvision.ai/ after exporting the dataset that was prepared in Roboflow.

The error I am getting is as follows:

```
[TRT] ModelImporter.cpp:119: Searching for input: layer8_conv
[TRT] ModelImporter.cpp:119: Searching for input: convolution8_W
[TRT] ModelImporter.cpp:119: Searching for input: convolution8_B
[TRT] ModelImporter.cpp:125: convolution8 [Conv] inputs: [layer8_conv -> (-1, 512, 13, 13)], [convolution8_W -> (30, 512, 1, 1)], [convolution8_B -> (30)],
[TRT] builtin_op_importers.cpp:450: Convolution input dimensions: (-1, 512, 13, 13)
[TRT] ImporterContext.hpp:141: Registering layer: convolution8 for ONNX node: convolution8
[TRT] builtin_op_importers.cpp:533: Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 30
[TRT] builtin_op_importers.cpp:534: Convolution output dimensions: (-1, 30, 13, 13)
[TRT] ImporterContext.hpp:116: Registering tensor: model_outputs0_1 for ONNX tensor: model_outputs0
[TRT] ModelImporter.cpp:179: convolution8 [Conv] outputs: [model_outputs0 -> (-1, 30, 13, 13)],
[TRT] ModelImporter.cpp:507: Marking model_outputs0_1 as output: model_outputs0
----- Parsing of ONNX model data/ONNX-Exported-CustomVision-Float16/model.onnx is Done ----
[TRT] device GPU, configuring network builder
[TRT] device GPU, building FP16: ON
[TRT] device GPU, building INT8: OFF
[TRT] device GPU, workspace size: 536870912
[TRT] device GPU, building CUDA engine (this may take a few minutes the first time a network is loaded)

[TRT] Network has dynamic or shape inputs, but no optimization profile has been defined.
[TRT] Network validation failed.
[TRT] device GPU, failed to build CUDA engine
[TRT] device GPU, failed to load data/ONNX-Exported-CustomVision-Float16/model.onnx
[TRT] detectNet -- failed to initialize.
detectnet: failed to load detectNet model
```

How can I solve this?

dusty-nv commented 2 years ago

This model uses dynamic shapes, which jetson-inference doesn't really use. Further, jetson-inference isn't really intended to support any and all ONNX models per se - you would also need to make sure the pre/post-processing in c/detectNet.cpp matches what the model expects.
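To see what your exported model actually expects, one option is to dump its input/output tensor names and shapes with the `onnx` Python package. This is just a minimal inspection sketch (not something detectNet does for you); the file path is the one from your log above:

```python
# Minimal sketch: print the input/output tensors of an exported ONNX model.
# A dim_param (or -1 in TensorRT's log) instead of a fixed number means that
# dimension is dynamic, e.g. the batch size.
import onnx

model = onnx.load("data/ONNX-Exported-CustomVision-Float16/model.onnx")

for tensor in model.graph.input:
    dims = [d.dim_param or d.dim_value for d in tensor.type.tensor_type.shape.dim]
    print("input :", tensor.name, dims)

for tensor in model.graph.output:
    dims = [d.dim_param or d.dim_value for d in tensor.type.tensor_type.shape.dim]
    print("output:", tensor.name, dims)
```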

The ONNX support in detectNet is set up for the models created with pytorch-ssd and train_ssd.py from the Hello AI World tutorial.
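For reference, a model trained with train_ssd.py and exported with onnx_export.py is loaded by passing the tutorial's standard layer names (input_0, scores, boxes) on the command line. A rough sketch mirroring the tutorial's detectnet.py, saved as a hypothetical run_detect.py (the model/labels paths below are placeholders for your own trained model):

```python
# Rough sketch of loading a train_ssd.py / onnx_export.py model with the
# jetson.inference Python bindings. Run it as, e.g.:
#
#   python3 run_detect.py --model=models/your-dataset/ssd-mobilenet.onnx \
#       --labels=models/your-dataset/labels.txt \
#       --input-blob=input_0 --output-cvg=scores --output-bbox=boxes \
#       test.jpg
import sys
import jetson.inference
import jetson.utils

# the command-line flags tell detectNet which ONNX file and layer names to use
net = jetson.inference.detectNet("ssd-mobilenet-v2", sys.argv, 0.5)

img = jetson.utils.loadImage(sys.argv[-1])   # last argument: path to a test image
detections = net.Detect(img)                 # pre/post-processing handled by detectNet

for d in detections:
    print(net.GetClassDesc(d.ClassID), d.Confidence, (d.Left, d.Top, d.Right, d.Bottom))
```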

So you can either use train_ssd.py to train your model, modify jetson-inference to support your customvision.ai model, or write your own TensorRT program for your customvision.ai model (e.g. using the TensorRT Python API).
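For that last option, here is a very rough sketch of a standalone TensorRT Python build script that adds an optimization profile for the dynamic batch dimension, which is what the "no optimization profile has been defined" error is asking for. The input tensor name ("data") and the 3x416x416 shape are placeholders - use the actual values from your model (e.g. from the onnx inspection above):

```python
# Rough sketch: build a TensorRT engine from an ONNX model that has a dynamic
# batch dimension, by defining an optimization profile. Not part of jetson-inference.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)

with open("data/ONNX-Exported-CustomVision-Float16/model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("failed to parse ONNX model")

config = builder.create_builder_config()
config.max_workspace_size = 1 << 29          # 512 MB, same as the log above
config.set_flag(trt.BuilderFlag.FP16)

# the optimization profile pins the dynamic (-1) batch dimension to concrete values
# set_shape(input_name, min_shape, opt_shape, max_shape)
profile = builder.create_optimization_profile()
profile.set_shape("data", (1, 3, 416, 416), (1, 3, 416, 416), (1, 3, 416, 416))
config.add_optimization_profile(profile)

engine = builder.build_engine(network, config)
with open("model.engine", "wb") as f:
    f.write(engine.serialize())
```

You would then still have to write the pre-processing (resize/normalize to the model's expected input) and post-processing (decoding the 30x13x13 output grid into boxes) yourself, since detectNet's built-in code won't match this model.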

dig19998 commented 2 years ago

Okay, I will do this. Thanks!