Status: Closed (anoushsepehri closed this 1 year ago)
Hi @anoushsepehri, my segNet models were trained with PyTorch and not TensorFlow. It would seem the ONNX model that TensorFlow exports isn't compatible with the version of TensorRT you are on - you might want to try the latest TensorRT 7.1 from JetPack 4.4.
There is also the issue of the pre/post-processing code in segNet.cpp - right now it is set up to support PyTorch and Caffe models, the way those models expect their input tensors (e.g. NCHW format, colorspace, mean pixel subtraction, etc.), and the interpretation of their output tensors.
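To illustrate the kind of pre-processing being referred to, here is a hedged sketch in NumPy of what preparing an input tensor for a PyTorch-trained segmentation model typically looks like. The normalization constants below are the standard ImageNet mean/std values commonly used by PyTorch models - they are an assumption here, not a quote from segNet.cpp, so check that file for the exact values it applies:

```python
import numpy as np

# Assumed ImageNet normalization constants (check segNet.cpp for the real ones)
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(rgb_hwc_uint8):
    """Convert an HxWx3 uint8 RGB frame into a 1x3xHxW float32 NCHW tensor."""
    x = rgb_hwc_uint8.astype(np.float32) / 255.0   # scale pixels to [0, 1]
    x = (x - IMAGENET_MEAN) / IMAGENET_STD         # per-channel normalization
    x = np.transpose(x, (2, 0, 1))                 # HWC -> CHW
    return x[np.newaxis, ...]                      # add batch dim -> NCHW

frame = np.zeros((256, 512, 3), dtype=np.uint8)    # dummy camera frame
print(preprocess(frame).shape)  # (1, 3, 256, 512)
```

A TensorFlow-trained model will usually expect NHWC input and a different (or no) normalization, which is why the existing pre-processing path doesn't carry over unchanged.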
BTW this was the PyTorch code I used: https://github.com/dusty-nv/pytorch-segmentation
Hi Dusty,
I went through the entire tutorial and have not had any problems running preloaded and pretrained models. When I try to run my own ONNX model using the segnet-camera script, I get the following error:
```
$ sudo segnet-camera.py --camera=/dev/video0 --model=custom_model/model.onnx --labels=custom_model/labels.txt --colors=custom_model/colours.txt --input_blob input --output_blob output
jetson.inference.__init__.py
jetson.inference -- initializing Python 2.7 bindings...
jetson.inference -- registering module types...
jetson.inference -- done registering module types
jetson.inference -- done Python 2.7 binding initialization
jetson.utils.__init__.py
jetson.utils -- initializing Python 2.7 bindings...
jetson.utils -- registering module functions...
jetson.utils -- done registering module functions
jetson.utils -- registering module types...
jetson.utils -- done registering module types
jetson.utils -- done Python 2.7 binding initialization
jetson.inference -- PyTensorNet_New()
jetson.inference -- PySegNet_Init()
jetson.inference -- segNet loading network using argv command line params
jetson.inference -- segNet.init() argv[0] = '/usr/local/bin/segnet-camera.py'
jetson.inference -- segNet.init() argv[1] = '--camera=/dev/video0'
jetson.inference -- segNet.init() argv[2] = '--model=custom_model/model.onnx'
jetson.inference -- segNet.init() argv[3] = '--labels=custom_model/labels.txt'
jetson.inference -- segNet.init() argv[4] = '--colors=custom_model/colours.txt'
jetson.inference -- segNet.init() argv[5] = '--input_blob'
jetson.inference -- segNet.init() argv[6] = 'input'
jetson.inference -- segNet.init() argv[7] = '--output_blob'
jetson.inference -- segNet.init() argv[8] = 'output'

segNet -- loading segmentation network model from:
       -- prototxt:    (null)
       -- model:       custom_model/model.onnx
       -- labels:      custom_model/labels.txt
       -- colors:      custom_model/colours.txt
       -- input_blob   ''
       -- output_blob  ''
       -- batch_size   1

[TRT]  TensorRT version 6.0.1
[TRT]  loading NVIDIA plugins...
[TRT]  Plugin Creator registration succeeded - GridAnchor_TRT
[TRT]  Plugin Creator registration succeeded - GridAnchorRect_TRT
[TRT]  Plugin Creator registration succeeded - NMS_TRT
[TRT]  Plugin Creator registration succeeded - Reorg_TRT
[TRT]  Plugin Creator registration succeeded - Region_TRT
[TRT]  Plugin Creator registration succeeded - Clip_TRT
[TRT]  Plugin Creator registration succeeded - LReLU_TRT
[TRT]  Plugin Creator registration succeeded - PriorBox_TRT
[TRT]  Plugin Creator registration succeeded - Normalize_TRT
[TRT]  Plugin Creator registration succeeded - RPROI_TRT
[TRT]  Plugin Creator registration succeeded - BatchedNMS_TRT
[TRT]  Could not register plugin creator: FlattenConcat_TRT in namespace:
[TRT]  completed loading NVIDIA plugins.
[TRT]  detected model format - ONNX (extension '.onnx')
[TRT]  desired precision specified for GPU: FASTEST
[TRT]  requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT]  native precisions detected for GPU: FP32, FP16
[TRT]  selecting fastest native precision for GPU: FP16
[TRT]  attempting to open engine cache file custom_model/model.onnx.1.1.GPU.FP16.engine
[TRT]  cache file not found, profiling network model on device GPU
[TRT]  device GPU, loading /usr/bin/ custom_model/model.onnx

Input filename:   custom_model/model.onnx
ONNX IR version:  0.0.6
Opset version:    11
Producer name:    tf2onnx
Producer version: 1.6.0
Domain:
Model version:    0
Doc string:

WARNING: ONNX model has a newer ir_version (0.0.6) than this parser was built against (0.0.3).
[TRT]  StatefulPartitionedCall/model_7/model_6/Conv1/Conv2D__8:0:Transpose -> (3, 256, 512)
While parsing node number 1 [Conv]:
ERROR: ModelImporter.cpp:296 In function importModel:
[5] Assertion failed: tensors.count(input_name)
[TRT]  failed to parse ONNX model 'custom_model/model.onnx'
[TRT]  device GPU, failed to load custom_model/model.onnx
segNet -- failed to initialize.
jetson.inference -- segNet failed to load built-in network 'fcn-resnet18-voc'
jetson.inference -- PySegNet_Dealloc()
Traceback (most recent call last):
  File "/usr/local/bin/segnet-camera.py", line 51, in <module>
    net = jetson.inference.segNet(opt.network, sys.argv)
Exception: jetson.inference -- segNet failed to load network
```
I designed and trained the model in TensorFlow 2.1.0 and exported it using tf2onnx with opset 11. Any help would be greatly appreciated.
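For what it's worth, the parse failure happens on a Transpose node that tf2onnx inserted to convert the NHWC input before the first Conv, which TensorRT 6's older ONNX parser appears to choke on. One thing worth trying (assuming the model was saved in SavedModel format, and substituting the real input tensor name for `input:0` below) is re-exporting with NCHW inputs so no Transpose is needed, and with a lower opset that better matches the older parser:

```shell
# Hypothetical re-export command; paths and the "input:0" tensor name are
# assumptions - replace them with your model's actual values.
# --inputs-as-nchw makes tf2onnx emit an NCHW graph instead of inserting
# a Transpose node; a lower opset suits TensorRT 6's older ONNX parser.
python -m tf2onnx.convert \
    --saved-model ./saved_model \
    --inputs-as-nchw input:0 \
    --opset 9 \
    --output custom_model/model.onnx
```

This is only a sketch of the tf2onnx CLI; whether opset 9 covers all ops in the model depends on its architecture.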