Environment (Hardware)
Software: Jetpack 5.0.2 GA [L4T 35.1.0]
NV Power Mode: MODE_20W_6CORE - Type: 8
jtop:
Version: 4.1.5
Service: Active
Libraries:
CUDA: 11.4.239
cuDNN: 8.4.1.50
TensorRT: 5.0.2
VPI: 2.1.6
Vulkan: 1.3.203
OpenCV: 4.5.4 - with CUDA: NO
Project Name
pj_tensorrt_seg_paddleseg_cityscapessota
Issue Details
I'm trying to run the inference of pj_tensorrt_seg_paddleseg_cityscapessota after building it successfully, but I ran into problems when running ./main. Note that I have built and run both [pj_tensorrt_cls_mobilenet_v2] and [pj_tensorrt_det_yolov7] without issue on the same device and environment.
How to Reproduce
Steps to reproduce the behavior. Please include your cmake command.
I followed the project instructions exactly:
cd pj_tensorrt_seg_paddleseg_cityscapessota/
mkdir -p build && cd build
cmake ..
make
./main
Error Log
[InferenceHelper][117] Use TensorRT
[03/03/2023-10:31:58] [I] [TRT] [MemUsageChange] Init CUDA: CPU +186, GPU +0, now: CPU 209, GPU 7830 (MiB)
[03/03/2023-10:32:03] [I] [TRT] Loaded engine size: 142 MiB
[03/03/2023-10:32:05] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +261, GPU +230, now: CPU 742, GPU 8232 (MiB)
[03/03/2023-10:32:05] [I] [TRT] [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +2, GPU +141, now: CPU 2, GPU 141 (MiB)
[03/03/2023-10:32:05] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 506, GPU 8108 (MiB)
[03/03/2023-10:32:05] [I] [TRT] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +2, GPU +113, now: CPU 4, GPU 254 (MiB)
[InferenceHelperTensorRt][345] num_of_in_out = 2
[InferenceHelperTensorRt][348] tensor[0]->name: x
[InferenceHelperTensorRt][349] is input = 1
[InferenceHelperTensorRt][353] dims.d[0] = 1
[InferenceHelperTensorRt][353] dims.d[1] = 3
[InferenceHelperTensorRt][353] dims.d[2] = 180
[InferenceHelperTensorRt][353] dims.d[3] = 320
[InferenceHelperTensorRt][357] data_type = 0
[InferenceHelperTensorRt][348] tensor[1]->name: tmp_520
[InferenceHelperTensorRt][349] is input = 0
[InferenceHelperTensorRt][353] dims.d[0] = 1
[InferenceHelperTensorRt][353] dims.d[1] = 19
[InferenceHelperTensorRt][353] dims.d[2] = 180
[InferenceHelperTensorRt][353] dims.d[3] = 320
[InferenceHelperTensorRt][357] data_type = 0
[03/03/2023-10:32:05] [E] [TRT] 3: Cannot find binding of given name: tf.identity
[ERR: InferenceHelperTensorRt][222] Output tensor doesn't exist in the model (tf.identity)
[ERR: SegmentationEngine][99] Inference helper is not created
Initialization Error
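Reading the log, the engine deserialized with exactly two tensors, x (input) and tmp_520 (output), but the code then asks for an output named tf.identity, which the engine does not contain, so the lookup fails and initialization aborts. The following is an illustrative sketch only (not the project's actual code), using the tensor names from the log, of why that lookup returns "not found":

```cpp
#include <cstring>

// Illustrative sketch: a TensorRT engine resolves tensor names to binding
// indices. These two names are taken from the log above -- "x" (input) and
// "tmp_520" (output) are all the engine exposes.
static const char* kBindingNames[] = {"x", "tmp_520"};
static const int kNumBindings = 2;

// Returns the binding index for `name`, or -1 when the engine has no tensor
// of that name -- the situation behind the error
// "Cannot find binding of given name: tf.identity".
int FindBinding(const char* name) {
    for (int i = 0; i < kNumBindings; ++i) {
        if (std::strcmp(kBindingNames[i], name) == 0) {
            return i;
        }
    }
    return -1;
}
```

If this is the cause, pointing the configured output-tensor name at the name the engine actually reports (tmp_520 in this log) would likely resolve the binding error; which constant to change depends on the project's engine settings, so treat this as a hypothesis, not a confirmed fix.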