NVIDIA / TensorRT

NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
https://developer.nvidia.com/tensorrt
Apache License 2.0

Unsupported operation _MultilevelCropAndResize_TRT #1999

Closed MovsisyanM closed 2 years ago

MovsisyanM commented 2 years ago

Description

I'm trying to run PeopleSegNet (from the TAO Toolkit, with DeepStream) and I get the following error.

wsadmin@AIML1001:/opt/nvidia/deepstream/deepstream/samples/configs/tao_pretrained_models$ sudo deepstream-app -c deepstream_app_source1_segmentation.txt

 *** DeepStream: Launched RTSP Streaming at rtsp://localhost:8554/ds-test ***

gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is ON
[NvMultiObjectTracker] Initialized
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1482 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.1/samples/configs/tao_pretrained_models/../../models/tao_pretrained_models/peopleSegNet/V2/peoplesegnet_resnet50.etlt_b1_gpu0_int8.engine open error
0:00:00.814698376 541174 0x559916138120 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1888> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.1/samples/configs/tao_pretrained_models/../../models/tao_pretrained_models/peopleSegNet/V2/peoplesegnet_resnet50.etlt_b1_gpu0_int8.engine failed
0:00:00.829914802 541174 0x559916138120 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1993> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.1/samples/configs/tao_pretrained_models/../../models/tao_pretrained_models/peopleSegNet/V2/peoplesegnet_resnet50.etlt_b1_gpu0_int8.engine failed, try rebuild
0:00:00.829937044 541174 0x559916138120 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
ERROR: [TRT]: UffParser: Validator error: pyramid_crop_and_resize_box: Unsupported operation _MultilevelCropAndResize_TRT
parseModel: Failed to parse UFF model
ERROR: tlt/tlt_decode.cpp:358 Failed to build network, error in model parsing.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:723 Failed to create network using custom network creation function
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:789 Failed to get cuda engine from custom library API
0:00:01.283208569 541174 0x559916138120 ERROR                nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
Segmentation fault
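For reference, the UffParser error above points at the MultilevelCropAndResize_TRT plugin, which the installed libnvinfer_plugin apparently does not provide. As a rough sanity check (the library path below is an assumption for a default x86_64 Ubuntu deb install; adjust as needed), one can search the installed plugin library for the plugin name:

# Check whether the installed TensorRT plugin library ships the plugin.
# The path below is assumed for a stock Ubuntu x86_64 install.
strings /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.8 | grep MultilevelCropAndResize \
  || echo "MultilevelCropAndResize_TRT not found in libnvinfer_plugin"

If the name does not show up, the plugin has to come from a TensorRT OSS build (see the last comment below).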

Environment

TensorRT Version: 8.2.5.1-1+cuda11.4
NVIDIA GPU: GeForce RTX 3090
NVIDIA Driver Version: 470.103.01
CUDA Version: 11.4
CUDNN Version: 8.4.0.27-1+cuda11.6
Operating System: Ubuntu 20.04
Python Version (if applicable): 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0]
Tensorflow Version (if applicable): 2.7.0
PyTorch Version (if applicable):
Baremetal or Container (if so, version):

Relevant Files

Config file

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=1

[tiled-display]
enable=1
rows=1
columns=1
width=960
height=540
gpu-id=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=2
num-sources=1
uri=file://../../streams/sample_qHD.mp4
gpu-id=0

[streammux]
gpu-id=0
batch-size=1
batched-push-timeout=40000
## Set muxer output width and height
width=960
height=540

[sink0]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=0
source-id=0
gpu-id=0

[osd]
enable=1
gpu-id=0
border-width=3
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
display-mask=1
display-bbox=0
display-text=0

[primary-gie]
enable=1
gpu-id=0
# Modify as necessary
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
gie-unique-id=1
# Replace the infer primary config file when you need to
# use other detection models
# model-engine-file=../../models/tao_pretrained_models/mrcnn/mask_rcnn_resnet50.etlt_b1_gpu0_int8.engine
config-file=config_infer_primary_peopleSegNet.txt

[sink1]
enable=1
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=0
bitrate=2000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0
output-file=out.mp4
source-id=0

[sink2]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=Overlay
type=4
#1=h264 2=h265
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=0
bitrate=4000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0
# set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400

[tracker]
enable=1
# For the NvDCF and DeepSORT trackers, tracker-width and tracker-height must each be a multiple of 32
tracker-width=640
tracker-height=384
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
# ll-config-file required to set different tracker types
# ll-config-file=../deepstream-app/config_tracker_IOU.yml
ll-config-file=../deepstream-app/config_tracker_NvDCF_perf.yml
# ll-config-file=../deepstream-app/config_tracker_NvDCF_accuracy.yml
# ll-config-file=../deepstream-app/config_tracker_DeepSORT.yml
gpu-id=0
enable-batch-process=1
enable-past-frame=1
display-tracking-id=1

[tests]
file-loop=0

Steps To Reproduce

sudo deepstream-app -c deepstream_app_source1_segmentation.txt

zerollzeng commented 2 years ago

Posting this on https://forums.developer.nvidia.com/c/accelerated-computing/intelligent-video-analytics/deepstream-sdk/15 would be a better choice.

MovsisyanM commented 2 years ago

This issue was resolved by solving this one. TL;DR: you need to build the TensorRT OSS components yourself and copy a few files over to get this working.
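For anyone hitting the same error, here is a rough sketch of that workflow. The branch name, CMake flags, and library paths below are assumptions based on the TensorRT OSS README and the TAO/DeepStream sample instructions; pick the branch or tag matching your installed TensorRT version (8.2.5 in this environment) and double-check the exact .so version suffixes on your system.

# Build the TensorRT OSS plugins (they include MultilevelCropAndResize_TRT)
# and swap them in for the stock libnvinfer_plugin.
# Branch, paths, and version suffixes are illustrative; extra flags such as
# -DCUDA_VERSION may be needed depending on your setup.
git clone -b release/8.2 https://github.com/NVIDIA/TensorRT.git
cd TensorRT
git submodule update --init --recursive
mkdir -p build && cd build
cmake .. -DTRT_LIB_DIR=/usr/lib/x86_64-linux-gnu -DTRT_OUT_DIR=$(pwd)/out
make nvinfer_plugin -j$(nproc)
# Back up the installed plugin library, then replace it with the rebuilt one.
# The wildcard should match the single versioned .so the build produced; adjust if needed.
sudo cp /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.8.2.5 \
        /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.8.2.5.bak
sudo cp out/libnvinfer_plugin.so.8.2.* /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.8.2.5
sudo ldconfig

After that, delete any cached .engine file and rerun deepstream-app so the engine is rebuilt with the plugin available.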