NVIDIA-AI-IOT / deepstream_tao_apps

Sample apps to demonstrate how to deploy models trained with TAO on DeepStream
MIT License

TLT Models: yolov3 run error (Jetson TX2) #24

Status: Open. zahidzqj opened this issue 3 years ago

zahidzqj commented 3 years ago

When I run:

```
./deepstream-custom -c pgie_yolov3_tlt_config.txt -i $DS_SRC_PATH/samples/streams/sample_720p.h264
```

I get the following output:

```
Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.
Now playing: pgie_yolov3_tlt_config.txt
Opening in BLOCKING MODE
Opening in BLOCKING MODE
0:00:00.206458755 10991 0x559c739b00 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() [UID = 1]: Trying to create engine from model files
ERROR: [TRT]: UffParser: Validator error: FirstDimTile_2: Unsupported operation _BatchTilePlugin_TRT
parseModel: Failed to parse UFF model
ERROR: failed to build network since parsing model errors.
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:01.417451184 10991 0x559c739b00 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() [UID = 1]: build engine file failed
Segmentation fault (core dumped)
```

What is the reason for this problem and how to solve it?

mchi-zg commented 3 years ago

`Unsupported operation _BatchTilePlugin_TRT` means the installed TensorRT plugin library is missing this plugin. Please check the README for how to build the TRT-OSS plugin library.
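For readers landing here, the TRT-OSS rebuild that the README describes can be sketched roughly as below. This is a hedged sketch, not the authoritative procedure: the branch name, `GPU_ARCHS` value, library path, and version suffixes are assumptions that must be matched to your TensorRT release and GPU (consult the repo README for the exact values).

```shell
# Hedged sketch of building the TRT-OSS plugin library, assuming TensorRT 7.0.
# GPU_ARCHS is the compute capability: e.g. 62 for Jetson TX2, 75 for Tesla T4.
git clone -b release/7.0 https://github.com/NVIDIA/TensorRT.git
cd TensorRT
git submodule update --init --recursive
mkdir -p build && cd build
cmake .. -DGPU_ARCHS=75 \
         -DTRT_LIB_DIR=/usr/lib/x86_64-linux-gnu \
         -DTRT_OUT_DIR=`pwd`/out
make nvinfer_plugin -j$(nproc)

# Back up the stock plugin library, then replace it with the rebuilt one
# (paths and version suffix are assumptions; adjust to your install).
sudo cp /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.0.0 \
        /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.0.0.bak
sudo cp out/libnvinfer_plugin.so.7.0.* \
        /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.0.0
sudo ldconfig
```

The rebuilt `libnvinfer_plugin` includes plugins such as `BatchTilePlugin_TRT` that the stock library shipped without, which is why the UFF parser stops reporting the unsupported operation after the swap.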

zahidzqj commented 3 years ago

OK, I checked the README and solved the problem.

MadhurimaGhose05 commented 3 years ago

Hi @mchi-zg and @zahidzqj, I am facing the same issue; have you resolved it?

Environment:
- Device: Tesla T4
- CUDA version: 10.2
- TensorRT version: 7.0
- Docker image: nvcr.io/nvidia/deepstream:5.0-20.07-triton

Command:

```
./ds-tlt -c /opt/nvidia/deepstream/deepstream-5.0/samples/deepstream_tlt_apps/configs/yolov3_tlt/pgie_yolov3_tlt_config.txt -i /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264 -b 2
```

Output error:

```
WARNING: Overriding infer-config batch-size (1) with number of sources (2)
Now playing: /opt/nvidia/deepstream/deepstream-5.0/samples/deepstream_tlt_apps/configs/yolov3_tlt/pgie_yolov3_tlt_config.txt
0:00:00.737391560 396 0x563947a11610 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() [UID = 1]: Trying to create engine from model files
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: UffParser: Validator error: FirstDimTile_2: Unsupported operation _BatchTilePlugin_TRT
parseModel: Failed to parse UFF model
ERROR: tlt/tlt_decode.cpp:274 failed to build network since parsing model errors.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:797 Failed to create network using custom network creation function
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:862 Failed to get cuda engine from custom library API
0:00:00.944565439 396 0x563947a11610 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() [UID = 1]: build engine file failed
Segmentation fault (core dumped)
```

[attached screenshot: benchmarking_error_yoloV3_1_Crop]
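This is the same `Unsupported operation _BatchTilePlugin_TRT` failure addressed earlier in the thread: the plugin library loaded by TensorRT does not contain `BatchTilePlugin_TRT`. Before rebuilding, it can be worth confirming that directly. The check below is a hedged sketch; the library path and version suffix are assumptions for the x86_64 dGPU container (Jetson uses the aarch64 path).

```shell
# Look for the plugin name string inside the loaded plugin library.
# Path/version are assumptions; adjust to your container's install.
strings /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.0.0 | grep -i batchtile
```

If this prints nothing, the stock library lacks the plugin, and replacing it with the TRT-OSS build from the README is the expected fix.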