NVIDIA-AI-IOT / deepstream_tao_apps

Sample apps to demonstrate how to deploy models trained with TAO on DeepStream
MIT License

UFF parsing of model failed #35

Open MadhurimaGhose05 opened 3 years ago

MadhurimaGhose05 commented 3 years ago

Environment:

- Device: Tesla T4
- CUDA version: 10.2
- TensorRT version: 7.0
- Docker image: `docker pull nvcr.io/nvidia/deepstream:5.0-20.07-triton`
- Models downloaded from: https://nvidia.box.com/shared/static/i1cer4s3ox4v8svbfkuj5js8yqm3yazo.zip

Command:

```
./ds-tlt -c /opt/nvidia/deepstream/deepstream-5.0/samples/deepstream_tlt_apps/configs/yolov3_tlt/pgie_yolov3_tlt_config.txt -i /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264 -b 2
```

Output error:

```
WARNING: Overriding infer-config batch-size (1) with number of sources (2)
Now playing: /opt/nvidia/deepstream/deepstream-5.0/samples/deepstream_tlt_apps/configs/yolov3_tlt/pgie_yolov3_tlt_config.txt
0:00:00.737391560 396 0x563947a11610 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() [UID = 1]: Trying to create engine from model files
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: UffParser: Validator error: FirstDimTile_2: Unsupported operation _BatchTilePlugin_TRT
parseModel: Failed to parse UFF model
ERROR: tlt/tlt_decode.cpp:274 failed to build network since parsing model errors.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:797 Failed to create network using custom network creation function
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:862 Failed to get cuda engine from custom library API
0:00:00.944565439 396 0x563947a11610 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() [UID = 1]: build engine file failed
Segmentation fault (core dumped)
```

[Screenshot attachment: benchmarking_error_yoloV3_1_Crop]

I have also followed the TRT-OSS plugin rebuild instructions at https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/tree/master/TRT-OSS/Jetson
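For context, the plugin rebuild I attempted looks roughly like the sketch below. This is a reconstruction from the TRT-OSS README, not a verbatim record: the branch, `GPU_ARCHS=75` (Tesla T4), and the x86 library paths are assumptions from my setup and may differ on other machines.

```shell
# Sketch of rebuilding the TensorRT OSS plugin library, which provides
# BatchTilePlugin_TRT (assumes TensorRT 7.0 and a Tesla T4, i.e. GPU_ARCHS=75;
# paths and version suffixes below are assumptions, verify against your install).
git clone -b release/7.0 https://github.com/NVIDIA/TensorRT.git
cd TensorRT
git submodule update --init --recursive
mkdir -p build && cd build
cmake .. -DGPU_ARCHS=75 -DTRT_LIB_DIR=/usr/lib/x86_64-linux-gnu \
         -DCMAKE_BUILD_TYPE=Release
make nvinfer_plugin -j"$(nproc)"

# Back up the stock plugin library, then replace it with the rebuilt one
# (the exact .so version suffix depends on the installed TensorRT).
sudo cp /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.0.0 \
        /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.0.0.bak
sudo cp libnvinfer_plugin.so.7.0.* \
        /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.0.0
sudo ldconfig
```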

But the error still persists:

```
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:1523 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_python_apps/apps/deepstream-imagedata-multistream/yolov3_resnet18.etlt_b1_gpu0_fp16.engine open error
0:00:11.954855314 2771 0x22e4ac0 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_python_apps/apps/deepstream-imagedata-multistream/yolov3_resnet18.etlt_b1_gpu0_fp16.engine failed
0:00:11.954914143 2771 0x22e4ac0 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_python_apps/apps/deepstream-imagedata-multistream/yolov3_resnet18.etlt_b1_gpu0_fp16.engine failed, try rebuild
0:00:11.954933439 2771 0x22e4ac0 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() [UID = 1]: Trying to create engine from model files
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: UffParser: Could not read buffer.
parseModel: Failed to parse UFF model
ERROR: tlt/tlt_decode.cpp:274 failed to build network since parsing model errors.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:797 Failed to create network using custom network creation function
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:862 Failed to get cuda engine from custom library API
0:00:12.046943291 2771 0x22e4ac0 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() [UID = 1]: build engine file failed
Segmentation fault (core dumped)
```
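In case it helps diagnose the "Could not read buffer" failure, the model-related entries in my nvinfer config follow the shape of the stock `pgie_yolov3_tlt_config.txt`. The paths and key below are illustrative placeholders, not my exact values:

```
[property]
# Path to the encrypted TAO/TLT model (placeholder path)
tlt-encoded-model=../../models/yolov3/yolov3_resnet18.etlt
# Must match the key the .etlt was exported with (placeholder value)
tlt-model-key=nvidia_tlt
# Engine file DeepStream tries to deserialize first; rebuilt if missing
model-engine-file=../../models/yolov3/yolov3_resnet18.etlt_b1_gpu0_fp16.engine
```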

Please help.