NVIDIA-AI-IOT / deepstream_triton_model_deploy

How to deploy open source models using DeepStream and Triton Inference Server
Apache License 2.0

Problem with deepstream-triton docker #17

Open bangpc opened 3 years ago

bangpc commented 3 years ago

I followed the instructions with `nvcr.io/nvidia/deepstream:5.1-21.02-triton`: I ran the change-dims step and then ran the app, but I got this error:

```
2021-07-06 08:31:11.413549: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.2
I0706 08:31:12.450158 4142 metrics.cc:164] found 1 GPUs supporting NVML metrics
I0706 08:31:12.455850 4142 metrics.cc:173] GPU 0: NVIDIA Quadro P4000
I0706 08:31:12.456117 4142 server.cc:120] Initializing Triton Inference Server
I0706 08:31:12.534433 4142 server_status.cc:55] New status tracking for model 'centerface'
I0706 08:31:12.534802 4142 model_repository_manager.cc:680] loading: centerface:1
I0706 08:31:12.538328 4142 onnx_backend.cc:203] Creating instance centerface_0_0_gpu0 on GPU 0 (6.1) using model.onnx
I0706 08:31:13.018687 4142 model_repository_manager.cc:837] successfully loaded 'centerface' version 1
INFO: infer_trtis_backend.cpp:206 TrtISBackend id:1 initialized model: centerface
0:00:01.979633096 4142 0x562898ea3ef0 ERROR nvinferserver gstnvinferserver.cpp:362:gst_nvinfer_server_logger: nvinferserver[UID 1]: Error in specifyBackendDims() [UID = 1]: failed to create trtis backend on model:centerface because tensor:input.1 input-dims is not correct
0:00:01.979667124 4142 0x562898ea3ef0 ERROR nvinferserver gstnvinferserver.cpp:362:gst_nvinfer_server_logger: nvinferserver[UID 1]: Error in createNNBackend() [UID = 1]: failed to specify trtis backend input dims for model:centerface, nvinfer error:NVDSINFER_CONFIG_FAILED
I0706 08:31:13.020304 4142 model_repository_manager.cc:708] unloading: centerface:1
I0706 08:31:13.020987 4142 model_repository_manager.cc:816] successfully unloaded 'centerface' version 1
I0706 08:31:13.021180 4142 server.cc:179] Waiting for in-flight inferences to complete.
I0706 08:31:13.021192 4142 server.cc:194] Timeout 30: Found 0 live models and 0 in-flight requests
0:00:01.980676621 4142 0x562898ea3ef0 ERROR nvinferserver gstnvinferserver.cpp:362:gst_nvinfer_server_logger: nvinferserver[UID 1]: Error in initialize() [UID = 1]: create nn-backend failed, check config file settings, nvinfer error:NVDSINFER_CONFIG_FAILED
0:00:01.980687041 4142 0x562898ea3ef0 WARN nvinferserver gstnvinferserver_impl.cpp:439:start: error: Failed to initialize InferTrtIsContext
0:00:01.980691218 4142 0x562898ea3ef0 WARN nvinferserver gstnvinferserver_impl.cpp:439:start: error: Config file path: /opt/nvidia/deepstream/deepstream-5.0/samples/trtis_model_repo/deepstream_triton_model_deploy/centerface/config/centerface.txt
0:00:01.980768517 4142 0x562898ea3ef0 WARN nvinferserver gstnvinferserver.cpp:460:gst_nvinfer_server_start: error: gstnvinferserver_impl start failed
** ERROR: : Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie: Failed to initialize InferTrtIsContext
Debug info: gstnvinferserver_impl.cpp(439): start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInferServer:primary_gie:
Config file path: /opt/nvidia/deepstream/deepstream-5.0/samples/trtis_model_repo/deepstream_triton_model_deploy/centerface/config/centerface.txt
ERROR from primary_gie: gstnvinferserver_impl start failed
Debug info: gstnvinferserver.cpp(460): gst_nvinfer_server_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInferServer:primary_gie
App run failed
```
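For context, the failing check is `specifyBackendDims()` complaining that the dims declared for the tensor `input.1` do not match what the loaded `model.onnx` reports. The usual place to reconcile this is the Triton model configuration (`config.pbtxt`) in the model repository. Below is a minimal sketch of what such a file looks like for this model; the actual `dims` values must match the fixed shape baked into the ONNX file by the change-dims step, so the `3, 480, 640` shown here is only an illustrative assumption, not the repo's confirmed values:

```
# config.pbtxt — illustrative sketch only; verify dims against your model.onnx
name: "centerface"
platform: "onnxruntime_onnx"
max_batch_size: 1
input [
  {
    name: "input.1"          # must match the ONNX input tensor name from the log
    data_type: TYPE_FP32
    dims: [ 3, 480, 640 ]    # CHW; assumed example — must equal the fixed dims set in model.onnx
  }
]
```

If the ONNX model still has dynamic dimensions (e.g. `-1` or symbolic axes) because the change-dims step did not take effect, nvinferserver can fail with exactly this `input-dims is not correct` error even when `config.pbtxt` looks reasonable, so it is worth inspecting the model's actual input shape first.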