NVIDIA-AI-IOT / deepstream_triton_model_deploy

How to deploy open source models using DeepStream and Triton Inference Server

triton can not load centerface onnx model #5

Closed. xavier-xiadia closed this issue 3 years ago

xavier-xiadia commented 3 years ago

Thank you for your code on Triton in DeepStream. Some errors occurred when I ran it:

PROBLEM LOG:

```
2021-01-20 09:37:30.259459: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.2
I0120 09:37:32.215074 192 metrics.cc:164] found 1 GPUs supporting NVML metrics
I0120 09:37:32.220711 192 metrics.cc:173] GPU 0: GeForce RTX 2080 Ti
I0120 09:37:32.221101 192 server.cc:120] Initializing Triton Inference Server
ERROR: infer_trtis_server.cpp:617 TRTIS: failed to load model centerface, trtis_err_str:INTERNAL, err_msg:failed to load 'centerface', no version is available
ERROR: infer_trtis_backend.cpp:42 failed to load model: centerface, nvinfer error:NVDSINFER_TRTIS_ERROR
ERROR: infer_trtis_backend.cpp:184 failed to initialize backend while ensuring model:centerface ready, nvinfer error:NVDSINFER_TRTIS_ERROR
0:00:02.651921510 192 0x7fdf00002380 ERROR nvinferserver gstnvinferserver.cpp:362:gst_nvinfer_server_logger:<primary_gie> nvinferserver[UID 1]: Error in createNNBackend() <infer_trtis_context.cpp:223> [UID = 1]: failed to initialize trtis backend for model:centerface, nvinfer error:NVDSINFER_TRTIS_ERROR
I0120 09:37:32.374438 192 server.cc:179] Waiting for in-flight inferences to complete.
I0120 09:37:32.374455 192 server.cc:194] Timeout 30: Found 0 live models and 0 in-flight requests
0:00:02.652044816 192 0x7fdf00002380 ERROR nvinferserver gstnvinferserver.cpp:362:gst_nvinfer_server_logger:<primary_gie> nvinferserver[UID 1]: Error in initialize() <infer_base_context.cpp:78> [UID = 1]: create nn-backend failed, check config file settings, nvinfer error:NVDSINFER_TRTIS_ERROR
0:00:02.652056954 192 0x7fdf00002380 WARN nvinferserver gstnvinferserver_impl.cpp:439:start:<primary_gie> error: Failed to initialize InferTrtIsContext
0:00:02.652063559 192 0x7fdf00002380 WARN nvinferserver gstnvinferserver_impl.cpp:439:start:<primary_gie> error: Config file path: /root/deepstream_triton_model_deploy/centerface/config/centerface.txt
0:00:02.652150478 192 0x7fdf00002380 WARN nvinferserver gstnvinferserver.cpp:460:gst_nvinfer_server_start:<primary_gie> error: gstnvinferserver_impl start failed
** ERROR: <main:655>: Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie: Failed to initialize InferTrtIsContext
Debug info: gstnvinferserver_impl.cpp(439): start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInferServer:primary_gie:
Config file path: /root/deepstream_triton_model_deploy/centerface/config/centerface.txt
ERROR from primary_gie: gstnvinferserver_impl start failed
Debug info: gstnvinferserver.cpp(460): gst_nvinfer_server_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInferServer:primary_gie
App run failed
```

The log only tells me nvinfer error:NVDSINFER_TRTIS_ERROR, and nothing else.

Config:

- GPU: GeForce RTX 2080 Ti
- Docker image: nvcr.io/nvidia/deepstream:5.0.1-20.09-triton
- Repo: https://github.com/NVIDIA-AI-IOT/deepstream_triton_model_deploy.git (commit 0208221)
- Model: CenterFace ONNX from https://github.com/Star-Clouds/CenterFace/raw/master/models/onnx/centerface.onnx

xavier-xiadia commented 3 years ago

Also, I still don't know how the Triton embedded in DeepStream finds the model. I remember that in the standalone Triton docker the models live under /models; how does DeepStream locate them?
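For context: with the nvinferserver plugin, Triton runs in-process, so the standalone-container /models convention does not apply; the model repository root comes from the model_repo block inside the config file passed to the plugin (here centerface/config/centerface.txt). A minimal sketch of the relevant backend section, using the DeepStream 5.x protobuf field names; the root value below is illustrative and must point at the directory that contains centerface/<version>/model.onnx:

```
infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 1
  backend {
    trt_is {
      model_name: "centerface"
      version: -1                  # -1 = serve the latest numeric version directory
      model_repo {
        root: "../"                # illustrative: directory containing centerface/1/model.onnx
        strict_model_config: true  # require an explicit config.pbtxt instead of auto-generating one
      }
    }
  }
}
```

With this, Triton looks for the model under <root>/centerface/<version>/, which is why the directory layout matters.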

mjhuria commented 3 years ago

Hi Xavier,

Sorry for the late reply. Please check the repo now; I have updated the directory structure. You need to keep the model at centerface/centerface/1/model.onnx. Thanks!
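To spell that out: the "no version is available" error in the log means Triton found the centerface model directory but no numeric version subdirectory inside it. A quick sketch of placing the model, assuming the repo is cloned at /root as in the log (the download URL is the one from the original report):

```
# Assumes the repo is cloned at /root, matching the paths in the error log.
cd /root/deepstream_triton_model_deploy/centerface

# Triton requires <model_repo_root>/<model_name>/<numeric_version>/model.onnx,
# so create the version directory and put the ONNX file inside it.
mkdir -p centerface/1
wget -O centerface/1/model.onnx \
    https://github.com/Star-Clouds/CenterFace/raw/master/models/onnx/centerface.onnx
```

With version 1 in place, Triton's default version policy can resolve a version and the model loads.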