NVIDIA-AI-IOT / deepstream_triton_model_deploy

How to deploy open source models using DeepStream and Triton Inference Server

An error about loading a model using Triton #11

Open · littletree123 opened this issue 3 years ago

littletree123 commented 3 years ago

Hi, I used Triton's Docker image to run your model. My steps were:

  1. Copied the centerface model directory into model_repository/.

  2. Because of network restrictions, I downloaded the model from a browser instead and ran `python3 change_dim.py`. The sha256sum of my centerface.onnx is 77e394b51108381b4c4f7b4baf1c64ca9f4aba73e5e803b2636419578913b5fe.

  3. Ran: docker run --rm -p8000:8000 -p8001:8001 -p8002:8002 -v/path/to/model_repository:/models nvcr.io/nvidia/tritonserver:20.09-py3 tritonserver --model-repository=/models --model-control-mode=explicit --load-model centerface

and I got this error: model_repository_manager.cc:899] failed to load 'centerface' version 1: Invalid argument: model 'centerface', tensor '537': the model expects 4 dimensions (shape [1,1,-1,-1]) but the model configuration specifies 4 dimensions (shape [1,1,120,160])

I don't know what the problem is. Thank you so much!
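The error means the ONNX graph's output '537' still has variable spatial dimensions ([1,1,-1,-1]) while config.pbtxt declares fixed ones ([1,1,120,160]), so the two disagree. One quick way to confirm is to inspect the graph directly; below is a minimal diagnostic sketch, assuming the `onnx` Python package is installed and centerface.onnx is in the current directory:

```python
# Print the declared shape of every graph output so it can be
# compared against the dims in config.pbtxt.
import onnx

model = onnx.load("centerface.onnx")
for out in model.graph.output:
    # Dynamic dimensions carry no dim_value; report them as -1.
    dims = [d.dim_value if d.HasField("dim_value") else -1
            for d in out.type.tensor_type.shape.dim]
    print(out.name, dims)
```

If output '537' prints [1, 1, -1, -1], then change_dim.py did not take effect on the downloaded file; either rerun it against this copy of the model, or relax the output dims in config.pbtxt (Triton accepts -1 for variable dimensions) so the configuration matches the graph.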

monjha commented 3 years ago

Hi,

Please run this application with the DeepStream Docker image (https://ngc.nvidia.com/catalog/containers/nvidia:deepstream/tags) and use the 5.1-21.02-triton tag.
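For reference, starting that container typically looks like the sketch below; the repository mount path is a placeholder, and the flags are the usual ones for a GPU-enabled interactive session rather than a command taken from this thread:

```sh
# Start the DeepStream 5.1 Triton container with GPU access and
# this repository mounted at /workspace (mount path is a placeholder).
docker run --gpus all -it --rm \
    -v /path/to/deepstream_triton_model_deploy:/workspace \
    nvcr.io/nvidia/deepstream:5.1-21.02-triton
```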

ghost commented 3 years ago

@littletree123 were you able to get the model working with DeepStream 5.1-21.02 and Triton?