NVIDIA-AI-IOT / deepstream_triton_model_deploy

How to deploy open source models using DeepStream and Triton Inference Server
Apache License 2.0

Using the CenterNet model with batching in DeepStream #6

Open akashmanna opened 3 years ago

akashmanna commented 3 years ago

Hi, what changes should I make to use the CenterNet model with nvinferserver with a batch size greater than 1?

monjha commented 3 years ago

Hi, if your model supports batching, you can update the batch size in the configuration files: update config.pbtxt and centerface.txt to the batch size you want to use.
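For illustration, the two edits could look like the sketch below. The value 4 is just an example, and the centerface.txt field names follow DeepStream's nvinferserver protobuf schema, which can vary between DeepStream versions, so check the config shipped in this repo:

```
# config.pbtxt -- Triton model configuration (illustrative values)
name: "centerface"
platform: "tensorrt_plan"
max_batch_size: 4            # must not exceed the batch size the engine was built for

# centerface.txt -- Gst-nvinferserver config (field placement may differ by version)
infer_config {
  max_batch_size: 4          # keep in sync with config.pbtxt
}
```

The nvstreammux batch-size in the DeepStream application config typically needs to match as well, so that full batches actually reach the inference plugin.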

PiPiNam commented 3 years ago

Hi~ Have you successfully deployed the CenterNet model in DeepStream? I ran into a problem deploying a TensorRT model converted from CenterNet, because I don't know how DeepStream pre-processes the input image and post-processes the model's output. Looking forward to your kind reply~
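For context: pre-processing (scaling, mean/offset) is configured in nvinferserver's preprocess settings, but DeepStream does not know CenterNet's output format, so a custom parser has to replicate CenterNet's decode step: sigmoid on the heatmap, a max-pool pseudo-NMS that keeps local peaks, then box recovery from the wh and offset heads. A minimal NumPy sketch of that decode, assuming (H, W) / (2, H, W) tensor layouts and a downsample stride of 4 (both are assumptions, not this repo's exact layout):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def maxpool_nms(heat, k=3):
    """Keep only local maxima in the heatmap: CenterNet's max-pool pseudo-NMS."""
    pad = k // 2
    padded = np.pad(heat, pad, constant_values=-np.inf)
    out = np.zeros_like(heat)
    H, W = heat.shape
    for i in range(H):
        for j in range(W):
            if heat[i, j] == padded[i:i + k, j:j + k].max():
                out[i, j] = heat[i, j]
    return out

def decode(heat, wh, reg, stride=4, thresh=0.3):
    """heat: (H, W) logits; wh, reg: (2, H, W) heads.
    Returns [(x1, y1, x2, y2, score)] in input-image pixel coordinates."""
    scores = maxpool_nms(sigmoid(heat))
    dets = []
    for y, x in zip(*np.where(scores > thresh)):
        cx = (x + reg[0, y, x]) * stride   # sub-pixel center from the offset head
        cy = (y + reg[1, y, x]) * stride
        w = wh[0, y, x] * stride           # box size from the wh head
        h = wh[1, y, x] * stride
        dets.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2,
                     float(scores[y, x])))
    return dets
```

In DeepStream this logic would live in a custom output-parsing function attached via the nvinferserver config, with the same tensor names and layout as the exported model.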

jaemin93 commented 2 years ago

I made it work here: https://github.com/mAy-I/deepstream_centernet