ExcaliburKG closed this issue 1 year ago
I quickly browsed https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_using_custom_model.html.
I think you probably need to implement Custom Output Parsing.
The libmmdeploy_tensorrt_ops.so library is actually an IPlugin implementation.
Since I am not familiar with NVIDIA DeepStream, please allow me some time to investigate it. I'll get back to you as soon as I figure it out.
Hi, I've had a similar issue using a YOLOX head in DeepStream. The solution is to add a simple parser to libmmdeploy_tensorrt_ops.so, similar to mmdetparser. It needs to be in the same .so file because DeepStream only seems to be able to load one library/plugin. Since the parser is only needed in DeepStream and not in other TensorRT applications, would it make sense to create a libmmdeploy_deepstream_ops.so that includes the TensorRT ops as well as the additional parsers?
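For anyone wanting to attempt this themselves, a custom bounding-box parser for DeepStream is a single `extern "C"` function with the signature DeepStream expects from `nvdsinfer_custom_impl.h`. A minimal sketch is below; note the struct definitions here are simplified stand-ins for the real DeepStream headers (a real build includes `nvdsinfer_custom_impl.h` instead), the tensor layout (`dets` as `[num_dets, 5]` plus an integer `labels` tensor) and the fixed detection count are assumptions about the mmdeploy detection head, and the function name is hypothetical:

```cpp
#include <cstdint>
#include <vector>

// --- Simplified stand-ins for the DeepStream structs declared in
// nvdsinfer_custom_impl.h; a real build includes that header instead. ---
struct NvDsInferLayerInfo {
    const void* buffer;  // raw output tensor data (real struct also has dims, name, ...)
};
struct NvDsInferNetworkInfo { uint32_t width; uint32_t height; };
struct NvDsInferParseDetectionParams { float perClassThreshold; };
struct NvDsInferObjectDetectionInfo {
    uint32_t classId;
    float left, top, width, height;
    float detectionConfidence;
};

// Hypothetical parser for the two output tensors an mmdeploy detection
// head typically emits (layout is an assumption; check your exported ONNX):
//   dets:   [num_dets, 5] -> x1, y1, x2, y2, score
//   labels: [num_dets]    -> class id
extern "C" bool NvDsInferParseMMDeploy(
    std::vector<NvDsInferLayerInfo> const& outputLayersInfo,
    NvDsInferNetworkInfo const& /*networkInfo*/,
    NvDsInferParseDetectionParams const& detectionParams,
    std::vector<NvDsInferObjectDetectionInfo>& objectList)
{
    if (outputLayersInfo.size() < 2) return false;
    const float* dets  = static_cast<const float*>(outputLayersInfo[0].buffer);
    const int*   labels = static_cast<const int*>(outputLayersInfo[1].buffer);
    const int numDets = 100;  // assumption: fixed-size, zero-padded output

    for (int i = 0; i < numDets; ++i) {
        float score = dets[i * 5 + 4];
        if (score < detectionParams.perClassThreshold) continue;  // skip padding / weak boxes
        NvDsInferObjectDetectionInfo obj{};
        obj.classId = static_cast<uint32_t>(labels[i]);
        obj.left    = dets[i * 5 + 0];
        obj.top     = dets[i * 5 + 1];
        obj.width   = dets[i * 5 + 2] - dets[i * 5 + 0];  // x2 - x1
        obj.height  = dets[i * 5 + 3] - dets[i * 5 + 1];  // y2 - y1
        obj.detectionConfidence = score;
        objectList.push_back(obj);
    }
    return true;
}
```

As noted above, this would be compiled into the same .so as the TensorRT ops, and the pgie config would then reference the function name so nvinfer can find it in the loaded library.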
@lvhan028 Is this something you can help with? I'd like to add my parser to MMDeploy.
Hi, @ziggy84
mmdeploy focuses on deploying PyTorch models to various devices. We would like to leave the DeepStream integration to community repos. If you open-source your own repo that integrates mmdeploy with DeepStream or Triton, it will be our honor to list it among the awesome works based on mmdeploy.
This issue is marked as stale because it has been marked as invalid or awaiting response for 7 days without any further response. It will be closed in 5 days if the stale label is not removed or if there is no further response.
@lvhan028 could you share your parser? And yes, it would be nice if you could add yours to mmdeploy; a PR would be very helpful. I am struggling to do this as well for Faster R-CNN.
@ziggy84 could you share your parser?
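Whichever parser ends up being shared, the DeepStream side of the wiring is the same: the nvinfer (pgie) config must name both the custom library and the parse function inside it. A minimal sketch, with the library path and function name as hypothetical placeholders:

```ini
[property]
# path to the .so that contains both the TensorRT ops and the parser
custom-lib-path=/path/to/libmmdeploy_tensorrt_ops.so
# exported symbol of the custom bounding-box parse function
parse-bbox-func-name=NvDsInferParseMMDeploy
```

Without these two keys, nvinfer falls back to its built-in parsers, which do not understand mmdeploy's output tensors.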
Hi, I'm trying to use mmdeploy to convert a model to a TensorRT engine and use it in an NVIDIA DeepStream pipeline.
Main config (ATSS detector): https://github.com/open-mmlab/mmdetection/blob/master/configs/atss/atss_r101_fpn_1x_coco.py
Weights: https://download.openmmlab.com/mmdetection/v2.0/atss/atss_r101_fpn_1x_coco/atss_r101_fpn_1x_20200825-dfcadd6f.pth
Next, I use the mmdeploy conversion script:
The conversion completes without critical issues and I get the engine and ONNX files, but when I run the DeepStream pipeline I get the following error:
Please clarify whether one of the following files should contain the parseBoundingBox function that the model needs to work properly:
So, is it correct that after the conversion I do not need to write any additional code for an ONNX model to work in a DeepStream pipeline? My pgie config is: