NVIDIA-AI-IOT / yolo_deepstream

YOLO model QAT and deployment with DeepStream & TensorRT
Apache License 2.0

YOLOv7 EfficientNMS - Num Classes #50

Open · levipereira opened this issue 1 year ago

levipereira commented 1 year ago

I am currently working on integrating YOLOv7 with DeepStream and Triton Server. I have been using the NvDsInferParseCustomEfficientNMS function from /opt/nvidia/deepstream/deepstream-6.1/sources/libs/nvdsinfer_customparser/nvdsinfer_custombboxparser.cpp in my setup.

DeepStream / Triton Server - YOLOv7

Now I'm looking to transition to the implementation provided by NVIDIA in the repository https://github.com/NVIDIA-AI-IOT/yolo_deepstream/tree/main/deepstream_yolo/nvdsinfer_custom_impl_Yolo. However, I noticed that the code in this repository hardcodes the class count as NUM_CLASSES_YOLO, a constant that has no counterpart in the NvDsInferParseCustomEfficientNMS function.

static const int NUM_CLASSES_YOLO = 80;  // class count baked in at compile time (COCO)
#define OBJECTLISTSIZE 25200             // candidate boxes in the YOLOv7 output (640x640 input)
#define BLOCKSIZE  1024                  // CUDA threads per block
thrust::device_vector<NvDsInferParseObjectInfo> objects_v(OBJECTLISTSIZE);

extern "C" bool NvDsInferParseCustomYoloV7_cuda( 

As I run multiple YOLOv7 models as primary and secondary inference engines, I am concerned about having to compile a separate NvDsInferParseCustomYoloV7_cuda library for each model.

Could you advise whether there is a way to avoid compiling an individual NvDsInferParseCustomYoloV7_cuda for each model, and instead make the class count dynamic or configurable so that a single parser supports multiple models?
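
One way to avoid per-model builds (a minimal sketch, not code from this repository; the function name NvDsInferParseYoloV7Dynamic is hypothetical) would be to read the class count from detectionParams at runtime instead of baking it in, so a single compiled library serves every model:

// Sketch only: the existing decode logic would replace the placeholder
// body, using numClasses in place of the hardcoded NUM_CLASSES_YOLO.
#include <vector>
#include "nvdsinfer_custom_impl.h"

extern "C" bool NvDsInferParseYoloV7Dynamic(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferParseObjectInfo> &objectList)
{
    // Per-model class count, set by num-detected-classes in the nvinfer
    // config file -- no recompilation needed when the model changes.
    const int numClasses = detectionParams.numClassesConfigured;

    // ... decode the raw YOLOv7 output here, iterating numClasses class
    // scores per candidate box, then fill objectList ...
    (void)outputLayersInfo; (void)networkInfo;
    (void)numClasses; (void)objectList;
    return true;
}

// Lets nvinfer validate the symbol against the expected prototype.
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseYoloV7Dynamic);

Each model's nvinfer config could then point at the same library via custom-lib-path and parse-bbox-func-name, differing only in its num-detected-classes value (e.g. 80 for COCO, 2 for a two-class custom model). OBJECTLISTSIZE would likely need the same treatment for input resolutions other than 640x640, since 25200 is the candidate-box count for that resolution.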

Thank you for your assistance and guidance. Any help you can provide will be greatly appreciated.