NVIDIA-AI-IOT / yolo_deepstream

YOLO model QAT and deployment with DeepStream & TensorRT
Apache License 2.0

During inference I get the below error on a custom YOLOv4 model. Could someone help? #25

Closed: h9945394143 closed this issue 1 year ago

h9945394143 commented 2 years ago

python3: nvdsparsebbox_Yolo.cpp:139: bool NvDsInferParseCustomYoloV4(const std::vector<NvDsInferLayerInfo>&, const NvDsInferNetworkInfo&, const NvDsInferParseDetectionParams&, std::vector<NvDsInferObjectDetectionInfo>&): Assertion `boxes.inferDims.numDims == 3' failed.
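
For context, the assertion fires because the parser expects the model's "boxes" output tensor to be three-dimensional, so a custom YOLOv4 whose output heads differ from the reference ONNX export will trip it. Below is a minimal debug sketch (my own addition, not code from this repo) that can be dropped into NvDsInferParseCustomYoloV4 in nvdsparsebbox_Yolo.cpp before the assert to print what the engine actually exposes, assuming the standard NvDsInferLayerInfo fields from nvdsinfer.h:

```cpp
// Hypothetical debug aid, not part of the repo: prints each output layer's
// name and shape so they can be compared against what the parser expects.
// Requires #include <iostream> at the top of nvdsparsebbox_Yolo.cpp.
for (const NvDsInferLayerInfo &layer : outputLayersInfo)
{
    std::cerr << "output layer '" << layer.layerName << "' numDims="
              << layer.inferDims.numDims << " dims=";
    for (unsigned int i = 0; i < layer.inferDims.numDims; ++i)
        std::cerr << layer.inferDims.d[i]
                  << (i + 1 < layer.inferDims.numDims ? "x" : "");
    std::cerr << std::endl;
}
```

If the printed shapes do not match what the parser assumes (for this assert, a 3-D boxes tensor), the mismatch comes from how the ONNX was exported rather than from DeepStream itself.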

mchi-zg commented 1 year ago

I uploaded the sample ONNX model here: https://drive.google.com/drive/folders/18TXX3c7_Of16zVeWrfCkyT4ooVMz44oW?usp=sharing

Harish1810 commented 1 year ago

python3: nvdsparsebbox_Yolo.cpp:139: bool NvDsInferParseCustomYoloV4(const std::vector<NvDsInferLayerInfo>&, const NvDsInferNetworkInfo&, const NvDsInferParseDetectionParams&, std::vector<NvDsInferObjectDetectionInfo>&): Assertion `boxes.inferDims.numDims == 3' failed.

I had the same issue. I solved it by using the custom bounding-box parser from /opt/nvidia/deepstream/deepstream-6.1/sources/libs/nvdsinfer_customparser/ together with the function "NvDsInferParseCustomBatchedNMSTLT". See the config sketch below.
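
For others landing here: the parser swap is wired up in the gst-nvinfer config file. A sketch of the relevant lines, assuming a default DeepStream 6.1 install and that libnvds_infercustomparser.so has been built in that directory (paths will differ on other setups):

```
# gst-nvinfer config snippet (illustrative values, adjust paths to your install)
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/opt/nvidia/deepstream/deepstream-6.1/sources/libs/nvdsinfer_customparser/libnvds_infercustomparser.so
```

Note that NvDsInferParseCustomBatchedNMSTLT parses the outputs of the TensorRT BatchedNMS plugin (num detections, boxes, scores, classes), so it only applies if the ONNX model was exported with that NMS head.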