NVIDIA-AI-IOT / yolo_deepstream

yolo model qat and deploy with deepstream&tensorrt
Apache License 2.0
549 stars · 139 forks

[Feature Request] Add support for Deepstream-6.0 (maybe?) #20

Open DusKing1 opened 3 years ago

DusKing1 commented 3 years ago

Hey devs, DeepStream 6.0 has now been released, and the old Makefile no longer works: it throws errors on CFLAGS. Can you add support for the new version?
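A hedged sketch of the usual first fix, assuming the CFLAGS error comes from the Makefile hard-coding DeepStream 5.x include/library paths. The paths, the CUDA version, and the `Makefile.demo` file below are assumptions for illustration, not part of this repo:

```shell
# Assumption: DeepStream 6.0 ships with CUDA 11.4 -- check your install.
export CUDA_VER=11.4
# Demonstrate the path rewrite on a throwaway copy rather than a real Makefile:
printf 'CFLAGS+= -I/opt/nvidia/deepstream/deepstream-5.1/sources/includes\n' > Makefile.demo
# Point any hard-coded 5.x path at the 6.0 install prefix:
sed -i 's|deepstream-5.1|deepstream-6.0|' Makefile.demo
cat Makefile.demo
```

After fixing the real Makefile this way, a `make clean && make CUDA_VER=$CUDA_VER` rebuild of the custom library is typically needed.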

shekarneo commented 3 years ago

++

hlvtung2001 commented 3 years ago

++

mfoglio commented 2 years ago

Do you have this error too? https://github.com/NVIDIA-AI-IOT/yolov4_deepstream/issues/22

mfoglio commented 2 years ago

Finally, I have been able to build the model and start a pipeline. However, it seems that no object is detected. Are you having the same issue too? I am also noticing only about 100 FPS on a T4 with FP16, which seems pretty low, since I have seen speeds 1 to 6 times faster using FP16 with YOLOv5s at 608x608 resolution.

Harish1810 commented 2 years ago

> Finally, I have been able to build the model and start a pipeline. However, it seems that no object is detected. Are you having the same issue too? I am also noticing only about 100 FPS on a T4 with FP16, which seems pretty low, since I have seen speeds 1 to 6 times faster using FP16 with YOLOv5s at 608x608 resolution.

Hi @mfoglio, I have the same issue with DeepStream 6.1. The pipeline does not output any bounding boxes.

mfoglio commented 2 years ago

Hi @Harish1810. I ended up using another model. I didn't find a way to get this one to work.

Harish1810 commented 2 years ago

@mfoglio When I used the custom parser from /opt/nvidia/deepstream/deepstream-5.1/sources/libs/nvdsinfer_customparser, this model worked for me in DS 6.
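For reference, wiring a custom parser into the pipeline is done through the Gst-nvinfer config file. This is only a sketch: `custom-lib-path` and `parse-bbox-func-name` are real nvinfer keys, but the exact library path and the parser function name below are assumptions; check the symbols actually exported by the library you built.

```ini
[property]
# Assumed path to the parser library built from the stock customparser sources:
custom-lib-path=/opt/nvidia/deepstream/deepstream-5.1/sources/libs/nvdsinfer_customparser/libnvds_infercustomparser.so
# Hypothetical function name -- must match a symbol exported by that library:
parse-bbox-func-name=NvDsInferParseCustomYolo
```

If the function name does not match an exported symbol, nvinfer fails to load the parser and the pipeline silently produces no bounding boxes, which is consistent with the behavior reported above.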