marcoslucianops / DeepStream-Yolo

NVIDIA DeepStream SDK 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models

Degraded detection results after integrating YOLOv5 model into DeepStream #125

Closed · wartek69 closed this 2 years ago

wartek69 commented 2 years ago

Hello, thank you for this amazing repo!

For my use case I'm running YOLOv5 with DeepStream. I followed the steps in this repo to convert the default yolov5n, yolov5s, yolov5m and yolov5l models into .wts and .cfg files using the `python3 gen_wts_yoloV5.py -w yolov5s.pt` command. The conversion succeeds and I can run the model in DeepStream: I used the provided deepstream_app_config.txt and pointed the inference plugin to config_infer_primary_yoloV5.txt, as described in the tutorial.

However, the detections are not good at all. Running the same default model (e.g. yolov5s.pt) natively in PyTorch gives good results and detects more objects, so converting the model from PyTorch to DeepStream costs me detection accuracy. Note that for these experiments I'm only using the weight files from Ultralytics and the configs provided in this repo. Any clue why this might be happening and how it can be resolved? I suspect other people must have hit the same issue, since I'm not doing anything fancy.

I did try changing the streammux dimensions to 640x640, since that is the default resolution YOLOv5 uses for inference, but the detections are still not good. I'm using DeepStream 6.0 with the newest YOLO release, running on a Jetson Xavier AGX.

For reference, here are my deepstream_app_config.txt and config_infer_primary_yoloV5.txt.

deepstream_app_config.txt

```
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5

[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=960
gpu-id=0
nvbuf-memory-type=0

[source0]
enable=1
type=3
#uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4
uri=rtsp://user:pass@192.168.69.69
num-sources=1
gpu-id=0
cudadec-memtype=0

[sink0]
enable=1
type=2
sync=0
gpu-id=0
nvbuf-memory-type=0

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
live-source=1
batch-size=1
batched-push-timeout=40000
width=640
height=640
enable-padding=0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_yoloV5.txt

[tests]
file-loop=0
```
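One detail worth watching in the config above: with `[streammux]` set to `width=640`, `height=640` and `enable-padding=0`, a non-square source such as a 16:9 RTSP stream is scaled to a square without preserving its aspect ratio, which distorts the frames before they ever reach the model. A minimal alternative sketch, assuming a 1920x1080 source (the exact resolution of the camera here is not stated), is to keep the muxer at the source resolution, or enable padding, and let nvinfer do the resize to the network input:

```
[streammux]
gpu-id=0
live-source=1
batch-size=1
batched-push-timeout=40000
# keep the source resolution here (assuming a 1080p stream);
# nvinfer scales the frames to the 640x640 network input itself
width=1920
height=1080
# alternatively, keep 640x640 and set enable-padding=1 so the muxer letterboxes
enable-padding=0
nvbuf-memory-type=0
```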
config_infer_primary_yoloV5.txt

```
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
#custom-network-config=alex_models/yolov5sbest0701.cfg
custom-network-config=default_weights_deepstream/yolov5m.cfg
#model-file=alex_models/yolov5sbest0701.wts
model-file=default_weights_deepstream/yolov5m.wts
model-engine-file=model_b1_gpu0_fp32.engine
#int8-calib-file=calib.table
#labelfile-path=labels_custom.txt
labelfile-path=labels.txt
batch-size=1
network-mode=0
#num-detected-classes=4
num-detected-classes=80
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=4
maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
pre-cluster-threshold=0.25
```
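For readers retracing the same steps, the conversion flow described above looks roughly like the following. This is a sketch, not the exact commands from the thread: it assumes the DeepStream-Yolo repo sits next to a clone of ultralytics/yolov5, that `gen_wts_yoloV5.py` lives under the repo's `utils/` folder, and that the Jetson runs JetPack with CUDA 10.2 (matching DeepStream 6.0 on a Xavier AGX).

```bash
# Sketch of the conversion flow (paths and CUDA version are assumptions, see above).
# The converter script needs the ultralytics/yolov5 code on its import path,
# so it is run from inside a yolov5 clone.
git clone https://github.com/ultralytics/yolov5
cp DeepStream-Yolo/utils/gen_wts_yoloV5.py yolov5/
cd yolov5
python3 gen_wts_yoloV5.py -w yolov5s.pt          # produces yolov5s.cfg and yolov5s.wts

# Copy the generated files next to the DeepStream configs and build the custom parser lib
cp yolov5s.cfg yolov5s.wts ../DeepStream-Yolo/
cd ../DeepStream-Yolo
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo  # CUDA_VER depends on the installed JetPack
```

Note also that the infer config above already mirrors the PyTorch-side defaults: `net-scale-factor=0.0039215697906911373` is 1/255 (YOLOv5's 0..1 normalization) and `pre-cluster-threshold=0.25` matches Ultralytics' default confidence threshold, which points at the converted network itself rather than the preprocessing or thresholds.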

Looking forward to your reply! Kind regards, Alex

marcoslucianops commented 2 years ago

I'm seeing this issue as well and I'm working on it. I'll post an update once it's fixed.

wartek69 commented 2 years ago

Thank you! Let me know if you need anything from me; I'm happy to help!

roddylab commented 2 years ago

Does this affect DeepStream as a whole when using YOLOv5 models, or is it specific to this repo's build? Any update on how we can fix it and get better accuracy with config or build options?

wartek69 commented 2 years ago

@marcoslucianops any update on this issue? Has your investigation pinpointed where the problem arises?

marcoslucianops commented 2 years ago

@wartek69 @roddylab The repo has been updated, fixing the performance (FP16 mode) and accuracy of YOLOv5 models. Please use the new file to convert the model and check it again.
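In practice this means regenerating the .wts/.cfg with the updated converter and making sure the previously serialized TensorRT engine is not reused. A sketch, reusing the file names from the configs above and the same assumed layout as the earlier conversion sketch:

```bash
# Re-export with the updated converter (run from inside the yolov5 clone as before)
python3 gen_wts_yoloV5.py -w yolov5s.pt
cp yolov5s.cfg yolov5s.wts ../DeepStream-Yolo/

# Remove the old serialized engine so DeepStream rebuilds it from the new .cfg/.wts
# instead of loading the stale file referenced by model-engine-file=model_b1_gpu0_fp32.engine
cd ../DeepStream-Yolo
rm -f model_b1_gpu0_fp32.engine
deepstream-app -c deepstream_app_config.txt
```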

wartek69 commented 2 years ago

Thanks for the update @marcoslucianops! First tests indicate that the issue is indeed resolved and the results are better. Over the coming week I'll do more extensive testing; if I notice any problems I'll reopen this issue.