marcoslucianops / DeepStream-Yolo

NVIDIA DeepStream SDK 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models

Issues with DLA Core Execution on Jetson Device Post JetPack and DeepStream Update #495

Closed · angeelalg closed this 7 months ago

angeelalg commented 7 months ago

Description

I've been using the tools in this repository for real-time inference on my Jetson device with a custom YOLOv5n model. Previously, on DeepStream 6.0 and JetPack 4.6 (CUDA 10.2), I used gen_wts_yoloV5.py to generate the .wts and .cfg files from the .pt model.
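For reference, the old workflow looked roughly like this (command shown from memory; the exact flags may differ between script versions):

```sh
# Old workflow on DeepStream 6.0 / JetPack 4.6: generate .wts and .cfg from the .pt model.
# Flags are from memory and may vary with the script version.
python3 gen_wts_yoloV5.py -w yolov5n.pt
```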

After updating to JetPack 5.1 and DeepStream 6.2 and recompiling the custom library for CUDA 11.4, I ran into an issue: when generating the .engine file, I received the error "File /home/.../yolov5.wts is not supported."
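For completeness, the library rebuild step after the update was roughly the following (CUDA_VER has to match the CUDA version shipped with the JetPack release):

```sh
# Rebuild the custom parser library for CUDA 11.4 (JetPack 5.1 / DeepStream 6.2).
cd DeepStream-Yolo
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```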

After switching to the new .pt to .onnx conversion method, I was able to generate the .engine file successfully. However, where previously almost the entire model (apart from a few layers) ran on a DLA core, that is no longer the case: none of the model layers are executed on the DLA now, and everything falls back to the GPU.
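In case it helps to reproduce, these are the DLA-related properties in my nvinfer configuration (the file and engine names below are just from my setup and are only illustrative):

```ini
# config_infer_primary_yoloV5.txt (excerpt), illustrative names only.
# The DLA only runs FP16/INT8, so network-mode must not be FP32.
# enable-dla / use-dla-core ask TensorRT to place supported layers on the DLA;
# unsupported layers fall back to the GPU.
[property]
onnx-file=yolov5n.onnx
model-engine-file=model_b1_dla0_fp16.engine
network-mode=2
enable-dla=1
use-dla-core=0
```

Even with these set, TensorRT decides per layer whether it can run on the DLA, so some GPU fallback is normal; what changed for me is that no layers are placed on the DLA at all.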

Questions

Is there anything I need to change in the ONNX-based workflow or in the configuration so that the model layers run on the DLA core again? Any advice would be appreciated.

Best regards,

marcoslucianops commented 7 months ago

I will check.

angeelalg commented 7 months ago

I have been running more tests and found a rather silly mistake on my side. It now works as well as it did before. I have closed the issue, and I want to thank you for your repository. Thank you very much for your help, best regards.