Closed angeelalg closed 7 months ago
I will check.
I have been conducting more tests and found a rather silly mistake on my end. It works as well as before. I have closed the issue, and I want to thank you for your repository and for your help. Best regards.
Description
I've been using the tools in this repository for real-time inference on my Jetson device with a custom YOLOv5n model. Previously, with `gen_wts_yoloV5.py`, I generated `.wts` and `.cfg` files from a `.pt` model on DeepStream 6.0 and JetPack 4.6 (CUDA 10.2).

After updating to JetPack 5.1 and DeepStream 6.2 and recompiling the library for CUDA 11.4, I ran into a problem: when generating the `.engine` file, I received the error "File /home/.../yolov5.wts is not supported."

Switching to the new `.pt` to `.onnx` conversion method, I successfully generated the `.engine` file. However, with the previous workflow almost the entire model (except for a few layers) ran on a DLA core, which is no longer the case. Now none of the model's layers are executed on the DLA; they all fall back to the GPU.

Questions

Why does the ONNX-based engine no longer place layers on the DLA, and how can I restore DLA execution?
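One thing worth checking (a sketch, not confirmed against this repository's setup): with the ONNX workflow, Gst-nvinfer builds the engine itself at startup, so DLA has to be requested in the nvinfer configuration file rather than at `.wts` conversion time, and the DLA only supports FP16/INT8 precision. A minimal config fragment might look like this (file names are placeholders):

```ini
# nvinfer config sketch for the ONNX workflow (paths are hypothetical)
[property]
onnx-file=yolov5n.onnx
model-engine-file=yolov5n.onnx_b1_dla0_fp16.engine
# DLA requires FP16 (network-mode=2) or INT8 (network-mode=1)
network-mode=2
# Request DLA core 0; unsupported layers fall back to the GPU
enable-dla=1
use-dla-core=0
```

You can also build and inspect the engine offline with `trtexec --onnx=yolov5n.onnx --useDLACore=0 --allowGPUFallback --fp16 --verbose`, whose layer log shows which layers were actually placed on the DLA versus falling back to the GPU.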
Any advice would be appreciated.
Best regards,