dusty-nv / jetson-inference

Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
https://developer.nvidia.com/embedded/twodaystoademo
MIT License

Newer object detection architecture #1762

Closed matanj83 closed 11 months ago

matanj83 commented 11 months ago

Hi Dusty, thanks very much for your amazing work and clear guides! I added a custom object detector to my C++ application on the Jetson Xavier NX using this useful repo. However, I'm concerned that the SSD architecture may limit performance in some scenarios (e.g. small objects), and I'm also not sure I want to commit to this architecture for the long run.

Is adding a newer object detection architecture part of the roadmap? Alternatively, is detectNet meant to be just an example, and is there a relatively simple way to use TensorRT from a C++ app with a custom ONNX model I already have?

Many thanks! Matan

dusty-nv commented 11 months ago

Thanks @matanj83, you can train SSD-Mobilenet at a higher 512x512 resolution, but yea, if you're on Xavier NX you can go with a higher-end DNN architecture. I'd probably recommend the ones that PeopleNet, etc. are trained on with the TAO Toolkit: https://github.com/dusty-nv/jetson-inference/blob/master/docs/detectnet-tao.md
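A back-of-the-envelope sketch of why the higher input resolution helps with small objects: an object covering a fixed fraction of the frame spans more pixels, and therefore more feature-map cells, at 512x512 than at 300x300. The stride of 8 for the finest SSD feature map is an illustrative assumption, not a value taken from this repo.

```python
# Sketch: pixel and feature-map coverage of a small object at two
# SSD input resolutions. Stride 8 is an assumed value for illustration.

def pixels_on_input(frac_of_image: float, input_size: int) -> float:
    """Pixels an object spans when it covers `frac_of_image` of the frame width."""
    return frac_of_image * input_size

def cells_on_feature_map(obj_pixels: float, stride: int = 8) -> float:
    """Feature-map cells the object spans at the given stride."""
    return obj_pixels / stride

for size in (300, 512):
    px = pixels_on_input(0.05, size)  # object covering 5% of the frame width
    print(f"{size}x{size}: {px:.0f} px -> {cells_on_feature_map(px):.1f} cells")
```

At 300x300 the example object spans under two cells of the assumed finest feature map, which leaves little signal for the detection head; at 512x512 it spans over three.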

Those TAO models can also take advantage of INT8 precision, which Xavier supports, for improved performance. I don't plan to add YOLO models to this repo, as there are too many variants and the support is too much to keep up with.
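For intuition on what INT8 mode buys you, here is a minimal sketch of symmetric INT8 quantization, the basic idea underneath TensorRT's INT8 support. TensorRT chooses the per-tensor scales via a calibration pass over representative data; the max-abs scale used here is a simplification for illustration.

```python
# Sketch of symmetric INT8 quantization with a per-tensor scale.
# TensorRT derives scales from calibration data; max-abs is a stand-in.

def quantize(values, scale):
    """Map floats to the int8 range [-127, 127] using a per-tensor scale."""
    return [max(-127, min(127, round(v / scale))) for v in values]

def dequantize(q, scale):
    """Recover approximate float values from quantized ints."""
    return [v * scale for v in q]

activations = [0.02, -1.5, 3.7, -0.9]          # example layer activations
scale = max(abs(v) for v in activations) / 127  # "calibration": max-abs
q = quantize(activations, scale)
recovered = dequantize(q, scale)
```

Eight-bit tensors quarter the memory traffic versus FP32 and map onto the INT8 math units on Xavier, which is where the speedup comes from, at the cost of the small rounding error visible in `recovered`.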

There are various samples using the TensorRT API directly under /usr/src/tensorrt. For C++, I wouldn't say it's 'simple', per se. The Python API requires a bit less code and has other tooling around it, like torch2trt and many GitHub projects.
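Alongside those samples, TensorRT ships the trtexec command-line tool (under /usr/src/tensorrt/bin on JetPack), which can build and benchmark an engine from a custom ONNX model without writing any API code. The file names below are placeholders.

```shell
# Build a serialized TensorRT engine from an ONNX model and benchmark it.
# model.onnx / model.engine are placeholder file names.
/usr/src/tensorrt/bin/trtexec --onnx=model.onnx \
                              --saveEngine=model.engine \
                              --fp16   # or --int8 (requires calibration data)
```

The saved engine can then be deserialized and run from your own C++ or Python code, which separates the one-time engine build from the inference loop.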

matanj83 commented 11 months ago

Thanks very much for your answer! It opens up several promising paths!