Isaac ROS DNN Inference

NVIDIA-accelerated DNN model inference ROS 2 packages using NVIDIA Triton/TensorRT for both Jetson and x86_64 with CUDA-capable GPU.

(Images: bounding boxes for people detection; segmentation mask for people detection)

Webinar Available

Learn how to use this package by watching our on-demand webinar: Accelerate YOLOv5 and Custom AI Models in ROS with NVIDIA Isaac


Overview

Isaac ROS DNN Inference contains ROS 2 packages for performing DNN inference, providing AI-based perception for robotics applications. DNN inference uses a pre-trained DNN model to ingest an input Tensor and output a prediction to an output Tensor.

(Image: typical node graph for DNN inference on image data)

Above is a typical graph of nodes for DNN inference on image data. The input image is resized to match the input resolution of the DNN; the image resolution may be reduced to improve DNN inference performance, which typically scales directly with the number of pixels in the image. DNN inference requires input Tensors, so a DNN encoder node converts the input image to Tensors, applying any pre-processing the DNN model requires. Once DNN inference is performed, a DNN decoder node converts the output Tensors into results that can be used by the application.
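To make the graph concrete, below is a minimal launch sketch that composes an image encoder and a TensorRT inference node into a single container; a model-specific decoder node would complete the graph. The package, plugin, and parameter names follow Isaac ROS conventions but are assumptions in this sketch and should be checked against the release you have installed.

```python
# Minimal sketch of the DNN inference graph as a ROS 2 Python launch file.
# Package, plugin, and parameter names follow Isaac ROS conventions but are
# assumptions here; verify them against your installed release.
from launch import LaunchDescription
from launch_ros.actions import ComposableNodeContainer
from launch_ros.descriptions import ComposableNode


def generate_launch_description():
    # Encoder: resizes the image to the DNN input resolution and packs it
    # into a Tensor, applying any required normalization.
    encoder_node = ComposableNode(
        package='isaac_ros_dnn_image_encoder',
        plugin='nvidia::isaac_ros::dnn_inference::DnnImageEncoderNode',
        name='dnn_image_encoder',
        parameters=[{'network_image_width': 640,
                     'network_image_height': 480}],
    )

    # Inference: consumes the encoded Tensor and publishes output Tensors.
    # See the fuller parameter sketch in the TensorRT section below.
    # A model-specific decoder node (not shown) would subscribe to the
    # output Tensors and publish detections, masks, etc.
    inference_node = ComposableNode(
        package='isaac_ros_tensor_rt',
        plugin='nvidia::isaac_ros::dnn_inference::TensorRTNode',
        name='tensor_rt',
        parameters=[{'model_file_path': '/path/to/model.onnx',
                     'engine_file_path': '/path/to/model.plan'}],
    )

    container = ComposableNodeContainer(
        name='dnn_inference_container',
        namespace='',
        package='rclcpp_components',
        executable='component_container_mt',
        composable_node_descriptions=[encoder_node, inference_node],
    )
    return LaunchDescription([container])
```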

TensorRT and Triton are provided as two separate ROS nodes for performing DNN inference. The TensorRT node uses TensorRT to provide high-performance deep learning inference. TensorRT optimizes the DNN model for inference on the target hardware, including Jetson and discrete GPUs, and supports the operations commonly used by DNN models. Newer or bespoke DNN models may include operations that TensorRT cannot run; for these models, use the Triton node.
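As a rough illustration of how the TensorRT node is configured, the snippet below spells out the parameters for the inference node used in the sketch above. The parameter names (model_file_path, engine_file_path, the binding names, and so on) are assumptions based on the Isaac ROS documentation, not a definitive interface.

```python
# The TensorRT node from the sketch above, with its parameters spelled out.
# Parameter names are assumptions based on the Isaac ROS documentation;
# verify them against your release.
from launch_ros.descriptions import ComposableNode

tensor_rt_node = ComposableNode(
    package='isaac_ros_tensor_rt',
    plugin='nvidia::isaac_ros::dnn_inference::TensorRTNode',
    name='tensor_rt',
    parameters=[{
        # ONNX model to optimize for the local GPU; the resulting engine
        # plan is cached at engine_file_path so later launches can skip
        # the (slow) optimization step.
        'model_file_path': '/path/to/model.onnx',
        'engine_file_path': '/path/to/model.plan',
        'force_engine_update': False,
        # ROS-side tensor names mapped onto the model's I/O bindings;
        # the binding names must match the names baked into the model.
        'input_tensor_names': ['input_tensor'],
        'input_binding_names': ['input'],
        'output_tensor_names': ['output_tensor'],
        'output_binding_names': ['output'],
    }],
)
```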

The Triton node uses the Triton Inference Server, which provides a common frontend over multiple inference backends (e.g. ONNX Runtime, TensorRT engine plans, TensorFlow, PyTorch). In-house benchmarks measure little difference between using TensorRT directly and configuring Triton with a TensorRT backend.
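A hedged sketch of the Triton alternative: instead of a single model file, the node is pointed at a Triton model repository and a model name, and the backend is selected by the repository's config.pbtxt. Again, the package, plugin, and parameter names are assumptions to be checked against your release; the node below is meant as a drop-in replacement for the TensorRT node in the earlier launch sketch.

```python
# Drop-in alternative to the TensorRT node above: a Triton node pointed at
# a Triton model repository. Parameter names are assumptions based on the
# Isaac ROS documentation; verify them against your release.
from launch_ros.descriptions import ComposableNode

# The model repository follows Triton's layout, e.g.
#   /path/to/models/<model_name>/config.pbtxt
#   /path/to/models/<model_name>/1/model.onnx   (or model.plan, model.pt, ...)
triton_node = ComposableNode(
    package='isaac_ros_triton',
    plugin='nvidia::isaac_ros::dnn_inference::TritonNode',
    name='triton',
    parameters=[{
        'model_name': 'my_model',                     # directory name in the repository
        'model_repository_paths': ['/path/to/models'],
        # Binding names must match the I/O names declared in config.pbtxt.
        'input_tensor_names': ['input_tensor'],
        'input_binding_names': ['input'],
        'output_tensor_names': ['output_tensor'],
        'output_binding_names': ['output'],
    }],
)
```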

Some DNN models may require custom DNN encoders to convert the input data into the Tensor format the model expects, and custom DNN decoders to convert the output Tensors into results the application can use. Use the provided DNN encoder and decoder nodes for image bounding box detection and image segmentation, or write your own custom node(s).
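For a custom decoder, the skeleton below shows the general shape of a Python node that subscribes to the inference node's output Tensors and applies model-specific post-processing. The TensorList message package, its field names, the output data type, and the topic name are assumptions based on the Isaac ROS interfaces; verify them against the message definitions and topic remappings in your installation.

```python
# Sketch of a custom DNN decoder node in rclpy. Message package, field
# names, dtype, and topic name are assumptions; check your installation.
import numpy as np
import rclpy
from rclpy.node import Node
from isaac_ros_tensor_list_interfaces.msg import TensorList  # assumed package


class CustomDecoderNode(Node):
    """Converts raw output Tensors into an application-level result."""

    def __init__(self):
        super().__init__('custom_dnn_decoder')
        # 'tensor_sub' is an assumed topic name; remap to match your graph.
        self._sub = self.create_subscription(
            TensorList, 'tensor_sub', self.on_tensors, 10)

    def on_tensors(self, msg: TensorList) -> None:
        for tensor in msg.tensors:
            # Reinterpret the raw byte buffer; float32 output is assumed
            # here and is model specific.
            data = np.frombuffer(bytes(tensor.data), dtype=np.float32)
            self.get_logger().info(
                f'{tensor.name}: {data.size} float32 values')
            # Model-specific post-processing (e.g. argmax over class
            # scores, bounding-box decoding) would go here.


def main():
    rclpy.init()
    node = CustomDecoderNode()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```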

[!Note] DNN inference can be performed on different types of input data, including audio, video, text, and various sensor data, such as LIDAR, camera, and RADAR. This package provides implementations for DNN encode and DNN decode functions for images, which are commonly used for perception in robotics. The DNNs operate on Tensors for their input, output, and internal transformations, so the input image needs to be converted to a Tensor for DNN inferencing.

Isaac ROS NITROS Acceleration

This package is powered by NVIDIA Isaac Transport for ROS (NITROS), which leverages type adaptation and negotiation to optimize message formats and dramatically accelerate communication between participating nodes.

Performance

| Sample Graph | Input Size | AGX Orin | Orin NX | Orin Nano 8GB | x86_64 w/ RTX 4060 Ti | x86_64 w/ RTX 4090 |
| --- | --- | --- | --- | --- | --- | --- |
| TensorRT Node (DOPE) | VGA | 48.1 fps (24 ms @ 30Hz) | 17.9 fps (56 ms @ 30Hz) | 13.1 fps (82 ms @ 30Hz) | 98.3 fps (13 ms @ 30Hz) | 296 fps (5.1 ms @ 30Hz) |
| Triton Node (DOPE) | VGA | 47.2 fps (23 ms @ 30Hz) | 20.4 fps (540 ms @ 30Hz) | 14.4 fps (790 ms @ 30Hz) | 94.2 fps (12 ms @ 30Hz) | 254 fps (4.6 ms @ 30Hz) |
| TensorRT Node (PeopleSemSegNet) | 544p | 460 fps (4.1 ms @ 30Hz) | 348 fps (6.1 ms @ 30Hz) | 238 fps (7.0 ms @ 30Hz) | 685 fps (2.9 ms @ 30Hz) | 675 fps (3.0 ms @ 30Hz) |
| Triton Node (PeopleSemSegNet) | 544p | 304 fps (4.8 ms @ 30Hz) | 206 fps (6.5 ms @ 30Hz) | | 677 fps (2.2 ms @ 30Hz) | 619 fps (1.9 ms @ 30Hz) |
| DNN Image Encoder Node | VGA | 522 fps (12 ms @ 30Hz) | 330 fps (12 ms @ 30Hz) | | 811 fps (6.6 ms @ 30Hz) | 822 fps (6.4 ms @ 30Hz) |


Documentation

Please visit the Isaac ROS Documentation to learn how to use this repository.


Packages

Latest

Update 2024-09-26: Update for Isaac ROS 3.1