NVIDIA-ISAAC-ROS / isaac_ros_visual_slam

Visual SLAM/odometry package based on NVIDIA-accelerated cuVSLAM
https://developer.nvidia.com/isaac-ros-gems
Apache License 2.0

Isaac_ROS and Inferencing #99

Open Fibo27 opened 1 year ago

Fibo27 commented 1 year ago

Hello: I have a robot that streams video as an RTP network stream. I have tested the system on a Jetson Nano and an RPi. To stream the video I created a ROS 2 node, which works on the Jetson Nano (Ubuntu 18.04 + Foxy) and on an RPi 4 8GB (RPi OS Bullseye + Iron). I have been able to run inferencing using Jetson-Inference and ros_deep_learning by @dusty-nv (https://github.com/dusty-nv/ros_deep_learning) on a Jetson Orin Nano and then publish the inferenced result as a ROS 2 Image. In short, the robot streams JPEG-encoded video, and I decode it on the Orin Nano and republish it. However, I am now limited by the compute power of the Orin Nano and by the Ubuntu distribution of JetPack 5.1.1.

My goal is to have RViz running and displaying all of the sensor nodes (laser + sonar + IMU + camera) and then use the Nav2 package to move the robot. I have tried running RViz on the Orin Nano, but it is compute-constrained.

I have an x86 machine with Ubuntu 22.04 and an NVIDIA RTX GPU where I want to do the heavy computing on the data sent by the robot's sensors (note: both the Jetson Nano and RPi 4 based robots have identical sensors, and both can transmit the same ROS 2 topics). Going through the Isaac ROS documentation, it is not clear to me whether the same functionality that @dusty-nv developed is available, i.e. accepting an RTP stream as input with codec:=mjpeg and then running DNN-model-based inferencing on it. FYI:

The robot streams video using this pipeline:

```python
pipeline = Gst.parse_launch(
    'nvarguscamerasrc ! video/x-raw(memory:NVMM),width=1280,height=720,framerate=120/1 '
    '! nvjpegenc ! rtpjpegpay ! udpsink host=ipaddress port=#'
)
```

and the inferencing node is launched with:

```
ros2 launch ros_deep_learning detectnet.ros2.launch input_codec:=mjpeg input:=rtp://@# output:=display://0
```
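For context, the receive side on the x86 host would mirror the sender above with a depayload/decode chain. Below is a minimal sketch that only builds the pipeline string; the caps (JPEG payload type 26), the port, and the sink element are my assumptions and have to match the actual sender settings:

```python
# Sketch of the receive-side GStreamer pipeline paired with the
# nvjpegenc/rtpjpegpay sender above. Port, caps, and sink are
# assumptions -- adjust them to match the sender.

def receive_pipeline(port: int, sink: str = "autovideosink") -> str:
    """Build a gst-launch-style string that depayloads and decodes
    the MJPEG RTP stream sent by the robot."""
    return (
        f"udpsrc port={port} "
        "caps=application/x-rtp,media=video,encoding-name=JPEG,payload=26 "
        "! rtpjpegdepay ! jpegdec ! videoconvert "
        f"! {sink}"
    )

if __name__ == "__main__":
    # On the x86 host this string could be passed to Gst.parse_launch()
    # or run directly with gst-launch-1.0.
    print(receive_pipeline(5000))
```

Swapping `autovideosink` for an `appsink` would let a ROS 2 node pull the decoded frames and publish them as a sensor_msgs/Image.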

I would appreciate any suggestions on a way forward.

Note: I have tried various packages using Picamera2 (based on libcamera) on the RPi, but to no avail; Ubuntu on the RPi has issues with the camera. The Jetson Nano is not as capable an SoC board as the RPi 4 when handling multiple sensors that need multi-threading, and the Orin AGX and above are too expensive and power-hungry for what I am trying to achieve. Thank you.