dusty-nv / ros_deep_learning

Deep learning inference nodes for ROS / ROS2 with support for NVIDIA Jetson and TensorRT

ROS Deep Learning on x86 #124

Closed Fibo27 closed 1 year ago

Fibo27 commented 1 year ago

Hi @dusty-nv

Thank you for your prompt feedback on my comments in https://github.com/dusty-nv/ros_deep_learning/issues/123. I have created a new thread since this relates to a different issue. I have the following set-ups:

  1. Jetson Orin Nano with JetPack 5.1.1

    • I have built a Docker image using jetson-inference + ROS Humble Desktop. Thank you for updating the repo with the changes, although the ROS version is still Base. This set-up works fine: I can receive camera images over RTP, run the ROS node to infer on the received images, and publish the results as ROS images. Given the limited compute resources on the Orin Nano, I am unable to run RViz or rqt efficiently, as I am also publishing laser scan, sonar scan, and other sensor data and use rqt_robot_steering to steer the robot remotely.
    • The above set-up has also been created directly on the Orin Nano. For the benefit of everyone: if you want to use ROS2, the CMakeLists.txt file needs to be updated as described in my earlier post at https://github.com/dusty-nv/ros_deep_learning/issues/121, otherwise it does not compile (see the sketch after this list).
  2. x86: Ubuntu 22.04, CUDA 12.1, cuDNN 8.9.2, TensorRT 8.6, GPU: RTX 2080

    • As a first step, I deployed ROS Iron. With the detection images published by the Jetson Orin Nano, I am able to view them in RViz (all my devices are on the same network). However, there is a lag and the set-up hangs, as I now have three machines sending data back and forth on the same network (the robot with its sensors, the Orin Nano running inference and publishing the results as a ROS topic, and the x86 machine doing the heavy-lifting navigation computing and display rendering).
    • You have created the Dockerfile to deploy jetson-inference on x86 and it works very well. Thank you again!
    • You have stated in many of your responses that building jetson-inference from source on x86 is something you do not support, and I appreciate that. After multiple attempts, I have been able to compile jetson-inference on my x86 set-up (as above) and it works well. So you can note that this repo builds with the latest versions of Ubuntu, CUDA, cuDNN, and TensorRT.
  3. Two robots: one on an RPi 8GB and another on a Jetson Nano 4GB. Both have lidar, sonar, an IMU, and an RPi camera (CSI). The ROS nodes transmit video data using RTP, and my jetson-inference set-ups above capture the transmitted data as an RTP input source.
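For context on the CMakeLists.txt change mentioned in point 1, the shape of the edit is roughly the following. This is an illustrative sketch only: the variable names are assumptions on my part, and the actual diff is in https://github.com/dusty-nv/ros_deep_learning/issues/121. The point is that a newer ROS2 distro has to be matched by the distro check so the build takes the ament/ROS2 path instead of falling through to the catkin/ROS1 branch:

    # illustrative sketch -- names assumed, see issue #121 for the actual change;
    # add your ROS2 distro (e.g. humble) to the check so it is detected as
    # ROS2 instead of defaulting to the ROS1/catkin branch
    if( ROS_DISTRO MATCHES "foxy" OR ROS_DISTRO MATCHES "galactic" OR ROS_DISTRO MATCHES "humble" )
        set(ROS_VERSION "ROS2")
    endif()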

My issue: I want to use the compute capability of my x86 set-up to carry out all of the inferencing, RViz rendering, and navigation.
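Concretely, what I am after is running something like the following on the x86 machine, with the robots' RTP streams as input. This follows the launch-file pattern from the ros_deep_learning README; the port number is just an example and must match whatever the sender pipeline uses:

    # run detection on the x86 box, consuming the RTP stream sent by a robot
    # (listen on port 1234 -- an example value -- and render locally)
    ros2 launch ros_deep_learning detectnet.ros2.launch \
        input:=rtp://@:1234 output:=display://0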

Thanks again for the incredible work!

Cheers

PS: It has been quite a journey getting to this stage, including waiting for months to get my hands on an Orin Nano to try out the detection repos! FYI, I have also looked at using ISAAC ROS, as it supports x86. However, there is no discussion of receiving video/image data over RTP. I have posted a message about that at https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_visual_slam/issues/99, but clearly they are not as efficient as you! If they really want the developer community to be engaged, then they need to be responsive.

Fibo27 commented 1 year ago

Another point that I missed, which has been raised in the past but is included here for completeness in case someone needs it:

While compiling jetson-inference on x86, the following files need to be updated: https://github.com/dusty-nv/jetson-utils/blob/f0bff5c502f9ac6b10aa2912f1324797df94bc2d/python/CMakeLists.txt and https://github.com/dusty-nv/jetson-inference/blob/master/python/CMakeLists.txt

I have Python 3.10 installed, and I changed the below:

    if(LSB_RELEASE_CODENAME MATCHES "focal")
        set(PYTHON_BINDING_VERSIONS 3.8)
    else()
        set(PYTHON_BINDING_VERSIONS 2.7 3.6 3.7)
    endif()

to

    if(LSB_RELEASE_CODENAME MATCHES "focal")
        set(PYTHON_BINDING_VERSIONS 3.8)
    else()
        set(PYTHON_BINDING_VERSIONS 3.10)
    endif()
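After rebuilding with that change, a quick sanity check that the 3.10 bindings built and import cleanly (jetson_inference and jetson_utils are the module names the bindings install):

    # verify the Python 3.10 bindings are importable and show where they live
    python3 -c "import jetson_inference, jetson_utils; print(jetson_inference.__file__)"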

dusty-nv commented 1 year ago

@Fibo27 I'm quite impressed that you've gotten that far with this code on x86, great work! 👍

  • Post-compilation, the video_viewer launch file works fine, but none of the detection programs work, as I get this message:

    [detectnet-2] [TRT] failed to find model manifest file 'networks/models.json'
    [detectnet-2] [TRT] couldn't find built-in detection model 'ssd-mobilenet-v2'

This may have the same root cause as https://github.com/dusty-nv/ros_deep_learning/issues/123, which is the /jetson-inference/data folder not being mounted into the container. First, clone jetson-inference on your machine if you haven't already. Then, when you start your container with docker run, add this flag:

--volume /host/path/to/jetson-inference/data:/jetson-inference/data
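For example (a minimal sketch; substitute the host path for wherever you cloned the repo and <your-image> for the image you built or pulled, and note that on x86 you would typically use --gpus all rather than Jetson's --runtime nvidia):

    # sketch only -- adjust the host path and image name for your set-up
    docker run -it --rm --gpus all --network host \
        --volume ~/jetson-inference/data:/jetson-inference/data \
        <your-image>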

I have also looked at using ISAAC ROS, as it supports x86. However, there is no discussion of receiving video/image data over RTP.

Yes, I would recommend considering ISAAC ROS. I haven't tried this yet, but you could in theory just continue using my video_source and/or video_output nodes (which support RTP/RTSP/WebRTC/etc.)
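Untested, but the idea would be something like running video_source standalone and remapping its image topic onto whatever topic your ISAAC ROS graph subscribes to. Treat the parameter and topic names below as assumptions to verify against the node source, and /image_raw as a placeholder for the real input topic:

    # untested sketch: publish the RTP stream into ROS2 with video_source and
    # remap its output (assumed topic: /video_source/raw) onto the image
    # topic the ISAAC ROS pipeline expects (placeholder: /image_raw)
    ros2 run ros_deep_learning video_source --ros-args \
        -p resource:=rtp://@:1234 \
        -r /video_source/raw:=/image_raw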

Fibo27 commented 1 year ago

Thanks @dusty-nv for your feedback. A couple of points:

  1. My request above was actually about using jetson-inference with the ros_deep_learning package built from source, without Docker (I am not very good at Docker; something to learn for the future). While I am able to build both the jetson-inference repo and the ros_deep_learning node from source on my x86 set-up without Docker, I am running into the same issue of the jetson-inference/data folder not being accessible by the script. I am not sure how to fix this: your response above applies when the ros_deep_learning package runs in Docker, but at the moment I have built it directly on my set-up. Any suggestion from your end would be helpful (again, I am mindful that you do not support x86 set-ups).
  2. On the other issue, where you made changes to the ros_deep_learning script (i.e., running it inside jetson-inference): I confirm that it now works. Thank you.
  3. There is still the issue of the codec in the launch file, for which you committed a fix (https://github.com/dusty-nv/jetson-inference/commit/c6602dd46fd9a5fd46934db8933cb54b18665bae). However, when I ran the container using docker/run.sh --ros=humble, this change didn't flow through. FYI, I cloned the updated jetson-inference repo (the master branch). Not sure what is going on.
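On point 1, based on the relative 'networks/models.json' path in the error, one workaround I am considering (untested, and an assumption on my part about how the models are located) is to make the data folder visible from the directory the node is launched from:

    # workaround idea (untested): the error shows a relative 'networks/' path,
    # so expose jetson-inference's data folder from the launch directory,
    # e.g. by symlinking it in (adjust ~/jetson-inference to your clone path)
    ln -s ~/jetson-inference/data/networks ./networks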