Closed Fibo27 closed 1 year ago
Another point that I missed - it has been raised in the past, but I'm including it for the sake of completeness in case someone needs it.

While compiling jetson-inference on x86, the following files need to be updated:
https://github.com/dusty-nv/jetson-utils/blob/f0bff5c502f9ac6b10aa2912f1324797df94bc2d/python/CMakeLists.txt
https://github.com/dusty-nv/jetson-inference/blob/master/python/CMakeLists.txt
I have Python 3.10 installed, and I changed the following:

```cmake
if(LSB_RELEASE_CODENAME MATCHES "focal")
	set(PYTHON_BINDING_VERSIONS 3.8)
else()
	set(PYTHON_BINDING_VERSIONS 2.7 3.6 3.7)
endif()
```

to

```cmake
if(LSB_RELEASE_CODENAME MATCHES "focal")
	set(PYTHON_BINDING_VERSIONS 3.8)
else()
	set(PYTHON_BINDING_VERSIONS 3.10)
endif()
```
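After editing the CMakeLists, the project needs to be reconfigured and rebuilt for the change to take effect. A minimal sketch, assuming the standard jetson-inference out-of-source build layout (adjust the path to wherever you cloned the repo):

```shell
# Re-run cmake so the new PYTHON_BINDING_VERSIONS value is picked up,
# then rebuild and reinstall the libraries and Python bindings.
cd ~/jetson-inference/build
cmake ../
make -j$(nproc)
sudo make install
sudo ldconfig   # refresh the shared-library cache after install
```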
@Fibo27 I'm quite impressed that you've gotten that far with this code on x86, great work! 👍
Post compilation, the video_viewer launch file works fine, but none of the detection programs are working, as I get this message:

```
[detectnet-2] [TRT] failed to find model manifest file 'networks/models.json'
[detectnet-2] [TRT] couldn't find built-in detection model 'ssd-mobilenet-v2'
```
This may have the same root cause as https://github.com/dusty-nv/ros_deep_learning/issues/123 , which is the /jetson-inference/data folder not being mounted into the container. First, clone jetson-inference on your machine if you haven't already. Then when you start your container with `docker run`, add this flag:

```
--volume /host/path/to/jetson-inference/data:/jetson-inference/data
```
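For context, a full invocation might look like the sketch below. The image tag and host path are placeholders (assumptions, not taken from this thread) - substitute wherever you cloned jetson-inference and whichever container image you are using:

```shell
# Hypothetical example - adjust the host path and image tag to your setup.
docker run --runtime nvidia -it --rm --network host \
    --volume ~/jetson-inference/data:/jetson-inference/data \
    <your-jetson-inference-ros-image>
```

If you use the repo's `docker/run.sh` wrapper instead of raw `docker run`, the same idea applies: the host-side data folder has to end up mounted at `/jetson-inference/data` inside the container so the models and `networks/models.json` are visible.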
> I have also looked at using ISAAC-ROS as that supports x86. However, there is no discussion of receiving video/image data on RTP.
Yes, I would recommend considering ISAAC ROS, and while I haven't tried this yet, you could in theory just continue using my video_source and/or video_output nodes (which support RTP/RTSP/WebRTC/etc.)
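As a sketch of what that could look like: ros_deep_learning ships launch files for the video_source node that accept the stream URI as a launch argument. The port number and codec below are placeholders, and the exact argument names should be checked against the ros_deep_learning README for your ROS distro:

```shell
# Hedged example: receive an RTP stream into the video_source node,
# which republishes the frames as a ROS image topic that downstream
# nodes (e.g. ISAAC ROS pipelines, RViz) can subscribe to.
ros2 launch ros_deep_learning video_source.ros2.launch \
    input:=rtp://@:1234 \
    input_codec:=h264
```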
Thanks @dusty-nv for your feedback. A couple of points:

1) My above request was incidentally about using jetson-inference with the ros_deep_learning package built from source without Docker - I am not very good at Docker (something to learn for the future). While I am able to build both the jetson-inference repo and the ros_deep_learning node from source on my x86 setup without Docker, I am running into the same issue of the jetson-inference/data folder not being accessible by the script. I am not sure how to fix this - your response above is relevant if my ros_deep_learning package were running in Docker, but at the moment I have built it directly on my setup. Any suggestion from your end will be helpful (again, I am mindful that you do not support x86 setups).

2) On the other issue where you made changes to the ros_deep_learning script, i.e. running it inside jetson-inference - I confirm that it now works. Thank you.

3) There is still the issue of the codec in the launch file, for which you have committed a fix (https://github.com/dusty-nv/jetson-inference/commit/c6602dd46fd9a5fd46934db8933cb54b18665bae) - however, when I ran the container using docker/run.sh --ros=humble, this change didn't flow through - FYI, I cloned the updated jetson-inference repo (the master branch). Not sure what is going on.
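One thing worth trying for the native (non-Docker) build: the error `failed to find model manifest file 'networks/models.json'` suggests the path is being resolved relative to the process's working directory. A hedged workaround, assuming that is indeed how the lookup behaves (the path below is a placeholder for your clone location):

```shell
# Make the networks folder visible from the directory you launch the
# ROS node from, e.g. via a symlink. Alternatively, cd into
# jetson-inference/data before launching.
ln -s /path/to/jetson-inference/data/networks ./networks
```

This is a sketch, not a confirmed fix - if the node still can't find the manifest, the lookup may use a compiled-in install path instead, in which case `sudo make install` from the jetson-inference build tree is the other thing to check.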
Hi @dusty-nv
Thank you for your prompt feedback on my comments on https://github.com/dusty-nv/ros_deep_learning/issues/123 - I have created a new thread as this relates to a different issue. I have the following setups:
- Jetson Orin Nano: JetPack 5.1.1
- x86: Ubuntu 22.04, CUDA: 12.1, cuDNN: 8.9.2, TensorRT: 8.6, GPU: RTX 2080
- Two robots: one on an RPi 8GB and another on a Jetson Nano 4GB. Both have lidar, sonar, an IMU, and an RPi camera (CSI). The ROS nodes transmit video data using RTP, and my above jetson-inference setups capture the transmitted data as an input source over RTP.
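For readers trying to reproduce a setup like this, the robot-side RTP sender can be expressed as a plain GStreamer pipeline. This is a hedged sketch using standard GStreamer elements, not the exact pipeline from this thread - the device, resolution, host IP, and port are all placeholders:

```shell
# Capture from a V4L2 camera, H.264-encode with low-latency settings,
# packetize as RTP, and send over UDP to the x86 machine.
gst-launch-1.0 v4l2src device=/dev/video0 ! \
    'video/x-raw,width=1280,height=720,framerate=30/1' ! \
    videoconvert ! \
    x264enc tune=zerolatency bitrate=2000 speed-preset=ultrafast ! \
    rtph264pay config-interval=1 pt=96 ! \
    udpsink host=192.168.1.100 port=1234
```

On a Jetson sender, the software `x264enc` element would typically be swapped for the hardware encoder; the receiving side (jetson-inference / ros_deep_learning) then listens on `rtp://@:1234`.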
My issue: I want to use the compute ability of my x86 setup to carry out all the inferencing, rendering in RViz, and navigation.
Thanks again for the incredible work!
Cheers
PS: It has been quite a journey getting to this stage, including waiting for months to get my hands on an Orin Nano to try out the detection repos! FYI, I have also looked at using ISAAC-ROS as it supports x86. However, there is no discussion of receiving video/image data on RTP. I have posted a message regarding that at https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_visual_slam/issues/99 but clearly they are not as efficient as you! If they really want the developer community to be engaged, then they need to be responsive.