Closed — pedroVigano closed this issue 1 month ago.
Hi @pedroVigano You would not need to install the MIPI driver when using a D435f as the fix in the MIPI driver is for using JetPack 6 with an IMU-equipped camera such as D435i or D455. D435f does not have an IMU.
In the installation_jetson.md instructions, Intel strongly recommends enabling the barrel jack power connector on Nano-type Jetson boards using the instructions at the link below. Have you enabled your Orin Nano's barrel jack, please?
https://jetsonhacks.com/2019/04/10/jetson-nano-use-more-power/
Does the problem still occur if you use a lower resolution and FPS for depth and color?
ros2 launch realsense2_camera rs_launch.py depth_module.depth_profile:=640x480x15 rgb_camera.color_profile:=640x480x15 pointcloud.enable:=true
If I understood correctly, this Intel recommendation is for Jetson Nano boards, not Jetson Orin Nano boards. I'm using the 19V power supply that comes with the board and power mode 0, which means 15W. I've tried lowering the resolution as you suggested, but it didn't work.
Like the original Nano, Orin Nano boards can benefit from having the barrel jack enabled for extra power.
Which method did you use to install the librealsense SDK on your Nano, please?
If you installed from packages using the installation_jetson.md instructions, then the SDK will automatically have pointcloud acceleration enabled. This offloads processing of the pointcloud from the Jetson's CPU onto its Nvidia GPU.
If you built the SDK from source code with CMake though then the flag -DBUILD_WITH_CUDA=ON needs to be included in the CMake build instruction to enable CUDA support and process pointclouds on the GPU, otherwise enabling pointclouds will impose a very high processing burden on the CPU.
I'm using a barrel jack (the native power supply that comes with the board) and not a micro-USB connector, so I think extra power is not the problem. I've installed the librealsense SDK following step 4. Install with Debian Packages: https://github.com/IntelRealSense/librealsense/blob/master/doc/installation_jetson.md#4-install-with-debian-packages. That means I've run the following commands:
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-key F6E65AC044F831AC80A06380C8B3A55A6F3EFCDE || sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-key F6E65AC044F831AC80A06380C8B3A55A6F3EFCDE
sudo add-apt-repository "deb https://librealsense.intel.com/Debian/apt-repo $(lsb_release -cs) main" -u
sudo apt-get install librealsense2-utils
sudo apt-get install librealsense2-dev
However, I think what you said may be the problem. The following is the output of nvidia-smi and htop when running realsense-viewer:
And the following is the output when running:
ros2 launch realsense2_camera rs_launch.py depth_module.depth_profile:=640x480x15 rgb_camera.color_profile:=640x480x15 pointcloud.enable:=true
It seems the GPU is not being used at all, for both realsense-viewer and the ROS2 wrapper. How should I proceed? Should I uninstall librealsense and reinstall it using the instructions under Building from Source using RSUSB Backend? According to https://github.com/IntelRealSense/librealsense/blob/master/doc/installation_jetson.md, Building from Source using Native Backend is incompatible with the Jetson Orin™ board on JetPack 6.0.
The easiest way to proceed would be to use the libuvc backend procedure at the link below, which works the same as RSUSB, and build librealsense from source code using the provided libuvc_installation.sh build script.
https://github.com/IntelRealSense/librealsense/blob/master/doc/libuvc_installation.md
You should change the CMake build instruction on line 46 of the script to this:
cmake ../ -DFORCE_LIBUVC=true -DCMAKE_BUILD_TYPE=release -DBUILD_EXAMPLES=TRUE -DBUILD_GRAPHICAL_EXAMPLES=TRUE -DBUILD_WITH_CUDA=ON
https://github.com/IntelRealSense/librealsense/blob/master/scripts/libuvc_installation.sh#L46
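If it helps, that edit can also be made non-interactively with a sed replacement. The snippet below is only a sketch: it demonstrates the substitution on a stand-in file (demo_script.sh), and the exact wording of the script's stock cmake line is an assumption; run the same sed command on libuvc_installation.sh for the real edit.

```shell
# Sketch: swap the build script's cmake line for a CUDA-enabled one via sed.
# Shown on a stand-in file; the stock line's exact flags are an assumption.
new='cmake ../ -DFORCE_LIBUVC=true -DCMAKE_BUILD_TYPE=release -DBUILD_EXAMPLES=TRUE -DBUILD_GRAPHICAL_EXAMPLES=TRUE -DBUILD_WITH_CUDA=ON'
printf 'cmake ../ -DFORCE_LIBUVC=true -DCMAKE_BUILD_TYPE=release\n' > demo_script.sh
sed -i "s|^cmake \.\./ .*|$new|" demo_script.sh   # point this at libuvc_installation.sh for real
grep -c 'BUILD_WITH_CUDA=ON' demo_script.sh        # should report 1 match
```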
First, I uninstalled all librealsense and ros-humble-realsense packages with apt remove. Then I did exactly as you said: changed line 46 of the script and ran ./libuvc_installation.sh. It worked normally; after some minutes it returned Librealsense script completed. After that, I reinstalled the ROS2 wrapper with sudo apt install ros-humble-realsense2-*. However, nothing changed... It keeps working in realsense-viewer but not with the ROS2 wrapper. Running nvidia-smi still shows that the GPU is not being used in either case.
The message No stream match for pointcloud chosen texture Process - Color indicates that RGB color frames are being dropped, likely because of the poor performance of color and depth when the pointcloud is enabled.
Can you confirm whether resetting the camera at launch, by adding initial_reset:=true to your launch instruction, makes a difference?
Is the pointcloud published if you launch with ros2 run instead of ros2 launch?
ros2 run realsense2_camera realsense2_camera_node --ros-args -p pointcloud.enable:=true
Adding initial_reset:=true to the launch instruction doesn't change anything.
If I run ros2 run realsense2_camera realsense2_camera_node --ros-args -p pointcloud.enable:=true, the pointcloud works but the color image stops. If I run ros2 run realsense2_camera realsense2_camera_node --ros-args -p pointcloud.enable:=true -p depth_enable:=false, then the pointcloud stops and the color image comes back.
I was trying to check GPU usage with tegrastats and, if I don't launch RViz2, running the realsense2_camera node via ros2 launch or ros2 run doesn't use any percentage of the GPU. However, when running realsense-viewer, the GPU seems to be correctly used (although I don't know if that's because of the pointcloud or the viewer itself).
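For reference, the GPU load figure in tegrastats output is the GR3D_FREQ field. A quick filter like the one below pulls it out so it can be watched while the camera node runs; the sample line is illustrative, not real Orin Nano output.

```shell
# Extract the GPU load (GR3D_FREQ) from a tegrastats line.
# The sample line below is illustrative, not captured from an Orin Nano.
sample='RAM 3621/7620MB SWAP 0/3810MB CPU [12%@1510,8%@1510,5%@1510] GR3D_FREQ 54% EMC_FREQ 3%'
gpu=$(printf '%s\n' "$sample" | grep -o 'GR3D_FREQ [0-9]*%' | awk '{print $2}' | tr -d '%')
echo "GPU load: ${gpu}%"   # 0% while the node streams pointclouds means no GPU offload
# Live version, run alongside the camera node:
#   tegrastats | grep -o 'GR3D_FREQ [0-9]*%'
```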
If the ROS wrapper is built from source with colcon build, there is an alternative to CUDA for accelerating pointclouds on the GPU. To activate it, the wrapper should be colcon-built with the instruction below.
colcon build --cmake-args '-DBUILD_ACCELERATE_GPU_WITH_GLSL=ON'
When using the launch file after building the wrapper this way, including the parameter accelerate_gpu_with_glsl:=true in the launch instruction, or setting the parameter to true in the rs_launch.py launch file (its default is false), should activate GLSL pointcloud acceleration.
Finally it worked! Thank you @MartyG-RealSense for your assistance. I followed these last instructions and it started running perfectly!
You are very welcome, @pedroVigano - it's great to hear that GLSL resolved your problem. Thanks very much for the update!
Case closed due to solution achieved and no further comments received.
Issue Description
My issue has already been described for other versions of ROS and Ubuntu on Jetson devices. It's exactly the same as pointed out here https://github.com/IntelRealSense/realsense-ros/issues/2575 or here https://github.com/IntelRealSense/realsense-ros/issues/1967. When I try to enable the pointcloud, the RGB image stops and the depth FPS decreases dramatically. I've tried running this command:
ros2 launch realsense2_camera rs_launch.py depth_module.depth_profile:=1280x720x30 pointcloud.enable:=true
and this was the result:
[realsense2_camera_node-1] [INFO] [1727705895.446213916] [camera.camera]: RealSense ROS v4.55.1
[realsense2_camera_node-1] [INFO] [1727705895.446492898] [camera.camera]: Built with LibRealSense v2.55.1
[realsense2_camera_node-1] [INFO] [1727705895.446541347] [camera.camera]: Running with LibRealSense v2.55.1
[realsense2_camera_node-1] [INFO] [1727705895.495078843] [camera.camera]: Device with serial number 234322070131 was found.
[realsense2_camera_node-1] [INFO] [1727705895.495249119] [camera.camera]: Device with physical ID /sys/devices/platform/bus@0/3610000.usb/usb2/2-1/2-1.2/2-1.2:1.0/video4linux/video0 was found.
[realsense2_camera_node-1] [INFO] [1727705895.495282559] [camera.camera]: Device with name Intel RealSense D435 was found.
[realsense2_camera_node-1] [INFO] [1727705895.495718249] [camera.camera]: Device with port number 2-1.2 was found.
[realsense2_camera_node-1] [INFO] [1727705895.495752234] [camera.camera]: Device USB type: 3.2
[realsense2_camera_node-1] [INFO] [1727705895.496435993] [camera.camera]: JSON file is not provided
[realsense2_camera_node-1] [INFO] [1727705895.496480858] [camera.camera]: Device Name: Intel RealSense D435
[realsense2_camera_node-1] [INFO] [1727705895.496506363] [camera.camera]: Device Serial No: 234322070131
[realsense2_camera_node-1] [INFO] [1727705895.496523451] [camera.camera]: Device physical port: /sys/devices/platform/bus@0/3610000.usb/usb2/2-1/2-1.2/2-1.2:1.0/video4linux/video0
[realsense2_camera_node-1] [INFO] [1727705895.496540507] [camera.camera]: Device FW version: 5.16.0.1
[realsense2_camera_node-1] [INFO] [1727705895.496555612] [camera.camera]: Device Product ID: 0x0B07
[realsense2_camera_node-1] [INFO] [1727705895.496570012] [camera.camera]: Sync Mode: Off
[realsense2_camera_node-1] [WARN] [1727705895.665337798] [camera.camera]: re-enable the stream for the change to take effect.
[realsense2_camera_node-1] [INFO] [1727705895.666759494] [camera.camera]: Set ROS param depth_module.infra_profile to default: 848x480x30
[realsense2_camera_node-1] [WARN] [1727705895.674869274] [camera.camera]: Could not set param: rgb_camera.power_line_frequency with 3 Range: [0, 2]: parameter 'rgb_camera.power_line_frequency' could not be set: Parameter {rgb_camera.power_line_frequency} doesn't comply with integer range.
[realsense2_camera_node-1] [INFO] [1727705895.687740280] [camera.camera]: Set ROS param rgb_camera.color_profile to default: 640x480x30
[realsense2_camera_node-1] [INFO] [1727705895.699209912] [camera.camera]: Stopping Sensor: Depth Module
[realsense2_camera_node-1] [INFO] [1727705895.699686786] [camera.camera]: Stopping Sensor: RGB Camera
[realsense2_camera_node-1] [INFO] [1727705895.722822021] [camera.camera]: Starting Sensor: Depth Module
[realsense2_camera_node-1] [INFO] [1727705895.735504991] [camera.camera]: Open profile: stream_type: Depth(0), Format: Z16, Width: 1280, Height: 720, FPS: 30
[realsense2_camera_node-1] [INFO] [1727705895.742369176] [camera.camera]: Starting Sensor: RGB Camera
[realsense2_camera_node-1] [INFO] [1727705895.752405239] [camera.camera]: Open profile: stream_type: Color(0), Format: RGB8, Width: 640, Height: 480, FPS: 30
[realsense2_camera_node-1] [INFO] [1727705895.756202347] [camera.camera]: RealSense Node Is Up!
[realsense2_camera_node-1] [WARN] [1727705915.909334565] [camera.camera]: No stream match for pointcloud chosen texture Process - Color
I've followed all instructions in installation_jetson.md, except for the steps to build the MIPI driver, as I want to connect via USB. It's important to note that realsense-viewer is working perfectly. The only known solution for ROS 2, if I understood correctly, is to install the old ROS2 wrapper 3.2.2 and librealsense 2.48.0. However, this is not an option for ROS2 Humble users on Ubuntu 22.04, as v2.48.0 supports Ubuntu 16.04/18.04/20.04 with kernel versions 4.[4, 8, 10, 13, 15], 4.16(4), 4.18, 5.[0, 3, 4, 8]. I've tried installing v2.48.0, but it didn't work as expected. What should be the solution in my case? Thanks in advance!