IntelRealSense / realsense-ros

ROS Wrapper for Intel(R) RealSense(TM) Cameras
http://wiki.ros.org/RealSense
Apache License 2.0

Depth image is always 320x240 on Jetson Nano #3187

Closed TakShimoda closed 1 month ago

TakShimoda commented 1 month ago

Required Info
Camera Model: D435i
Firmware Version: 5.13.0.50
Operating System & Version: Ubuntu 20.04
Kernel Version (Linux Only): 4.9.253
Platform: NVIDIA Jetson Nano 4GB
SDK Version: 2.51.1
Language: ROS2
Segment: Robot

Issue Description

I noticed that whenever I launch the RealSense node with RGB-D, the depth image always has 320x240 resolution, even after setting depth_module.depth_profile:=640x480x30 as a launch argument. I also tried depth_module.profile:=640x480x30, but that didn't work either. My full launch command is:

ros2 launch realsense2_camera rs_launch.py unite_imu_method:=2 enable_sync:=true enable_color:=true enable_accel:=true enable_gyro:=true gyro_fps:=400 accel_fps:=250 depth_module.depth_profile:=640x480x30 depth_module.infra_profile:=640x480x30 rgb_module.color_profile:=640x480x30 depth_module.emitter_enabled:=0

The strange thing is that if I retrieve the parameter using ros2 param get /camera/camera depth_module.depth_profile, it shows the profile is 640x480x30. However, if I run ros2 topic echo /camera/camera/depth/camera_info, it says the resolution is 320x240, and I can confirm the depth image is that resolution. All other profiles seem to work (e.g. rgb_camera.color_profile:=640x480x30 enable_infra1:=true enable_infra2:=true depth_module.infra_profile:=640x480x30 sets both the color and infra streams to 640x480). What's strange is that 320x240 isn't even a profile the depth module reports in rs-enumerate-devices.

When I use the same setup on my PC (Ubuntu 20.04, kernel 5.15.0-94-generic, firmware 5.13.0.50, SDK 2.54.2, ROS2 Foxy), some things work but some don't. When I run the same launch command as above, it properly outputs the depth image at 640x480 resolution. However, if I try to change the depth profile to anything else returned by rs-enumerate-devices, such as 424x240x30, it gives the warning XXX Hardware Notification:Frames didn't arrived within 5 seconds,1.7241e+12,Warn,Frames Timeout and the depth image isn't published. I know it's not an issue of a wrong profile, as a wrong profile just gives an error output.

I tried a few things to fix this, but none of them worked. How can I fix this issue? Thanks.

EDIT: I just realized that on the PC I only had to set the infra profile to match the depth profile, since the infra images are rectified to produce the depth image. So the issue on my PC is fixed, but it persists on the Jetson Nano.

MartyG-RealSense commented 1 month ago

Hi @TakShimoda Have you made any edits to the post-processing filter settings in the rs_launch.py file, please? The depth resolution would usually halve if decimation_filter has been set to true instead of its default of false. This is because the decimation filter 'downsamples' the depth resolution by a default division factor of 2 when enabled, so 640x480 becomes 320x240.

https://github.com/IntelRealSense/realsense-ros/blob/ros2-master/realsense2_camera/launch/rs_launch.py#L78
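If the filter has been enabled, it can also be switched off at launch time rather than by editing the file. For example, assuming your wrapper version exposes the decimation_filter.enable parameter from the rs_launch.py list linked above:

ros2 launch realsense2_camera rs_launch.py decimation_filter.enable:=false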

Also, the rgb_module definition in your launch instruction should be rgb_camera.

rgb_camera.color_profile:=640x480x30

In regard to the high CPU usage, this can occur on Jetson boards if the librealsense SDK's CUDA support has not been enabled. It can be enabled by installing the Jetson version of the librealsense Debian packages (see the link below) or by compiling librealsense from source with the flag -DBUILD_WITH_CUDA=ON included in the CMake build instruction.

https://github.com/IntelRealSense/librealsense/blob/master/doc/installation_jetson.md#4-install-with-debian-packages
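If you compile from source instead, a typical sequence is sketched below (the build directory and job count are illustrative, not a definitive recipe):

git clone https://github.com/IntelRealSense/librealsense.git
cd librealsense && mkdir build && cd build
cmake .. -DBUILD_WITH_CUDA=ON -DCMAKE_BUILD_TYPE=Release
make -j4 && sudo make install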

TakShimoda commented 1 month ago

Hi Marty,

I realized I did have decimation_filter set to true, so I set it to false and the depth is back to 640x480. For the camera parameter, I realized it's rgb_camera.color_profile on my PC but rgb_camera.profile on the Jetson Nano.

As for the Jetson Nano, I did install from the Debian packages at the exact same link you provided (my Jetson Nano 4GB is on L4T 32.7.2, JetPack 4.6.2, CUDA 10.2), so I'm assuming it's already using CUDA. When I run my bash script, which basically launches the camera and records topics to a ROS2 bag, I profile the system while it runs.
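The script is essentially the following (launch arguments abbreviated; the full set is in my first post):

#!/bin/bash
# Launch the camera in the background, then record its topics to a ROS2 bag.
ros2 launch realsense2_camera rs_launch.py unite_imu_method:=2 enable_sync:=true ... &
ros2 bag record -a -o camera_run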

There always seem to be some power issues on my Jetson (I see the "System throttled due to over-current" message), but that's an issue outside of RealSense. I also notice that topics aren't recorded at their full frequency. For example, the IMU at 400Hz usually comes in lower, around 360-380Hz, and camera images that should be 30Hz are sometimes 25-29Hz. I'm assuming my system is already using CUDA to the best of its ability, but are there other ways to improve efficiency for my use case, which is mainly launching the camera and recording raw data? I was going to try ROS2 components: load the camera node and the bag-recording subscriber into one process so that messages are passed as C++ pointers rather than serialized ROS2 messages. Would I need to use the C++ API to do this? Are there any other suggestions for efficiency?

Thanks

MartyG-RealSense commented 1 month ago

Intel strongly recommends that Jetson Nano users enable the barrel jack power connector using the instructions at the link below. If your Nano has a barrel jack, have you enabled it?

https://jetsonhacks.com/2019/04/10/jetson-nano-use-more-power/

Yes, if you are not using point clouds or alignment then your CPU load percentage would not benefit from CUDA support being enabled and the processing load would be placed fully on the CPU.

Other than lowering your FPS to 15 or 6 (assuming you do not want to reduce the resolution lower than 640x480), there may not be much else you can do to reduce CPU load.
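For example, keeping 640x480 but halving the FPS, using the same profile syntax as in your launch command (or rgb_camera.profile on your Jetson's older wrapper, as you found):

ros2 launch realsense2_camera rs_launch.py depth_module.depth_profile:=640x480x15 rgb_camera.color_profile:=640x480x15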

I note that you include depth_module.infra_profile:=640x480x30 in your launch instruction. The infra topics are disabled by default, so would not be published unless at least enable_infra1:=true was included in the launch instruction or you had edited the launch file to set infra1 to true.
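For example:

ros2 launch realsense2_camera rs_launch.py enable_infra1:=true enable_infra2:=true depth_module.infra_profile:=640x480x30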

TakShimoda commented 1 month ago

Hello Marty,

For now, I'm powering the Jetson Nano with 2 GPIO pins at 5V 3A each, for 30W, as I thought this was the most convenient option. For the barrel jack, since the Jetson Nano is on a small robotics platform, I would have to get the battery adapter plate and mount it on the robot, which might be hard with the limited space, as well as expensive. Are there any performance downsides to using the GPIO pins over the barrel jack connector? The DuPont wires I use to connect to the GPIO pins are 20 AWG, so I know there could be some resistance, although when I measure the voltage it's usually around 5.1V, which should be fine.

I did set enable_infra1 and enable_infra2 to true; I think they were already set to true by default in my launch file.

So other than possibly changing the hardware setup for power, do you recommend the ROS2 components method? I may also have to use the RealSense on Raspberry Pi 3B+ boards, which are less powerful, to capture ARTags on multiple robots, so I was wondering whether recording bags via C++ pointers instead of passing messages over the network can speed things up by reducing network congestion.

Thanks

MartyG-RealSense commented 1 month ago

RealSense cameras tend to have problems when used with Raspberry Pi boards. The best that can usually be achieved is streaming the basic depth and color streams, without any additional processing such as point clouds, alignment or post-processing. There may also be situations where the depth stream works but the color stream does not. So I would recommend continuing with the Jetson Nano if possible.

If you do not need the infra1 and infra2 streams then having them set to false should reduce the processing burden on the Jetson's CPU. They are not required to be enabled in order for the depth stream to work.

I do not have knowledge about GPIO power on Jetson or its potential drawbacks compared to using the power jack, unfortunately. In general, though, it is advisable to plan for providing up to 2A to meet a RealSense camera's power draw needs. The USB ports on the Jetson will be drawing that power from the board's power supply.

Recording bag files for longer than about 30 seconds will create bags whose file sizes consume multiple gigabytes of storage space, which may not be ideal for a robot. If you will only be recording for short durations, then it might be feasible with the storage device that you will use on the computing board.

I have heard positive things about zero-copy intra-process communication but have not had experience with it myself.

TakShimoda commented 1 month ago

For the Raspberry Pi, I'll probably only need one image stream, so I think I can just disable all the other streams (e.g. use only color and disable infrared, IMU, etc.).
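For example, something like this (parameter names taken from the rs_launch.py options used earlier in this thread, so treat it as a sketch):

ros2 launch realsense2_camera rs_launch.py enable_color:=true enable_depth:=false enable_infra1:=false enable_infra2:=false enable_gyro:=false enable_accel:=false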

> If you do not need the infra1 and infra2 streams then having them set to false should reduce the processing burden on the Jetson's CPU. They are not required to be enabled in order for the depth stream to work.

  • I am using infrared 1 and 2, along with RGB-D. I just have a question about them not being required for the depth stream: I thought depth images were rectified from the infrared cameras? Does this mean that with enable_infra1:=false, the wrapper won't publish the topic in ROS but will still use the image for depth?

I am recording rosbags, typically 1 min to 1 min 30 sec long, and they are about 1.5GB including all images, but I make sure to have adequate storage, so that's not really an issue.

I will try the zero-copy intra-process method as it has worked well for me with ROS in the past.
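The pattern I have in mind is roughly the minimal sketch below (the node names, topic, and message handling are placeholders I made up, not the actual realsense2_camera or rosbag2 interfaces):

#include <memory>
#include <utility>
#include <rclcpp/rclcpp.hpp>
#include <sensor_msgs/msg/image.hpp>

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  // Both nodes live in one process with intra-process comms enabled,
  // so messages published as unique_ptr are moved, not serialized.
  auto options = rclcpp::NodeOptions().use_intra_process_comms(true);

  auto source = std::make_shared<rclcpp::Node>("image_source", options);
  auto sink = std::make_shared<rclcpp::Node>("image_sink", options);

  auto pub = source->create_publisher<sensor_msgs::msg::Image>("image", 10);
  auto sub = sink->create_subscription<sensor_msgs::msg::Image>(
    "image", 10,
    [](sensor_msgs::msg::Image::UniquePtr msg) {
      // Ownership arrives here without a copy; this is where I would
      // hand the message off to a bag writer.
      (void)msg;
    });

  // Publishing a unique_ptr transfers ownership with zero copies.
  pub->publish(std::make_unique<sensor_msgs::msg::Image>());

  rclcpp::executors::SingleThreadedExecutor exec;
  exec.add_node(source);
  exec.add_node(sink);
  exec.spin();
  rclcpp::shutdown();
  return 0;
}

If that works, the idea would be to load the camera node itself into the same container as a component in place of the placeholder publisher.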

MartyG-RealSense commented 1 month ago

The depth frames are generated from raw left and right infrared frames in the camera hardware before the data is sent along the USB cable to the computer. These raw infrared frames are not the same as the infrared1 and infrared2 streams, which is why depth frames can be produced when the publishing in ROS of the infrared1 and infrared2 topics is disabled.

TakShimoda commented 1 month ago

Thanks Marty, I think all the questions here are answered. I'll open another issue if I encounter anything with implementing components.

MartyG-RealSense commented 1 month ago

You are very welcome. Thanks very much for the update and good luck!