Issue Description

Hi, I want to modify the depth_module image format, but couldn't find a relevant parameter for it. I used ros2 param list to inspect the available parameters, but couldn't find anything that changes the image format for depth_module. Any idea how I should modify the image format?

Best, Samuel

(Closed: HappySamuel closed this issue 2 months ago.)
Hi @HappySamuel RealSense depth only uses the Z16 format and does not support other formats.
You can convert RealSense data to OpenCV formats with a Python 'node script' though. The RealSense ROS wrapper's show_center_depth.py node script provides an example of doing so.
A node script is run by launching the ROS wrapper first and then running the Python script in a terminal after the ROS launch has completed.
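As a rough sketch of that kind of node script (assuming the wrapper's default /camera/depth/image_rect_raw topic name; show_center_depth.py in the wrapper's examples takes a similar cv_bridge approach):

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge


class DepthToOpenCV(Node):
    def __init__(self):
        super().__init__('depth_to_opencv')
        self.bridge = CvBridge()
        self.sub = self.create_subscription(
            Image, '/camera/depth/image_rect_raw', self.on_depth, 10)

    def on_depth(self, msg):
        # Z16 depth arrives as a 16UC1 ROS image; cv_bridge returns a
        # 16-bit single-channel (CV_16UC1) NumPy array of depth in mm.
        depth = self.bridge.imgmsg_to_cv2(msg, desired_encoding='16UC1')
        h, w = depth.shape
        self.get_logger().info(f'center depth: {depth[h // 2, w // 2]} mm')


def main():
    rclpy.init()
    rclpy.spin(DepthToOpenCV())


if __name__ == '__main__':
    main()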
Hi @MartyG-RealSense
Thanks for the fast response. You mean I can use this script to change the depth_module image format from the original Z16 to others (e.g. Y8, Y16, etc.)?
Best, Samuel
You cannot change the Z16 depth format to another RealSense format such as Y8 or Y16, but you can convert RealSense depth frames into image formats supported by OpenCV, such as CV_16UC1.
I am facing an issue like the one below, which states that the encoding 16UC1 is not supported.
[component_container_mt-3] [ERROR] [1724137462.210540163] [NitrosImage]: [convert_to_custom] Unsupported encoding from ROS [16UC1].
[component_container_mt-3] terminate called after throwing an instance of 'std::runtime_error'
[component_container_mt-3] what(): [convert_to_custom] Unsupported encoding from ROS
What kind of encoding should I change the RealSense depth image to?
Best, Samuel
Those errors have not been previously reported in relation to RealSense, and I cannot find non-RealSense references to them either, so I unfortunately do not have suggestions for resolving them. CV_16UC1 is a valid OpenCV format to convert Z16 data to.
From the ROS wrapper's base_realsense_node.cpp file:
I read the lines you pointed to in realsense2_camera/src/base_realsense_node.cpp and found that there are a lot of formats available:
// from rs2_format to OpenCV format
// https://docs.opencv.org/3.4/d1/d1b/group__core__hal__interface.html
// https://docs.opencv.org/2.4/modules/core/doc/basic_structures.html
// CV_<bit-depth>{U|S|F}C(<number_of_channels>)
// where U is unsigned integer type, S is signed integer type, and F is float type.
// For example, CV_8UC1 means a 8-bit single-channel array,
// CV_32FC2 means a 2-channel (complex) floating-point array, and so on.
_rs_format_to_cv_format[RS2_FORMAT_Y8] = CV_8UC1;
_rs_format_to_cv_format[RS2_FORMAT_Y16] = CV_16UC1;
_rs_format_to_cv_format[RS2_FORMAT_Z16] = CV_16UC1;
_rs_format_to_cv_format[RS2_FORMAT_RGB8] = CV_8UC3;
_rs_format_to_cv_format[RS2_FORMAT_BGR8] = CV_8UC3;
_rs_format_to_cv_format[RS2_FORMAT_RGBA8] = CV_8UC4;
_rs_format_to_cv_format[RS2_FORMAT_BGRA8] = CV_8UC4;
_rs_format_to_cv_format[RS2_FORMAT_YUYV] = CV_8UC2;
_rs_format_to_cv_format[RS2_FORMAT_UYVY] = CV_8UC2;
// _rs_format_to_cv_format[RS2_FORMAT_M420] = not supported yet in ROS2
_rs_format_to_cv_format[RS2_FORMAT_RAW8] = CV_8UC1;
_rs_format_to_cv_format[RS2_FORMAT_RAW10] = CV_16UC1;
_rs_format_to_cv_format[RS2_FORMAT_RAW16] = CV_16UC1;
// from rs2_format to ROS2 image msg encoding (format)
// http://docs.ros.org/en/noetic/api/sensor_msgs/html/msg/Image.html
// http://docs.ros.org/en/jade/api/sensor_msgs/html/image__encodings_8h_source.html
_rs_format_to_ros_format[RS2_FORMAT_Y8] = sensor_msgs::image_encodings::MONO8;
_rs_format_to_ros_format[RS2_FORMAT_Y16] = sensor_msgs::image_encodings::MONO16;
_rs_format_to_ros_format[RS2_FORMAT_Z16] = sensor_msgs::image_encodings::TYPE_16UC1;
_rs_format_to_ros_format[RS2_FORMAT_RGB8] = sensor_msgs::image_encodings::RGB8;
_rs_format_to_ros_format[RS2_FORMAT_BGR8] = sensor_msgs::image_encodings::BGR8;
_rs_format_to_ros_format[RS2_FORMAT_RGBA8] = sensor_msgs::image_encodings::RGBA8;
_rs_format_to_ros_format[RS2_FORMAT_BGRA8] = sensor_msgs::image_encodings::BGRA8;
_rs_format_to_ros_format[RS2_FORMAT_YUYV] = sensor_msgs::image_encodings::YUV422_YUY2;
_rs_format_to_ros_format[RS2_FORMAT_UYVY] = sensor_msgs::image_encodings::YUV422;
// _rs_format_to_ros_format[RS2_FORMAT_M420] = not supported yet in ROS2
_rs_format_to_ros_format[RS2_FORMAT_RAW8] = sensor_msgs::image_encodings::TYPE_8UC1;
_rs_format_to_ros_format[RS2_FORMAT_RAW10] = sensor_msgs::image_encodings::TYPE_16UC1;
_rs_format_to_ros_format[RS2_FORMAT_RAW16] = sensor_msgs::image_encodings::TYPE_16UC1;
Must the depth image format be Z16, or can it be changed to another format? The error shown in my previous reply comes from trying to send /camera/depth/image_rect_raw to isaac_ros_depth_image_proc's PointCloudXyzNode, which complains that it cannot accept this encoding from ROS [16UC1]. Is there any solution for this?
RealSense is so built around depth being in Z16 format that I cannot see how a different format could work; Z16 has been used for depth in RealSense cameras in librealsense as far back as 2016.
One ROS2 user at https://github.com/IntelRealSense/realsense-ros/issues/2810#issuecomment-1706559042 took the approach of generating a pointcloud with depth_image_proc's PointCloudXyzNode by adding instructions to the launch file. This is similar to how the rs_rgbd.launch file in the ROS1 wrapper added code to publish an XYZRGB pointcloud via depth_image_proc.
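As an illustrative sketch of that launch-file approach (the remappings assume the wrapper's default /camera/depth topic names, and depth_image_proc's PointCloudXyzNode subscribes to image_rect and camera_info; adjust for your setup):

from launch import LaunchDescription
from launch_ros.actions import ComposableNodeContainer
from launch_ros.descriptions import ComposableNode


def generate_launch_description():
    # Run depth_image_proc's XYZ pointcloud component in its own container,
    # fed by the already-running RealSense depth topics.
    container = ComposableNodeContainer(
        name='pointcloud_container',
        namespace='',
        package='rclcpp_components',
        executable='component_container',
        composable_node_descriptions=[
            ComposableNode(
                package='depth_image_proc',
                plugin='depth_image_proc::PointCloudXyzNode',
                name='point_cloud_xyz',
                remappings=[
                    ('image_rect', '/camera/depth/image_rect_raw'),
                    ('camera_info', '/camera/depth/camera_info'),
                    ('points', '/camera/depth/points'),
                ],
            ),
        ],
    )
    return LaunchDescription([container])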
Hi @MartyG-RealSense
I have tried your suggestion and used PointCloudXyzNode from depth_image_proc; however, it greatly slows down the pointcloud generation. Perhaps that is because too many points are generated from the depth image? Do you have any parameter that can skip some of the pixels, so that the process can be faster?
For example, isaac_ros_depth_image_proc has a node with the same name, PointCloudXyzNode, which has a skip parameter: "Skips skip number of depth pixels in order to limit the number of pixels converted to points."
Best, Samuel
If decimation_filter.enable is set to true then the Decimation post-processing filter 'downsamples' the depth resolution, reducing the complexity of the depth scene and with it the processing burden.
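For example (decimation_filter.enable can be set at launch time; the filter_magnitude option name below follows the wrapper's filter.option convention and is worth confirming with ros2 param list):

ros2 launch realsense2_camera rs_launch.py decimation_filter.enable:=true

ros2 param set /camera/camera decimation_filter.filter_magnitude 2

A magnitude of 2 halves the depth image's width and height, so roughly a quarter as many points are generated.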
If the librealsense SDK's CUDA support is enabled on a Jetson board then depth-color alignment and pointcloud operations can be accelerated automatically by offloading processing from the CPU onto the Jetson board's Nvidia GPU.
The SDK's CUDA support is enabled by default if librealsense is installed from the Jetson version of the Debian packages, or can be enabled manually when building librealsense from source code by including the flag -DBUILD_WITH_CUDA=ON in the CMake build command.
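As a rough outline of such a source build (directory names and job count are illustrative):

cd librealsense
mkdir build && cd build
cmake .. -DBUILD_WITH_CUDA=ON -DCMAKE_BUILD_TYPE=Release
make -j4 && sudo make install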
> If the librealsense SDK's CUDA support is enabled on a Jetson board then depth-color alignment and pointcloud operations can be accelerated automatically by offloading processing from the CPU onto the Jetson board's Nvidia GPU.

Any weblink for this? I can try it out.
The website JetsonHacks has build scripts for CUDA-enabled package and source code installation.
https://github.com/JetsonHacksNano/installLibrealsense?tab=readme-ov-file#buildlibrealsensesh
Thanks for sharing. I have followed the link and installed it. Is there any parameter I need to set to turn CUDA on for realsense-ros?
Besides, when using CUDA, will the RealSense camera image / pointcloud output reach the FPS in the profile setting? Previously I ran ros2 topic hz on the topics /camera/depth/image_rect_raw, /camera/infra1/image_rect_raw, /camera/infra2/image_rect_raw and /camera/depth/color/points, but they never reached the FPS in the profile setting. Do you know why?
No, you do not need to set a CUDA parameter in the realsense-ros wrapper. If CUDA support is enabled in the librealsense SDK then it is automatically applied in the wrapper too.
When CUDA is applied and processing is offloaded from the CPU onto the Jetson's GPU, a significant drop in CPU usage can be expected: for example, from 80% when CUDA support is disabled to 30% or less when enabled.
If depth and color are both enabled then there may sometimes be a drop in FPS. If auto_exposure is enabled and the RGB setting auto_exposure_priority is disabled, the camera is forced to try to maintain a constant FPS instead of permitting it to vary.
auto_exposure_priority can be disabled by editing the launch file to add the code below.
<rosparam>
/camera/camera/rgb_camera/auto_exposure_priority: false
</rosparam>
You could also try disabling it in the launch instruction.
ros2 launch realsense2_camera rs_launch.py rgb_camera.auto_exposure_priority:=false
Or during runtime after launch.
ros2 param set /camera/camera rgb_camera.auto_exposure_priority false
Hi @HappySamuel Do you require further assistance with this case, please? Thanks!
Hi @MartyG-RealSense
No more assistance needed. Thank you very much for the guidance. This guide especially helped a lot:

> The website JetsonHacks has build scripts for CUDA-enabled package and source code installation.
> https://github.com/JetsonHacksNano/installLibrealsense?tab=readme-ov-file#buildlibrealsensesh

Best, Samuel
You are very welcome, @HappySamuel - thanks very much for the update!