IntelRealSense / realsense-ros

ROS Wrapper for Intel(R) RealSense(TM) Cameras
http://wiki.ros.org/RealSense
Apache License 2.0
2.51k stars 1.74k forks

How to modify depth_module format #3188

Closed HappySamuel closed 1 week ago

HappySamuel commented 3 weeks ago

Required Info
Camera Model D435
Firmware Version 5.16.0.1
Operating System & Version Ubuntu 20.04
Kernel Version (Linux Only) 5.10.192-tegra
Platform NVIDIA Jetson AGX Orin
Librealsense SDK Version 2.55.1
Language python
Segment Robot
ROS Distro Humble
RealSense ROS Wrapper Version 4.55.1

Issue Description

I want to modify the depth_module format, but couldn't find a relevant parameter for it. I ran ros2 param list and found the following:

/front_cam/front_cam:
  .color.image_raw.format
  .color.image_raw.jpeg_quality
  .color.image_raw.png_level
  .color.image_raw.tiff.res_unit
  .color.image_raw.tiff.xdpi
  .color.image_raw.tiff.ydpi
  .depth.image_rect_raw.format
  .depth.image_rect_raw.jpeg_quality
  .depth.image_rect_raw.png_level
  .depth.image_rect_raw.tiff.res_unit
  .depth.image_rect_raw.tiff.xdpi
  .depth.image_rect_raw.tiff.ydpi
  .infra1.image_rect_raw.format
  .infra1.image_rect_raw.jpeg_quality
  .infra1.image_rect_raw.png_level
  .infra1.image_rect_raw.tiff.res_unit
  .infra1.image_rect_raw.tiff.xdpi
  .infra1.image_rect_raw.tiff.ydpi
  .infra2.image_rect_raw.format
  .infra2.image_rect_raw.jpeg_quality
  .infra2.image_rect_raw.png_level
  .infra2.image_rect_raw.tiff.res_unit
  .infra2.image_rect_raw.tiff.xdpi
  .infra2.image_rect_raw.tiff.ydpi
  align_depth.enable
  align_depth.frames_queue_size
  angular_velocity_cov
  base_frame_id
  camera_name
  clip_distance
  color_info_qos
  color_qos
  colorizer.color_scheme
  colorizer.enable
  colorizer.frames_queue_size
  colorizer.histogram_equalization_enabled
  colorizer.max_distance
  colorizer.min_distance
  colorizer.stream_filter
  colorizer.stream_format_filter
  colorizer.stream_index_filter
  colorizer.visual_preset
  decimation_filter.enable
  decimation_filter.filter_magnitude
  decimation_filter.frames_queue_size
  decimation_filter.stream_filter
  decimation_filter.stream_format_filter
  decimation_filter.stream_index_filter
  depth_info_qos
  depth_module.auto_exposure_roi.bottom
  depth_module.auto_exposure_roi.left
  depth_module.auto_exposure_roi.right
  depth_module.auto_exposure_roi.top
  depth_module.emitter_always_on
  depth_module.emitter_enabled
  depth_module.emitter_on_off
  depth_module.enable_auto_exposure
  depth_module.error_polling_enabled
  depth_module.exposure
  depth_module.frames_queue_size
  depth_module.gain
  depth_module.global_time_enabled
  depth_module.hdr_enabled
  depth_module.inter_cam_sync_mode
  depth_module.laser_power
  depth_module.output_trigger_enabled
  depth_module.profile
  depth_module.sequence_id
  depth_module.sequence_name
  depth_module.sequence_size
  depth_module.visual_preset
  depth_qos
  device_type
  diagnostics_period
  disparity_filter.enable
  disparity_to_depth.enable
  enable_color
  enable_depth
  enable_infra1
  enable_infra2
  enable_sync
  filter_by_sequence_id.enable
  filter_by_sequence_id.frames_queue_size
  filter_by_sequence_id.sequence_id
  hdr_merge.enable
  hdr_merge.frames_queue_size
  hold_back_imu_for_frames
  hole_filling_filter.enable
  hole_filling_filter.frames_queue_size
  hole_filling_filter.holes_fill
  hole_filling_filter.stream_filter
  hole_filling_filter.stream_format_filter
  hole_filling_filter.stream_index_filter
  infra1_info_qos
  infra1_qos
  infra2_info_qos
  infra2_qos
  initial_reset
  json_file_path
  linear_accel_cov
  pointcloud.allow_no_texture_points
  pointcloud.enable
  pointcloud.filter_magnitude
  pointcloud.frames_queue_size
  pointcloud.ordered_pc
  pointcloud.pointcloud_qos
  pointcloud.stream_filter
  pointcloud.stream_format_filter
  pointcloud.stream_index_filter
  publish_odom_tf
  publish_tf
  qos_overrides./parameter_events.publisher.depth
  qos_overrides./parameter_events.publisher.durability
  qos_overrides./parameter_events.publisher.history
  qos_overrides./parameter_events.publisher.reliability
  reconnect_timeout
  rgb_camera.auto_exposure_priority
  rgb_camera.auto_exposure_roi.bottom
  rgb_camera.auto_exposure_roi.left
  rgb_camera.auto_exposure_roi.right
  rgb_camera.auto_exposure_roi.top
  rgb_camera.backlight_compensation
  rgb_camera.brightness
  rgb_camera.contrast
  rgb_camera.enable_auto_exposure
  rgb_camera.enable_auto_white_balance
  rgb_camera.exposure
  rgb_camera.frames_queue_size
  rgb_camera.gain
  rgb_camera.gamma
  rgb_camera.global_time_enabled
  rgb_camera.hue
  rgb_camera.power_line_frequency
  rgb_camera.profile
  rgb_camera.saturation
  rgb_camera.sharpness
  rgb_camera.white_balance
  rosbag_filename
  serial_no
  spatial_filter.enable
  spatial_filter.filter_magnitude
  spatial_filter.filter_smooth_alpha
  spatial_filter.filter_smooth_delta
  spatial_filter.frames_queue_size
  spatial_filter.holes_fill
  spatial_filter.stream_filter
  spatial_filter.stream_format_filter
  spatial_filter.stream_index_filter
  temporal_filter.enable
  temporal_filter.filter_smooth_alpha
  temporal_filter.filter_smooth_delta
  temporal_filter.frames_queue_size
  temporal_filter.holes_fill
  temporal_filter.stream_filter
  temporal_filter.stream_format_filter
  temporal_filter.stream_index_filter
  tf_publish_rate
  unite_imu_method
  usb_port_id
  use_sim_time
  wait_for_device_timeout

However, I couldn't find anything that changes the image format for depth_module. Any idea how I can modify the image format?

Best, Samuel

MartyG-RealSense commented 3 weeks ago

Hi @HappySamuel RealSense depth only uses the Z16 format and does not support other formats.

You can convert RealSense data to OpenCV formats with a Python 'node script' though. The RealSense ROS wrapper's show_center_depth.py node script provides an example of doing so.

https://github.com/IntelRealSense/realsense-ros/blob/ros2-master/realsense2_camera/scripts/show_center_depth.py

A node script is run by launching the ROS wrapper first and then launching the Python script file in the terminal after ROS launch has completed.
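As a concrete sketch of that workflow (the script path assumes a source checkout of realsense-ros; adjust to your install):

```shell
# Terminal 1: start the wrapper first
ros2 launch realsense2_camera rs_launch.py

# Terminal 2: once the wrapper is up, run the node script directly
python3 realsense2_camera/scripts/show_center_depth.py
```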

HappySamuel commented 3 weeks ago

Hi @MartyG-RealSense

Thanks for the fast response. You mean I can use this script to change the depth_module image format from the original Z16 to others (e.g. Y8, Y16, etc.)?

Best, Samuel

MartyG-RealSense commented 3 weeks ago

You cannot change the Z16 depth format to another RealSense format such as Y8 or Y16, but you can convert RealSense depth frames into image formats supported by OpenCV, such as CV_16UC1.
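To make the Z16-to-CV_16UC1 relationship concrete: a depth frame is a single-channel array of 16-bit unsigned integers, where each value is a distance in depth units (1 mm by default on the D400 series). A minimal NumPy sketch of the conversion to metres follows; the 0.001 depth scale is an assumption here, and real code should query it from the device (e.g. via the SDK's get_depth_scale()):

```python
import numpy as np

# A fake 2x2 Z16 depth frame: 16-bit unsigned, single channel (CV_16UC1 layout).
depth_z16 = np.array([[1000, 1500],
                      [0,    2500]], dtype=np.uint16)

# Assumed depth unit of 1 mm = 0.001 m; query the real value from the device.
DEPTH_SCALE = 0.001

# Convert raw depth units to metres as 32-bit float; 0 means "no depth".
depth_m = depth_z16.astype(np.float32) * DEPTH_SCALE

print(depth_m[0, 0])  # 1.0 (one metre)
```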

HappySamuel commented 3 weeks ago

I am facing an issue like the one below, which states that it doesn't support the 16UC1 encoding.

[component_container_mt-3] [ERROR] [1724137462.210540163] [NitrosImage]: [convert_to_custom] Unsupported encoding from ROS [16UC1].
[component_container_mt-3] terminate called after throwing an instance of 'std::runtime_error'
[component_container_mt-3]   what():  [convert_to_custom] Unsupported encoding from ROS

What kind of encoding should I change the RealSense depth image to?

Best, Samuel

MartyG-RealSense commented 3 weeks ago

Those errors have not been previously reported in relation to RealSense and I cannot find non-RealSense references about them either, so I unfortunately do not have suggestions to provide regarding resolving them. CV_16UC1 is a valid OpenCV format to convert Z16 format data to.

From the ROS wrapper's base_realsense_node.cpp file:

https://github.com/IntelRealSense/realsense-ros/blob/ros2-master/realsense2_camera/src/base_realsense_node.cpp#L190

HappySamuel commented 3 weeks ago

I read the lines you showed in realsense2_camera/src/base_realsense_node.cpp and found that there are a lot of formats available.

 // from rs2_format to OpenCV format
    // https://docs.opencv.org/3.4/d1/d1b/group__core__hal__interface.html
    // https://docs.opencv.org/2.4/modules/core/doc/basic_structures.html
    // CV_<bit-depth>{U|S|F}C(<number_of_channels>)
    // where U is unsigned integer type, S is signed integer type, and F is float type.
    // For example, CV_8UC1 means a 8-bit single-channel array,
    // CV_32FC2 means a 2-channel (complex) floating-point array, and so on.
    _rs_format_to_cv_format[RS2_FORMAT_Y8] = CV_8UC1;
    _rs_format_to_cv_format[RS2_FORMAT_Y16] = CV_16UC1;
    _rs_format_to_cv_format[RS2_FORMAT_Z16] = CV_16UC1;
    _rs_format_to_cv_format[RS2_FORMAT_RGB8] = CV_8UC3;
    _rs_format_to_cv_format[RS2_FORMAT_BGR8] = CV_8UC3;
    _rs_format_to_cv_format[RS2_FORMAT_RGBA8] = CV_8UC4;
    _rs_format_to_cv_format[RS2_FORMAT_BGRA8] = CV_8UC4;
    _rs_format_to_cv_format[RS2_FORMAT_YUYV] = CV_8UC2;
    _rs_format_to_cv_format[RS2_FORMAT_UYVY] = CV_8UC2;
    // _rs_format_to_cv_format[RS2_FORMAT_M420] = not supported yet in ROS2
    _rs_format_to_cv_format[RS2_FORMAT_RAW8] = CV_8UC1;
    _rs_format_to_cv_format[RS2_FORMAT_RAW10] = CV_16UC1;
    _rs_format_to_cv_format[RS2_FORMAT_RAW16] = CV_16UC1;

    // from rs2_format to ROS2 image msg encoding (format)
    // http://docs.ros.org/en/noetic/api/sensor_msgs/html/msg/Image.html
    // http://docs.ros.org/en/jade/api/sensor_msgs/html/image__encodings_8h_source.html
    _rs_format_to_ros_format[RS2_FORMAT_Y8] = sensor_msgs::image_encodings::MONO8;
    _rs_format_to_ros_format[RS2_FORMAT_Y16] = sensor_msgs::image_encodings::MONO16;
    _rs_format_to_ros_format[RS2_FORMAT_Z16] = sensor_msgs::image_encodings::TYPE_16UC1;
    _rs_format_to_ros_format[RS2_FORMAT_RGB8] = sensor_msgs::image_encodings::RGB8;
    _rs_format_to_ros_format[RS2_FORMAT_BGR8] = sensor_msgs::image_encodings::BGR8;
    _rs_format_to_ros_format[RS2_FORMAT_RGBA8] = sensor_msgs::image_encodings::RGBA8;
    _rs_format_to_ros_format[RS2_FORMAT_BGRA8] = sensor_msgs::image_encodings::BGRA8;
    _rs_format_to_ros_format[RS2_FORMAT_YUYV] = sensor_msgs::image_encodings::YUV422_YUY2;
    _rs_format_to_ros_format[RS2_FORMAT_UYVY] = sensor_msgs::image_encodings::YUV422;
    // _rs_format_to_ros_format[RS2_FORMAT_M420] =  not supported yet in ROS2
    _rs_format_to_ros_format[RS2_FORMAT_RAW8] = sensor_msgs::image_encodings::TYPE_8UC1;
    _rs_format_to_ros_format[RS2_FORMAT_RAW10] = sensor_msgs::image_encodings::TYPE_16UC1;
    _rs_format_to_ros_format[RS2_FORMAT_RAW16] = sensor_msgs::image_encodings::TYPE_16UC1;

Must the depth image format be Z16, or can it be changed to another format?

The error shown in my previous reply comes from sending /camera/depth/image_rect_raw to isaac_ros_depth_image_proc/PointCloudXyzNode, which complains that it cannot accept this kind of encoding from ROS [16UC1]. Is there any solution for this?

MartyG-RealSense commented 3 weeks ago

RealSense is so wired around depth being in Z16 format that I cannot think how a different format could work. Z16 has been used for depth in RealSense cameras in librealsense even as far back as 2016.

One ROS2 user at https://github.com/IntelRealSense/realsense-ros/issues/2810#issuecomment-1706559042 took the approach of producing a pointcloud with depth_image_proc's PointCloudXyzNode by adding instructions to the launch file. This is similar to how the rs_rgbd.launch file in the ROS1 wrapper added extra code to publish an xyzrgb pointcloud via depth_image_proc.

https://github.com/IntelRealSense/realsense-ros/blob/ros1-legacy/realsense2_camera/launch/rs_rgbd.launch#L182-L190
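For reference, the approach from #2810 can be sketched as a ROS 2 launch fragment that composes depth_image_proc's PointCloudXyzNode and remaps its inputs to the wrapper's depth topics. This is a hedged sketch, not the exact launch file from that issue; the topic names assume the wrapper's default /camera namespace, so adjust them to match yours:

```python
# Sketch: compose depth_image_proc::PointCloudXyzNode alongside the RealSense
# wrapper. Topic names below assume the default /camera namespace.
from launch import LaunchDescription
from launch_ros.actions import ComposableNodeContainer
from launch_ros.descriptions import ComposableNode


def generate_launch_description():
    container = ComposableNodeContainer(
        name='depth_proc_container',
        namespace='',
        package='rclcpp_components',
        executable='component_container',
        composable_node_descriptions=[
            ComposableNode(
                package='depth_image_proc',
                plugin='depth_image_proc::PointCloudXyzNode',
                name='point_cloud_xyz',
                remappings=[
                    # PointCloudXyzNode subscribes to image_rect + camera_info
                    # and publishes points.
                    ('image_rect', '/camera/depth/image_rect_raw'),
                    ('camera_info', '/camera/depth/camera_info'),
                    ('points', '/camera/depth/points'),
                ],
            ),
        ],
    )
    return LaunchDescription([container])
```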

HappySamuel commented 2 weeks ago

Hi @MartyG-RealSense

I have tried your suggestion and used the PointCloudXyzNode from depth_image_proc; however, it greatly slows down the pointcloud generation. Perhaps that's because too many points are generated from the depth image? Is there any parameter that can reduce the number of pixels, so that the process runs faster?

For example, isaac_ros_depth_image_proc has a node with the same name, PointCloudXyzNode, which has a parameter skip: "Skips skip number of depth pixels in order to limit the number of pixels converted to points."

Best, Samuel

MartyG-RealSense commented 2 weeks ago

If decimation_filter.enable is set to true then the Decimation post-processing filter can cut the processing burden by 'downsampling' the resolution, reducing the complexity of the depth scene.
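As a rough illustration of the effect (the real librealsense filter fills each output pixel from a median/mean of the non-zero values in the source block, not plain subsampling as below): with filter_magnitude 2, a 640x480 depth image shrinks to 320x240, quartering the pixels that downstream pointcloud nodes must process.

```python
import numpy as np

def decimate(depth: np.ndarray, magnitude: int) -> np.ndarray:
    """Naive stand-in for the decimation filter: keep every Nth pixel.
    (librealsense actually uses a non-zero median/mean over each block.)"""
    return depth[::magnitude, ::magnitude]

depth = np.zeros((480, 640), dtype=np.uint16)  # rows x cols = height x width
small = decimate(depth, 2)
print(small.shape)  # (240, 320)
```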

If the librealsense SDK's CUDA support is enabled on a Jetson board then depth-color alignment and pointcloud operations can be accelerated automatically by offloading processing from the CPU onto the Jetson board's Nvidia GPU.

The SDK's CUDA support is enabled by default if librealsense is installed from the Jetson version of the Debian packages, or can be enabled manually when building librealsense from source code by including the flag -DBUILD_WITH_CUDA=ON in the CMake build command.
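A minimal sketch of such a source build with CUDA enabled (options trimmed to the essentials; see the librealsense installation docs for the full set of flags for your platform):

```shell
git clone https://github.com/IntelRealSense/librealsense.git
cd librealsense && mkdir build && cd build
cmake .. -DBUILD_WITH_CUDA=ON -DCMAKE_BUILD_TYPE=Release
make -j$(nproc) && sudo make install
```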

HappySamuel commented 2 weeks ago

If the librealsense SDK's CUDA support is enabled on a Jetson board then depth-color alignment and pointcloud operations can be accelerated automatically by offloading processing from the CPU onto the Jetson board's Nvidia GPU.

Any weblink for this? I can try it out.

MartyG-RealSense commented 2 weeks ago

The website JetsonHacks has build scripts for CUDA-enabled package and source code installation.

https://github.com/JetsonHacksNano/installLibrealsense?tab=readme-ov-file#buildlibrealsensesh

HappySamuel commented 2 weeks ago

Thanks for sharing. I have followed the link and installed it. Is there any parameter I need to set to enable CUDA for realsense-ros?

Also, when using CUDA, will the RealSense camera image / pointcloud output reach the frame rate (fps) in the profile setting?

Previously I tried ros2 topic hz on the topics /camera/depth/image_rect_raw, /camera/infra1/image_rect_raw, /camera/infra2/image_rect_raw and /camera/depth/color/points, but they never reached the fps in the profile setting. Do you know why?

MartyG-RealSense commented 2 weeks ago

No, you do not need to set a CUDA parameter in the realsense-ros wrapper. If CUDA support is enabled in the librealsense SDK then it is automatically applied in the wrapper too.

When CUDA is applied and processing is offloaded from the CPU onto the Jetson's graphics GPU, a significant drop in CPU percentage usage can be expected. For example, from 80% when CUDA support is disabled to 30% or less when enabled.

If depth and color are both enabled then there may sometimes be a drop in FPS. If auto-exposure is enabled and the RGB setting auto_exposure_priority is disabled, the FPS is forced to try to maintain a constant rate instead of being permitted to vary.

auto_exposure_priority can be disabled by editing the launch file to add the code below.

<rosparam> 
/camera/camera/rgb_camera/auto_exposure_priority: false 
</rosparam>

You could also try disabling it in the launch instruction.

ros2 launch realsense2_camera rs_launch.py rgb_camera.auto_exposure_priority:=false

Or during runtime after launch.

ros2 param set /camera/camera rgb_camera.auto_exposure_priority false

MartyG-RealSense commented 1 week ago

Hi @HappySamuel Do you require further assistance with this case, please? Thanks!

HappySamuel commented 1 week ago

Hi @MartyG-RealSense

No more assistance needed. Thank you very much for the guidance. Especially this guide helps a lot.

The website JetsonHacks has build scripts for CUDA-enabled package and source code installation.

https://github.com/JetsonHacksNano/installLibrealsense?tab=readme-ov-file#buildlibrealsensesh

Best, Samuel

MartyG-RealSense commented 1 week ago

You are very welcome, @HappySamuel - thanks very much for the update!