IntelRealSense / realsense-ros

ROS Wrapper for Intel(R) RealSense(TM) Cameras
http://wiki.ros.org/RealSense
Apache License 2.0

Realsense node stops publishing image data after calling up other nodes #2502

Closed chivas1000 closed 1 year ago

chivas1000 commented 1 year ago

Hi, I'm using ROS2 and a RealSense D435i to implement a navigation application. The setup is below.

In short, I followed this example, which brings up the realsense node, nvblox, and VSLAM, and it runs well when all nodes are on the same machine: https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nvblox/blob/main/docs/tutorial-nvblox-vslam-realsense.md

I then split the nodes up so that only the realsense node runs on the Jetson board on my robot, while nvblox and VSLAM run on the edge server.

The robot and the edge server are connected with Ethernet cables; I verified that they can ping each other and that the ROS2 talker/listener examples work between them.

Problem: when I start the realsense node on the Jetson side, I can see the topics and image data on the edge server. But once I bring up the nvblox and VSLAM nodes, the edge side no longer sees the images (although the topic names are still there), while the Jetson side can still see the image data.

Have I made any mistakes in the launch files or the network setup?

The launch files are as follows:


nvr_debug_jetson.launch.py

# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.

import os

from ament_index_python.packages import get_package_share_directory

from launch import LaunchDescription
from launch.actions import DeclareLaunchArgument
from launch.substitutions import LaunchConfiguration
from launch_ros.actions import Node
from launch_ros.descriptions import ComposableNode
from launch_ros.actions import ComposableNodeContainer

def generate_launch_description():

    # Realsense
    realsense_config_file_path = os.path.join(
        get_package_share_directory('nvblox_examples_bringup'),
        'config', 'realsense.yaml'
    )

    realsense_node = ComposableNode(
        namespace="camera",
        package='realsense2_camera',
        plugin='realsense2_camera::RealSenseNodeFactory',
        parameters=[realsense_config_file_path],
    )

    realsense_splitter_node = ComposableNode(
        namespace="camera",
        name='realsense_splitter_node',
        package='realsense_splitter',
        plugin='nvblox::RealsenseSplitterNode',
        parameters=[{
            'input_qos': 'SENSOR_DATA',
            'output_qos': 'SENSOR_DATA'
        }],
        remappings=[('input/infra_1', '/camera/infra1/image_rect_raw'),
                    ('input/infra_1_metadata', '/camera/infra1/metadata'),
                    ('input/infra_2', '/camera/infra2/image_rect_raw'),
                    ('input/infra_2_metadata', '/camera/infra2/metadata'),
                    ('input/depth', '/camera/depth/image_rect_raw'),
                    ('input/depth_metadata', '/camera/depth/metadata'),
                    ('input/pointcloud', '/camera/depth/color/points'),
                    ('input/pointcloud_metadata', '/camera/depth/metadata'),
        ]
    )

    realsense_container = ComposableNodeContainer(
        name='realsense_container',
        namespace='',
        package='rclcpp_components',
        executable='component_container',
        composable_node_descriptions=[
            realsense_node,
            realsense_splitter_node
        ],
        output='screen'
    )

    base_link_tf_node = Node(
        package='tf2_ros',
        executable='static_transform_publisher',
        arguments=[
            '0.16', '0', '0.11', '0', '0', '0', '1',
            'base_link', 'camera_link']
    )

    return LaunchDescription([
        realsense_container,
        base_link_tf_node
    ])

nvr_debug_edge.launch.py

# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.

import os

from ament_index_python.packages import get_package_share_directory

from launch import LaunchDescription
from launch.actions import DeclareLaunchArgument
from launch.substitutions import LaunchConfiguration
from launch_ros.actions import Node
from launch_ros.descriptions import ComposableNode
from launch_ros.actions import ComposableNodeContainer

def generate_launch_description():

    # VSLAM
    visual_slam_node = Node(
        name='visual_slam_node',
        package='isaac_ros_visual_slam',
        executable='isaac_ros_visual_slam',
        parameters=[{
            'enable_rectified_pose': True,
            'denoise_input_images': False,
            'rectified_images': True,
            'enable_debug_mode': False,
            'debug_dump_path': '/tmp/vslam',
            'enable_slam_visualization': True,
            'enable_landmarks_view': True,
            'enable_observations_view': True,
            'map_frame': 'map',
            'odom_frame': 'odom',
            'base_frame': 'base_link',
            'input_left_camera_frame': 'camera_infra1_frame',
            'input_right_camera_frame': 'camera_infra2_frame',
            'enable_localization_n_mapping': True,
            'publish_odom_to_base_tf': True,
            'publish_map_to_odom_tf': True,
        }],
        remappings=[('stereo_camera/left/image', '/camera/infra1/image_rect_raw'),
                    ('stereo_camera/left/camera_info', '/camera/infra1/camera_info'),
                    ('stereo_camera/right/image', '/camera/infra2/image_rect_raw'),
                    ('stereo_camera/right/camera_info', '/camera/infra2/camera_info')]
    )

    # Nvblox
    nvblox_config = DeclareLaunchArgument(
        'nvblox_config', default_value=os.path.join(
            get_package_share_directory(
                'nvblox_examples_bringup'), 'config', 'nvblox.yaml'
        )
    )

    nvblox_node = Node(
        package='nvblox_ros',
        executable='nvblox_node',
        parameters=[LaunchConfiguration('nvblox_config')],
        output='screen',
        remappings=[
            ("depth/camera_info", "/camera/depth/camera_info"),
            ("depth/image", "/camera/realsense_splitter_node/output/depth"),
            ("color/camera_info", "/camera/color/camera_info"),
            ("color/image", "/camera/color/image_raw")
        ]
    )

    # RVIZ
    rviz_config_path = os.path.join(get_package_share_directory(
        'nvblox_examples_bringup'), 'config', 'nvblox_vslam_realsense.rviz')

    print(rviz_config_path)

    rviz = Node(
        package='rviz2',
        executable='rviz2',
        arguments=['-d', rviz_config_path],
        output='screen')

    return LaunchDescription([
        nvblox_config,
        visual_slam_node,
        nvblox_node,
        rviz
    ])
MartyG-RealSense commented 1 year ago

Hi @chivas1000 This setup is outside of my experience but I will do my best to be of assistance.

It looks as though you are starting up the RealSense node and importing configuration values for RealSense options into your launch via the .yaml file realsense.yaml. Is that correct, please?

I also note that everything works well when all nodes are on one machine, but you have problems with the nvblox and VSLAM nodes when dividing the nodes across two machines (the Jetson for the RealSense node and the edge server for nvblox and VSLAM).

In a two-machine networked setup, I would recommend first checking Intel's RealSense ROS guide for connecting cameras across two computers via ROS_MASTER_URI to see whether there is any information that is relevant to your own two-machine setup.

https://github.com/IntelRealSense/realsense-ros/wiki/showcase-of-using-3-cameras-in-2-machines

chivas1000 commented 1 year ago

Hi @MartyG-RealSense, yes, the yaml file is the configuration of the realsense camera. Thanks for your quick reply, but I'm using ROS2, where there is no ROS_MASTER_URI. I assume something might still be wrong in my settings.

MartyG-RealSense commented 1 year ago

If it works when all nodes are on the same machine then it could be assumed that nvblox and VSLAM do not have problems with reading the data in the RealSense topics, except when nvblox and VSLAM are on the second machine (the edge server).

Apparently ROS2 has ROS_DOMAIN_ID instead of ROS_MASTER_URI for connecting across two machines.

https://www.theconstructsim.com/separating-ros2-environments-ros_domain_id-ros2-concepts-in-practice/
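
For reference, a minimal sketch of that multi-machine check (assuming the standard demo_nodes_cpp package is installed on both machines; 42 is just an example ID):

# On BOTH the Jetson and the edge server, use the same domain ID
export ROS_DOMAIN_ID=42

# On the Jetson: publish a test topic
ros2 run demo_nodes_cpp talker

# On the edge server: confirm the messages arrive over the network
ros2 run demo_nodes_cpp listener

If the talker/listener pair works but the image topics still stall, basic discovery is probably fine and the issue is more likely on the QoS or bandwidth side.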

chivas1000 commented 1 year ago

Yes, ROS_DOMAIN_ID was set, but that didn't help.

Here is the output of the realsense node. It appears that some errors occur, but I don't know whether they are related to this problem.

admin@ubuntu:/workspaces/isaac_ros-dev$ source install/setup.bash
admin@ubuntu:/workspaces/isaac_ros-dev$ ros2 launch nvblox_examples_bringup nvr_debug.launch.py
[INFO] [launch]: All log files can be found below /home/admin/.ros/log/2022-10-13-10-58-40-532074-ubuntu-30285
[INFO] [launch]: Default logging verbosity is set to INFO
[INFO] [component_container-1]: process started with pid [30298]
[INFO] [static_transform_publisher-2]: process started with pid [30300]
[static_transform_publisher-2] [WARN] [1665658721.809580602] []: Old-style arguments are deprecated; see --help for new-style arguments
[static_transform_publisher-2] [INFO] [1665658721.908011870] [static_transform_publisher_okFmVYzYxSyDHaoM]: Spinning until stopped - publishing transform
[static_transform_publisher-2] translation: ('0.160000', '0.000000', '0.110000')
[static_transform_publisher-2] rotation: ('0.000000', '0.000000', '0.000000', '1.000000')
[static_transform_publisher-2] from 'base_link' to 'camera_link'
[component_container-1] [INFO] [1665658722.208643569] [realsense_container]: Load Library: /workspaces/isaac_ros-dev/install/realsense2_camera/lib/librealsense2_camera.so
[component_container-1] [INFO] [1665658722.362127880] [realsense_container]: Found class: rclcpp_components::NodeFactoryTemplate
[component_container-1] [INFO] [1665658722.362456857] [realsense_container]: Instantiate class: rclcpp_components::NodeFactoryTemplate
[component_container-1] [INFO] [1665658722.533116839] [camera.camera]: RealSense ROS v3.2.3
[component_container-1] [INFO] [1665658722.533400565] [camera.camera]: Built with LibRealSense v2.51.1
[component_container-1] [INFO] [1665658722.533531771] [camera.camera]: Running with LibRealSense v2.51.1
[INFO] [launch_ros.actions.load_composable_nodes]: Loaded node '/camera/camera' in container '/realsense_container'
[component_container-1] [INFO] [1665658722.551024226] [realsense_container]: Load Library: /workspaces/isaac_ros-dev/install/realsense_splitter/lib/librealsense_splitter_component.so
[component_container-1] [INFO] [1665658722.567078113] [realsense_container]: Found class: rclcpp_components::NodeFactoryTemplate
[component_container-1] [INFO] [1665658722.567578362] [realsense_container]: Instantiate class: rclcpp_components::NodeFactoryTemplate
[component_container-1] [INFO] [1665658722.606632370] [camera.realsense_splitter_node]: Creating a RealsenseSplitterNode().
[INFO] [launch_ros.actions.load_composable_nodes]: Loaded node '/camera/realsense_splitter_node' in container '/realsense_container'
[component_container-1] [INFO] [1665658723.096249348] [camera.camera]: Device with serial number 135122070687 was found.
[component_container-1]
[component_container-1] [INFO] [1665658723.096623606] [camera.camera]: Device with physical ID 2-4.1.3-5 was found.
[component_container-1] [INFO] [1665658723.096826272] [camera.camera]: Device with name Intel RealSense D435I was found.
[component_container-1] [INFO] [1665658723.098514932] [camera.camera]: Device with port number 2-4.1.3 was found.
[component_container-1] [INFO] [1665658723.098679581] [camera.camera]: Device USB type: 3.2
[component_container-1] [INFO] [1665658723.205834930] [camera.camera]: JSON file is not provided
[component_container-1] [INFO] [1665658723.205979449] [camera.camera]: Device Name: Intel RealSense D435I
[component_container-1] [INFO] [1665658723.206071998] [camera.camera]: Device physical port: 2-4.1.3-5
[component_container-1] [INFO] [1665658723.206209605] [camera.camera]: Device FW version: 05.13.00.50
[component_container-1] [INFO] [1665658723.206289961] [camera.camera]: Device Product ID: 0x0B3A
[component_container-1] [INFO] [1665658723.206357612] [camera.camera]: Enable PointCloud: Off
[component_container-1] [INFO] [1665658723.206418223] [camera.camera]: Align Depth: Off
[component_container-1] [INFO] [1665658723.206476114] [camera.camera]: Sync Mode: Off
[component_container-1] [INFO] [1665658723.206601240] [camera.camera]: Device Sensors:
[component_container-1] [INFO] [1665658723.375074489] [camera.camera]: Stereo Module was found.
[component_container-1] [INFO] [1665658723.508486890] [camera.camera]: RGB Camera was found.
[component_container-1] [INFO] [1665658723.510156733] [camera.camera]: Motion Module was found.
[component_container-1] [INFO] [1665658723.510575378] [camera.camera]: (Infrared, 0) sensor isn't supported by current device! -- Skipping...
[component_container-1] [INFO] [1665658723.510700760] [camera.camera]: (Fisheye, 0) sensor isn't supported by current device! -- Skipping...
[component_container-1] [INFO] [1665658723.510776220] [camera.camera]: (Fisheye, 1) sensor isn't supported by current device! -- Skipping...
[component_container-1] [INFO] [1665658723.510844447] [camera.camera]: (Fisheye, 2) sensor isn't supported by current device! -- Skipping...
[component_container-1] [INFO] [1665658723.510929123] [camera.camera]: (Pose, 0) sensor isn't supported by current device! -- Skipping...
[component_container-1] [INFO] [1665658723.511013384] [camera.camera]: (Confidence, 0) sensor isn't supported by current device! -- Skipping...
[component_container-1] [INFO] [1665658723.511090251] [camera.camera]: num_filters: 0
[component_container-1] [INFO] [1665658723.511151374] [camera.camera]: Setting Dynamic reconfig parameters.
[component_container-1] [INFO] [1665658729.268175134] [camera.camera]: Done Setting Dynamic reconfig parameters.
[component_container-1] [INFO] [1665658729.278360921] [camera.camera]: depth stream is enabled - width: 640, height: 480, fps: 30, Format: Z16
[component_container-1] [INFO] [1665658729.280400767] [camera.camera]: infra1 stream is enabled - width: 640, height: 480, fps: 30, Format: Y8
[component_container-1] [INFO] [1665658729.288298792] [camera.camera]: infra2 stream is enabled - width: 640, height: 480, fps: 30, Format: Y8
[component_container-1] [INFO] [1665658729.293281216] [camera.camera]: color stream is enabled - width: 640, height: 480, fps: 30, Format: RGB8
[component_container-1] [WARN] [1665658730.271933798] [camera.camera]: Hardware Notification:Depth stream start failure,1.66566e+12,Error,Hardware Error
[component_container-1] [INFO] [1665658730.398371067] [camera.camera]: SELECTED BASE:Depth, 0
[component_container-1] [INFO] [1665658730.421108359] [camera.camera]: Device Serial No: 135122070687
[component_container-1] [INFO] [1665658730.421718149] [camera.camera]: RealSense Node Is Up!
[component_container-1] [WARN] [1665658730.527182918] [camera.camera]:
[component_container-1] 13/10 10:58:50,529 WARNING [281473113725232] (messenger-libusb.cpp:42) control_transfer returned error, index: 768, error: Resource temporarily unavailable, number: 11
[component_container-1] 13/10 10:58:50,580 WARNING [281473113725232] (messenger-libusb.cpp:42) control_transfer returned error, index: 768, error: Resource temporarily unavailable, number: 11
[component_container-1] 13/10 10:58:50,742 WARNING [281473113725232] (messenger-libusb.cpp:42) control_transfer returned error, index: 768, error: Resource temporarily unavailable, number: 11
[component_container-1] 13/10 10:58:50,793 WARNING [281473113725232] (messenger-libusb.cpp:42) control_transfer returned error, index: 768, error: Resource temporarily unavailable, number: 11
[component_container-1] 13/10 10:58:50,930 WARNING [281473113725232] (messenger-libusb.cpp:42) control_transfer returned error, index: 768, error: Resource temporarily unavailable, number: 11
[component_container-1] 13/10 10:58:50,985 WARNING [281473113725232] (messenger-libusb.cpp:42) control_transfer returned error, index: 768, error: Resource temporarily unavailable, number: 11
[component_container-1] 13/10 10:58:51,014 ERROR [281472225241392] (uvc-streamer.cpp:106) uvc streamer watchdog triggered on endpoint: 130
[component_container-1] 13/10 10:58:51,243 WARNING [281473113725232] (messenger-libusb.cpp:42) control_transfer returned error, index: 768, error: Resource temporarily unavailable, number: 11
[component_container-1] 13/10 10:59:17,944 WARNING [281473113725232] (messenger-libusb.cpp:42) control_transfer returned error, index: 768, error: Resource temporarily unavailable, number: 11
[component_container-1] 13/10 10:59:23,003 WARNING [281473113725232] (messenger-libusb.cpp:42) control_transfer returned error, index: 768, error: Resource temporarily unavailable, number: 11
[component_container-1] 13/10 10:59:30,066 WARNING [281473113725232] (messenger-libusb.cpp:42) control_transfer returned error, index: 768, error: Resource temporarily unavailable, number: 11
[component_container-1] 13/10 10:59:41,357 ERROR [281472200063280] (uvc-streamer.cpp:106) uvc streamer watchdog triggered on endpoint: 131
[component_container-1] 13/10 10:59:47,687 ERROR [281472200063280] (uvc-streamer.cpp:106) uvc streamer watchdog triggered on endpoint: 131
[component_container-1] 13/10 10:59:56,148 WARNING [281473113725232] (messenger-libusb.cpp:42) control_transfer returned error, index: 768, error: Resource temporarily unavailable, number: 11
[component_container-1] 13/10 10:59:57,015 ERROR [281472200063280] (uvc-streamer.cpp:106) uvc streamer watchdog triggered on endpoint: 131
[component_container-1] 13/10 11:00:02,345 ERROR [281472200063280] (uvc-streamer.cpp:106) uvc streamer watchdog triggered on endpoint: 131
[component_container-1] 13/10 11:00:17,683 ERROR [281472795650352] (uvc-streamer.cpp:106) uvc streamer watchdog triggered on endpoint: 132
[component_container-1] 13/10 11:00:18,003 ERROR [281472200063280] (uvc-streamer.cpp:106) uvc streamer watchdog triggered on endpoint: 131
[component_container-1] 13/10 11:00:24,238 WARNING [281473113725232] (messenger-libusb.cpp:42) control_transfer returned error, index: 768, error: Resource temporarily unavailable, number: 11
[component_container-1] 13/10 11:00:26,291 WARNING [281473113725232] (messenger-libusb.cpp:42) control_transfer returned error, index: 768, error: Resource temporarily unavailable, number: 11
[component_container-1] 13/10 11:00:29,677 ERROR [281472795650352] (uvc-streamer.cpp:106) uvc streamer watchdog triggered on endpoint: 132
[component_container-1] [WARN] [1665658831.352656482] [camera.camera]: Hardware Notification:Right MIPI error,1.66566e+12,Error,Hardware Error
[component_container-1] 13/10 11:00:53,983 ERROR [281472200063280] (uvc-streamer.cpp:106) uvc streamer watchdog triggered on endpoint: 131
[component_container-1] 13/10 11:01:01,324 ERROR [281472795650352] (uvc-streamer.cpp:106) uvc streamer watchdog triggered on endpoint: 132
[component_container-1] [WARN] [1665658862.374305032] [camera.camera]: Hardware Notification:Right MIPI error,1.66566e+12,Error,Hardware Error
[component_container-1] 13/10 11:01:20,647 ERROR [281472795650352] (uvc-streamer.cpp:106) uvc streamer watchdog triggered on endpoint: 132
[component_container-1] 13/10 11:01:22,313 ERROR [281472795650352] (uvc-streamer.cpp:106) uvc streamer watchdog triggered on endpoint: 132
[component_container-1] 13/10 11:01:24,299 ERROR [281472200063280] (uvc-streamer.cpp:106) uvc streamer watchdog triggered on endpoint: 131
[component_container-1] 13/10 11:01:30,642 ERROR [281472795650352] (uvc-streamer.cpp:106) uvc streamer watchdog triggered on endpoint: 132
[component_container-1] 13/10 11:01:35,639 ERROR [281472795650352] (uvc-streamer.cpp:106) uvc streamer watchdog triggered on endpoint: 132
[component_container-1] 13/10 11:01:40,291 ERROR [281472200063280] (uvc-streamer.cpp:106) uvc streamer watchdog triggered on endpoint: 131
[component_container-1] 13/10 11:01:40,637 ERROR [281472795650352] (uvc-streamer.cpp:106) uvc streamer watchdog triggered on endpoint: 132
[component_container-1] [WARN] [1665658901.400340923] [camera.camera]: Hardware Notification:Right MIPI error,1.66566e+12,Error,Hardware Error
[component_container-1] 13/10 11:01:43,956 ERROR [281472200063280] (uvc-streamer.cpp:106) uvc streamer watchdog triggered on endpoint: 131

MartyG-RealSense commented 1 year ago

When control_transfer returned error, index: 768, error: Resource temporarily unavailable, number: 11 is generated continuously, it can indicate that there is a serious communication problem with the camera.

As the control_transfer warnings are generated first, it is possible that whatever is causing them is also causing the subsequent 'uvc streamer' warnings.

Do you receive as many warnings if you set initial_reset to true in your yaml configuration file so that the camera is reset at launch?
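
As a sketch of what that test looks like (assuming the standalone rs_launch.py from the ros2-beta wrapper; in this particular setup the equivalent would be adding initial_reset: true to realsense.yaml):

# Launch the camera on its own with a hardware reset at startup
ros2 launch realsense2_camera rs_launch.py initial_reset:=true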

chivas1000 commented 1 year ago

Hi @MartyG-RealSense

When I start only the realsense node in the Jetson container and run ros2 topic echo on the depth topic from the server, it only shows the first frame and then stops, and the same goes for the other image topics. However, when I run ros2 topic echo on the Jetson itself, the depth data does show up.

Another note: I installed my system on an SD card in the Jetson, and the error logs above were produced by the realsense node. Could it be that the SD card I/O bottlenecked the transfer buffer, violating the QoS settings and dropping the connection?

While my Jetson receives messages at about 25 fps, the server only receives 3 messages and then stops (screenshots attached: Untitled, Untitled(1)).

The same goes for the infrared images: when I echo only infra1 on the server, it receives data at a bandwidth of about 9 MB/s (9.22 MB/s from 100 messages; message size mean: 0.31 MB, min: 0.31 MB, max: 0.31 MB). But when I also echo infra2, the stream drops and stops transmitting, so I think that once the bandwidth goes over about 10 MB/s the Jetson can't handle the transmission and drops the connection.
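
(As a quick way to reproduce these measurements from the edge server, assuming the topic names used in the launch files above, the ROS 2 CLI can report the received rate and bandwidth per topic:)

# Publishing rate as actually seen by the subscriber
ros2 topic hz /camera/depth/image_rect_raw

# Bandwidth received for each infrared stream
ros2 topic bw /camera/infra1/image_rect_raw
ros2 topic bw /camera/infra2/image_rect_raw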

I think the problem might be similar to this which you answered: https://support.intelrealsense.com/hc/en-us/community/posts/4939637103891--messenger-libusb-cpp-42-control-transfer-returned-error-index-768-error-Resource-temporarily-unavailable-number-11?sort_by=votes

So is this a hardware issue, and can it be fixed by switching the system from the SD card to an SSD and reducing the depth frame rate? Or should I just use an Intel NUC to run the realsense node and transmit the image topics, since it deals better with the realsense camera?

Here is my realsense yaml file:

device_type: ''
serial_no: ''
usb_port_id: ''

rgb_camera:
  profile: '640x480x15'
  color_qos: "SENSOR_DATA"

depth_module:
  profile: '640x480x15'
  emitter_enabled: 1
  emitter_on_off: true
  depth_qos: "SENSOR_DATA"
  depth_info_qos: "SYSTEM_DEFAULT"

infra_qos: "SENSOR_DATA"

enable_accel: false
enable_color: true
enable_depth: true
enable_gyro: false
enable_infra1: true
enable_infra2: true

pointcloud:
  enable: false

pointcloud_texture_index: 0
pointcloud_texture_stream: RS2_STREAM_ANY

enable_sync: false
align_depth: false
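
One way to sanity-check that these QoS settings actually match on both machines (a sketch, using the depth topic as an example) is to list each publisher and subscriber together with its QoS profile:

# Run on either machine; shows reliability/durability per endpoint
ros2 topic info --verbose /camera/depth/image_rect_raw

A SENSOR_DATA publisher uses best-effort reliability, so a subscriber that requests reliable QoS will not match it and will receive nothing at all.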

MartyG-RealSense commented 1 year ago

An SD card will have significantly lower data read / write speeds than an SSD drive, so there is certainly the possibility of a bottleneck occurring when data is being written to SD card. This is sometimes seen when recording streams to a storage device if the storage device has a slow access speed that cannot keep up with the volume of data being generated. The bottleneck can result in dropped frames in the recording.
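
A rough way to check whether the SD card is the limiting factor (a sketch; the file path and size are arbitrary) is to measure its sequential write speed directly:

# Write 512 MB to the card, bypassing the page cache, and report the throughput
dd if=/dev/zero of=~/sd_write_test.bin bs=1M count=512 oflag=direct
rm ~/sd_write_test.bin

If the reported rate is well above the roughly 10 MB/s of image traffic being published, storage is unlikely to be the bottleneck.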

chivas1000 commented 1 year ago

Hi @MartyG-RealSense, I have some updates.

First, switching to an SSD did not help (the application actually runs in RAM, so I assume it has little to do with storage).

Second, when I reverse the publisher and subscriber (edge server as the publisher and Jetson as the subscriber), it works... (screenshot attached: Untitled)

So I suppose this might be due to the relatively slow CPU of the AGX Xavier, which cannot cope with the depth and IR images once it also has to handle the transmission.

Hopefully someone familiar with ROS and the realsense camera can explain this in detail...

Thanks so much for your support @MartyG-RealSense. What I want to ask is: would a NUC 10 i7 be able to process these three data streams (depth and two IR), or is there anything else I can do?

I appreciate your help.

MartyG-RealSense commented 1 year ago

Your mention of the system working when publisher and subscriber are reversed reminds me of a case at https://github.com/IntelRealSense/librealsense/issues/8213#issuecomment-769048556 involving a NUC and a laptop where it worked fine if the NUC was the subscriber but not if the laptop was the subscriber. The problem was found to be with how the USB cabling was configured. Though the situation in which your computers work correctly is the opposite (NUC as publisher and Jetson as subscriber).

Nvidia Jetson AGX has a strong hardware specification. An Nvidia Jetson will perform best though with librealsense or the RealSense ROS wrapper if librealsense's CUDA GPU acceleration support has been enabled. This can be done if using the instructions at the link below to build librealsense from dedicated Jetson packages, or build it from source code with CMake with the flag -DBUILD_WITH_CUDA=true included in the build instruction.

https://github.com/IntelRealSense/librealsense/blob/master/doc/installation_jetson.md
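
A condensed sketch of the source-code route with CUDA enabled (the linked Jetson guide is the authoritative reference; it also covers the RSUSB-backend and kernel-patch options omitted here):

# Build librealsense from source on the Jetson with CUDA acceleration
git clone https://github.com/IntelRealSense/librealsense.git
cd librealsense
mkdir build && cd build
cmake .. -DBUILD_WITH_CUDA=true -DCMAKE_BUILD_TYPE=Release
make -j4
sudo make install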

chivas1000 commented 1 year ago

Dear @MartyG-RealSense, I have some updates from the last few days:

Since there is currently no installable DKMS package for arm64 for JetPack 5.0.2 and its kernel, I switched to x86 (a NUC 10 i7), installed Ubuntu 20.04.5 LTS (kernel 5.15.0-52-generic), installed ROS2 Foxy, and installed realsense-ros from the ROS2 branch: https://github.com/IntelRealSense/realsense-ros/tree/ros2-beta

Note that the only difference is at STEP 2 of the apt binary install (https://github.com/IntelRealSense/librealsense/blob/master/doc/distribution_linux.md#installing-the-packages): I don't run sudo apt-get install librealsense2-dkms, since no package is available for this kernel. Instead, I install the DKMS release you have posted at https://github.com/mengyui/librealsense2-dkms/releases/tag/initial-support-for-kernel-5.15. Apart from this, all steps are as normal (just follow the realsense ROS2 instructions).
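
For anyone following the same path, a sketch of the modified STEP 2 (the exact .deb filename depends on the release asset downloaded from that page):

# Install the community-built DKMS package for kernel 5.15 instead of librealsense2-dkms
sudo dpkg -i ./librealsense2-dkms_*.deb

# Then continue with the usual Intel apt packages
sudo apt-get install librealsense2-utils librealsense2-dev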

I don't know if it is truly the CPU in the Xavier, but I find the Xavier clearly laggier than the NUC in every way. A Xavier on the newest JetPack version also runs a new kernel that the SDK doesn't support yet, so for anyone who runs into trouble like this, I think the easiest way out is to switch to x86, since there is more support there.

Anyway, thanks for your help.

MartyG-RealSense commented 1 year ago

Thanks very much @chivas1000 for your detailed update. There are no plans to create arm64 DKMS packages for Ubuntu 20.04 and Intel's recommendation is to build for arm64 on 20.04 from source code instead.

chivas1000 commented 1 year ago

But I didn't find a way to build the DKMS, only the released .deb or binary. Would you please give me some advice on building the DKMS from source?

MartyG-RealSense commented 1 year ago

The terms 'DKMS package' and 'Debian package' mean the same thing.

A DKMS installation is not built from source code. Instead, a source code build typically refers to downloading a folder containing the source code and then building that source code folder into a working SDK using the CMake tool.

As you are using a Jetson and a NUC PC, there are two different sets of installation instructions for each for building from Debian packages or from source code.

NUC PC

Build from Debian packages: https://github.com/IntelRealSense/librealsense/blob/master/doc/distribution_linux.md

Build from source code: https://github.com/IntelRealSense/librealsense/blob/master/doc/installation.md

Jetson

Build from Debian packages: https://github.com/IntelRealSense/librealsense/blob/master/doc/installation_jetson.md#4-install-with-debian-packages

Build from source code: https://github.com/IntelRealSense/librealsense/blob/master/doc/installation_jetson.md#building-from-source-using-rsusb-backend

MartyG-RealSense commented 1 year ago

Hi @chivas1000 Do you require further assistance with this case, please? Thanks!

chivas1000 commented 1 year ago

No further help is needed. Thank you for your support.

MartyG-RealSense commented 1 year ago

You are very welcome. Thanks very much for the update!