stereolabs / zed-ros-wrapper

ROS wrapper for the ZED SDK
https://www.stereolabs.com/docs/ros/
MIT License

Bug about ZED2 camera program crash when starting on NX board #902

Closed weishuang12138 closed 10 months ago

weishuang12138 commented 1 year ago

Preliminary Checks

Description

I have zed2.yaml set up to enable object detection, but when I launch the zed2.launch file the program crashes at this detection step, ending the process. The details are as follows:

[ INFO] [1689648033.030183155]: Positional tracking -> OK [OK]
================================================================================
REQUIRED process [zed2/zed_node-2] has died!
process has died [pid 5161, exit code -11, cmd /home/nx/zed_ws/devel/lib/zed_wrapper/zed_wrapper_node __name:=zed_node __log:=/home/nx/.ros/log/6a8a029e-2514-11ee-8481-ef866d2da2f7/zed2-zed_node-2.log].
log file: /home/nx/.ros/log/6a8a029e-2514-11ee-8481-ef866d2da2f7/zed2-zed_node-2*.log
Initiating shutdown!
================================================================================
[zed2/zed_node-2] killing on exit

Steps to Reproduce

1. Through my checking and searching, I found that the crash happens because zed2.launch does not automatically subscribe to the objects topic.
2. So I added a node that subscribes to the objects topic and included it in the zed2.launch file as follows:

zed_subscriber_node.cpp

#include <ros/ros.h>
#include <zed_interfaces/Object.h>
#include <zed_interfaces/ObjectsStamped.h>

// Empty callback: its only purpose is to keep a subscriber alive on the
// objects topic so that the ZED node does not crash.
void Callback(const zed_interfaces::ObjectsStamped::ConstPtr& msg)
{
    //ROS_INFO("Received");
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "zed_bug_fixing_node");
    ros::NodeHandle n;
    // Subscribe to the object detection topic published by the ZED node
    ros::Subscriber sub = n.subscribe("zed2/zed_node/obj_det/objects", 1, Callback);
    ros::spin();
    return 0;
}
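
To build this workaround node inside the zed_wrapper package, it also has to be registered in the package's CMakeLists.txt. A minimal sketch, assuming the source file is placed in src/ and keeping the target name used by the launch file below (these names are my own choice, not part of the official package):

# CMakeLists.txt additions for the workaround node (hypothetical target name and path)
add_executable(zed_subscriber_node src/zed_subscriber_node.cpp)
add_dependencies(zed_subscriber_node ${catkin_EXPORTED_TARGETS})
target_link_libraries(zed_subscriber_node ${catkin_LIBRARIES})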

test.launch

<launch>
    <arg name="svo_file"             default="" /> <!-- <arg name="svo_file" default="path/to/svo/file.svo"> -->
    <arg name="stream"               default="" /> <!-- <arg name="stream" default="<ip_address>:<port>"> -->

    <arg name="camera_model"         default="zed2" />

    <!-- Launch ZED camera wrapper -->
    <include file="$(find zed_wrapper)/launch/$(arg camera_model).launch">
        <arg name="camera_model"        value="$(arg camera_model)" />
        <arg name="svo_file"            value="$(arg svo_file)" />
        <arg name="stream"              value="$(arg stream)" />
    </include>

    <!-- Fixing bug -->
    <node pkg="zed_wrapper" type="zed_subscriber_node" name="zed_bug_fixing_node" output="screen" />

</launch>
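
As a quicker check that needs no custom node, subscribing from the command line should exercise the same code path (assuming the default namespace and topic name used above):

rostopic echo /zed2/zed_node/obj_det/objects

If the ZED node keeps running while this command is active, that would support the theory that the crash is related to publishing the objects topic with no subscribers.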

Expected Result

I have also used the ZED2 on other CPU architectures such as x64 and never had this problem. I would like to know whether the official wrapper forgot to subscribe to the topic on ARM systems, so I am reporting this issue as feedback.

Actual Result

I added my own code to work around the problem, but I hope the maintainers can fix the bug in a simpler way.

ZED Camera model

ZED2

Environment

OS: Linux
CPU: ARM
GPU: NVIDIA Jetson Orin NX
ZED SDK version: 4.0
Other info: ROS Noetic

Anything else?

No response

Myzhar commented 1 year ago

Hi @weishuang12138, please share your customized zed2.yaml file.

weishuang12138 commented 1 year ago

This is my zed2.yaml file:

general:
    camera_model:               'zed2'
    resolution:                 0           # '0': HD2K, '1': HD1080, '3': HD720, '5': VGA, '6': AUTO
    grab_frame_rate:            15          # Frequency of frame grabbing for internal SDK operations

depth:
    min_depth:                  0.3             # Min: 0.2, Max: 3.0 - Default 0.7 - Note: reducing this value will require more computational power and GPU memory
    max_depth:                  20.0            # Max: 40.0

sensors:
    sensors_timestamp_sync:     false                           # Synchronize Sensors messages timestamp with latest received frame
    max_pub_rate:               200.                            # max frequency of publishing of sensors data. MAX: 400. - MIN: grab rate
    publish_imu_tf:             true                            # publish `IMU -> <cam_name>_left_camera_frame` TF

object_detection:
    od_enabled:                 true            # True to enable Object Detection [not available for ZED]
    model:                      2               # '0': MULTI_CLASS_BOX - '1': MULTI_CLASS_BOX_ACCURATE - '2': HUMAN_BODY_FAST - '3': HUMAN_BODY_ACCURATE - '4': MULTI_CLASS_BOX_MEDIUM - '5': HUMAN_BODY_MEDIUM - '6': PERSON_HEAD_BOX
    confidence_threshold:       50              # Minimum value of the detection confidence of an object [0,100]
    max_range:                  15.             # Maximum detection range
    object_tracking_enabled:    true            # Enable/disable the tracking of the detected objects
    body_fitting:               false           # Enable/disable body fitting for 'HUMAN_BODY_X' models
    mc_people:                  true            # Enable/disable the detection of persons for 'MULTI_CLASS_BOX_X' models
    mc_vehicle:                 false            # Enable/disable the detection of vehicles for 'MULTI_CLASS_BOX_X' models
    mc_bag:                     false            # Enable/disable the detection of bags for 'MULTI_CLASS_BOX_X' models
    mc_animal:                  false            # Enable/disable the detection of animals for 'MULTI_CLASS_BOX_X' models
    mc_electronics:             false            # Enable/disable the detection of electronic devices for 'MULTI_CLASS_BOX_X' models
    mc_fruit_vegetable:         false            # Enable/disable the detection of fruits and vegetables for 'MULTI_CLASS_BOX_X' models
    mc_sport:                   false            # Enable/disable the detection of sport-related objects for 'MULTI_CLASS_BOX_X' models

And this is my common.yaml file:


# params/common.yaml
# Common parameters to Stereolabs ZED and ZED mini cameras
---

# Dynamic parameters cannot have a namespace
brightness:                 4                                   # Dynamic
contrast:                   4                                   # Dynamic
hue:                        0                                   # Dynamic
saturation:                 4                                   # Dynamic
sharpness:                  4                                   # Dynamic
gamma:                      8                                   # Dynamic - Requires SDK >=v3.1
auto_exposure_gain:         true                                # Dynamic
gain:                       100                                 # Dynamic - works only if `auto_exposure_gain` is false
exposure:                   100                                 # Dynamic - works only if `auto_exposure_gain` is false
auto_whitebalance:          true                                # Dynamic
whitebalance_temperature:   42                                  # Dynamic - works only if `auto_whitebalance` is false
depth_confidence:           30                                  # Dynamic
depth_texture_conf:         100                                 # Dynamic
pub_frame_rate:             15.0                                # Dynamic - frequency of publishing of video and depth data
point_cloud_freq:           10.0                                # Dynamic - frequency of the pointcloud publishing (equal or less to `grab_frame_rate` value)

general:
    camera_name:                zed                             # A name for the camera (can be different from camera model and node name and can be overwritten by the launch file)
    zed_id:                     0
    serial_number:              0
    gpu_id:                     -1
    base_frame:                 'base_link'                     # must be equal to the frame_id used in the URDF file
    verbose:                    false                           # Enable info message by the ZED SDK
    svo_compression:            2                               # `0`: LOSSLESS, `1`: AVCHD, `2`: HEVC
    self_calib:                 true                            # enable/disable self calibration at starting
    camera_flip:                false

video:
    img_downsample_factor:      0.5                             # Resample factor for images [0.01,1.0] The SDK works with native image sizes, but publishes rescaled image.

depth:
    quality:                    3                               # '0': NONE, '1': PERFORMANCE, '2': QUALITY, '3': ULTRA, '4': NEURAL
    depth_stabilization:        1                               # [0-100] - 0: Disabled
    openni_depth_mode:          false                           # 'false': 32bit float meters, 'true': 16bit uchar millimeters
    depth_downsample_factor:    0.5                             # Resample factor for depth data matrices [0.01,1.0] The SDK works with native data sizes, but publishes rescaled matrices (depth map, point cloud, ...)

pos_tracking:
    pos_tracking_enabled:       true                            # True to enable positional tracking from start
    imu_fusion:                 true                            # enable/disable IMU fusion. When set to false, only the optical odometry will be used.
    publish_tf:                 true                            # publish `odom -> base_link` TF
    publish_map_tf:             true                            # publish `map -> odom` TF
    map_frame:                  'map'                           # main frame
    odometry_frame:             'odom'                          # odometry frame
    area_memory_db_path:        'zed_area_memory.area'          # file loaded when the node starts to restore the "known visual features" map. 
    save_area_memory_db_on_exit: false                          # save the "known visual features" map when the node is correctly closed to the path indicated by `area_memory_db_path`
    area_memory:                true                            # Enable to detect loop closure
    floor_alignment:            false                           # Enable to automatically calculate camera/floor offset
    initial_base_pose:          [0.0,0.0,0.0, 0.0,0.0,0.0]      # Initial position of the `base_frame` -> [X, Y, Z, R, P, Y]
    init_odom_with_first_valid_pose: true                       # Enable to initialize the odometry with the first valid pose
    path_pub_rate:              2.0                             # Camera trajectory publishing frequency
    path_max_count:             -1                              # use '-1' for unlimited path size
    two_d_mode:                 false                           # Force navigation on a plane. If true the Z value will be fixed to "fixed_z_value", roll and pitch to zero
    fixed_z_value:              0.00                            # Value to be used for Z coordinate if `two_d_mode` is true    

mapping:
    mapping_enabled:            false                           # True to enable mapping and fused point cloud publication
    resolution:                 0.05                            # maps resolution in meters [0.01f, 0.2f]
    max_mapping_range:          -1                              # maximum depth range while mapping in meters (-1 for automatic calculation) [2.0, 20.0]
    fused_pointcloud_freq:      1.0                             # frequency of the publishing of the fused colored point cloud
    clicked_point_topic:        '/clicked_point'                # Topic published by Rviz when a point of the cloud is clicked. Used for plane detection

sensors:
    sensors_timestamp_sync:     false                           # Synchronize Sensors messages timestamp with latest received frame
    max_pub_rate:               200.                            # max frequency of publishing of sensors data. MAX: 400. - MIN: grab rate
    publish_imu_tf:             true                            # publish `IMU -> <cam_name>_left_camera_frame` TF

object_detection:
    od_enabled:                 false                           # True to enable Object Detection [not available for ZED]
    model:                      2                               # '0': MULTI_CLASS_BOX - '1': MULTI_CLASS_BOX_ACCURATE
    confidence_threshold:       50                              # Minimum value of the detection confidence of an object [0,100]
    max_range:                  15.                             # Maximum detection range
    object_tracking_enabled:    fasle                            # Enable/disable the tracking of the detected objects
    mc_people:                  false                            # Enable/disable the detection of persons for 'MULTI_CLASS_BOX_X' models
    mc_vehicle:                 fasle                            # Enable/disable the detection of vehicles for 'MULTI_CLASS_BOX_X' models
    mc_bag:                     fasle                            # Enable/disable the detection of bags for 'MULTI_CLASS_BOX_X' models
    mc_animal:                  fasle                            # Enable/disable the detection of animals for 'MULTI_CLASS_BOX_X' models
    mc_electronics:             fasle                            # Enable/disable the detection of electronic devices for 'MULTI_CLASS_BOX_X' models
    mc_fruit_vegetable:         fasle                            # Enable/disable the detection of fruits and vegetables for 'MULTI_CLASS_BOX_X' models
    mc_sport:                   fasle                            # Enable/disable the detection of sport-related objects for 'MULTI_CLASS_BOX_X' models

Myzhar commented 1 year ago

Please remove the object detection params from either common.yaml or zed2.yaml so they are not defined in both files. Please also fix all the `fasle` typos in common.yaml.
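
For reference, a minimal sketch of what the object_detection block could look like if it is kept only in zed2.yaml, with every boolean spelled correctly (illustrative values, not an official recommendation):

object_detection:
    od_enabled:                 true            # True to enable Object Detection [not available for ZED]
    model:                      2               # '2': HUMAN_BODY_FAST
    object_tracking_enabled:    true            # note: 'true'/'false', not 'fasle'
    mc_people:                  true
    mc_vehicle:                 false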

weishuang12138 commented 12 months ago

I have tried your suggestion, but the problem still persists.

common.yaml:

# params/common.yaml
# Common parameters to Stereolabs ZED and ZED mini cameras
---

# Dynamic parameters cannot have a namespace
brightness:                 4                                   # Dynamic
contrast:                   4                                   # Dynamic
hue:                        0                                   # Dynamic
saturation:                 4                                   # Dynamic
sharpness:                  4                                   # Dynamic
gamma:                      8                                   # Dynamic - Requires SDK >=v3.1
auto_exposure_gain:         true                                # Dynamic
gain:                       100                                 # Dynamic - works only if `auto_exposure_gain` is false
exposure:                   100                                 # Dynamic - works only if `auto_exposure_gain` is false
auto_whitebalance:          true                                # Dynamic
whitebalance_temperature:   42                                  # Dynamic - works only if `auto_whitebalance` is false
depth_confidence:           30                                  # Dynamic
depth_texture_conf:         100                                 # Dynamic
pub_frame_rate:             15.0                                # Dynamic - frequency of publishing of video and depth data
point_cloud_freq:           10.0                                # Dynamic - frequency of the pointcloud publishing (equal or less to `grab_frame_rate` value)

general:
    camera_name:                zed                             # A name for the camera (can be different from camera model and node name and can be overwritten by the launch file)
    zed_id:                     0
    serial_number:              0
    gpu_id:                     -1
    base_frame:                 'base_link'                     # must be equal to the frame_id used in the URDF file
    verbose:                    false                           # Enable info message by the ZED SDK
    svo_compression:            2                               # `0`: LOSSLESS, `1`: AVCHD, `2`: HEVC
    self_calib:                 true                            # enable/disable self calibration at starting
    camera_flip:                false

video:
    img_downsample_factor:      0.5                             # Resample factor for images [0.01,1.0] The SDK works with native image sizes, but publishes rescaled image.

depth:
    quality:                    3                               # '0': NONE, '1': PERFORMANCE, '2': QUALITY, '3': ULTRA, '4': NEURAL
    depth_stabilization:        1                               # [0-100] - 0: Disabled
    openni_depth_mode:          false                           # 'false': 32bit float meters, 'true': 16bit uchar millimeters
    depth_downsample_factor:    0.5                             # Resample factor for depth data matrices [0.01,1.0] The SDK works with native data sizes, but publishes rescaled matrices (depth map, point cloud, ...)

pos_tracking:
    pos_tracking_enabled:       true                            # True to enable positional tracking from start
    imu_fusion:                 true                            # enable/disable IMU fusion. When set to false, only the optical odometry will be used.
    publish_tf:                 true                            # publish `odom -> base_link` TF
    publish_map_tf:             true                            # publish `map -> odom` TF
    map_frame:                  'map'                           # main frame
    odometry_frame:             'odom'                          # odometry frame
    area_memory_db_path:        'zed_area_memory.area'          # file loaded when the node starts to restore the "known visual features" map. 
    save_area_memory_db_on_exit: false                          # save the "known visual features" map when the node is correctly closed to the path indicated by `area_memory_db_path`
    area_memory:                true                            # Enable to detect loop closure
    floor_alignment:            false                           # Enable to automatically calculate camera/floor offset
    initial_base_pose:          [0.0,0.0,0.0, 0.0,0.0,0.0]      # Initial position of the `base_frame` -> [X, Y, Z, R, P, Y]
    init_odom_with_first_valid_pose: true                       # Enable to initialize the odometry with the first valid pose
    path_pub_rate:              2.0                             # Camera trajectory publishing frequency
    path_max_count:             -1                              # use '-1' for unlimited path size
    two_d_mode:                 false                           # Force navigation on a plane. If true the Z value will be fixed to "fixed_z_value", roll and pitch to zero
    fixed_z_value:              0.00                            # Value to be used for Z coordinate if `two_d_mode` is true    

mapping:
    mapping_enabled:            false                           # True to enable mapping and fused point cloud publication
    resolution:                 0.05                            # maps resolution in meters [0.01f, 0.2f]
    max_mapping_range:          -1                              # maximum depth range while mapping in meters (-1 for automatic calculation) [2.0, 20.0]
    fused_pointcloud_freq:      1.0                             # frequency of the publishing of the fused colored point cloud
    clicked_point_topic:        '/clicked_point'                # Topic published by Rviz when a point of the cloud is clicked. Used for plane detection

sensors:
    sensors_timestamp_sync:     false                           # Synchronize Sensors messages timestamp with latest received frame
    max_pub_rate:               200.                            # max frequency of publishing of sensors data. MAX: 400. - MIN: grab rate
    publish_imu_tf:             true                            # publish `IMU -> <cam_name>_left_camera_frame` TF

zed2.yaml:

# params/zed2.yaml
# Parameters for Stereolabs ZED2 camera
---

general:
    camera_model:               'zed2'
    resolution:                 0           # '0': HD2K, '1': HD1080, '3': HD720, '5': VGA, '6': AUTO
    grab_frame_rate:            15          # Frequency of frame grabbing for internal SDK operations

depth:
    min_depth:                  0.3             # Min: 0.2, Max: 3.0 - Default 0.7 - Note: reducing this value will require more computational power and GPU memory
    max_depth:                  20.0            # Max: 40.0

object_detection:
    od_enabled:                 true            # True to enable Object Detection [not available for ZED]
    model:                      2               # '0': MULTI_CLASS_BOX - '1': MULTI_CLASS_BOX_ACCURATE - '2': HUMAN_BODY_FAST - '3': HUMAN_BODY_ACCURATE - '4': MULTI_CLASS_BOX_MEDIUM - '5': HUMAN_BODY_MEDIUM - '6': PERSON_HEAD_BOX
    confidence_threshold:       50              # Minimum value of the detection confidence of an object [0,100]
    max_range:                  15.             # Maximum detection range
    object_tracking_enabled:    true            # Enable/disable the tracking of the detected objects
    body_fitting:               false           # Enable/disable body fitting for 'HUMAN_BODY_X' models
    mc_people:                  true            # Enable/disable the detection of persons for 'MULTI_CLASS_BOX_X' models
    mc_vehicle:                 false            # Enable/disable the detection of vehicles for 'MULTI_CLASS_BOX_X' models
    mc_bag:                     false            # Enable/disable the detection of bags for 'MULTI_CLASS_BOX_X' models
    mc_animal:                  false            # Enable/disable the detection of animals for 'MULTI_CLASS_BOX_X' models
    mc_electronics:             false            # Enable/disable the detection of electronic devices for 'MULTI_CLASS_BOX_X' models
    mc_fruit_vegetable:         false            # Enable/disable the detection of fruits and vegetables for 'MULTI_CLASS_BOX_X' models
    mc_sport:                   false            # Enable/disable the detection of sport-related objects for 'MULTI_CLASS_BOX_X' models

Myzhar commented 12 months ago

A question that I should have asked before: is the AI model pre-optimized before launching the node or is it optimized by the node at the very beginning?
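
For context, on Jetson boards the ZED SDK builds optimized TensorRT engines for the AI models the first time they are used; this can take many minutes and a lot of memory, and a failure during this step can look like a plain crash of the node. A rough sketch of pre-optimizing the model in a stand-alone program, assuming the sl::checkAIModelStatus / sl::optimizeAIModel helpers of recent SDK releases (please verify the exact function names and enum values against the installed SDK headers):

// Hypothetical pre-optimization sketch; API names to be verified against the installed SDK.
#include <sl/Camera.hpp>
#include <iostream>

int main()
{
    // Assumed to correspond to 'model: 2' (HUMAN_BODY_FAST) in zed2.yaml
    sl::AI_MODELS model = sl::AI_MODELS::HUMAN_BODY_FAST_DETECTION;

    sl::AI_Model_status status = sl::checkAIModelStatus(model);
    if (!status.optimized)
    {
        std::cout << "Optimizing AI model, this can take several minutes..." << std::endl;
        if (sl::optimizeAIModel(model) != sl::ERROR_CODE::SUCCESS)
        {
            std::cerr << "AI model optimization failed" << std::endl;
            return 1;
        }
    }
    std::cout << "AI model ready." << std::endl;
    return 0;
}

Running something like this once (or the ZED Diagnostic tool shipped with the SDK) before starting the ROS node would tell whether the crash happens during model optimization or in the wrapper itself.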

amineKourta commented 11 months ago

I have the same problem on a fresh install of JetPack 5.1.1 on a Xavier AGX. I only enabled object detection in common.yaml.

object_detection:
    od_enabled:                 true            # True to enable Object Detection [not available for ZED]

When I went to investigate the log files, I found this:


  File "/opt/ros/noetic/lib/python3/dist-packages/rosmaster/threadpool.py", line 218, in run
    result = cmd(*args)
  File "/opt/ros/noetic/lib/python3/dist-packages/rosmaster/master_api.py", line 210, in publisher_update_task
    ret = xmlrpcapi(api).publisherUpdate('/master', topic, pub_uris)
  File "/usr/lib/python3.8/xmlrpc/client.py", line 1109, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib/python3.8/xmlrpc/client.py", line 1450, in __request
    response = self.__transport.request(
  File "/usr/lib/python3.8/xmlrpc/client.py", line 1153, in request
    return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib/python3.8/xmlrpc/client.py", line 1165, in single_request
    http_conn = self.send_request(host, handler, request_body, verbose)
  File "/usr/lib/python3.8/xmlrpc/client.py", line 1278, in send_request
    self.send_content(connection, request_body)
  File "/usr/lib/python3.8/xmlrpc/client.py", line 1308, in send_content
    connection.endheaders(request_body)
  File "/usr/lib/python3.8/http/client.py", line 1251, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/lib/python3.8/http/client.py", line 1011, in _send_output
    self.send(msg)
  File "/usr/lib/python3.8/http/client.py", line 951, in send
    self.connect()
  File "/usr/lib/python3.8/http/client.py", line 922, in connect
    self.sock = self._create_connection(
  File "/usr/lib/python3.8/socket.py", line 808, in create_connection
    raise err
  File "/usr/lib/python3.8/socket.py", line 796, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

[rosmaster.master][INFO] 2023-07-31 14:42:37,806: publisherUpdate[/tf_static] -> http://ubuntu:45877/ ['http://ubuntu:45877/']
[rosmaster.master][INFO] 2023-07-31 14:42:37,807: publisherUpdate[/tf_static] -> http://ubuntu:45877/ ['http://ubuntu:45877/']: sec=0.00, exception=[Errno 111] Connection refused
[rosmaster.threadpool][ERROR] 2023-07-31 14:42:37,807: Traceback (most recent call last):
  File "/opt/ros/noetic/lib/python3/dist-packages/rosmaster/threadpool.py", line 218, in run
    result = cmd(*args)
  File "/opt/ros/noetic/lib/python3/dist-packages/rosmaster/master_api.py", line 210, in publisher_update_task
    ret = xmlrpcapi(api).publisherUpdate('/master', topic, pub_uris)
  File "/usr/lib/python3.8/xmlrpc/client.py", line 1109, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib/python3.8/xmlrpc/client.py", line 1450, in __request
    response = self.__transport.request(
  File "/usr/lib/python3.8/xmlrpc/client.py", line 1153, in request
    return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib/python3.8/xmlrpc/client.py", line 1165, in single_request
    http_conn = self.send_request(host, handler, request_body, verbose)
  File "/usr/lib/python3.8/xmlrpc/client.py", line 1278, in send_request
    self.send_content(connection, request_body)
  File "/usr/lib/python3.8/xmlrpc/client.py", line 1308, in send_content
    connection.endheaders(request_body)
  File "/usr/lib/python3.8/http/client.py", line 1251, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/lib/python3.8/http/client.py", line 1011, in _send_output
    self.send(msg)
  File "/usr/lib/python3.8/http/client.py", line 951, in send
    self.connect()
  File "/usr/lib/python3.8/http/client.py", line 922, in connect
    self.sock = self._create_connection(
  File "/usr/lib/python3.8/socket.py", line 808, in create_connection
    raise err
  File "/usr/lib/python3.8/socket.py", line 796, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

[rosmaster.master][INFO] 2023-07-31 14:42:38,098: -PUB [/rosout_agg] /rosout http://ubuntu:41965/
[rosmaster.master][INFO] 2023-07-31 14:42:38,099: -SUB [/rosout] /rosout http://ubuntu:41965/
[rosmaster.master][INFO] 2023-07-31 14:42:38,100: -SERVICE [/rosout/get_loggers] /rosout rosrpc://ubuntu:35965
[rosmaster.master][INFO] 2023-07-31 14:42:38,101: -SERVICE [/rosout/set_logger_level] /rosout rosrpc://ubuntu:35965
[rosmaster.master][INFO] 2023-07-31 14:42:38,103: -CACHEDPARAM [/rosout/omit_topics] by /rosout
[rosmaster.main][INFO] 2023-07-31 14:42:38,145: keyboard interrupt, will exit
[rosmaster.main][INFO] 2023-07-31 14:42:38,146: stopping master...
[rospy.core][INFO] 2023-07-31 14:42:38,146: signal_shutdown [atexit]

amineKourta commented 11 months ago

@weishuang12138 can you please share your solution? Thanks. Does it work with tracking? And did you try deactivating the tracking?

github-actions[bot] commented 10 months ago

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment otherwise it will be automatically closed in 5 days