IntelRealSense / librealsense

Intel® RealSense™ SDK
https://www.intelrealsense.com/
Apache License 2.0

D435 cannot avoid obstacles after detecting obstacles #9294

Open ju-mingyue opened 3 years ago

ju-mingyue commented 3 years ago
Required Info

- Camera Model: D435
- Firmware Version: 05.12.13.50
- Operating System & Version: Linux (Ubuntu 18.04.5)
- Kernel Version (Linux Only): 4.9.201
- Platform: NVIDIA Jetson Nano B01
- SDK Version: 2.41.0
- Language: ROS packages
- Segment: Robot

Issue Description

Hello MartyG-RealSense, I use the D435 camera for obstacle avoidance on a robot. I converted the D435 depth information into a lidar signal and merged it with the robot's lidar, but there is a problem. When the lidar detects an obstacle the robot avoids it, but when only the camera detects the object (and the lidar cannot see it) the robot does not avoid it. What could be the reason for this? Do you have any other suggestions for D435 obstacle avoidance? None of the information I have found solves this problem, so I would very much appreciate your help.

ju-mingyue commented 3 years ago

@MartyG-RealSense

MartyG-RealSense commented 3 years ago

Hi @ju-mingyue If it seems that the RealSense camera is disrupting the lidar (which you can test by checking whether the disruption to the lidar's obstacle detection stops when the RealSense camera is inactive), it may be because the RealSense camera's infrared projection is crossing paths with the lidar's beam. 400 Series cameras do not experience interference from other cameras, but they can cause interference to non-RealSense devices that use infrared.

If you have the D435 and the lidar mounted in a vertical arrangement with both facing straight ahead, you could try tilting the upper device upward and the lower device downward to reduce the extent to which the beams cross over, or turn off the D435's projector (the D435 should still be able to use the scene's ambient light to process depth information if the area is well lit).

The RealSense T265 Tracking Camera can be paired with a D435 to provide robot navigation because the T265 is equipped with an IR Cut filter that enables it to block out the 400 Series camera's infrared projection, though I can appreciate that changing the lidar for a T265 may not be practical for your project. If you do wish to explore that possibility (D435 for obstacle detection / avoidance and T265 for navigation), Intel have a seminar on YouTube at the link below.

https://www.youtube.com/watch?v=62vm0_RZ1nU

In regard to a D435-only solution: the link below has the example of a research paper published in 2021 that used a D435 on a wall-painting robot with RTABMAP and a RANSAC algorithm for the purposes of navigation and wall avoidance.

https://www.researchgate.net/publication/349061847_Painting_Path_Planning_for_a_Painting_Robot_with_a_RealSense_Depth_Sensor

ju-mingyue commented 3 years ago

@MartyG-RealSense
Thank you very much for your help. You mentioned turning off the D435's projector (so that the D435 uses the scene's ambient light to process depth information in a well-lit area), but I don't know how to do that. I hope you can guide me. Also, for D435 obstacle avoidance, how should I adjust the parameters so that the robot avoids obstacles within a given distance range?

MartyG-RealSense commented 3 years ago

If you are using the RealSense ROS wrapper then disabling the IR Emitter is discussed in the link below.

https://github.com/IntelRealSense/realsense-ros/issues/817
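As an illustration (a minimal sketch only; it assumes the wrapper exposes the stereo_module/emitter_enabled option discussed in that issue, and that your camera namespace is d435, as in your topic names), the emitter can be switched off at startup by adding a rosparam block to the launch file:

```xml
<!-- Sketch: turn the IR emitter off at startup (0 = off); assumes the d435 camera namespace -->
<rosparam>
  /d435/stereo_module/emitter_enabled: 0
</rosparam>
```

It should also be possible to toggle it at runtime with dynamic_reconfigure, for example `rosrun dynamic_reconfigure dynparam set /d435/stereo_module emitter_enabled 0`.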

In regard to settings for obstacle avoidance, some people set a higher minimum depth sensing distance than normal (for example, 0.5 meters). This can help to ignore close-range obstructions mounted on the robot's frame, and to avoid mis-detecting the floor directly in front of the robot as an obstacle that triggers the robot to stop moving.

The RealSense ROS wrapper does not have an option to set minimum depth distance (MinZ) though (just a maximum distance MaxZ using the clip_distance parameter). It is possible though to set a minimum distance in RTABMAP by using its RangeMin option.

https://github.com/IntelRealSense/realsense-ros/issues/1859
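As a sketch of the maximum-distance side (assuming your launch file exposes the wrapper's clip_distance argument, as rs_camera.launch does), depth could be capped at, say, 5 meters like this:

```
roslaunch realsense2_camera rs_camera.launch clip_distance:=5.0
```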

In regard to parameter adjustment, could you tell me which method of obstacle detection you have chosen to use please? This will enable me to provide advice for that particular method.

ju-mingyue commented 3 years ago

@MartyG-RealSense Regarding parameter adjustment, I first use the depth information to avoid obstacles. The code is as follows (a fuller sketch of the node wrapper follows the snippet):

<remap from="scan" to="scan2" />
<param name="output_frame_id" value="d435_depth_frame"/>
<param name="range_min" value="1.0"/>
<param name="range_max" value="5.0"/>
<remap from="image" to="/d435/depth/image_rect_raw"/>`
MartyG-RealSense commented 3 years ago

If you are using /d435/depth/image_rect_raw as the source of depth data for depthimage_to_laserscan, I guess that you could seek to improve the accuracy of the depth data generated by the RealSense ROS wrapper. One way to do this may be to load a configuration json file (also known as a Visual Preset in librealsense) in the ROS wrapper's launch file or in its roslaunch instruction in order to configure the balance of the camera's settings for improved accuracy.

https://github.com/IntelRealSense/realsense-ros/issues/411
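As a rough example (assuming your launch file exposes the wrapper's json_file_path argument, as rs_camera.launch does), a preset json could be loaded like this:

```
roslaunch realsense2_camera rs_camera.launch json_file_path:=/path/to/your_preset.json
```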

ju-mingyue commented 3 years ago

@MartyG-RealSense I do use /d435/depth/image_rect_raw as the source of depth data for depthimage_to_laserscan to avoid obstacles, but the camera launch file I run is rs_d400_and_t265.launch. I am not very clear about how to configure the json file you mentioned. I also use ira_laser_tools to merge the lidar scan (scan1) and the D435-converted scan (scan2) into one.

MartyG-RealSense commented 3 years ago

Are you using a T265, please? I understood that you were using a D435 and a lidar camera.

ju-mingyue commented 3 years ago

To your question "Are you using a T265? I understood that you were using a D435 and a lidar camera": yes, I use the T265 camera to read the odometry, and the D435 for obstacle avoidance, mainly to detect obstacles above the lidar's scan plane (such as a large table).

MartyG-RealSense commented 3 years ago

Okay. I can only answer questions about your D435, as I am not involved in support of the T265 Tracking Camera model. I therefore also cannot provide advice about the rs_d400_and_t265.launch ROS launch file.

You can create a custom json file in the RealSense Viewer by configuring settings and then exporting them to a json file using the json export option.

[Screenshot: the json export option in the RealSense Viewer]

Alternatively, you can obtain pre-made json configuration files from the SDK's Visual Presets documentation page.

https://dev.intelrealsense.com/docs/d400-series-visual-presets#section-preset-table

ju-mingyue commented 3 years ago

@MartyG-RealSense Hello, what is the most common way to combine the D435 with a lidar for obstacle avoidance? Is it to convert the D435 depth information into a lidar signal, or to use the point cloud generated by the D435 camera directly? Or is there another method? I hope you can provide some suggestions, because the results I get with depthimage_to_laserscan are not very satisfactory at the moment.

[Screenshot: 深度截图_选择区域_20210628102115]

If I use the point cloud for obstacle avoidance, how do I need to configure the camera parameters? Can I pass the camera's point cloud directly to the autonomous navigation package, or do I need another program to perform a conversion? I desperately need your help; this problem has troubled me for a week.

[Screenshot: 深度截图_选择区域_20210628121611]

Is it correct to set up the point cloud in the local costmap this way?

MartyG-RealSense commented 3 years ago

If your 'lidar' camera is a T265 (which is not actually lidar technology) then trying to convert that into a lidar signal with depthimage_to_laserscan may not be helpful, as the T265 does not have built-in depth sensing like the D435 does. You have to simulate depth measurement using its fisheye sensors and the Python script in the link below, and its depth quality is not as good as the real depth sensors on the 400 Series cameras.

https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/t265_stereo.py

As mentioned above, I am not able to provide advice about the T265 though as I am not involved in support of that RealSense model. You may find Intel's Better Together guide about using 400 Series and T265 together to be useful reading.

https://www.intelrealsense.com/depth-and-tracking-combined-get-started/

ju-mingyue commented 3 years ago

@MartyG-RealSense I am glad to see your reply. At present I use the point cloud to avoid obstacles. I can see the point cloud information, but the robot does not respond to the obstacles. What I don't understand is how, when the D435 point cloud is used for obstacle avoidance, the obstacle detection distance range is adjusted, and whether the point cloud should be passed directly to move_base. The following is my node diagram:

[Screenshot: 深度截图_选择区域_20210628182739]

Now I am using the following program:

```cpp
#include "ros/ros.h"
#include <sensor_msgs/PointCloud2.h>
#include <pcl_conversions/pcl_conversions.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/filters/statistical_outlier_removal.h>

ros::Publisher pub_cloud;

class process_point_cloud
{
    const sensor_msgs::PointCloud2ConstPtr& input_cloud;
    float voxel_filter_size = 0.01;

public:
    sensor_msgs::PointCloud2 voxel_filtered_cloud;
    // sensor_msgs::PointCloud2 statistical_outlier_removed_cloud;

    process_point_cloud(const sensor_msgs::PointCloud2ConstPtr& cloud) : input_cloud(cloud) {}
    ~process_point_cloud() {}

    void voxel_filter()
    {
        // Container for original & filtered data
        pcl::PCLPointCloud2* cloud = new pcl::PCLPointCloud2;
        pcl::PCLPointCloud2ConstPtr cloudPtr(cloud);
        pcl::PCLPointCloud2 cloud_filtered;

        // Convert to PCL data type
        pcl_conversions::toPCL(*input_cloud, *cloud);

        // Perform the actual filtering
        pcl::VoxelGrid<pcl::PCLPointCloud2> sor;
        sor.setInputCloud(cloudPtr);
        sor.setLeafSize(voxel_filter_size, voxel_filter_size, voxel_filter_size);
        sor.filter(cloud_filtered);

        // Convert back to ROS data type
        pcl_conversions::fromPCL(cloud_filtered, voxel_filtered_cloud);
    }

    // void statistical_outlier_removal()
    // {
    //     pcl::StatisticalOutlierRemoval<pcl::PCLPointCloud2> sor;
    //     sor.setInputCloud(voxel_filtered_cloud);
    //     sor.setMeanK(20);
    //     sor.setStddevMulThresh(2); // 2
    //     // sor.setNegative(true);
    //     sor.filter(statistical_outlier_removed_cloud);
    // }
};

void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& input_cloud)
{
    process_point_cloud p1(input_cloud);
    p1.voxel_filter();
    pub_cloud.publish(p1.voxel_filtered_cloud);
}

int main(int argc, char **argv)
{
    ros::init(argc, argv, "obstacle_detection");
    ros::NodeHandle n;

    int queue_size = 1; // Queue size 1 so that only the latest cloud is processed

    // Subscribes to the point cloud published from the camera
    ros::Subscriber sub_cloud = n.subscribe("/d435/depth/color/points", queue_size, cloudCallback);

    // Publisher for the filtered output point cloud
    pub_cloud = n.advertise<sensor_msgs::PointCloud2>("/obstacle_detection/point_cloud", 1);

    ros::spin();
    return 0;
}
```

So I would like to know where my problem is. Thank you for your help.

MartyG-RealSense commented 3 years ago

Are you asking how to adjust the minimum and maximum depth sensing distance, please? If so, if you are using PCL then you could define a distance range using PCL's PassThrough Filter.

https://pcl.readthedocs.io/en/latest/passthrough.html
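As a rough illustration (a sketch only, using the same pcl::PCLPointCloud2 containers as your voxel_filter() function; the 0.5 m / 3.0 m limits are example values):

```cpp
#include <pcl/PCLPointCloud2.h>
#include <pcl/filters/passthrough.h>

// Sketch: keep only points between min_m and max_m along the camera's depth (z) axis.
// It could be called on the same cloud data that voxel_filter() processes.
void depth_range_filter(const pcl::PCLPointCloud2ConstPtr& cloud_in,
                        pcl::PCLPointCloud2& cloud_out,
                        float min_m = 0.5f, float max_m = 3.0f)
{
    pcl::PassThrough<pcl::PCLPointCloud2> pass;
    pass.setInputCloud(cloud_in);
    pass.setFilterFieldName("z");        // depth axis in the camera's optical frame
    pass.setFilterLimits(min_m, max_m);  // distance range in meters
    pass.filter(cloud_out);
}
```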

ju-mingyue commented 3 years ago

@MartyG-RealSense Hello, I can see the point cloud in the map now, and I have also added the camera's obstacle information to the local costmap. But when passing obstacles the robot still does not evade them. From the material I have looked up, it seems I need to convert the point cloud into a 2D occupancy grid. How do I do this conversion? I also compared the PCL obstacle-avoidance link you sent me, and I think it is not much different from the program I am using. The following are my map situation and local costmap parameter configuration:

[Screenshot: 深度截图_选择区域_20210629105706]

```yaml
local_costmap:
  global_frame: map
  robot_base_frame: base_link
  update_frequency: 5.0
  publish_frequency: 5.0
  static_map: false
  rolling_window: true
  width: 6.0
  height: 6.0
  resolution: 0.05
  inflation_radius: 0.2
  transform_tolerance: 0.5
  plugins:
   - {name: obstacle_layer,  type: "costmap_2d::ObstacleLayer"}
   - {name: realsense_layer, type: "costmap_2d::VoxelLayer"}
   - {name: local_inflation_layer, type: "costmap_2d::InflationLayer"}
```

Below is costmap_common_params.yaml:

```yaml
map_type: costmap

obstacle_layer:
  enabled: true
  obstacle_range: 2.5
  raytrace_range: 3.0
  max_obstacle_height: 2
  min_obstacle_height: 0.0
  meter_scoring: true
  combination_method: 1
  observation_sources: laser_scan_sensor
  laser_scan_sensor: {sensor_frame: laser, data_type: LaserScan, topic: scan, marking: true, clearing: true}
  laser_scan_sensor: {sensor_frame: d435_depth_frame, data_type: LaserScan, topic: scan, marking: true, clearing: true}
  pointCloud2: {sensor_frame: pointcloud, data_type: PointCloud2, topic: d435_depth_point, marking: true, clearing: true}

realsense_layer:
  enabled: true
  obstacle_range: 2.5
  raytrace_range: 3.0
  max_obstacle_height: 2
  min_obstacle_height: 0.0
  meter_scoring: true
  combination_method: 1
  observation_sources: pointcloud_sensor
  pointcloud_sensor: {sensor_frame: obstacle_detection, data_type: PointCloud2, topic: /obstacle_detection/point_cloud, marking: true, clearing: true}

global_inflation_layer:
  enabled: true
  cost_scaling_factor: 2.0  # default 10.0
  inflation_radius: 0.4     # 0.2

local_inflation_layer:
  enabled: true
  cost_scaling_factor: 5.0  # default 10.0
  inflation_radius: 0.1     # 0.2

static_layer:
  enabled: true
```

At the moment I cannot convert the point cloud detected by the camera into an occupancy grid in real time and load it into the local costmap, so the area is judged as obstacle-free for the robot. I have checked a lot of information and cannot think of another way. I really hope to get your help. Thank you for your effort.

MartyG-RealSense commented 3 years ago

Would Intel's occupancy ROS plugin from their D435 & T265 'Better Together' guide that I quoted earlier produce the point cloud conversion that you require?

https://github.com/IntelRealSense/realsense-ros/tree/occupancy-mapping/occupancy

ju-mingyue commented 3 years ago

@MartyG-RealSense

[Screenshot: 深度截图_选择区域_20210630104958]

Hello, unfortunately, when using the occupancy package the problem shown in the screenshot above appears. I don't know how to modify the occupancy node.

[Screenshot: 深度截图_选择区域_20210630110333]

Now I am very confused: occupancy cannot subscribe to the depth camera's depth topic.

[Screenshot: 深度截图_选择区域_20210630110633]

At the same time, there is this error. I don’t know how to solve it. Will it affect the operation of the program?

MartyG-RealSense commented 3 years ago

When using the occupancy module, I believe that the next step is to calibrate the two cameras together. In the Calibrating the cameras together section of the 'Better Together' guide, links are provided to a 3D-printing blueprint for a mount and a urdf file for the two cameras. I have posted those links below.

Mount https://github.com/IntelRealSense/realsense-ros/blob/occupancy-mapping/realsense2_camera/meshes/mount_t265_d435.stl

urdf file https://github.com/IntelRealSense/realsense-ros/blob/occupancy-mapping/realsense2_camera/urdf/mount_t265_d435.urdf.xacro

If you have your own mount already and do not wish to use the Intel blueprint, the link below has details of how to perform a custom calibration between the cameras for your own mount.

https://github.com/IntelRealSense/librealsense/pull/4355

Combinacijus commented 3 years ago

[Screenshot: 深度截图_选择区域_20210630104958]

This error seems like a problem with tf, not with the occupancy node. There should be (usually static) transforms between the frames t265_pose_frame and d435_depth_optical_frame and whatever fixed frame you set in rviz. Usually frames and their positions/transforms are specified relative to other frames in URDF files, and tf is then published by the robot_state_publisher node or manually by static_transform_publisher.

To find problems with tf, run rosrun rqt_tf_tree rqt_tf_tree and check whether all frames are connected by arrows (i.e. that the tree is fully connected).

Also make sure that you specify the correct tf frame for sensor_frame: in the costmap yaml file:

pointcloud_sensor: {sensor_frame: SENSOR_FRAME, data_type: PointCloud2, topic: /obstacle_detection/point_cloud, marking: true, clearing: true}
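If you end up publishing a missing transform manually, a minimal sketch with placeholder offsets would be the following (replace the zeros with the real mounting offsets and use your actual frame names):

```xml
<!-- Sketch: static transform between the two camera frames
     (args are: x y z yaw pitch roll parent_frame child_frame period_ms); all values are placeholders -->
<node pkg="tf" type="static_transform_publisher" name="t265_to_d435_tf"
      args="0 0 0 0 0 0 t265_pose_frame d435_link 100"/>
```
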
ju-mingyue commented 3 years ago

@Combinacijus
Hello Gintaras, I am glad to receive your reply; it helped me solve a big problem. I have adjusted the tf, but now I am facing a new problem, as shown in the following link: https://github.com/IntelRealSense/librealsense/issues/9326

MartyG-RealSense commented 3 years ago

@Combinacijus Thanks so much for your kind help to @ju-mingyue - it's very appreciated!

@ju-mingyue As mentioned earlier in https://github.com/IntelRealSense/librealsense/issues/9294#issuecomment-869172623 please bear in mind that I am not involved in support for the T265 camera model and so cannot assist if tagged into T265-related questions.

ju-mingyue commented 3 years ago

@MartyG-RealSense Dear MartyG, I'm very sorry to bother you again. Regarding the problem of using the D435 for obstacle avoidance, my previous approach may have been too complicated. Perhaps it is only necessary to subscribe to the depth topic, perform the calculation or conversion in a program, and then publish the resulting output, right? Do you have any guidance or suggestions, for example related links or technical reference material?

MartyG-RealSense commented 3 years ago

It's no trouble at all @ju-mingyue

You could look at the SAWR robot project. It is several years old but provides a complete software and hardware blueprint for setting up a RealSense robot in ROS that has obstacle detection as part of its navigation.

https://software.intel.com/content/www/us/en/develop/articles/build-an-autonomous-mobile-robot-with-the-intel-realsense-camera-ros-and-sawr.html

A much more modern RealSense mobile robot blueprint is Erwhi Hedgehog:

https://gbr1.github.io/erwhi_hedgehog.html

ju-mingyue commented 3 years ago

@MartyG-RealSense Hello. Thank you very much for the links; they helped me a lot. There is still another problem. The obstacles generated with depthimage_to_laserscan are added to the local costmap, but they are not cleared in time, which causes problems with the robot's path. Do you have any suggestions on this issue? Also, after adding the D435, the robot's operation is very unstable. What should I pay attention to in the parameter configuration?

MartyG-RealSense commented 3 years ago

Is the discussion in the link below helpful to your problem, please?

https://answers.ros.org/question/347490/call-clear-costmap-service-periodically/
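That discussion is about calling move_base's costmap-clearing service periodically. As an illustration, the same service can also be called manually to check whether clearing behaves as expected:

```
rosservice call /move_base/clear_costmaps "{}"
```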

In regard to the D435, performance problems are unlikely to be related to an IMU, as the D435 model does not have one. What kind of environmental conditions is your robot being used in (e.g. outdoors in strong sunlight)?

ju-mingyue commented 3 years ago

@MartyG-RealSense Dear MartyG, thank you for your prompt reply. I followed the link and added the configuration below:

```yaml
realsense_layer:
  enabled: true
  obstacle_range: 2.5
  raytrace_range: 3.0
  max_obstacle_height: 1.5
  min_obstacle_height: 0.1
  min_detect_distance: 0.15
  max_detect_distance: 2
  meter_scoring: true
  voxel_decay:           0.75   # seconds if linear, e^n if exponential
  decay_model:           0      # 0=linear, 1=exponential, -1=persistent
  voxel_size:            0.05   # meters
  track_unknown_space:   true   # default space is unknown
  observation_persistence: 0.0  # seconds
  max_obstacle_height:   2.0    # meters
  unknown_threshold:     15     # voxel height
  mark_threshold:        0      # voxel height
  update_footprint_enabled: true
  combination_method:    1      # 1=max, 0=override
  obstacle_range:        3.0    # meters
  origin_z:              0.0    # meters
  publish_voxel_map:     true   # default off
  transform_tolerance:   0.2    # seconds
  mapping_mode:          false  # default off, saves map not for navigation
  map_save_duration:     60     # default 60s, how often to autosave
  combination_method: 1
  observation_sources: realsense_scan_sensorA realsense_scan_sensorB
  #point_cloud_sensor: {sensor_frame: rtabmap, data_type: PointCloud2, topic: /rtabmap/local_grid_obstacle, marking: true, clearing: true}
  realsense_scan_sensorA: {sensor_frame: d435_L_depth_frame, data_type: LaserScan, topic: /realsense_scan1, marking: true, clearing: false}
  realsense_scan_sensorB: {sensor_frame: d435_R_depth_frame, data_type: LaserScan, topic: /realsense_scan2, marking: true, clearing: false}
  expected_update_rate: 0.0    # default 0, if not updating at this rate at least, remove from buffer
  observation_persistence: 0.0 # default 0, use all measurements taken during now-value, 0=latest
  inf_is_valid: false          # default false, for laser scans
  clear_after_reading: true    # default false, clear the buffer after the layer gets readings from it
  voxel_filter: true           # default off, apply voxel filter to sensor, recommend on
  voxel_min_points: 0          # default 0, minimum points per voxel for voxel filter
```

But the obstacles detected by the camera still cannot be removed, so I gave up on the above method. It is also possible that the problem lies in one of my other files. The lidar signal output by my D435 after conversion is shown in the screenshot below:

[Screenshot: 深度截图_gnome-terminal-server_20210713133806]

My local costmap parameters are as follows:

```yaml
local_costmap:
  global_frame: map
  map_type: costmap
  robot_base_frame: base_link
  update_frequency: 10.0
  publish_frequency: 10.0
  static_map: false
  rolling_window: true
  width: 4.0
  height: 4.0
  resolution: 0.05
  inflation_radius: 0.2
  transform_tolerance: 0.5
  plugins:
```

At present the obstacles added by the D435 to the local_costmap cannot be removed, while the obstacles added by the lidar can be. I cannot find a better solution at the moment.

[Screenshot: 深度截图_选择区域_20210713154931]

Above is an obstacle added by the camera; the map is not cleared after the obstacle is removed.

I hope to get your help, thank you very much.

MartyG-RealSense commented 3 years ago

There are a lot of nan ('not a number', i.e. invalid) values in your D435's converted lidar signal output, though I do not know whether that pattern is normal or not. Some information about nan values when using depthimage_to_laserscan can be found in the link below.

https://answers.ros.org/question/347223/handle-nan-values-from-depthimag_to_laser/

ju-mingyue commented 3 years ago

@MartyG-RealSense Dear MartyG, sorry to disturb you again. I tested the parameters from the link you gave me, but the results did not improve, and I am not sure why this happens after the D435 output is converted into a lidar signal: the camera's obstacles still cannot be cleared, while the obstacles added by the lidar itself can be removed. From my testing, I need to add the camera's layer to the set of layers that move_base clears; since move_base.cpp clears the obstacle_layer by default, the easiest way is to add the camera's observations to the obstacle_layer. But now there is a new question: why does the lidar signal always drift? Is there interference with the converted D435 signal? Do you have a way to solve this problem?

[Screenshot: 深度截图_选择区域_20210714093116]
[Screenshot: 深度截图_选择区域_20210714162140]

MartyG-RealSense commented 3 years ago

It is no trouble @ju-mingyue - thanks very much for sharing with the RealSense community the method that you used to make progress.

Drift is a natural part of robot navigation. Drift accumulates as the robot travels further from its start point. The discussion in the link below is a good reference about this.

https://github.com/IntelRealSense/librealsense/issues/3970

If a robot's navigation is not corrected then eventually it can become lost - a phenomenon called 'kidnapped robot'. Performing relocalization can correct drift.

ju-mingyue commented 2 years ago

@MartyG-RealSense
First of all, I am very grateful for your help, but now the D435 has a new problem. I hope to get your answer here: https://github.com/IntelRealSense/librealsense/issues/9556

MartyG-RealSense commented 2 years ago

I have posted a reply on your new case. Thanks!